---
author:
- |
  Sudarshan Fernando$^{1}$[^1] and Murat Günaydin$^{2}$[^2]\
  $^{1}$*Physical Sciences Department\
  Kutztown University\
  Kutztown, PA 19530, USA*\
  $^{2}$*Center for Fundamental Theory\
  Institute for Gravitation and the Cosmos\
  Physics Department\
  Pennsylvania State University\
  University Park, PA 16802, USA*
title: 'Minimal unitary representation of $SO^*(8) = SO(6,2)$ and its $SU(2)$ deformations as massless $6D$ conformal fields and their supersymmetric extensions'
---

Introduction {#Intro}
============

Unitary representations of noncompact U-duality groups of extended supergravity theories were first studied in the early 1980s [@Gunaydin:1981dc; @Gunaydin:1981yq; @Gunaydin:1981zm], motivated by the idea that, in a quantum theory, global symmetries must be realized unitarily, as well as by attempts to derive a composite grand unified theory (GUT) from $N = 8$ supergravity [@Ellis:1980tf; @Ellis:1980cf; @Gunaydin:1982gw]. In the composite model of [@Ellis:1980tf], the local R-symmetry group $SU(8)$ of $N = 8$ supergravity was conjectured to become dynamical at the quantum level. A similar scenario based on the exceptional supergravity theory [@Gunaydin:1983rk] leads to an $E_6$ GUT with a family group $U(1)$. After the discovery of counterterms at higher loops in $N=8$ supergravity and the Green-Schwarz anomaly cancellation in superstring theory [@Green:1984sg], the work on composite models was all but abandoned. Recent work proving cancellation of divergences in $N=8$ supergravity up to four loops [@Bern:2008pv; @BjerrumBohr:2008dp; @ArkaniHamed:2008gz; @Chalmers:2000ks; @Green:2006gt; @Green:2006yu; @Green:2007zzb; @Kallosh:2008ru; @Bern:2009kd; @Kallosh:2009jb] has revived the question of the finiteness of $N=8$ supergravity as well as of exceptional supergravity.
The oscillator method developed in [@Gunaydin:1981yq], to construct the relevant unitary representations of noncompact U-duality groups of supergravity theories, generalized and unified previous special constructions in the physics literature. The formulation of [@Gunaydin:1981yq] was later extended to noncompact supergroups in [@Bars:1982ep] using bosonic as well as fermionic oscillators. In these generalized formulations of [@Gunaydin:1981yq] and [@Bars:1982ep], one realizes the generators of noncompact groups or supergroups as bilinears of an arbitrary number $P$ (“colors”) of sets of oscillators transforming in an irreducible representation (typically fundamental) of their maximal compact subgroups or subsupergroups. For symplectic groups $Sp(2n,\mathbb{R})$ the minimum value of $P$ turns out to be one, and the resulting unitary representations are simply the singleton representations, which are known as metaplectic representations in the mathematics literature. In general the minimum allowed value of $P$ for a noncompact group is two, and the resulting unitary representations were later referred to as doubleton representations. For example, for the groups $SU(n,m)$ and $SO^*(2n)$, with maximal compact subgroups $SU(m)\times SU(n) \times U(1)$ and $U(n)$, respectively, one finds that $P_{min}=2$. Symplectic groups $Sp(2n,\mathbb{R})$ admit only two singleton irreducible representations (irreps). Noncompact groups or supergroups that do not admit singleton representations have an infinite number of doubleton irreps. Since the generators are realized as bilinears of free bosonic and fermionic oscillators, tensoring of the resulting representations is very straightforward within the oscillator approach. Furthermore the oscillator method is simple and yet very powerful for constructing positive energy unitary representations. 
Even though the positive energy singleton or doubleton irreps do not belong to the discrete series, by tensoring them one obtains positive energy unitary representations that belong, in general, to the holomorphic discrete series representations of the respective noncompact group or supergroup. The oscillator methods for constructing positive energy unitary representations of non-compact groups and supergroups were applied to spacetime supergroups beginning in the 1980s. The Kaluza-Klein spectrum of IIB supergravity spontaneously compactified over the product space $AdS_5 \times S^5$ was first obtained via the oscillator method by simple tensoring of the CPT self-conjugate doubleton supermultiplet of $N=8$ $AdS_5$ superalgebra $PSU(2,2\,|\,4)$ repeatedly with itself and restricting to the CPT self-conjugate short supermultiplets of $PSU(2,2\,|\,4)$ [@Gunaydin:1984fk]. The CPT self-conjugate doubleton supermultiplet does not have a Poincaré limit in five dimensions and decouples from the Kaluza-Klein spectrum as gauge modes. This led the authors of [@Gunaydin:1984fk] to the proposal that the field theory of CPT self-conjugate doubleton supermultiplet of $PSU(2,2\,|\,4)$ lives on the boundary of $AdS_5$, which can be identified with $4D$ Minkowski space on which $SO(4,2)$ acts as a conformal group. Furthermore they pointed out that the unique candidate for this theory is the four dimensional $N=4$ super Yang-Mills theory that was known to be conformally invariant. The spectra of the spontaneous compactifications of eleven dimensional supergravity over $AdS_4 \times S^7$ and $AdS_7 \times S^4$, that had been obtained by other methods previously, were fitted into supermultiplets of the symmetry superalgebras $OSp(8\,|\,4,\mathbb{R})$ and $OSp(8^*\,|\,4)$ obtained by oscillator methods in [@Gunaydin:1985tc] and [@Gunaydin:1984wc], respectively. 
Furthermore, the entire Kaluza-Klein spectra of eleven dimensional supergravity over these two spaces were obtained by tensoring the singleton and scalar doubleton supermultiplets of $OSp(8\,|\,4,\mathbb{R})$ and $OSp(8^*\,|\,4)$, respectively. The singleton and doubleton supermultiplets themselves do not have a Poincaré limit in four and seven dimensions and decouple from the respective spectra as gauge modes. Again it was proposed that the field theories of the singleton and scalar doubleton supermultiplets live on the boundaries of $AdS_4$ and $AdS_7$ as superconformally invariant theories [@Gunaydin:1984wc; @Gunaydin:1985tc]. The importance of these results was not fully appreciated until the work of Maldacena [@Maldacena:1997re] and the subsequent works of Witten [@Witten:1998qj] and of Gubser et al. [@Gubser:1998bc]; they have since become an integral part of the work on AdS/CFT dualities in M/superstring theory, which has seen exponential growth for more than a decade now. Noncompact groups were also introduced into physics as spectrum generating symmetry groups during the 1960s. Inspired by the work of physicists on spectrum generating symmetry groups, Joseph introduced the concept of minimal unitary realizations of Lie groups in [@MR0342049]. These are unitary representations of the corresponding noncompact groups over Hilbert spaces of functions of the smallest possible (minimal) number of variables. Joseph gave the minimal realizations of the complex forms of classical Lie algebras and of the exceptional Lie algebra $\mathfrak{g}_2$ in a Cartan-Weyl basis. The minimal unitary representation of the split exceptional group $E_{8(8)}$ was first identified within the Langlands classification by Vogan [@MR644845]. In an important paper, Kostant studied the minimal unitary representation of $SO(4,4)$ and its relation to triality in [@MR1103588].
A general study of minimal unitary representations of simply laced groups was given by Kazhdan and Savin [@MR1159103] and by Brylinski and Kostant [@MR1372999; @MR1278630]. The minimal unitary representations of quaternionic real forms of exceptional Lie groups were studied by Gross and Wallach [@MR1327538] and those of $SO(p,q)$ in [@MR1108044; @MR2020550; @MR2020551; @MR2020552]. Pioline, Kazhdan and Waldron [@Kazhdan:2001nx] reformulated the minimal unitary representations of simply laced groups given in [@MR1159103] and gave the spherical vectors for the simply laced exceptional groups necessary for the construction of modular forms. The relation of minimal representations of $SO(p,q)$ to conformal geometry was studied rather recently in [@Gover:2009vc]. Over the last decade, a great deal of progress was made towards the goal of constructing physically relevant unitary representations of U-duality groups of extended supergravity theories. An additional motivation towards this goal was provided by the proposals that certain extensions of U-duality groups may act as spectrum generating symmetry groups of these theories. Work on orbits of extremal black hole solutions in $N=8$ supergravity and $N=2$ Maxwell-Einstein supergravity theories with symmetric scalar manifolds led to the proposal that four dimensional U-duality groups act as spectrum generating conformal symmetry groups of the corresponding five dimensional supergravity theories [@Ferrara:1997uz; @Gunaydin:2000xr; @Gunaydin:2004ku; @Gunaydin:2003qm; @Gunaydin:2005gd; @Gunaydin:2009pk]. In attempts to find the corresponding spectrum generating symmetry groups of extremal black hole solutions of four dimensional supergravity theories with symmetric scalar manifolds, geometric quasiconformal realizations of three dimensional U-duality groups were discovered in [@Gunaydin:2000xr].
Based on this novel geometric realization, quasiconformal extensions of four dimensional U-duality groups were proposed as spectrum generating symmetry groups of the corresponding supergravity theories with symmetric scalar manifolds [@Gunaydin:2000xr; @Gunaydin:2004ku; @Gunaydin:2003qm; @Gunaydin:2005gd; @Gunaydin:2009pk]. A concrete implementation of the proposal that three dimensional U-duality groups act as spectrum generating quasiconformal groups was given in [@Gunaydin:2005mx; @Gunaydin:2007bg; @Gunaydin:2007qq] using the equivalence of the equations of attractor flows of spherically symmetric stationary BPS black holes of four dimensional supergravity theories and the geodesic equations of a fiducial particle moving in the target space of the three dimensional supergravity theories obtained by reduction of the $4D$ theories on a timelike circle [@Breitenlohner:1987dg]. The quasiconformal realization of the three dimensional U-duality group $E_{8(8)}$ of maximal supergravity is the first known geometric realization of $E_{8(8)}$ [@Gunaydin:2000xr]. The quasiconformal action of $E_{8(8)}$ leaves invariant a generalized light-cone with respect to a quartic distance function in 57 dimensions. Quasiconformal realizations exist for various real forms of all noncompact groups as well as for their complex forms [@Gunaydin:2000xr; @Gunaydin:2005zz]. Remarkably, the quantization of the geometric quasiconformal action of a noncompact group leads directly to its minimal unitary representation, as was first shown explicitly for the maximally split exceptional group $E_{8(8)}$ with the maximal compact subgroup $SO(16)$ [@Gunaydin:2001bt]. The minimal unitary representation of the three dimensional U-duality group $E_{8(-24)}$ of the exceptional supergravity [@Gunaydin:1983rk] was given in [@Gunaydin:2004md].
The minimal unitary representations of the U-duality groups $\mathrm{F}_{4(4)}$, $\mathrm{E}_{6(2)}$, $\mathrm{E}_{7(-5)}$, $\mathrm{E}_{8(-24)}$ and $SO(d+2,4)$ of $N=2$ Maxwell-Einstein supergravity theories with symmetric scalar manifolds were studied in [@Gunaydin:2005zz; @Gunaydin:2004md]. In [@Gunaydin:2006vz], a unified formulation of the minimal unitary representations of certain noncompact real forms of groups of type $A_2$, $G_2$, $D_4$, $F_4$, $E_6$, $E_7$, $E_8$ and $C_n$ was given. The minimal unitary representations of $Sp\left(2n,\mathbb{R}\right)$ are simply the singleton representations. In [@Gunaydin:2006vz], minimal unitary representations of the noncompact groups $SU\left(m,n\right)$, $SO\left(m,n\right)$, $SO^*(2n)$ and $SL\left(m,\mathbb{R}\right)$ obtained by quasiconformal methods were also given explicitly. Furthermore, this unified approach was generalized to define and construct the corresponding minimal representations of non-compact supergroups $G$ whose even subgroups are of the form $H\times SL(2,\mathbb{R})$ with $H$ compact. The unified construction with $H$ simple or Abelian leads to the minimal unitary representations of the supergroups $G(3), F(4)$ and $OSp\left(n|2,\mathbb{R}\right)$. The minimal unitary representations of $OSp\left(n|2,\mathbb{R}\right)$ with even subgroups $SO(n)\times Sp(2,\mathbb{R})$ are the singleton supermultiplets. The minimal realization of the one parameter family of Lie superalgebras $D\left(2,1;\sigma\right)$ with even subgroup $SU(2)\times SU(2) \times SU(1,1)$ was also presented in [@Gunaydin:2006vz]. In the mathematics literature, the term minimal unitary representation refers, in general, to a unique representation of the respective noncompact group. The symplectic group $Sp(2N,\mathbb{R})$ admits two singleton irreps whose quadratic Casimirs take on the same value.
Both of these singleton representations are minimal unitary representations, even though in some of the mathematics literature only the scalar singleton is referred to as the minrep. Similarly one finds that the supergroups $OSp(M|2N,\mathbb{R}) $ with the even subgroup $SO(M) \times Sp(2N,\mathbb{R})$ admit two inequivalent singleton supermultiplets [@Gunaydin:1985tc; @Gunaydin:1988kz; @Gunaydin:1987hb]. For noncompact groups or supergroups that admit only doubleton irreps, this raises the question as to whether any of the doubleton unitary representations can be identified with the minimal representation, and if so, how the infinite set of doubletons are related to the minrep. More recently, we investigated this issue for $5D$ anti-de Sitter or $4D$ conformal group $SU(2,2)$ and corresponding supergroups $SU(2,2|N)$ [@Fernando:2009fq]. We gave a detailed study of the minimal unitary representation of the group $SU(2,2)$ by quantization of its quasiconformal realization and showed that it coincides with the scalar doubleton representation corresponding to a massless scalar field in four dimensions. Furthermore we showed that the minrep of $SU(2,2)$ admits a one-parameter family ($\zeta$) of deformations, and for a positive (negative) integer value of the deformation parameter $\zeta$, one obtains a positive energy unitary irreducible representation of $SU(2,2)$ corresponding to a massless conformal field in four dimensions transforming in $\left( 0 \,,\, \frac{\zeta}{2} \right)$ $\left( \left( -\frac{\zeta}{2} \,,\, 0 \right) \right)$ representation of the Lorentz subgroup, $SL(2,\mathbb{C})$ of $SU(2,2)$. These are simply the doubleton representations of $SU(2,2)$ that describe massless conformal fields in four dimensions [@Gunaydin:1998sw; @Gunaydin:1998jc]. 
They were referred to as ladder (or most degenerate discrete series) unitary representations by Mack and Todorov, who showed that they remain irreducible under restriction to the Poincaré subgroup [@Mack:1969dg]. Hence the deformation parameter can be identified with twice the helicity $h$ of the corresponding massless representation of the Poincaré group. We extended these results to the minimal unitary representations of the supergroups $SU(2,2\,|\,N)$ with the even subgroup $SU(2,2)\times U(N)$ and their deformations. The minimal unitary supermultiplet of $SU(2,2|N)$ coincides with the CPT self-conjugate (scalar) doubleton supermultiplet, and for $PSU(2,2|4)$ it is simply the four dimensional $N=4$ Yang-Mills supermultiplet. Again in the supersymmetric case, one finds a one-parameter family of deformations of the minimal unitary supermultiplet of $SU(2,2|N)$. Each integer value of the deformation parameter $\zeta$ leads to a unique unitary supermultiplet of $SU(2,2\,|\,N)$. The minimal unitary supermultiplet of $SU(2,2\,|\,N)$ and its deformations turn out to be precisely the doubleton supermultiplets that were constructed and studied using the oscillator method earlier [@Gunaydin:1984fk; @Gunaydin:1998sw; @Gunaydin:1998jc]. These results extend to the minreps of $SU(m,n)$ and of $SU(m,n\,|N)$ and their deformations in a straightforward manner. In this paper we give a detailed study of the minimal unitary representation of the $7D$ anti-de Sitter or $6D$ conformal group $SO^*(8)=SO(6,2)$, obtained by quantizing its realization as a quasiconformal group that leaves invariant a quartic light-cone in nine dimensions, its deformations and their supersymmetric extensions to supermultiplets of $OSp(8^*|2N)$. The oscillator construction of the positive energy unitary supermultiplets of $OSp(8^*|2N)$ was first given in [@Gunaydin:1984wc].
These unitary supermultiplets were further studied in [@Gunaydin:1999ci; @Fernando:2001ak], where it was shown that the doubleton supermultiplets correspond to massless conformal supermultiplets in six dimensions. A classification of the positive energy unitary supermultiplets of $6D$ superconformal algebras using other methods was given in [@Minwalla:1997ka; @Dobrev:2002dt]. The oscillator construction of positive energy representations of general supergroups $OSp(2M^*|2N)$ with maximal compact subgroup $SO^*(2M)\times USp(2N)$ was given in [@Gunaydin:1990ag]. The plan of our paper is as follows. In section \[quasiconf\] we review the geometric quasiconformal realizations of the groups $SO(d+2,2)$ as invariance groups of a light-cone with respect to a quartic distance function in $(2d+1)$ dimensional space. The quantization of this geometric realization leads to the minimal unitary representation of $SO(d+2,2)$ over a Hilbert space of functions in $d+1$ variables. We then specialize and study the case of $SO(6,2)$ in great detail in section \[minrepSO(6,2)\]. In section \[minrepSO\*(8)\], we study the minimal unitary realization of $SO^*(8)$, which is isomorphic to $SO(6,2)$. The transformations relating the $SO^*(8)$ basis to that of $SO(6,2)$ are given in Appendix \[app:bogoliubov\]. Section \[SU(1,1)ofSO\*(8)\] discusses the properties of a distinguished $SU(1,1)$ subgroup of $SO^*(8)$ generated by singular (isotonic) oscillators. We then give the K-type decomposition[^3] of the minrep of $SO^*(8)$ in section \[SU2SU2U1\] and show that it coincides with the K-type decomposition of the scalar doubleton representation corresponding to a massless conformal scalar field in six dimensions that was studied in [@Gunaydin:1984wc; @Gunaydin:1999ci; @Fernando:2001ak]. Section \[USp(2N)\] reviews the fermionic construction of the relevant representations of $USp(2N)$.
In section \[minrepOSp(8\*|2N)-5Gr\] and section \[minrepOSp(8\*|2N)-3Gr\], we give the minimal unitary realization of the superalgebra $OSp(8^*|2N)$ with even subgroup $SO^*(8) \times USp(2N)$ obtained from quantizing its quasiconformal realization. Section \[minrepsupermultiplet\] presents the minimal unitary supermultiplets of $OSp(8^*|2N)$. We devote a special subsection to the minimal unitary supermultiplet of $OSp(8^*|4)$, which is the symmetry supergroup of M-theory compactified over $AdS_7 \times S^4$, and show that it coincides with the $(2,0)$ doubleton supermultiplet studied in [@Gunaydin:1984wc; @Gunaydin:1999ci; @Fernando:2001ak]. M-theory compactified over $AdS_7\times S^4$ is believed to be dual to a $(2,0)$ six dimensional superconformal theory based on this supermultiplet. We then study the general deformations of the minrep of $SO^*(8)$, independently of supersymmetry, in section \[SO\*(8)deformations\], and show that there exists an infinite family of deformations labeled by the spin $t$ of an $SU(2)_{\buildrel _\circ \over {T}}$ subgroup of the semi-simple part of the little group $SO(4)$ of massless states in six dimensions. For every spin value $t$, one obtains a positive energy unitary irreducible representation of $SO^*(8)$ corresponding to a massless conformal field in six dimensions with Dynkin labels $(2t,0,0)$ with respect to the covering group $SU^*(4)$ of the six dimensional Lorentz group $SO(5,1)$. The $SU(2)$ spin label $t$ for deformations is the $6D$ analog of the helicity label for deformations of the minrep of the $4D$ conformal group [@Fernando:2009fq].
Quasiconformal Realizations of $SO\left(d+2,2\right)$ and Their Minimal Unitary Representations {#quasiconf}
===============================================================================================

Geometric realizations of $SO\left(d+2,2\right)$ as quasiconformal groups {#geomSO(d+2,2)}
-------------------------------------------------------------------------

The Lie algebra of the $(d+2)$ dimensional conformal group $SO\left(d+2,2\right)$ can be given a 5-graded decomposition with respect to its subalgebra $\mathfrak{so}(d) \oplus \mathfrak{so}(1,1)$ [@Gunaydin:2005zz] $$\mathfrak{so}\left(d+2,2\right) = \mathbf{1}^{(-2)} \oplus \left(\mathbf{d}, \mathbf{2} \right)^{(-1)} \oplus \left[ \Delta \oplus \mathfrak{sp}\left(2,\mathbb{R}\right) \oplus \mathfrak{so}\left(d\right) \right] \oplus \left(\mathbf{d}, \mathbf{2} \right)^{(+1)} \oplus \mathbf{1}^{(+2)}$$ where $\Delta$ is the $SO(1,1)$ generator that determines the five grading. The superscript $m$ labels the grade of a generator: $${\left[ \Delta \, , \, \mathfrak{g}^{(m)} \right]} = m \, \mathfrak{g}^{(m)}$$ In the above decomposition, $\left(\mathbf{d}, \mathbf{2} \right)^{(m)}$ labels the generators transforming in the $(d,2)$ representation of $SO(d) \times Sp(2,\mathbb{R})$ with grade $m$. The generators of the quasiconformal action are realized as differential operators acting on a $(2d+1)$ dimensional space $\mathcal{T}$ corresponding to the Heisenberg subalgebra generated by the elements of the $\mathfrak{g}^{(-2)} \oplus \mathfrak{g}^{(-1)}$ subspace. We shall denote the coordinates of the space $\mathcal{T}$ as $\mathcal{X} = \left( X^{i,\alpha} , x \right)$, where $X^{i,\alpha}$ transform in the $(d,2)$ representation of $SO(d) \times Sp(2,\mathbb{R})$, with $i = 1,2,\dots,d$ and $\alpha = 1,2$, and $x$ is a singlet coordinate. Let $\epsilon_{\alpha\beta}$ be the symplectic metric of $Sp(2,\mathbb{R})$ and $\eta_{ij}$ the $SO(d)$ invariant metric ($\eta_{ij} = - \delta_{ij}$).
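As a quick aside (our own illustration, not part of the paper's derivation), the 5-graded decomposition above can be sanity-checked by a dimension count: the pieces $\mathbf{1}^{(\pm 2)}$, $(\mathbf{d},\mathbf{2})^{(\pm 1)}$ and the grade zero subalgebra $\Delta \oplus \mathfrak{sp}(2,\mathbb{R}) \oplus \mathfrak{so}(d)$ must add up to the dimension of $\mathfrak{so}(d+2,2)$, which equals that of $\mathfrak{so}(d+4)$ since real forms share the dimension of the complex algebra.

```python
# A sketch (not from the paper): verify the 5-grading dimension count
#   1 + 2d + [1 + dim sp(2,R) + dim so(d)] + 2d + 1 = dim so(d+4).
def dim_so(n):
    """Dimension of the orthogonal Lie algebra so(n) (any real form)."""
    return n * (n - 1) // 2

def dim_graded(d):
    """Total dimension summed over g^(-2), g^(-1), g^(0), g^(+1), g^(+2)."""
    grade0 = 1 + 3 + dim_so(d)      # Delta + sp(2,R) + so(d)
    return 1 + 2 * d + grade0 + 2 * d + 1

for d in range(1, 12):
    assert dim_graded(d) == dim_so(d + 4), d
print("5-grading dimension count agrees with dim so(d+4) for d = 1..11")
```

In closed form the count reads $2 + 4d + 4 + \tfrac{1}{2}d(d-1) = \tfrac{1}{2}(d+3)(d+4)$, as required.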
Then the quartic polynomial in $X^{i,\alpha}$ $$\mathcal{I}_4 (X) = \eta_{ij} \eta_{kl} \epsilon_{\alpha\gamma} \epsilon_{\beta\delta} X^{i,\alpha} X^{j,\beta} X^{k,\gamma} X^{l,\delta}$$ is invariant under the $SO(d) \times Sp(2,\mathbb{R})$ subgroup. We shall label the generators belonging to the various grade subspaces as follows $$\mathfrak{so}(d+2,2) = K_- \oplus U_{i,\alpha} \oplus \left[ \Delta \oplus J_{\alpha\beta} \oplus M_{ij} \right] \oplus \widetilde{U}_{i,\alpha} \oplus K_+$$ where $J_{\alpha\beta}$ and $M_{ij}$ are the generators of the $Sp(2,\mathbb{R})$ and $SO(d)$ subgroups, respectively. The infinitesimal generators of the quasiconformal action of $SO(d+2,2)$ take on the form $$\begin{split} K_+ &= \frac{1}{2} \left( 2 x^2 - \mathcal{I}_4 \right) \frac{\partial}{\partial x} - \frac{1}{4} \frac{\partial \mathcal{I}_4}{\partial X^{i,\alpha}} \eta^{ij} \epsilon^{\alpha\beta} \frac{\partial}{\partial X^{j,\beta}} + x \, X^{i,\alpha} \frac{\partial}{\partial X^{i,\alpha}} \\ U_{i,\alpha} &= \frac{\partial}{\partial X^{i,\alpha}} - \eta_{ij} \epsilon_{\alpha\beta} X^{j,\beta} \frac{\partial}{\partial x} \\ M_{ij} &= \eta_{ik} X^{k,\alpha} \frac{\partial}{\partial X^{j,\alpha}} - \eta_{jk} X^{k,\alpha} \frac{\partial}{\partial X^{i,\alpha}} \\ J_{\alpha\beta} &= \epsilon_{\alpha\gamma} X^{i,\gamma} \frac{\partial}{\partial X^{i,\beta}} + \epsilon_{\beta\gamma} X^{i,\gamma} \frac{\partial}{\partial X^{i,\alpha}} \\ K_- &= \frac{\partial}{\partial x} \\ \Delta & = 2 \, x \frac{\partial}{\partial x} + X^{i,\alpha} \frac{\partial}{\partial X^{i,\alpha}} \\ \widetilde{U}_{i,\alpha} &= {\left[ U_{i,\alpha} \, , \, K_+ \right]} \end{split}$$ where $\epsilon^{\alpha\beta}$ is the inverse symplectic metric such that $\epsilon^{\alpha\beta} \epsilon_{\beta\gamma} = \delta^\alpha_{~\gamma}$.
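The differential operators above can be verified symbolically. As an illustration (a sketch of our own, not part of the paper), the following checks for $d = 1$, with the assumed sign conventions $\eta_{11} = -1$ and $\epsilon_{12} = +1$, that the grade $-1$ generators close into a Heisenberg algebra $[U_{1,\alpha}, U_{1,\beta}] = 2\,\eta_{11}\,\epsilon_{\alpha\beta}\,K_-$ with central element $K_- = \partial/\partial x$:

```python
import sympy as sp

# Hedged sketch: check [U_{1,1}, U_{1,2}] = 2 eta_{11} eps_{12} K_-  for d = 1,
# with eta_{11} = -1 and eps_{12} = +1 (an assumption about the conventions;
# any consistent choice works the same way).
x, X1, X2 = sp.symbols('x X1 X2')
f = sp.Function('f')(x, X1, X2)

def U(alpha, expr):
    """Grade -1 generator U_{1,alpha} = d/dX^alpha - eta_{11} eps_{alpha,beta} X^beta d/dx."""
    eps = {(1, 2): 1, (2, 1): -1}
    eta = -1
    Xc = {1: X1, 2: X2}
    beta = 2 if alpha == 1 else 1
    return sp.diff(expr, Xc[alpha]) - eta * eps[(alpha, beta)] * Xc[beta] * sp.diff(expr, x)

commutator = U(1, U(2, f)) - U(2, U(1, f))
# Expected: 2 * eta_{11} * eps_{12} * K_- f = -2 df/dx
assert sp.simplify(commutator + 2 * sp.diff(f, x)) == 0
print("Heisenberg closure [U_1, U_2] = 2 eta eps K_- verified")
```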
Using $$\frac{\partial \mathcal{I}_4}{\partial X^{i,\alpha}} = - 4 \, \eta_{ij} \, \eta_{kl} \, X^{j,\beta} X^{k,\gamma} X^{l,\delta} \, \epsilon_{\beta\gamma} \epsilon_{\alpha\delta}$$ one obtains the explicit form of the grade $+1$ generators $\widetilde{U}_{i,\alpha}$: $$\begin{split} \widetilde{U}_{i,\alpha} &= \eta_{ij} \epsilon_{\alpha\delta} \left( \eta_{kl} \epsilon_{\beta\gamma} X^{j,\beta} X^{k,\gamma} X^{l,\delta} - x X^{j,\delta} \right) \frac{\partial}{\partial x} + x \frac{\partial}{\partial X^{i,\alpha}} \\ & \quad - \eta_{ij} \epsilon_{\alpha\beta} X^{j,\beta} X^{k,\gamma} \frac{\partial}{\partial X^{k,\gamma}} - \eta_{jk} \epsilon_{\alpha\delta} X^{k,\delta} X^{j,\gamma} \frac{\partial}{\partial X^{i,\gamma}} \\ & \quad + \eta_{ij} \epsilon_{\alpha\gamma} X^{k,\gamma} X^{j,\beta} \frac{\partial}{\partial X^{k,\beta}} + \eta_{ij} \epsilon_{\beta\gamma} X^{j,\beta} X^{k,\gamma} \frac{\partial}{\partial X^{k,\alpha}} \end{split}$$ The above generators satisfy the following commutation relations: $$\begin{split} {\left[ M_{ij} \, , \, M_{kl} \right]} &= \eta_{jk} \, M_{il} - \eta_{ik} \, M_{jl} - \eta_{jl} \, M_{ik} + \eta_{il} \, M_{jk} \\ {\left[ J_{\alpha\beta} \, , \, J_{\gamma\delta} \right]} &= \epsilon_{\gamma\beta} \, J_{\alpha\delta} + \epsilon_{\gamma\alpha} \, J_{\beta\delta} + \epsilon_{\delta\beta} \, J_{\alpha\gamma} + \epsilon_{\delta\alpha} \, J_{\beta\gamma} \end{split}$$ $$\begin{split} {\left[ \Delta \, , \, K_\pm \right]} &= \pm 2 \, K_\pm \qquad \qquad \qquad {\left[ K_- \, , \, K_+ \right]} = \Delta \\ {\left[ \Delta \, , \, U_{i,\alpha} \right]} &= - U_{i,\alpha} \qquad \qquad \qquad {\left[ \Delta \, , \, \widetilde{U}_{i,\alpha} \right]} = \widetilde{U}_{i,\alpha} \\ {\left[ U_{i,\alpha} \, , \, K_+ \right]} &= \widetilde{U}_{i,\alpha} \qquad \qquad \qquad \quad {\left[ \widetilde{U}_{i,\alpha} \, , \, K_- \right]} = - U_{i,\alpha} \\ {\left[ U_{i,\alpha} \, , \, U_{j,\beta} \right]} &= 2 \, \eta_{ij} \epsilon_{\alpha\beta} \, K_-
\qquad \qquad {\left[ \widetilde{U}_{i,\alpha} \, , \, \widetilde{U}_{j,\beta} \right]} = 2 \, \eta_{ij} \epsilon_{\alpha\beta} \, K_+ \end{split}$$ $$\begin{split} {\left[ M_{ij} \, , \, U_{k,\alpha} \right]} &= \eta_{jk} \, U_{i,\alpha} - \eta_{ik} \, U_{j,\alpha} \qquad \qquad {\left[ M_{ij} \, , \, \widetilde{U}_{k,\alpha} \right]} = \eta_{jk} \, \widetilde{U}_{i,\alpha} - \eta_{ik} \, \widetilde{U}_{j,\alpha} \\ {\left[ J_{\alpha\beta} \, , \, U_{i,\gamma} \right]} &= \epsilon_{\gamma\beta} \, U_{i,\alpha} + \epsilon_{\gamma\alpha} \, U_{i,\beta} \qquad \qquad {\left[ J_{\alpha\beta} \, , \, \widetilde{U}_{i,\gamma} \right]} = \epsilon_{\gamma\beta} \, \widetilde{U}_{i,\alpha} + \epsilon_{\gamma\alpha} \, \widetilde{U}_{i,\beta} \\ \end{split}$$ $${\left[ U_{i,\alpha} \, , \, \widetilde{U}_{j,\beta} \right]} = \eta_{ij} \epsilon_{\alpha\beta} \, \Delta - 2 \, \epsilon_{\alpha\beta} \, M_{ij} - \eta_{ij} \, J_{\alpha\beta}$$ One defines the quartic norm $\mathcal{N}_4 (\mathcal{X})$ of a vector $\mathcal{X}= \left( X^{i,\alpha} , x \right)$ in $\mathcal{T}$ as $$\mathcal{N}_4 \left(\mathcal{X} \right) := \mathcal{I}_4\left(X\right) + 2 \, x^2$$ and the “quartic distance” between any two points with coordinate vectors $\mathcal{X}$ and $\mathcal{Y}$ as $$d \left( \mathcal{X} , \mathcal{Y} \right) := \mathcal{N}_4 \left( \delta \left( \mathcal{X} , \mathcal{Y} \right) \right)$$ where $\delta \left( \mathcal{X} , \mathcal{Y} \right)$ is the “symplectic” difference of two vectors $\mathcal{X}$ and $\mathcal{Y}$ in the $(2d+1)$ dimensional space $\mathcal{T}$ given by [@Gunaydin:2000xr; @Gunaydin:2005zz] $$\delta \left( \mathcal{X} , \mathcal{Y} \right) := \left( X^{i,\alpha} - Y^{i,\alpha} \,,\, x - y - \eta_{ij} \epsilon_{\alpha\beta} \, X^{i,\alpha} Y^{j,\beta} \right) \,.$$ Under the quasiconformal action of the generators of $SO(d+2,2)$ the quartic distance function transforms as: $$\begin{split} \Delta d \left( \mathcal{X} , \mathcal{Y} \right) &= 4 \, d 
\left( \mathcal{X} , \mathcal{Y} \right) \\ \widetilde{U}_{i,\alpha} d \left( \mathcal{X} , \mathcal{Y} \right) &= - 2 \, \eta_{ij} \epsilon_{\alpha\beta} \left( X^{j,\beta} + Y^{j,\beta} \right) d \left( \mathcal{X} , \mathcal{Y} \right) \\ K_+ d \left( \mathcal{X} , \mathcal{Y} \right) &= 2 \, \left( x + y \right) \, d \left( \mathcal{X} , \mathcal{Y} \right) \\ M_{ij} \, d \left( \mathcal{X} , \mathcal{Y} \right) &= 0 \\ J_{\alpha\beta} \, d \left( \mathcal{X} , \mathcal{Y} \right) &= 0 \\ U_{i,\alpha} \, d \left( \mathcal{X} , \mathcal{Y} \right) &= 0 \\ K_- \, d \left( \mathcal{X} , \mathcal{Y} \right) &= 0 \end{split}$$ These relations imply that light-like separations $$d \left( \mathcal{X} , \mathcal{Y} \right) = 0$$ are left invariant under the quasiconformal action. In other words, the quasiconformal action of $SO(d+2,2)$ leaves the “light-cone” in $\mathcal{T}$ with respect to the [*quartic*]{} distance function invariant.

Minimal unitary representations of $SO\left(d+2,2\right)$ from quantization of their quasiconformal realizations {#minrepSO(d+2,2)}
----------------------------------------------------------------------------------------------------------------

Minimal unitary representations of noncompact groups can be obtained by the quantization of their geometric realizations as quasiconformal groups [@Gunaydin:2001bt; @Gunaydin:2004md; @Gunaydin:2005zz; @Gunaydin:2006vz; @Gunaydin:2007qq]. In this section we shall review the minimal unitary representations of the orthogonal groups $SO(d+2,2)$ thus obtained, following [@Gunaydin:2005zz; @Gunaydin:2006vz].
Let $X^i$ and $P_i$ be the quantum mechanical coordinate and momentum operators on $\mathbb{R}^{d}$ satisfying the canonical commutation relations $${\left[ X^i \, , \, P_j \right]} = i \, {\delta^i_j} \,.$$ The generators belonging to the subspace $\mathfrak{g}^{(-2)} \oplus \mathfrak{g}^{(-1)}$ of the Lie algebra of $SO(d+2,2)$ form a Heisenberg algebra $${\left[ U_{i,\alpha} \, , \, U_{j,\beta} \right]} = 2 \, \eta_{ij} \epsilon_{\alpha\beta} \, K_- \label{heisenberg}$$ with $K_-$ playing the role of the central charge. We shall relabel the generators and define $$U_{i,1} \equiv U_{i} \qquad \qquad \qquad U_{i,2} \equiv V_{i}$$ and realize the Heisenberg algebra (equation (\[heisenberg\])) in terms of the coordinate and momentum operators $X^i$, $P_i$ and an extra “central charge coordinate” $x$ as $$\begin{split} U_i = x P_i & \qquad \qquad V^i = x X^i \\ &K_- = \frac{1}{2} x^2 \end{split}$$ $${\left[ V^i \, , \, U_j \right]} = 2 i \, \delta^i_j \, K_- \label{heisenberg2}$$ By introducing the quantum mechanical momentum operator $p$, conjugate to the central charge coordinate $x$, such that $${\left[ x \, , \, p \right]} = i$$ one can realize the generators of the $SO(d) \times Sp(2,\mathbb{R}) \times SO(1,1)$ subgroup belonging to the grade zero subalgebra of $\mathfrak{so}(d+2,2)$ as bilinears of canonically conjugate pairs of coordinate and momentum operators [@Gunaydin:2005zz; @Gunaydin:2006vz]: $$\begin{split} M_{ij} &= - i \, \delta_{ik} X^k P_j + i \, \delta_{jk} X^k P_i \\ J_0 &= \frac{1}{2} \left( X^i P_i + P_i X^i \right) \\ J_- &= - \delta_{ij} X^i X^j \\ J_+ &= - \delta^{ij} P_i P_j \\ \Delta &= \frac{1}{2} \left( x p + p x \right) \end{split} \label{gradezeroSO(d+2,2)}$$ The generators $M_{ij}$ of $SO(d)$ satisfy the commutation relations $${\left[ M_{ij} \, , \, M_{kl} \right]} = - \delta_{jk} \, M_{il} + \delta_{ik} \, M_{jl} + \delta_{jl} \, M_{ik} - \delta_{il} \, M_{jk}$$ and the generators $J_0$ and $J_{\pm}$ of $Sp(2,\mathbb{R})$ satisfy
$${\left[ J_0 \, , \, J_\pm \right]} = \pm 2 i \, J_\pm \qquad \qquad \qquad {\left[ J_- \, , \, J_+ \right]} = 4 i \, J_0 \,.$$ Note that the compact generator of this $Sp(2,\mathbb{R})$ is $\left( J_+ + J_- \right)$. The coordinate $X^i$ and momentum $P_i$ operators transform in the vector representation of $SO(d)$ subgroup generated by $M_{ij}$ and form doublets of the symplectic group $Sp(2,\mathbb{R}$): $$\begin{aligned} {\left[ J_0 \, , \, V^i \right]} &= - i \, V^i \\ {\left[ J_0 \, , \, U_i \right]} &= + i \, U_i \end{aligned} \qquad \begin{aligned} {\left[ J_- \, , \, V^i \right]} &= 0 \\ {\left[ J_- \, , \, U_i \right]} &= - 2 i \, \delta_{ij} \, V^j \end{aligned} \qquad \begin{aligned} {\left[ J_+ \, , \, V^i \right]} &= + 2 i \, \delta^{ij} \, U_j \\ {\left[ J_+ \, , \, U_i \right]} &= 0 \end{aligned}$$ There is a normal ordering ambiguity in defining the quantum operator corresponding to the quartic invariant. We shall choose the quantum quartic invariant given in [@Gunaydin:2005zz]: $$\begin{split} \mathcal{I}_4 &= \left( \delta_{ij} X^i X^j \right) \left( \delta^{ij} P_i P_j \right) + \left( \delta^{ij} P_i P_j \right) \left( \delta_{ij} X^i X^j \right) \\ & \qquad - \left( X^i P_i \right) \left( P_j X^j \right) - \left( P_i X^i \right) \left( X^j P_j \right) \end{split}$$ In terms of the quartic invariant, the grade +2 generator $K_+$ of $SO(d+2,2)$ takes the form $$K_+ = \frac{1}{2} p^2 + \frac{1}{4 \, x^2} \left( \mathcal{I}_4 + \frac{d^2+3}{2} \right) \,.$$ Then grade $+1$ generators are obtained by the commutation of grade $-1$ generators with $K_+$: $$\widetilde{U}_i = - i {\left[ U_i \, , \, K_+ \right]} \qquad \qquad \widetilde{V}^i = - i {\left[ V^i \, , \, K_+ \right]}$$ which explicitly read as follows: $$\begin{split} \widetilde{U}_i &= p \, P_i - \frac{1}{2 \, x} \, \delta_{ij} \delta^{kl} \left( X^j P_k P_l + P_k P_l X^j \right) \\ & \quad + \frac{1}{4 \, x} \left[ P_i \left( X^j P_j + P_j X^j \right) + \left(X^j P_j + P_j X^j \right) 
P_i \right] \\ \widetilde{V}^i &= p \, X^i + \frac{1}{2 \, x} \, \delta^{ij} \delta_{kl} \left( P_j X^k X^l + X^k X^l P_j \right) \\ & \quad - \frac{1}{4 \, x} \left[ X^i \left( X^j P_j + P_j X^j \right) + \left( X^j P_j + P_j X^j \right) X^i \right] \end{split}$$ Conversely we also have $$V^i = i {\left[ \widetilde{V}^i \, , \, K_- \right]} \qquad U_i = i {\left[ \widetilde{U}_i \, , \, K_- \right]} \,.$$ The generators in the $\mathfrak{g}^{(+1)} \oplus \mathfrak{g}^{(+2)}$ subspace form a Heisenberg algebra isomorphic to equation (\[heisenberg2\]): $${\left[ \widetilde{V}^i \, , \, \widetilde{U}_j \right]} = 2 i \, \delta^i_j \, K_+$$ The commutators ${\left[ \mathfrak{g}^{(-1)} \, , \, \mathfrak{g}^{(+1)} \right]}$ close into the grade zero subspace $\mathfrak{g}^{(0)}$: $$\begin{split} {\left[ U_i \, , \, \widetilde{U}_j \right]} &= - i \, \delta_{ij} \, J_+ \qquad {\left[ V^i \, , \, \widetilde{V}^j \right]} = - i \, \delta^{ij} \, J_- \\ {\left[ V^i \, , \, \widetilde{U}_j \right]} &= - 2 \, \delta^{ik} \, M_{kj} + i \, \delta^i_j \left( J_0 + \Delta \right) \\ {\left[ U_i \, , \, \widetilde{V}^j \right]} &= + 2 \, \delta^{jk} \, M_{ik} + i \, \delta^j_i \left( J_0 - \Delta \right) \end{split}$$ $\Delta$ is the generator that determines the 5-grading: $$\begin{aligned} {\left[ K_- \, , \, K_+ \right]} &= i \, \Delta \\ {\left[ \Delta \, , \, U_i \right]} &= - i \, U_i \\ {\left[ \Delta \, , \, \widetilde{U}_i \right]} &= + i \, \widetilde{U}_i \end{aligned} \qquad \qquad \begin{aligned} {\left[ \Delta \, , \, K_\pm \right]} &= \pm 2 i \, K_\pm \\ {\left[ \Delta \, , \, V^i \right]} &= - i \, V^i \\ {\left[ \Delta \, , \, \widetilde{V}^i \right]} &= + i \, \widetilde{V}^i \end{aligned}$$ We note that in this realization, the generators $M_{ij}$ are anti-hermitian and all the other generators of $SO(d+2,2)$ are hermitian.
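All of the grade zero commutation relations above follow from the canonical commutation relations alone, so they can be cross-checked numerically. The sketch below (our own verification aid with hypothetical helper names, not part of the construction) realizes $X^i$ and $P_i$ on a truncated Fock space, taking $d=3$ for speed, and tests the $Sp(2,\mathbb{R})$ relations and a representative $SO(d)$ commutator on low occupation number states, where the truncation is exact:

```python
import numpy as np

def destroy(n):
    """Truncated single-mode annihilation operator on an n-dimensional Fock space."""
    return np.diag(np.sqrt(np.arange(1.0, n)), 1)

def comm(A, B):
    return A @ B - B @ A

n, d = 6, 3          # Fock cutoff per mode; the relations hold for any d
sm = destroy(n)

def mode(op, i):
    """Embed a single-mode operator at slot i of the d-fold tensor product."""
    mats = [np.eye(n)] * d
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

c = [mode(sm, i) for i in range(d)]
cd = [m.conj().T for m in c]
X = [(cd[i] + c[i]) / np.sqrt(2.0) for i in range(d)]       # X^i
P = [1j * (cd[i] - c[i]) / np.sqrt(2.0) for i in range(d)]  # P_i, [X^i, P_j] = i delta

J0 = 0.5 * sum(X[i] @ P[i] + P[i] @ X[i] for i in range(d))
Jm = -sum(X[i] @ X[i] for i in range(d))                    # J_-
Jp = -sum(P[i] @ P[i] for i in range(d))                    # J_+
M = {(i, j): -1j * (X[i] @ P[j] - X[j] @ P[i])
     for i in range(d) for j in range(d)}                   # M_ij

# The truncation is exact on states with at most one quantum per mode,
# so the identities are tested on that subspace only.
occ = np.array(list(np.ndindex(*(n,) * d)))
keep = (occ <= 1).all(axis=1)

def err(A):
    return np.abs(A[:, keep]).max()

sp2_err = max(err(comm(J0, Jp) - 2j * Jp),
              err(comm(J0, Jm) + 2j * Jm),
              err(comm(Jm, Jp) - 4j * J0))
so_d_err = err(comm(M[0, 1], M[1, 2]) + M[0, 2])   # [M_12, M_23] = -M_13
```

The same harness can be reused for any small $d$ and cutoff by adjusting the two parameters at the top.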
The quadratic Casimir operators of the subalgebras $\mathfrak{so}(d)$ and $\mathfrak{sp}(2,\mathbb{R})_J$ of the grade zero subspace, and of $\mathfrak{sp}(2,\mathbb{R})_K$ generated by $K_{\pm}$ and $\Delta$, are given by $$\begin{split} M_{ij} M^{ij} &= - \mathcal{I}_4 - 2 d \\ J_- J_+ + J_+ J_- - 2 \left(J_0\right)^2 &= \mathcal{I}_4 + \frac{d^2}{2} \\ K_- K_+ + K_+ K_- - \frac{1}{2} \Delta^2 &= \frac{1}{4} \mathcal{I}_4 + \frac{d^2}{8} \,. \end{split}$$ They all reduce to the quartic invariant operator $\mathcal{I}_4$ modulo some additive constants. Furthermore, the grade $\pm1$ generators belonging to the coset $$\frac{SO(d+2,2)}{SO(d)\times SO(2,2)}$$ satisfy the identity $$U_i \widetilde{V}^i + \widetilde{V}^i U_i - V^i \widetilde{U}_i - \widetilde{U}_i V^i = 2 \mathcal{I}_4 + d \left( d + 4 \right)$$ in the above realization. The above relations prove the existence of a family of degree 2 polynomials in the enveloping algebra of $\mathfrak{so}(d+2,2)$ that degenerate to a $c$-number for the minimal unitary realization, in accordance with Joseph’s theorem [@MR0342049]: $$\label{eq:JosephIdeal} \begin{split} &M_{ij} M^{ij} + \kappa_1 \left( J_- J_+ + J_+ J_- - 2 \left(J_0\right)^2 \right) + 4 \, \kappa_2 \left( K_- K_+ + K_+ K_- - \frac{1}{2} \Delta^2 \right) \\ &- \frac{1}{2} \left( \kappa_1 + \kappa_2 - 1 \right) \left( U_i \widetilde{V}^i + \widetilde{V}^i U_i - V^i \widetilde{U}_i - \widetilde{U}_i V^i \right) \\ & \qquad \qquad = \frac{1}{2} d \left[ d - 4 \left( \kappa_1 + \kappa_2 \right) \right] \end{split}$$ The quadratic Casimir of $\mathfrak{so}(d+2,2)$ corresponds to the choice $2 \kappa_1 = 2 \kappa_2 = - 1$ in equation (\[eq:JosephIdeal\]). Hence the eigenvalue of the quadratic Casimir for the minimal unitary representation is equal to $\frac{1}{2} d \left( d + 4 \right)$. This minimal unitary representation is realized over the Hilbert space of square integrable functions in $(d+1)$ variables.
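The first two Casimir relations involve only the $X^i$, $P_i$ sector, so they can be checked numerically on a truncated Fock space (the third relation also needs the $(x,p)$ sector and is not checked here). A sketch of such a check, our own verification aid rather than part of the construction, taking $d=3$ since the identities are stated for general $d$:

```python
import numpy as np

def destroy(n):
    """Truncated single-mode annihilation operator on an n-dimensional Fock space."""
    return np.diag(np.sqrt(np.arange(1.0, n)), 1)

n, d = 6, 3
sm = destroy(n)

def mode(op, i):
    mats = [np.eye(n)] * d
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

c = [mode(sm, i) for i in range(d)]
cd = [m.conj().T for m in c]
X = [(cd[i] + c[i]) / np.sqrt(2.0) for i in range(d)]
P = [1j * (cd[i] - c[i]) / np.sqrt(2.0) for i in range(d)]
Id = np.eye(n ** d)

X2 = sum(X[i] @ X[i] for i in range(d))     # X.X
P2 = sum(P[i] @ P[i] for i in range(d))     # P.P
G = sum(X[i] @ P[i] for i in range(d))      # X.P
Gt = sum(P[i] @ X[i] for i in range(d))     # P.X

# quantum quartic invariant I_4 in the chosen operator ordering
I4 = X2 @ P2 + P2 @ X2 - G @ Gt - Gt @ G

def Mop(i, j):
    return -1j * (X[i] @ P[j] - X[j] @ P[i])

M2 = sum(Mop(i, j) @ Mop(i, j) for i in range(d) for j in range(d))  # M_ij M^ij
J0 = 0.5 * (G + Gt)
Jm, Jp = -X2, -P2

# restrict to states with at most one quantum per mode, where truncation is exact
occ = np.array(list(np.ndindex(*(n,) * d)))
keep = (occ <= 1).all(axis=1)

def err(A):
    return np.abs(A[:, keep]).max()

cas_so_err = err(M2 + I4 + 2 * d * Id)                # M_ij M^ij = -I_4 - 2d
cas_sp_err = err(Jm @ Jp + Jp @ Jm - 2 * J0 @ J0
                 - I4 - (d * d / 2) * Id)             # ... = I_4 + d^2/2
```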
Minimal Unitary Realization of $SO(6,2)$ over the Hilbert Space of $L^2$ Functions in Five Variables {#minrepSO(6,2)} ==================================================================================================== We shall specialize the minimal unitary realization of $SO(d+2,2)$ given above to the case of $SO(6,2)$. The corresponding 5-grading of the Lie algebra of $SO(6,2)$ is with respect to its subalgebra $\mathfrak{g}^{(0)} = \mathfrak{so}(4) \oplus \mathfrak{sp}(2,\mathbb{R}) \oplus \mathfrak{so}(1,1)$: $$\begin{split} \mathfrak{so}(6,2) &= \mathfrak{g}^{(-2)} \oplus \mathfrak{g}^{(-1)} \oplus \left[ \mathfrak{so}(4) \oplus \mathfrak{sp}(2,\mathbb{R}) \oplus \mathfrak{so}(1,1) \right] \oplus \mathfrak{g}^{(+1)} \oplus \mathfrak{g}^{(+2)} \\ &= K_- \oplus \left[ U_i \oplus V^i \right] \oplus \left[ M_{ij} \oplus J_{\pm,0} \oplus \Delta \right] \oplus \left[ \widetilde{U}_i \oplus \widetilde{V}^i \right] \oplus K_+ \end{split}$$ where $i,j,\dots = 1,2,3,4$. The noncompact 3-grading of $SO(6,2)$ with respect to the subgroup $SO(5,1) \times SO(1,1)$ {#SO(5,1)xSO(1,1)} ------------------------------------------------------------------------------------------- Considered as the six dimensional conformal group, $SO(6,2)$ has a natural 3-grading with respect to the generator $\mathcal{D}$ of dilatations whose eigenvalues determine the conformal dimensions of operators and states. Let us denote the corresponding 3-graded decomposition of $\mathfrak{so}(6,2)$ as $$\mathfrak{so}(6,2) = \mathfrak{N}^- \oplus \mathfrak{N}^0 \oplus \mathfrak{N}^+$$ where $\mathfrak{N}^0 = \mathfrak{so}(5,1) \oplus \mathfrak{so}(1,1)_{\mathcal{D}}$ with the subalgebra $\mathfrak{so}(5,1)$ in $\mathfrak{N}^{0}$ representing the Lorentz algebra in six dimensions. 
The generator $\mathcal{D}$ of the *noncompact* dilatation subalgebra $\mathfrak{so}(1,1)_{\mathcal{D}}$ is given by $$\mathcal{D} = \frac{1}{2} \left( \Delta + J_0 \right) = \frac{1}{4} \left( x p + p x + X^i P_i + P_i X^i \right)$$ and the generators belonging to $\mathfrak{N}^{\pm}$ and $\mathfrak{N}^0$ are as follows: $$\begin{split} \mathfrak{N}^- &= K_- \oplus J_- \oplus V^i \\ \mathfrak{N}^0 &= \mathcal{D} \oplus \frac{1}{2} \left( \Delta - J_0 \right) \oplus M_{ij} \oplus U_i \oplus \widetilde{V}^i \\ \mathfrak{N}^+ &= K_+ \oplus J_+ \oplus \widetilde{U}_i \end{split}$$ The Lorentz generators $\mathcal{M}_{\mu\nu}$ ($\mu,\nu,\dots = 0,1,2,\dots,5$) are given by $$\begin{aligned} \mathcal{M}_{0i} &= \frac{1}{2} \left( U_i + \delta_{ij} \widetilde{V}^j \right) \\ \mathcal{M}_{ij} &= - i \, M_{ij} \end{aligned} \qquad \qquad \begin{aligned} \mathcal{M}_{05} &= \frac{1}{2} \left( \Delta - J_0 \right) \\ \mathcal{M}_{i5} &= \frac{1}{2} \left( U_i - \delta_{ij} \widetilde{V}^j \right) \end{aligned}$$ and satisfy the $\mathfrak{so}(5,1)$ commutation relations $${\left[ \mathcal{M}_{\mu\nu} \, , \, \mathcal{M}_{\rho\tau} \right]} = i \left( \eta_{\nu\rho} \mathcal{M}_{\mu\tau} - \eta_{\mu\rho} \mathcal{M}_{\nu\tau} - \eta_{\nu\tau} \mathcal{M}_{\mu\rho} + \eta_{\mu\tau} \mathcal{M}_{\nu\rho} \right)$$ where $\eta_{\mu\nu} = \mathrm{diag} (-,+,+,+,+,+)$.
The six translation generators $\mathcal{P}_\mu$ ($\mu = 0,1,2,\dots,5$) of the conformal group $SO(6,2)$ are given by $$\mathcal{P}_0 = K_+ - \frac{1}{2} J_+ \qquad \quad \mathcal{P}_i = \widetilde{U}_i \quad (i = 1,2,3,4) \qquad \quad \mathcal{P}_5 = K_+ + \frac{1}{2} J_+$$ and the special conformal generators $\mathcal{K}_\mu$ ($\mu = 0,1,2,\dots,5$) are given by $$\mathcal{K}_0 = - \frac{1}{2} J_- + K_- \qquad \quad \mathcal{K}_i = - V^i \quad (i = 1,2,3,4) \qquad \quad \mathcal{K}_5 = - \frac{1}{2} J_- - K_- \,.$$ These generators satisfy the commutation relations of $SO(6,2)$ as the six dimensional conformal algebra: $$\begin{split} {\left[ \mathcal{M}_{\mu\nu} \, , \, \mathcal{M}_{\rho\tau} \right]} &= i \left( \eta_{\nu\rho} \mathcal{M}_{\mu\tau} - \eta_{\mu\rho} \mathcal{M}_{\nu\tau} - \eta_{\nu\tau} \mathcal{M}_{\mu\rho} + \eta_{\mu\tau} \mathcal{M}_{\nu\rho} \right) \\ {\left[ \mathcal{P}_\mu \, , \, \mathcal{M}_{\nu\rho} \right]} &= i \left( \eta_{\mu\nu} \, \mathcal{P}_\rho - \eta_{\mu\rho} \, \mathcal{P}_\nu \right) \\ {\left[ \mathcal{K}_\mu \, , \, \mathcal{M}_{\nu\rho} \right]} &= i \left( \eta_{\mu\nu} \, \mathcal{K}_\rho - \eta_{\mu\rho} \, \mathcal{K}_\nu \right) \\ {\left[ \mathcal{D} \, , \, \mathcal{M}_{\mu\nu} \right]} &= {\left[ \mathcal{P}_\mu \, , \, \mathcal{P}_\nu \right]} = {\left[ \mathcal{K}_\mu \, , \, \mathcal{K}_\nu \right]} = 0 \\ {\left[ \mathcal{D} \, , \, \mathcal{P}_\mu \right]} &= + i \, \mathcal{P}_\mu \qquad \qquad {\left[ \mathcal{D} \, , \, \mathcal{K}_\mu \right]} = - i \, \mathcal{K}_\mu \\ {\left[ \mathcal{P}_\mu \, , \, \mathcal{K}_\nu \right]} &= 2 i \left( \eta_{\mu\nu} \, \mathcal{D} + \mathcal{M}_{\mu\nu} \right) \end{split}$$ We should note that the $6D$ Poincaré mass operator vanishes identically $$\mathcal{M}^2 = \eta_{\mu\nu} \mathcal{P}^\mu \mathcal{P}^\nu = 0$$ for the minimal unitary realization given above. 
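These commutation relations are representation independent and can be sanity-checked in the finite dimensional (non-unitary) vector representation of $SO(6,2)$. The sketch below, which is our own verification aid and not the minrep itself, builds the generators $M_{AB}$ acting on $\mathbb{R}^{6,2}$ and a standard conformal embedding of $\mathcal{D}$, $\mathcal{P}_\mu$, $\mathcal{K}_\mu$; the particular index and sign conventions are our own choice:

```python
import numpy as np

# metric of R^{6,2}: indices 0..5 are the 6D indices mu, 6 is spacelike, 7 timelike
eta = np.diag([-1.0, 1, 1, 1, 1, 1, 1, -1])

def Mgen(A, B):
    """Generator M_AB of so(6,2) in the 8-dimensional vector representation."""
    out = np.zeros((8, 8), dtype=complex)
    out[A, :] += 1j * eta[B, :]
    out[B, :] -= 1j * eta[A, :]
    return out

def comm(X, Y):
    return X @ Y - Y @ X

Dil = Mgen(6, 7)                                        # dilatation generator
Pt = [Mgen(mu, 6) - Mgen(mu, 7) for mu in range(6)]     # translations P_mu
Kt = [-(Mgen(mu, 6) + Mgen(mu, 7)) for mu in range(6)]  # special conformal K_mu

errs = []
for mu in range(6):
    errs.append(np.abs(comm(Dil, Pt[mu]) - 1j * Pt[mu]).max())
    errs.append(np.abs(comm(Dil, Kt[mu]) + 1j * Kt[mu]).max())
    for nu in range(6):
        errs.append(np.abs(comm(Pt[mu], Pt[nu])).max())
        errs.append(np.abs(comm(Kt[mu], Kt[nu])).max())
        errs.append(np.abs(comm(Pt[mu], Kt[nu])
                           - 2j * (eta[mu, nu] * Dil + Mgen(mu, nu))).max())
        for rho in range(6):
            errs.append(np.abs(comm(Pt[mu], Mgen(nu, rho))
                               - 1j * (eta[mu, nu] * Pt[rho]
                                       - eta[mu, rho] * Pt[nu])).max())
            errs.append(np.abs(comm(Kt[mu], Mgen(nu, rho))
                               - 1j * (eta[mu, nu] * Kt[rho]
                                       - eta[mu, rho] * Kt[nu])).max())
max_err = max(errs)
```

Note that the vector representation is not unitary and has nonvanishing $\mathcal{M}^2$; the vanishing of the Poincaré mass operator is a property of the minimal unitary realization above, not of the abstract algebra.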
Hence the minimal unitary representation of $SO(6,2)$ corresponds to a massless representation of the six dimensional conformal group. We shall refer to the above 3-graded decomposition as the *noncompact* 3-grading. The compact 3-grading of $SO(6,2)$ with respect to the subgroup $SO(6) \times SO(2)$ {#SO(6)xSO(2)} ------------------------------------------------------------------------------------ The Lie algebra $\mathfrak{so}(6,2)$ has a 3-grading with respect to its maximal compact subalgebra $\mathfrak{C}^0 = \mathfrak{so}(6) \oplus \mathfrak{so}(2)$, determined by the $\mathfrak{so}(2)$ generator $$H = \frac{1}{2} \left[ \left( K_+ + K_- \right) - \frac{1}{2} ( J_+ + J_- ) \right]$$ such that $$\mathfrak{so}(6,2) = \mathfrak{C}^- \oplus \left[ \mathfrak{so}(6) \oplus \mathfrak{so}(2) \right] \oplus \mathfrak{C}^+$$ whose graded subspaces satisfy $${\left[ H \, , \, \mathfrak{C}^+ \right]} = + \, \mathfrak{C}^+ \qquad \qquad \qquad {\left[ H \, , \, \mathfrak{C}^- \right]} = - \, \mathfrak{C}^- \,.$$ In this decomposition: $$\begin{split} \mathfrak{C}^0 &= \mathfrak{so}(6) \oplus \mathfrak{so}(2) = \left[ M_{ij} \oplus \left( \left( K_+ + K_- \right) + \frac{1}{2} \left( J_+ + J_- \right) \right) \oplus \left( U_i - \delta_{ij} \widetilde{V}^j \right) \right. \\ & \qquad \qquad \qquad \qquad \quad \left.
\left( \widetilde{U}_i + \delta_{ij} V^j \right) \right] \oplus \frac{1}{2} \left[ \left( K_+ + K_- \right) - \frac{1}{2} \left( J_+ + J_- \right) \right] \\ \mathfrak{C}^+ &= \left[ \Delta - i \left( K_+ - K_- \right) \right] \oplus \left[ J_0 + \frac{i}{2} \left( J_+ - J_- \right) \right] \oplus \left[ \frac{1}{2} \left( U_i + \delta_{ij} \widetilde{V}^j \right) - \frac{i}{2} \left( \widetilde{U}_i - \delta_{ij} V^j \right) \right] \\ \mathfrak{C}^- &= \left[ \Delta + i \left( K_+ - K_- \right) \right] \oplus \left[ J_0 - \frac{i}{2} \left( J_+ - J_- \right) \right] \oplus \left[ \frac{1}{2} \left( U_i + \delta_{ij} \widetilde{V}^j \right) + \frac{i}{2} \left( \widetilde{U}_i - \delta_{ij} V^j \right) \right] \end{split}$$ Note that in the above 3-grading, the operators belonging to $\mathfrak{C}^+$ are hermitian conjugates of those belonging to $\mathfrak{C}^-$. In the corresponding minimal unitary realization one takes only the hermitian linear combinations of these operators as generators of $\mathfrak{so}(6,2)$. The generator $H$ is the conformal Hamiltonian or the $AdS$ energy depending on whether one is considering $SO(6,2)$ as the six dimensional conformal group or as the seven dimensional $AdS$ group. We shall refer to this grading as the *compact* 3-grading.
The $\mathfrak{so}(6)$ generators $\widetilde{M}_{MN}$ ($M,N,\dots = 1,2,\dots,6$) in the grade zero subspace $\mathfrak{C}^0$ are given by $$\begin{aligned} \widetilde{M}_{ij} &= i \, M_{ij} \\ \widetilde{M}_{i6} &= \frac{1}{2} \left( \widetilde{U}_i + \delta_{ij} V^j \right) \end{aligned} \qquad \qquad \begin{aligned} \widetilde{M}_{i5} &= \frac{1}{2} \left( U_i - \delta_{ij} \widetilde{V}^j \right) \\ \widetilde{M}_{56} &= \frac{1}{2} \left[ \left( K_+ + K_- \right) + \frac{1}{2} \left( J_+ + J_- \right) \right] \end{aligned}$$ and satisfy the $\mathfrak{so}(6)$ algebra $${\left[ \widetilde{M}_{MN} \, , \, \widetilde{M}_{PQ} \right]} = i \left( \delta_{NP} \widetilde{M}_{MQ} - \delta_{MP} \widetilde{M}_{NQ} - \delta_{NQ} \widetilde{M}_{MP} + \delta_{MQ} \widetilde{M}_{NP} \right) \,.$$ To give the decomposition of the minrep of $SO(6,2)$ into K-finite vectors of its maximal compact subgroup $SO(6) \times SO(2)$, we shall define the oscillators $$c_i = \frac{1}{\sqrt{2}} \left( X^i + i \, P_i \right) \qquad \qquad \qquad c_i^\dag = \frac{1}{\sqrt{2}} \left( X^i - i \, P_i \right)$$ or conversely $$X^i = \frac{1}{\sqrt{2}} \left( c_i^\dag + c_i \right) \qquad \qquad \qquad P_i = \frac{i}{\sqrt{2}} \left( c_i^\dag - c_i \right) \,.$$ These oscillators satisfy the commutation relations $${\left[ c_i \, , \, c_j^\dag \right]} = \delta_{ij} \,.$$ The quartic invariant operator $\mathcal{I}_4$ takes on a simple form in terms of these oscillators: $$\begin{split} \mathcal{I}_4 &= - \left( c_i^\dag c_j - c_j^\dag c_i \right)^2 - 8 \\ & = - M_{ij} M_{ij} - 8 \end{split}$$ The $\mathfrak{so}(2)$ generator in $\mathfrak{C}^0$, which determines the 3-grading and plays the role of the $AdS$ energy [@Gunaydin:1984wc; @Gunaydin:1999ci; @Fernando:2001ak], is given in terms of $x$, $p$ and oscillators $c_i$, $c_i^\dag$ as: $$\begin{split} H &= \frac{1}{2} \left[ \left( K_+ + K_- \right) - \frac{1}{2} \left( J_+ + J_- \right) \right] \\ &= \frac{1}{4} \left( x^2 + p^2 \right) +
\frac{1}{2} c_i^\dag c_i - \frac{1}{8 \, x^2} \left( c_i^\dag c_j - c_j^\dag c_i \right)^2 + \frac{3}{16 \, x^2} + 1 \end{split} \label{SO2generator}$$ We can also write the $\mathfrak{so}(6)$ generators $\widetilde{M}_{MN}$ in terms of these oscillators as follows: $$\begin{split} \widetilde{M}_{ij} &= i M_{ij} \\ &= i \left( c_i^\dag c_j - c_j^\dag c_i \right) \\ \widetilde{M}_{i5} &= \frac{1}{2} \left( U_i - \delta_{ij} \widetilde{V}^j \right) \\ &= \frac{i}{2 \sqrt{2}} \left( x + i \, p \right) c_i^\dag - \frac{i}{2 \sqrt{2}} \left( x - i \, p \right) c_i - \frac{i}{2 \sqrt{2} \, x} \left( c_i^\dag c_j - c_j^\dag c_i \right) \left( c_j^\dag + c_j \right) + \frac{3 i}{4 \sqrt{2} \, x} \left( c_i^\dag + c_i \right) \\ \widetilde{M}_{i6} &= \frac{1}{2} \left( \widetilde{U}_i + \delta_{ij} V^j \right) \\ &= \frac{1}{2 \sqrt{2}} \left( x + i \, p \right) c_i^\dag + \frac{1}{2 \sqrt{2}} \left( x - i \, p \right) c_i - \frac{1}{2 \sqrt{2} \, x} \left( c_i^\dag c_j - c_j^\dag c_i \right) \left( c_j^\dag - c_j \right) + \frac{3}{4 \sqrt{2} \, x} \left( c_i^\dag - c_i \right) \\ \widetilde{M}_{56} &= \frac{1}{2} \left[ \left( K_+ + K_- \right) + \frac{1}{2} \left( J_+ + J_- \right) \right] \\ &= \frac{1}{4} \left( x^2 + p^2 \right) - \frac{1}{2} c_i^\dag c_i - \frac{1}{8 \, x^2} \left( c_i^\dag c_j - c_j^\dag c_i \right)^2 + \frac{3}{16 \, x^2} - 1 \end{split}$$ Six operators that belong to the grade $+1$ subspace $\mathfrak{C}^+$ have the following form in terms of these oscillators: $$\begin{split} \frac{1}{2} \left[ \left( U_i + \delta_{ij} \widetilde{V}^j \right) - i \left( \widetilde{U}_i - \delta_{ij} V^j \right) \right] &= \frac{i}{\sqrt{2}} \, \left( x - i \, p \right) c_i^\dag \\ &\quad + \frac{i}{2 \sqrt{2} \, x} \left[ c_j^\dag \left( c_i^\dag c_j - c_j^\dag c_i \right) + \left( c_i^\dag c_j - c_j^\dag c_i \right) c_j^\dag \right] \\ J_0 + \frac{i}{2} \left( J_+ - J_- \right) &= i \, c_i^\dag c_i^\dag \\ \Delta - i \left( K_+ - K_- \right) &= \frac{i}{2} \, 
\left( x - i \, p \right)^2 + \frac{i}{4 \, x^2} \left[ \left( c_i^\dag c_j - c_j^\dag c_i \right)^2 - \frac{3}{2} \right] \end{split}$$ and those that belong to the grade $-1$ subspace $\mathfrak{C}^-$ are given by $$\begin{split} \frac{1}{2} \left[ \left( U_i + \delta_{ij} \widetilde{V}^j \right) + i \left( \widetilde{U}_i - \delta_{ij} V^j \right) \right] &= - \frac{i}{\sqrt{2}} \, \left( x + i \, p \right) c_i \\ &\quad + \frac{i}{2 \sqrt{2} \, x} \left[ c_j \left( c_i^\dag c_j - c_j^\dag c_i \right) + \left( c_i^\dag c_j - c_j^\dag c_i \right) c_j \right] \\ J_0 - \frac{i}{2} \left( J_+ - J_- \right) &= - i \, c_i c_i \\ \Delta + i \left( K_+ - K_- \right) &= - \frac{i}{2} \, \left( x + i \, p \right)^2 - \frac{i}{4 \, x^2} \left[ \left( c_i^\dag c_j - c_j^\dag c_i \right)^2 - \frac{3}{2} \right] \,. \end{split}$$ One could also give a decomposition of $SO(6,2)$ with respect to the subgroup $SO(4) \times SO(2,2)$, which we present in appendix \[SO(4)xSO(2,2)\]. Minimal Unitary Representation of $SO^*(8)$ {#minrepSO*(8)} =========================================== The groups $SO(d+2,2)$ have supersymmetric extensions which are in general supergroups of the form $OSp(d+2,2\,|\,2n,\mathbb{R})$ with even subgroups $SO(d+2,2) \times Sp(2n,\mathbb{R})$. The supergroups whose even subgroups are products of two simple noncompact groups do not, in general, admit any unitary representations. Furthermore, if the group $SO(d+2,2)$ is considered as a conformal group in $(d+2)$ dimensions or as the anti-de Sitter group in $(d+3)$ dimensions, the factor multiplying it in its supersymmetric extension is the $R$-symmetry group, which must be compact [@Nahm:1977tg]. Remarkably, for special values of $d$, either the existence of exceptional superalgebras or certain special isomorphisms allow such extensions. The group $SO(5,2)$ has an extension to the exceptional supergroup $F(4)$ with even subgroup $SO(5,2) \times SU(2)$, which admits positive energy unitary representations.
The covering group of $SO(4,2)$ is the group $SU(2,2)$, which extends to an infinite family of supergroups $SU(2,2|N)$ with even subgroups $SU(2,2) \times U(N)$ that admit positive energy unitary representations. Similarly, the isomorphism of $SO(3,2)$ with $Sp(4,\mathbb{R})$ allows extensions to the supergroups $OSp(N|4,\mathbb{R})$ with even subgroups $Sp(4,\mathbb{R}) \times SO(N)$ that admit positive energy unitary representations. Since $SO(2,2)$ is not simple, one finds a rich family of supersymmetric extensions that admit positive energy unitary representations, which were studied in [@Gunaydin:1986fe]. Similarly, the Lie algebra of $SO(6,2)$ is isomorphic to that of $SO^*(8)$, which has extensions to the supergroups $OSp(8^*|2N)$ with even subgroups $SO^*(8) \times USp(2N)$ that admit positive energy unitary representations.[^4] Hence we will now study the minimal unitary realizations of $SO^*(8)$ and their supersymmetric extensions in the subsequent sections. The 5-grading of $SO^*(8)$ with respect to the subgroup $SO^*(4) \times SU(2) \times SO(1,1)$ {#5GrSO*(8)} ---------------------------------------------------------------- The noncompact Lie algebra $\mathfrak{so}^*(8)$ has a 5-grading with respect to its subalgebra $\mathfrak{so}^*(4) \oplus \mathfrak{su}(2) \oplus \mathfrak{so}(1,1)$, where the $\mathfrak{so}(1,1)$ generator $\Delta$ defines the 5-grading [@Gunaydin:2006vz]: $$\mathfrak{so}^*(8) = \mathfrak{g}^{(-2)} \oplus \mathfrak{g}^{(-1)} \oplus \left[ \mathfrak{so}^*(4) \oplus \mathfrak{su}(2) \oplus \Delta \right] \oplus \mathfrak{g}^{(+1)} \oplus \mathfrak{g}^{(+2)}$$ such that $${\left[ \Delta \, , \, \mathfrak{g}^{(m)} \right]} = m \, \mathfrak{g}^{(m)}$$ In this decomposition, the $\mathfrak{g}^{(\pm 2)}$ subspaces are one-dimensional, and the $\mathfrak{g}^{(\pm 1)}$ subspaces transform in the $\left( \mathbf{4} , \mathbf{2} \right)$ representation of $SO^*(4) \times SU(2)$.
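As a quick bookkeeping check on this decomposition, the dimensions of the graded subspaces must add up to $\dim SO^*(8) = 28$; a two-line arithmetic sketch (our own aid, not part of the construction):

```python
# 5-graded decomposition of so*(8): g^(-2) + g^(-1) + g^(0) + g^(+1) + g^(+2)
dim_so_star8 = 8 * 7 // 2       # dim SO*(8) = dim SO(8) = 28
grade_dims = {
    -2: 1,                      # g^(-2)
    -1: 4 * 2,                  # g^(-1): the (4,2) of SO*(4) x SU(2)
    0: 6 + 3 + 1,               # so*(4) + su(2) + so(1,1)
    +1: 4 * 2,                  # g^(+1): the (4,2) of SO*(4) x SU(2)
    +2: 1,                      # g^(+2)
}
total = sum(grade_dims.values())
```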
Since $SO^*(4) = SU(1,1)\times SU(2)$, the grade zero subalgebra $SU(1,1) \times SU(2) \times SU(2) \times SO(1,1)$ is also isomorphic to that of $SO(6,2)$. For the study of the minrep of $SO^*(8)$ we shall relabel the oscillators introduced in the previous sections as $a_m$, $b_m$ and their hermitian conjugates $a^m = \left( a_m \right)^\dag$, $b^m = \left( b_m \right)^\dag$ ($m,n,\dots = 1,2$) $$\begin{aligned} a_m &= \frac{1}{\sqrt{2}} \left( X^m + i \, P_m \right) \\ b_m &= \frac{1}{\sqrt{2}} \left( X^{2+m} + i \, P_{2+m} \right) \end{aligned} \qquad \qquad \qquad \begin{aligned} a^m &= \frac{1}{\sqrt{2}} \left( X^m - i \, P_m \right) \\ b^m &= \frac{1}{\sqrt{2}} \left( X^{2+m} - i \, P_{2+m} \right) \end{aligned}$$ so that $$\begin{aligned} X^m &= \frac{1}{\sqrt{2}} \left( a^m + a_m \right) \\ X^{2+m} &= \frac{1}{\sqrt{2}} \left( b^m + b_m \right) \end{aligned} \qquad \qquad \qquad \begin{aligned} P_m &= \frac{i}{\sqrt{2}} \left( a^m - a_m \right) \\ P_{2+m} &= \frac{i}{\sqrt{2}} \left( b^m - b_m \right) \,. \end{aligned}$$ They satisfy the commutation relations $${\left[ a_m \, , \, a^n \right]} = \delta^n_m \qquad \qquad \qquad {\left[ b_m \, , \, b^n \right]} = \delta^n_m \,.$$ Then the generators of $\mathfrak{su}(2)$ of $\mathfrak{g}^{(0)}$ that commute with $\mathfrak{so}^*(4)$ can be realized as follows: $$S_+ = a^m b_m \qquad \qquad S_- = \left( S_+ \right)^\dag = a_m b^m \qquad \qquad S_0 = \frac{1}{2} \left( N_a - N_b \right) \label{SU(2)S_generators}$$ where $N_a = a^m a_m$ and $N_b = b^m b_m$ are the respective number operators. We denote this subalgebra as $\mathfrak{su}(2)_S$. 
Its generators satisfy: $${\left[ S_+ \, , \, S_- \right]} = 2 \, S_0 \qquad \qquad \qquad {\left[ S_0 \, , \, S_\pm \right]} = \pm S_\pm$$ The quadratic Casimir of $\mathfrak{su}(2)_S$ is given by $$\begin{split} \mathcal{C}_2 \left[ \mathfrak{su}(2)_S \right] = \mathcal{S}^2 &= {S_0}^2 + \frac{1}{2} \left( S_+ S_- + S_- S_+ \right) \\ &= \frac{1}{2} \left(N_a + N_b \right) \left[ \frac{1}{2} \left( N_a + N_b \right) + 1 \right] - 2 a^{[m} b^{n]} \, a_{[m} b_{n]} \end{split}$$ where square bracketing $a_{[m} b_{n]} = \frac{1}{2} \left( a_m b_n - a_n b_m \right)$ represents antisymmetrization of weight one. We shall label the simple factors of $\mathfrak{so}^*(4)$ subalgebra that commutes with $\mathfrak{su}(2)_S$ as $$\mathfrak{so}^*(4) = \mathfrak{su}(2)_A \oplus \mathfrak{su}(1,1)_N$$ and denote the generators of $\mathfrak{su}(2)_A$ and $\mathfrak{su}(1,1)_N$ as $A_{\pm,0}$ and $N_{\pm,0}$, respectively. In terms of the above $a$-type and $b$-type oscillators, these generators have the following realization: $$\begin{aligned} A_+ &= a^1 a_2 + b^1 b_2 \\ A_- &= \left( A_+ \right)^\dag = a_1 a^2 + b_1 b^2 \\ A_0 &= \frac{1}{2} \left( a^1 a_1 - a^2 a_2 + b^1 b_1 - b^2 b_2 \right) \end{aligned} \qquad \qquad \begin{aligned} N_+ &= a^1 b^2 - a^2 b^1 \\ N_- &= \left( N_+ \right)^\dag = a_1 b_2 - a_2 b_1 \\ N_0 &= \frac{1}{2} \left( N_a + N_b \right) + 1 \end{aligned} \label{SU(2)AN_generators}$$ They satisfy the commutation relations: $$\begin{aligned} {\left[ A_+ \, , \, A_- \right]} &= 2 \, A_0 \\ {\left[ A_0 \, , \, A_\pm \right]} &= \pm A_\pm \end{aligned} \qquad \qquad \qquad \begin{aligned} {\left[ N_- \, , \, N_+ \right]} &= 2 \, N_0 \\ {\left[ N_0 \, , \, N_\pm \right]} &= \pm N_\pm \end{aligned}$$ The quadratic Casimirs of these subalgebras $$\begin{split} \mathcal{C}_2 \left[ \mathfrak{su}(2)_A \right] = \mathcal{A}^2 &= {A_0}^2 + \frac{1}{2} \left( A_+ A_- + A_- A_+ \right) \\ \mathcal{C}_2 \left[ \mathfrak{su}(1,1)_N \right] = \mathcal{N}^2 &= {N_0}^2 - 
\frac{1}{2} \left( N_+ N_- + N_- N_+ \right) \end{split}$$ coincide and are equal to that of $\mathfrak{su}(2)_S$: $$\mathcal{S}^2 = \mathcal{A}^2 = \mathcal{N}^2$$ The transformations relating the $SO^*(8)$ oscillators $a_m$ and $b_m$ to the $SO(6,2)$ oscillators $c_i$ are given in Appendix \[app:bogoliubov\]. The generator that defines the 5-grading can be written as $$\Delta = \frac{1}{2} \left( x p + p x \right)$$ and the $\mathfrak{g}^{(\pm 2)}$ generators are realized as $$K_- = \frac{1}{2} x^2 \qquad \qquad \qquad K_+ = \frac{1}{2} p^2 + \frac{1}{4 \, x^2} \left( 8 \, \mathcal{S}^2 + \frac{3}{2} \right) \,.$$ These three generators form another $\mathfrak{su}(1,1)_K$ subalgebra $${\left[ K_- \, , \, K_+ \right]} = i \, \Delta \qquad \qquad \qquad {\left[ \Delta \, , \, K_\pm \right]} = \pm 2 i \, K_\pm$$ with the quadratic Casimir operator $$\mathcal{C}_2 \left[ \mathfrak{su}(1,1)_K \right] = \mathcal{K}^2 = \frac{1}{2} (K_+ K_- + K_- K_+) - \frac{1}{4} \Delta^2 = \, \mathcal{S}^2 \,.$$ The eight generators that are in the $\mathfrak{g}^{(-1)}$ subspace take the form $$\begin{aligned} U_m &= x \, a_m \\ V_m &= x \, b_m \end{aligned} \qquad \qquad \qquad \qquad \begin{aligned} U^m &= x \, a^m \\ V^m &= x \, b^m \end{aligned}$$ and together with $K_-$ form a Heisenberg algebra: $$\begin{split} {\left[ U_m \, , \, U^n \right]} &= {\left[ V_m \, , \, V^n \right]} = 2 \, \delta^n_m \, K_- \\ {\left[ U_m \, , \, U_n \right]} &= {\left[ V_m \, , \, V_n \right]} = 0 \end{split}$$ The generators in $\mathfrak{g}^{(+1)}$ are obtained from the commutators ${\left[ \mathfrak{g}^{(-1)} \, , \, \mathfrak{g}^{(+2)} \right]}$: $$\begin{split} \widetilde{U}_m = i {\left[ U_m \, , \, K_+ \right]} & \qquad \qquad \qquad \qquad \widetilde{U}^m = \left( \widetilde{U}_m \right)^\dag = i {\left[ U^m \, , \, K_+ \right]} \\ \widetilde{V}_m = i {\left[ V_m \, , \, K_+ \right]} & \qquad \qquad \qquad \qquad \widetilde{V}^m = \left( \widetilde{V}_m \right)^\dag = i {\left[ V^m \, , \,
K_+ \right]} \end{split}$$ Explicitly they are given by $$\begin{split} \widetilde{U}_m &= - p \, a_m + \frac{2i}{x} \left[ \left( S_0 + \frac{3}{4} \right) a_m + S_- b_m \right] \\ \widetilde{U}^m &= - p \, a^m - \frac{2i}{x} \left[ \left( S_0 - \frac{3}{4} \right) a^m + S_+ b^m \right] \\ \widetilde{V}_m &= - p \, b_m - \frac{2i}{x} \left[ \left( S_0 - \frac{3}{4} \right) b_m - S_+ a_m \right] \\ \widetilde{V}^m &= - p \, b^m + \frac{2i}{x} \left[ \left( S_0 + \frac{3}{4} \right) b^m - S_- a^m \right] \end{split} \label{g+1bosonic}$$ and also form another Heisenberg algebra with $K_+$ as its “central charge”: $$\begin{split} {\left[ \widetilde{U}_m \, , \, \widetilde{U}^n \right]} &= {\left[ \widetilde{V}_m \, , \, \widetilde{V}^n \right]} = 2 \, \delta^n_m \, K_+ \\ {\left[ \widetilde{U}_m \, , \, \widetilde{U}_n \right]} &= {\left[ \widetilde{V}_m \, , \, \widetilde{V}_n \right]} = 0 \end{split}$$ The commutators ${\left[ \mathfrak{g}^{(-2)} \, , \, \mathfrak{g}^{(+1)} \right]}$ take the following form: $$\begin{aligned} {\left[ \widetilde{U}_m \, , \, K_- \right]} &= i \, U_m \\ {\left[ \widetilde{V}_m \, , \, K_- \right]} &= i \, V_m \end{aligned} \qquad \qquad \qquad \qquad \begin{aligned} {\left[ \widetilde{U}^m \, , \, K_- \right]} &= i \, U^m \\ {\left[ \widetilde{V}^m \, , \, K_- \right]} &= i \, V^m \end{aligned}$$ Finally, the non-vanishing commutators of the form ${\left[ \mathfrak{g}^{(-1)} \, , \, \mathfrak{g}^{(+1)} \right]}$ are as follows: $$\begin{split} {\left[ U_m \, , \, \widetilde{U}^n \right]} &= - \delta^n_m \, \Delta - 2 i \, \delta^n_m \, N_0 - 2 i \, \delta^n_m \, S_0 - 2 i \, A^n_{~m} \\ {\left[ V_m \, , \, \widetilde{V}^n \right]} &= - \delta^n_m \, \Delta - 2 i \, \delta^n_m \, N_0 + 2 i \, \delta^n_m \, S_0 - 2 i \, A^n_{~m} \\ {\left[ U_m \, , \, \widetilde{V}^n \right]} &= - 2 i \, \delta^n_m \, S_- \qquad \qquad {\left[ V_m \, , \, \widetilde{U}^n \right]} = - 2 i \, \delta^n_m \, S_+ \\ {\left[ U_m \, , \, \widetilde{V}_n 
\right]} &= - 2 i \, \epsilon_{mn} \, N_- \qquad \qquad {\left[ V_m \, , \, \widetilde{U}_n \right]} = + 2 i \, \epsilon_{mn} \, N_- \end{split}$$ where we have labeled the generators of $\mathfrak{su}(2)_A$ as $A^m_{~n}$: $$A^1_{~1} = - A^2_{~2} = A_0 \qquad \qquad \qquad A^1_{~2} = A_+ \qquad \qquad \qquad A^2_{~1} = \left( {A^1_{~2}} \right)^\dag = A_-$$ and denoted the completely antisymmetric tensor by $\epsilon_{mn}$ ($\epsilon_{12} = +1$). With the generators defined above, the 5-grading of the Lie algebra $\mathfrak{so}^*(8)$, defined by $\Delta$, takes the form: $$\begin{split} \mathfrak{so}^*(8) &= ~ \mathbf{1} ~~ \oplus ~~ \left( \mathbf{4} , \mathbf{2} \right) ~ \oplus \left[ \mathfrak{su}(2)_A \oplus \mathfrak{su}(1,1)_N \oplus \mathfrak{su}(2)_S \oplus \mathfrak{so}(1,1)_{\Delta} \right] \oplus ~ \left( \mathbf{4} , \mathbf{2} \right) ~ \oplus ~ \mathbf{1} \\ &= K_- \oplus \left[ U_m \,,\, U^m \,,\, V_m \,,\, V^m \right] \oplus \left[ ~ A_{\pm,0} ~ \oplus ~ N_{\pm,0} ~ \oplus ~ S_{\pm,0} ~ \oplus ~ \Delta ~ \right] \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \oplus \left[ \widetilde{U}_m \,,\, \widetilde{U}^m \,,\, \widetilde{V}_m \,,\, \widetilde{V}^m \right] \oplus K_+ \, \end{split} \label{so*(8)5-grading}$$ As expected, the quadratic Casimir of $\mathfrak{so}^*(8)$, given by $$\mathcal{C}_2 \left[ \mathfrak{so}^*(8) \right] = \mathcal{C}_2 \left[ \mathfrak{su}(2)_S \right] + \mathcal{C}_2 \left[ \mathfrak{su}(2)_A \right] + \mathcal{C}_2 \left[ \mathfrak{su}(1,1)_N \right] + \mathcal{C}_2 \left[ \mathfrak{su}(1,1)_K \right] - \frac{i}{4} \mathcal{F} \left( U , V \right)$$ where $$\begin{split} \mathcal{F} \left( U , V \right) &= \left( U_m \widetilde{U}^m + V_m \widetilde{V}^m + \widetilde{U}^m U_m + \widetilde{V}^m V_m \right) \\ & \qquad - \left( U^m \widetilde{U}_m + V^m \widetilde{V}_m + \widetilde{U}_m U^m + \widetilde{V}_m V^m \right) \end{split}$$ reduces to a $c$-number, $-4$. 
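The bilinear realizations of $\mathfrak{su}(2)_S$, $\mathfrak{su}(2)_A$ and $\mathfrak{su}(1,1)_N$, and the equality of their quadratic Casimirs, can be verified on a truncated four-mode Fock space. A sketch (our own verification aid, not part of the construction), again restricting to low occupation number states where the truncation is exact:

```python
import numpy as np

def destroy(n):
    """Truncated single-mode annihilation operator on an n-dimensional Fock space."""
    return np.diag(np.sqrt(np.arange(1.0, n)), 1)

def comm(A, B):
    return A @ B - B @ A

n = 4                        # Fock cutoff per mode; four modes: a_1, a_2, b_1, b_2
sm = destroy(n)

def mode(op, slot):
    mats = [np.eye(n)] * 4
    mats[slot] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

a = [mode(sm, 0), mode(sm, 1)]
b = [mode(sm, 2), mode(sm, 3)]
ad = [m.conj().T for m in a]
bd = [m.conj().T for m in b]

Sp = ad[0] @ b[0] + ad[1] @ b[1]                     # S_+ = a^m b_m
Sm = Sp.conj().T                                     # S_-
S0 = 0.5 * (ad[0] @ a[0] + ad[1] @ a[1] - bd[0] @ b[0] - bd[1] @ b[1])

Ap = ad[0] @ a[1] + bd[0] @ b[1]                     # A_+
Am = Ap.conj().T
A0 = 0.5 * (ad[0] @ a[0] - ad[1] @ a[1] + bd[0] @ b[0] - bd[1] @ b[1])

Np = ad[0] @ bd[1] - ad[1] @ bd[0]                   # N_+
Nm = Np.conj().T
N0 = 0.5 * (ad[0] @ a[0] + ad[1] @ a[1]
            + bd[0] @ b[0] + bd[1] @ b[1]) + np.eye(n ** 4)

S2 = S0 @ S0 + 0.5 * (Sp @ Sm + Sm @ Sp)             # su(2)_S Casimir
A2 = A0 @ A0 + 0.5 * (Ap @ Am + Am @ Ap)             # su(2)_A Casimir
N2 = N0 @ N0 - 0.5 * (Np @ Nm + Nm @ Np)             # su(1,1)_N Casimir

occ = np.array(list(np.ndindex(n, n, n, n)))
keep = (occ <= 1).all(axis=1)

def err(A):
    return np.abs(A[:, keep]).max()

alg_err = max(err(comm(Sp, Sm) - 2 * S0),
              err(comm(Ap, Am) - 2 * A0),
              err(comm(Nm, Np) - 2 * N0))
cas_err = max(err(S2 - A2), err(S2 - N2))            # S^2 = A^2 = N^2
```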
The noncompact 3-grading of $SO^*(8)$ with respect to the subgroup $SU^*(4) \times SO(1,1)$ {#NC3GrSO*(8)} ------------------------------------------------------------------------------------------- Considered as the six dimensional conformal group, $SO^*(8)$ has a noncompact 3-grading determined by the dilatation generator $\mathcal{D}$: $$\mathfrak{so}^*(8) = \mathfrak{N}^- \oplus \mathfrak{N}^0 \oplus \mathfrak{N}^+$$ where $\mathfrak{N}^0 = \mathfrak{su}^*(4) \oplus \mathfrak{so}(1,1)_{\mathcal{D}}$ and $$\mathcal{D} = \frac{1}{2} \left[ \Delta - i \left( N_+ - N_- \right) \right] \,.$$ The generators that belong to $\mathfrak{N}^{\pm}$ and $\mathfrak{N}^0$ subspaces are as follows: $$\begin{split} \mathfrak{N}^- &= K_- \oplus \left[ N_0 - \frac{1}{2} \left( N_+ + N_- \right) \right] \\ & \quad \oplus \left( U^1 - V_2 \right) \oplus \left( U^2 + V_1 \right) \oplus \left( V^1 + U_2 \right) \oplus \left( V^2 - U_1 \right) \\ \mathfrak{N}^0 &= \mathcal{D} \oplus \frac{1}{2} \left[ \Delta + i \left( N_+ - N_- \right) \right] \oplus S_{\pm,0} \oplus A_{\pm,0} \\ & \quad \oplus \left( U^1 + V_2 \right) \oplus \left( U^2 - V_1 \right) \oplus \left( V^1 - U_2 \right) \oplus \left( V^2 + U_1 \right) \\ & \quad \oplus \left( \widetilde{U}^1 - \widetilde{V}_2 \right) \oplus \left( \widetilde{U}^2 + \widetilde{V}_1 \right) \oplus \left( \widetilde{V}^1 + \widetilde{U}_2 \right) \oplus \left( \widetilde{V}^2 - \widetilde{U}_1 \right) \\ \mathfrak{N}^+ &= K_+ \oplus \left[ N_0 + \frac{1}{2} \left( N_+ + N_- \right) \right] \\ & \quad \oplus \left( \widetilde{U}^1 + \widetilde{V}_2 \right) \oplus \left( \widetilde{U}^2 - \widetilde{V}_1 \right) \oplus \left( \widetilde{V}^1 - \widetilde{U}_2 \right) \oplus \left( \widetilde{V}^2 + \widetilde{U}_1 \right) \end{split}$$ Since $\mathfrak{su}^*(4) \simeq \mathfrak{so}(5,1)$, we find that the Lorentz generators $\mathcal{M}_{\mu\nu}$ ($\mu,\nu,\dots = 0,1,2,\dots,5$) in six dimensions are given by: $$\begin{aligned} 
\mathcal{M}_{01} &= \frac{1}{4} \left[ \left( U^1 + V_2 \right) + \left( V^2 + U_1 \right) + i \left( \widetilde{U}^1 - \widetilde{V}_2 \right) + i \left( \widetilde{V}^2 - \widetilde{U}_1 \right) \right] \\ \mathcal{M}_{02} &= \frac{i}{4} \left[ \left( U^1 + V_2 \right) - \left( V^2 + U_1 \right) + i \left( \widetilde{U}^1 - \widetilde{V}_2 \right) - i \left( \widetilde{V}^2 - \widetilde{U}_1 \right) \right] \\ \mathcal{M}_{03} &= \frac{i}{4} \left[ \left( U^2 - V_1 \right) + \left( V^1 - U_2 \right) + i \left( \widetilde{U}^2 + \widetilde{V}_1 \right) + i \left( \widetilde{V}^1 + \widetilde{U}_2 \right) \right] \\ \mathcal{M}_{04} &= - \frac{1}{4} \left[ \left( U^2 - V_1 \right) - \left( V^1 - U_2 \right) + i \left( \widetilde{U}^2 + \widetilde{V}_1 \right) - i \left( \widetilde{V}^1 + \widetilde{U}_2 \right) \right] \end{aligned}$$ $$\begin{aligned} \mathcal{M}_{15} &= \frac{1}{4} \left[ \left( U^1 + V_2 \right) + \left( V^2 + U_1 \right) - i \left( \widetilde{U}^1 - \widetilde{V}_2 \right) - i \left( \widetilde{V}^2 - \widetilde{U}_1 \right) \right] \\ \mathcal{M}_{25} &= \frac{i}{4} \left[ \left( U^1 + V_2 \right) - \left( V^2 + U_1 \right) - i \left( \widetilde{U}^1 - \widetilde{V}_2 \right) + i \left( \widetilde{V}^2 - \widetilde{U}_1 \right) \right] \\ \mathcal{M}_{35} &= \frac{i}{4} \left[ \left( U^2 - V_1 \right) + \left( V^1 - U_2 \right) - i \left( \widetilde{U}^2 + \widetilde{V}_1 \right) - i \left( \widetilde{V}^1 + \widetilde{U}_2 \right) \right] \\ \mathcal{M}_{45} &= - \frac{1}{4} \left[ \left( U^2 - V_1 \right) - \left( V^1 - U_2 \right) - i \left( \widetilde{U}^2 + \widetilde{V}_1 \right) + i \left( \widetilde{V}^1 + \widetilde{U}_2 \right) \right] \end{aligned}$$ $$\begin{aligned} \mathcal{M}_{12} &= S_0 + A_0 \\ \mathcal{M}_{14} &= \frac{i}{2} \left( S_+ - S_- - A_+ + A_- \right) \\ \mathcal{M}_{24} &= - \frac{1}{2} \left( S_+ + S_- - A_+ - A_- \right) \end{aligned} \qquad \qquad \begin{aligned} \mathcal{M}_{13} &= \frac{1}{2} \left( S_+ + S_- 
+ A_+ + A_- \right) \\ \mathcal{M}_{23} &= \frac{i}{2} \left( S_+ - S_- + A_+ - A_- \right) \\ \mathcal{M}_{34} &= S_0 - A_0 \end{aligned}$$ $$\mathcal{M}_{05} = \frac{1}{2} \left[ \Delta + i \left( N_+ - N_- \right) \right]$$ These Lorentz generators satisfy the $\mathfrak{so}(5,1)$ commutation relations $${\left[ \mathcal{M}_{\mu\nu} \, , \, \mathcal{M}_{\rho\tau} \right]} = i \left( \eta_{\nu\rho} \mathcal{M}_{\mu\tau} - \eta_{\mu\rho} \mathcal{M}_{\nu\tau} - \eta_{\nu\tau} \mathcal{M}_{\mu\rho} + \eta_{\mu\tau} \mathcal{M}_{\nu\rho} \right)$$ where $\eta_{\mu\nu} = \mathrm{diag} (-,+,+,+,+,+)$. The six generators of grade $+1$ space are the momentum generators $\mathcal{P}_\mu$, and the six generators of grade $-1$ space are the special conformal transformations $\mathcal{K}_\mu$ ($\mu = 0,1,2,\dots,5$): $$\begin{aligned} \mathcal{P}_0 &= K_+ + \left[ N_0 + \frac{1}{2} \left( N_+ + N_- \right) \right] \\ \mathcal{P}_1 &= - \frac{1}{2} \left[ \left( \widetilde{U}^1 + \widetilde{V}_2 \right) + \left( \widetilde{V}^2 + \widetilde{U}_1 \right) \right] \\ \mathcal{P}_2 &= - \frac{i}{2} \left[ \left( \widetilde{U}^1 + \widetilde{V}_2 \right) - \left( \widetilde{V}^2 + \widetilde{U}_1 \right) \right] \\ \mathcal{P}_3 &= - \frac{i}{2} \left[ \left( \widetilde{U}^2 - \widetilde{V}_1 \right) + \left( \widetilde{V}^1 - \widetilde{U}_2 \right) \right] \\ \mathcal{P}_4 &= \frac{1}{2} \left[ \left( \widetilde{U}^2 - \widetilde{V}_1 \right) - \left( \widetilde{V}^1 - \widetilde{U}_2 \right) \right] \\ \mathcal{P}_5 &= K_+ - \left[ N_0 + \frac{1}{2} \left( N_+ + N_- \right) \right] \end{aligned} \qquad \qquad \begin{aligned} \mathcal{K}_0 &= \left[ N_0 - \frac{1}{2} \left( N_+ + N_- \right) \right] + K_- \\ \mathcal{K}_1 &= \frac{i}{2} \left[ \left( U^1 - V_2 \right) + \left( V^2 - U_1 \right) \right] \\ \mathcal{K}_2 &= - \frac{1}{2} \left[ \left( U^1 - V_2 \right) - \left( V^2 - U_1 \right) \right] \\ \mathcal{K}_3 &= - \frac{1}{2} \left[ \left( U^2 + V_1 \right) + \left( 
V^1 + U_2 \right) \right] \\ \mathcal{K}_4 &= - \frac{i}{2} \left[ \left( U^2 + V_1 \right) - \left( V^1 + U_2 \right) \right] \\ \mathcal{K}_5 &= \left[ N_0 - \frac{1}{2} \left( N_+ + N_- \right) \right] - K_- \end{aligned} \label{SO*(8)PKgenerators}$$ Together with the generators $\mathcal{M}_{\mu\nu}$ and $\mathcal{D}$, they satisfy the commutation relations: $$\begin{split} {\left[ \mathcal{D} \, , \, \mathcal{P}_\mu \right]} &= + i \, \mathcal{P}_\mu \qquad \qquad {\left[ \mathcal{D} \, , \, \mathcal{K}_\mu \right]} = - i \, \mathcal{K}_\mu \\ {\left[ \mathcal{D} \, , \, \mathcal{M}_{\mu\nu} \right]} &= {\left[ \mathcal{P}_\mu \, , \, \mathcal{P}_\nu \right]} = {\left[ \mathcal{K}_\mu \, , \, \mathcal{K}_\nu \right]} = 0 \\ {\left[ \mathcal{P}_\mu \, , \, \mathcal{M}_{\nu\rho} \right]} &= i \left( \eta_{\mu\nu} \, \mathcal{P}_\rho - \eta_{\mu\rho} \, \mathcal{P}_\nu \right) \\ {\left[ \mathcal{K}_\mu \, , \, \mathcal{M}_{\nu\rho} \right]} &= i \left( \eta_{\mu\nu} \, \mathcal{K}_\rho - \eta_{\mu\rho} \, \mathcal{K}_\nu \right) \\ {\left[ \mathcal{P}_\mu \, , \, \mathcal{K}_\nu \right]} &= 2 i \left( \eta_{\mu\nu} \, \mathcal{D} + \mathcal{M}_{\mu\nu} \right) \end{split}$$ It is also important to note that the six dimensional Poincaré mass operator vanishes identically $$\mathcal{M}^2 = \eta_{\mu\nu} \mathcal{P}^\mu \mathcal{P}^\nu =0$$ for the minimal unitary realization of $SO^*(8)$ given above. In Appendix \[C3GrSO\*(8)\], we give the compact 3-grading of $SO^*(8)$ with respect to the maximal compact subgroup $SU(4) \times U(1)_E$: $$\mathfrak{so}^*(8) = \mathfrak{C}^- \oplus \mathfrak{C}^0 \oplus \mathfrak{C}^+$$ where the $\mathfrak{u}(1)_E$ generator $H$ that determines the compact 3-grading is $$H = N_0 + \frac{1}{2} \left( K_+ + K_- \right) \,. \label{BosonicHamiltonian}$$ It is the $AdS$ energy operator or the conformal Hamiltonian when $SO^*(8)$ is taken as the seven-dimensional $AdS$ group or the six-dimensional conformal group, respectively. 
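These relations can be verified in the eight-dimensional defining (vector) representation of $\mathfrak{so}(6,2)$. The short numpy sketch below uses a conventional matrix embedding of $\mathcal{P}_\mu$, $\mathcal{K}_\mu$ and $\mathcal{D}$ (an illustrative choice, not the oscillator realization constructed in the text), with signs fixed so as to reproduce the commutation relations quoted above:

```python
import numpy as np

# Metric for SO(6,2): indices 0..5 are the 6D spacetime directions with
# eta = diag(-,+,+,+,+,+); index 6 is an extra spacelike and 7 an extra
# timelike direction.
eta = np.diag([-1.0, 1, 1, 1, 1, 1, 1, -1])

def M(A, B):
    """Generator M_{AB} of so(6,2) in the 8-dim defining representation:
    (M_{AB})^C_D = i (delta^C_A eta_{BD} - delta^C_B eta_{AD})."""
    out = np.zeros((8, 8), dtype=complex)
    out[A, :] += 1j * eta[B, :]
    out[B, :] -= 1j * eta[A, :]
    return out

def comm(X, Y):
    return X @ Y - Y @ X

# One conventional conformal embedding (assumed for illustration)
P = [M(mu, 6) + M(mu, 7) for mu in range(6)]
K = [-(M(mu, 6) - M(mu, 7)) for mu in range(6)]
D = -M(6, 7)

for mu in range(6):
    assert np.allclose(comm(D, P[mu]), 1j * P[mu])    # [D, P_mu] = +i P_mu
    assert np.allclose(comm(D, K[mu]), -1j * K[mu])   # [D, K_mu] = -i K_mu
    for nu in range(6):
        assert np.allclose(comm(P[mu], P[nu]), 0)     # translations commute
        assert np.allclose(comm(K[mu], K[nu]), 0)     # special conformal commute
        # [P_mu, K_nu] = 2i (eta_{mu nu} D + M_{mu nu})
        assert np.allclose(comm(P[mu], K[nu]),
                           2j * (eta[mu, nu] * D + M(mu, nu)))
        for rho in range(6):
            assert np.allclose(comm(P[mu], M(nu, rho)),
                               1j * (eta[mu, nu] * P[rho] - eta[mu, rho] * P[nu]))
            assert np.allclose(comm(K[mu], M(nu, rho)),
                               1j * (eta[mu, nu] * K[rho] - eta[mu, rho] * K[nu]))
            for tau in range(6):
                # Lorentz subalgebra so(5,1)
                rhs = 1j * (eta[nu, rho] * M(mu, tau) - eta[mu, rho] * M(nu, tau)
                            - eta[nu, tau] * M(mu, rho) + eta[mu, tau] * M(nu, rho))
                assert np.allclose(comm(M(mu, nu), M(rho, tau)), rhs)
print("so(6,2) conformal algebra relations check out")
```

The vanishing of the Poincaré mass operator, by contrast, is special to the minimal unitary realization and does not hold in this finite-dimensional matrix representation.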
It should also be pointed out that, in the earlier noncompact 3-grading with respect to $\mathfrak{N}^0 = \mathfrak{su}^*(4) \oplus \mathfrak{so}(1,1)_{\mathcal{D}}$, this $AdS$ energy corresponds to $\frac{1}{2} \left( \mathcal{K}_0 + \mathcal{P}_0 \right)$. (See equation (\[SO\*(8)PKgenerators\]).) In Appendix \[C3GrSO\*(8)\], we also give the decomposition of the Lie subalgebra $\mathfrak{su}(4)$ of $\mathfrak{C}^0$ with respect to its subalgebra $ \mathfrak{su}(2)_S \oplus \mathfrak{su}(2)_A \oplus \mathfrak{u}(1)_J$ where the $U(1)$ charge $$J = N_0 -\frac{1}{2} \left( K_+ + K_- \right) \,.$$ is equal to $\frac{1}{2} \left( \mathcal{K}_5 - \mathcal{P}_5 \right)$ in the above noncompact 3-grading with respect to $\mathfrak{N}^0 = \mathfrak{su}^*(4) \oplus \mathfrak{so}(1,1)_{\mathcal{D}}$. (See equation (\[SO\*(8)PKgenerators\]).) Distinguished $SU(1,1)_K$ subgroup of $SO^*(8)$ generated by the isotonic (singular) oscillators {#SU(1,1)ofSO*(8)} ================================================================================================ Note that in terms of the oscillators $a_m$, $b_m$ (and their respective hermitian conjugates $a^m$, $b^m$) and the coordinate $x$ and its conjugate momentum $p$, the $\mathfrak{u}(1)$ generator $H$ (as given in equation (\[BosonicHamiltonian\])) has the following form: $$\begin{split} H &= N_0 + \frac{1}{2} \left( K_+ + K_- \right) \\ &= \frac{1}{2} \left( N_a + N_b \right) + 1 + \frac{1}{4} \left( x^2 + p^2 \right) + \frac{1}{8 \, x^2} \left( 8 \, \mathcal{S}^2 + \frac{3}{2} \right) \\ &= H_a + H_b + H_\odot \end{split}$$ This $H$ plays the role of the seven dimensional $AdS$ energy operator or the six dimensional conformal Hamiltonian. $H_a$ and $H_b$ are the contributions to the Hamiltonian from $a$-type and $b$-type oscillators, that correspond to standard non-singular harmonic oscillators. 
On the other hand, $H_\odot$ is the Hamiltonian of a singular harmonic oscillator with a potential function $$V \left( x \right) = \frac{G}{x^2} \qquad \mbox{where} \quad G = \frac{1}{4} \left( 8 \, \mathcal{S}^2 + \frac{3}{2} \right) \,.$$ $H_\odot$ also arises as the Hamiltonian of conformal quantum mechanics [@deAlfaro:1976je] with $G$ playing the role of the coupling constant [@Gunaydin:2001bt]. In some of the literature it is also referred to as the isotonic oscillator [@Casahorran:1995vt; @carinena-2007]. Let us now consider this singular harmonic oscillator Hamiltonian $$\begin{split} H_\odot &= \frac{1}{2} \left( K_+ + K_- \right) = \frac{1}{4} \left( x^2 + p^2 \right) + \frac{1}{8 \, x^2} \left( 8 \, \mathcal{S}^2 + \frac{3}{2} \right) \\ &= \frac{1}{4} \left( x^2 - \frac{\partial^2}{\partial x^2} \right) + \frac{1}{8 \, x^2} \left( 8 \, \mathcal{S}^2 + \frac{3}{2} \right) \,. \end{split} \label{SingularHamiltonian}$$ Together with the following generators $B_-$ and $B_+$ of $\mathfrak{C}^-$ and $\mathfrak{C}^+$ subspaces of $SO^*(8)$ (see Appendix \[C3GrSO\*(8)\]): $$\begin{split} B_- = \frac{i}{2} \left[ \Delta + i \left( K_+ - K_- \right) \right] &= \frac{1}{4} \left( x + i p \right)^2 - \frac{1}{2 \, x^2} \left( 2 \, \mathcal{S}^2 + \frac{3}{8} \right) \\ &= \frac{1}{4} \left( x + \frac{\partial}{\partial x} \right)^2 - \frac{1}{2 \, x^2} \left( 2 \, \mathcal{S}^2 + \frac{3}{8} \right) \\ B_+ = - \frac{i}{2} \left[ \Delta - i \left( K_+ - K_- \right) \right] &= \frac{1}{4} \left( x - i p \right)^2 - \frac{1}{2 \, x^2} \left( 2 \, \mathcal{S}^2 + \frac{3}{8} \right) \\ &= \frac{1}{4} \left( x - \frac{\partial}{\partial x} \right)^2 - \frac{1}{2 \, x^2} \left( 2 \, \mathcal{S}^2 + \frac{3}{8} \right) \end{split}$$ $H_\odot$ generates the distinguished $\mathfrak{su}(1,1)_K$ subalgebra[^5] $${\left[ B_- \, , \, B_+ \right]} = 2 \, H_\odot \qquad \qquad {\left[ H_\odot \, , \, B_+ \right]} = + \, B_+ \qquad \qquad {\left[ H_\odot \, , \, B_- \right]} = - \, B_-
\,.$$ For a given eigenvalue $\mathfrak{s} \left( \mathfrak{s} + 1 \right)$ of the quadratic Casimir operator $\mathcal{S}^2$ of $SU(2)_S$, the wave functions corresponding to the lowest energy eigenvalue of this singular harmonic oscillator Hamiltonian $H_\odot$ will be superpositions of functions of the form $\psi_0^{(\alpha_\mathfrak{s})} \left( x \right) \Lambda \left( \mathfrak{s} , m_\mathfrak{s} \right)$, where $\Lambda \left( \mathfrak{s} , m_\mathfrak{s} \right)$ is an eigenstate of $\mathcal{S}^2$ and $S_0$, independent of $x$: $$\mathcal{S}^2 \Lambda \left( \mathfrak{s} , m_\mathfrak{s} \right) = \mathfrak{s} \left( \mathfrak{s} + 1 \right) \Lambda \left( \mathfrak{s} , m_\mathfrak{s} \right) \qquad \qquad S_0 \, \Lambda \left( \mathfrak{s} , m_\mathfrak{s} \right) = m_\mathfrak{s} \, \Lambda \left( \mathfrak{s} , m_\mathfrak{s} \right)$$ and $\psi_0^{(\alpha_\mathfrak{s})} \left( x \right)$ is a function that satisfies $$B_- \, \psi_0^{(\alpha_\mathfrak{s})} \left( x \right) \Lambda \left( \mathfrak{s} , m_\mathfrak{s} \right) = 0$$ whose solution is given by [@MR858831] $$\psi_0^{(\alpha_\mathfrak{s})} \left( x \right) = C_0 \, x^{\alpha_\mathfrak{s}} e^{-x^2/2} \label{singularwavefunctions}$$ where $C_0$ is a normalization constant and $$\alpha_\mathfrak{s} = \frac{1}{2} + \sqrt{1 + 4 \, \mathfrak{s} \left( \mathfrak{s} + 1 \right)} = 2 \mathfrak{s} + \frac{3}{2} \,.$$ The normalizability of the state imposes the constraint $$\alpha_\mathfrak{s} \geq \frac{1}{2} \,.$$ Clearly, $\psi_0^{(\alpha_\mathfrak{s} = 2 \mathfrak{s} + 3/2)} \left( x \right) \Lambda \left( \mathfrak{s} , m_\mathfrak{s} \right)$ is an eigenstate of $H_\odot$ with eigenvalue $E_{\odot,0}^{(\alpha_\mathfrak{s})} = \left( \mathfrak{s} + 1 \right)$: $$H_\odot \, \psi_0^{(2 \mathfrak{s} + 3/2)} \left( x \right) \Lambda \left( \mathfrak{s} , m_\mathfrak{s} \right) = \left( \mathfrak{s} + 1 \right) \, \psi_0^{(2 \mathfrak{s} + 3/2)} \left( x \right) \Lambda \left( \mathfrak{s} ,
m_\mathfrak{s} \right) \,.$$ The lowest energy normalizable eigenstate of $H_\odot$ corresponds to the case $\mathfrak{s} = 0$ (therefore $\alpha_\mathfrak{s} = \frac{3}{2}$). Note that $\Lambda \left( 0 , 0 \right)$ is simply the Fock vacuum ${\left\lvert 0 \right\rangle}$ of $a$- and $b$-type oscillators. This “ground state” has energy $$E_{\odot,0}^{(3/2)} = 1 \,.$$ The higher energy eigenstates of $H_\odot$ can be obtained from $\psi_0^{(3/2)} \left( x \right) \Lambda \left( 0 , 0 \right)$ by acting on it repeatedly with the raising operator $B_+$: $$\psi_n^{(3/2)} \left( x \right) \Lambda \left( 0 , 0 \right) = C_n \, \left( B_+ \right)^n \psi_0^{(3/2)} \left( x \right) \Lambda \left( 0 , 0 \right) \label{isotonicgroundstate}$$ where $C_n$ are normalization constants. They correspond to energy eigenvalues: $$H_\odot \, \psi_n^{(3/2)} \left( x \right) \Lambda \left( 0 , 0 \right) = E_{\odot,n}^{(3/2)} \, \psi_n^{(3/2)} \left( x \right) \Lambda \left( 0 , 0 \right)$$ where $$E_{\odot,n}^{(3/2)} = E_{\odot,0}^{(3/2)} + n = 1 + n \,.$$ We shall denote the corresponding states as ${\left\lvert \psi_n^{(3/2)} \left( x \right) \Lambda \left( 0 , 0 \right) \right\rangle} = {\left\lvert \psi_n^{(3/2)} \left( x \right) \right\rangle} \otimes {\left\lvert \Lambda \left( 0 , 0 \right) \right\rangle}$ and refer to them as the particle basis of the state space of the (isotonic) singular oscillator. $SU(2)_S \times SU(2)_A \times U(1)_J$ Basis of the Minimal Unitary Representation of $SO^*(8)$ {#SU2SU2U1} ================================================================================================ Consider the tensor product of Fock spaces of the oscillators $a^m$ and $b^m$. The vacuum state ${\left\lvert 0 \right\rangle}$ is annihilated by all $a_m$ and $b_m$: $$a_m {\left\lvert 0 \right\rangle} = b_m {\left\lvert 0 \right\rangle} = 0$$ where $m = 1,2$. Note that ${\left\lvert \Lambda \left( 0 , 0 \right) \right\rangle} = {\left\lvert 0 \right\rangle}$. 
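As an aside, the spectrum of the singular oscillator described above lends itself to a direct symbolic check. The sympy sketch below replaces the $SU(2)_S$ Casimir $\mathcal{S}^2$ by its eigenvalue $\mathfrak{s} \left( \mathfrak{s} + 1 \right)$ (here $\mathfrak{s} = 0$, the sector containing the Fock vacuum, chosen as an illustrative test value) and verifies that $B_-$ annihilates $\psi_0^{(2\mathfrak{s}+3/2)}$ and that the $B_+$ tower has the evenly spaced energies $E_{\odot,n} = \mathfrak{s} + 1 + n$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
s = sp.Rational(0)                    # SU(2)_S spin eigenvalue; try other values
c = 2*s*(s + 1) + sp.Rational(3, 8)   # the coefficient 2 S^2 + 3/8

def H(f):
    """H_sun = (x^2 - d^2/dx^2)/4 + (8 S^2 + 3/2)/(8 x^2), with S^2 -> s(s+1)."""
    return (x**2*f - sp.diff(f, x, 2))/4 \
        + (8*s*(s + 1) + sp.Rational(3, 2))*f/(8*x**2)

def B_plus(f):
    """B_+ = (x - d/dx)^2 / 4 - (2 S^2 + 3/8)/(2 x^2)."""
    g1 = x*f - sp.diff(f, x)
    return (x*g1 - sp.diff(g1, x))/4 - c*f/(2*x**2)

def B_minus(f):
    """B_- = (x + d/dx)^2 / 4 - (2 S^2 + 3/8)/(2 x^2)."""
    g1 = x*f + sp.diff(f, x)
    return (x*g1 + sp.diff(g1, x))/4 - c*f/(2*x**2)

alpha = 2*s + sp.Rational(3, 2)
psi0 = x**alpha * sp.exp(-x**2/2)

assert sp.simplify(B_minus(psi0)) == 0            # B_- annihilates psi_0
assert sp.simplify(H(psi0) - (s + 1)*psi0) == 0   # ground state energy s + 1

# The tower psi_n = (B_+)^n psi_0 has evenly spaced energies E_n = s + 1 + n
psi = psi0
for n in range(1, 4):
    psi = B_plus(psi)
    assert sp.simplify(H(psi) - (s + 1 + n)*psi) == 0
```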
A “particle basis” of states in this Fock space is provided by tensor products of the following states $${\left\lvert n_{a,m} \right\rangle} = \frac{1}{\sqrt{n_{a,m} !}} \left( a^m \right)^{n_{a,m}} {\left\lvert 0 \right\rangle} \qquad \qquad \qquad {\left\lvert n_{b,m} \right\rangle} = \frac{1}{\sqrt{n_{b,m} !}} \left( b^m \right)^{n_{b,m}} {\left\lvert 0 \right\rangle}$$ where $m = 1,2$ and $n_{a,m}$ and $n_{b,m}$ are non-negative integers. To construct a “particle basis” of the Hilbert space of the minimal unitary representation of $SO^*(8)$, consider the following tensor products of the above states with the state space of the singular (isotonic) oscillator: $${\left\lvert n_{a,1} \right\rangle} \otimes {\left\lvert n_{a,2} \right\rangle} \otimes {\left\lvert n_{b,1} \right\rangle} \otimes {\left\lvert n_{b,2} \right\rangle} \otimes {\left\lvert \psi_n^{(\alpha_\mathfrak{s})} \left( x \right) \Lambda \left( 0 , 0 \right) \right\rangle}$$ and denote them as $$\left( a^1 \right)^{n_{a,1}} \left( a^2 \right)^{n_{a,2}} \left( b^1 \right)^{n_{b,1}} \left( b^2 \right)^{n_{b,2}} {\left\lvert \psi_n^{(\alpha_\mathfrak{s})} \left( x \right) \right\rangle}$$ or simply as $${\left\lvert \psi_n^{(\alpha_\mathfrak{s})} \left( x \right) \,;\, n_{a,1} , n_{a,2} , n_{b,1} , n_{b,2} \right\rangle} \,.$$ For a fixed $N = n_{a,1} + n_{a,2} + n_{b,1} + n_{b,2}$, these states transform in the $\left( \frac{N}{2} , \frac{N}{2} \right)$ representation under the $SU(2)_S \times SU(2)_A$ subgroup: $$\begin{aligned} \mathfrak{s} &= \frac{N}{2} \\ \mathfrak{s}_3 &= \frac{1}{2} \left( n_{a,1} + n_{a,2} - n_{b,1} - n_{b,2} \right) \end{aligned} \qquad \qquad \begin{aligned} \mathfrak{a} &= \frac{N}{2} \\ \mathfrak{a}_3 &= \frac{1}{2} \left( n_{a,1} - n_{a,2} + n_{b,1} - n_{b,2} \right) \end{aligned}$$ However they are, in general, not eigenstates of $U(1)_J$ or the energy operator $H$ ($AdS_7$ energy or $6D$ conformal Hamiltonian) that determines the compact 3-grading of $SO^*(8)$, given
in appendix \[C3GrSO\*(8)\], and commutes with $SU(4)$ generators. There exists a unique lowest energy state in this Hilbert space, namely $${\left\lvert \psi_0^{(3/2)} \left( x \right) \,;\, 0 , 0 , 0 , 0 \right\rangle}$$ that is annihilated by the following six operators in the $\mathfrak{C}^-$ subspace of $\mathfrak{so}^*(8)$: $$\begin{aligned} Y_m &= \frac{1}{2} \left( U_m - i \, \widetilde{U}_m \right) \\ Z_m &= \frac{1}{2} \left( V_m - i \, \widetilde{V}_m \right) \end{aligned} \qquad \qquad \qquad \begin{aligned} N_- &= a_1 b_2 - a_2 b_1 \\ B_- &= \frac{i}{2} \left[ \Delta + i \left( K_+ - K_- \right) \right] \end{aligned}$$ in the compact 3-grading. It transforms as a singlet of $SU(2)_S \times SU(2)_A$ and is an eigenstate of $H$ and $J$ with eigenvalues $E = 2$ and $\mathfrak{J} = 0$, respectively. This state is also a singlet of the $SU(4)$ subgroup. Hence the minimal unitary representation of $SO^*(8)$ is a unitary lowest weight representation. All the other states of the particle basis of the minrep with higher energies can be obtained from ${\left\lvert \psi_0^{(3/2)} \left( x \right) \,;\, 0 , 0 , 0 , 0 \right\rangle}$ by repeatedly acting on it with the following operators in the $\mathfrak{C}^+$ subspace of $\mathfrak{so}^*(8)$: $$\begin{aligned} Y^m &= \frac{1}{2} \left( U^m + i \, \widetilde{U}^m \right) \\ Z^m &= \frac{1}{2} \left( V^m + i \, \widetilde{V}^m \right) \end{aligned} \qquad \qquad \qquad \begin{aligned} N_+ &= a^1 b^2 - a^2 b^1 \\ B_+ &= - \frac{i}{2} \left[ \Delta - i \left( K_+ - K_- \right) \right] \end{aligned}$$ The above six operators in $\mathfrak{C}^+$ transform under $\mathfrak{su}(2)_S \oplus \mathfrak{su}(2)_A \oplus \mathfrak{u}(1)_J$ as follows: $$6 = ( 1/2 , 1/2 )^0 \oplus ( 0 , 0 )^{+1} \oplus ( 0 , 0 )^{-1} \,.$$ The operators $\left( Y^1 , Z^1 \right)$ and $\left( Y^2 , Z^2 \right)$ form two doublets under $\mathfrak{su}(2)_S$.
The operators $\left( Y^1 , Y^2 \right)$ and $\left( Z^1 , Z^2 \right)$ form two doublets under $\mathfrak{su}(2)_A$. $N_+$ and $B_+$ are both singlets under $\mathfrak{su}(2)_S$ and $\mathfrak{su}(2)_A$. $Y^m$ and $Z^m$ have zero $J$-charge, while $N_+$ and $B_+$ have $J$-charges $+1$ and $-1$, respectively. We list the charges of these six operators with respect to $\left( S_0 , A_0 , J \right)$ in Table \[Table:Grade+1generators\].

\begin{table}[h!]
\centering
\begin{tabular}{|c||r|r|r|r|r|r|}
\hline
$\mathfrak{C}^+$ generator & $Y^1$ & $Y^2$ & $Z^1$ & $Z^2$ & $N_+$ & $B_+$ \\
\hline\hline
$\mathfrak{s}_3$ & $+\frac{1}{2}$ & $+\frac{1}{2}$ & $-\frac{1}{2}$ & $-\frac{1}{2}$ & $0$ & $0$ \\
\hline
$\mathfrak{a}_3$ & $+\frac{1}{2}$ & $-\frac{1}{2}$ & $+\frac{1}{2}$ & $-\frac{1}{2}$ & $0$ & $0$ \\
\hline
$\mathfrak{J}$ & $0$ & $0$ & $0$ & $0$ & $+1$ & $-1$ \\
\hline
\end{tabular}
\caption{Charges of the grade $+1$ ($\mathfrak{C}^+$) generators with respect to $\left( S_0 , A_0 , J \right)$.}
\label{Table:Grade+1generators}
\end{table}

The $\mathfrak{C}^+$ operators commute with each other and satisfy the following important relation: $$Y^1 Z^2 - Y^2 Z^1 = N_+ B_+$$ All the states that belong to a given $AdS$ energy level form an irrep of $SU(4)$.
We give the $SU(2)_S \times SU(2)_A \times U(1)_J$ decomposition of these $SU(4)$ irreps in Table \[Table:ScalarDoubleton\] for the first three energy levels together with their respective $SU(4)$ Dynkin labels.[^6]

\begin{table}[h!]
\centering
\begin{tabular}{|l|c|c|}
\hline
Irrep $\left( \mathfrak{s} , \mathfrak{a} \right)^{\mathfrak{J}}$ & $E$ & $SU(4)$ Dynkin \\
\hline\hline
$\left( 0 , 0 \right)^0$ & 2 & $(0,0,0)$ \\
\hline
$\left( 0 , 0 \right)^{-1} \oplus \left( 0 , 0 \right)^{+1} \oplus \left( \frac{1}{2} , \frac{1}{2} \right)^0$ & 3 & $(0,1,0)$ \\
\hline
$\left( 0 , 0 \right)^{-2} \oplus \left( 0 , 0 \right)^0 \oplus \left( 0 , 0 \right)^{+2} \oplus \left( \frac{1}{2} , \frac{1}{2} \right)^{-1} \oplus \left( \frac{1}{2} , \frac{1}{2} \right)^{+1} \oplus \left( 1 , 1 \right)^0$ & 4 & $(0,2,0)$ \\
\hline
\end{tabular}
\caption{The $SU(2)_S \times SU(2)_A \times U(1)_J$ decomposition of the $SU(4)$ irreps at the first three energy levels of the minrep of $SO^*(8)$.}
\label{Table:ScalarDoubleton}
\end{table}

By acting on the lowest weight state ${\left\lvert \Omega \right\rangle} = {\left\lvert \psi_0^{(3/2)} \left( x \right) \,;\, 0 , 0 , 0 , 0 \right\rangle}$ with $\mathfrak{C}^+$ generators $n$ times, one obtains a set of states that are eigenstates of $H$ with energy eigenvalues $n + 2$.
They form an $SU(4)$ irrep with Dynkin labels $(0,n,0)$, which decomposes into the following irreps of $SU(2)_S \times SU(2)_A \times U(1)_J$ subgroup labelled by $\left( \mathfrak{s} , \mathfrak{a} \right)^{\mathfrak{J}}$: $$\begin{split} \left( 0 , n , 0 \right)_{SU(4) \mathrm{~Dynkin}} &= \left( 0 , 0 \right)^{-n} \oplus \left( 0 , 0 \right)^{-n+2} \oplus \dots \oplus \left( 0 , 0 \right)^{n} \\ & \quad \oplus \left( \frac{1}{2} , \frac{1}{2} \right)^{-n+1} \oplus \left( \frac{1}{2} , \frac{1}{2} \right)^{-n+3} \oplus \dots \oplus \left( \frac{1}{2} , \frac{1}{2} \right)^{n-1} \\ & \quad \oplus \left( 1 , 1 \right)^{-n+2} \oplus \left( 1 , 1 \right)^{-n+4} \oplus \dots \oplus \left( 1 , 1 \right)^{n-2} \\ & \qquad \vdots \\ & \quad \oplus \left( \frac{n}{2} , \frac{n}{2} \right)^0 \end{split}$$ Comparing the $SU(4)$ content of the minrep of $SO^*(8)$ with that of the scalar doubleton representation of the seven dimensional $AdS$ group $SO(6,2) \simeq SO^*(8)$ obtained by the oscillator method [@Gunaydin:1984wc; @Gunaydin:1999ci; @Fernando:2001ak], we see that they coincide exactly. Thus the minrep of $SO^*(8)$ is simply the scalar doubleton representation of $SO^*(8)$ whose Poincaré limit in $AdS_7$ is singular, just like Dirac’s singletons of $SO(3,2)$ in $AdS_4$. The doubleton representations correspond to massless representations of $SO^*(8)$ considered as six dimensional conformal group [@Gunaydin:1984wc; @Fernando:2001ak]. At this point we should stress one important point. In the oscillator construction of the scalar doubleton given in [@Gunaydin:1984wc], one is working in the Fock space of two sets of oscillators transforming in the fundamental representation of the maximal compact subgroup $U(4)$. The Fock space of all eight oscillators decomposes into an infinite family of doubleton representations that correspond to massless conformal fields of ever increasing spin. 
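As a quick consistency check on this branching, the dimensions can be compared: the Weyl dimension formula for $SU(4)$ gives $\dim (0,n,0) = \frac{1}{12} (n+1)(n+2)^2(n+3)$, while each piece $\left( \frac{k}{2} , \frac{k}{2} \right)^{\mathfrak{J}}$ contributes $(k+1)^2$ states for each of the $n-k+1$ allowed values of $\mathfrak{J}$. A short Python sketch (the function names are ours, introduced only for this check):

```python
def dim_su4(a, b, c):
    """Weyl dimension formula for the SU(4) irrep with Dynkin labels (a, b, c)."""
    return ((a + 1)*(b + 1)*(c + 1)*(a + b + 2)*(b + c + 2)*(a + b + c + 3)) // 12

def dim_branching(n):
    """Count the states in the SU(2)_S x SU(2)_A x U(1)_J decomposition of
    (0, n, 0): pieces (k/2, k/2)^J with k = 0..n and J = -(n-k), ..., n-k
    in steps of 2, i.e. n - k + 1 values of J, each of dimension (k+1)^2."""
    return sum((k + 1)**2 * (n - k + 1) for k in range(n + 1))

# dims 1, 6, 20 reproduce the table entries at E = 2, 3, 4
assert [dim_su4(0, n, 0) for n in range(4)] == [1, 6, 20, 50]
for n in range(12):
    assert dim_su4(0, n, 0) == dim_branching(n)
```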
In contrast, the minimal unitary representation we constructed above is realized on the tensor product of the Fock space of four bosonic oscillators and the state space of a singular oscillator. In the subsequent sections we shall extend our construction of the minrep of $SO^*(8)$ to the construction of the minimal unitary representations of the supergroups $OSp(8^*|2N)$, which correspond to supermultiplets of massless conformal fields in six dimensions.

Construction of Finite-Dimensional Representations of $USp(2N)$ in terms of Fermionic Oscillators {#USp(2N)}
=================================================================================================

We define two sets of $N$ fermionic oscillators $\alpha_r$, $\beta_r$ and their hermitian conjugates $\alpha^r = \left( \alpha_r \right)^\dag$, $\beta^r = \left( \beta_r \right)^\dag$ ($r = 1,2,\dots,N$), such that they satisfy the usual anti-commutation relations: $${\left\{ \alpha_r \, , \, \alpha^s \right\}} = {\left\{ \beta_r \, , \, \beta^s \right\}} = \delta^s_r \qquad \qquad {\left\{ \alpha_r \, , \, \alpha_s \right\}} = {\left\{ \alpha_r \, , \, \beta_s \right\}} = {\left\{ \beta_r \, , \, \beta_s \right\}} = 0$$ The Lie algebra $\mathfrak{usp}(2N)$ has a 3-graded decomposition with respect to its subalgebra $\mathfrak{u}(N)$ as follows: $$\begin{split} \mathfrak{usp}(2N) &= \mathfrak{g}^{(-1)} \, \oplus \, \mathfrak{g}^{(0)} \, \oplus \, \mathfrak{g}^{(+1)} \\ &= \, S_{rs} \, \oplus \, M^r_{~s} \, \oplus \, S^{rs} \end{split}$$ where $$\begin{split} S_{rs} &= \alpha_r \beta_s + \alpha_s \beta_r \\ M^r_{~s} &= \alpha^r \alpha_s - \beta_s \beta^r \\ S^{rs} &= \beta^r \alpha^s + \beta^s \alpha^r = \left( S_{rs} \right)^\dag \,.
\end{split} \label{USp(2N)generators}$$ They satisfy the commutation relations: $$\begin{split} {\left[ S_{rs} \, , \, S^{tu} \right]} &= - \delta^t_s \, M^u_{~r} - \delta^t_r \, M^u_{~s} - \delta^u_s \, M^t_{~r} - \delta^u_r \, M^t_{~s} \\ {\left[ M^r_{~s} \, , \, S_{tu} \right]} &= - \delta^r_u \, S_{st} - \delta^r_t \, S_{su} \\ {\left[ M^r_{~s} \, , \, S^{tu} \right]} &= \delta^u_s \, S^{rt} + \delta^t_s \, S^{ru} \\ {\left[ M^r_{~s} \, , \, M^t_{~u} \right]} &= \delta^t_s \, M^r_{~u} - \delta^r_u \, M^t_{~s} \end{split}$$ The quadratic Casimir of $\mathfrak{usp}(2N)$ is given by $$\begin{split} \mathcal{C}_2 \left[ \mathfrak{usp}(2N) \right] &= M^r_{~s} M^s_{~r} + \frac{1}{2} \left( S_{rs} S^{rs} + S^{rs} S_{rs} \right) \\ &= N \left( N + 2 \right) - \left( N_\alpha + N_\beta \right) \left[ \left( N_\alpha + N_\beta \right) + 2 \right] - 8 \, \alpha^{(r} \beta^{s)} \, \alpha_{(r} \beta_{s)} \end{split}$$ where “$(rs)$” represents symmetrization of weight one, $\alpha_{(r} \beta_{s)} = \frac{1}{2} \left( \alpha_r \beta_s + \alpha_s \beta_r \right)$. We choose the fermionic Fock vacuum such that $$\alpha_r {\left\lvert 0 \right\rangle} = \beta_r {\left\lvert 0 \right\rangle} = 0 \,.$$ To generate an irrep of $USp(2N)$ in the Fock space in a $U(N)$ basis, one chooses a set of states ${\left\lvert \Omega \right\rangle}$ that transform irreducibly under $U(N)$ and are annihilated by all grade $-1$ operators $S_{rs}$ (i.e.
$S_{rs} {\left\lvert \Omega \right\rangle} = 0$), and acts on them with grade $+1$ operators $S^{rs}$ [@Gunaydin:1990ag]: $$\left\{ {\left\lvert \Omega \right\rangle} \,,\, S^{rs} {\left\lvert \Omega \right\rangle} \,,\, S^{rs} S^{tu} {\left\lvert \Omega \right\rangle} \,,\, \dots\dots \right\}$$ The possible sets of states ${\left\lvert \Omega \right\rangle}$, that transform irreducibly under $U(N)$ and are annihilated by $S_{rs}$, are of the form $$\alpha^{r_1} \alpha^{r_2} \dots \alpha^{r_m} {\left\lvert 0 \right\rangle}$$ or of the equivalent form $$\beta^{r_1} \beta^{r_2} \dots \beta^{r_m} {\left\lvert 0 \right\rangle}$$ where $m \le N$. They lead to irreps of $USp(2N)$ with Dynkin labels [@Gunaydin:1990ag] $$( \, \underbrace{0 , \dots , 0}_{(N-m-1)} , 1 , \underbrace{0 , \dots , 0}_{(m)} \, ) \,.$$ In addition, we have the following states $$\alpha^{[r} \beta^{s]} {\left\lvert 0 \right\rangle} = \frac{1}{2} \left( \alpha^r \beta^s - \alpha^s \beta^r \right) {\left\lvert 0 \right\rangle}$$ that are annihilated by all grade $-1$ operators $S_{tu}$. They lead to the irrep of $USp(2N)$ with Dynkin labels $$( \, \underbrace{0 , \dots , 0}_{(N-3)} , 1 , 0 , 0 \, ) \,.$$ Note that in the special case of $\mathfrak{usp}(4)$, the states $\alpha^r \alpha^s {\left\lvert 0 \right\rangle}$, $\beta^r \beta^s {\left\lvert 0 \right\rangle}$ and $\alpha^{[r} \beta^{s]} {\left\lvert 0 \right\rangle}$ all lead to the trivial representation.
It is important to also note that the following bilinears of fermionic oscillators $$F_+ = \alpha^r \beta_r \qquad \qquad F_- = \beta^r \alpha_r \qquad \qquad F_0 = \frac{1}{2} \left( N_\alpha - N_\beta \right) \label{SU(2)F}$$ where $N_\alpha = \alpha^r \alpha_r$ and $N_\beta = \beta^r \beta_r$ are the respective number operators, generate a $\mathfrak{usp}(2)_F \simeq \mathfrak{su}(2)_F$ algebra $${\left[ F_+ \, , \, F_- \right]} = 2 \, F_0 \qquad \qquad \qquad {\left[ F_0 \, , \, F_\pm \right]} = \pm F_\pm$$ that commutes with the $\mathfrak{usp}(2N)$ algebra defined above. Nonetheless, the equivalent irreps of $USp(2N)$ constructed from the states ${\left\lvert \Omega \right\rangle}$ involving only $\alpha$-type excitations or $\beta$-type excitations can form non-trivial representations of this $USp(2)_F$. For example, the two irreps with Dynkin labels $(1,0)$ of $USp(4)$ constructed from $\alpha^r {\left\lvert 0 \right\rangle}$ and $\beta^r {\left\lvert 0 \right\rangle}$ form the spin $\frac{1}{2}$ representation (doublet) of $USp(2)_F$. The three singlet irreps of $USp(4)$ corresponding to $\alpha^r \alpha^s {\left\lvert 0 \right\rangle}$, $\beta^r \beta^s {\left\lvert 0 \right\rangle}$ and $\alpha^{[r} \beta^{s]} {\left\lvert 0 \right\rangle}$ form the spin 1 representation (triplet) of $USp(2)_F$. The irrep of $USp(4)$ with Dynkin labels $(0,1)$ defined by the vacuum state ${\left\lvert \Omega \right\rangle} = {\left\lvert 0 \right\rangle}$ is a singlet of $USp(2)_F$. We should note that the representations of $USp(2N)$ obtained above by using two sets of fermionic oscillators transforming in the fundamental representation of the subgroup $U(N)$ are the compact analogs of the doubleton representations of $SO^*(2M)$ constructed using two sets of bosonic oscillators transforming in the fundamental representation of $U(M)$ [@Gunaydin:1990ag].
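These statements can be checked numerically in the fermionic Fock space. The sketch below realizes the four modes $\alpha_1, \alpha_2, \beta_1, \beta_2$ of the $N = 2$ (i.e. $\mathfrak{usp}(4)$) case as $16 \times 16$ Jordan-Wigner matrices (an illustrative realization, not used elsewhere in the text) and verifies the $\mathfrak{usp}(2N)$ commutation relations together with the fact that $\mathfrak{su}(2)_F$ commutes with them:

```python
import numpy as np

# Jordan-Wigner matrices for 4 fermionic modes, ordered (alpha_1, alpha_2, beta_1, beta_2)
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator

def mode(i, n=4):
    """Annihilation operator of fermionic mode i (0-based) among n modes."""
    out = np.array([[1.0]])
    for op in [Z]*i + [sm] + [I2]*(n - i - 1):
        out = np.kron(out, op)
    return out

dag = lambda X: X.conj().T
comm = lambda X, Y: X @ Y - Y @ X
d = lambda a, b: 1.0 if a == b else 0.0   # Kronecker delta

N = 2
alpha = [mode(0), mode(1)]
beta = [mode(2), mode(3)]

# usp(4) generators: S_{rs}, S^{rs} and M^r_s as in the text
S_lo = [[alpha[r] @ beta[s] + alpha[s] @ beta[r] for s in range(N)] for r in range(N)]
S_up = [[dag(beta[r]) @ dag(alpha[s]) + dag(beta[s]) @ dag(alpha[r])
         for s in range(N)] for r in range(N)]
Mop = [[dag(alpha[r]) @ alpha[s] - beta[s] @ dag(beta[r]) for s in range(N)]
       for r in range(N)]

for r in range(N):
    for s in range(N):
        for t in range(N):
            for u in range(N):
                assert np.allclose(
                    comm(S_lo[r][s], S_up[t][u]),
                    -(d(t, s)*Mop[u][r] + d(t, r)*Mop[u][s]
                      + d(u, s)*Mop[t][r] + d(u, r)*Mop[t][s]))
                assert np.allclose(comm(Mop[r][s], S_up[t][u]),
                                   d(u, s)*S_up[r][t] + d(t, s)*S_up[r][u])
                assert np.allclose(comm(Mop[r][s], Mop[t][u]),
                                   d(t, s)*Mop[r][u] - d(r, u)*Mop[t][s])

# su(2)_F commutes with all usp(4) generators
F_plus = sum(dag(alpha[r]) @ beta[r] for r in range(N))
for r in range(N):
    for s in range(N):
        for X in (S_lo[r][s], S_up[r][s], Mop[r][s]):
            assert np.allclose(comm(F_plus, X), 0)
print("usp(4) and su(2)_F relations verified")
```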
By realizing the generators of $USp(2N)$ in terms of an arbitrary (even) number of sets of oscillators, one can construct all the finite dimensional representations of $USp(2N)$ [@Gunaydin:1990ag; @Gunaydin:1984wc].

Minimal Unitary Representations of Supergroups $OSp(8^*|2N)$ with Even Subgroups $SO^*(8) \times USp(2N)$ {#minrepOSp(8*|2N)-5Gr}
==========================================================================================================

The noncompact groups $SO^*(2M)$ with maximal compact subgroups $U(M)$ have extensions to supergroups $OSp(2M^*|2N)$ with the even subgroups $SO^*(2M) \times USp(2N)$ that admit positive energy unitary representations. In this section we shall study the minimal unitary representations of $OSp(8^*|2N)$, leaving the minimal representations of $OSp(2M^*|2N)$ for arbitrary $M$ to a separate study [@FGP2010]. We define the minimal unitary representations (supermultiplets) of $OSp(2M^*|2N)$ as those irreducible unitary representations that contain the minimal unitary representation of $SO^*(2M)$. The superalgebra $\mathfrak{osp}(8^*|2N)$ has a 5-grading $$\mathfrak{osp}(8^*|2N) = \mathfrak{g}^{(-2)} \oplus \mathfrak{g}^{(-1)} \oplus \mathfrak{g}^{(0)} \oplus \mathfrak{g}^{(+1)} \oplus \mathfrak{g}^{(+2)}$$ with respect to the subsuperalgebra $$\mathfrak{g}^{(0)} = \mathfrak{osp}(4^*|2N) \oplus \mathfrak{su}(2) \oplus \mathfrak{so}(1,1)_\Delta \,.$$ In the extension of the minimal unitary realization of $SO^*(8)$ to that of $OSp(8^*|2N)$, the grade $-2$ generator $K_-$ remains unchanged. However, now the grade $-1$ subspace contains both even (bosonic) and odd (fermionic) generators.
More precisely, the grade $-1$ subspace $\mathfrak{g}^{(-1)}$ of $\mathfrak{osp}(8^*|2N)$ contains 8 bosonic generators: $$\begin{aligned} U_m &= x \, a_m \\ V_m &= x \, b_m \end{aligned} \qquad \qquad \qquad \begin{aligned} U^m &= x \, a^m \\ V^m &= x \, b^m \end{aligned}$$ and $4 N$ supersymmetry generators: $$\begin{aligned} Q_r &= x \, \alpha_r \\ S_r &= x \, \beta_r \end{aligned} \qquad \qquad \qquad \begin{aligned} Q^r &= x \, \alpha^r \\ S^r &= x \, \beta^r \end{aligned} \label{5GsusyGr-1}$$ They (anti-)commute into the single bosonic generator $K_-$ in grade $-2$ subspace $\mathfrak{g}^{(-2)}$ as follows: $$\begin{split} {\left[ U_m \, , \, U^n \right]} &= {\left[ V_m \, , \, V^n \right]} = 2 \, \delta^n_m \, K_- \\ {\left\{ Q_r \, , \, Q^s \right\}} &= {\left\{ S_r \, , \, S^s \right\}} = 2 \, \delta^s_r \, K_- \end{split}$$ Even and odd generators in $\mathfrak{g}^{(-1)}$ commute with each other and together form a super Heisenberg algebra with $$K_- = \frac{1}{2} x^2$$ as its “central charge.” Now the $\mathfrak{su}(2)_S$ subalgebra in grade zero subspace $\mathfrak{g}^{(0)}$ of $\mathfrak{so}^*(8)$ receives contributions from fermionic oscillators in the supersymmetric extension of $\mathfrak{so}^*(8)$ to $\mathfrak{osp}(8^*|2N)$. The resultant $\mathfrak{su}(2)$ that commutes with $\mathfrak{osp}(4^*|2N)$ of grade zero subalgebra is simply the diagonal subalgebra of $\mathfrak{su}(2)_S $ and $\mathfrak{su}(2)_F$ defined earlier in equation (\[SU(2)F\]). We shall label it as $\mathfrak{su}(2)_T$ in the supersymmetric extension. 
Its generators are: $$\begin{split} T_+ &= S_+ + F_+ = a^m b_m + \alpha^r \beta_r \\ T_- &= S_- + F_- = b^m a_m + \beta^r \alpha_r \\ T_0 &= S_0 + F_0 = \frac{1}{2} \left( N_a - N_b + N_\alpha - N_\beta \right) \end{split}$$ so that $${\left[ T_+ \, , \, T_- \right]} = 2 \, T_0 \qquad \qquad \qquad {\left[ T_0 \, , \, T_\pm \right]} = \pm T_\pm \,.$$ The subsuperalgebra $\mathfrak{osp}(4^*|2N)$ belonging to grade zero subspace $\mathfrak{g}^{(0)}$ has an even subalgebra $\mathfrak{so}^*(4) \oplus \mathfrak{usp}(2N)$. The generators of $\mathfrak{so}^*(4) = \mathfrak{su}(2)_A \oplus \mathfrak{su}(1,1)_N$ were denoted as $A_{\pm,0}$ and $N_{\pm,0}$ (see equation (\[SU(2)AN\_generators\])), and the generators of $\mathfrak{usp}(2N)$ were denoted as $S_{rs}$, $M^r_{~s}$, $S^{rs}$ ($r,s,\dots = 1,\dots,N$) (see equation (\[USp(2N)generators\])). The $8 N$ supersymmetry generators of $\mathfrak{osp}(4^*|2N)$ are realized by the following bilinears: $$\begin{aligned} \Pi_{mr} &= a_m \beta_r - b_m \alpha_r \\ \Sigma_m^{~r} &= a_m \alpha^r + b_m \beta^r \end{aligned} \qquad \qquad \begin{aligned} \overline{\Pi}^{mr} &= \left( \Pi_{mr} \right)^\dag = a^m \beta^r - b^m \alpha^r \\ \overline{\Sigma}^m_{~r} &= \left( \Sigma_m^{~r} \right)^\dag = a^m \alpha_r + b^m \beta_r \end{aligned}$$ Recalling that $$\mathcal{C}_2 \left[ \mathfrak{su}(2)_A \right] = \mathcal{C}_2 \left[ \mathfrak{su}(1,1)_N \right] = \mathcal{C}_2 \left[ \mathfrak{su}(2)_S \right]$$ we find that the quadratic Casimir of $\mathfrak{osp}(4^*|2N)$ can be written as $$\mathcal{C}_2 \left[ \mathfrak{osp}(4^*|2N) \right] = \mathcal{C}_2 \left[ \mathfrak{su}(2)_S \right] - \frac{1}{8} \mathcal{C}_2 \left[ \mathfrak{usp}(2N) \right] + \frac{1}{4} \mathcal{F} \left( \Pi , \Sigma \right)$$ where $$\mathcal{F} \left( \Pi , \Sigma \right) = \Pi_{mr} \, \overline{\Pi}^{mr} - \overline{\Pi}^{mr} \, \Pi_{mr} + \Sigma_m^{~r} \, \overline{\Sigma}^m_{~r} - \overline{\Sigma}^m_{~r} \, \Sigma_m^{~r} \,.$$ Remarkably, it reduces
to the quadratic Casimir of $SU(2)_T$ modulo an additive constant for the minimal unitary realization $$\mathcal{C}_2 \left[ \mathfrak{osp}(4^*|2N) \right] = \mathcal{C}_2 \left[ \mathfrak{su}(2)_{T} \right] - \frac{N \left( N - 4 \right)}{16}$$ where $$\mathcal{C}_2 \left[ \mathfrak{su}(2)_{T} \right] = {T_0}^2 + \frac{1}{2} \left( T_+ \, T_- + T_- \, T_+ \right) \equiv \mathcal{T}^2 \,.$$ Using this result, one can write the grade $+2$ generator in full generality as $$K_+ = \frac{1}{2} p^2 + \frac{1}{4 \, x^2} \left( 8 \, \mathcal{T}^2 + \frac{3}{2} \right)$$ that is valid for all $N$. The generators in grade $+1$ subspace $\mathfrak{g}^{(+1)}$ are obtained from the commutators of $\mathfrak{g}^{(-1)}$ generators with $K_+$: $$\begin{aligned} \widetilde{U}_m &= i {\left[ U_m \, , \, K_+ \right]} \\ \widetilde{V}_m &= i {\left[ V_m \, , \, K_+ \right]} \\ \widetilde{Q}_r &= i {\left[ Q_r \, , \, K_+ \right]} \\ \widetilde{S}_r &= i {\left[ S_r \, , \, K_+ \right]} \end{aligned} \qquad \qquad \qquad \begin{aligned} \widetilde{U}^m &= i {\left[ U^m \, , \, K_+ \right]} \\ \widetilde{V}^m &= i {\left[ V^m \, , \, K_+ \right]} \\ \widetilde{Q}^r &= i {\left[ Q^r \, , \, K_+ \right]} \\ \widetilde{S}^r &= i {\left[ S^r \, , \, K_+ \right]} \end{aligned}$$ The explicit form of the even and odd generators of $\mathfrak{g}^{(+1)}$ are as follows: $$\begin{split} \widetilde{U}_m &= - p \, a_m + \frac{2i}{x} \left[ \left( T_0 + \frac{3}{4} \right) a_m + T_- b_m \right] \\ \widetilde{U}^m &= - p \, a^m - \frac{2i}{x} \left[ \left( T_0 - \frac{3}{4} \right) a^m + T_+ b^m \right] \\ \widetilde{V}_m &= - p \, b_m - \frac{2i}{x} \left[ \left( T_0 - \frac{3}{4} \right) b_m - T_+ a_m \right] \\ \widetilde{V}^m &= - p \, b^m + \frac{2i}{x} \left[ \left( T_0 + \frac{3}{4} \right) b^m - T_- a^m \right] \end{split}$$ $$\begin{split} \widetilde{Q}_r &= - p \, \alpha_r + \frac{2i}{x} \left[ \left( T_0 + \frac{3}{4} \right) \alpha_r + T_- \beta_r \right] \\ \widetilde{Q}^r &= - p \, 
\alpha^r - \frac{2i}{x} \left[ \left( T_0 - \frac{3}{4} \right) \alpha^r + T_+ \beta^r \right] \\ \widetilde{S}_r &= - p \, \beta_r - \frac{2i}{x} \left[ \left( T_0 - \frac{3}{4} \right) \beta_r - T_+ \alpha_r \right] \\ \widetilde{S}^r &= - p \, \beta^r + \frac{2i}{x} \left[ \left( T_0 + \frac{3}{4} \right) \beta^r - T_- \alpha^r \right] \end{split} \label{5GsusyGr+1}$$ They form a (super-)Heisenberg algebra together with $K_+$: $$\begin{split} {\left[ \widetilde{U}_m \, , \, \widetilde{U}^n \right]} &= {\left[ \widetilde{V}_m \, , \, \widetilde{V}^n \right]} = 2 \, \delta^n_m \, K_+ \\ {\left\{ \widetilde{Q}_r \, , \, \widetilde{Q}^s \right\}} &= {\left\{ \widetilde{S}_r \, , \, \widetilde{S}^s \right\}} = 2 \, \delta^s_r \, K_+ \end{split}$$ The commutation relations of grade $+1$ generators with the grade $-2$ generator $K_-$ are: $$\begin{aligned} {\left[ \widetilde{U}_m \, , \, K_- \right]} &= i \, U_m \\ {\left[ \widetilde{V}_m \, , \, K_- \right]} &= i \, V_m \\ {\left[ \widetilde{Q}_r \, , \, K_- \right]} &= i \, Q_r \\ {\left[ \widetilde{S}_r \, , \, K_- \right]} &= i \, S_r \end{aligned} \qquad \qquad \begin{aligned} {\left[ \widetilde{U}^m \, , \, K_- \right]} &= i \, U^m \\ {\left[ \widetilde{V}^m \, , \, K_- \right]} &= i \, V^m \\ {\left[ \widetilde{Q}^r \, , \, K_- \right]} &= i \, Q^r \\ {\left[ \widetilde{S}^r \, , \, K_- \right]} &= i \, S^r \end{aligned}$$ In terms of the generators defined above the 5-graded decomposition of the Lie superalgebra $\mathfrak{osp}(8^*|2N)$, defined by the generator $\Delta$, takes the form: $$\begin{split} \mathfrak{osp}(8^*|2N) &= \mathfrak{g}^{(-2)} \oplus \mathfrak{g}^{(-1)} \oplus \left[ \mathfrak{osp}(4^*|2N) \oplus \mathfrak{su}(2) \oplus \mathfrak{so}(1,1)_\Delta \right] \oplus \mathfrak{g}^{(+1)} \oplus \mathfrak{g}^{(+2)} \\ &= K_- \oplus \left[ U_m \,,\, U^m \,,\, V_m \,,\, V^m \,,\, Q_r \,,\, Q^r \,,\, S_r \,,\, S^r \right] \\ & \qquad \oplus \left[ A_{\pm,0} \,,\, N_{\pm,0} \,,\, S_{rs} \,,\, M^r_{~s}
\,,\, S^{rs} \,,\, \Pi_{mr} \,,\, \overline{\Pi}^{mr} \,,\, \Sigma_m^{~r} \,,\, \overline{\Sigma}^m_{~r} \,,\, T_{\pm,0} \,,\, \Delta \right] \\ & \qquad \qquad \oplus \left[ \widetilde{U}_m \,,\, \widetilde{U}^m \,,\, \widetilde{V}_m \,,\, \widetilde{V}^m \,,\, \widetilde{Q}_r \,,\, \widetilde{Q}^r \,,\, \widetilde{S}_r \,,\, \widetilde{S}^r \right] \oplus K_+ \end{split}$$ We give the additional (super-)commutation relations of this superalgebra in the 5-graded basis in appendix \[OSp(8\*|2N)-5Gr\]. Compact 3-Grading of $OSp(8^*|2N)$ and its Minimal Unitary Representation {#minrepOSp(8*|2N)-3Gr} ========================================================================= The Lie superalgebra $\mathfrak{osp}(8^*|2N)$ can be given a 3-graded decomposition with respect to its compact subsuperalgebra $\mathfrak{u}(4|N) = \mathfrak{su}(4|N) \oplus \mathfrak{u}(1)_{\mathcal{H}}$ $$\mathfrak{osp}(8^*|2N) = \mathfrak{C}^- \oplus \mathfrak{C}^0 \oplus \mathfrak{C}^+$$ where $$\begin{split} \mathfrak{C}^- &= \frac{1}{2} \left( U_m - i \, \widetilde{U}_m \right) \oplus \frac{1}{2} \left( V_m - i \, \widetilde{V}_m \right) \oplus N_- \oplus \frac{i}{2} \left[ \Delta + i \left( K_+ - K_- \right) \right] \oplus S_{rs} \\ & \qquad \oplus \frac{1}{2} \left( Q_r - i \, \widetilde{Q}_r \right) \oplus \frac{1}{2} \left( S_r - i \, \widetilde{S}_r \right) \oplus \Pi_{mr} \\ \mathfrak{C}^0 &= \left[ T_{\pm,0} \oplus A_{\pm,0} \oplus \left[ N_0 - \frac{1}{2} \left( K_+ + K_- \right) \right] \oplus \frac{1}{2} \left( U_m + i \, \widetilde{U}_m \right) \oplus \frac{1}{2} \left( U^m - i \, \widetilde{U}^m \right) \right. \\ & \qquad \left. 
\oplus \frac{1}{2} \left( V_m + i \, \widetilde{V}_m \right) \oplus \frac{1}{2} \left( V^m - i \, \widetilde{V}^m \right) \oplus M^r_{~s} \oplus \left[ \frac{1}{2} \left( K_+ + K_- \right) + \frac{2}{N} M_0 \right] \right] \oplus \mathcal{H} \\ & \quad \oplus \frac{1}{2} \left( Q_r + i \, \widetilde{Q}_r \right) \oplus \frac{1}{2} \left( Q^r - i \, \widetilde{Q}^r \right) \oplus \frac{1}{2} \left( S_r + i \, \widetilde{S}_r \right) \oplus \frac{1}{2} \left( S^r - i \, \widetilde{S}^r \right) \oplus \Sigma_m^{~r} \oplus \overline{\Sigma}^m_{~r} \\ \mathfrak{C}^+ &= \frac{1}{2} \left( U^m + i \, \widetilde{U}^m \right) \oplus \frac{1}{2} \left( V^m + i \, \widetilde{V}^m \right) \oplus N_+ \oplus \left\{ - \frac{i}{2} \left[ \Delta - i \left( K_+ - K_- \right) \right] \right\} \oplus S^{rs} \\ & \qquad \oplus \frac{1}{2} \left( Q^r + i \, \widetilde{Q}^r \right) \oplus \frac{1}{2} \left( S^r + i \, \widetilde{S}^r \right) \oplus \overline{\Pi}^{mr} \end{split}$$ The $U(1)$ generator $\mathcal{H}$ that defines the compact 3-grading of $\mathfrak{osp}(8^*|2N)$ is given by $$\mathcal{H} = \frac{1}{2} \left( K_+ + K_- \right) + N_0 + M_0$$ where $M_0$ is the generator of the $U(1)$ factor of the $U(N)$ subgroup of $USp(2N)$ (equation (\[USp(2N)generators\])): $$M_0 = \frac{1}{2} \left( N_\alpha + N_\beta - N \right)$$ Therefore $$\mathcal{H} = \frac{1}{4} \left( x^2 + p^2 \right) + \frac{1}{8 \, x^2} \left( 8 \, \mathcal{T}^2 + \frac{3}{2} \right) + \frac{1}{2} \left( N_a + N_b + N_\alpha + N_\beta \right) + \frac{2 - N}{2} \label{3-GrGenerator}$$ plays the role of the “total energy” operator.
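As an illustrative consistency check of equation (\[3-GrGenerator\]) (a numerical sketch, not part of the derivation), one can verify symbolically that on a lowest weight state, where all oscillator number operators vanish and $\mathcal{T}^2 = 0$, the singular oscillator wave function $x^{3/2} \, e^{-x^2/2}$ is an eigenfunction with eigenvalue $1$, so that $\mathcal{H}$ takes the value $1 + \frac{2-N}{2} = 2 - \frac{N}{2}$ on such a state:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
N = sp.symbols('N')

# Lowest weight wave function of the singular oscillator (normalization omitted)
psi = x**sp.Rational(3, 2) * sp.exp(-x**2 / 2)

# Singular oscillator part of H with T^2 = 0:
# H_sing = (x^2 + p^2)/4 + (3/2)/(8 x^2), where p = -i d/dx
H_psi = sp.Rational(1, 4) * (x**2 * psi - sp.diff(psi, x, 2)) \
        + sp.Rational(3, 2) / (8 * x**2) * psi

# Eigenvalue of the singular oscillator on psi
E = sp.simplify(H_psi / psi)
print(E)  # -> 1

# All number operators annihilate the Fock vacuum, so the total energy
# on the lowest weight state is E + (2 - N)/2 = 2 - N/2.
print(sp.simplify(E + (2 - N) / 2))
```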
Note that, in the supersymmetric extension, the $\mathfrak{u}(1)$ generator $H$ of $\mathfrak{so}^*(8)$ (equation (\[BosonicHamiltonian2\])), i.e. the $AdS_7$ energy that determines its compact 3-grading, becomes $$\begin{split} H_B &= \frac{1}{2} \left( K_+ + K_- \right) + N_0 \\ &= \frac{1}{4} \left( x^2 + p^2 \right) + \frac{1}{8 \, x^2} \left( 8 \, \mathcal{T}^2 + \frac{3}{2} \right) + \frac{1}{2} \left( N_a + N_b \right) + 1 \\ &= H_\odot + H_a + H_b \,. \end{split} \label{NonSusyH}$$ The Hamiltonian of the singular oscillator now has contributions from the fermionic oscillators: $$H_\odot = \frac{1}{4} \left( x^2 + p^2 \right) + \frac{1}{8 \, x^2} \left( 8 \, \mathcal{T}^2 + \frac{3}{2} \right)$$ where $\mathcal{T}^2$ is the quadratic Casimir of $SU(2)_T$, the diagonal subgroup of $SU(2)_S$ and $SU(2)_F$, which are realized in terms of purely bosonic and purely fermionic oscillators, respectively. $H_a$ and $H_b$ remain unchanged in the supersymmetric extension: $$H_a = \frac{1}{2} \left( N_a + 1 \right) \qquad \qquad \qquad H_b = \frac{1}{2} \left( N_b + 1 \right)$$ The explicit expressions for the bosonic operators that belong to the subspace $\mathfrak{C}^-$ of $\mathfrak{osp}(8^*|2N)$ in the compact 3-grading are as follows[^7]: $$\begin{split} Y_m &= \frac{1}{2} \left( U_m - i \, \widetilde{U}_m \right) = \frac{1}{2} \left( x + i \, p \right) a_m + \frac{1}{x} \left[ \left( T_0 + \frac{3}{4} \right) a_m + T_- b_m \right] \\ Z_m &= \frac{1}{2} \left( V_m - i \, \widetilde{V}_m \right) = \frac{1}{2} \left( x + i \, p \right) b_m - \frac{1}{x} \left[ \left( T_0 - \frac{3}{4} \right) b_m - T_+ a_m \right] \\ N_- &= a_1 b_2 - a_2 b_1 \\ B_- &= \frac{i}{2} \left[ \Delta + i \left( K_+ - K_- \right) \right] = \frac{1}{4} \left( x + i \, p \right)^2 - \frac{1}{8 \, x^2} \left( 8 \, \mathcal{T}^2 + \frac{3}{2} \right) \\ S_{rs} &= \alpha_r \beta_s + \alpha_s \beta_r \end{split} \label{OSp(8*|N)Gr-1B}$$ The $4 N$ supersymmetry generators in the $\mathfrak{C}^-$
subspace are given by: $$\begin{split} \mathfrak{Q}_r &= \frac{1}{2} \left( Q_r - i \, \widetilde{Q}_r \right) = \frac{1}{2} \left( x + i \, p \right) \alpha_r + \frac{1}{x} \left[ \left( T_0 + \frac{3}{4} \right) \alpha_r + T_- \beta_r \right] \\ \mathfrak{S}_r &= \frac{1}{2} \left( S_r - i \, \widetilde{S}_r \right) = \frac{1}{2} \left( x + i \, p \right) \beta_r - \frac{1}{x} \left[ \left( T_0 - \frac{3}{4} \right) \beta_r - T_+ \alpha_r \right] \\ \Pi_{mr} &= a_m \beta_r - b_m \alpha_r \end{split} \label{OSp(8*|N)Gr-1F}$$ The operators that belong to $\mathfrak{C}^+$ subspace are the Hermitian conjugates of those in $\mathfrak{C}^-$. The bosonic operators in $\mathfrak{C}^+$ are: $$\begin{split} Y^m &= \frac{1}{2} \left( U^m + i \, \widetilde{U}^m \right) = \frac{1}{2} \left( x - i \, p \right) a^m + \frac{1}{x} \left[ \left( T_0 - \frac{3}{4} \right) a^m + T_+ b^m \right] \\ Z^m &= \frac{1}{2} \left( V^m + i \, \widetilde{V}^m \right) = \frac{1}{2} \left( x - i \, p \right) b^m - \frac{1}{x} \left[ \left( T_0 + \frac{3}{4} \right) b^m - T_- a^m \right] \\ N_+ &= a^1 b^2 - a^2 b^1 \\ B_+ &= - \frac{i}{2} \left[ \Delta - i \left( K_+ - K_- \right) \right] = \frac{1}{4} \left( x - i \, p \right)^2 - \frac{1}{8 \, x^2} \left( 8 \, \mathcal{T}^2 + \frac{3}{2} \right) \\ S^{rs} &= \alpha^r \beta^s + \alpha^s \beta^r \end{split} \label{OSp(8*|N)Gr+1B}$$ The $4 N$ supersymmetry generators in $\mathfrak{C}^+$ subspace are: $$\begin{split} \mathfrak{Q}^r &= \frac{1}{2} \left( Q^r + i \, \widetilde{Q}^r \right) = \frac{1}{2} \left( x - i \, p \right) \alpha^r + \frac{1}{x} \left[ \left( T_0 - \frac{3}{4} \right) \alpha^r + T_+ \beta^r \right] \\ \mathfrak{S}^r &= \frac{1}{2} \left( S^r + i \, \widetilde{S}^r \right) = \frac{1}{2} \left( x - i \, p \right) \beta^r - \frac{1}{x} \left[ \left( T_0 + \frac{3}{4} \right) \beta^r - T_- \alpha^r \right] \\ \overline{\Pi}^{mr} &= a^m \beta^r - b^m \alpha^r \end{split} \label{OSp(8*|N)Gr+1F}$$ We find again the following relation 
$$Y^1 \, Z^2 - Y^2 \, Z^1 = N_+ \, B_+$$ among the generators in $\mathfrak{C}^+$ within the supersymmetric extension of the minrep. We give the (super-)commutation relations between these $\mathfrak{C}^-$ and $\mathfrak{C}^+$ operators and the explicit form of the generators of grade zero subspace $\mathfrak{C}^0$ and their (super-)commutation relations in appendix \[OSp(8\*|2N)-3Gr\]. In the supersymmetric extension, the quadratic Casimirs of two $SU(2)$ subgroups are no longer identical. The generators of $SU(2)_A$ remain unchanged, but $SU(2)_S$ generators get contributions from fermions and go over to $SU(2)_T$. (See appendix \[OSp(8\*|2N)-3Gr\] for their explicit forms.) As we showed in section \[SU2SU2U1\], the minimal unitary representation of $SO^*(8) \simeq SO(6,2)$ is a lowest weight representation with a unique lowest weight vector ${\left\lvert \psi^{(3/2)}_0 (x) \,;\, 0 , 0 , 0 , 0 \right\rangle}$ that is annihilated by all the operators in $\mathfrak{C}^-$ subspace and corresponds to a conformal scalar in six dimensions. The lowest weight vector ${\left\lvert \psi^{(3/2)}_0 (x) \,;\, 0 , 0 , 0 , 0 \right\rangle}$ is a singlet of the semi-simple part of the little group, namely $SO(4) = SU(2)_S \times SU(2)_A$, of massless states in six dimensions. Now the minimal unitary representation of $OSp(8^*|2N)$ constructed above restricts to a finite number of inequivalent unitary irreducible representations of $SO^*(8)$, whose realization involves fermionic as well as bosonic oscillators. We shall refer to the resulting representations of $SO^*(8)$ as “deformations” of the minimal unitary representation. These deformations of the minimal unitary representation of $SO^*(8)$ also satisfy the Poincaré massless condition $$\mathcal{M}^2 = \eta_{\mu\nu} P^\mu P^\nu = 0$$ and hence correspond to massless conformal fields in six dimensions. Note that $SO(4)$ is the six dimensional analog of the little group $SO(2)$ of massless states in four dimensions. 
The minimal unitary representation of the $4D$ conformal group $SO(4,2) = SU(2,2) / \mathbf{Z}_2$ corresponds to a massless conformal field in four dimensions [@Fernando:2009fq], and its deformations, which are labeled by a real parameter $\zeta$, also describe massless conformal fields. For physical fields, this parameter is simply twice the helicity of a massless unitary representation of the Poincaré subgroup of $SO(4,2)$. They are the doubleton representations of $SU(2,2)$, whose supersymmetric extensions were studied in [@Gunaydin:1984fk; @Gunaydin:1998jc; @Gunaydin:1998sw]. It was shown a long time ago that the corresponding representations of the conformal group $SU(2,2)$ remain irreducible under the restriction to the four dimensional Poincaré subgroup [@Mack:1969dg].[^8] We expect the massless doubleton representations of $SO^*(8)$ to remain irreducible under restriction to the $6D$ Poincaré subgroup as well. Minimal Unitary Supermultiplet of $\mathfrak{osp}(8^*|2N)$ {#minrepsupermultiplet} ========================================================== Since the subgroup $SU(2)_S$ of $SO^*(8)$ is replaced by $SU(2)_T$ when $SO^*(8)$ is extended to the supergroup $OSp(8^*|2N)$, the parameter $\alpha$ in the wave functions $\psi_n^{(\alpha)} \left( x \right)$ (as defined in equation (\[singularwavefunctions\])) now depends on $\mathfrak{t}$ instead of $\mathfrak{s}$, where $\mathfrak{t}$ is the $SU(2)_T$ spin.
In this section, for the sake of simplicity, we shall denote the tensor product of the lowest energy state of the “singular” part $H_\odot$ of the bosonic Hamiltonian $H$, whose coordinate wave function is $$\psi_0^{(\alpha_\mathfrak{t} = 3/2)} \left( x \right) = C_0 \, x^{\frac{3}{2}} \, e^{- x^2 / 2} \,,$$ with the vacuum state of all the bosonic and fermionic oscillators $a^m$, $b^m$, $\alpha^r$ and $\beta^r$ simply as ${\left\lvert \psi_0^{(3/2)} \right\rangle}$: $$\begin{split} a_m \, {\left\lvert \psi_0^{(3/2)} \right\rangle} = b_m \, {\left\lvert \psi_0^{(3/2)} \right\rangle} &= 0 \\ \alpha_r \, {\left\lvert \psi_0^{(3/2)} \right\rangle} = \beta_r \, {\left\lvert \psi_0^{(3/2)} \right\rangle} &= 0 \end{split}$$ Note that for a general state involving bosonic and fermionic excitations, $$\alpha_\mathfrak{t} = 2 \, \mathfrak{t} + \frac{3}{2}$$ if $\mathfrak{t} \left( \mathfrak{t} + 1 \right)$ is the eigenvalue of the quadratic Casimir $\mathcal{T}^2$ of $SU(2)_T$ on that state. Minimal unitary supermultiplet of $\mathfrak{osp}(8^*|4)$ {#minsupermultipletN=2} --------------------------------------------------------- First we shall present the results for the case $N = 2$ (i.e. for $USp(4)$), which is relevant to the symmetry supergroup of the $S^4$ compactification of eleven-dimensional supergravity. The oscillator construction of the unitary supermultiplets of $OSp(8^*|4)$ has been studied in [@Gunaydin:1984wc; @Gunaydin:1999ci; @Fernando:2001ak]. It has 32 supersymmetry generators, 16 of which belong to the grade zero subspace $\mathfrak{C}^0$ and 8 each belong to the grade $\pm 1$ subspaces $\mathfrak{C}^{\pm}$. The state ${\left\lvert \psi_0^{(3/2)} \right\rangle}$ is the unique normalizable lowest energy state annihilated by all 9 bosonic operators as well as all 8 supersymmetry generators in the $\mathfrak{C}^-$ subspace. It is a singlet of the $SU(4\,|\,2)$ subalgebra.
By acting on it with grade $+1$ operators in the subspace $\mathfrak{C}^+$, one obtains an infinite set of states which forms a basis for the minimal unitary irreducible representation of $\mathfrak{osp}(8^*|4)$. This infinite set of states can be decomposed into a finite number of irreducible representations of the even subgroup $SO^*(8) \times USp(4)$, with each irrep of $SO^*(8) \simeq SO(6,2)$ corresponding to a massless conformal field in six dimensions. In Table \[Table:minrepsupermultipletN=2\], we present the supermultiplet that is obtained by starting from this unique lowest weight vector $${\left\lvert \psi_0^{(3/2)} \right\rangle}$$ and acting on it with the generators of the grade $+1$ subspace $\mathfrak{C}^+$. The resulting minimal unitary supermultiplet is the ultra-short doubleton supermultiplet of the $AdS_7$ supergroup $OSp(8^*|4)$, which does not have a Poincaré limit in seven dimensions and whose field theory lives on the boundary of $AdS_7$, on which $SO^*(8)$ acts as the conformal group [@Gunaydin:1984wc]. It describes a massless $(2,0)$ conformal supermultiplet whose interacting field theory is believed to be dual to M-theory on $AdS_7\times S^4$ [@Maldacena:1997re]. The corresponding minimal supermultiplet of the $4D$ superconformal algebra $SU(2,2|4)$ is the $\mathcal{N} = 4$ Yang-Mills supermultiplet in four dimensions [@Fernando:2009fq]. In the earlier literature, it was called the CPT self-conjugate doubleton supermultiplet. In the twistorial oscillator approach, the lowest weight vector ${\left\lvert \Omega \right\rangle}$ for this supermultiplet is the vacuum vector ${\left\lvert 0 \right\rangle}$ of all the oscillators in the $SU(4\,|\,2) \times U(1)$ basis [@Gunaydin:1984wc; @Gunaydin:1999ci].
Recalling that the positive energy unitary irreducible representations of $SO^*(8)$ are uniquely labeled by their lowest energy $SU(4)$ irreps, we note that each such $SU(4)$ irrep can in turn be uniquely labeled by an irrep of its subgroup $SU(2)_T \times SU(2)_A \times U(1)_J$ with respect to which it admits a compact three grading. Denoting the $SU(2)_T \times SU(2)_A$ spins as $\mathfrak{t}, \mathfrak{a}$ and the eigenvalue of $J$ as $\mathfrak{J}$, Table \[Table:minrepsupermultipletN=2\] also gives the decompositions of the lowest energy $SU(4)$ irreps of $SO^*(8)$ in the minimal supermultiplet. The $USp(4)$ transformation properties of these $SO^*(8)$ irreps follow from the results of section \[USp(2N)\]. In Table \[Table:minrepsupermultipletN=2\], $\left( \mathcal{Q}\right)^n {\left\lvert \Omega \right\rangle}$ denotes symbolically a lowest energy irrep of $SO^*(8)$ obtained by acting on the lowest weight state ${\left\lvert \Omega \right\rangle}$ with $n$ copies of the supersymmetry generators $\mathcal{Q} = \left\{ \mathfrak{Q}^r , \mathfrak{S}^r , \overline{\Pi}^{mr} \right\}$.
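As a quick cross-check of this bookkeeping (an illustrative sketch with helper names of our own choosing, not part of the text), the $SU(4)$ irrep with Dynkin labels $(n,0,0)$ is the rank-$n$ symmetric tensor of dimension $\binom{n+3}{3}$, which must equal the total dimension of its $SU(2)_T \times SU(2)_A$ decomposition $\left( \frac{n}{2} , 0 \right) \oplus \left( \frac{n-1}{2} , \frac{1}{2} \right) \oplus \dots \oplus \left( 0 , \frac{n}{2} \right)$:

```python
from math import comb

def su4_sym_dim(n):
    # Dimension of the SU(4) irrep with Dynkin labels (n,0,0):
    # the rank-n symmetric tensor, of dimension binomial(n+3, 3).
    return comb(n + 3, 3)

def su2xsu2_dim(n):
    # Total dimension of the spins ((n-j)/2, j/2) for j = 0..n,
    # where a spin-s irrep of SU(2) has dimension 2s+1.
    return sum((n - j + 1) * (j + 1) for j in range(n + 1))

for n in range(8):
    assert su4_sym_dim(n) == su2xsu2_dim(n)

print([su4_sym_dim(n) for n in range(4)])  # -> [1, 4, 10, 20]
```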
[|r||c|c||l||c|c|]{} States & $H$ & $\mathcal{H}$ & $( \mathfrak{t} , \mathfrak{a} )^{\mathfrak{J}}$ & $SU(4)=SU^*(4)$ Dynkin & $USp(4)$ Dynkin\ ${\left\lvert \psi_0^{(3/2)} \right\rangle}$ & 2 & 1 & $(0,0)^0$ & (0,0,0) & (0,1)\ $\mathcal{Q} {\left\lvert \psi_0^{(3/2)} \right\rangle}$ & $\frac{5}{2}$ & 2 & $\left( \frac{1}{2} , 0 \right)^{-\frac{1}{2}} \oplus \left( 0 , \frac{1}{2} \right)^{+\frac{1}{2}}$ & (1,0,0) & (1,0)\ $\left( \mathcal{Q} \right)^2 {\left\lvert \psi_0^{(3/2)} \right\rangle}$ & 3 & 3 & $\left( 1 , 0 \right)^{-1} \oplus \left( \frac{1}{2} , \frac{1}{2} \right)^{0} \oplus \left( 0 , 1 \right)^{+1}$ & (2,0,0) & (0,0)\ Minimal unitary supermultiplet of $OSp(8^*|2N)$ {#minsupermultipletN} ----------------------------------------------- The supergroup $OSp(8^*|2N)$ has $16N$ supersymmetry generators, $8N$ of which belong to the grade zero subspace $\mathfrak{C}^0$ and $4N$ each belong to the grade $\pm 1$ subspaces $\mathfrak{C}^{\pm}$ in the compact three grading. Once again, the state ${\left\lvert \psi_0^{(3/2)} \right\rangle}$ is the unique normalizable lowest energy state annihilated by all $6 + N(N+1)/2$ bosonic operators as well as all $4N$ supersymmetry generators in the $\mathfrak{C}^-$ subspace. It is a singlet of the subsuperalgebra $\mathfrak{su}(4\,|\,N)$. By acting on it with grade $+1$ operators in the subspace $\mathfrak{C}^+$, one obtains an infinite set of states which forms a basis for the minimal unitary irreducible representation of $\mathfrak{osp}(8^*|2N)$.
This infinite set of states can be decomposed into a finite number of irreducible representations of the even subgroup $SO^*(8) \times USp(2N)$, with each irrep of $SO^*(8) \simeq SO(6,2)$ corresponding to a massless conformal field in six dimensions. In Table \[Table:minrepsupermultipletN\], we present the minimal unitary supermultiplet of $\mathfrak{osp}(8^*|2N)$ obtained by starting from the lowest weight state $${\left\lvert \psi_0^{(3/2)} \right\rangle} \,.$$ $\left( \mathcal{Q}\right)^n {\left\lvert \psi_0^{(3/2)} \right\rangle}$ denotes, symbolically, the set of states obtained by acting on the lowest weight state ${\left\lvert \psi_0^{(3/2)} \right\rangle}$ $n$ times with the supersymmetry generators $\mathcal{Q} = \left\{ \mathfrak{Q}^r , \mathfrak{S}^r , \overline{\Pi}^{mr} \right\}$, which determine the $SO^*(8)$ irreps and their $USp(2N)$ transformation properties uniquely. [|r||c|c||l||c|c|]{} State & $H$ & $\mathcal{H}$ & $( \mathfrak{t} , \mathfrak{a} )^{\mathfrak{J}}$ & $SU(4)$ Dynkin & $USp(2N)$ Dynkin\ ${\left\lvert \Omega \right\rangle}$ & 2 & $2 - \frac{N}{2}$ & $(0,0)^0$ & (0,0,0) & $(\underbrace{0,\dots,0}_{(N-1)},1)$\ $\mathcal{Q} {\left\lvert \Omega \right\rangle}$ & $\frac{5}{2}$ & $3 - \frac{N}{2}$ & $\left( \frac{1}{2} , 0 \right)^{-\frac{1}{2}} \oplus \left( 0 , \frac{1}{2} \right)^{+\frac{1}{2}}$ & (1,0,0) & $(\underbrace{0,\dots,0}_{(N-2)},1,0)$\ $\left( \mathcal{Q} \right)^2 {\left\lvert \Omega \right\rangle}$ & 3 & $4 - \frac{N}{2}$ & $\left( 1 , 0 \right)^{-1} \oplus \left( \frac{1}{2} , \frac{1}{2} \right)^{0} \oplus \left( 0 , 1 \right)^{+1}$ & (2,0,0) & $(\underbrace{0,\dots,0}_{(N-3)},1,0,0)$\ $\vdots$ & & & & &\ $\left( \mathcal{Q} \right)^n {\left\lvert \Omega \right\rangle}$ & $2 + \frac{n}{2}$ & $2 + n - \frac{N}{2}$ & $\left( \frac{n}{2} , 0 \right)^{-\frac{n}{2}} \oplus \left( \frac{n-1}{2} , \frac{1}{2} \right)^{-\frac{n}{2}+1} \oplus \dots \oplus \left( \frac{1}{2} , \frac{n-1}{2} \right)^{\frac{n}{2}-1} \oplus \left( 0 , \frac{n}{2} \right)^{\frac{n}{2}}$ & $(n,0,0)$ & $(\underbrace{0,\dots,0}_{(N-n-1)},1,\underbrace{0,\dots,0}_{(n)})$\ $\vdots$ & & & & &\ $\left( \mathcal{Q} \right)^N {\left\lvert \Omega \right\rangle}$ & $2 + \frac{N}{2}$ & $2 + \frac{N}{2}$ & $\left( \frac{N}{2} , 0 \right)^{-\frac{N}{2}} \oplus \left( \frac{N-1}{2} , \frac{1}{2} \right)^{-\frac{N}{2}+1} \oplus \dots \oplus \left( \frac{1}{2} , \frac{N-1}{2} \right)^{\frac{N}{2}-1} \oplus \left( 0 , \frac{N}{2} \right)^{\frac{N}{2}}$ & $(N,0,0)$ & $(0,\dots,0)$\ Deformations of the Minimal Unitary Representation of $SO^*(8)$ {#SO*(8)deformations} =============================================================== Above we showed that the minrep of $SO^*(8)$ is simply the scalar doubleton representation that describes a conformal scalar field in six dimensions. The group $SO^*(8)$ admits infinitely many doubleton representations corresponding to $6D$ massless conformal fields of arbitrary spin [@Gunaydin:1984wc; @Gunaydin:1999ci; @Fernando:2001ak]. They can all be constructed by the oscillator method over the Fock space of two pairs of twistorial oscillators transforming in the spinor representation of $SO^*(8)$. One would like to know whether the doubleton representations corresponding to massless conformal fields of higher spin can all be obtained from the minimal unitary representation by a “deformation” in a manner similar to what happens in the case of the $4D$ conformal group $SU(2,2)$ [@Fernando:2009fq]. Remarkably, once again, we find that there exist infinitely many deformations of the minrep labeled by the spin ($t$) of an $SU(2)$ subgroup.
By allowing this spin $t$ to take on all possible values, we obtain all the doubleton irreps as deformations of the minrep of $SO^*(8)$. To realize the deformations of the minimal representation of $SO^*(8)$, we first introduce an arbitrary number $P$ of pairs of fermionic oscillators $\xi_x$ and $\chi_x$ and their hermitian conjugates $\xi^x = \left( \xi_x \right)^\dag$ and $\chi^x = \left( \chi_x \right)^\dag$ ($x = 1,2,\dots,P$) that satisfy the usual anti-commutation relations $${\left\{ \xi_x \, , \, \xi^y \right\}} = {\left\{ \chi_x \, , \, \chi^y \right\}} = \delta^y_x \qquad \qquad {\left\{ \xi_x \, , \, \xi_y \right\}} = {\left\{ \xi_x \, , \, \chi_y \right\}} = {\left\{ \chi_x \, , \, \chi_y \right\}} = 0 \,.$$ Note that the following bilinears of these fermionic oscillators $$G_+ = \xi^x \chi_x \qquad \qquad G_- = \chi^x \xi_x \qquad \qquad G_0 = \frac{1}{2} \left( N_\xi - N_\chi \right)$$ where $N_\xi = \xi^x \xi_x$ and $N_\chi = \chi^x \chi_x$ are the respective number operators, generate an $\mathfrak{su}(2)_G$ algebra: $${\left[ G_+ \, , \, G_- \right]} = 2 \, G_0 \qquad \qquad \qquad {\left[ G_0 \, , \, G_\pm \right]} = \pm G_\pm$$ We choose the Fock vacuum of these fermionic oscillators such that $$\xi_x {\left\lvert 0 \right\rangle} = \chi_x {\left\lvert 0 \right\rangle} = 0$$ for all $x = 1,2,\dots,P$. Clearly a state of the form $$\chi^{[1} \chi^2 \chi^3 \dots \chi^{P]} {\left\lvert 0 \right\rangle}$$ has a definite eigenvalue of $G_0$ and is annihilated by the operator $G_-$. Note that square bracketing of fermionic indices implies complete anti-symmetrization of weight one.
By acting on this state with the operator $G_+$, one can obtain $P$ other states, namely: $$\xi^{[1} \chi^2 \chi^3 \dots \chi^{P]} {\left\lvert 0 \right\rangle} \, \oplus \, \xi^{[1} \xi^2 \chi^3 \dots \chi^{P]} {\left\lvert 0 \right\rangle} \, \oplus \, \dots \dots \, \oplus \, \xi^{[1} \xi^2 \xi^3 \dots \xi^{P]} {\left\lvert 0 \right\rangle}$$ This set of $P+1$ states transforms irreducibly under $\mathfrak{su}(2)_G$ in the spin $t = \frac{P}{2}$ representation. Recall that the “undeformed” minimal unitary realization of $\mathfrak{so}^*(8)$ has a 5-graded decomposition with respect to the subalgebra $\mathfrak{g}^{(0)} = \mathfrak{su}(2)_A \oplus \mathfrak{su}(1,1)_N \oplus \mathfrak{su}(2)_S \oplus \mathfrak{so}(1,1)$, as given in equation (\[so\*(8)5-grading\]). Now to deform the minimal unitary realization of $\mathfrak{so}^*(8)$, we extend the subalgebra $\mathfrak{su}(2)_S$ to the diagonal subalgebra $\mathfrak{su}(2)_{\buildrel _\circ \over {T}}$ of $\mathfrak{su}(2)_S$ and $\mathfrak{su}(2)_G$. 
In other words, the generators of $\mathfrak{su}(2)_S$ receive contributions from the $\xi$- and $\chi$-type fermionic oscillators as follows: $$\begin{split} \buildrel _\circ \over {T}_+ &= S_+ + G_+ = a^m b_m + \xi^x \chi_x \\ \buildrel _\circ \over {T}_- &= S_- + G_- = b^m a_m + \chi^x \xi_x \\ \buildrel _\circ \over {T}_0 &= S_0 + G_0 = \frac{1}{2} \left( N_a - N_b + N_\xi - N_\chi \right) \end{split}$$ The quadratic Casimir of this subalgebra $\mathfrak{su}(2)_{\buildrel _\circ \over {T}}$ is given by $$\mathcal{C}_2 \left[ \mathfrak{su}(2)_{\buildrel _\circ \over {T}} \right] = \,\, \buildrel _\circ \over {\mathcal{T}}^2 = \,\, \buildrel _\circ \over {T}_0 \buildrel _\circ \over {T}_0 + \frac{1}{2} \left( \, \buildrel _\circ \over {T}_+ \buildrel _\circ \over {T}_- + \buildrel _\circ \over {T}_- \buildrel _\circ \over {T}_+ \right) \,.$$ The 5-graded decomposition of the deformed minimal unitary realization, which we denote as $\mathfrak{so}^*(8)_D$, is now with respect to the subalgebra $\mathfrak{su}(2)_A \oplus \mathfrak{su}(1,1)_N \oplus \mathfrak{su}(2)_{\buildrel _\circ \over {T}} \oplus \mathfrak{so}(1,1)$, where, once again, the $\mathfrak{so}(1,1)$ generator $\Delta$ defines the 5-grading: $$\mathfrak{so}^*(8)_D = \mathfrak{g}^{(-2)}_D \oplus \mathfrak{g}^{(-1)}_D \oplus \left[ \mathfrak{su}(2)_A \oplus \mathfrak{su}(1,1)_N \oplus \mathfrak{su}(2)_{\buildrel _\circ \over {T}} \oplus \Delta \right] \oplus \mathfrak{g}^{(+1)}_D \oplus \mathfrak{g}^{(+2)}_D$$ The rest of the grade zero subspace, $\mathfrak{su}(2)_A \oplus \mathfrak{su}(1,1)_N \oplus \Delta$, remains unchanged under this deformation (see equation (\[SU(2)AN\_generators\])). However, it should be noted that the quadratic Casimir of $\mathfrak{su}(2)_{\buildrel _\circ \over {T}}$ is no longer equal to those of $\mathfrak{su}(2)_A$ and $\mathfrak{su}(1,1)_N$. 
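The fermionic realization of $\mathfrak{su}(2)_G$ and the claim that the antisymmetrized states built on $\chi^{[1} \chi^2 \dots \chi^{P]} {\left\lvert 0 \right\rangle}$ carry spin $t = P/2$ can be verified numerically in a small Fock space. The following sketch (illustrative only, not part of the construction) uses a Jordan-Wigner matrix representation of the $2P$ fermionic modes for $P = 2$:

```python
import numpy as np

P = 2          # number of (xi, chi) pairs; the multiplet carries spin t = P/2
n = 2 * P      # total number of fermionic modes

# Jordan-Wigner building blocks: parity string and single-mode annihilation
sz = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])

def mode(j):
    """Annihilation operator for mode j among n fermionic modes."""
    ops = [sz] * j + [sm] + [np.eye(2)] * (n - j - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

xi = [mode(x) for x in range(P)]         # xi_x oscillators
chi = [mode(P + x) for x in range(P)]    # chi_x oscillators
dag = lambda m: m.conj().T

# su(2)_G bilinears: G_+ = xi^x chi_x, G_- = chi^x xi_x, G_0 = (N_xi - N_chi)/2
Gp = sum(dag(xi[x]) @ chi[x] for x in range(P))
Gm = sum(dag(chi[x]) @ xi[x] for x in range(P))
G0 = 0.5 * sum(dag(xi[x]) @ xi[x] - dag(chi[x]) @ chi[x] for x in range(P))

assert np.allclose(Gp @ Gm - Gm @ Gp, 2 * G0)   # [G_+, G_-] = 2 G_0
assert np.allclose(G0 @ Gp - Gp @ G0, Gp)       # [G_0, G_+] = +G_+
assert np.allclose(G0 @ Gm - Gm @ G0, -Gm)      # [G_0, G_-] = -G_-

# Lowest weight state chi^1 ... chi^P |0>: G_- annihilates it and the
# quadratic Casimir takes the value t(t+1) with t = P/2.
state = np.zeros(2**n)
state[0] = 1.0
for x in range(P):
    state = dag(chi[x]) @ state

casimir = G0 @ G0 + 0.5 * (Gp @ Gm + Gm @ Gp)
t = P / 2
assert np.allclose(Gm @ state, 0)
assert np.allclose(casimir @ state, t * (t + 1) * state)
print("su(2)_G checks passed for P =", P)
```

Acting with $G_+$ on this lowest weight state then sweeps out the remaining $P$ states of the spin $t = P/2$ multiplet, in agreement with the counting above.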
Grade $-2$ and $-1$ generators of $\mathfrak{so}^*(8)_D$ are also the same as those of the undeformed $\mathfrak{so}^*(8)$: $$\buildrel _\circ \over {K}_- = K_- = \frac{1}{2} x^2$$ $$\begin{aligned} \buildrel _\circ \over {U}_m &= U_m = x \, a_m \\ \buildrel _\circ \over {V}_m &= V_m = x \, b_m \end{aligned} \qquad \qquad \qquad \qquad \begin{aligned} \buildrel _\circ \over {U}^m &= U^m = x \, a^m \\ \buildrel _\circ \over {V}^m &= V^m = x \, b^m \end{aligned}$$ However, since $\mathfrak{su}(2)_S$ has now been extended to $\mathfrak{su}(2)_{\buildrel _\circ \over {T}}$, the grade $+2$ generator, which previously contained the quadratic Casimir $\mathcal{S}^2$ of $\mathfrak{su}(2)_S$, now depends on $\buildrel _\circ \over {\mathcal{T}}^2$: $$\buildrel _\circ \over {K}_+ = \frac{1}{2} p^2 + \frac{1}{4 \, x^2} \left( 8 \, \buildrel _\circ \over {\mathcal{T}}^2 + \frac{3}{2} \right) \,.$$ The generators in the grade $+1$ subspace are also modified since they are obtained from the commutators of the form ${\left[ \mathfrak{g}_D^{(-1)} \, , \, \mathfrak{g}_D^{(+2)} \right]}$: $$\begin{split} \buildrel _\circ \over {\widetilde{U}}_m = i {\left[ \,\buildrel _\circ \over {U}_m \, , \, \buildrel _\circ \over {K}_+ \right]} & \qquad \qquad \qquad \qquad \buildrel _\circ \over {\widetilde{U}}^m = \left( \buildrel _\circ \over {\widetilde{U}}_m \right)^\dag = i {\left[ \,\buildrel _\circ \over {U}^m \, , \, \buildrel _\circ \over {K}_+ \right]} \\ \buildrel _\circ \over {\widetilde{V}}_m = i {\left[ \,\buildrel _\circ \over {V}_m \, , \, \buildrel _\circ \over {K}_+ \right]} & \qquad \qquad \qquad \qquad \buildrel _\circ \over {\widetilde{V}}^m = \left( \buildrel _\circ \over {\widetilde{V}}_m \right)^\dag = i {\left[ \,\buildrel _\circ \over {V}^m \, , \, \buildrel _\circ \over {K}_+ \right]} \end{split}$$ The explicit forms of these grade $+1$ generators are as follows: $$\begin{split} \buildrel _\circ \over {\widetilde{U}}_m &= - p \, a_m + \frac{2i}{x} \left[ \left( \buildrel
_\circ \over {T}_0 + \frac{3}{4} \right) a_m + \buildrel _\circ \over {T}_- b_m \right] \\ \buildrel _\circ \over {\widetilde{U}}^m &= - p \, a^m - \frac{2i}{x} \left[ \left( \buildrel _\circ \over {T}_0 - \frac{3}{4} \right) a^m + \buildrel _\circ \over {T}_+ b^m \right] \\ \buildrel _\circ \over {\widetilde{V}}_m &= - p \, b_m - \frac{2i}{x} \left[ \left( \buildrel _\circ \over {T}_0 - \frac{3}{4} \right) b_m - \buildrel _\circ \over {T}_+ a_m \right] \\ \buildrel _\circ \over {\widetilde{V}}^m &= - p \, b^m + \frac{2i}{x} \left[ \left( \buildrel _\circ \over {T}_0 + \frac{3}{4} \right) b^m - \buildrel _\circ \over {T}_- a^m \right] \end{split}$$ The deformed generators of $\mathfrak{so}^*(8)_D$ with “$\circ$” over them satisfy the same commutation relations as the corresponding “undeformed” generators of $\mathfrak{so}^*(8)$. Therefore, the 5-grading of $\mathfrak{so}^*(8)_D$, defined by $\Delta$, takes the form: $$\begin{split} \mathfrak{so}^*(8)_D &= ~ \mathbf{1} ~~ \oplus ~~ \left( \mathbf{4} , \mathbf{2} \right) ~ \oplus \left[ \mathfrak{su}(2)_A \oplus \mathfrak{su}(1,1)_N \oplus \mathfrak{su}(2)_{\buildrel _\circ \over {T}} \oplus \mathfrak{so}(1,1)_{\Delta} \right] \oplus ~ \left( \mathbf{4} , \mathbf{2} \right) ~ \oplus ~ \mathbf{1} \\ &= \buildrel _\circ \over {K}_- \oplus \left[ \,\buildrel _\circ \over {U}_m \,,\, \buildrel _\circ \over {U}^m \,,\, \buildrel _\circ \over {V}_m \,,\, \buildrel _\circ \over {V}^m \, \right] \oplus \left[ ~ A_{\pm,0} ~ \oplus ~ N_{\pm,0} ~ \oplus ~ \buildrel _\circ \over {T}_{\pm,0} ~ \oplus ~ \Delta ~ \right] \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \oplus \left[ \,\buildrel _\circ \over {\widetilde{U}}_m \,,\, \buildrel _\circ \over {\widetilde{U}}^m \,,\, \buildrel _\circ \over {\widetilde{V}}_m \,,\, \buildrel _\circ \over {\widetilde{V}}^m \, \right] \oplus \buildrel _\circ \over {K}_+ \, \end{split}$$ The quadratic Casimir of $\mathfrak{so}^*(8)_D$ is given by $$\begin{split} 
\mathcal{C}_2 \left[ \mathfrak{so}^*(8)_D \right] &= \mathcal{C}_2 \left[ \mathfrak{su}(2)_{\buildrel _\circ \over {T}} \right] + \mathcal{C}_2 \left[ \mathfrak{su}(2)_A \right] + \mathcal{C}_2 \left[ \mathfrak{su}(1,1)_N \right] + \mathcal{C}_2 \left[ \mathfrak{su}(1,1)_{\buildrel _\circ \over {K}} \right] \\ & \quad - \frac{i}{4} \, \mathcal{F} \left( \buildrel _\circ \over {U} \,,\, \buildrel _\circ \over {V} \right) \end{split}$$ where $$\begin{split} \mathcal{F} \left( \buildrel _\circ \over {U} \,,\, \buildrel _\circ \over {V} \right) &= \left( \buildrel _\circ \over {U}_m \buildrel _\circ \over {\widetilde{U}}^m + \buildrel _\circ \over {V}_m \buildrel _\circ \over {\widetilde{V}}^m + \buildrel _\circ \over {\widetilde{U}}^m \buildrel _\circ \over {U}_m + \buildrel _\circ \over {\widetilde{V}}^m \buildrel _\circ \over {V}_m \right) \\ & \qquad - \left( \buildrel _\circ \over {U}^m \buildrel _\circ \over {\widetilde{U}}_m + \buildrel _\circ \over {V}^m \buildrel _\circ \over {\widetilde{V}}_m + \buildrel _\circ \over {\widetilde{U}}_m \buildrel _\circ \over {U}^m + \buildrel _\circ \over {\widetilde{V}}_m \buildrel _\circ \over {V}^m \right) \end{split}$$ and reduces to $$\mathcal{C}_2 \left[ \mathfrak{so}^*(8)_D \right] = 2 \, \mathcal{G}^2 - 4$$ where $\mathcal{G}^2$ is the quadratic Casimir of $\mathfrak{su}(2)_G$. The 3-grading of $SO^*(8)_D$ with respect to the subgroup $SU(4) \times U(1)$ {#3GrSO*(8)deformed} ---------------------------------------------------------------- The Lie algebra of $\mathfrak{so}^*(8)_D$ can be given a compact 3-grading $$\mathfrak{so}^*(8)_D = \mathfrak{C}^-_D \oplus \mathfrak{C}^0_D \oplus \mathfrak{C}^+_D$$ with respect to its maximal compact subalgebra $\mathfrak{su}(4) \oplus \mathfrak{u}(1)$, determined by the $\mathfrak{u}(1)$ generator $$\buildrel _\circ \over {H} = N_0 + \frac{1}{2} \left( \buildrel _\circ \over {K}_+ + \buildrel _\circ \over {K}_- \right) \,. 
\label{deformedH}$$ The generators that belong to the grade 0,$\pm1$ subspaces are as follows: $$\begin{split} \mathfrak{C}^-_D &= \left( \buildrel _\circ \over {U}_m - \, i \buildrel _\circ \over {\widetilde{U}}_m \right) \oplus \left( \buildrel _\circ \over {V}_m - \, i \buildrel _\circ \over {\widetilde{V}}_m \right) \oplus N_- \oplus \left[ \Delta + i \left( \buildrel _\circ \over {K}_+ - \buildrel _\circ \over {K}_- \right) \right] \\ \mathfrak{C}^0_D &= \left[ \buildrel _\circ \over {T}_{\pm,0} \oplus A_{\pm,0} \oplus \left( N_0 - \frac{1}{2} \left( \buildrel _\circ \over {K}_+ + \buildrel _\circ \over {K}_- \right) \right) \right. \\ & \qquad \left. \oplus \left( \buildrel _\circ \over {U}_m + \, i \buildrel _\circ \over {\widetilde{U}}_m \right) \oplus \left( \buildrel _\circ \over {V}_m + \, i \buildrel _\circ \over {\widetilde{V}}_m \right) \oplus \left( \buildrel _\circ \over {U}^m - \, i \buildrel _\circ \over {\widetilde{U}}^m \right) \oplus \left( \buildrel _\circ \over {V}^m - \, i \buildrel _\circ \over {\widetilde{V}}^m \right) \right] \oplus \buildrel _\circ \over {H} \\ \mathfrak{C}^+_D &= \left( \buildrel _\circ \over {U}^m + \, i \buildrel _\circ \over {\widetilde{U}}^m \right) \oplus \left( \buildrel _\circ \over {V}^m + \, i \buildrel _\circ \over {\widetilde{V}}^m \right) \oplus N_+ \oplus \left[ \Delta - i \left( \buildrel _\circ \over {K}_+ - \buildrel _\circ \over {K}_- \right) \right] \end{split} \label{3Gr-SO*8deformed}$$ Deformed minreps of $SO^*(8)$ as massless $6D$ conformal fields {#LWVSO*(8)deformed} --------------------------------------------------------------- Consider the vacuum state ${\left\lvert 0 \right\rangle}$ that is annihilated by the bosonic oscillators $a_m$, $b_m$ ($m = 1,2$) and the fermionic oscillators $\xi_x$, $\chi_x$ ($x = 1,2,\dots,P$): $$a_m {\left\lvert 0 \right\rangle} = b_m {\left\lvert 0 \right\rangle} = \xi_x {\left\lvert 0 \right\rangle} = \chi_x {\left\lvert 0 \right\rangle} = 0$$ The tensor products 
of the states of the form $\left( a^m \right)^{n_{a,m}} {\left\lvert 0 \right\rangle}$, $\left( b^m \right)^{n_{b,m}} {\left\lvert 0 \right\rangle}$, $\xi^x {\left\lvert 0 \right\rangle}$ and $\chi^x {\left\lvert 0 \right\rangle}$, where $n_{a,m}$ and $n_{b,m}$ are non-negative integers, form a “particle basis” of states in this Fock space. As the “particle basis” of the Hilbert space of the deformed minimal unitary representation of $SO^*(8)$, we take the tensor product of the above states with the state space of the singular (isotonic) oscillator: $$\left( a^1 \right)^{n_{a,1}} {\left\lvert 0 \right\rangle} \otimes \left( a^2 \right)^{n_{a,2}} {\left\lvert 0 \right\rangle} \otimes \left( b^1 \right)^{n_{b,1}} {\left\lvert 0 \right\rangle} \otimes \left( b^2 \right)^{n_{b,2}} {\left\lvert 0 \right\rangle} \otimes \xi^{[x_1} \dots \xi^{x_k} \chi^{x_{k+1}} \dots \chi^{x_P]} {\left\lvert 0 \right\rangle} \otimes {\left\lvert \psi_n^{(\alpha_t)} \right\rangle}$$ where the square brackets imply full antisymmetrization. We denote these states as $$\left( a^1 \right)^{n_{a,1}} \left( a^2 \right)^{n_{a,2}} \left( b^1 \right)^{n_{b,1}} \left( b^2 \right)^{n_{b,2}} \xi^{[x_1} \dots \xi^{x_k} \chi^{x_{k+1}} \dots \chi^{x_P]} {\left\lvert \psi_n^{(\alpha_t)} \right\rangle}$$ or simply as $${\left\lvert \psi_n^{(\alpha_t)} \,;\, n_{a,1} , n_{a,2} , n_{b,1} , n_{b,2} \,;\, \frac{P}{2} , k - \frac{P}{2} \right\rangle}$$ where $k = 0,1,2,\dots,P$. Note that $\alpha_t$ now depends on the spin $t$ of $SU(2)_{\buildrel _\circ \over {T}}$, i.e. on the eigenvalue $t(t+1)$ of its quadratic Casimir.
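As a small bookkeeping aid (this code is our own illustration and not from the paper), one can check that the fermionic label $k - \frac{P}{2}$, $k = 0, 1, \dots, P$, runs over exactly the weight string of a spin-$\frac{P}{2}$ multiplet of $SU(2)_{\buildrel _\circ \over {T}}$:

```python
from fractions import Fraction

def t0_weights(P):
    """T_0 eigenvalues k - P/2 of the (P+1) antisymmetrized fermionic
    states built from k xi-type and (P - k) chi-type excitations."""
    return [Fraction(k) - Fraction(P, 2) for k in range(P + 1)]

for P in range(7):
    t = Fraction(P, 2)                          # SU(2) spin of the multiplet
    expected = [-t + i for i in range(P + 1)]   # weight string -t, -t+1, ..., +t
    assert t0_weights(P) == expected
print("labels k - P/2 form the weight string of a spin-P/2 multiplet")
```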
Note that the $(P+1)$ states $${\left\lvert \psi_n^{(\alpha_t)} \,;\, 0 , 0 , 0 , 0 \,;\, \frac{P}{2} , k - \frac{P}{2} \right\rangle} \qquad k=0,1,\cdots ,P$$ are annihilated by all grade $-1$ operators in $\mathfrak{C}^-_D$ and transform in the spin $t = \frac{P}{2}$ representation of $\mathfrak{su}(2)_{\buildrel _\circ \over {T}}$, provided that $\alpha_t$ satisfies $$\alpha_t = 2 \, t + \frac{3}{2} \,.$$ These states have the definite eigenvalue $(t + 2)$ with respect to $\buildrel _\circ \over {H}$ (given in equation (\[deformedH\])): $$\buildrel _\circ \over {H} {\left\lvert \psi_n^{(2t+3/2)} \,;\, 0 , 0 , 0 , 0 \,;\, \frac{P}{2} , k - \frac{P}{2} \right\rangle} = \left( t + 2 \right) {\left\lvert \psi_n^{(2t+3/2)} \,;\, 0 , 0 , 0 , 0 \,;\, \frac{P}{2} , k - \frac{P}{2} \right\rangle}$$ By acting on these $(P+1)$ states with the coset generators $(C^{1m},C^{2m})$ of $$SU(4) \,/\, \left[ SU(2)_{\buildrel _\circ \over {T}} \times SU(2)_A \times U(1) \right]$$ one obtains a set of states, which we denote collectively as ${\left\lvert \Omega^{(2t+3/2)} \right\rangle}$, transforming in the irrep of $SU(4)$ with Dynkin labels $(2t,0,0)$ and carrying the $\buildrel _\circ \over {H}$ eigenvalue $(t+2)$. The states ${\left\lvert \Omega^{(2t+3/2)} \right\rangle}$ are annihilated by all the operators in $\mathfrak{C}^-_D$. Therefore, the deformed minimal unitary representation of $SO^*(8)$ is a unitary lowest weight representation. All the other states of the “particle basis” of the deformed minrep can be obtained from the set of states ${\left\lvert \Omega^{(2t+3/2)} \right\rangle}$ by acting on it repeatedly with grade $+1$ operators in the $\mathfrak{C}^+_D$ subspace of $SO^*(8)_D$. In Table \[Table:deformedSO\*(8)minrep\], we present the deformed minrep of $SO^*(8)$.
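The $SU(4)$ Dynkin labels quoted above translate into dimensions through the standard Weyl dimension formula for $A_3$. The following sketch (our own illustrative code; the function name is not from the paper) evaluates it for the lowest K-types $(2t,0,0)$, which are the symmetric tensor representations of $SU(4)$:

```python
def dim_su4(a1, a2, a3):
    """Weyl dimension formula for the su(4) irrep with Dynkin labels (a1, a2, a3)."""
    return ((a1 + 1) * (a2 + 1) * (a3 + 1)
            * (a1 + a2 + 2) * (a2 + a3 + 2)
            * (a1 + a2 + a3 + 3)) // 12

# lowest K-types (2t, 0, 0) for t = 0, 1/2, 1, 3/2: symmetric tensors of SU(4)
assert [dim_su4(k, 0, 0) for k in range(4)] == [1, 4, 10, 20]
# familiar checks: the fundamental 4 and the antisymmetric 6
assert dim_su4(1, 0, 0) == 4 and dim_su4(0, 1, 0) == 6
print([dim_su4(k, 0, 0) for k in range(4)])
```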
The notation $\left( \mathfrak{C}^+_D \right)^n {\left\lvert \Omega^{(\alpha_t)} \right\rangle}$ represents all the states obtained by acting on the lowest weight state ${\left\lvert \Omega^{(\alpha_t)} \right\rangle}$ with $n$ grade $+1$ generators $( \buildrel _\circ \over {Y}^m \,,\, \buildrel _\circ \over {Z}^m \,,\, \buildrel _\circ \over {N}_+ \,,\, \buildrel _\circ \over {B}_+ )$. The deformed minrep with parameter $t$ corresponds to a massless conformal field in six dimensions whose transformation under the $6D$ Lorentz group $SU^*(4)$ coincides with the transformation of the states ${\left\lvert \Omega^{(\alpha_t)} \right\rangle}$ under the $SU(4)$ subgroup of $SO^*(8)_D$, and whose conformal dimension is equal to $-(t+2)$.

| State | $E$ | $SU(4)$ Dynkin labels |
|:------------------------------------------------------------------------------------|:--------:|:------------:|
| ${\left\lvert \Omega^{(\alpha_t)} \right\rangle}$ | $t+2$ | $(2t,0,0)$ |
| $\mathfrak{C}^+_D {\left\lvert \Omega^{(\alpha_t)} \right\rangle}$ | $t+3$ | $(2t,1,0)$ |
| $\left( \mathfrak{C}^+_D \right)^2 {\left\lvert \Omega^{(\alpha_t)} \right\rangle}$ | $t+4$ | $(2t,2,0)$ |
| $\vdots$ | $\vdots$ | $\vdots$ |
| $\left( \mathfrak{C}^+_D \right)^n {\left\lvert \Omega^{(\alpha_t)} \right\rangle}$ | $t+n+2$ | $(2t,n,0)$ |

  : The deformed minrep of $SO^*(8)$. \[Table:deformedSO\*(8)minrep\]

Conclusions
===========

In this paper we first studied the minimal unitary representation of $SO^*(8) \simeq SO(6,2)$, which is the seven dimensional $AdS$ group or, equivalently, the six dimensional conformal group, obtained by quantizing its quasiconformal realization. The resulting minrep coincides with the scalar doubleton representation of $SO^*(8)$, whose Poincaré limit in $AdS_7$ is singular. We then introduced supersymmetry and extended these results to construct the minimal unitary supermultiplet of $OSp(8^*|2N)$, and, in particular, the minimal unitary supermultiplet of $OSp(8^*|4)$.
The minimal unitary supermultiplet of $OSp(8^*|4)$ is simply the massless supermultiplet of the $(2,0)$ conformal field theory in six dimensions that is believed to be dual to M-theory on $AdS_7 \times S^4$. Finally, we presented a method to introduce a family of deformations of the minrep of $SO^*(8)$ with respect to one of its $SU(2)$ subgroups. For each non-negative integer or half-integer value of the deformation parameter, corresponding to the spin $t$ of this $SU(2)$, one obtains a unique positive energy unitary irreducible representation of $SO^*(8)$, which describes a massless conformal field of higher spin in six dimensions and coincides with a member of the infinite family of doubletons studied in [@Gunaydin:1984wc; @Gunaydin:1999ci; @Fernando:2001ak]. One can also obtain the “deformed” minimal unitary supermultiplets of $OSp(8^*|2N)$ by first deforming the minrep of $SO^*(8)$ and then extending it to the superalgebras $OSp(8^*|2N)$. These deformed minimal unitary supermultiplets correspond to six dimensional massless superconformal multiplets involving fields of higher spin than those of the undeformed minimal unitary supermultiplet; they will be presented in a separate study [@workinprogress_sfmg].

[**Acknowledgements:**]{} We would like to thank Oleksandr Pavlyk for many stimulating discussions and his generous help with Mathematica. S.F. would like to thank the Center for Fundamental Theory of the Institute for Gravitation and the Cosmos at Pennsylvania State University, where part of this work was done, for its warm hospitality.\
This work was supported in part by the National Science Foundation under grants numbered PHY-0555605 and PHY-0855356. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Appendix {#appendix .unnumbered} ======== The decomposition of $SO(6,2)$ with respect to the subgroup $SO(4) \times SO(2,2)$ {#SO(4)xSO(2,2)} =================================================================== The generators $\widetilde{M}_{ij} = i \, M_{ij}$ ($i,j,\dots = 1,2,3,4$) form the $\mathfrak{so}(4)$ subalgebra of $\mathfrak{so}(6,2) \supset \mathfrak{so}(4) \oplus \mathfrak{so}(2,2)$. This $\mathfrak{so}(4)$ can be decomposed as a direct sum $$\mathfrak{so}(4) = \mathfrak{su}(2)_L \oplus \mathfrak{su}(2)_R$$ where the generators of the two $\mathfrak{su}(2)$ subalgebras are given by: $$\begin{aligned} L_1 &= \frac{1}{2} \left( \widetilde{M}_{23} - \widetilde{M}_{14} \right) \\ R_1 &= \frac{1}{2} \left( \widetilde{M}_{23} + \widetilde{M}_{14} \right) \end{aligned} \qquad \begin{aligned} L_2 &= \frac{1}{2} \left( \widetilde{M}_{13} + \widetilde{M}_{24} \right) \\ R_2 &= \frac{1}{2} \left( \widetilde{M}_{13} - \widetilde{M}_{24} \right) \end{aligned} \qquad \begin{aligned} L_3 &= \frac{1}{2} \left( \widetilde{M}_{12} - \widetilde{M}_{34} \right) \\ R_3 &= \frac{1}{2} \left( \widetilde{M}_{12} + \widetilde{M}_{34} \right) \end{aligned}$$ They satisfy the commutation relations $$\begin{split} {\left[ L_+ \, , \, L_- \right]} = 2 \, L_3 & \qquad \qquad \qquad {\left[ L_3 \, , \, L_\pm \right]} = \pm \, L_\pm \\ {\left[ R_+ \, , \, R_- \right]} = 2 \, R_3 & \qquad \qquad \qquad {\left[ R_3 \, , \, R_\pm \right]} = \pm \, R_\pm \end{split}$$ where $$\begin{split} L_{\pm} &= L_1 \pm i \, L_2 \\ R_{\pm} &= R_1 \pm i \, R_2 \,. 
\end{split}$$ The quadratic Casimir operators of the two $\mathfrak{su}(2)$’s are equal: $$L^2 = L_1^2 + L_2^2 + L_3^2 = R^2 = R_1^2 + R_2^2 + R_3^2 =\frac{1}{8} \mathcal{I}_4 + 1$$ The centralizer of $SU(2)_L \times SU(2)_R$ within $SO(6,2)$ is $SO(2,2)$, which also decomposes as $$SO(2,2) = SU(1,1)_{\mathfrak{L}} \times SU(1,1)_{\mathfrak{R}}$$ where $SU(1,1)_{\mathfrak{L}}$ is generated by $K_+ , K_-$ and $\Delta$ and $SU(1,1)_{\mathfrak{R}}$ is generated by $J_{\pm}$ and $J_0$. In their compact bases the generators of $SU(1,1)_{\mathfrak{L}}$ and $SU(1,1)_{\mathfrak{R}}$ take the form $$\begin{aligned} \mathfrak{L}_+ &= - \frac{1}{2} \left[ \Delta - i \left( K_+ - K_- \right) \right] \\ \mathfrak{L}_- &= - \frac{1}{2} \left[ \Delta + i \left( K_+ - K_- \right) \right] \\ \mathfrak{L}_3 &= \frac{1}{2} \left( K_+ + K_- \right) \end{aligned} \qquad \qquad \begin{aligned} \mathfrak{R}_+ &= - \frac{1}{2} \left[ J_0 + \frac{i}{2} \left( J_+ - J_- \right) \right] \\ \mathfrak{R}_- &= - \frac{1}{2} \left[ J_0 - \frac{i}{2} \left( J_+ - J_- \right) \right] \\ \mathfrak{R}_3 &= - \frac{1}{4} \left( J_+ + J_- \right) \end{aligned}$$ and satisfy the commutation relations: $$\begin{split} {\left[ \mathfrak{L}_+ \, , \, \mathfrak{L}_- \right]} = - 2 \, \mathfrak{L}_3 & \qquad \qquad \qquad {\left[ \mathfrak{L}_3 \, , \, \mathfrak{L}_\pm \right]} = \pm \, \mathfrak{L}_\pm \\ {\left[ \mathfrak{R}_+ \, , \, \mathfrak{R}_- \right]} = - 2 \, \mathfrak{R}_3 & \qquad \qquad \qquad {\left[ \mathfrak{R}_3 \, , \, \mathfrak{R}_\pm \right]} = \pm \, \mathfrak{R}_\pm \end{split}$$ Their quadratic Casimir operators $$\mathfrak{L}^2 = {\mathfrak{L}_3}^2 - \frac{1}{2} \left( \mathfrak{L}_+ \mathfrak{L}_- + \mathfrak{L}_- \mathfrak{L}_+ \right) \qquad \qquad \qquad \mathfrak{R}^2 = {\mathfrak{R}_3}^2 - \frac{1}{2} \left( \mathfrak{R}_+ \mathfrak{R}_- + \mathfrak{R}_- \mathfrak{R}_+ \right)$$ coincide: $$\mathfrak{L}^2 = \mathfrak{R}^2 = \frac{1}{8} \mathcal{I}_4 + 1$$ Thus the quadratic 
Casimirs of $SU(2)_L$, $SU(2)_R$, $SU(1,1)_{\mathfrak{L}}$ and $SU(1,1)_{\mathfrak{R}}$ are all equal within the minimal unitary realization of $SO(6,2)$. The compact 3-grading of $SO^*(8)$ with respect to the subgroup $SU(4) \times U(1)$ {#C3GrSO*(8)} =================================================================================== The Lie algebra $\mathfrak{so}^*(8)$ can be given a compact 3-grading $$\mathfrak{so}^*(8) = \mathfrak{C}^- \oplus \mathfrak{C}^0 \oplus \mathfrak{C}^+$$ with respect to its maximal compact subalgebra $\mathfrak{su}(4) \oplus \mathfrak{u}(1)$, determined by the $\mathfrak{u}(1)$ generator $$H = N_0 + \frac{1}{2} \left( K_+ + K_- \right) \,. \label{BosonicHamiltonian2}$$ The operators that belong to the grade 0,$\pm1$ subspaces are as follows: $$\begin{split} \mathfrak{C}^- &= \left( U_m - i \, \widetilde{U}_m \right) \oplus \left( V_m - i \, \widetilde{V}_m \right) \oplus N_- \oplus \left[ \Delta + i \left( K_+ - K_- \right) \right] \\ \mathfrak{C}^0 &= \left[ S_{\pm,0} \oplus A_{\pm,0} \oplus \left( N_0 - \frac{1}{2} \left( K_+ + K_- \right) \right) \right. \\ & \qquad \left. \oplus \left( U_m + i \, \widetilde{U}_m \right) \oplus \left( V_m + i \, \widetilde{V}_m \right) \oplus \left( U^m - i \, \widetilde{U}^m \right) \oplus \left( V^m - i \, \widetilde{V}^m \right) \right] \oplus H \\ \mathfrak{C}^+ &= \left( U^m + i \, \widetilde{U}^m \right) \oplus \left( V^m + i \, \widetilde{V}^m \right) \oplus N_+ \oplus \left[ \Delta - i \left( K_+ - K_- \right) \right] \end{split} \label{SO*8Gr3}$$ It is convenient to express the generators of the $\mathfrak{su}(4)$ subalgebra in $\mathfrak{C}^0$ subspace in its $\mathfrak{su}(4) \supset \mathfrak{su}(2)_S \oplus \mathfrak{su}(2)_A \oplus \mathfrak{u}(1)_J$ decomposition, where $$J = N_0 - \frac{1}{2} \left( K_+ + K_- \right)$$ determines a 3-grading of $\mathfrak{su}(4)$. 
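The $\mathfrak{so}(4) = \mathfrak{su}(2)_L \oplus \mathfrak{su}(2)_R$ decomposition of the previous appendix can be verified numerically in the defining $4 \times 4$ representation. The sketch below is our own illustration and assumes the convention $M_{ij} = E_{ij} - E_{ji}$ in terms of matrix units $E_{ij}$, so that $\widetilde{M}_{ij} = i \, M_{ij}$ is hermitian; in this representation the common Casimir value comes out as $3/4$, whereas in the minimal unitary realization it is $\frac{1}{8} \mathcal{I}_4 + 1$:

```python
import numpy as np

def Eu(a, b, n=4):
    """Matrix unit E_ab (1-indexed) in n dimensions."""
    m = np.zeros((n, n), dtype=complex)
    m[a - 1, b - 1] = 1.0
    return m

# Assumed convention: M_ij = E_ij - E_ji, so Mtilde_ij = i M_ij is hermitian.
Mt = {(i, j): 1j * (Eu(i, j) - Eu(j, i)) for i in range(1, 5) for j in range(1, 5)}

L1, R1 = 0.5 * (Mt[2, 3] - Mt[1, 4]), 0.5 * (Mt[2, 3] + Mt[1, 4])
L2, R2 = 0.5 * (Mt[1, 3] + Mt[2, 4]), 0.5 * (Mt[1, 3] - Mt[2, 4])
L3, R3 = 0.5 * (Mt[1, 2] - Mt[3, 4]), 0.5 * (Mt[1, 2] + Mt[3, 4])
Lp, Lm, Rp, Rm = L1 + 1j * L2, L1 - 1j * L2, R1 + 1j * R2, R1 - 1j * R2
comm = lambda X, Y: X @ Y - Y @ X

# su(2) x su(2) commutation relations as in the text
assert np.allclose(comm(Lp, Lm), 2 * L3) and np.allclose(comm(L3, Lp), Lp)
assert np.allclose(comm(Rp, Rm), 2 * R3) and np.allclose(comm(R3, Rm), -Rm)
# the two su(2) factors commute
assert all(np.allclose(comm(A, B), 0) for A in (L1, L2, L3) for B in (R1, R2, R3))
# equal quadratic Casimirs (value 3/4 in this 4-dimensional rep)
Lsq = L1 @ L1 + L2 @ L2 + L3 @ L3
Rsq = R1 @ R1 + R2 @ R2 + R3 @ R3
assert np.allclose(Lsq, Rsq) and np.allclose(Lsq, 0.75 * np.eye(4))
print("so(4) = su(2)_L + su(2)_R relations verified in the defining rep")
```

Only the convention-independent statements matter here: the two $\mathfrak{su}(2)$ factors close separately, commute with each other, and have equal quadratic Casimirs.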
We note that $S_{\pm,0}$ and $A_{\pm,0}$ were given in equations (\[SU(2)S\_generators\]) and (\[SU(2)AN\_generators\]). The remaining generators of $\mathfrak{su}(4)$ are given by $$\begin{aligned} C_{1m} &= \frac{1}{2} \left( U_m + i \, \widetilde{U}_m \right) \\ C^{1m} &= \frac{1}{2} \left( U^m - i \, \widetilde{U}^m \right) \end{aligned} \qquad\qquad \begin{aligned} C_{2m} &= \frac{1}{2} \left( V_m + i \, \widetilde{V}_m \right) \\ C^{2m} &= \frac{1}{2} \left( V^m - i \, \widetilde{V}^m \right) \,. \end{aligned}$$ Then the $\mathfrak{su}(4)$ algebra becomes $$\begin{split} &{\left[ S^{m^\prime}_{~n^\prime} \, , \, S^{k^\prime}_{~l^\prime} \right]} = \delta^{k^\prime}_{n^\prime} \, S^{m^\prime}_{~l^\prime} - \delta^{m^\prime}_{l^\prime} \, S^{k^\prime}_{~n^\prime} \qquad \qquad \qquad {\left[ A^m_{~n} \, , \, A^k_{~l} \right]} = \delta^k_n \, A^m_{~l} - \delta^m_l \, A^k_{~n} \\ &{\left[ C^{m^\prime m} \, , \, C_{n^\prime n} \right]} = \delta^m_n \, S^{m^\prime}_{~n^\prime} + \delta^{m^\prime}_{n^\prime} \, A^m_{~n} + \delta^{m^\prime}_{n^\prime} \delta^m_n \, J \\ &{\left[ S^{m^\prime}_{~n^\prime} \, , \, C^{k^\prime m} \right]} = \delta^{k^\prime}_{n^\prime} \, C^{m^\prime m} - \frac{1}{2} \delta^{m^\prime}_{n^\prime} \, C^{k^\prime m} \qquad {\left[ A^m_{~n} \, , \, C^{m^\prime k} \right]} = \delta^k_n \, C^{m^\prime m} - \frac{1}{2} \delta^m_n \, C^{m^\prime k} \,. 
\end{split}$$ where we have labeled the generators of $\mathfrak{su}(2)_S$ and $\mathfrak{su}(2)_A$ as $S^m_{~n}$ and $A^m_{~n}$, respectively: $$\begin{aligned} S^1_{~1} &= - S^2_{~2} = S_0 \\ A^1_{~1} &= - A^2_{~2} = A_0 \end{aligned} \qquad \qquad \qquad \begin{aligned} S^1_{~2} &= S_+ \\ A^1_{~2} &= A_+ \end{aligned} \qquad \qquad \qquad \begin{aligned} S^2_{~1} = \left( {S^1_{~2}} \right)^\dag &= S_- \\ A^2_{~1} = \left( {A^1_{~2}} \right)^\dag &= A_- \end{aligned}$$ We shall label $\mathfrak{C}^\pm$ operators as $$\begin{aligned} Y_m &= \frac{1}{2} \left( U_m - i \, \widetilde{U}_m \right) \\ Z_m &= \frac{1}{2} \left( V_m - i \, \widetilde{V}_m \right) \\ N_- & \\ B_- &= \frac{i}{2} \left[ \Delta + i \left( K_+ - K_- \right) \right] \end{aligned} \qquad \qquad \begin{aligned} Y^m &= \frac{1}{2} \left( U^m + i \, \widetilde{U}^m \right) \\ Z^m &= \frac{1}{2} \left( V^m + i \, \widetilde{V}^m \right) \\ N_+ & \\ B_+ &= - \frac{i}{2} \left[ \Delta - i \left( K_+ - K_- \right) \right] \,. 
\end{aligned} \label{SO*8Grpm1}$$ The commutators ${\left[ \mathfrak{C}^- \, , \, \mathfrak{C}^+ \right]}$ close into $\mathfrak{C}^0$: $$\begin{aligned} {\left[ Y_m \, , \, Y^n \right]} &= \delta^n_m \, H + \delta^n_m \, S_0 + A^n_{~m} \\ {\left[ Y_m \, , \, Z^n \right]} &= \delta^n_m \, S_- \\ {\left[ Y_m \, , \, N_+ \right]} &= + \epsilon_{mn} \, C^{2n} \\ {\left[ Y_m \, , \, B_+ \right]} &= C_{1m} \\ {\left[ N_- \, , \, N_+ \right]} &= H + J \end{aligned} \qquad \qquad \begin{aligned} {\left[ Z_m \, , \, Z^n \right]} &= \delta^n_m \, H - \delta^n_m \, S_0 + A^n_{~m} \\ {\left[ N_- \, , \, B_+ \right]} &= 0 \\ {\left[ Z_m \, , \, N_+ \right]} &= - \epsilon_{mn} C^{1n} \\ {\left[ Z_m \, , \, B_+ \right]} &= C_{2m} \\ {\left[ B_- \, , \, B_+ \right]} &= H - J \end{aligned}$$ The quadratic Casimir of this subalgebra $\mathfrak{su}(4)$ is given by $$\mathcal{C}_2 \left[ \mathfrak{su}(4) \right] = S^{m^\prime}_{~n^\prime} S^{n^\prime}_{~m^\prime} + A^m_{~n} A^n_{~m} + \left( C^{m^\prime m} C_{m^\prime m} + C_{m^\prime m} C^{m^\prime m} \right) + J^2 \,.$$ Transformations between $SO(6,2)$ oscillators $c_i$ and $SO^*(8)$ oscillators $a_m$, $b_m$ {#app:bogoliubov} ========================================================================================== The minrep of $SO(6,2)$ can be related to the minrep of $SO^*(8)$ very simply by rewriting the oscillators $a_m$, $b_m$ and $a^m$, $b^m$ in terms of $c_i$ and $c_i^\dag$ as follows: $$\begin{aligned} a_1 &= - \frac{i}{\sqrt{2}} \left( c_1 + i \, c_2 \right) \\ a_2 &= \frac{1}{\sqrt{2}} \left( c_3 + i \, c_4 \right) \\ b_1 &= \frac{1}{\sqrt{2}} \left( c_3 - i \, c_4 \right) \\ b_2 &= - \frac{i}{\sqrt{2}} \left( c_1 - i \, c_2 \right) \end{aligned} \qquad \qquad \qquad \begin{aligned} a^1 &= \frac{i}{\sqrt{2}} \left( c_1^\dag - i \, c_2^\dag \right) \\ a^2 &= \frac{1}{\sqrt{2}} \left( c_3^\dag - i \, c_4^\dag \right) \\ b^1 &= \frac{1}{\sqrt{2}} \left( c_3^\dag + i \, c_4^\dag \right) \\ b^2 &= \frac{i}{\sqrt{2}} \left( 
c_1^\dag + i \, c_2^\dag \right) \end{aligned}$$ Then it is easy to see that we have the following mapping between the subalgebra $\mathfrak{su}(2)_L \oplus \mathfrak{su}(2)_R \oplus \mathfrak{su}(1,1)_{\mathfrak{R}}$ of $\mathfrak{so}(6,2)$ and the subalgebra $\mathfrak{su}(2)_A \oplus \mathfrak{su}(2)_S \oplus \mathfrak{su}(1,1)_N$ of $\mathfrak{so}^*(8)$: $$\begin{aligned} L_3 &\longrightarrow A_0 \\ L_+ &\longrightarrow i \, A_+ \\ L_- &\longrightarrow - i \, A_- \end{aligned} \qquad \qquad \begin{aligned} R_3 &\longrightarrow S_0 \\ R_+ &\longrightarrow i \, S_+ \\ R_- &\longrightarrow - i \, S_- \end{aligned} \qquad \qquad \begin{aligned} \mathfrak{R}_3 &\longrightarrow N_0 \\ \mathfrak{R}_+ &\longrightarrow - i \, N_+ \\ \mathfrak{R}_- &\longrightarrow i \, N_- \end{aligned}$$ The relation between $\mathfrak{su}(1,1)_{\mathfrak{L}}$ of $\mathfrak{so}(6,2)$ and $\mathfrak{su}(1,1)_K$ of $\mathfrak{so}^*(8)$ is quite straightforward. The superalgebra $\mathfrak{osp}(8^*|2N)$ in the 5-grading with respect to the subsuperalgebra $\mathfrak{osp}(4^*|2N)$ {#OSp(8*|2N)-5Gr} ======================================================================================================================= We gave the explicit realization of the superalgebra $\mathfrak{osp}(8^*|2N)$ in the 5-grading with respect to the subsuperalgebra $\mathfrak{osp}(4^*|2N)$ in section \[minrepOSp(8\*|2N)-5Gr\]. In this appendix, we provide the commutation relations between the generators of $\mathfrak{osp}(8^*|2N)$ in this basis.
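Before listing the commutation relations, we record a quick consistency check (our own illustration) on the oscillator transformation of the previous appendix: since the map is linear and does not mix creation with annihilation operators, it preserves the canonical commutation relations precisely when its coefficient matrix is unitary.

```python
import numpy as np

s = 1 / np.sqrt(2)
# Rows express (a_1, a_2, b_1, b_2) in the basis (c_1, c_2, c_3, c_4),
# read off from the transformation above.
U = np.array([
    [-1j * s,  s,  0,       0],   # a_1 = -(i/sqrt(2)) (c_1 + i c_2)
    [ 0,       0,  s,  1j * s],   # a_2 =  (1/sqrt(2)) (c_3 + i c_4)
    [ 0,       0,  s, -1j * s],   # b_1 =  (1/sqrt(2)) (c_3 - i c_4)
    [-1j * s, -s,  0,       0],   # b_2 = -(i/sqrt(2)) (c_1 - i c_2)
])

# Unitarity of U <=> the a, b oscillators inherit the canonical
# commutation relations of the c oscillators.
assert np.allclose(U @ U.conj().T, np.eye(4))
print("oscillator change of basis is unitary, hence canonical")
```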
The (super-)commutation relations between the generators of grade zero subspace $\mathfrak{g}^{(0)} = \mathfrak{osp}(4^*|2N)$ are as follows: $$\begin{aligned} {\left\{ \Pi_{mr} \, , \, \overline{\Pi}^{ns} \right\}} &= \delta^s_r \, A^n_{~m} - \delta^n_m \, M^s_{~r} + \delta^s_r \, \delta^n_m \, N_0 \\ {\left\{ \Sigma_m^{~r} \, , \, \overline{\Sigma}^n_{~s} \right\}} &= \delta^r_s \, A^n_{~m} + \delta^n_m \, M^r_{~s} + \delta^r_s \, \delta^n_m \, N_0 \\ {\left[ A^m_{~n} \, , \, \Pi_{kr} \right]} &= - \delta^m_k \, \Pi_{nr} + \frac{1}{2} \delta^m_n \, \Pi_{kr} \\ {\left[ A^m_{~n} \, , \, \Sigma_k^{~r} \right]} &= - \delta^m_k \, \Sigma_n^{~r} + \frac{1}{2} \delta^m_n \, \Sigma_k^{~r} \\ {\left[ S_{rs} \, , \, \overline{\Pi}^{mt} \right]} &= \delta^t_s \, \overline{\Sigma}^m_{~r} + \delta^t_r \, \overline{\Sigma}^m_{~s} \\ {\left[ S_{rs} \, , \, \Sigma_m^{~t} \right]} &= - \delta^t_r \, \Pi_{ms} - \delta^t_s \, \Pi_{mr} \\ {\left[ M^r_{~s} \, , \, \Pi_{mt} \right]} &= - \delta^r_t \, \Pi_{ms} \\ {\left[ M^r_{~s} \, , \, \Sigma_m^{~t} \right]} &= \delta^t_s \, \Sigma_m^{~r} \\ {\left[ S_{rs} \, , \, \Pi_{mt} \right]} &= 0 \end{aligned} \qquad \qquad \begin{aligned} {\left\{ \Pi_{mr} \, , \, \Sigma_n^{~s} \right\}} &= \epsilon_{mn} \, \delta^s_{~r} \, N_- \\ {\left\{ \Pi_{mr} \, , \, \overline{\Sigma}^n_{~s} \right\}} &= - \delta^n_m \, S_{rs} \\ {\left[ N_+ \, , \, \Pi_{mr} \right]} &= - \epsilon_{mn} \, \overline{\Sigma}^n_{~r} \\ {\left[ N_+ \, , \, \Sigma_m^{~r} \right]} &= \epsilon_{mn} \, \overline{\Pi}^{nr} \\ {\left[ N_- \, , \, \Pi_{mr} \right]} &= 0 \\ {\left[ N_- \, , \, \Sigma_m^{~r} \right]} &= 0 \\ {\left[ N_0 \, , \, \Pi_{mr} \right]} &= - \frac{1}{2} \, \Pi_{mr} \\ {\left[ N_0 \, , \, \Sigma_m^{~r} \right]} &= - \frac{1}{2} \, \Sigma_m^{~r} \\ {\left[ S_{rs} \, , \, \overline{\Sigma}^m_{~t} \right]} &= 0 \end{aligned}$$ The anticommutators between the supersymmetry generators in $\mathfrak{g}^{(-1)}$ and $\mathfrak{g}^{(+1)}$ (given in equations 
(\[5GsusyGr-1\]) and (\[5GsusyGr+1\])) close into the bosonic generators in $\mathfrak{g}^{(0)}$: $$\begin{aligned} {\left\{ Q_r \, , \, \widetilde{Q}_s \right\}} &= 0 \\ {\left\{ Q_r \, , \, \widetilde{Q}^s \right\}} &= - \delta^s_r \, \Delta - 2 i \, \delta^s_r \, T_0 + 2 i \, M^s_{~r} \\ {\left\{ S_r \, , \, \widetilde{S}_s \right\}} &= 0 \\ {\left\{ S_r \, , \, \widetilde{S}^s \right\}} &= - \delta^s_r \, \Delta + 2 i \, \delta^s_r \, T_0 + 2 i \, M^s_{~r} \end{aligned} \qquad \qquad \begin{aligned} {\left\{ Q_r \, , \, \widetilde{S}_s \right\}} &= - 2 i \, S_{rs} \\ {\left\{ Q_r \, , \, \widetilde{S}^s \right\}} &= - 2 i \, \delta^s_r \, T_- \\ {\left\{ S_r \, , \, \widetilde{Q}_s \right\}} &= + 2 i \, S_{rs} \\ {\left\{ S_r \, , \, \widetilde{Q}^s \right\}} &= - 2 i \, \delta^s_r \, T_+ \end{aligned}$$ Finally, the commutators between the bosonic (even) and fermionic (odd) generators of $\mathfrak{g}^{(-1)}$ and $\mathfrak{g}^{(+1)}$ subspaces close into the fermionic (odd) generators of $\mathfrak{g}^{(0)}$: $$\begin{aligned} {\left[ U_m \, , \, \widetilde{Q}_r \right]} &= 0 \\ {\left[ U_m \, , \, \widetilde{Q}^r \right]} &= - 2 i \, \Sigma_m^{~r} \\ {\left[ U_m \, , \, \widetilde{S}_r \right]} &= - 2 i \, \Pi_{mr} \\ {\left[ U_m \, , \, \widetilde{S}^r \right]} &= 0 \end{aligned} \qquad \qquad \begin{aligned} {\left[ V_m \, , \, \widetilde{Q}_r \right]} &= + 2 i \, \Pi_{mr} \\ {\left[ V_m \, , \, \widetilde{Q}^r \right]} &= 0 \\ {\left[ V_m \, , \, \widetilde{S}_r \right]} &= 0 \\ {\left[ V_m \, , \, \widetilde{S}^r \right]} &= - 2 i \, \Sigma_m^{~r} \end{aligned}$$ $$\begin{aligned} {\left[ Q_r \, , \, \widetilde{U}_m \right]} &= 0 \\ {\left[ Q^r \, , \, \widetilde{U}_m \right]} &= - 2 i \, \Sigma_m^{~r} \\ {\left[ S_r \, , \, \widetilde{U}_m \right]} &= - 2 i \, \Pi_{mr} \\ {\left[ S^r \, , \, \widetilde{U}_m \right]} &= 0 \end{aligned} \qquad \qquad \begin{aligned} {\left[ Q_r \, , \, \widetilde{V}_m \right]} &= + 2 i \, \Pi_{mr} \\ {\left[ Q^r \, , \, 
\widetilde{V}_m \right]} &= 0 \\ {\left[ S_r \, , \, \widetilde{V}_m \right]} &= 0 \\ {\left[ S^r \, , \, \widetilde{V}_m \right]} &= - 2 i \, \Sigma_m^{~r} \end{aligned}$$ The superalgebra $\mathfrak{osp}(8^*|2N)$ in the 3-grading with respect to the subsuperalgebra $\mathfrak{u}(4|N)$ {#OSp(8*|2N)-3Gr} ================================================================================================================== As shown in section \[minrepOSp(8\*|2N)-3Gr\], the superalgebra $\mathfrak{osp}(8^*|2N)$ has a 3-graded decomposition with respect to the subsuperalgebra $\mathfrak{u}(4|N)$. In this appendix we give the explicit form of the remaining bosonic and supersymmetry generators and some useful (super-)commutation relations among them. The commutators ${\left[ \mathfrak{C}^- \, , \, \mathfrak{C}^+ \right]}$ close into $\mathfrak{C}^0$: $$\begin{aligned} {\left[ Y_m \, , \, Y^n \right]} &= \delta^n_m \, H_B + \delta^n_m \, T_0 + A^n_{~m} \\ {\left[ Y_m \, , \, Z^n \right]} &= \delta^n_m \, T_- \\ {\left[ Y_m \, , \, N_+ \right]} &= + \epsilon_{mn} \, C^{2n} \\ {\left[ Y_m \, , \, B_+ \right]} &= C_{1m} \\ {\left[ N_- \, , \, N_+ \right]} &= H_B + J \end{aligned} \qquad \qquad \begin{aligned} {\left[ Z_m \, , \, Z^n \right]} &= \delta^n_m \, H_B - \delta^n_m \, T_0 + A^n_{~m} \\ {\left[ N_- \, , \, B_+ \right]} &= 0 \\ {\left[ Z_m \, , \, N_+ \right]} &= - \epsilon_{mn} C^{1n} \\ {\left[ Z_m \, , \, B_+ \right]} &= C_{2m} \\ {\left[ B_- \, , \, B_+ \right]} &= H_B - J \end{aligned}$$ where the generators $C^{mn}$ and $C_{mn}$ are coset generators $SU(4) \,/\, \left[ SU(2)_T \times SU(2)_A \times U(1)_J \right]$ defined below. 
The $\mathfrak{su}(4|N)$ part of $\mathfrak{C}^0$ has an even subalgebra $\mathfrak{su}(4|N) \supset \mathfrak{su}(4) \oplus \mathfrak{su}(N) \oplus \mathfrak{u}(1)_D$, where the $\mathfrak{u}(1)_D$ charge and $\mathfrak{su}(N)$ generators are given by $$\begin{split} D &= \frac{1}{2} \left( K_+ + K_- \right) + \frac{2}{N} \, M_0 \\ \widetilde{M}^r_{~s} &= \alpha^r \alpha_s - \beta_s \beta^r - \frac{2}{N} \, \delta^r_s \, M_0 \end{split}$$ The generators of $\mathfrak{su}(4)$, in its $\mathfrak{su}(4) \supset \mathfrak{su}(2)_T \oplus \mathfrak{su}(2)_A \oplus \mathfrak{u}(1)_J$ decomposition, are realized as follows: $$\begin{aligned} T_+ &= a^m b_m + \alpha^r \beta_r \\ T_- &= b^m a_m + \beta^r \alpha_r \\ T_0 &= \frac{1}{2} \left( N_a - N_b + N_\alpha - N_\beta \right) \end{aligned} \qquad \qquad \begin{aligned} A_+ &= a^1 a_2 + b^1 b_2 \\ A_- &= a_1 a^2 + b_1 b^2 \\ A_0 &= \frac{1}{2} \left( a^1 a_1 - a^2 a_2 + b^1 b_1 - b^2 b_2 \right) \end{aligned}$$ $$\begin{split} J &= N_0 - \frac{1}{2} \left( K_+ + K_- \right) \\ &= - \frac{1}{4} \left( x^2 + p^2 \right) - \frac{1}{8 \, x^2} \left( 8 \, \mathcal{T}^2 + \frac{3}{2} \right) + \frac{1}{2} \left( N_a + N_b \right) + 1 \end{split}$$ $$\begin{split} C_{1m} &= \frac{1}{2} \left( U_m + i \, \widetilde{U}_m \right) = \frac{1}{2} \left( x - i \, p \right) a_m - \frac{1}{x} \left[ \left( T_0 + \frac{3}{4} \right) a_m + T_- b_m \right] \\ C^{1m} &= \frac{1}{2} \left( U^m - i \, \widetilde{U}^m \right) = \frac{1}{2} \left( x + i \, p \right) a^m - \frac{1}{x} \left[ \left( T_0 - \frac{3}{4} \right) a^m + T_+ b^m \right] \\ C_{2m} &= \frac{1}{2} \left( V_m + i \, \widetilde{V}_m \right) = \frac{1}{2} \left( x - i \, p \right) b_m + \frac{1}{x} \left[ \left( T_0 - \frac{3}{4} \right) b_m - T_+ a_m \right] \\ C^{2m} &= \frac{1}{2} \left( V^m - i \, \widetilde{V}^m \right) = \frac{1}{2} \left( x + i \, p \right) b^m + \frac{1}{x} \left[ \left( T_0 + \frac{3}{4} \right) b^m - T_- a^m \right] \end{split}$$ One half of total 
supersymmetry generators of $\mathfrak{osp}(8^*|2N)$ belong to the grade zero subspace, as part of the subsuperalgebra $\mathfrak{su}(4|N)$. Below we list these $8 N$ supersymmetry generators: $$\begin{split} \widetilde{\mathfrak{Q}}_r &= \frac{1}{2} \left( Q_r + i \, \widetilde{Q}_r \right) = \frac{1}{2} \left( x - i \, p \right) \alpha_r - \frac{1}{x} \left[ \left( T_0 + \frac{3}{4} \right) \alpha_r + T_- \beta_r \right] \\ \widetilde{\mathfrak{Q}}^r &= \frac{1}{2} \left( Q^r - i \, \widetilde{Q}^r \right) = \frac{1}{2} \left( x + i \, p \right) \alpha^r - \frac{1}{x} \left[ \left( T_0 - \frac{3}{4} \right) \alpha^r + T_+ \beta^r \right] \\ \widetilde{\mathfrak{S}}_r &= \frac{1}{2} \left( S_r + i \, \widetilde{S}_r \right) = \frac{1}{2} \left( x - i \, p \right) \beta_r + \frac{1}{x} \left[ \left( T_0 - \frac{3}{4} \right) \beta_r - T_+ \alpha_r \right] \\ \widetilde{\mathfrak{S}}^r &= \frac{1}{2} \left( S^r - i \, \widetilde{S}^r \right) = \frac{1}{2} \left( x + i \, p \right) \beta^r + \frac{1}{x} \left[ \left( T_0 + \frac{3}{4} \right) \beta^r - T_- \alpha^r \right] \\ \widetilde{\Sigma}_m^{~r} &= \Sigma_m^{~r} = a_m \alpha^r + b_m \beta^r \\ \widetilde{\Sigma}^m_{~r} &= \overline{\Sigma}^m_{~r} = a^m \alpha_r + b^m \beta_r \end{split}$$ Under supercommutation, they close into the bosonic generators of $\mathfrak{su}(4|N)$: $$\begin{aligned} {\left\{ \widetilde{\mathfrak{Q}}_r \, , \, \widetilde{\mathfrak{Q}}^s \right\}} &= - \delta^s_r \, T_0 + \widetilde{M}^s_{~r} + \delta^s_r \, D \\ {\left\{ \widetilde{\mathfrak{S}}_r \, , \, \widetilde{\mathfrak{S}}^s \right\}} &= + \delta^s_r \, T_0 + \widetilde{M}^s_{~r} + \delta^s_r \, D \\ {\left\{ \widetilde{\Sigma}_m^{~r} \, , \, \widetilde{\Sigma}^n_{~s} \right\}} &= \delta^r_s \, A^n_{~m} + \delta^n_m \, \widetilde{M}^r_{~s} + \delta^r_s \, \delta^n_m \, D + \delta^r_s \, \delta^n_m \, J \end{aligned} \qquad \qquad \begin{aligned} {\left\{ \widetilde{\mathfrak{Q}}_r \, , \, \widetilde{\mathfrak{S}}^s \right\}} &= -
\delta^s_r \, T_- \\ {\left\{ \widetilde{\Sigma}_m^{~r} \, , \, \widetilde{\mathfrak{Q}}_s \right\}} &= \delta^r_s \, C_{1m} \\ {\left\{ \widetilde{\Sigma}_m^{~r} \, , \, \widetilde{\mathfrak{S}}_s \right\}} &= \delta^r_s \, C_{2m} \end{aligned}$$ These supersymmetry generators in $\mathfrak{C}^{\pm}$ satisfy the following (anti-)commutation relations: $$\begin{split} {\left\{ \mathfrak{Q}_r \, , \, \mathfrak{Q}^s \right\}} &= + \delta^s_r \, T_0 - \widetilde{M}^s_{~r} + \delta^s_r \, H_\odot - \frac{2}{N} \, \delta^s_r \, M_0 \\ {\left\{ \mathfrak{S}_r \, , \, \mathfrak{S}^s \right\}} &= - \delta^s_r \, T_0 - \widetilde{M}^s_{~r} + \delta^s_r \, H_\odot - \frac{2}{N} \, \delta^s_r \, M_0 \\ {\left\{ \Pi_{mr} \, , \, \overline{\Pi}^{ns} \right\}} &= \delta^s_r \, A^n_m - \delta^n_m \, \widetilde{M}^s_{~r} + \delta^n_m \, \delta^s_r \, N_0 - \frac{2}{N} \, \delta^n_m \, \delta^s_r \, M_0 \\ {\left\{ \mathfrak{Q}_r \, , \, \mathfrak{S}^s \right\}} &= \delta^s_r \, T_- \\ {\left\{ \Pi_{mr} \, , \, \mathfrak{Q}^s \right\}} &= - \delta^s_r \, C_{2m} \\ {\left\{ \Pi_{mr} \, , \, \mathfrak{S}^s \right\}} &= + \delta^s_r \, C_{1m} \end{split}$$ The commutators between bosonic operators in $\mathfrak{C}^-$ and supersymmetry generators in $\mathfrak{C}^+$ are as follows: $$\begin{aligned} {\left[ Y_n \, , \, \mathfrak{Q}^r \right]} &= \widetilde{\Sigma}_n^{~r} \\ {\left[ Z_n \, , \, \mathfrak{Q}^r \right]} &= 0 \\ {\left[ N_- \, , \, \mathfrak{Q}^r \right]} &= 0 \\ {\left[ B_- \, , \, \mathfrak{Q}^r \right]} &= \widetilde{\mathfrak{Q}}^r \\ {\left[ S_{st} \, , \, \mathfrak{Q}^r \right]} &= - 2 \, \delta^r_{(s} \, \widetilde{\mathfrak{S}}_{t)} \end{aligned} \quad \begin{aligned} {\left[ Y_n \, , \, \mathfrak{S}^r \right]} &= 0 \\ {\left[ Z_n \, , \, \mathfrak{S}^r \right]} &= \widetilde{\Sigma}_n^{~r} \\ {\left[ N_- \, , \, \mathfrak{S}^r \right]} &= 0 \\ {\left[ B_- \, , \, \mathfrak{S}^r \right]} &= \widetilde{\mathfrak{S}}^r \\ {\left[ S_{st} \, , \, \mathfrak{S}^r 
\right]} &= + 2 \, \delta^r_{(s} \, \widetilde{\mathfrak{Q}}_{t)} \end{aligned} \quad \begin{aligned} {\left[ Y_n \, , \, \overline{\Pi}^{mr} \right]} &= + \delta^m_n \, \widetilde{\mathfrak{S}}^r \\ {\left[ Z_n \, , \, \overline{\Pi}^{mr} \right]} &= - \delta^m_n \, \widetilde{\mathfrak{Q}}^r \\ {\left[ N_- \, , \, \overline{\Pi}^{mr} \right]} &= \epsilon^{mn} \, \widetilde{\Sigma}_n^{~r} \\ {\left[ B_- \, , \, \overline{\Pi}^{mr} \right]} &= 0 \\ {\left[ S_{st} \, , \, \overline{\Pi}^{mr} \right]} &= 2 \, \delta^r_{(s} \, \widetilde{\Sigma}^m_{~t)} \end{aligned}$$ Note that we have used the notation “$(st)$” to indicate symmetrization of indices $s$ and $t$ with weight $1$. The anticommutators of supersymmetry generators in $\mathfrak{C}^0$ and those in $\mathfrak{C}^+$ can be written as $$\begin{aligned} {\left\{ \widetilde{\mathfrak{Q}}_r \, , \, \mathfrak{Q}^s \right\}} &= \delta^s_r \, B_+ \\ {\left\{ \widetilde{\mathfrak{Q}}^r \, , \, \mathfrak{Q}^s \right\}} &= 0 \\ {\left\{ \widetilde{\mathfrak{S}}_r \, , \, \mathfrak{Q}^s \right\}} &= 0 \\ {\left\{ \widetilde{\mathfrak{S}}^r \, , \, \mathfrak{Q}^s \right\}} &= + S^{rs} \\ {\left\{ \widetilde{\Sigma}^m_{~r} \, , \, \mathfrak{Q}^s \right\}} &= + \delta^s_r \, Y^m \\ {\left\{ \widetilde{\Sigma}_m^{~r} \, , \, \mathfrak{Q}^s \right\}} &= 0 \end{aligned} \qquad \quad \begin{aligned} {\left\{ \widetilde{\mathfrak{Q}}_r \, , \, \mathfrak{S}^s \right\}} &= 0 \\ {\left\{ \widetilde{\mathfrak{Q}}^r \, , \, \mathfrak{S}^s \right\}} &= - S^{rs} \\ {\left\{ \widetilde{\mathfrak{S}}_r \, , \, \mathfrak{S}^s \right\}} &= \delta^s_r \, B_+ \\ {\left\{ \widetilde{\mathfrak{S}}^r \, , \, \mathfrak{S}^s \right\}} &= 0 \\ {\left\{ \widetilde{\Sigma}^m_{~r} \, , \, \mathfrak{S}^s \right\}} &= + \delta^s_r \, Z^m \\ {\left\{ \widetilde{\Sigma}_m^{~r} \, , \, \mathfrak{S}^s \right\}} &= 0 \end{aligned} \qquad \quad \begin{aligned} {\left\{ \widetilde{\mathfrak{Q}}_r \, , \, \overline{\Pi}^{ns} \right\}} &= - \delta^s_r \, Z^n 
\\ {\left\{ \widetilde{\mathfrak{Q}}^r \, , \, \overline{\Pi}^{ns} \right\}} &= 0 \\ {\left\{ \widetilde{\mathfrak{S}}_r \, , \, \overline{\Pi}^{ns} \right\}} &= + \delta^s_r \, Y^n \\ {\left\{ \widetilde{\mathfrak{S}}^r \, , \, \overline{\Pi}^{ns} \right\}} &= 0 \\ {\left\{ \widetilde{\Sigma}^m_{~r} \, , \, \overline{\Pi}^{ns} \right\}} &= - \epsilon^{mn} \, \delta^s_r \, N_+ \\ {\left\{ \widetilde{\Sigma}_m^{~r} \, , \, \overline{\Pi}^{ns} \right\}} &= - \delta^n_m \, S^{rs} \end{aligned}$$ [10]{} M. G[ü]{}naydin and C. Saclioglu, “[Bosonic construction of the Lie algebras of some noncompact groups appearing in supergravity theories and their oscillator-like unitary representations]{},” [*Phys. Lett.*]{} [**B108**]{} (1982) 180. M. G[ü]{}naydin and C. Saclioglu, “[Oscillator-like unitary representations of noncompact groups with a Jordan structure and the noncompact groups of supergravity]{},” [*Commun. Math. Phys.*]{} [**87**]{} (1982) 159. M. G[ü]{}naydin, “[Unitary realizations of the noncompact symmetry groups of supergravity]{},”. Presented at 2nd Europhysics Study Conf. on Unification of Fundamental Interactions, Erice, Sicily, Oct 6-14, 1981. J. R. Ellis, M. K. Gaillard, L. Maiani, and B. Zumino, “[Attempts at superunification ]{},”. Presented at Europhysics Study Conf. on Unification of the Fundamental Interactions, Erice, Italy, Mar 17-24, 1980. J. R. Ellis, M. K. Gaillard, and B. Zumino, “[A Grand Unified Theory Obtained from Broken Supergravity]{},” [*Phys. Lett.*]{} [**B94**]{} (1980) 343. M. G[ü]{}naydin, “[Present status of the attempts at a realistic GUT in extended supergravity theories]{},”. Presented at 21st Int. Conf. on High Energy Physics, Paris, France, Jul 26-31, 1982. M. G[ü]{}naydin, G. Sierra, and P. K. Townsend, “Exceptional supergravity theories and the magic square,” [*Phys. Lett.*]{} [**B133**]{} (1983) 72. M. B. Green and J. H. Schwarz, “[Anomaly Cancellation in Supersymmetric D=10 Gauge Theory and Superstring Theory]{},” [*Phys. 
Lett.*]{} [**B149**]{} (1984) 117–122. Z. Bern, J. J. M. Carrasco, L. J. Dixon, H. Johansson, and R. Roiban, “[Manifest Ultraviolet Behavior for the Three-Loop Four-Point Amplitude of N=8 Supergravity]{},” [[0808.4112]{}](http://www.arXiv.org/abs/0808.4112). N. E. J. Bjerrum-Bohr and P. Vanhove, “[On Cancellations of Ultraviolet Divergences in Supergravity Amplitudes]{},” [*Fortsch. Phys.*]{} [**56**]{} (2008) 824–832, [[0806.1726]{}](http://www.arXiv.org/abs/0806.1726). N. Arkani-Hamed, F. Cachazo, and J. Kaplan, “[What is the Simplest Quantum Field Theory?]{},” [[0808.1446]{}](http://www.arXiv.org/abs/0808.1446). G. Chalmers, “[On the finiteness of N = 8 quantum supergravity]{},” [[hep-th/0008162]{}](http://www.arXiv.org/abs/hep-th/0008162). M. B. Green, J. G. Russo, and P. Vanhove, “[Non-renormalisation conditions in type II string theory and maximal supergravity]{},” [*JHEP*]{} [**02**]{} (2007) 099, [[hep-th/0610299]{}](http://www.arXiv.org/abs/hep-th/0610299). M. B. Green, J. G. Russo, and P. Vanhove, “[Ultraviolet properties of maximal supergravity]{},” [*Phys. Rev. Lett.*]{} [**98**]{} (2007) 131602, [[hep-th/0611273]{}](http://www.arXiv.org/abs/hep-th/0611273). M. B. Green, H. Ooguri, and J. H. Schwarz, “[Decoupling Supergravity from the Superstring]{},” [*Phys. Rev. Lett.*]{} [**99**]{} (2007) 041601, [[0704.0777]{}](http://www.arXiv.org/abs/0704.0777). R. Kallosh, C. H. Lee, and T. Rube, “[N=8 Supergravity 4-point Amplitudes]{},” [[0811.3417]{}](http://www.arXiv.org/abs/0811.3417). Z. Bern, J. J. Carrasco, L. J. Dixon, H. Johansson, and R. Roiban, “[The Ultraviolet Behavior of N=8 Supergravity at Four Loops]{},” [*Phys. Rev. Lett.*]{} [**103**]{} (2009) 081301, [[0905.2326]{}](http://www.arXiv.org/abs/0905.2326). R. Kallosh, “[On UV Finiteness of the Four Loop N=8 Supergravity]{},” [*JHEP*]{} [**09**]{} (2009) 116, [[0906.3495]{}](http://www.arXiv.org/abs/0906.3495). I. Bars and M. 
Gunaydin, “[Unitary Representations of Noncompact Supergroups]{},” [*Commun. Math. Phys.*]{} [**91**]{} (1983) 31. M. Gunaydin and N. Marcus, “[The Spectrum of the $S^5$ Compactification of the Chiral $N=2, D=10$ Supergravity and the Unitary Supermultiplets of $U(2, 2/4)$]{},” [*Class. Quant. Grav.*]{} [**2**]{} (1985) L11. M. Gunaydin and N. P. Warner, “[Unitary Supermultiplets of $OSp(8/4,R)$ and the Spectrum of the $S^7$ Compactification of Eleven-Dimensional Supergravity]{},” [*Nucl. Phys.*]{} [**B272**]{} (1986) 99. M. Gunaydin, P. van Nieuwenhuizen, and N. P. Warner, “[General Construction of the Unitary Representations of Anti-De Sitter Superalgebras and the Spectrum of the $S^4$ Compactification of Eleven-Dimensional Supergravity]{},” [ *Nucl. Phys.*]{} [**B255**]{} (1985) 63. J. M. Maldacena, “[The large N limit of superconformal field theories and supergravity]{},” [*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 231–252, [[hep-th/9711200]{}](http://www.arXiv.org/abs/hep-th/9711200). E. Witten, “[Anti-de Sitter space and holography]{},” [*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 253–291, [[hep-th/9802150]{}](http://www.arXiv.org/abs/hep-th/9802150). S. S. Gubser, I. R. Klebanov, and A. M. Polyakov, “[Gauge theory correlators from non-critical string theory]{},” [*Phys. Lett.*]{} [**B428**]{} (1998) 105–114, [[hep-th/9802109]{}](http://www.arXiv.org/abs/hep-th/9802109). A. Joseph, “Minimal realizations and spectrum generating algebras,” [ *Comm. Math. Phys.*]{} [**36**]{} (1974) 325–338. D. A. Vogan, Jr., “Singular unitary representations,” in [*Noncommutative harmonic analysis and [L]{}ie groups ([M]{}arseille, 1980)*]{}, vol. 880 of [ *Lecture Notes in Math.*]{}, pp. 506–535. Springer, Berlin, 1981. B. Kostant, “The vanishing of scalar curvature and the minimal representation of [${SO}(4,4)$]{},” in [*Operator algebras, unitary representations, enveloping algebras, and invariant theory ([P]{}aris, 1989)*]{}, vol. 92 of [ *Progr. Math.*]{}, pp. 85–124. 
Birkhäuser Boston, Boston, MA, 1990. D. Kazhdan and G. Savin, “The smallest representation of simply laced groups,” in [*Festschrift in honor of I. I. Piatetski-Shapiro on the occasion of his sixtieth birthday, Part I (Ramat Aviv, 1989)*]{}, vol. 2 of [ *Israel Math. Conf. Proc.*]{}, pp. 209–223. Weizmann, Jerusalem, 1990. R. Brylinski and B. Kostant, “Lagrangian models of minimal representations of [$E\sb 6$]{}, [$E\sb 7$]{} and [$E\sb 8$]{},” in [*Functional analysis on the eve of the 21st century, Vol. 1 (New Brunswick, NJ, 1993)*]{}, vol. 131 of [ *Progr. Math.*]{}, pp. 13–63. Birkhäuser Boston, Boston, MA, 1995. R. Brylinski and B. Kostant, “Minimal representations, geometric quantization, and unitarity,” [*Proc. Nat. Acad. Sci. U.S.A.*]{} [**91**]{} (1994), no. 13, 6026–6029. B. H. Gross and N. R. Wallach, “A distinguished family of unitary representations for the exceptional groups of real rank [$=4$]{},” in [*Lie theory and geometry*]{}, vol. 123 of [*Progr. Math.*]{}, pp. 289–304. Birkhäuser Boston, Boston, MA, 1994. B. Binegar and R. Zierau, “Unitarization of a singular representation of [${\rm SO}(p,q)$]{},” [*Comm. Math. Phys.*]{} [**138**]{} (1991), no. 2, 245–258. T. Kobayashi and B. [Ø]{}rsted, “Analysis on the minimal representation of [$O(p,q)$]{}. [I]{}. [R]{}ealization via conformal geometry,” [*Adv. Math.*]{} [**180**]{} (2003), no. 2, 486–512. T. Kobayashi and B. [Ø]{}rsted, “Analysis on the minimal representation of [$ O(p,q)$]{}. [II]{}. [B]{}ranching laws,” [*Adv. Math.*]{} [**180**]{} (2003), no. 2, 513–550. T. Kobayashi and B. [Ø]{}rsted, “Analysis on the minimal representation of [$ O(p,q)$]{}. [III]{}. [U]{}ltrahyperbolic equations on [${\mathbf{ R}}^{p-1,q-1}$]{},” [*Adv. Math.*]{} [**180**]{} (2003), no. 2, 551–595. D. Kazhdan, B. Pioline, and A. Waldron, “[M]{}inimal representations, spherical vectors, and exceptional theta series. [I]{},” [*Commun. Math. 
Phys.*]{} [ **226**]{} (2002) 1–40, [[hep-th/0107222]{}](http://www.arXiv.org/abs/hep-th/0107222). A. R. Gover and A. Waldron, “[The so(d+2,2) Minimal Representation and Ambient Tractors: the Conformal Geometry of Momentum Space]{},” [[0903.1394]{}](http://www.arXiv.org/abs/0903.1394). S. Ferrara and M. Günaydin, “[O]{}rbits of exceptional groups, duality and [BPS]{} states in string theory,” [*Int. J. Mod. Phys.*]{} [**A13**]{} (1998) 2075–2088, [[hep-th/9708025]{}](http://www.arXiv.org/abs/hep-th/9708025). M. G[ü]{}naydin, K. Koepsell, and H. Nicolai, “[C]{}onformal and quasiconformal realizations of exceptional [L]{}ie groups,” [*Commun. Math. Phys.*]{} [ **221**]{} (2001) 57–76, [[hep-th/0008063]{}](http://www.arXiv.org/abs/hep-th/0008063). M. G[ü]{}naydin, “[Realizations of exceptional U-duality groups as conformal and quasiconformal groups and their minimal unitary representations]{},” [ *Comment. Phys. Math. Soc. Sci. Fenn.*]{} [**166**]{} (2004) 111–125, [[hep-th/0409263]{}](http://www.arXiv.org/abs/hep-th/0409263). M. G[ü]{}naydin, “[Realizations of exceptional U-duality groups as conformal and quasi-conformal groups and their minimal unitary representations]{},”. Prepared for 3rd International Symposium on Quantum Theory and Symmetries (QTS3), Cincinnati, Ohio, 10-14 Sep 2003. M. G[ü]{}naydin, “[U]{}nitary realizations of [U]{}-duality groups as conformal and quasiconformal groups and extremal black holes of supergravity theories,” [*AIP Conf. Proc.*]{} [**767**]{} (2005) 268–287, [[hep-th/0502235]{}](http://www.arXiv.org/abs/hep-th/0502235). M. Gunaydin, “[Lectures on Spectrum Generating Symmetries and U-duality in Supergravity, Extremal Black Holes, Quantum Attractors and Harmonic Superspace]{},” [[0908.0374]{}](http://www.arXiv.org/abs/0908.0374). M. G[ü]{}naydin, A. Neitzke, B. Pioline, and A. Waldron, “[BPS]{} black holes, quantum attractor flows and automorphic forms,” [*Phys. 
Rev.*]{} [**D73**]{} (2006) 084019, [[hep-th/0512296]{}](http://www.arXiv.org/abs/hep-th/0512296). M. G[ü]{}naydin, A. Neitzke, B. Pioline, and A. Waldron, “[Quantum Attractor Flows]{},” [*JHEP*]{} [**09**]{} (2007) 056, [[0707.0267]{}](http://www.arXiv.org/abs/0707.0267). M. G[ü]{}naydin, A. Neitzke, O. Pavlyk, and B. Pioline, “[Quasi-conformal actions, quaternionic discrete series and twistors: ${SU(2,1)}$ and ${G_{2(2)}}$]{},” [*Commun. Math. Phys.*]{} [**283**]{} (2008) 169–226, [[0707.1669]{}](http://www.arXiv.org/abs/0707.1669). P. Breitenlohner, G. W. Gibbons, and D. Maison, “[F]{}our-dimensional black holes from [K]{}aluza-[K]{}lein theories,” [*Commun. Math. Phys.*]{} [**120**]{} (1988) 295. M. G[ü]{}naydin and O. Pavlyk, “Generalized spacetimes defined by cubic forms and the minimal unitary realizations of their quasiconformal groups,” [ *JHEP*]{} [**08**]{} (2005) 101, [[hep-th/0506010]{}](http://www.arXiv.org/abs/hep-th/0506010). M. G[ü]{}naydin, K. Koepsell, and H. Nicolai, “[T]{}he minimal unitary representation of ${E_{8(8)}}$,” [*Adv. Theor. Math. Phys.*]{} [**5**]{} (2002) 923–946, [[hep-th/0109005]{}](http://www.arXiv.org/abs/hep-th/0109005). M. G[ü]{}naydin and O. Pavlyk, “[M]{}inimal unitary realizations of exceptional [U]{}-duality groups and their subgroups as quasiconformal groups,” [*JHEP*]{} [**01**]{} (2005) 019, [[hep-th/0409272]{}](http://www.arXiv.org/abs/hep-th/0409272). M. G[ü]{}naydin and O. Pavlyk, “A unified approach to the minimal unitary realizations of noncompact groups and supergroups,” [*JHEP*]{} [**09**]{} (2006) 050, [[hep-th/0604077]{}](http://www.arXiv.org/abs/hep-th/0604077). M. Gunaydin and S. J. Hyun, “[Unitary lowest weight representations of the noncompact supergroup $OSp(2n|2m,R)$]{},” [*J. Math. Phys.*]{} [**29**]{} (1988) 2367. M. Gunaydin, “[Unitary highest weight representations of noncompact supergroups]{},” [*J. Math. Phys.*]{} [**29**]{} (1988) 1275–1282. S. Fernando and M. 
Gunaydin, “[Minimal unitary representation of SU(2,2) and its deformations as massless conformal fields and their supersymmetric extensions]{},” [[0908.3624]{}](http://www.arXiv.org/abs/0908.3624). M. Gunaydin, D. Minic, and M. Zagermann, “[4D doubleton conformal theories, CPT and II B string on AdS(5) x S(5)]{},” [*Nucl. Phys.*]{} [**B534**]{} (1998) 96–120, [[hep-th/9806042]{}](http://www.arXiv.org/abs/hep-th/9806042). M. Gunaydin, D. Minic, and M. Zagermann, “[Novel supermultiplets of $SU(2,2|4)$ and the $AdS_5/CFT_4 $ duality]{},” [*Nucl. Phys.*]{} [**B544**]{} (1999) 737–758, [[hep-th/9810226]{}](http://www.arXiv.org/abs/hep-th/9810226). G. Mack and I. Todorov, “[Irreducibility of the ladder representations of U(2,2) when restricted to the Poincare subgroup]{},” [*J. Math. Phys.*]{} [ **10**]{} (1969) 2078–2085. M. Gunaydin and S. Takemae, “[Unitary supermultiplets of $OSp(8*|4)$ and the AdS(7)/CFT(6) duality]{},” [*Nucl. Phys.*]{} [**B578**]{} (2000) 405–448, [[hep-th/9910110]{}](http://www.arXiv.org/abs/hep-th/9910110). S. Fernando, M. Gunaydin, and S. Takemae, “[Supercoherent states of $OSp(8^*|2N)$, conformal superfields and the $AdS_7/CFT_6$ duality]{},” [ *Nucl. Phys.*]{} [**B628**]{} (2002) 79–111, [[hep-th/0106161]{}](http://www.arXiv.org/abs/hep-th/0106161). S. Minwalla, “[Restrictions imposed by superconformal invariance on quantum field theories]{},” [*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 781–846, [[hep-th/9712074]{}](http://www.arXiv.org/abs/hep-th/9712074). V. K. Dobrev, “[Positive energy unitary irreducible representations of D = 6 conformal supersymmetry]{},” [*J. Phys.*]{} [**A35**]{} (2002) 7079–7100, [[hep-th/0201076]{}](http://www.arXiv.org/abs/hep-th/0201076). M. Gunaydin and R. J. Scalise, “[Unitary Lowest Weight Representations of the Noncompact Supergroup $OSp(2m*|2n)$]{},” [*J. Math. Phys.*]{} [**32**]{} (1991) 599–606. W. Nahm, “[Supersymmetries and their representations]{},” [*Nucl. Phys.*]{} [**B135**]{} (1978) 149. M. 
Gunaydin, G. Sierra, and P. K. Townsend, “[The Unitary Supermultiplets of $d = 3$ Anti-de Sitter and $d = 2$ Conformal Superalgebras]{},” [*Nucl. Phys.*]{} [**B274**]{} (1986) 429. V. de Alfaro, S. Fubini, and G. Furlan, “Conformal invariance in quantum mechanics,” [*Nuovo Cim.*]{} [**A34**]{} (1976) 569. J. Casahorran, “[On a novel supersymmetric connection between harmonic and isotonic oscillators]{},” [*Physica A*]{} [**217**]{} (1995) 429–39. DFTUZ-94-28. J. F. Carinena, A. M. Perelomov, M. F. Ranada, and M. Santander, “A quantum exactly solvable non-linear oscillator related with the isotonic oscillator,” 2008. A. Perelomov, [*Generalized coherent states and their applications*]{}. Texts and Monographs in Physics. Springer-Verlag, Berlin, 1986. S. Fernando, M. G[ü]{}naydin, and O. Pavlyk. work in progress. S. Fernando and M. G[ü]{}naydin. work in progress. [^1]: fernando@kutztown.edu [^2]: murat@phys.psu.edu [^3]: K-type decomposition is the decomposition with respect to the maximal compact subgroup. [^4]: As such it belongs to an infinite family of supergroups $OSp(2M^*|2N)$ with even subgroups $SO^*(2M) \times USp(2N)$ whose positive energy unitary representations were studied in [@Gunaydin:1990ag]. [^5]: This is the $SU(1,1)$ subgroup generated by the longest root vector. [^6]: Our convention of Dynkin labels is such that, the fundamental representation $\mathbf{4}$ of $SU(4)$ corresponds to $(1,0,0)$. [^7]: Note that we are using the same symbols for the generators of $SO^*(8)$ considered as a subgroup of $OSp(8^*|2N)$ that now includes contributions from the fermions. [^8]: These representations are sometimes referred to as the ladder representations in the literature.
--- abstract: 'After the observation of non-zero $\theta_{13}$, the goal has shifted to observing $CP$ violation in the leptonic sector. Neutrino oscillation experiments can directly probe the Dirac $CP$ phases. Alternatively, one can measure $CP$ violation in the leptonic sector using the Leptonic Unitarity Quadrangle (LUQ). The existence of Standard Model (SM) gauge singlets - sterile neutrinos - will provide additional sources of $CP$ violation. We investigate the connection between the neutrino survival probability and the rephasing invariants of the $4\times4$ neutrino mixing matrix. In general, an LUQ contains eight geometrical parameters, of which five are independent. We obtain the $CP$ asymmetry ($P_{\nu_f\rightarrow\nu_{f'}}-P_{\bar{\nu}_f\rightarrow\bar{\nu}_{f'}}$) in terms of these independent parameters of the LUQ and search for the possibilities of extracting information on them in short baseline (SBL) and long baseline (LBL) experiments, with a view to constructing the LUQ and possibly measuring $CP$ violation. We find that it is not possible to construct the LUQ using data from LBL experiments, because the $CP$ asymmetry is sensitive to only three of the five independent parameters of the LUQ. However, for SBL experiments, the $CP$ asymmetry is found to be sensitive to all five independent parameters, making it possible to construct the LUQ and measure $CP$ violation.' author: - 'Surender Verma[^1] and Shankita Bhardwaj[^2]' date: | *Department of Physics and Astronomical Science,\ Central University of Himachal Pradesh, Dharamshala 176215, INDIA.* title: Prospects for Reconstruction of Leptonic Unitarity Quadrangle and Neutrino Oscillation Experiments --- Introduction ============ The observation of non-zero $\theta_{13}$ [@dc; @db; @reno] has conclusively established the three-flavor oscillation picture and provides an opportunity for the possible measurement of i) $CP$ violation in the leptonic sector and ii) the neutrino mass hierarchy. 
$CP$ violation is essentially a three-flavor effect [@cp] and is attributed to the non-trivial phases of the neutrino mixing matrix. In general, for the three-generation case, the neutrino mixing matrix contains three phases: one Dirac-type $CP$-violating phase and two Majorana-type $CP$-violating phases. However, only the Dirac phase manifests itself in neutrino oscillation experiments. Looking at the current and future neutrino facilities, which are either at the planning or the data-acquisition stage, the measurement of $CP$ violation is not beyond realization [@t2k; @minos]. The current experimental data on neutrino masses and mixings can be explained within the paradigm of three active neutrino isodoublets. However, reconciliation with the short baseline (SBL) anomalies such as LSND, MiniBooNE and Gallium [@lsnd; @mb; @mb1; @gm; @tm; @df] requires the introduction of Standard Model (SM) gauge singlet(s) - sterile neutrino(s) - because they involve the mass-squared difference $\Delta m^{2}_{\text{SBL}}\gg\Delta m_{\text{Atm}}^{2}\gg\Delta m_{\text{Solar}}^{2}$. The possible existence of SM gauge singlet fermions is an attractive extension in our quest to understand fundamental physics, including the origin of non-zero neutrino masses and the dark matter puzzle. In the presence of sterile neutrinos, the standard three-neutrino picture must be enlarged to accommodate additional mass eigenstates having non-zero mixing with the three active flavors. There will also be additional sources of $CP$ violation in the presence of these SM gauge singlets, so it is important to study the prospects of detecting them. In general, $CP$ violation can be studied in two ways: one can directly measure the $CP$-violating phase in neutrino oscillation experiments [@t2k; @minos], or one can construct the leptonic unitarity triangle (LUT)/quadrangle (LUQ) [@he; @xing; @xing1]. In the present work, we have followed the second approach. 
In four-neutrino mixing models, $CP$ symmetry is generally expected to be violated, the violation being attributed to the non-trivial complex phases in the $4\times 4$ neutrino mixing matrix. Neutrino oscillation experiments play a crucial role in studying the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix and in directly measuring $CP$ violation, $P_{\nu_f\rightarrow\nu_{f'}}-P_{\bar{\nu}_f\rightarrow\bar{\nu}_{f'}}\neq 0$, in the leptonic sector. Alternatively, in order to measure $CP$ violation in a rephasing-invariant manner using the Leptonic Unitarity Quadrangle (LUQ), one has to construct rephasing invariants from the $4\times4$ neutrino mixing matrix $V$, given by $J_{ff'}^{ij}\equiv \Im\left(V_{fi}V_{f'j}V_{fj}^{*}V_{f'i}^{*}\right)$ [@cj], where $(i,j)=0,1,2,3$ and $(f,f')=s,e,\mu,\tau$. In Sec. II, we present the connection between the neutrino survival probability and the rephasing invariants of the $4\times4$ neutrino mixing matrix. In general, the LUQ contains eight geometric parameters, of which five are independent. In Sec. III, we present the $CP$ asymmetry, $P_{\nu_f\rightarrow\nu_{f'}}-P_{\bar{\nu}_f\rightarrow\bar{\nu}_{f'}}$, in terms of these independent parameters of the LUQ and search for the possibilities of extracting information on them in short baseline (SBL) and long baseline (LBL) experiments, with a view to constructing the LUQ and possibly measuring $CP$ violation. In Sec. IV, we draw our conclusions. 
Connecting Leptonic Unitarity Quadrangle to Mixing Matrix ========================================================= In four neutrino mixing, the flavor eigenstates ($\nu_f$, $f=s,e,\mu,\tau$) and mass eigenstates ($\nu_j$, $j=0,1,2,3$) are connected through the $4\times4$ unitary matrix $V$ as $$\centering \begin{pmatrix} \nu_{s}\\ \nu_{e}\\ \nu_{\mu}\\ \nu_{\tau} \end{pmatrix} = \begin{pmatrix} V_{s0} & V_{s1} &V_{s2} & V_{s3}\\ V_{e0} & V_{e1} &V_{e2} & V_{e3}\\ V_{\mu0} & V_{\mu1} & V_{\mu2} & V_{\mu3}\\ V_{\tau0} & V_{\tau1} & V_{\tau2} &V_{\tau3}\\ \end{pmatrix} \begin{pmatrix} \nu_{0}\\ \nu_{1}\\ \nu_{2}\\ \nu_{3} \end{pmatrix}.$$ The unitarity of the mixing matrix $V$ ($V^{\dagger}V=VV^{\dagger}=1$) provides eight normalization and twelve orthogonality relations, which correspond to twelve unitarity quadrangles in the complex plane. Now, let us consider a flavor state $|\nu_f\rangle$ that converts to $|\nu_{f'}\rangle$ after travelling a distance $L$ km. The vacuum transition probability for this conversion is given by [@xing; @smb] $$P_{\nu_f\rightarrow \nu_{f'}}=\delta_{ff'}-4\sum_{i<j}^{}\left(\Re\left(V_{fi}V_{f'j}V_{fj}^{*}V_{f'i}^{*}\right)\sin^2\left(X_{ij}\right)\right) +2\sum_{i<j}^{}\left(J_{ff'}^{ij}\sin\left(2X_{ij}\right)\right),$$ where $X_{ij}\equiv 1.27\Delta m_{ij}^{2}L/E$ with $\Delta m_{ij}^{2}\equiv m_j^{2}-m_i^{2}$, $L$ is the baseline length, and $E$ is the neutrino beam energy. The vacuum transition probability for antineutrinos can be obtained directly from Eqn.(2) by replacing $J_{ff'}^{ij}\rightarrow-J_{ff'}^{ij}$. The flavor transition can be attributed to a change in the phase shift [@smb; @jb] of the transition probability by the phase angle $\lambda_{f'f;ij}$, defined as $$\lambda_{f'f;ij}=\arg\left(V_{f'i}V_{fi}^{*}V_{fj}V_{f'j}^{*}\right),$$ with $\lambda_{f'f;ij}=-\lambda_{ff';ij}=-\lambda_{f'f;ji}$ and $X_{ij}=-X_{ji}$. 
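As a sanity check, Eqn.(2) can be verified numerically: it is precisely the expansion of the squared evolution amplitude $|\sum_j V_{f'j}\,e^{-i\phi_j}\,V_{fj}^{*}|^2$ with $2X_{ij}=\phi_j-\phi_i$. The sketch below (the random matrix, the phase vector and both helper functions are our own illustrative constructions, not taken from the text) compares the two forms for a randomly generated $4\times4$ unitary matrix.

```python
import numpy as np

rng = np.random.default_rng(7)

# A random 4x4 unitary "mixing matrix" V, built from the QR decomposition
# of a random complex matrix; rows label flavors, columns mass eigenstates.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
V, _ = np.linalg.qr(M)

# Per-eigenstate phases phi_j; the oscillation arguments are
# X_ij = (phi_j - phi_i)/2, mimicking X_ij = 1.27 Dm2_ij L/E.
phi = rng.uniform(0.0, 10.0, size=4)
X = 0.5 * (phi[None, :] - phi[:, None])

def prob_amplitude(f, fp):
    """P(nu_f -> nu_f') from the evolved amplitude."""
    amp = np.sum(V[fp, :] * np.exp(-1j * phi) * np.conj(V[f, :]))
    return abs(amp) ** 2

def prob_invariants(f, fp):
    """The same probability from Eqn.(2), via the quartic invariants of V."""
    P = 1.0 if f == fp else 0.0
    for i in range(4):
        for j in range(i + 1, 4):
            Q = V[f, i] * V[fp, j] * np.conj(V[f, j]) * np.conj(V[fp, i])
            P += -4.0 * Q.real * np.sin(X[i, j]) ** 2 \
                 + 2.0 * Q.imag * np.sin(2.0 * X[i, j])
    return P

for f in range(4):
    # Unitarity: probabilities over all final flavors sum to one.
    assert np.isclose(sum(prob_amplitude(f, fp) for fp in range(4)), 1.0)
    for fp in range(4):
        assert np.isclose(prob_amplitude(f, fp), prob_invariants(f, fp))
```

The agreement holds for any unitary $V$ and any set of phases, which is why Eqn.(2) remains valid for an arbitrary number of neutrino species once the index ranges are adjusted.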
In terms of the phase angle, Eqn.(2) can be written as $$\begin{aligned} \nonumber P_{\nu_f\rightarrow \nu_{f'}}= &\delta_{ff'}&-4\sum_{i<j}^{}\left(\Re\left(V_{fi}V_{f'j}V_{fj}^{*}V_{f'i}^{*}\right)\sin^2\left(X_{ij}-\lambda_{f'f;ij}\right)\right)\\ &+&2\sum_{i<j}^{}\left(J_{ff'}^{ij}\sin(2X_{ij}-\lambda_{f'f;ij})\right)\end{aligned}$$ for neutrinos and $$\begin{aligned} \nonumber P_{\bar\nu_f\rightarrow \bar\nu_{f'}}=&\delta_{ff'}&-4\sum_{i<j}^{}\left(\Re\left(V_{fi}V_{f'j}V_{fj}^{*}V_{f'i}^{*}\right)\sin^2\left(X_{ij}+\lambda_{f'f;ij}\right)\right)\\ &-&2\sum_{i<j}^{}\left(J_{ff'}^{ij}\sin\left(2X_{ij}+\lambda_{f'f;ij}\right)\right)\end{aligned}$$ for antineutrinos.\ Eqn.(4) and Eqn.(5) are the oscillation probabilities for $f\neq f'$. We can also investigate the dependence of the survival probability, $\nu_f\rightarrow\nu_f$ ($\bar\nu_f\rightarrow\bar\nu_f$), on the parameters of the leptonic unitarity quadrangle by taking $f=f'$ and phase shift $\lambda_{f'f;ij}=0$ and calculating the disappearance probability. The survival probability $(f=f')$ depends on four parameters, viz. $$\left( a_{s}, b_{s}, c_{s}, d_{s}\right)\equiv \left(|V_{f0}|^{2},|V_{f1}|^{2},|V_{f2}|^{2},|V_{f3}|^{2}\right),$$ which obey the unitarity constraint of the mixing matrix $V$. Hence, for $f=f'$, Eqn.(4) and Eqn.(5) contain only three independent degrees of freedom among all nine parameters in the mixing matrix $V$. The disappearance probability can be expressed in terms of the three independent parameters $(a_{s},b_{s},c_{s})$ as $$\begin{aligned} \nonumber P_{dis}=& &1-P_{\nu_{f}\rightarrow \nu_{f}}=4\sum_{i<j}\left(\Re \left(|V_{fi}|^2|V_{fj}|^{2}\right)\sin^{2}\left(X_{ij}\right)\right)-2\sum_{i<j}\left(J_{ff}^{ij}\sin\left(2X_{ij}\right)\right),\\ \nonumber &=&4\Bigl(a_{s}b_{s}\sin^2\left(X_{01}\right)+b_sc_s\sin^2\left(X_{12}\right)\\ &+&\left(1-a_s-b_s-c_s\right)\left(c_s\sin^2\left(X_{23}\right)+a_s\sin^2\left(X_{03}\right)\right)\Bigr). 
\end{aligned}$$ From Eqn.(7), it is clear that disappearance oscillation experiments cannot provide information on all the geometric parameters of the LUQ. So, in order to investigate $CP$ violation in a rephasing-invariant way, we have to consider neutrino flavor oscillations with non-trivial phase shifts, i.e., $\lambda_{f'f;ij}\neq 0$. For this reason, we consider appearance oscillation probabilities to possibly determine the LUQ parameters. The orthogonality relation $$V_{f0}V_{f'0}^{*}+V_{f1}V_{f'1}^{*}+V_{f2}V_{f'2}^{*}+V_{f3}V_{f'3}^{*}=0,$$ can be viewed as a quadrangle in the complex plane, shown in Fig.(1). Its four sides are $$\left(a,b,c,d\right)=\left(|V_{f0}V_{f'0}^{*}|,|V_{f1}V_{f'1}^{*}|,|V_{f2}V_{f'2}^{*}|,|V_{f3}V_{f'3}^{*}|\right),$$ and its angles can be expressed as $$\begin{aligned} \nonumber \alpha=\arg\left(-\frac{V_{f0}V_{f'0}^{*}}{V_{f1}V_{f'1}^{*}}\right), \beta=\arg\left(-\frac{V_{f1}V_{f'1}^{*}}{V_{f2}V_{f'2}^{*}}\right),\\ \gamma=\arg\left(-\frac{V_{f2}V_{f'2}^{*}}{V_{f3}V_{f'3}^{*}}\right), \delta=\arg\left(-\frac{V_{f3}V_{f'3}^{*}}{V_{f0}V_{f'0}^{*}}\right).\end{aligned}$$ From these relations we can write $$\alpha=\pi-\lambda_{f'f;01},\beta=\pi-\lambda_{f'f;12},\gamma=\pi-\lambda_{f'f;23},\delta=\pi-\lambda_{f'f;03}.$$ Using Eqn.(9) and Eqn.(11), we can write the oscillation probabilities (Eqns.(4)-(5)) in terms of the geometrical parameters of the LUQ for neutrinos and antineutrinos as $$\begin{aligned} \nonumber P=&&a^{2}+b^{2}+c^{2}+d^{2}-2ab\cos\left(2X_{01}\pm\alpha\right)-2bc\cos\left(2X_{12}\pm\beta\right)\\ &&-2cd\cos\left(2X_{23}\pm\gamma\right)-2ad\cos\left(2X_{03}\pm\delta\right), \end{aligned}$$ where the upper (lower) sign in “$\pm$” corresponds to neutrino (antineutrino) oscillations, with $P=P_{\nu_f \rightarrow \nu_{f'}}$ for neutrinos and $P=P_{\bar\nu_f \rightarrow \bar\nu_{f'}}$ for antineutrinos. 
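Eqns.(8)-(11) lend themselves to a direct numerical check: for any unitary $V$, the four complex sides $V_{fi}V_{f'i}^{*}$ close into a quadrangle, the four angles sum to $2\pi$ modulo $2\pi$, and each angle equals $\pi$ minus the corresponding phase angle $\lambda$. A minimal sketch, with our own illustrative variable names:

```python
import numpy as np

rng = np.random.default_rng(11)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
V, _ = np.linalg.qr(M)           # random 4x4 unitary mixing matrix

f, fp = 1, 2                     # two distinct flavor rows
z = V[f, :] * np.conj(V[fp, :])  # the four complex sides V_fi V_f'i^*

# Row orthogonality (Eqn.(8)): the sides close into a quadrangle.
assert abs(z.sum()) < 1e-12

# Side lengths (a, b, c, d) of Eqn.(9) and angles of Eqn.(10).
a, b, c, d = np.abs(z)
angles = [np.angle(-z[i] / z[(i + 1) % 4]) for i in range(4)]
alpha, beta, gamma, delta = angles

# The angles sum to 2*pi modulo 2*pi.
assert abs(np.exp(1j * sum(angles)) - 1.0) < 1e-12

# Eqn.(11): alpha = pi - lambda_{f'f;01}, modulo 2*pi.
lam01 = np.angle(V[fp, 0] * np.conj(V[f, 0]) * V[f, 1] * np.conj(V[fp, 1]))
assert abs(np.exp(1j * alpha) - np.exp(1j * (np.pi - lam01))) < 1e-12
```

The comparisons are done through complex exponentials rather than the angles themselves so that branch-cut ambiguities of the principal value of $\arg$ do not matter.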
Taking into account the fact that $P_{\nu_{f}\rightarrow \nu_{f'}}(L=0)=0$, because neutrinos from the source do not have sufficient time to oscillate, we can write Eqn.(12) as $$\begin{aligned} \nonumber P=&&4ab \sin\left(X_{01}\pm\alpha\right)\sin X_{01}+4bc\sin\left(X_{12}\pm\beta\right)\sin X_{12}\\ &&+4cd\sin\left(X_{23}\pm\gamma\right)\sin X_{23}+4ad\sin\left(X_{03}\pm\delta\right)\sin X_{03}.\end{aligned}$$ Eqns.(12)-(13) show the connection of the geometrical parameters $\left(a, b, c, d, \alpha,\beta,\gamma,\delta\right)$ of the LUQ to the oscillation probabilities of four neutrino mixing. Here, the geometrical parameters $(\alpha,\beta,\gamma,\delta)$ play the role of phase shifts in neutrino flavor oscillations. $CP$ asymmetry in terms of independent parameters of LUQ ======================================================== In order to uniquely determine the LUQ, we choose two sides and three angles, viz. $\left(b,c,\alpha,\beta,\gamma\right)$, as the five independent geometrical parameters. Then, all other parameters of the LUQ can be expressed in terms of these five independent parameters. From Fig.(1), we find that $$\alpha+\beta+\gamma+\delta=2\pi, \delta=2\pi-\sigma,$$ and $$a=-q\sin\left(\gamma-\gamma_2\right)\csc\sigma, d=-q\sin\left(\alpha-\alpha_2\right)\csc\sigma,$$ where $q=\sqrt{b^2+c^2-2bc\cos\beta}, \gamma_2=\sin^{-1}\left(\frac{b\sin\beta}{q}\right)$, $\alpha_2=\sin^{-1}\left(\frac{c\sin\beta}{q}\right)$ and $\sigma\equiv \alpha+\beta+\gamma$. We follow the parametrization of the mixing matrix $V$ from [@para] and find the independent parameters of the LUQ, viz. $b, c, \alpha, \beta, \gamma$, in terms of the mixing angles ($\theta_{ij}$) and $CP$-violating phases ($\delta_{ij}$). In general, for $n$ generations, the mixing matrix $V$ can be parametrized in terms of $n_{\theta}=\frac{n(n-1)}{2}$ angles and $n_{\delta}=\frac{n(n+1)}{2}$ phases. 
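The parameter counting just quoted is easy to tabulate; the helper below is purely illustrative (the function name and the inclusion of the physical Dirac and Majorana phase counts, which follow from rephasing the lepton fields, are our additions) and reproduces $n_{\theta}=6$ angles and three Dirac phases for $n=4$.

```python
def mixing_parameters(n):
    """Parameter counts for an n x n unitary lepton mixing matrix."""
    angles = n * (n - 1) // 2        # n_theta
    phases_total = n * (n + 1) // 2  # n_delta, before rephasing the fields
    dirac = (n - 1) * (n - 2) // 2   # physical Dirac CP phases
    majorana = n - 1                 # Majorana phases (absent from oscillations)
    return angles, phases_total, dirac, majorana

assert mixing_parameters(3) == (3, 6, 1, 2)   # three-flavor PMNS case
assert mixing_parameters(4) == (6, 10, 3, 3)  # 3+1 (one sterile) case
```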
However, the number of physical phases characterizing the mixing matrix is smaller than $n_{\delta}$, because the mixing matrix enters the charged current together with the fields of the charged leptons and neutrinos. We ignore the Majorana-type $CP$-violating phases, as they do not manifest themselves in the oscillation probabilities. The number of Dirac phases in the mixing matrix is $n_{\delta}^{D}=\frac{(n-1)(n-2)}{2}$. For $n=4$, $n_{\delta}^{D}=3$. Hence, we write $$\begin{aligned} V=V_{23}\tilde{V}_{13}V_{03}V_{12}\tilde{V}_{02}\tilde{V}_{01},\end{aligned}$$ where $\tilde{V}_{13}, \tilde{V}_{02}, \tilde{V}_{01}$ ($V_{23}, V_{03}, V_{12}$) are complex (real) elements. We can write the independent parameters $b, c, \alpha, \beta, \gamma$ in terms of the mixing angles and $CP$-violating phases as $$\begin{aligned} b=&&|s_{01}^2 \left(c_{13}s_{02}s_{12}+c_{02}s_{03}s_{13} e^{i (\delta_{02}+\delta_{13})}\right)\\ && \left(c_{02}c_{13}s_{03}s_{23}+e^{i \delta_{02}}s_{02} \left(c_{12}c_{23}-e^{i \delta_{13}}s_{12}s_{13} s_{23}\right)\right)|, \nonumber \\ c=&&|\left(-c_{13}s_{02}s_{03}s_{23}+e^{i \delta_{02}} \left(c_{23} (c_{02}c_{12}-s_{12})-e^{i \delta_{13}}s_{13} s_{23} (c_{02}s_{12}+c_{12})\right)\right) \\ &&\left(c_{02}c_{13}s_{12} +c_{12}c_{13}-s_{02}s_{03}s_{13} e^{i (\delta_{02}+\delta_{13})}\right)|, \nonumber \\ \alpha=&&\arg\left[-\frac{c_{01}^2}{s_{01}^2}\right],\\ \nonumber \beta=&&\arg\Bigl[s_{01}^2 \Bigl(c_{13} s_{02} s_{12} + c_{02} e^{i(\delta_{02} + \delta_{13})} s_{03} s_{13}\Bigr) \Bigl(c_{02} c_{13} s_{03} s_{23}+ e^{i \delta_{02}} s_{02}\\ \nonumber &&\left(c_{12} c_{23}-e^{i \delta_{13}} s_{12} s_{13} s_{23}\right)\Bigr)\Bigr]-\arg\Bigl[\Bigl(c_{12} c_{13} + c_{02} c_{13} s_{12} - e^{i (\delta_{02} + \delta_{13})} s_{02} s_{03} s_{13}\Bigr)\\ &&\Bigl(c_{13} s_{02} s_{03} s_{23} + e^{i \delta_{02}} \left(-c_{02} c_{12} c_{23} + c_{23} s_{12} + e^{i \delta_{13}} \left(c_{12} + c_{02} s_{12}\right) s_{13} s_{23}\right)\Bigr)\Bigr],\\ \nonumber 
\gamma=&&\arg\Bigl[-e^{-i \left(\delta_{02} + \delta_{13}\right)} \Bigl(-c_{13} \left(c_{12} + c_{02} s_{12}\right) + e^{i\left(\delta_{02} + \delta_{13}\right)} s_{02} s_{03} s_{13}\Bigr) \Bigl(c_{13} s_{02} s_{03} s_{23}\\ \nonumber &&+ e^{i \delta_{02}} \left(-c_{02} c_{12} c_{23} + c_{23} s_{12} + e^{i \delta_{13}} (c_{12} + c_{02} s_{12}) s_{13} s_{23}\right)\Bigr)\Bigr]-\arg\Bigl[c_{03}^2 c_{13} s_{13} s_{23}\Bigr], \\ \end{aligned}$$ where $s_{ij}=\sin\theta_{ij}$ and $c_{ij}=\cos\theta_{ij}$. Using Eqns.(14)-(15) we can write Eqn.(12) in terms of the five independent parameters $(b,c,\alpha,\beta,\gamma)$. The oscillation probability can be written as $$\begin{aligned} \nonumber P=&&q^{2}\sin^{2}\left(\gamma-\gamma_{2}\right)\csc^{2}\sigma+b^{2}+c^{2}+q^{2}\sin^{2}\left(\alpha-\alpha_{2}\right)\csc^{2}\sigma\\ \nonumber &&+2qb\sin\left(\gamma-\gamma_{2}\right)\csc\sigma\cos\left(2X_{01}\pm\alpha\right)\\ \nonumber &&-2bc\cos\left(2X_{12}\pm\beta\right)\\ \nonumber &&+2cq\sin\left(\alpha-\alpha_{2}\right)\csc\sigma\cos\left(2X_{23}\pm\gamma\right)\\ &&-2q^{2}\sin\left(\gamma-\gamma_{2}\right)\sin\left(\alpha-\alpha_{2}\right)\csc^{2}\sigma\cos\left(2X_{03}\mp\sigma\right).\end{aligned}$$ For the three-generation case, assuming the additional mixing angles to be extremely small, the oscillation probability obtained in Eqn.(22) can be written in terms of three independent geometric parameters, $c',\alpha'$ and $\gamma'$, of the unitarity triangle shown in Fig.(2).
Under this approximation and using Eqns.(17)-(21), the expression for the oscillation probability (Eqn.(22)) can be written as $$\begin{aligned} \nonumber P_{3\nu}=&&\frac{c'^{2}}{\sin^{2}\left(\alpha'+\gamma'\right)}\Bigl(\sin^{2}\gamma'+\sin^{2}\left(\alpha'+\gamma'\right)+\sin^{2}\alpha' +2\sin\alpha'\sin\left(\alpha'+\gamma'\right)\cos(2X_{23}\pm\gamma')\\ &&-2\sin\alpha'\sin\gamma'\cos\left(2X_{03}\mp\left(\alpha'+\gamma'\right)\right)\Bigr),\end{aligned}$$ where $P_{3\nu}$ is the oscillation probability in the three-generation case with three independent geometric parameters, i.e. one side and two angles ($c',\alpha'$ and $\gamma'$) of the unitarity triangle, which is the same as in Ref.[@he]. Eqn.(22) provides the oscillation probability in terms of the five independent geometric parameters of the LUQ. From current neutrino oscillation data [@garcia], $|\Delta m_{23}^2|=2.45\times10^{-3} eV^2$, $\Delta m_{12}^2=7.50\times10^{-5} eV^2$, and considering the case of $E/L\thicksim\Delta m_{12}^2$ we find that $X_{12}$ is $\mathcal{O}(1)$ and $X_{01}, X_{23}, X_{03}\gg1$. Thus, the oscillations induced by the oscillation frequencies $X_{23}, X_{01}$ and $X_{03}$ will be averaged out due to integration over the neutrino production region and the energy resolution function. However, the possibility of exploring a possible $CP$ asymmetry is still open because the combination of results from different oscillation experiments can be used to indirectly measure $CP$ violation effects in a $1+3$ scenario. Thus, we can write the oscillation probability as $$\begin{aligned} \nonumber P=&& q^{2}\sin^{2}\left(\gamma-\gamma_{2}\right)\csc^{2}\sigma+b^{2}+c^{2}+q^{2}\sin^{2}\left(\alpha-\alpha_{2}\right)\csc^{2}\sigma\\ &&-2bc\cos\left(2X_{12}\pm\beta\right). \end{aligned}$$ The $CP$ asymmetry can be written as $$\begin{aligned} \Delta P=&&4bc\sin\left(2X_{12}\right)\sin\beta.
\end{aligned}$$ A realistic method to infer the parameters $b, c, \alpha, \beta, \gamma$ of the LUQ is to measure the distortion of the neutrino energy spectrum. The current neutrino oscillation experiments such as MINOS (with $E/L\thicksim8\times10^{-4} eV^2$) [@minos] and NO$\nu$A (with $E/L\thicksim5\times10^{-4} eV^2$) [@nova] are insensitive to the oscillation frequency $X_{12}$. However, the condition $E/L\thicksim\Delta m_{12}^2$ can be realized in future neutrino factories [@nufact] with $L=(2000-7500)$ km and $E=(1-10)$ GeV. Even so, one can obtain information only on three ($b, c, \beta$) of the five independent geometrical parameters ($b, c, \alpha, \beta, \gamma$) of the LUQ, making it impossible to construct the LUQ and to measure the $CP$ asymmetry. In general, matter effects become important in long baseline experiments. The oscillation probability (Eqn.(22)) will get modified due to the interaction of neutrinos with matter. In particular, we note that in Eqn.(22) the independent parameters of the LUQ are associated with different oscillating terms. These frequencies ($X_{01}, X_{12}, X_{23}, X_{03}$) will behave differently for a sufficiently large baseline in high precision long baseline experiments, in which case the oscillation probability (Eqn.(24)) will depend on all five independent geometric parameters of the LUQ. Thus, it is possible, in principle, to extract information on all five independent geometric parameters of the LUQ in the presence of terrestrial matter effects. However, in current LBL experiments such as MINOS and NO$\nu$A, matter effects are insignificant [@he] and the expression derived for the oscillation probability (Eqn.(24)) is still valid. Let us now take the case of short baseline neutrino oscillation experiments, with neutrino energy $E=\mathcal{O}(1\ GeV)$ and $L=\mathcal{O}(1\ km)$ ($E/L\thicksim 0.1\ eV^2$ such that $|X_{01}|\thicksim |X_{02}|\thicksim |X_{03}|\thicksim 1$).
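Before turning to the short baseline case, note that the averaged probability of Eqn.(24) and the asymmetry of Eqn.(25) are straightforward to evaluate numerically. The sketch below is ours (function names, the mapping of `sign` to neutrinos versus antineutrinos, and the sample values are assumptions, not from the text); it verifies that the difference of Eqn.(24) between the two sign choices reproduces Eqn.(25):

```python
import math

def averaged_prob(b, c, alpha, beta, gamma, X12, sign=+1):
    """Oscillation probability of Eqn.(24), after the fast frequencies
    X01, X23, X03 are averaged out; sign selects the +/- branch of Eqn.(24)."""
    sigma = alpha + beta + gamma
    q = math.sqrt(b**2 + c**2 - 2 * b * c * math.cos(beta))
    gamma2 = math.asin(b * math.sin(beta) / q)
    alpha2 = math.asin(c * math.sin(beta) / q)
    csc2 = 1.0 / math.sin(sigma)**2
    return (q**2 * math.sin(gamma - gamma2)**2 * csc2
            + b**2 + c**2
            + q**2 * math.sin(alpha - alpha2)**2 * csc2
            - 2 * b * c * math.cos(2 * X12 + sign * beta))

def cp_asymmetry(b, c, beta, X12):
    """Delta P of Eqn.(25): the difference of Eqn.(24) between the two signs."""
    return 4 * b * c * math.sin(2 * X12) * math.sin(beta)
```

This makes explicit that the averaged asymmetry depends only on $b$, $c$ and $\beta$, which is why long baseline data alone cannot fix all five parameters.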
If the interpretation of the short baseline anomalies based on neutrino oscillations is correct, then we can neglect the oscillations due to the frequencies $X_{12}, X_{23}$ as their contributions will be small. Under these approximations, Eqn.(22) can be written as $$\begin{aligned} \nonumber P=&&q^{2}\sin^{2}\left(\gamma-\gamma_{2}\right)\csc^{2}\sigma+b^{2}+c^{2}+q^{2}\sin^{2}\left(\alpha-\alpha_{2}\right)\csc^{2}\sigma\\ \nonumber &&+2qb\sin\left(\gamma-\gamma_{2}\right)\csc\sigma\cos\left(2X_{01}\pm\alpha\right)\\ &&-2q^{2}\sin\left(\gamma-\gamma_{2}\right)\sin\left(\alpha-\alpha_{2}\right)\csc^{2}\sigma\cos\left(2X_{03}\mp\sigma\right),\end{aligned}$$ and the $CP$ asymmetry as $$\begin{aligned} \nonumber \Delta P=&&-4qb\sin\left(\gamma-\gamma_{2}\right)\csc\sigma\sin\left(2X_{01}\right)\sin\alpha\\ &&-4q^{2}\sin\left(\gamma-\gamma_{2}\right)\sin\left(\alpha-\alpha_{2}\right)\csc\sigma\sin\left(2X_{03}\right).\end{aligned}$$ The $CP$ asymmetry is sensitive to all five independent parameters, viz. $b, c, \alpha, \beta, \gamma$, of the LUQ. Thus, next generation short baseline experiments provide a unique opportunity to construct the leptonic unitarity quadrangle and to measure $CP$ violation. Such an opportunity would not exist if we tried to directly measure the $CP$ phases under the aforementioned approximations, because the information on the $CP$ phases would be lost as the new oscillations are either averaged out or small. Conclusions =========== In summary, we have investigated the connection between the neutrino survival probability and the rephasing invariants of the $4\times4$ neutrino mixing matrix. The $CP$ asymmetry can be measured in two ways. The first is to directly measure the $CP$-violating phases in oscillation experiments; the second is to extract information about the parameters of the LUQ from oscillation experiments and to construct it. There exist five independent parameters ($b, c, \alpha, \beta, \gamma$) of the LUQ.
We obtain the relation between the oscillation probability, the $CP$ asymmetry and the independent parameters of the LUQ. We have also studied the prospects of measuring these parameters and the possible measurement of the $CP$ asymmetry in future long baseline experiments, neutrino factories and short baseline experiments. We find that it is not possible to fully construct the LUQ using data from long baseline experiments because the $CP$ asymmetry is sensitive to only three of the five independent parameters of the LUQ. However, we expect that matter effects become important in long baseline experiments and the oscillation probability (Eqn.(22)) will get modified due to the interaction of neutrinos with matter. The frequencies ($X_{01}, X_{12}, X_{23}, X_{03}$) will behave differently for a sufficiently large baseline in high precision long baseline experiments, in which case the oscillation probability (Eqn.(24)) will depend on all five independent geometric parameters of the LUQ. Thus, it is possible, in principle, to extract information on all five independent geometric parameters of the LUQ in the presence of matter effects. Also, using data from short baseline experiments, we can construct the LUQ and subsequently measure $CP$ violation because the $CP$ asymmetry depends on all five independent geometric parameters of the LUQ. ****\ S. V. acknowledges the financial support provided by University Grants Commission (UGC)-Basic Science Research (BSR), Government of India vide Grant No. F.20-2(03)/2013(BSR). S. B. acknowledges the financial support provided by the Central University of Himachal Pradesh. [50.]{} Y. Abe *et al.* (Double Chooz Collaboration), Phys. Rev. Lett. **108**, 131801 (2012); J. High Energy Phys. 10 (2014) 086. F. P. An *et al.* (Daya Bay Collaboration), Phys. Rev. Lett. **108**, 171803 (2012); Phys. Rev. D **90**, 071101(R) (2014). J. K. Ahn *et al.* (RENO Collaboration), Phys. Rev. Lett. **108**, 191802 (2012); **112**, 061801 (2014). N. Cabibbo, Phys. Lett. B **72**, 333 (1978). K. Abe *et al.* (T2K Collaboration), Phys. Rev. Lett. **112**, 061802 (2014). P. Adamson *et al.* (MINOS Collaboration), Phys. Rev. Lett. **110**, 171801 (2013), and references therein. A. Aguilar-Arevalo *et al.* (LSND Collaboration), Phys. Rev. D **64**, 112007 (2001). A. Aguilar-Arevalo *et al.* (MiniBooNE Collaboration), Phys. Rev. Lett. **102**, 101802 (2009). A. Aguilar-Arevalo *et al.* (MiniBooNE Collaboration), Phys. Rev. Lett. **110**, 161801 (2013). G. Mention *et al.*, Phys. Rev. D **83**, 073006 (2011). T. Mueller *et al.*, Phys. Rev. C **83**, 054615 (2011). D. Frekers *et al.*, Phys. Lett. B **706**, 134 (2011). Hong-Jian He and Xun-Jie Xu, Phys. Rev. D **89**, 073002 (2014). Wan-lei Guo and Zhi-zhong Xing, Phys. Rev. D **65**, 073020 (2002). Wan-lei Guo and Zhi-zhong Xing, Phys. Rev. D **66**, 097302 (2002). C. Jarlskog, Phys. Rev. Lett. **55**, 1039 (1985). S. M. Bilenky and S. T. Petcov, Rev. Mod. Phys. **59**, 671 (1987), and references therein. J. Beringer *et al.* (Particle Data Group), Phys. Rev. D **86**, 010001 (2012), and references therein. H. Fritzsch and J. Plankl, Phys. Rev. D **35**, 1732 (1987). M. Gonzalez-Garcia, M. Maltoni and T. Schwetz, J. High Energy Phys. **1411**, 052 (2014). J. Bian (NO$\nu$A Collaboration), arXiv: 1309.7898; D. S. Ayres *et al.* (NO$\nu$A Collaboration), NOVA-doc-593, and references therein. R. J. Abrams *et al.* (IDS-NF Collaboration), arXiv: 1112.2853, and references therein. [^1]: Electronic address: s\_7verma@yahoo.co.in [^2]: Electronic address: shankita.bhardwaj982@gmail.com
--- abstract: 'A class of generalized Randall-Sundrum type II (RS) brane-world models with Weyl fluid is confronted with the Gold supernovae data set and BBN constraints. We consider three models with different evolutionary histories of the Weyl fluid, characterized by the parameter $\alpha$. For $\alpha =0$ the Weyl curvature of the bulk appears as dark radiation on the brane, while for $\alpha =2$ and $3$ the brane radiates, leaving a Weyl fluid on the brane with energy density decreasing more slowly than that of (dark) matter. In each case the contribution $\Omega_d$ of the Weyl fluid represents but a few percent of the energy content of the Universe. All models fit the Gold2006 data reasonably well. The best fit model for $\alpha =0$ is for $\Omega_d=0.04$. In order to obey BBN constraints in this model, however, the brane had to radiate at earlier times.' author: - 'László Á. Gergely' - Zoltán Keresztes - 'Gyula M. Szabó' title: 'Cosmological tests of generalized RS brane-worlds with Weyl fluid' --- [ address=[Departments of Theoretical and Experimental Physics, University of Szeged, Dóm tér 9, Szeged 6720, Hungary]{}, email=[gergely@physx.u-szeged.hu]{}, ]{} [ address=[Departments of Theoretical and Experimental Physics, University of Szeged, Dóm tér 9, Szeged 6720, Hungary]{}, email=[zkeresztes@titan.physx.u-szeged.hu]{}, ]{} [ address=[Departments of Theoretical and Experimental Physics, University of Szeged, Dóm tér 9, Szeged 6720, Hungary]{}, email=[szgy@titan.physx.u-szeged.hu]{}, ]{} Introduction ============ The $\Lambda $CDM model according to which our Universe is a Friedmann-Lemaitre-Robertson-Walker (FLRW) space-time with flat spatial sections containing approximately 3% baryonic matter, 24% cold dark matter, the rest being given by the contribution of a cosmological
constant $\Lambda$ seems to be in excellent agreement with current observational data. As the dark sector (dark matter and dark energy - a generalization of the vacuum energy represented by the cosmological constant) remains unknown, alternative gravitational theories have been advanced. The string-theory motivated brane-world models contain our observable Universe as a time-evolving 3-dimensional brane embedded in a 5-dimensional bulk. Standard model fields act on the brane, but gravity is allowed to leak into the fifth dimension [@RS2], where non-standard model fields could also exist (for a review see [@MaartensLR]). The projection of the 5-dimensional Einstein equation onto the brane generates an effective Einstein equation with new source terms as compared to general relativity [@SMS], [@Decomp]. Among them, the energy-momentum squared source term modifies early cosmology [@BDEL] and becomes important during the final stages of gravitational collapse [@collapse]. It behaves as the dominant source term before Big Bang Nucleosynthesis (BBN). This quadratic source term is proportional to $1/\lambda$. The value of the brane tension $\lambda$ is constrained by the deviation from the Newtonian gravitational law still compatible with today's rigorous experiments [@tabletop]-[@GK]. The emerging high value of $\lambda$ and the fast decrease of the square of the energy density of matter imply that in a cosmological context this source term can be safely ignored at present. The Weyl curvature of the bulk gives rise to a non-local bulk effect on the brane, appearing as a fluid on the brane (the Weyl fluid). In the simplest case, when the bulk is static, the Weyl fluid is a radiation field (dark radiation). This situation represents an equilibrium configuration, without any energy exchange between the brane and the bulk.
BBN constrains the amount of energy density in the dark radiation as $-1.02\times 10^{-4}\leq \Omega _{d}\leq 2.62\times 10^{-5}$ [@BBN]. (Here $\Omega _{d}$ is the dimensionless dark radiation energy density parameter.) More generic Weyl fluids were also considered [@GK], [@LSR]-[@Pal]. Depending on how this Weyl fluid evolves, its present day amount can be either negligible or not. It is this aspect we wish to consider here, based on our previous analysis [@supernova1]-[@supernova2]. Various brane-world models were confronted with supernova data [@Sahni]-[@Fay]; however, in all these models the contribution of the Weyl fluid was dropped, assuming it was pure dark radiation during all stages of the cosmological evolution. In our analysis we keep the contribution of the (non-radiation like) Weyl fluid. We have already given the analytical expression in terms of elliptic integrals for the luminosity distance when the Weyl contribution is small [@supernova1] and can be considered a perturbation. [^1] Then we have tested the models with a Weyl fluid characterizing a bulk-brane energy exchange, by comparing their predictions with the best available supernova data [@supernova2]. Here we present additional analysis and strengthen our conclusions. Our model consists of a spatially flat FLRW brane embedded symmetrically into a 5-dimensional Vaidya-anti de Sitter bulk. The latter has a cosmological constant $\widetilde{\Lambda }$, black holes with masses $m$ on either side of the brane, and radiation. If the radiation is switched off, $m$ is constant, and the bulk becomes Schwarzschild-anti de Sitter. In this configuration the Weyl fluid appears as dark radiation on the brane (with energy density $m/a^{4}$). Any radiation escaping from the brane causes $m$ to vary. The ansatz $m=m_{0}a^{\alpha }$, with $\alpha =2,~3$, compatible with structure formation, has been recently advanced [@Pal].
The brane coupling constant $\kappa ^{2}$ is related to the bulk coupling constant $\widetilde{\kappa }^{2}$ and the brane tension $\lambda$ as $6\kappa ^{2}=\widetilde{\kappa }^{4}\lambda $. The relation between the brane tension, the bulk cosmological constant $\widetilde{\Lambda }$ and the brane cosmological constant $\Lambda $ is $2\Lambda =\kappa ^{2}\lambda +\widetilde{\kappa }^{2}\widetilde{\Lambda }$. We introduce the following dimensionless quantities: $$\Omega _{\rho } =\frac{\kappa ^{2}\rho_{0}}{3H_{0}^{2}}\ ,\qquad \Omega _{\lambda} =\frac{\kappa ^{2}\rho _{0}^{2}}{6\lambda H_{0}^{2}}\ ,\qquad \Omega _{d}=\frac{2m_{0}}{a_{0}^{4-\alpha}H_{0}^{2}}\ ,\qquad \Omega _{\Lambda }=\frac{\Lambda }{3H_{0}^{2}}\ , \label{omd}$$ where $\Omega _{tot} =\Omega_{\Lambda }+\Omega _{\rho }+\Omega _{\lambda}+\Omega_{d}$. Here $H$ is the Hubble parameter, $\rho $ is the matter energy density on the brane and $a$ is the scale factor. The subscript $0$ denotes the present value of the respective quantities. The Friedmann equation written in these parameters becomes $H^{2}/H_{0}^{2}=\Omega _{\Lambda }+\Omega _{\rho}{a_{0}^{3}}/{a^{3}}+\Omega _{d}{a_{0}^{4-\alpha }}/{a^{4-\alpha }} +\Omega _{\lambda}{a_{0}^{6}}/{a^{6}}$. At present time this gives $\Omega _{tot}=1$. The luminosity distance for the spatially flat FLRW brane becomes $$d_{L}\left( z\right) =\frac{\left( 1+z\right) a_{0}}{H_{0}} \int_{a_{em}}^{a_{0}}\frac{ada}{\left[ \Omega _{\Lambda }a^{6} +\Omega _{\rho }a_{0}^{3}a^{3}+\Omega _{d}a_{0}^{4-\alpha }a^{\alpha +2}+\Omega _{\lambda }a_{0}^{6}\right] ^{1/2}}\ . \label{chi2}$$ The above complicated integral has no analytical form in the majority of cases.
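The integral is nevertheless easy to evaluate by quadrature. The sketch below (the function name and conventions $a_0=1$, $H_0=1$ are ours) uses composite Simpson's rule; as a check, keeping only $\Omega_\rho=1$ it reduces to the Einstein-de Sitter result $d_L=2(1+z)\left(1-1/\sqrt{1+z}\right)$:

```python
import math

def d_lum(z, om_L, om_rho, om_d, om_lam, alpha, H0=1.0, n=2000):
    """Luminosity distance of Eqn.(2) for a0 = 1, via composite Simpson's rule.
    The density parameters should satisfy om_L + om_rho + om_d + om_lam = 1."""
    a_em = 1.0 / (1.0 + z)

    def integrand(a):
        return a / math.sqrt(om_L * a**6 + om_rho * a**3
                             + om_d * a**(alpha + 2) + om_lam)

    h = (1.0 - a_em) / n          # n must be even: Simpson weights 1,4,2,...,4,1
    s = integrand(a_em) + integrand(1.0)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * integrand(a_em + i * h)
    return (1.0 + z) / H0 * s * h / 3.0
```

With $H_0=1$ the result is expressed in units of the Hubble radius; multiplying by $c/H_0$ restores physical units.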
However, for small $\Omega_d$ and (as noted earlier) vanishing $\Omega_{\lambda}$ this integral can be given analytically as $d_{L}^{\Lambda\lambda d}=d_{L}^{\Lambda \mathrm{CDM}}+\Omega _{d}I_{d}$, where the coefficient $I_d$ is an analytic expression of elementary functions and elliptic integrals of the first and second kind [@supernova1], having different forms depending on the actual value of the parameter $\alpha$. We have then compared the predictions of the models characterized by various values of $\alpha$ with the Gold 2006 supernovae data set [@gold06], for the range $-0.1<\Omega _{d}<0.1 $ up to $z=3$. Among the brane-world models with $\alpha=0$ the data selects a global minimum at $\Omega _{\Lambda }=0.735$, $\Omega _{\rho }=0.225$ and $\Omega_d=0.04$. We note that the value of $\Omega_{\rho}$ is in perfect agreement with the WMAP 3-year data [@WMAP3y]. The 1-$\sigma $ and 2-$\sigma $ confidence levels are shown in Fig.\[Fig1\] in the $\Omega _{\Lambda }-\Omega _{\rho }$ plane. The $\Lambda $CDM model is contained in the 2-$\sigma$ confidence level at $\Omega _{\Lambda }=0.725,\ \Omega_{\rho}=0.275$. The forbidden (white) region appears because the Friedmann equation combined with $\Omega _{tot}=1$ gives constraints on the allowed range of $\Omega _{d}-\Omega _{\rho }$ [@supernova2]. The cases $\alpha =2$ and $\alpha =3$ are represented in Fig.\[Fig2\] and Fig.\[Fig3\], respectively. In these cases the determination of the global minimum becomes unreliable, as the 1-$\sigma$ contours become much elongated. Instead of a peak we have a trough, which lies obliquely in the $\Omega _{\Lambda }$-$\Omega _{\rho}$ plane. With increasing $\alpha$ the 1-$\sigma$ contour becomes more elongated and turns counter-clockwise. This feature of the 1-$\sigma$ contours indicates that with increasing $\alpha$ the models become more likely to be compatible with the available supernova data, irrespective of the exact (but small) value of $\Omega_d$.
This practically means that any small amount of Weyl fluid with $\alpha=2,~3$ is perfectly compatible with supernova data. In the investigated models the Weyl fluid is either dark radiation ($\alpha=0$) or describes a situation when the brane radiates into the bulk, feeding the bulk black holes ($\alpha=2,~3$). In the latter cases the energy density of the Weyl fluid decreases more slowly than that of matter, therefore a sizeable Weyl fluid contribution nowadays is perfectly compatible with BBN constraints. For $\alpha=0$ the Weyl fluid evolves as radiation, therefore the BBN constraints can be satisfied only with an infinitesimal amount of dark radiation nowadays. The preferred value of $\Omega _{d}=0.04$ would give too much dark radiation in the past. However, if we assume that $\alpha=0$ is only a recent characteristic of the brane, this equilibrium situation being preceded by an epoch in which the brane is allowed to radiate, the BBN constraints can be obeyed with a small, but non-negligible amount of dark radiation nowadays. The BBN constraint [@BBN] can be satisfied [@supernova2] if the brane radiates in the interval ($z_{1}$, $z_{\ast }$). Assuming $z_{1}=3$, the constraint on $z_{\ast }$ for different values of $\alpha$ gives $z_{\ast }\geq$ 6114.20 for $\alpha =1 $, $z_{\ast }\geq$ 155.40 for $\alpha =2 $, $z_{\ast }\geq$ 45.08 for $\alpha =3 $, $z_{\ast }\geq$ 24.01 for $\alpha =4 $. Thus the known history of the Universe can be explained if the brane radiates during at least some period of the cosmological evolution. At early times the radiation of the brane leads to a black hole, which can further grow during structure formation. None of the investigated models, compatible with the available supernova data and structure formation, can be excluded by present observational accuracy.
The differences among the predictions of the models are however increasing with redshift, therefore future measurements of very far supernovae will be able to either support or falsify these cosmological models. *Acknowledgments*: This work was supported by OTKA grants no. 46939 and 69036. LÁG and GyMSz were further supported by the János Bolyai Fellowship of the Hungarian Academy of Sciences. [1]{} L. Randall and R. Sundrum, *An Alternative to Compactification*, 1999 *Phys. Rev. Lett.* **83**, 4690 R. Maartens, *Brane-world Gravity*, 2004 *Living Rev. Rel.* **7** 1 T. Shiromizu, K. Maeda and M. Sasaki, *The Einstein Equations on the 3-Brane World*, 2000 *Phys. Rev.* D **62** 024012 L. Á. Gergely, *Generalized Friedmann branes*, 2003 *Phys. Rev.* D **68** 124011 P. Binétruy, C. Deffayet, U. Ellwanger and D. Langlois, *Brane cosmological evolution in a bulk with cosmological constant*, 2000 *Phys. Lett.* B **477**, 285 L. Á. Gergely, *Black holes and dark energy from gravitational collapse on the brane*, 2007 *JCAP* **07**(02) 027 J. C. Long et al., *New experimental limits on macroscopic forces below 100 microns*, 2003 *Nature* **421** 922; J. H. Gundlach, S. Schlamminger, C. D. Spitzer et al., *Laboratory Test of Newton's Second Law for Small Acceleration*, 2007 *Phys. Rev. Lett.* **98** 150801; D. J. Kapner, T. S. Cook, E. G. Adelberger, *Tests of the Gravitational Inverse-Square Law below the Dark-Energy Length Scale*, 2007 *Phys. Rev. Lett.* **98** 021101 L. Á. Gergely and Z. Keresztes, *Irradiated asymmetric Friedmann branes*, 2006 *JCAP* **06**(01) 022 K. Ichiki, M. Yahiro, T. Kajino, M. Orito and G. J. Mathews, *Observational Constraints on Dark Radiation in Brane Cosmology*, 2002 *Phys. Rev.* D **66**, 043521 D. Langlois, L. Sorbo, and M. Rodríguez-Martínez, *Cosmology of a brane radiating gravitons into the extra dimension*, 2002 *Phys. Rev. Lett.* **89** 171301 L. Á. Gergely, E. Leeper and R. Maartens, *Asymmetric radiating brane-world*, 2004 *Phys. Rev.* D **70** 104025 D. Jennings and I. R. Vernon, *Graviton emission into non-Z2 symmetric brane world spacetimes*, 2005 *JCAP* **05**(07) 011 D. Jennings, I. R. Vernon, A-C. Davis and C. van de Bruck, *Bulk black holes radiating in non-Z2 brane-world spacetimes*, 2005 *JCAP* **05**(04) 013 Z. Keresztes, I. Képíró and L. Á. Gergely, *Semi-transparent brane-worlds*, 2006 *JCAP* **06**(05) 020 S. Pal, *Structure formation on the brane: A mimicry*, 2006 *Phys. Rev.* D **74** 024005 Z. Keresztes, L. Á. Gergely, B. Nagy and Gy. M. Szabó, *The luminosity-redshift relation in brane-worlds: I. Analytical results*, 2006 *astro-ph/0606698* Gy. M. Szabó, L. Á. Gergely and Z. Keresztes, *The luminosity-redshift relation in brane-worlds: II. Confrontation with experimental data*, 2007 *astro-ph/0702610* U. Alam and V. Sahni, *Confronting Brane world Cosmology with Supernova data and Baryon Oscillations*, 2006 *Phys. Rev.* D **73** 084024 R. Lazkoz, R. Maartens and E. Majerotto, *Observational constraints on phantom-like braneworld cosmologies*, 2006 *Phys. Rev.* D **74** 083510 M. P. Dabrowski, W. Godłowski and M. Szydłowski, *Brane universes tested against astronomical data*, 2004 *Int. J. Mod. Phys.* D **13** 1669 S. Fay, *Branes: cosmological surprise and observational deception*, 2006 *Astron. Astrophys.* **452** 781 M. P. Dabrowski and T. Stachowiak, *Phantom Friedmann cosmologies and higher-order characteristics of expansion*, 2006 *Annals of Physics* **321** 771 A. G. Riess, L-G. Strolger, S. Casertano et al., *New Hubble Space Telescope Discoveries of Type Ia Supernovae at* $z>1$*: Narrowing Constraints on the Early Behavior of Dark Energy*, 2007 *Astrophys. J.* **656** 98 D. N. Spergel, R.
Bean, Doré O et al., *Wilkinson Microwave Anisotropy Probe (WMAP) Three Year Results: Implications for Cosmology*, 2006 *astro-ph/0603449* [^1]: We note that for a wide class of phantom Friedmann cosmologies similar analytical results in terms of elementary and Weierstrass elliptic functions for the luminosity distance are available [@DabrowskiS].
--- abstract: | Sensitivity, certificate complexity and block sensitivity are widely used Boolean function complexity measures. A longstanding open problem, proposed by Nisan and Szegedy [@NS], is whether sensitivity and block sensitivity are polynomially related. Motivated by the constructions of functions which achieve the largest known separations, we study the relation between 1-certificate complexity and 0-sensitivity and 0-block sensitivity. Previously the best known lower bound was $C_1(f)\geq \frac{bs_0(f)}{2 s_0(f)}$, achieved by Kenyon and Kutin [@KK]. We improve this to $C_1(f)\geq \frac{3 bs_0(f)}{2 s_0(f)}$. While this improvement is only by a constant factor, this is quite important, as it precludes achieving a superquadratic separation between $bs(f)$ and $s(f)$ by iterating functions which reach this bound. In addition, this bound is tight, as it matches the construction of Ambainis and Sun [@AS] up to an additive constant. author: - Andris Ambainis - Krišjānis Prūsis bibliography: - 'bibliography.bib' title: 'A Tight Lower Bound on Certificate Complexity in Terms of Block Sensitivity and Sensitivity[^1]' --- Introduction ============ Determining the biggest possible gap between the sensitivity $s(f)$ and block sensitivity $bs(f)$ of a Boolean function is a well-known open problem in the complexity of Boolean functions. Even though this question has been known for over 20 years, there has been quite little progress on it. The biggest known gap is $bs(f)=\Omega(s^2(f))$. This was first discovered by Rubinstein [@R], who constructed a function $f$ with $bs(f)=\frac{s^2(f)}{2}$, and then improved by Virza [@V] and Ambainis and Sun [@AS]. Currently, the best result is a function $f$ with $bs(f)=\frac{2}{3} s^2(f) - \frac{1}{3} s(f)$ [@AS]. The best known upper bound is exponential: $bs(f)\leq s(f) 2^{s(f)-1}$ [@A+] which improves over an earlier exponential upper bound by Kenyon and Kutin [@KK]. 
In this paper, we study a question motivated by the constructions of functions that achieve a separation between $s(f)$ and $bs(f)$. The question is as follows: Let $s_z(f)$, $bs_z(f)$ and $C_{z}(f)$ be the maximum sensitivity, block sensitivity and certificate complexity achieved by $f$ on inputs $x$: $f(x)=z$. What is the best lower bound of $C_1(f)$ in terms of $s_0(f)$ and $bs_0(f)$? The motivation for this question is as follows. Assume that we fix $s_0(f)$ to a relatively small value $m$ and fix $bs_0(f)$ to a substantially larger value $k$. We then minimize $C_1(f)$. We know that $s_1(f)\leq C_1(f)$ (because every sensitive bit has to be contained in a certificate). We have now constructed an example where both $s_0(f)$ and $s_1(f)$ are relatively small and $bs_0(f)$ large. This may already achieve a separation between $bs_0(f)$ and $s(f)=\max(s_0(f), s_1(f))$ and, if $s_1(f)>s_0(f)$, we can improve this separation by composing the function with OR (as in [@AS]). While this is just one way of achieving a gap between $s(f)$ and $bs(f)$, all the best separations between these two quantities can be cast into this framework. Therefore, we think that it is interesting to explore the limits of this approach. The previous results are as follows: 1. Rubinstein’s construction [@R] can be viewed as taking a function $f$ with $s_0(f)=1$, $bs_0(f)=k$ and $C_1(f)=2k$. A composition with OR yields [@AS] $bs(f)=\frac{1}{2}s^2(f)$; 2. Later work by Virza [@V] and Ambainis and Sun [@AS] improves this construction by constructing $f$ with $s_0(f)=1$, $bs_0(f)=k$ and $C_1(f)=\left\lfloor \frac{3k}{2} \right\rfloor + 1$. A composition with OR yields $bs(f)=\frac{2}{3} s^2(f) - \frac{1}{3} s(f)$; 3. Ambainis and Sun [@AS] also show that, given $s_0(f)=1$ and $bs_0(f)=k$, the certificate complexity $C_1(f)=\left\lfloor \frac{3k}{2} \right\rfloor + 1$ is the smallest that can be achieved. 
This means that a better bound must either start with $f$ with $s_0(f)>1$ or use some other approach; 4. For $s_0(f)=m$ and $bs_0(f)=k$, it is easy to modify the construction of Ambainis and Sun [@AS] to obtain $C_1(f)=\left\lfloor \frac{3\lceil k/m\rceil}{2} \right\rfloor + 1$ but this does not result in a better separation between $bs(f)$ and $s(f)$; 5. Kenyon and Kutin [@KK] have shown a lower bound of $C_1(f)\geq \frac{k}{2m}$. If this was achievable, this could result in a separation of $bs(f)=2 s^2(f)$. The gap between the construction $C_1(f) = \frac{3k}{2m}+O(1)$ and the lower bound of $C_1(f)\geq \frac{k}{2m}$ is only a constant factor but the constant here is quite important. This gap corresponds to a difference between $bs(f)=(\frac{2}{3}+o(1)) s^2(f)$ and $bs(f)=2s^2(f)$, and, if we achieved $bs(f)>s^2(f)$, iterating the function $f$ would yield an infinite sequence of functions with a superquadratic separation $bs(f)=s(f)^c$, where $c>2$. In this paper, we show that for any $f$ $$C_1(f) \geq \frac{3}{2} \frac{bs_0(f)}{s_0(f)} - \frac{1}{2} .$$ This matches the best construction up to an additive constant and shows that no further improvement can be achieved along the lines of [@R; @V; @AS]. Our bound is shown by an intricate analysis of possible certificate structures for $f$. Since we now know that $bs_0(f) \leq \left(\frac{2}{3}+o(1)\right)C_1(f) s_0(f)$, it is tempting to conjecture that $bs_0(f) \leq \left(\frac{2}{3}+o(1)\right)s_1(f) s_0(f)$. If this was true, the existing separation between $bs(f)$ and $s(f)$ would be tight. Preliminaries ============= Let $f: \{0,1\}^n \rightarrow \{0,1\}$ be a Boolean function on $n$ variables. The $i$-th variable of input $x$ is denoted by $x_i$. For an index set $S \subseteq [n]$, let $x^S$ be the input obtained from an input $x$ by flipping every bit $x_i$, $i \in S$. Let a *$z$-input* be an input on which the function takes the value $z$, where $z \in \{0,1 \}$. 
We briefly define the notions of sensitivity, block sensitivity and certificate complexity. For more information on them and their relations to other complexity measures (such as deterministic, probabilistic and quantum decision tree complexities), we refer the reader to the surveys by Buhrman and de Wolf [@BW] and Hatami et al. [@HKP]. The *sensitivity complexity* $s(f,x)$ of $f$ on an input $x$ is defined as $| \{ i {\left.\right|}f(x) \neq f(x^{\{i\}})\} |$. The *$z$-sensitivity* $s_z(f)$ of $f$, where $z \in \{0,1 \}$, is defined as $\max \{s(f,x) {\left.\right|}x \in \{0,1\}^n, f(x)=z\}$. The *sensitivity* $s(f)$ of $f$ is defined as $\max \{s_0(f),s_1(f)\}$. The *block sensitivity* $bs(f,x)$ of $f$ on input $x$ is defined as the maximum number $b$ such that there are $b$ pairwise disjoint subsets $B_1, \ldots , B_b$ of $[n]$ for which $f(x) \neq f(x^{B_i})$. We call each $B_i$ a *block*. The *$z$-block sensitivity* $bs_z(f)$ of $f$, where $z \in \{0,1 \}$, is defined as $\max \{bs(f,x) {\left.\right|}x \in \{0,1\}^n, f(x)=z\}$. The *block sensitivity* $bs(f)$ of $f$ is defined as $\max \{bs_0(f),bs_1(f)\}$. A *certificate* $c$ of $f$ on input $x$ is defined as a partial assignment $c: S \rightarrow \{0,1\}, S \subseteq [n]$ of $x$ such that $f$ is constant on this restriction. If $f$ is always 0 on this restriction, the certificate is a *0-certificate*. If $f$ is always 1, the certificate is a *1-certificate*. We denote specific certificates as words with $*$ in the positions that the certificate does not assign. For example, $01\!*\!*\!*\!*$ denotes a certificate that assigns 0 to the first variable and 1 to the second variable. We say that an input $x$ *satisfies* a certificate $c$ if it matches the certificate in every assigned bit. The number of *contradictions* between an input and a certificate or between two certificates is the number of positions where one of them assigns 1 and the other assigns 0. 
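For small $n$, both measures can be computed exhaustively; the sketch below (our own helper names, with inputs encoded as $n$-bit integers so that $x^{\{i\}}$ is `x ^ (1 << i)`) is handy for checking examples:

```python
def sensitivity(f, n, x):
    """s(f, x): the number of single-bit flips of x that change f.
    Inputs are n-bit integers; bit i of x encodes the variable x_{i+1}."""
    fx = f(x)
    return sum(f(x ^ (1 << i)) != fx for i in range(n))

def block_sensitivity(f, n, x):
    """bs(f, x): the largest number of pairwise disjoint blocks B (bitmasks)
    with f(x ^ B) != f(x), by exhaustive search (exponential -- tiny n only)."""
    fx = f(x)
    blocks = [B for B in range(1, 1 << n) if f(x ^ B) != fx]

    def extend(avail, count):
        best = count
        for j, B in enumerate(avail):
            rest = [C for C in avail[j + 1:] if C & B == 0]  # disjoint blocks only
            best = max(best, extend(rest, count + 1))
        return best

    return extend(blocks, 0)
```

For the OR function on 3 bits at the all-zero input both measures equal 3, while a function of the form $(x_1\wedge x_2)\vee(x_3\wedge x_4)$ has $s(f,0^4)=0$ but $bs(f,0^4)=2$, already separating the two quantities pointwise.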
For example, there are two contradictions between $0010\!*\!*$ and $100\!*\!*\!*$ (in the 1st position and the 3rd position). The number of *overlaps* between two certificates is the number of positions where both have assigned the same values. For example, there is one overlap between $001\!*\!*\!*$ and $*0000\!*$ (in the second position). We say that two certificates [*overlap*]{} if there is at least one overlap between them. We say that a certificate remains *valid* after fixing some input bits if none of the fixed bits contradicts the certificate’s assignments. The *certificate complexity* $C(f,x)$ of $f$ on input $x$ is defined as the minimum length of a certificate that $x$ satisfies. The *$z$-certificate complexity* $C_z(f)$ of $f$, where $z \in \{0,1 \}$, is defined as $\max \{C(f,x) {\left.\right|}x \in \{0,1\}^n, f(x)=z\}$. The *certificate complexity* $C(f)$ of $f$ is defined as $\max \{C_0(f),C_1(f)\}$. Background ========== We study the following question: [**Question:**]{} Assume that $s_0(g)=m$ and $bs_0(g)=k$. How small can we make $C_1(g)$? [**Example 1.**]{} Ambainis and Sun [@AS] consider the following construction. They define $g_0(x_1, \ldots, x_{2k})=1$ if and only if $(x_1, \ldots, x_{2k})$ satisfies one of $k$ certificates $c_0, \ldots, c_{k-1}$ with $c_i$ ($i\in\{0, 1, \ldots, k-1\}$) requiring that 1. $x_{2i+1}=x_{2i+2}=1$; 2. $x_{2j+1}=0$ for $j\in\{0, \ldots, k-1\}$, $j\neq i$; 3. $x_{2j+2}=0$ for $j\in\{i+1, \ldots, i+\lfloor k/2\rfloor\}$ (with $i+1, \ldots, i+\lfloor k/2\rfloor$ taken $\bmod k$).
Then, we have: - $s_0(g_0)=1$ (it can be shown that, for every 0-input of $g_0$, there is at most one $c_i$ in which only one variable does not have the right value); - $s_1(g_0)=C_1(g_0)=\lfloor 3k/2\rfloor+1$ (a 1-input that satisfies a certificate $c_i$ is sensitive to changing any of the variables in $c_i$ and $c_i$ contains $\lfloor 3k/2\rfloor+1$ variables); - $bs_0(g_0)=k$ (the 0-input $x_1=\cdots=x_{2k}=0$ is sensitive to changing any of the pairs $(x_{2i+1}, x_{2i+2})$ from $(0, 0)$ to $(1, 1)$). This function can be composed with the OR-function to obtain the best known separation between $s(f)$ and $bs(f)$: $bs(f)=\frac{2}{3} s^2(f) - \frac{1}{3} s(f)$ [@AS]. As long as $s_0(g)=1$, the construction is essentially optimal: any $g$ with $bs_0(g)=k$ must satisfy $C_1(g)\geq s_1(g) \geq \frac{3k}{2}-O(1)$. In this paper, we explore the case when $s_0(g)>1$. An easy modification of the construction from [@AS] gives \[thm:easy\] There exists a function $g$ for which $s_0(g)=m$, $bs_0(g)=k$ and $C_1(g)=\left\lfloor \frac{3\lceil k/m\rceil}{2} \right\rfloor + 1$. To simplify the notation, we assume that $k$ is divisible by $m$. Let $r=k/m$. We consider a function $g(x_{11}, \ldots, x_{m,2r})$ with variables $x_{i,j}$ ($i\in\{1, \ldots, m\}$ and $j\in\{1, \ldots, 2r\}$) defined by $$\label{eq:or} g(x_{11}, \ldots, x_{m,2r}) = \vee_{i=1}^m g_0(x_{i,1}, \ldots, x_{i,2r}) .$$ Equivalently, $g(x_{11}, \ldots, x_{m,2r})=1$ if and only if at least one of the blocks $(x_{i,1}, \ldots, x_{i,2r})$ satisfies one of the certificates $c_{i,0} , \ldots , c_{i,r-1}$ that are defined similarly to $c_0 , \ldots , c_{k-1}$ in the definition of $g_0$. It is easy to see [@AS] that composing a function $g_0$ with OR gives $s_0(g)=m\, s_0(g_0)$, $bs_0(g)=m\, bs_0(g_0)$ and $C_1(g) = C_1(g_0)$, implying the theorem.
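The construction of $g_0$ and the three claims above can be verified by brute force for small $k$; a sketch in Python (the encoding of the certificates as position-to-bit dictionaries is ours, with 0-based positions, so the paper's $x_{2i+1}$ becomes position $2i$):

```python
from itertools import product

def make_certificates(k):
    """The certificates c_0, ..., c_{k-1} of g_0 on 2k variables, encoded as
    dicts {position: required bit}; position 2i is the paper's x_{2i+1}."""
    certs = []
    for i in range(k):
        c = {2 * i: 1, 2 * i + 1: 1}          # x_{2i+1} = x_{2i+2} = 1
        for j in range(k):
            if j != i:
                c[2 * j] = 0                  # x_{2j+1} = 0 for j != i
        for j in range(i + 1, i + k // 2 + 1):
            c[2 * (j % k) + 1] = 0            # x_{2j+2} = 0, window taken mod k
        certs.append(c)
    return certs

k = 4
n = 2 * k
certs = make_certificates(k)
assert all(len(c) == 3 * k // 2 + 1 for c in certs)   # |c_i| = floor(3k/2)+1

def f(x):
    return int(any(all(x[p] == v for p, v in c.items()) for c in certs))

def sens(x):
    """Number of single-bit flips that change f(x)."""
    return sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x) for i in range(n))

zeros = [x for x in product((0, 1), repeat=n) if f(x) == 0]
ones = [x for x in product((0, 1), repeat=n) if f(x) == 1]
assert max(sens(x) for x in zeros) == 1               # s_0(g_0) = 1
assert max(sens(x) for x in ones) == 3 * k // 2 + 1   # s_1(g_0) = floor(3k/2)+1
# flipping any pair (x_{2i+1}, x_{2i+2}) of the all-0 input gives a 1-input,
# so bs_0(g_0) >= k:
for i in range(k):
    y = [0] * n
    y[2 * i] = y[2 * i + 1] = 1
    assert f(tuple(y)) == 1
```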
------------------------------------------------------------------------ While this function does not give a better separation between $s(f)$ and $bs(f)$, any improvement to Theorem \[thm:easy\] could give a better separation between $s(f)$ and $bs(f)$ by using the same composition with OR as in [@AS]. On the other hand, Kenyon and Kutin [@KK] have shown that For any $g$ with $s_0(g)=m$ and $bs_0(g)=k$, we have $C_1(g)\geq \frac{k}{2m}$. Separation between $C_1(f)$ and $bs_0(f)$ ========================================= In this paper, we show that the example of Theorem \[thm:easy\] is optimal. \[theorem:result\] For any Boolean function $f$ the following inequality holds: $$\label{equation:result} C_1(f) \geq \frac{3}{2} \frac{bs_0(f)} {s_0(f)} - \frac{1}{2}.$$ Without loss of generality, we can assume that the maximum $bs_0$ is achieved on the all-0 input denoted by 0. Let $B_1,\ldots,B_k$ be the sensitive blocks, where $k=bs_0(f)$. Also, we can w.l.o.g. assume that these blocks are minimal and that every bit belongs to a block. (Otherwise, we can fix the remaining bits to 0. This cannot increase $s_0$ or $C_1$, strengthening the result.) Each block $B_i$ has a corresponding minimal 1-certificate $c_i$ such that the word $(\{0\}^n)^{B_i}$ satisfies this certificate. Each of these certificates has a 1 in every position of the corresponding block (otherwise the block would not be minimal) and any number of 0’s in other blocks. We construct a complete weighted graph $G$ whose vertices correspond to certificates $c_1$, $\ldots$, $c_k$. Each edge has a weight that is equal to the number of contradictions between the two certificates the edge connects. [*The weight of a graph*]{} is just the sum of the weights of its edges. We will prove Let $w$ be the weight of an induced subgraph of $G$ of order $m$. Then $$w \geq \frac{3}{2} \frac{m^2}{s_0(f)}-\frac{3}{2}m.$$ The proof is by induction. As a basis we take induced subgraphs of order $m \leq s_0(f)$.
In this case, $$\frac{3}{2} \frac{m^2}{s_0(f)}-\frac{3}{2}m \leq 0$$ and $w \geq 0$ is always true, as the number of contradictions between two certificates cannot be negative. Let $m > s_0(f)$. We assume that the relation holds for every induced subgraph of order $< m$. Let $G'$ be an induced subgraph of order $m$. Let $H \subset G'$ be its induced subgraph of order $s_0(f)$ with the smallest total weight. \[lemma:subgraphs\] For any certificate $c_i \in G' \setminus H$, the total weight of the edges connecting $c_i$ to $H$ is $\geq 3$. Let $t$ be the total weight of the edges in $H$. Let us assume that there exists a certificate $c_j \notin H$ such that the weight of the edges connecting $c_j$ to $H$ is $\leq 2$. Let $H'$ be the induced subgraph $H \cup \{c_j\}$. Then the weight of $H'$ must be $\leq t+2$. We define the weight of a certificate $c_i \in H'$ as the sum of the weights of all edges of $H'$ that involve vertex $c_i$. If there exists a certificate $c_i \in H'$ such that its weight in $H'$ is $\geq 3$, then the weight of $H' \setminus \{c_i\}$ would be $<t$, which is a contradiction, as $H$ was taken to be the induced subgraph of order $s_0(f)$ with the smallest weight. Therefore the weight of every certificate in $H'$ is at most 2. In the next section, we show \[lemma:overlaps\] Let $f$ be a Boolean function for which the following properties hold: $f(\{0\}^n)=0$ and $f$ has $k$ minimal 1-certificates such that each of them has at most 2 contradictions with all the others combined. Furthermore, for each input position, exactly one of these certificates assigns the value 1. Then, $s_0(f) \geq k$. This lemma implies that $s_0(f)\geq |H'|$, which is in contradiction with $|H'|=s_0(f)+1$. Therefore no such $c_j$ exists. ------------------------------------------------------------------------ We now examine the graph $G' \setminus H$.
It consists of $m-s_0(f)$ certificates and by the inductive assumption has a weight of at least $$\frac{3}{2} \frac{(m-s_0(f))^2}{s_0(f)} - \frac{3}{2} (m-s_0(f)).$$ But there are at least $3 (m-s_0(f))$ contradictions between $H$ and $G' \setminus H$, thus the total weight of $G'$ is at least $$\begin{aligned} &\frac{3}{2} \frac{(m-s_0(f))^2}{s_0(f)} - \frac{3}{2} (m-s_0(f)) + 3(m-s_0(f)) \\ =~&\frac{3}{2} \frac{m^2-2 m s_0(f) + s_0(f)^2 } {s_0(f)} + \frac{3}{2} m - \frac{3}{2} s_0(f) \\ =~&\frac{3}{2} \frac{m^2} {s_0(f)} - 3 m + \frac{3}{2} s_0(f) + \frac{3}{2} m - \frac{3}{2} s_0(f) \\ =~&\frac{3}{2} \frac{m^2}{s_0(f)} - \frac{3}{2} m .\end{aligned}$$ This completes the induction step. ------------------------------------------------------------------------ By taking the whole of $G$ as $G'$, we find a lower bound on the total number of contradictions in the graph: $$\frac{3}{2} \frac{k^2}{s_0(f)} - \frac{3}{2} k .$$ Each contradiction requires one 0 in one of the certificates and each 0 contributes to exactly one contradiction (since for each position exactly one of $c_i$ assigns a 1). Therefore, by the pigeonhole principle, there exists a certificate with at least $$\frac{3}{2} \frac{k}{s_0(f)} - \frac{3}{2}$$ zeroes. As each certificate contains at least one 1, we get a lower bound on the size of one of these certificates and $C_1(f)$: $$C_1(f) \geq \frac{3}{2} \frac{bs_0(f)}{s_0(f)} - \frac{1}{2}.$$ ------------------------------------------------------------------------ Functions with $s_0(f)$ Equal to Number of 1-certificates ========================================================= In this section we prove Lemma \[lemma:overlaps\]. General Case: Functions with Overlaps {#section:overlaps} ------------------------------------- Let $c_1, \ldots, c_k$ be the $k$ certificates. We start by reducing the general case of Lemma \[lemma:overlaps\] to the case when there are no overlaps between any of $c_1, \ldots, c_k$. 
Note that certificate overlaps can only occur when two certificates assign 0 to the same position. Then a third certificate assigns 1 to that position. This produces 2 contradictions for the third certificate; therefore it has no further overlaps or contradictions. For example, here we have this situation in the 3rd position (with the first three certificates) and in the 6th position (with the last three certificates): $$\begin{pmatrix} 1&1&0&*&*&*&*&*&*&*\\ *&*&1&*&*&*&*&*&*&*\\ *&*&0&1&1&0&*&*&*&*\\ *&*&*&*&*&1&1&1&*&*\\ 0&*&*&*&*&0&*&*&1&1 \end{pmatrix}.$$ Let $t$ be the total number of such overlaps. Let $D$ be the set of certificates assigning 1 to positions with overlaps, $|D|=t$. We fix the position of every overlap to 0. Since the remaining function contains the word $\{0\}^n$, it is not identically 1. Every certificate not in $D$ is still a valid 1-certificate, as they assigned either nothing or 0 to the fixed positions. If they are no longer minimal, we can minimize them, which cannot produce any new overlaps or contradictions. The certificates in $D$ are, however, no longer valid. Let us examine one such certificate $c \in D$. We denote the set of positions assigned to by $c$ by $S$. Let $i$ be the position in $S$ that is now fixed to 0. We claim that certificate $c$ assigns the value 1 to all $|S|$ positions in $S$. (If it assigned 0 to some position, there would be at least 3 contradictions between $c$ and other certificates: two in position $i$ and one in the position where $c$ assigns 0.) If $|S|=1$, then the remaining function is always sensitive to $i$ on 0-inputs, as flipping $x_i$ results in an input satisfying $c$. If $|S|>1$, we examine the $2^{|S|-1}$ subfunctions obtainable by fixing the remaining positions of $S$. We fix these positions according to the subfunction that is not identically 1 and has the highest number of bits fixed to 1; we will call this the *largest non-constant subfunction*.
If it fixes 1 in every position, it is sensitive to $i$ on 0-inputs, as flipping $x_i$ produces a word which satisfies $c$. Otherwise, on 0-inputs it is sensitive to every bit of $S$ besides $i$ that is fixed to 0, as flipping such a bit would produce a word from a subfunction with a greater number of bits fixed to 1. But that subfunction is identically 1, or we would have fixed it instead. In either case we obtain at least one sensitive bit in $S$ on 0-inputs in the remaining function. Furthermore, every certificate not in $D$ is still valid, if not minimal. But we can safely minimize them again. We can repeat this procedure for every certificate in $D$. The resulting function is not always 1 and, on every 0-input, it has at least $t$ sensitive bits among the bits that we fixed. Furthermore, we still have $k-t$ non-overlapping valid minimal 1-certificates with no more than 2 contradictions each. In the next section, we show that this implies that it has 0-sensitivity of at least $k-t$ (Lemma \[lemma:graph\]). Therefore, the original function has a 0-sensitivity of at least $k-t+t=k$. Functions with No Overlaps -------------------------- \[lemma:graph\] Let $f$ be a Boolean function, such that $f$ is not always 1 and $f$ has $k$ non-overlapping minimal 1-certificates such that each of them has at most 2 contradictions with all the others combined. Then, $s_0(f) \geq k$. To prove this lemma, we consider the weighted graph $G$ on these $k$ certificates where the weight of an edge in this graph is the number of contradictions between the two certificates the edge connects. We examine the connected components in this graph, not counting edges with weight 0.
There can be only 4 kinds of components: individual certificates; two certificates with 2 contradictions between them; paths of 2 or more certificates with 1 contradiction between every two subsequent certificates in the path; and cycles of 3 or more certificates with 1 contradiction between every two subsequent certificates in the cycle. As there are no overlaps between the certificates, each position is assigned to by certificates from at most one component. We will now prove by induction on $k$ that we can obtain a 0-input with as many sensitive bits in each component as there are certificates in it. As a basis we take $k=0$. Since $f$ is not always 1, $s_0(f)$ is defined, but obviously $s_0(f) \geq 0$. Then we look at each graph component type separately. ### Individual Certificates. We first examine individual certificates. Let us denote the examined certificate by $c$ and the set of positions it assigns by $S$. We fix all bits of $S$ except for one according to $c$ and we fix the remaining bit of $S$ opposite to $c$. The remaining function cannot be always 1, as otherwise the last bit in $S$ would not be necessary in $c$, but $c$ is minimal. Therefore on 0-inputs the remaining function is also sensitive to this last bit, as flipping it produces a word which satisfies $c$. Afterwards the remaining certificates might no longer be minimal. In this case we can minimize them. This cannot produce any more contradictions and no certificate can disappear, as the function is not always 1. Therefore the remaining function still satisfies the conditions of this lemma and has $k-1$ minimal 1-certificates, with each certificate having at most 2 contradictions with the others. Then by induction the remaining function has a 0-sensitivity of $k-1$. Together with the sensitive bit among the fixed ones, we obtain $s_0(f) \geq k$. ### Certificate Paths. We can similarly reduce certificate paths.
A certificate path is a structure where each certificate has 1 contradiction with the next one and there are no other contradictions. For example, here is a path of length 3: $$\begin{pmatrix} &&i&&&& \\ 1&1&0&*&*&*&*\\ *&*&1&1&0&*&*\\ *&*&*&*&1&1&1 \end{pmatrix}.$$ We note that every certificate in a path assigns at least 2 positions, otherwise its neighbours would not be minimal. We then take a certificate $c$ at the start of a path, which is next to a certificate $d$. Let $S$ be the set of positions $c$ assigns. Let $i$ be the position where $c$ and $d$ contradict each other. We then fix every bit in $S$ but $i$ according to $c$, and we fix $i$ according to $d$. The remaining function cannot be always 1, as otherwise $i$ would not be necessary in $c$, but $c$ is minimal. But on 0-inputs the remaining function is also sensitive to $i$ because flipping it produces a word which satisfies $c$. We note that in the remaining function the rest of $d$ (not all of $d$ was fixed because $d$ assigns at least 2 positions) is still a valid certificate, since it only assigns one of the fixed bits and it was fixed according to $d$. Similarly to the first case, we can minimize the remaining certificates and obtain a function with $k-1$ certificates satisfying the lemma conditions. Then by induction the remaining function has a 0-sensitivity of $k-1$. Together with the sensitive bit $i$, we obtain $s_0(f) \geq k$. ### Two Certificates with Two Contradictions. Let us denote these 2 certificates as $c$ and $d$ and the two positions where they contradict as $i$ and $j$. For example, we can have 2 certificates like this: $$\begin{pmatrix} &&i&j& \\ 1&1&1&0&*\\ *&*&0&1&1 \end{pmatrix}.$$ Let $S$ be the set of positions $c$ assigns and $T$ be the set of positions $d$ assigns. We then fix every bit in $S$ except $j$ according to $c$ but we fix $j$ according to $d$.
The remaining function cannot be always 1 because, otherwise, $j$ would not be necessary in $c$, but $c$ is minimal. But on 0-inputs the remaining function is also sensitive to $j$, as flipping it produces a word which satisfies $c$. If $|T|=2$, then on 0-inputs the remaining function is also sensitive to $i$ because flipping the $i^{\rm th}$ variable produces a word which satisfies $d$. If $|T|>2$, we examine the $2^{|T|-2}$ subfunctions obtainable by fixing the remaining positions of $T$. We can w.l.o.g. assume that $d$ assigns the value 1 to each of these. Similarly to section \[section:overlaps\], we find the largest non-constant subfunction among these – the subfunction that is not identically 1 with the highest number of bits fixed to 1. Then on 0-inputs we obtain a sensitive bit either at $i$ if this subfunction fixes all these positions to 1 or at a fixed 0 otherwise. Therefore we can always find at least one additional sensitive bit among $T$. Again we can minimize the remaining certificates and obtain a function with $k-2$ certificates satisfying the conditions of the lemma. Then by induction the remaining function has a 0-sensitivity of $k-2$. Together with the two additional sensitive bits found, we obtain $s_0(f) \geq k$. ### Certificate Cycles. A certificate cycle is a sequence of at least 3 certificates where each certificate has 1 contradiction with the next one and the last one has 1 contradiction with the first one. For example, here is a cycle of length 5: $$\begin{pmatrix} j_{5,1}& &j_{1,2}&j_{2,3}& &j_{3,4}&j_{4,5} & \\ 1&1&0&*&*&*&*&*\\ *&*&1&0&*&*&*&*\\ *&*&*&1&1&1&*&*\\ *&*&*&*&*&0&0&*\\ 0&*&*&*&*&*&1&1 \end{pmatrix}.$$ Every certificate in a cycle assigns at least 2 positions, otherwise its neighbours in the cycle would overlap. We denote the length of the cycle by $m$. 
Let $c_1, \ldots, c_m$ be the certificates in this cycle, let $S_1, \ldots, S_m$ be the positions assigned by them, and let $j_{1,2}, \ldots, j_{m,1}$ be the positions where the certificates contradict. We assign values to variables in $c_2, \ldots, c_m$ in the following way. We first assign values to variables in $S_2$ so that the variable $j_{2,3}$ contradicts $c_2$ and is assigned according to $c_3$, but all other variables are assigned according to $c_2$. We have the following properties. First, the remaining function cannot be always 1, as otherwise $j_{2,3}$ would not be necessary in $c_2$, but $c_2$ is minimal. Second, any 0-input that is consistent with the assignment that we made is sensitive to $j_{2,3}$ because flipping this position produces a word which satisfies $c_2$. Third, in the remaining function $c_3$, $\ldots$, $c_m$ are still valid 1-certificates because we have not made any assignments that contradict them. Some of these certificates $c_i$ may no longer be minimal. In this case, we can minimize them by removing unnecessary variables from $c_i$ and $S_i$. We then perform a similar procedure for $c_i$, $i\in\{3, \ldots, m\}$. We assume that the variables in $S_2$, $\ldots$, $S_{i-1}$ have been assigned values. We then assign values to variables in $S_i$. If $c_i$ and $c_{i+1}$ contradict in the variable $j_{i,i+1}$, we assign it according to $c_{i+1}$. (If $i=m$, we define $i+1=1$.) If $c_i$ and $c_{i+1}$ no longer contradict (this can happen if $j_{i,i+1}$ was removed from one of them), we choose a variable in $S_{i}$ arbitrarily and assign it opposite to $c_i$. All other variables in $S_i$ are assigned according to $c_i$. We now have similar properties as before. The remaining function cannot be always 1 and any 0-input that is consistent with our assignment is sensitive to changing a variable in $S_i$. Moreover, $c_{i+1}, \ldots, c_m$ are still valid 1-certificates and, if they are not minimal, they can be made minimal by removing variables.
At the end of this process, we have obtained $m-1$ sensitive bits on 0-inputs: for each of $c_2$, $\ldots$, $c_m$, there is a bit whose flip results in an input satisfying the corresponding $c_i$. We now argue that there should be one more sensitive bit. To find it, we consider the certificate $c_1$. During the process described above, the position $j_{1,2}$ where $c_1$ and $c_2$ contradict was fixed opposite to the value assigned by $c_1$. The position $j_{m,1}$ where $c_1$ and $c_m$ contradict is either unfixed or fixed according to $c_1$. All other positions of $c_1$ are unfixed. If there are no unfixed positions of $c_1$, then changing the position $j_{1,2}$ in a 0-input (that satisfies the partial assignment that we made) leads to a 1-input that satisfies $c_1$. Hence, we have $m$ sensitive bits. Otherwise, let $T \subset S_1$ be the set of positions in $c_1$ that have not been assigned and let $p=|T|$. W.l.o.g., we assume that $c_1$ assigns the value 1 to each of those positions. We examine the $2^{p}$ subfunctions obtainable by fixing the positions of $T$ in some way. Again we find the largest non-constant subfunction among these – the subfunction that is not identically 1 with the highest number of bits fixed to 1. Then on 0-inputs we obtain a sensitive bit either at $j_{1,2}$ if this subfunction fixes all these positions to 1 or at a fixed 0 otherwise. Similarly to the first three cases, we can minimize the remaining certificates and obtain a function with $k-m$ certificates satisfying the conditions of the lemma. By induction, the remaining function has a 0-sensitivity of $k-m$. Together with the $m$ additional sensitive bits we found, we obtain $s_0(f) \geq k$.
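The path case of the lemma can be checked directly on the length-3 path example displayed earlier, taking $f$ to be 1 exactly on the inputs satisfying one of the three certificates; a brute-force sketch (0-based positions):

```python
from itertools import product

# the length-3 certificate path from the example above (0-based positions)
c1 = {0: 1, 1: 1, 2: 0}
c2 = {2: 1, 3: 1, 4: 0}
c3 = {4: 1, 5: 1, 6: 1}
certs = [c1, c2, c3]
n = 7

def f(x):
    return int(any(all(x[p] == v for p, v in c.items()) for c in certs))

def contradictions(c, d):
    """Positions assigned by both certificates with opposite values."""
    return sum(1 for p in set(c) & set(d) if c[p] != d[p])

# every certificate has at most 2 contradictions with all the others combined,
# and the certificates do not overlap, so Lemma [lemma:graph] applies
for c in certs:
    assert sum(contradictions(c, d) for d in certs if d is not c) <= 2

# the lemma guarantees s_0(f) >= 3; check by brute force
s0 = 0
for x in product((0, 1), repeat=n):
    if f(x) == 0:
        s0 = max(s0, sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n)))
assert s0 >= 3
```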
------------------------------------------------------------------------ Conclusions =========== In this paper, we have shown a lower bound on 1-certificate complexity in relation to the ratio of 0-block sensitivity and 0-sensitivity: $$\label{eq:final} C_1(f) \geq \frac{3}{2} \frac{bs_0(f)}{s_0(f)} - \frac{1}{2}.$$ This bound is tight, as, for odd $k/m$, the function constructed in Theorem \[thm:easy\] achieves the following equality: $$\label{eq:easy} C_1(f) = \frac{3}{2} \frac{bs_0(f)}{s_0(f)} + \frac{1}{2}.$$ The difference of $1$ appears as the proof of Theorem \[theorem:result\] requires only a single $1$ in each certificate but the construction of Theorem \[thm:easy\] has two. Thus, we have completely solved the problem of finding the optimal relationship between $s_0(f)$, $bs_0(f)$ and $C_1(f)$. For functions with $s_1(f)=C_1(f)$, such as those constructed in [@AS; @R; @V], this means that $$\label{equation:conjecture} bs_0(f) \leq \left( \frac{2}{3} +o(1) \right) s_0(f) s_1(f).$$ That is, if we use such functions, there is no better separation between $s(f)$ and $bs(f)$ than the currently known one. For the general case, it is important to understand how big the gap between $s_1(f)$ and $C_1(f)$ can be. Currently, we only know that $$\label{eq:as} s_1(f) \leq C_1(f) \leq 2^{s_0(f)-1} s_1(f),$$ with the upper bound shown in [@A+]. In the general case, (\[eq:final\]) together with this bound implies only $$bs_0(f) \leq \left( \frac{2}{3} +o(1) \right) 2^{s_0(f)-1} s_0(f) s_1(f).$$ However, there is no known $f$ that comes even close to saturating the upper bound of (\[eq:as\]) and we suspect that this bound can be significantly improved. There are some examples of $f$ with gaps between $C_1(f)$ and $s_1(f)$, though. For example, the 4-bit non-equality function of [@A] has $s_0(NE)=s_1(NE)=2$ and $C_1(NE)=3$, and it is easy to use it to produce an example $f$ with $s_0(f)=2$, $s_1(f)=2k$ and $C_1(f)=3k$.
Unfortunately, we have not been able to combine this function with the function that achieves (\[eq:easy\]) to obtain a bigger gap between $bs(f)$ and $s(f)$. Because of that, we conjecture that (\[equation:conjecture\]) might actually be optimal. Proving or disproving this conjecture is a very challenging problem. [^1]: This research has received funding from the EU Seventh Framework Programme (FP7/2007-2013) under projects QALGO (No. 600700) and RAQUEL (No. 323970) and ERC Advanced Grant MQC. Part of this work was done while Andris Ambainis was visiting Institute for Advanced Study, Princeton, supported by National Science Foundation under agreement No. DMS-1128155. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
--- abstract: 'As the second stage of the project [*multi-indexed orthogonal polynomials*]{}, we present, in the framework of ‘discrete quantum mechanics’ with real shifts in one dimension, the multi-indexed ($q$-)Racah polynomials. They are obtained from the ($q$-)Racah polynomials by multiple application of the discrete analogue of the Darboux transformations or the Crum-Krein-Adler deletion of ‘virtual state’ vectors, in a similar way to the multi-indexed Laguerre and Jacobi polynomials reported earlier. The virtual state vectors are the ‘solutions’ of the matrix Schrödinger equation with negative ‘eigenvalues’, except for one of the two boundary points.' --- Yukawa Institute Kyoto\ DPSU-12-1\ YITP-12-18\ **Multi-indexed ($q$-)Racah Polynomials\  \ ** **Satoru Odake${}^a$ and Ryu Sasaki${}^b$** $^a$ Department of Physics, Shinshu University,\ Matsumoto 390-8621, Japan\ ${}^b$ Yukawa Institute for Theoretical Physics,\ Kyoto University, Kyoto 606-8502, Japan Introduction {#sec:intro} ============ This is a second report of the project [*multi-indexed orthogonal polynomials*]{}. Following the examples of multi-indexed Laguerre and Jacobi polynomials [@os25], multi-indexed ($q$-)Racah polynomials are constructed in the framework of discrete quantum mechanics with real shifts [@os12]. It should be emphasised that the original ($q$-)Racah polynomials are the most generic members of the Askey scheme of hypergeometric orthogonal polynomials with purely discrete orthogonality measures [@askey; @ismail; @koeswart; @gasper]. They are also called orthogonal polynomials of a discrete variable [@nikiforov]. These new multi-indexed orthogonal polynomials are specified by a set of indices $\mathcal{D}=\{d_1,\ldots,d_M\}$ consisting of distinct natural numbers $d_j\in\mathbb{N}$, on top of $n$, which counts the nodes as in the ordinary orthogonal polynomials. 
The simplest examples, $\mathcal{D}=\{\ell\}$, $\ell\ge1$, $\{P_{\ell,n}(x)\}$, are also called [*exceptional orthogonal polynomials*]{} [@gomez]–[@quesne3]. They are obtained as the main part of the eigenfunctions (vectors) of various [*exactly solvable*]{} Schrödinger equations in one dimensional quantum mechanics and their ‘discrete’ generalisations, in which the corresponding Schrödinger equations are second order difference equations [@os12; @os13; @os14]. They form a complete set of orthogonal polynomials, although they start at a certain positive degree ($\ell\ge1$) rather than a degree zero constant term. The latter situation is essential for avoiding the constraints of Bochner’s theorem [@bochner]. The exceptional Laguerre polynomials with two extra indices $\mathcal{D}=\{d_1,d_2\}$ were introduced in [@gomez3]. We are quite sure that these new orthogonal polynomials will find plenty of novel applications in various branches of science and technology, just as other orthogonal polynomials have. One obvious application is to birth and death processes [@bdp]. These new orthogonal polynomials provide huge stocks of [*exactly solvable birth and death processes*]{} [@bdproc]. The transition probabilities are given explicitly, not in the general spectral representation form of Karlin-McGregor [@karlin]. An interesting possible application is to one-dimensional spin systems and quantum information theory [@vinet].
The main ingredients are the factorised Hamiltonians, the Crum-Krein-Adler formulas [@crum; @adler; @Nsusy] for deletion of eigenstates, [*that is*]{} the multiple Darboux transformations [@darb], and the virtual state solutions [@os25], which are generated by twisting the discrete symmetries of the original Hamiltonians. Most of these methods for discrete Schrödinger equations had been developed [@os12; @os24; @os13; @os14; @os15; @gos; @os22] and they were used for the exceptional ($q$-)Racah polynomials [@os23]. The concept of virtual state ‘solutions’ requires special explanation in the present case. In the ordinary quantum mechanics cases, the virtual state solutions are the solutions of the Schrödinger equation but they do not belong to the Hilbert space of square integrable functions due to the twisted boundary conditions. In the present case, the Hamiltonians are finite-dimensional real symmetric tri-diagonal matrices. Therefore the eigenvalue equations for a given Hamiltonian matrix cannot have any extra solutions other than the genuine eigenvectors. Thus we will use the term [*virtual state vectors*]{}. As will be shown in the text, virtual state vectors are the ‘solutions’ of the eigenvalue problem for a [*virtual*]{} Hamiltonian $\mathcal{H}'$, except for one of the boundaries, $x=x_{\text{max}}$. The virtual Hamiltonians are obtained from the original Hamiltonian by twisting the discrete symmetry and they are linearly related to the original Hamiltonian. Thus the virtual state vectors ‘satisfy’ the eigenvalue equation for the original Hamiltonian, except for one of the two boundaries. The polynomial part of the virtual state vectors had been used for the exceptional ($q$-)Racah polynomials. One distinctive feature of virtual states deletion in discrete quantum mechanics with real shifts is that the size of the Hamiltonian matrix ($x_{\text{max}}$) remains the same.
This is in marked contrast with the eigenstates deletion (Christoffel transformations [@askey; @os22]), in which case the size decreases by the number of deleted eigenstates. This paper is organised as follows. In section two, the main ingredients of the theory, the difference Schrödinger equation for the ($q$-)Racah system, the polynomial eigenvectors and virtual state vectors, are introduced. Starting from the general setting of discrete quantum mechanics with real shifts in §\[sec:form\], the basic properties of the ($q$-)Racah systems are recapitulated in §\[sec:org\_qR\]. Based on the twisting (symmetry), the virtual state vectors are introduced in §\[sec:virtual\]. Section three is the main part of the paper. The basic logic of virtual states deletion in discrete quantum mechanics with real shifts in general is outlined in §\[sec:virtual\_del\]. The explicit forms of multi-indexed ($q$-)Racah polynomials are provided in §\[sec:ef\_miop\_qR\]. The final section is for a summary and comments. Original System {#sec:ori} =============== General formulation {#sec:form} ------------------- Let us recapitulate the discrete quantum mechanics with real shifts developed in [@os12]. We restrict ourselves to the finite dimensional matrix case, $x_{\text{max}}=N$. The Hamiltonian $\mathcal{H}=(\mathcal{H}_{x,y})$ is an irreducible (that is, not the direct sum of two or more such matrices) tri-diagonal real symmetric (Jacobi) matrix and its rows and columns are indexed by non-negative integers $x$ and $y$, $x,y=0,1,\ldots,x_{\text{max}}$. By adding a scalar matrix to the Hamiltonian, the lowest eigenvalue is assumed to be zero. This makes the Hamiltonian [*positive semi-definite*]{}. By a similarity transformation in terms of a diagonal matrix of $\pm1$ entries only, the eigenvector corresponding to the zero eigenvalue can be made to have definite sign, [*i.e.*]{} all the components are positive or negative. 
Then the Hamiltonian $\mathcal{H}$ has the following form $$\mathcal{H}_{x,y}{\stackrel{\text{def}}{=}}-\sqrt{B(x)D(x+1)}\,\delta_{x+1,y}-\sqrt{B(x-1)D(x)}\,\delta_{x-1,y} +\bigl(B(x)+D(x)\bigr)\delta_{x,y}, \label{Hdef}$$ in which the potential functions $B(x)$ and $D(x)$ are real and positive but vanish at the boundary: $$\begin{aligned} &B(x)>0\ \ (x=0,1,\ldots,x_{\text{max}}-1),\quad B(x_{\text{max}})=0,{\nonumber\\}&D(x)>0\ \ (x=1,2,\ldots,x_{\text{max}}),\quad D(0)=0. \label{BDcondition}\end{aligned}$$ The Schrödinger equation is the eigenvalue problem for the hermitian matrix $\mathcal{H}$, $$\mathcal{H}\phi_n(x)=\mathcal{E}_n\phi_n(x)\quad (n=0,1,\ldots,n_{\text{max}}),\quad 0=\mathcal{E}_0<\mathcal{E}_1<\cdots<\mathcal{E}_{n_{\text{max}}}, \label{schreq0}$$ where the eigenvector is $\phi_n=(\phi_n(x))_{x=0,1,\ldots,x_{\text{max}}}$ and $n_{\text{max}}=N$. Reflecting the [*positive semi-definiteness*]{} and based on the boundary conditions , the Hamiltonian can be expressed in a factorised form: $$\begin{aligned} &\mathcal{H}=\mathcal{A}^{\dagger}\mathcal{A},\qquad \mathcal{A}=(\mathcal{A}_{x,y}), \ \ \mathcal{A}^{\dagger}=((\mathcal{A}^{\dagger})_{x,y}) =(\mathcal{A}_{y,x}), \ \ (x,y=0,1,\ldots,x_{\text{max}}), \label{factor}\\ &\mathcal{A}_{x,y}{\stackrel{\text{def}}{=}}\sqrt{B(x)}\,\delta_{x,y}-\sqrt{D(x+1)}\,\delta_{x+1,y},\quad (\mathcal{A}^{\dagger})_{x,y}= \sqrt{B(x)}\,\delta_{x,y}-\sqrt{D(x)}\,\delta_{x-1,y}.\end{aligned}$$ Here $\mathcal{A}$ ($\mathcal{A}^\dagger$) is an upper (lower) triangular matrix with the diagonal and the super(sub)-diagonal entries only. The zero mode equation, $\mathcal{A}\phi_0=0$, is $$\begin{aligned} &\sqrt{B(x)}\,\phi_0(x)-\sqrt{D(x+1)}\,\phi_0(x+1)=0 \ \ (x=0,1,\ldots,x_{\text{max}}-1),\\ &\sqrt{B(x_{\text{max}})}\,\phi_0(x_{\text{max}})=0,\end{aligned}$$ and the second equation is trivially satisfied by the boundary condition $B(x_{\text{max}})=0$. 
The groundstate eigenvector is easily obtained: $$\phi_0(x)=\sqrt{\prod_{y=0}^{x-1}\frac{B(y)}{D(y+1)}} \ \ (x=0,1,\ldots,x_{\text{max}}), \label{phi0=prodB/D}$$ with the normalisation $\phi_0(0)=1$ (convention: $\prod_{k=n}^{n-1}*=1$). Needless to say it is positive for $x=0,1,\ldots,x_{\text{max}}$. For the explicit examples treated in [@os12], $\phi_0^2(x)$ can be analytically continued to the entire complex $x$-plane as a meromorphic function and it vanishes on the integer points outside the boundary; $\phi_0^2(x)=0$ ($x\in\mathbb{Z}\backslash\{0,1,\ldots,x_{\text{max}}\}$). The eigenvectors are mutually orthogonal: $$(\phi_n,\phi_m){\stackrel{\text{def}}{=}}\sum_{x=0}^{x_{\text{max}}}\phi_n(x)\phi_m(x) =\frac{1}{d_n^2}\delta_{nm}\quad (n,m=0,1,\ldots,n_{\text{max}}). \label{ortho}$$ For simplicity in notation, we write $\mathcal{H}$, $\mathcal{A}$ and $\mathcal{A}^{\dagger}$ as follows: $$\begin{aligned} &e^{\pm\partial}=((e^{\pm\partial})_{x,y}) \ \ (x,y=0,1,\ldots,x_{\text{max}}), \ \ (e^{\pm\partial})_{x,y}{\stackrel{\text{def}}{=}}\delta_{x\pm 1,y}, \ \ (e^{\partial})^{\dagger}=e^{-\partial}, \label{partdef}\\ &\mathcal{H}=-\sqrt{B(x)}\,e^{\partial}\sqrt{D(x)} -\sqrt{D(x)}\,e^{-\partial}\sqrt{B(x)}+B(x)+D(x){\nonumber\\}&\phantom{\mathcal{H}}=-\sqrt{B(x)D(x+1)}\,e^{\partial} -\sqrt{B(x-1)D(x)}\,e^{-\partial}+B(x)+D(x), \label{genham}\\ &\mathcal{A}=\sqrt{B(x)}-e^{\partial}\sqrt{D(x)},\quad \mathcal{A}^{\dagger}=\sqrt{B(x)}-\sqrt{D(x)}\,e^{-\partial}. \label{A,Ad}\end{aligned}$$ For the Schrödinger equation , it is sufficient that the functions $B(x)$, $D(x)$ and $\phi_n(x)$ are defined only for the integer grid, $x=0,1,\ldots,x_{\text{max}}$. In this paper we consider the case that the potential functions $B(x)$ and $D(x)$ are rational functions of $x$ or $q^x$ ($0<q<1$). So they are defined for any $x\in\mathbb{C}$ (except for the zeros of their denominators), see the explicit forms –. 
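The construction so far is easy to check numerically. The following sketch (in Python with NumPy; the parameter values $N=5$, $a=-N$, $b=10$, $c=1.2$, $d=1.5$ are our own illustrative choice for the Racah case discussed in §\[sec:org\_qR\] below, not data taken from the text) builds the Jacobi matrix from $B(x)$ and $D(x)$, and verifies the factorisation $\mathcal{H}=\mathcal{A}^{\dagger}\mathcal{A}$, the zero mode $\mathcal{A}\phi_0=0$ with the product-formula ground state, and the spectrum $\mathcal{E}_n=n(n+\tilde{d})$:

```python
import numpy as np

# Illustrative Racah-case parameters (our own choice, satisfying
# a = -N, 0 < d < a+b, 0 < c < 1+d); not taken from the text.
N = 5
a, b, c, d = -N, 10.0, 1.2, 1.5
dt = a + b + c - d - 1                      # \tilde{d}

def B(x):                                   # potential function B(x), Racah case
    return -((x+a)*(x+b)*(x+c)*(x+d)) / ((2*x+d)*(2*x+1+d))

def D(x):                                   # potential function D(x), Racah case
    return -((x+d-a)*(x+d-b)*(x+d-c)*x) / ((2*x-1+d)*(2*x+d))

xs = np.arange(N+1, dtype=float)
Bv, Dv = B(xs), D(xs)                       # B(x_max) = 0, D(0) = 0

# tri-diagonal Hamiltonian and bidiagonal A
off = np.sqrt(Bv[:-1]*Dv[1:])
H = np.diag(Bv + Dv) - np.diag(off, 1) - np.diag(off, -1)
A = np.diag(np.sqrt(Bv)) - np.diag(np.sqrt(Dv[1:]), 1)
assert np.allclose(H, A.T @ A)              # H = A^dagger A

# ground state by the product formula, normalised phi_0(0) = 1
phi0 = np.array([np.sqrt(np.prod(Bv[:x]/Dv[1:x+1])) for x in range(N+1)])
assert np.allclose(A @ phi0, 0)             # zero mode A phi_0 = 0
assert np.all(phi0 > 0)

# exact solvability fixes the spectrum: E_n = n(n + dt)
n = np.arange(N+1)
assert np.allclose(np.linalg.eigvalsh(H), n*(n+dt))
```

The same template works for the $q$-Racah case after swapping in the $q$-dependent potential functions.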
Also we consider the eigenvectors in a factorised form: $$\phi_n(x)=\phi_0(x)\check{P}_n(x),\quad \check{P}_n(x){\stackrel{\text{def}}{=}}P_n\bigl(\eta(x)\bigr). \label{phin=phi0P}$$ Here $P_n(\eta)$ is a polynomial of degree $n$ in $\eta$ and the sinusoidal coordinate $\eta(x)$ is one of the following [@os12]; $\eta(x)=x,\epsilon'x(x+d),1-q^x,q^{-x}-1,\epsilon'(q^{-x}-1)(1-dq^x)$, ($\epsilon'=\pm1$). Since $P_n$ is a polynomial, $\check{P}_n(x)$ is defined for any $x\in\mathbb{C}$. The Schrödinger equation gives a square root free difference equation for the polynomial eigenvector $\check{P}_n(x)$, $$B(x)\bigl(\check{P}_n(x)-\check{P}_n(x+1)\bigr) +D(x)\bigl(\check{P}_n(x)-\check{P}_n(x-1)\bigr) =\mathcal{E}_n\check{P}_n(x)\quad(\forall x\in\mathbb{C}). \label{tHcPn=}$$ Original ($q$-)Racah system {#sec:org_qR} --------------------------- Let us consider the Racah (R) and the $q$-Racah ($q$R) cases. We follow the notation of [@os12]. Although there are four possible parameter choices indexed by $(\epsilon,\epsilon')=(\pm 1,\pm 1)$ in general, as explained in detail in §V.A.1 and §V.A.5 of [@os12], we restrict ourselves to the $(\epsilon,\epsilon')=(1,1)$ case for simplicity of presentation. The set of parameters ${\boldsymbol}{\lambda}$, which is different from the standard one $(\alpha,\beta,\gamma,\delta)$ [@koeswart], its shift ${\boldsymbol}{\delta}$ and $\kappa$ are $$\begin{aligned} \text{R}&:\ {\boldsymbol}{\lambda\,}=(a,b,c,d),\quad {\boldsymbol}{\delta}=(1,1,1,1), \quad\kappa=1, \label{lamdelR}\\ \text{$q$R}&:\ q^{{\boldsymbol}{\lambda}}=(a,b,c,d),\quad {\boldsymbol}{\delta}=(1,1,1,1), \quad\kappa=q^{-1}, \quad 0<q<1, \label{lamdelqR}\end{aligned}$$ where $q^{{\boldsymbol}{\lambda}}$ stands for $q^{(\lambda_1,\lambda_2,\ldots)}=(q^{\lambda_1},q^{\lambda_2},\ldots)$. 
We introduce a new parameter $\tilde{d}$ defined by $$\tilde{d}{\stackrel{\text{def}}{=}}\left\{ \begin{array}{ll} a+b+c-d-1&:\text{R}\\ abcd^{-1}q^{-1}&:\text{$q$R} \end{array}\right..$$ We adopt the following choice of the parameter ranges: $$\begin{aligned} \text{R}:\quad&a=-N,\ \ 0<d<a+b,\ \ 0<c<1+d, \label{pararange}\\ \text{$q$R}:\quad&a=q^{-N},\ \ 0<ab<d<1,\ \ qd<c<1, \label{pararangeq}\end{aligned}$$ and $x_{\text{max}}=n_{\text{max}}=N$. They are sufficient for the positivity of $B(x;{\boldsymbol}{\lambda})$ and $D(x;{\boldsymbol}{\lambda})$ below. Here are the fundamental data [@os12]: $$\begin{aligned} &B(x;{\boldsymbol}{\lambda})= \left\{ \begin{array}{ll} {\displaystyle -\frac{(x+a)(x+b)(x+c)(x+d)}{(2x+d)(2x+1+d)}}&:\text{R}\\[8pt] {\displaystyle-\frac{(1-aq^x)(1-bq^x)(1-cq^x)(1-dq^x)} {(1-dq^{2x})(1-dq^{2x+1})}}&:\text{$q$R} \end{array}\right.\!, \label{Bform}\\ &D(x;{\boldsymbol}{\lambda})= \left\{ \begin{array}{ll} {\displaystyle -\frac{(x+d-a)(x+d-b)(x+d-c)x}{(2x-1+d)(2x+d)}}&:\text{R}\\[8pt] {\displaystyle-\tilde{d}\, \frac{(1-a^{-1}dq^x)(1-b^{-1}dq^x)(1-c^{-1}dq^x)(1-q^x)} {(1-dq^{2x-1})(1-dq^{2x})}}&:\text{$q$R} \end{array}\right.\!, \label{Dform}\\ &\mathcal{H}({\boldsymbol}{\lambda})\phi_{n}(x;{\boldsymbol}{\lambda}) =\mathcal{E}_{n}({\boldsymbol}{\lambda}) \phi_{n}(x;{\boldsymbol}{\lambda}) \ \ (x=0,1,\ldots,x_{\text{max}};n=0,1,\ldots,n_{\text{max}}), \label{schreq}\\ &\phi_{n}(x;{\boldsymbol}{\lambda})=\phi_0(x;{\boldsymbol}{\lambda})\check{P}_n(x;{\boldsymbol}{\lambda}), \label{facsol}\\ &\mathcal{E}_n({\boldsymbol}{\lambda})= \left\{ \begin{array}{ll} n(n+\tilde{d})&:\text{R}\\ (q^{-n}-1)(1-\tilde{d}q^n)&:\text{$q$R} \end{array}\right.\!,\quad \eta(x;{\boldsymbol}{\lambda})= \left\{ \begin{array}{ll} x(x+d)&:\text{R}\\ (q^{-x}-1)(1-dq^x)&:\text{$q$R} \end{array}\right.\!, \label{etadefs}\\ & \varphi(x;{\boldsymbol}{\lambda})= \left\{ \begin{array}{ll} {\displaystyle\frac{2x+d+1}{d+1}}&:\text{R}\\[6pt] 
{\displaystyle\frac{q^{-x}-dq^{x+1}}{1-dq}}&:\text{$q$R} \end{array}\right.\!,\\ &\check{P}_n(x;{\boldsymbol}{\lambda}) =P_n\bigl(\eta(x;{\boldsymbol}{\lambda});{\boldsymbol}{\lambda}\bigr)= \left\{ \begin{array}{ll} {\displaystyle {}_4F_3\Bigl( \genfrac{}{}{0pt}{}{-n,\,n+\tilde{d},\,-x,\,x+d} {a,\,b,\,c}\Bigm|1\Bigr)}&:\text{R}\\ {\displaystyle {}_4\phi_3\Bigl( \genfrac{}{}{0pt}{}{q^{-n},\,\tilde{d}q^n,\,q^{-x},\,dq^x} {a,\,b,\,c}\Bigm|q\,;q\Bigr)}&:\text{$q$R} \end{array}\right. \label{qracah}\\ &\phantom{\check{P}_n(x;{\boldsymbol}{\lambda}) =P_n\bigl(\eta(x;{\boldsymbol}{\lambda});{\boldsymbol}{\lambda}\bigr)} =\left\{ \begin{array}{ll} {\displaystyle R_n\bigl(\eta(x;{\boldsymbol}{\lambda});a-1,\tilde{d}-a,c-1,d-c\bigr)} &:\text{R}\\ {\displaystyle R_n\bigl(1+d+\eta(x;{\boldsymbol}{\lambda}); aq^{-1},\tilde{d}a^{-1},cq^{-1},dc^{-1}|q\bigr)}&:\text{$q$R} \end{array}\right.\!, \label{Pn=R,qR}\\ &\phi_0(x;{\boldsymbol}{\lambda})^2=\left\{ \begin{array}{ll} {\displaystyle \frac{(a,b,c,d)_x}{(1+d-a,1+d-b,1+d-c,1)_x}\, \frac{2x+d}{d} }&:\text{R}\\[8pt] {\displaystyle \frac{(a,b,c,d\,;q)_x} {(a^{-1}dq,b^{-1}dq,c^{-1}dq,q\,;q)_x\,\tilde{d}^x}\, \frac{1-dq^{2x}}{1-d} }&:\text{$q$R} \end{array}\right.\!,\\ &d_n({\boldsymbol}{\lambda})^2 =\left\{ \begin{array}{ll} {\displaystyle \frac{(a,b,c,\tilde{d})_n} {(1+\tilde{d}-a,1+\tilde{d}-b,1+\tilde{d}-c,1)_n}\, \frac{2n+\tilde{d}}{\tilde{d}} }&\\[8pt] {\displaystyle \quad\times \frac{(-1)^N(1+d-a,1+d-b,1+d-c)_N}{(\tilde{d}+1)_N(d+1)_{2N}} }&:\text{R}\\[8pt] {\displaystyle \frac{(a,b,c,\tilde{d}\,;q)_n} {(a^{-1}\tilde{d}q,b^{-1}\tilde{d}q,c^{-1}\tilde{d}q,q\,;q)_n\,d^n}\, \frac{1-\tilde{d}q^{2n}}{1-\tilde{d}} }&\\[8pt] {\displaystyle \quad\times \frac{(-1)^N(a^{-1}dq,b^{-1}dq,c^{-1}dq\,;q)_N\,\tilde{d}^Nq^{\frac12N(N+1)}} {(\tilde{d}q\,;q)_N(dq\,;q)_{2N}} }&:\text{$q$R} \end{array}\right.\!. \label{dn2}\end{aligned}$$ Here $R_n(\cdots)$ in are the standard notation of the ($q$-)Racah polynomial in [@koeswart]. 
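These data can be verified directly. The sketch below (Python with NumPy; the Racah parameter values are our own illustrative choice) implements $\check{P}_n(x;{\boldsymbol}{\lambda})$ as the terminating ${}_4F_3$ sum and checks, at generic non-grid points $x$, both the square-root-free difference equation and the forward shift relation $\mathcal{F}({\boldsymbol}{\lambda})\check{P}_n(x;{\boldsymbol}{\lambda})=\mathcal{E}_n({\boldsymbol}{\lambda})\check{P}_{n-1}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})$ stated in the next subsection:

```python
import numpy as np
from math import factorial

# Illustrative Racah-case parameters (our own choice): (a, b, c, d)
N = 5
lam = (-5.0, 10.0, 1.2, 1.5)

def B(x, lam):
    a, b, c, d = lam
    return -((x+a)*(x+b)*(x+c)*(x+d)) / ((2*x+d)*(2*x+1+d))

def D(x, lam):
    a, b, c, d = lam
    return -((x+d-a)*(x+d-b)*(x+d-c)*x) / ((2*x-1+d)*(2*x+d))

def poch(z, k):
    """Pochhammer symbol (z)_k = z (z+1) ... (z+k-1)."""
    r = 1.0
    for j in range(k):
        r *= z + j
    return r

def P(n, x, lam):
    """check{P}_n(x; lam) as a terminating 4F3 sum, Racah case."""
    a, b, c, d = lam
    dt = a + b + c - d - 1
    return sum(poch(-n, k)*poch(n+dt, k)*poch(-x, k)*poch(x+d, k)
               / (poch(a, k)*poch(b, k)*poch(c, k)*factorial(k))
               for k in range(n+1))

def E(n, lam):                      # E_n = n(n + \tilde{d}), Racah case
    a, b, c, d = lam
    return n*(n + a + b + c - d - 1)

# (i) the difference equation holds at generic (non-grid) x
for x in (0.3, 1.7, 4.0):
    for n in range(N+1):
        lhs = (B(x, lam)*(P(n, x, lam) - P(n, x+1, lam))
               + D(x, lam)*(P(n, x, lam) - P(n, x-1, lam)))
        assert np.isclose(lhs, E(n, lam)*P(n, x, lam))

# (ii) forward shift: B(0;lam) varphi(x;lam)^{-1}(1 - e^partial) lowers the degree
lamd = tuple(l + 1 for l in lam)    # lam + delta = (a+1, b+1, c+1, d+1)
a, b, c, d = lam
for x in (0.4, 2.6):
    for n in range(1, N+1):
        F = B(0, lam)*(d+1)/(2*x+d+1)*(P(n, x, lam) - P(n, x+1, lam))
        assert np.isclose(F, E(n, lam)*P(n-1, x, lamd))
```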
It should be emphasised that the quantities $B(x;{\boldsymbol}{\lambda})$, $D(x;{\boldsymbol}{\lambda})$, $\mathcal{E}_n({\boldsymbol}{\lambda})$, $\check{P}_n(x;{\boldsymbol}{\lambda})$, $\phi_0(x;{\boldsymbol}{\lambda})^2$, $d_n({\boldsymbol}{\lambda})^2$ are formally symmetric under the permutation of $(a,b,c)$, although their ranges are restricted as above by –. Here is a remark on the polynomial $\check{P}_n(x;{\boldsymbol}{\lambda})$, which is in fact a polynomial in the sinusoidal coordinate $\eta(x;{\boldsymbol}{\lambda})$ . The sinusoidal coordinate has a special dynamical meaning [@os12; @os13; @os7]. The Heisenberg operator solution for $\eta(x;{\boldsymbol}{\lambda})$ can be expressed in a closed form. This means that its time evolution is a sinusoidal motion. Let $R$ be the ring of polynomials in $x$ (the Racah case) or the ring of Laurent polynomials in $q^x$ (the $q$-Racah case). Let us introduce an automorphism $\mathcal{I}$ in $R$ by $$\mathcal{I}(x)=-x-d \quad :\text{R},\qquad \mathcal{I}(q^x)=q^{-x}d^{-1} \quad :q\text{R}. \label{autom}$$ Obviously it is an involution $\mathcal{I}^2=\text{id}$. The following remark is important.\ [**Remark**]{}: If a (Laurent) polynomial $\check{f}$ in $x$ ($q^x$) is invariant under the above involution, it is a polynomial in the sinusoidal coordinate $\eta(x;{\boldsymbol}{\lambda})$: $$\mathcal{I}\bigl(\check{f}(x)\bigr)=\check{f}(x) \ \Leftrightarrow\ \check{f}(x)=f\bigl(\eta(x;{\boldsymbol}{\lambda})\bigr). 
\label{remark}$$ The system is shape invariant [@genden; @os12], $$\mathcal{A}({\boldsymbol}{\lambda})\mathcal{A}({\boldsymbol}{\lambda})^{\dagger} =\kappa\mathcal{A}({\boldsymbol}{\lambda}+{\boldsymbol}{\delta})^{\dagger} \mathcal{A}({\boldsymbol}{\lambda}+{\boldsymbol}{\delta})+\mathcal{E}_1({\boldsymbol}{\lambda}),$$ which is a sufficient condition for exact solvability and it provides the explicit formulas for the energy eigenvalues and the eigenfunctions, [*i.e.*]{} the generalised Rodrigues formula [@os12]. The forward and backward shift relations are $$\mathcal{F}({\boldsymbol}{\lambda})\check{P}_n(x;{\boldsymbol}{\lambda}) =\mathcal{E}_n({\boldsymbol}{\lambda})\check{P}_{n-1}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta}), \quad \mathcal{B}({\boldsymbol}{\lambda})\check{P}_{n-1}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta}) =\check{P}_n(x;{\boldsymbol}{\lambda}), \label{BPn-1=Pn}$$ where the forward and backward shift operators are $$\mathcal{F}({\boldsymbol}{\lambda})=B(0;{\boldsymbol}{\lambda})\varphi(x;{\boldsymbol}{\lambda})^{-1} (1-e^{\partial}), \ \ \mathcal{B}({\boldsymbol}{\lambda})=B(0;{\boldsymbol}{\lambda})^{-1} \bigl(B(x;{\boldsymbol}{\lambda})-D(x;{\boldsymbol}{\lambda})e^{-\partial}\bigr) \varphi(x;{\boldsymbol}{\lambda}).$$ Symmetry and virtual state vectors {#sec:virtual} ---------------------------------- Let us define the twist operation $\mathfrak{t}$ of the parameters: $$\mathfrak{t}({\boldsymbol}{\lambda}){\stackrel{\text{def}}{=}}(\lambda_4-\lambda_1+1,\lambda_4-\lambda_2+1,\lambda_3,\lambda_4), \quad \mathfrak{t}^2=\text{id}. 
\label{twist}$$ We introduce two functions $B'(x)$ and $D'(x)$ by $$B'(x;{\boldsymbol}{\lambda}){\stackrel{\text{def}}{=}}B\bigl(x;\mathfrak{t}({\boldsymbol}{\lambda})\bigr),\quad D'(x;{\boldsymbol}{\lambda}){\stackrel{\text{def}}{=}}D\bigl(x;\mathfrak{t}({\boldsymbol}{\lambda})\bigr), \label{B'D'}$$ namely, $$\begin{aligned} &B'(x;{\boldsymbol}{\lambda})= \left\{ \begin{array}{ll} {\displaystyle -\frac{(x+d-a+1)(x+d-b+1)(x+c)(x+d)}{(2x+d)(2x+1+d)}}&:\text{R}\\[8pt] {\displaystyle -\frac{(1-a^{-1}dq^{x+1})(1-b^{-1}dq^{x+1})(1-cq^x)(1-dq^x)} {(1-dq^{2x})(1-dq^{2x+1})}}&:\text{$q$R} \end{array}\right.\!,\\ &D'(x;{\boldsymbol}{\lambda})= \left\{ \begin{array}{ll} {\displaystyle -\frac{(x+a-1)(x+b-1)(x+d-c)x}{(2x-1+d)(2x+d)}}&:\text{R}\\[8pt] {\displaystyle-\frac{cdq}{ab}\, \frac{(1-aq^{x-1})(1-bq^{x-1})(1-c^{-1}dq^x)(1-q^x)} {(1-dq^{2x-1})(1-dq^{2x})}}&:\text{$q$R} \end{array}\right.\!.\end{aligned}$$ We restrict the parameter range $$\text{R}:\ \ d+M<a+b,\qquad \text{$q$R}:\ \ ab<dq^M, \label{Mrange}$$ in which $M$ is a positive integer and later it will be identified with the total number of deleted virtual states. It is easy to verify $$\begin{aligned} &B(x;{\boldsymbol}{\lambda})D(x+1;{\boldsymbol}{\lambda}) =\alpha({\boldsymbol}{\lambda})^2B'(x;{\boldsymbol}{\lambda})D'(x+1;{\boldsymbol}{\lambda}), \label{BD=B'D'}\\ &B(x;{\boldsymbol}{\lambda})+D(x;{\boldsymbol}{\lambda}) =\alpha({\boldsymbol}{\lambda})\bigl(B'(x;{\boldsymbol}{\lambda}) +D'(x;{\boldsymbol}{\lambda})\bigr)+\alpha'({\boldsymbol}{\lambda}), \label{BD=B'D'2}\\ &B'(x;{\boldsymbol}{\lambda})>0\ \ (x=0,1,\ldots,x_{\text{max}}+M-1), \label{B'>0,..}\\ &D'(x;{\boldsymbol}{\lambda})>0\ \ (x=1,2,\ldots,x_{\text{max}}), \ \ D'(0;{\boldsymbol}{\lambda})=D'(x_{\text{max}}+1;{\boldsymbol}{\lambda})=0. 
\label{D'>0,..}\end{aligned}$$ Here the constant $\alpha({\boldsymbol}{\lambda})$ is positive and $\alpha'({\boldsymbol}{\lambda})$ is negative: $$\begin{aligned} &0<\alpha({\boldsymbol}{\lambda})=\left\{ \begin{array}{ll} 1&:\text{R}\\ abd^{-1}q^{-1}&:\text{$q$R} \end{array}\right.,\quad 0>\alpha'({\boldsymbol}{\lambda})=\left\{ \begin{array}{ll} -c(a+b-d-1)&:\text{R}\\ -(1-c)(1-abd^{-1}q^{-1})&:\text{$q$R} \end{array}\right..\end{aligned}$$ The above relations – imply that we can define a virtual Hamiltonian $\mathcal{H}'$ by the twisted parameters (the ${\boldsymbol}{\lambda}$ dependence is suppressed for simplicity): $$\begin{aligned} \mathcal{H}'({\boldsymbol}{\lambda})&{\stackrel{\text{def}}{=}}\mathcal{H}\bigl(\mathfrak{t}({\boldsymbol}{\lambda})\bigr) =-\sqrt{B'(x)}\,e^{\partial}\sqrt{D'(x)} -\sqrt{D'(x)}\,e^{-\partial}\sqrt{B'(x)}+B'(x)+D'(x), \label{Hprime}\end{aligned}$$ and the original Hamiltonian and the virtual Hamiltonian are linearly related $$\begin{aligned} &\mathcal{H}({\boldsymbol}{{\boldsymbol}{\lambda}})= \alpha({\boldsymbol}{\lambda})\mathcal{H}\bigl(\mathfrak{t}({\boldsymbol}{\lambda})\bigr) +\alpha'({\boldsymbol}{\lambda}). \label{HH'} \end{aligned}$$ This also means that $\mathcal{H}(\mathfrak{t}({\boldsymbol}{\lambda}))$ is [*positive definite*]{} and it has [*no zero-mode*]{}. 
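The linear relation between the original and the virtual Hamiltonian, and the resulting positive definiteness of $\mathcal{H}(\mathfrak{t}({\boldsymbol}{\lambda}))$, can be confirmed numerically. A sketch in the Racah case (the parameter values are our own illustrative choice; $\alpha=1$ and $\alpha'=-c(a+b-d-1)$ there):

```python
import numpy as np

# Illustrative Racah-case parameters (our own choice)
N = 5
a, b, c, d = -N, 10.0, 1.2, 1.5
alpha, alphap = 1.0, -c*(a+b-d-1)   # alpha > 0, alpha' < 0

def B(x):  return -((x+a)*(x+b)*(x+c)*(x+d)) / ((2*x+d)*(2*x+1+d))
def D(x):  return -((x+d-a)*(x+d-b)*(x+d-c)*x) / ((2*x-1+d)*(2*x+d))
# twisted potentials B'(x), D'(x): parameters (d-a+1, d-b+1, c, d)
def Bp(x): return -((x+d-a+1)*(x+d-b+1)*(x+c)*(x+d)) / ((2*x+d)*(2*x+1+d))
def Dp(x): return -((x+a-1)*(x+b-1)*(x+d-c)*x) / ((2*x-1+d)*(2*x+d))

def tridiag(Bv, Dv):
    """Jacobi matrix: B+D on the diagonal, -sqrt(B(x)D(x+1)) off it."""
    off = np.sqrt(Bv[:-1]*Dv[1:])
    return np.diag(Bv + Dv) - np.diag(off, 1) - np.diag(off, -1)

xs = np.arange(N+1, dtype=float)
H  = tridiag(B(xs),  D(xs))         # original Hamiltonian
Hp = tridiag(Bp(xs), Dp(xs))        # virtual Hamiltonian H(t(lambda))

# the linear relation H = alpha H' + alpha'
assert np.allclose(H, alpha*Hp + alphap*np.eye(N+1))
# H' is positive definite: its lowest eigenvalue is -alpha'/alpha > 0
assert np.linalg.eigvalsh(Hp)[0] > 0
```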
In other words, the two term recurrence relation determining the ‘zero-mode’ of $\mathcal{H}(\mathfrak{t}({\boldsymbol}{\lambda}))$ $$\begin{aligned} &\mathcal{A}\bigl(\mathfrak{t}({\boldsymbol}{\lambda})\bigr)= \sqrt{B'(x;{\boldsymbol}{\lambda})}-e^{\partial}\sqrt{D'(x;{\boldsymbol}{\lambda})},{\nonumber\\}&\mathcal{A}\bigl(\mathfrak{t}({\boldsymbol}{\lambda})\bigr) \tilde{\phi}_0(x;{\boldsymbol}{\lambda})=0\quad(x=0,1,\ldots,x_{\text{max}}-1), \label{aprimezero}\end{aligned}$$ can be ‘solved’ from $x=0$ to $x=x_\text{max}-1$ to determine $$\tilde{\phi}_0(x;{\boldsymbol}{\lambda}){\stackrel{\text{def}}{=}}\sqrt{\prod_{y=0}^{x-1}\frac{B'(y;{\boldsymbol}{\lambda})}{D'(y+1;{\boldsymbol}{\lambda})}} \qquad (x=0,1,\ldots,x_{\text{max}}). \label{tphi0}$$ But at the end point $x=x_\text{max}$, the ‘zero-mode’ equation is not satisfied, because of the boundary condition $$B'(x_\text{max};{\boldsymbol}{\lambda})\neq 0,\quad \mathcal{A}\bigl(\mathfrak{t}({\boldsymbol}{\lambda})\bigr) \tilde{\phi}_0(x;{\boldsymbol}{\lambda})\neq0\ \ (x=x_{\text{max}}). \label{virtnon}$$ The new Schrödinger equation $$\mathcal{H}'({\boldsymbol}{\lambda})\tilde{\phi}_{\text{v}}(x;{\boldsymbol}{\lambda}) =\mathcal{E}'_{\text{v}}({\boldsymbol}{\lambda}) \tilde{\phi}_{\text{v}}(x;{\boldsymbol}{\lambda}), \label{schreq'}$$ can be [*almost solved*]{} except for the end point $x=x_\text{max}$ by the factorisation ansatz $$\tilde{\phi}_{\text{v}}(x;{\boldsymbol}{\lambda}) {\stackrel{\text{def}}{=}}\tilde{\phi}_0(x;{\boldsymbol}{\lambda}) \check{\xi}_{\text{v}}(x;{\boldsymbol}{\lambda}),$$ as in the original ($q$-)Racah system. 
By using the explicit form of $\tilde{\phi}_0(x;{\boldsymbol}{\lambda})$ , the new Schrödinger equation for $x=0,\ldots, x_\text{max}-1$ is rewritten as $$B'(x;{\boldsymbol}{\lambda})\bigl(\check{\xi}_{\text{v}}(x;{\boldsymbol}{\lambda}) -\check{\xi}_{\text{v}}(x+1;{\boldsymbol}{\lambda})\bigr) +D'(x;{\boldsymbol}{\lambda})\bigl(\check{\xi}_{\text{v}}(x;{\boldsymbol}{\lambda}) -\check{\xi}_{\text{v}}(x-1;{\boldsymbol}{\lambda})\bigr) =\mathcal{E}'_{\text{v}}({\boldsymbol}{\lambda})\check{\xi}_{\text{v}}(x;{\boldsymbol}{\lambda}). \label{tH'cxi=}$$ This equation has the same form as that for the ($q$-)Racah polynomials. So its solution for $x\in\mathbb{C}$ is given by the ($q$-)Racah polynomial with the twisted parameters: $$\check{\xi}_{\text{v}}(x;{\boldsymbol}{\lambda}) =\check{P}_{\text{v}}\bigl(x;\mathfrak{t}({\boldsymbol}{\lambda})\bigr),\qquad \mathcal{E}'_{\text{v}}({\boldsymbol}{\lambda}) =\mathcal{E}_{\text{v}}\bigl(\mathfrak{t}({\boldsymbol}{\lambda})\bigr).$$ Among such ‘solutions’, those with negative energy and definite sign, $$\begin{aligned} \check{\xi}_{\text{v}}(x;{\boldsymbol}{\lambda})>0& \ \ (x=0,1,\ldots,x_{\text{max}},x_{\text{max}}+1; \text{v}\in\mathcal{V}), \label{xi>0}\\ \tilde{\mathcal{E}}_{\text{v}}({\boldsymbol}{\lambda})<0&\ \ (\text{v}\in\mathcal{V}), \label{tEv<0}\end{aligned}$$ are called the [*virtual state vectors*]{}: $\{\tilde{\phi}_{\text{v}}(x)\}$, $\text{v}\in \mathcal{V}$. The index set of the virtual state vectors is $$\mathcal{V}=\{1,2,\ldots,\text{v}_{\text{max}}\}, \quad \text{v}_{\text{max}}=\min\bigl\{ [\lambda_1+\lambda_2-\lambda_4-1]', [\tfrac12(\lambda_1+\lambda_2-\lambda_3-\lambda_4)]\bigr\}, \label{vrange}$$ where $[x]$ denotes the greatest integer not exceeding $x$ and $[x]'$ denotes the greatest integer not equal to or exceeding $x$. We will not use the label 0 state for deletion, see . The negative virtual state energy condition is met by $\text{v}_{\text{max}}\leq[\lambda_1+\lambda_2-\lambda_4-1]'$.
To check the positivity of $\check{\xi}_{\text{v}}(x;{\boldsymbol}{\lambda})$ , we write them down explicitly: $$\begin{aligned} \check{\xi}_{\text{v}}(x;{\boldsymbol}{\lambda})&= \left\{ \begin{array}{ll} {\displaystyle {}_4F_3\Bigl( \genfrac{}{}{0pt}{}{-\text{v},\,\text{v}-a-b+c+d+1,\,-x,\,x+d} {d-a+1,\,d-b+1,\,c}\Bigm|1\Bigr)}&:\text{R}\\[8pt] {\displaystyle {}_4\phi_3\Bigl( \genfrac{}{}{0pt}{}{q^{-\text{v}},\,a^{-1}b^{-1}cdq^{\text{v}+1}, \,q^{-x},\,dq^x} {a^{-1}dq,\,b^{-1}dq,\,c}\Bigm|q\,;q\Bigr)}&:\text{$q$R} \end{array}\right.{\nonumber\\}&=\left\{ \begin{array}{ll} {\displaystyle \sum_{k=0}^{\text{v}}\frac{(-\text{v},\text{v}-a-b+c+d+1,-x,x+d)_k} {(d-a+1,d-b+1,c)_k}\frac{1}{k!}}&:\text{R}\\[8pt] {\displaystyle \sum_{k=0}^{\text{v}}\frac{(q^{-\text{v}},a^{-1}b^{-1}cdq^{\text{v}+1}, q^{-x},dq^x;q)_k} {(a^{-1}dq,b^{-1}dq,c;q)_k}\frac{q^k}{(q;q)_k}}&:\text{$q$R} \end{array}\right.. \label{xipos}\end{aligned}$$ Each $k$-th term in the sum is non-negative for $2\text{v}_{\text{max}}\leq\lambda_1+\lambda_2-\lambda_3-\lambda_4$.
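For our illustrative Racah parameters these properties can be checked concretely: the sketch below builds $\tilde{\phi}_{\text{v}}=\tilde{\phi}_0\check{\xi}_{\text{v}}$ for the single virtual state $\text{v}=1$ (the index set is $\mathcal{V}=\{1\}$ for this parameter choice) and verifies that the eigenvalue equation for the original Hamiltonian holds at every grid point except $x=x_{\text{max}}$:

```python
import numpy as np

# Illustrative Racah-case parameters (our own choice); V = {1} here
N = 5
a, b, c, d = -N, 10.0, 1.2, 1.5
v = 1
Etv = -(c+v)*(a+b-d-1-v)            # \tilde{E}_v < 0 (Racah case)

def B(x):  return -((x+a)*(x+b)*(x+c)*(x+d)) / ((2*x+d)*(2*x+1+d))
def D(x):  return -((x+d-a)*(x+d-b)*(x+d-c)*x) / ((2*x-1+d)*(2*x+d))
def Bp(x): return -((x+d-a+1)*(x+d-b+1)*(x+c)*(x+d)) / ((2*x+d)*(2*x+1+d))
def Dp(x): return -((x+a-1)*(x+b-1)*(x+d-c)*x) / ((2*x-1+d)*(2*x+d))

def xi(x):
    """check{xi}_1(x): the v = 1 case of the terminating 4F3 sum above."""
    return 1.0 + (2-a-b+c+d)*x*(x+d)/((d-a+1)*(d-b+1)*c)

# tilde{phi}_0 by the product formula, tilde{phi}_v = tilde{phi}_0 * xi_v
tphi0 = np.array([np.sqrt(np.prod([Bp(y)/Dp(y+1) for y in range(x)]))
                  for x in range(N+1)])
tphiv = tphi0*np.array([xi(float(x)) for x in range(N+1)])
assert np.all(tphiv > 0) and Etv < 0

# original Hamiltonian
xs = np.arange(N+1, dtype=float)
off = np.sqrt(B(xs[:-1])*D(xs[1:]))
H = np.diag(B(xs)+D(xs)) - np.diag(off, 1) - np.diag(off, -1)

# eigen-equation holds except at the boundary x = x_max
r = H @ tphiv - Etv*tphiv
assert np.allclose(r[:-1], 0.0)     # rows 0, ..., x_max - 1
assert abs(r[-1]) > 1e-6            # fails at x = x_max
```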
Here is a summary of the properties of the virtual state vectors: $$\begin{aligned} &\tilde{\phi}_0(x;{\boldsymbol}{\lambda}) {\stackrel{\text{def}}{=}}\phi_0\bigl(x;\mathfrak{t}({\boldsymbol}{\lambda})\bigr),\quad \tilde{\phi}_{\text{v}}(x;{\boldsymbol}{\lambda}) {\stackrel{\text{def}}{=}}\phi_{\text{v}}\bigl(x;\mathfrak{t}({\boldsymbol}{\lambda})\bigr) =\tilde{\phi}_0(x;{\boldsymbol}{\lambda}) \check{\xi}_{\text{v}}(x;{\boldsymbol}{\lambda}) \ \ (\text{v}\in\mathcal{V}), \label{tphiv=}\\ &\check{\xi}_{\text{v}}(x;{\boldsymbol}{\lambda}){\stackrel{\text{def}}{=}}\check{P}_{\text{v}}\bigl(x;\mathfrak{t}({\boldsymbol}{\lambda})\bigr),\quad \check{\xi}_{\text{v}}(x;{\boldsymbol}{\lambda}){\stackrel{\text{def}}{=}}\xi_{\text{v}}\bigl(\eta(x;{\boldsymbol}{\lambda});{\boldsymbol}{\lambda}\bigr), \label{xiv=}\\ &\mathcal{H}({\boldsymbol}{\lambda})\tilde{\phi}_{\text{v}}(x;{\boldsymbol}{\lambda}) =\tilde{\mathcal{E}}_{\text{v}}({\boldsymbol}{\lambda}) \tilde{\phi}_{\text{v}}(x;{\boldsymbol}{\lambda}) \ \ (x=0,1,\ldots,x_{\text{max}}-1),{\nonumber\\}&\mathcal{H}({\boldsymbol}{\lambda}) \tilde{\phi}_{\text{v}}(x_{\text{max}};{\boldsymbol}{\lambda}) \neq\tilde{\mathcal{E}}_{\text{v}}({\boldsymbol}{\lambda}) \tilde{\phi}_{\text{v}}(x_{\text{max}};{\boldsymbol}{\lambda}), \qquad\quad \mathcal{E}'_{\text{v}}({\boldsymbol}{\lambda}) =\mathcal{E}_{\text{v}}\bigl(\mathfrak{t}({\boldsymbol}{\lambda})\bigr), \label{H'tphiv=}\\ &\tilde{\mathcal{E}}_{\text{v}}({\boldsymbol}{\lambda}) =\alpha({\boldsymbol}{\lambda})\mathcal{E}'_{\text{v}}({\boldsymbol}{\lambda}) +\alpha'({\boldsymbol}{\lambda}) =\left\{ \begin{array}{ll} -(c+\text{v})(a+b-d-1-\text{v})&:\text{R}\\ -(1-cq^{\text{v}})(1-abd^{-1}q^{-1-\text{v}})&:\text{$q$R} \end{array}\right., \label{tEv=}\\ &\nu(x;{\boldsymbol}{\lambda}){\stackrel{\text{def}}{=}}\frac{\phi_0(x;{\boldsymbol}{\lambda})}{\tilde{\phi}_0(x;{\boldsymbol}{\lambda})} =\left\{ \begin{array}{ll} {\displaystyle 
\frac{\Gamma(1-a)\Gamma(x+b)\Gamma(d-a+1)\Gamma(b-d-x)} {\Gamma(1-a-x)\Gamma(b)\Gamma(x+d-a+1)\Gamma(b-d)}} &:\text{R}\\[8pt] {\displaystyle \frac{(a^{-1}q^{1-x},b,a^{-1}dq^{x+1},bd^{-1};q)_{\infty}} {(a^{-1}q,bq^x,a^{-1}dq,bd^{-1}q^{-x};q)_{\infty}}} &:\text{$q$R} \end{array}\right.. \label{phi0/tphi0}\end{aligned}$$ Note that $\alpha'({\boldsymbol}{\lambda})=\tilde{\mathcal{E}}_0({\boldsymbol}{\lambda})<0$. The function $\nu(x;{\boldsymbol}{\lambda})$ can be analytically continued into a meromorphic function of $x$ or $q^x$ through the functional relations: $$\nu(x+1;{\boldsymbol}{\lambda})=\frac{B(x;{\boldsymbol}{\lambda})}{\alpha B'(x;{\boldsymbol}{\lambda})} \nu(x;{\boldsymbol}{\lambda}),\quad \nu(x-1;{\boldsymbol}{\lambda})=\frac{D(x;{\boldsymbol}{\lambda})}{\alpha D'(x;{\boldsymbol}{\lambda})} \nu(x;{\boldsymbol}{\lambda}). \label{nurel}$$ Since $B(x_{\text{max}};{\boldsymbol}{\lambda})=0$, it vanishes at the integer points $x_\text{max}+1\leq x\leq x_{\text{max}}+M$, $\nu(x;{\boldsymbol}{\lambda})=0$, while at negative integer points it takes nonzero finite values in general.

Multi-indexed ($q$-)Racah Polynomials {#sec:miop_qR}
=====================================

In this section we apply the Crum-Adler method of virtual states deletion to the exactly solvable systems whose eigenstates are described by the ($q$-)Racah polynomials. Since all the eigenvalues remain the same, [*i.e.*]{} the process is an [*exactly iso-spectral deformation*]{}, the size of the Hamiltonian is unchanged. Various quantities are neatly expressed in terms of a Casoratian, a discrete counterpart of the Wronskian.
The Casorati determinant of a set of $n$ functions $\{f_j(x)\}$ is defined by $$\text{W}[f_1,\ldots,f_n](x) {\stackrel{\text{def}}{=}}\det\Bigl(f_k(x+j-1)\Bigr)_{1\leq j,k\leq n},$$ (for $n=0$, we set $\text{W}[\cdot](x)=1$), which satisfies identities $$\begin{aligned} &\text{W}[gf_1,gf_2,\ldots,gf_n](x) =\prod_{k=0}^{n-1}g(x+k)\cdot\text{W}[f_1,f_2,\ldots,f_n](x), \\ &\text{W}\bigl[\text{W}[f_1,f_2,\ldots,f_n,g], \text{W}[f_1,f_2,\ldots,f_n,h]\,\bigr](x){\nonumber\\}&=\text{W}[f_1,f_2,\ldots,f_n](x+1)\, \text{W}[f_1,f_2,\ldots,f_n,g,h](x) \quad(n\geq 0). $$ Virtual states deletion {#sec:virtual_del} ----------------------- Let us provide the basic formulas starting from one virtual state deletion. For simplicity of presentation the parameter (${\boldsymbol}{\lambda}$) dependence of various quantities is suppressed in this subsection. First we rewrite the original Hamiltonian by introducing potential functions $\hat{B}_{d_1}(x)$ and $\hat{D}_{d_1}(x)$ determined by one of the virtual state polynomials $\check{\xi}_{d_1}(x)$ ($d_1\in\mathcal{V}$): $$\hat{B}_{d_1}(x){\stackrel{\text{def}}{=}}\alpha B'(x) \frac{\check{\xi}_{d_1}(x+1)}{\check{\xi}_{d_1}(x)},\quad \hat{D}_{d_1}(x){\stackrel{\text{def}}{=}}\alpha D'(x) \frac{\check{\xi}_{d_1}(x-1)}{\check{\xi}_{d_1}(x)}. \label{Bd1def}$$ We have $\hat{B}_{d_1}(x)>0$ ($x=0,1,\ldots,x_{\text{max}}$), $\hat{D}_{d_1}(0)=\hat{D}_{d_1}(x_{\text{max}}+1)=0$, $\hat{D}_{d_1}(x)>0$ ($x=1,2,\ldots,x_{\text{max}}$) and $$\begin{aligned} &B(x)D(x+1)=\hat{B}_{d_1}(x)\hat{D}_{d_1}(x+1),\\ &B(x)+D(x)=\hat{B}_{d_1}(x)+\hat{D}_{d_1}(x)+\tilde{\mathcal{E}}_{d_1},\end{aligned}$$ where use is made of in the second equation. 
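As a quick numerical sanity check of the two Casoratian identities stated above, the following sketch evaluates them for arbitrary smooth test functions of our own choosing (the second identity is checked in its $n=1$ instance):

```python
import numpy as np

def W(fs, x):
    """Casorati determinant W[f_1,...,f_n](x) = det( f_k(x+j-1) )_{1<=j,k<=n}."""
    n = len(fs)
    if n == 0:
        return 1.0                  # convention: W[.](x) = 1
    return np.linalg.det(np.array([[f(x + j) for f in fs] for j in range(n)]))

# arbitrary test functions (our own choice)
f1 = lambda t: t*t + 1.0
f2 = lambda t: 2.0**t
g  = lambda t: t + 3.0
h  = lambda t: np.sin(t)

x = 1.5
# first identity: W[g f1, g f2](x) = g(x) g(x+1) W[f1, f2](x)
lhs = W([lambda t: g(t)*f1(t), lambda t: g(t)*f2(t)], x)
assert np.isclose(lhs, g(x)*g(x+1)*W([f1, f2], x))

# second identity, n = 1 instance:
#   W[ W[f1,g], W[f1,h] ](x) = W[f1](x+1) * W[f1, g, h](x)
Wg = lambda t: W([f1, g], t)
Wh = lambda t: W([f1, h], t)
assert np.isclose(W([Wg, Wh], x), f1(x+1)*W([f1, g, h], x))
```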
The original Hamiltonian reads: $$\begin{aligned} &\mathcal{H}=\hat{\mathcal{A}}_{d_1}^{\dagger}\hat{\mathcal{A}}_{d_1} +\tilde{\mathcal{E}}_{d_1},\\ &\hat{\mathcal{A}}_{d_1}{\stackrel{\text{def}}{=}}\sqrt{\hat{B}_{d_1}(x)}-e^{\partial}\sqrt{\hat{D}_{d_1}(x)},\quad \hat{\mathcal{A}}_{d_1}^{\dagger} =\sqrt{\hat{B}_{d_1}(x)}-\sqrt{\hat{D}_{d_1}(x)}\,e^{-\partial}.\end{aligned}$$ The virtual state vector $\tilde{\phi}_{d_1}(x)$ is almost annihilated by $\hat{\mathcal{A}}_{d_1}$, except for the upper end point: $$\hat{\mathcal{A}}_{d_1}\tilde{\phi}_{d_1}(x)=0 \ \ (x=0,1,\ldots,x_{\text{max}}-1),\quad \hat{\mathcal{A}}_{d_1}\tilde{\phi}_{d_1}(x_{\text{max}})\neq 0.$$ The proof is straightforward by direct substitution of and . Next let us define a new Hamiltonian $\mathcal{H}_{d_1}$ by changing the order of the two matrices $\hat{\mathcal{A}}_{d_1}^{\dagger}$ and $\hat{\mathcal{A}}_{d_1}$ together with the sets of new eigenvectors $\phi_{d_1n}(x)$ and new virtual state vectors $\tilde{\phi}_{d_1\text{v}}(x)$: $$\begin{aligned} \mathcal{H}_{d_1}&{\stackrel{\text{def}}{=}}\hat{\mathcal{A}}_{d_1}\hat{\mathcal{A}}_{d_1}^{\dagger} +\tilde{\mathcal{E}}_{d_1},\quad \mathcal{H}_{d_1}=(\mathcal{H}_{d_1\,x,y}) \ \ (x,y=0,1,\ldots,x_{\text{max}}),\\ \phi_{d_1n}(x)&{\stackrel{\text{def}}{=}}\hat{\mathcal{A}}_{d_1}\phi_n(x) \ \ (x=0,1,\ldots,x_{\text{max}}; n=0,1,\ldots,n_{\text{max}}), \label{phid1n}\\ \tilde{\phi}_{d_1\text{v}}(x)&{\stackrel{\text{def}}{=}}\hat{\mathcal{A}}_{d_1}\tilde{\phi}_{\text{v}}(x) +\delta_{x,x_{\text{max}}}\varphi_{d_1\text{v}} \ \ (x=0,1,\ldots,x_{\text{max}};\text{v}\in\mathcal{V}\backslash\{d_1\}), {\nonumber\\}&\varphi_{d_1\text{v}}{\stackrel{\text{def}}{=}}-\frac{\sqrt{\alpha B'(x_{\text{max}})}\,\tilde{\phi}_0(x_{\text{max}})} {\sqrt{\check{\xi}_{d_1}(x_{\text{max}})\check{\xi}_{d_1}(x_{\text{max}}+1)}} \,\check{\xi}_{d_1}(x_{\text{max}})\check{\xi}_{\text{v}}(x_{\text{max}}+1). 
\label{tphid1v}\end{aligned}$$ The $\varphi_{d_1\text{v}}$ term is necessary for the Casoratian expression for $\tilde{\phi}_{d_1\text{v}}(x)$ in to hold at $x=x_{\text{max}}$. It is easy to verify that $\phi_{d_1n}(x)$ is an eigenvector and that $\tilde{\phi}_{d_1\text{v}}(x)$ is a virtual state vector $$\begin{aligned} &\mathcal{H}_{d_1}\phi_{d_1n}(x) =\mathcal{E}_n\phi_{d_1n}(x) \ \ (x=0,1,\ldots,x_{\text{max}}; n=0,1,\ldots,n_{\text{max}}),\\ &\mathcal{H}_{d_1}\tilde{\phi}_{d_1\text{v}}(x) =\tilde{\mathcal{E}}_{\text{v}}\tilde{\phi}_{d_1\text{v}}(x) \ \ (x=0,1,\ldots,x_{\text{max}}-1; \text{v}\in\mathcal{V}\backslash\{d_1\}),{\nonumber\\}&\mathcal{H}_{d_1}\tilde{\phi}_{d_1\text{v}}(x_{\text{max}}) \neq\tilde{\mathcal{E}}_{\text{v}} \tilde{\phi}_{d_1\text{v}}(x_{\text{max}}). $$ For example, $$\begin{aligned} &\mathcal{H}_{d_1}\phi_{d_1n} =(\hat{\mathcal{A}}_{d_1}\hat{\mathcal{A}}_{d_1}^{\dagger} +\tilde{\mathcal{E}}_{d_1})\hat{\mathcal{A}}_{d_1}\phi_n =\hat{\mathcal{A}}_{d_1}(\hat{\mathcal{A}}_{d_1}^{\dagger} \hat{\mathcal{A}}_{d_1}+\tilde{\mathcal{E}}_{d_1})\phi_n\\ &=\hat{\mathcal{A}}_{d_1}\mathcal{H}\phi_n =\hat{\mathcal{A}}_{d_1}\mathcal{E}_n\phi_n =\mathcal{E}_n\hat{\mathcal{A}}_{d_1}\phi_n =\mathcal{E}_n\phi_{d_1n}.\end{aligned}$$ The two Hamiltonians $\mathcal{H}$ and $\mathcal{H}_{d_1}$ are exactly iso-spectral. If the original system is exactly solvable, this new system is also exactly solvable. 
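The exact iso-spectrality is easy to confirm numerically. The sketch below (our own illustrative Racah parameters, deleting the single virtual state $d_1=1$; $\alpha=1$ in the Racah case) builds $\hat{B}_{d_1}$, $\hat{D}_{d_1}$ from the virtual state polynomial, checks the rewriting $\mathcal{H}=\hat{\mathcal{A}}_{d_1}^{\dagger}\hat{\mathcal{A}}_{d_1}+\tilde{\mathcal{E}}_{d_1}$, and verifies that $\mathcal{H}_{d_1}=\hat{\mathcal{A}}_{d_1}\hat{\mathcal{A}}_{d_1}^{\dagger}+\tilde{\mathcal{E}}_{d_1}$ has exactly the same spectrum:

```python
import numpy as np

# Illustrative Racah-case parameters (our own choice); d_1 = 1, alpha = 1
N = 5
a, b, c, d = -N, 10.0, 1.2, 1.5
Et1 = -(c+1)*(a+b-d-2)              # \tilde{E}_{d_1} < 0

def B(x):  return -((x+a)*(x+b)*(x+c)*(x+d)) / ((2*x+d)*(2*x+1+d))
def D(x):  return -((x+d-a)*(x+d-b)*(x+d-c)*x) / ((2*x-1+d)*(2*x+d))
def Bp(x): return -((x+d-a+1)*(x+d-b+1)*(x+c)*(x+d)) / ((2*x+d)*(2*x+1+d))
def Dp(x): return -((x+a-1)*(x+b-1)*(x+d-c)*x) / ((2*x-1+d)*(2*x+d))
def xi(x): return 1.0 + (2-a-b+c+d)*x*(x+d)/((d-a+1)*(d-b+1)*c)  # \check{\xi}_1

xs = np.arange(N+1, dtype=float)
off = np.sqrt(B(xs[:-1])*D(xs[1:]))
H = np.diag(B(xs)+D(xs)) - np.diag(off, 1) - np.diag(off, -1)

# deformed potentials \hat{B}_{d_1}, \hat{D}_{d_1}
Bh = np.array([Bp(x)*xi(x+1)/xi(x) for x in range(N+1)])
Dh = np.array([Dp(x)*xi(x-1)/xi(x) for x in range(N+2)])   # \hat{D}(0) = \hat{D}(N+1) = 0

# bidiagonal \hat{A}_{d_1} and the rewriting of the original Hamiltonian
Ah = np.diag(np.sqrt(Bh)) - np.diag(np.sqrt(Dh[1:N+1]), 1)
assert np.allclose(H, Ah.T @ Ah + Et1*np.eye(N+1))

# reversed-order Hamiltonian H_{d_1}: same size, identical spectrum
Hd1 = Ah @ Ah.T + Et1*np.eye(N+1)
assert np.allclose(np.linalg.eigvalsh(Hd1), np.linalg.eigvalsh(H))
```

The iso-spectrality assertion holds because $\hat{\mathcal{A}}_{d_1}^{\dagger}\hat{\mathcal{A}}_{d_1}$ and $\hat{\mathcal{A}}_{d_1}\hat{\mathcal{A}}_{d_1}^{\dagger}$ are square matrices of the same size, so they share their characteristic polynomial.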
The orthogonality relation for the new eigenvectors is $$\begin{aligned} &\quad(\phi_{d_1n},\phi_{d_1m}) =\sum_{x=0}^{x_{\text{max}}}\phi_{d_1n}(x)\phi_{d_1m}(x){\nonumber\\}&=(\hat{\mathcal{A}}_{d_1}\phi_n,\hat{\mathcal{A}}_{d_1}\phi_m) =(\hat{\mathcal{A}}_{d_1}^{\dagger}\hat{\mathcal{A}}_{d_1}\phi_n,\phi_m) =\bigl((\mathcal{H}-\tilde{\mathcal{E}}_{d_1})\phi_n,\phi_m\bigr){\nonumber\\}&=(\mathcal{E}_n-\tilde{\mathcal{E}}_{d_1})(\phi_n,\phi_m) =(\mathcal{E}_n-\tilde{\mathcal{E}}_{d_1})\frac{1}{d_n^2}\delta_{nm} \ \ (n,m=0,1,\ldots,n_{\text{max}}).\end{aligned}$$ This shows clearly that the [*negative*]{} virtual state energy ($\tilde{\mathcal{E}}_{\text{v}}<0$) is necessary for the positivity of the inner products. The new eigenvector $\phi_{d_1n}(x)$ and the virtual state vector $\tilde{\phi}_{d_1\text{v}}(x)$ are expressed neatly in terms of the Casoratian ($x=0,1,\ldots,x_{\text{max}}$) $$\phi_{d_1n}(x)=\frac{-\sqrt{\alpha B'(x)}\,\tilde{\phi}_0(x)} {\sqrt{\check{\xi}_{d_1}(x)\check{\xi}_{d_1}(x+1)}}\, \text{W}\bigl[\check{\xi}_{d_1},\nu\check{P}_n\bigr](x), \ \ \tilde{\phi}_{d_1\text{v}}(x) =\frac{-\sqrt{\alpha B'(x)}\,\tilde{\phi}_0(x)} {\sqrt{\check{\xi}_{d_1}(x)\check{\xi}_{d_1}(x+1)}}\, \text{W}[\check{\xi}_{d_1},\check{\xi}_{\text{v}}](x). \label{phid1v}$$ We will show that the positivity of the virtual state vector is inherited by the new virtual state vector $\tilde{\phi}_{d_1\text{v}}(x)$ . The Casoratian $\text{W}[\check{\xi}_{d_1},\check{\xi}_{\text{v}}](x)$ has definite sign for $x=0,1,\ldots,x_{\text{max}}+1$, namely all positive or all negative. By using we have $$\alpha B'(x)\text{W}[\check{\xi}_{d_1},\check{\xi}_{\text{v}}](x) =\alpha D'(x)\text{W}[\check{\xi}_{d_1},\check{\xi}_{\text{v}}](x-1) +(\tilde{\mathcal{E}}_{d_1}-\tilde{\mathcal{E}}_{\text{v}}) \check{\xi}_{d_1}(x)\check{\xi}_{\text{v}}(x). 
$$ By setting $x=0,1,\ldots,x_{\text{max}}+1$ in turn, we obtain $$\pm(\tilde{\mathcal{E}}_{d_1}-\tilde{\mathcal{E}}_{\text{v}})>0 \Rightarrow \pm\text{W}[\check{\xi}_{d_1},\check{\xi}_{\text{v}}](x)>0 \ \ (x=0,1,\ldots,x_{\text{max}}+1).$$ Note that the virtual eigenvalues $\{\tilde{\mathcal{E}}_{d_j}\}$ are mutually distinct. We will now show that the new groundstate eigenvector $\phi_{d_10}(x)$ is of definite sign, like the original one $\phi_0(x)$. We show that the Casoratian $\text{W}[\check{\xi}_{d_1},\nu](x)$ has definite sign for $x=0,1,\ldots,x_{\text{max}}$. By writing down the equation $\mathcal{H}\phi_n(x)=\mathcal{E}_n\phi_n(x)$ ($x=0,1,\ldots,x_{\text{max}}$) with $\mathcal{H}=\hat{A}_{d_1}^{\dagger}\hat{A}_{d_1}+\tilde{\mathcal{E}}_{d_1}$ and , we have $$\alpha B'(x)\nu(x+1)\check{P}_n(x+1) +\alpha D'(x)\nu(x-1)\check{P}_n(x-1) =\bigl(B(x)+D(x)-\mathcal{E}_n\bigr)\nu(x)\check{P}_n(x).$$ In terms of the functional relations of $\nu(x)$ , it is reduced to the original difference equation for $\check{P}_n(x)$, and it is valid for any $x\in\mathbb{C}$. By using this, we can show $$\alpha B'(x)\text{W}[\check{\xi}_{d_1},\nu\check{P}_n](x) =\alpha D'(x)\text{W}[\check{\xi}_{d_1},\nu\check{P}_n](x-1) +(\tilde{\mathcal{E}}_{d_1}-\mathcal{E}_n) \check{\xi}_{d_1}(x)\nu(x)\check{P}_n(x). $$ By setting $n=0$ and $x=0,1,\ldots,x_{\text{max}}$ in turn, we obtain $$-\text{W}[\check{\xi}_{d_1},\nu](x)>0 \ \ (x=0,1,\ldots,x_{\text{max}}).$$ Let us rewrite the deformed Hamiltonian $\mathcal{H}_{d_1}$ in the standard form.
The potential functions $B_{d_1}(x)$ and $D_{d_1}(x)$ are introduced: $$\begin{aligned} &B_{d_1}(x){\stackrel{\text{def}}{=}}\alpha B'(x+1) \frac{\check{\xi}_{d_1}(x)}{\check{\xi}_{d_1}(x+1)} \frac{\text{W}[\check{\xi}_{d_1},\nu](x+1)} {\text{W}[\check{\xi}_{d_1},\nu](x)},\\ &D_{d_1}(x){\stackrel{\text{def}}{=}}\alpha D'(x) \frac{\check{\xi}_{d_1}(x+1)}{\check{\xi}_{d_1}(x)} \frac{\text{W}[\check{\xi}_{d_1},\nu](x-1)} {\text{W}[\check{\xi}_{d_1},\nu](x)}.\end{aligned}$$ The positivity of $B_{d_1}(x)$ and $D_{d_1}(x)$ is shown above and the boundary conditions $B_{d_1}(x_{\text{max}})=0$ and $D_{d_1}(0)=0$ are satisfied. They satisfy the relations $$\begin{aligned} &B_{d_1}(x)D_{d_1}(x+1) =\hat{B}_{d_1}(x+1)\hat{D}_{d_1}(x+1),\\ &B_{d_1}(x)+D_{d_1}(x) =\hat{B}_{d_1}(x)+\hat{D}_{d_1}(x+1)+\tilde{\mathcal{E}}_{d_1}.\end{aligned}$$ The standard form Hamiltonian is obtained: $$\begin{aligned} &\mathcal{H}_{d_1}=\mathcal{A}_{d_1}^{\dagger}\mathcal{A}_{d_1},\\ &\mathcal{A}_{d_1}{\stackrel{\text{def}}{=}}\sqrt{B_{d_1}(x)}-e^{\partial}\sqrt{D_{d_1}(x)},\quad \mathcal{A}_{d_1}^{\dagger} =\sqrt{B_{d_1}(x)}-\sqrt{D_{d_1}(x)}\,e^{-\partial},\end{aligned}$$ in which $\mathcal{A}_{d_1}$ annihilates the groundstate eigenvector $$\mathcal{A}_{d_1}\phi_{d_10}(x)=0 \ \ (x=0,1,\ldots,x_{\text{max}}).$$ This deletion of one virtual state vector is essentially the same procedure as that developed for the exceptional orthogonal polynomials in [@os23]. See §\[sec:ef\_miop\_qR\] for the explicit expressions. We repeat the above procedure to obtain further modified systems. The number of deleted virtual state vectors, $M$, should be less than or equal to $|\mathcal{V}|$. Let us assume that we have already deleted $s$ virtual state vectors ($s\geq 1$), which are labeled by $\{d_1,\ldots,d_s\}$ ($d_j\in\mathcal{V}$ : mutually distinct). 
Namely we have $$\begin{aligned} &\mathcal{H}_{d_1\ldots d_s}{\stackrel{\text{def}}{=}}\hat{\mathcal{A}}_{d_1\ldots d_s}\hat{\mathcal{A}}_{d_1\ldots d_s}^{\dagger} +\tilde{\mathcal{E}}_{d_s},\quad \mathcal{H}_{d_1\ldots d_s}=(\mathcal{H}_{d_1\ldots d_s\,x,y})\quad (x,y=0,1,\ldots,x_{\text{max}}), \label{Hd1..ds}\\ &\hat{\mathcal{A}}_{d_1\ldots d_s}{\stackrel{\text{def}}{=}}\sqrt{\hat{B}_{d_1\dots d_s}(x)} -e^{\partial}\sqrt{\hat{D}_{d_1\ldots d_s}(x)}, \quad\hat{\mathcal{A}}_{d_1\ldots d_s}^{\dagger}= \sqrt{\hat{B}_{d_1\ldots d_s}(x)} -\sqrt{\hat{D}_{d_1\ldots d_s}(x)}\,e^{-\partial},\!\!\\ &\hat{B}_{d_1\ldots d_s}(x){\stackrel{\text{def}}{=}}\alpha B'(x+s-1) \frac{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s-1}}](x)} {\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s-1}}](x+1)}\, \frac{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](x+1)} {\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](x)}, \label{Bdsform}\\ &\hat{D}_{d_1\ldots d_s}(x){\stackrel{\text{def}}{=}}\alpha D'(x) \frac{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s-1}}](x+1)} {\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s-1}}](x)}\, \frac{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](x-1)} {\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](x)}, \label{Ddsform}\\ &\phi_{d_1\ldots d_s\,n}(x){\stackrel{\text{def}}{=}}\hat{\mathcal{A}}_{d_1\ldots d_s}\phi_{d_1\ldots d_{s-1}\,n}(x) \ \ (x=0,1,\ldots,x_{\text{max}}; n=0,1,\ldots,n_{\text{max}}),\\ &\tilde{\phi}_{d_1\ldots d_s\,\text{v}}(x){\stackrel{\text{def}}{=}}\hat{\mathcal{A}}_{d_1\ldots d_s} \tilde{\phi}_{d_1\ldots d_{s-1}\,\text{v}}(x) +\delta_{x,x_{\text{max}}}\varphi_{d_1\ldots d_s\,\text{v}}, \ (x=0,1,\ldots,x_{\text{max}}; \text{v}\in\mathcal{V}\backslash\{d_1,\ldots,d_s\}),{\nonumber\\}&\qquad \varphi_{d_1\ldots d_s\,\text{v}}{\stackrel{\text{def}}{=}}\phi_{d_1\ldots d_s\,0}(x_{\text{max}}) \frac{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s}}](x_{\text{max}}) 
\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s-1}}, \check{\xi}_{\text{v}}](x_{\text{max}}+1)} {\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s-1}}](x_{\text{max}}+1) \text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s},\nu](x_{\text{max}})},\\ &\mathcal{H}_{d_1\ldots d_s}\phi_{d_1\ldots d_s\,n}(x) =\mathcal{E}_n\phi_{d_1\ldots d_s\,n}(x) \ \ (x=0,1,\ldots,x_{\text{max}}; n=0,1,\ldots,n_{\text{max}}), \label{Hd1..dsphid1..ds=}\\ &\mathcal{H}_{d_1\ldots d_s}\tilde{\phi}_{d_1\ldots d_s\,\text{v}}(x) =\tilde{\mathcal{E}}_\text{v}\tilde{\phi}_{d_1\ldots d_s\,\text{v}}(x) \ \ (x=0,1,\ldots,x_{\text{max}}-1; \text{v}\in\mathcal{V}\backslash\{d_1,\ldots,d_s\}), \label{Hd1..dstphid1..ds=}\\ &(\phi_{d_1\ldots d_s\,n},\phi_{d_1\ldots d_s\,m}) {\stackrel{\text{def}}{=}}\sum_{x=0}^{x_{\text{max}}} \phi_{d_1\ldots d_s\,n}(x)\phi_{d_1\ldots d_s\,m}(x) =\prod_{j=1}^s(\mathcal{E}_n-\tilde{\mathcal{E}}_{d_j})\cdot \frac{1}{d_n^2}\delta_{nm}{\nonumber\\}&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad (n,m=0,1,\ldots,n_{\text{max}}). \label{(phid1..dsm,phid1..dsn)}\end{aligned}$$ The eigenvectors and the virtual state vectors have Casoratian expressions ($x=0,1,\ldots,x_{\text{max}}$): $$\begin{aligned} &\phi_{d_1\ldots d_s\,n}(x)= \frac{(-1)^s\sqrt{\prod_{j=1}^s\alpha B'(x+j-1)} \,\tilde{\phi}_0(x)\, \text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s},\nu\check{P}_n](x)} {\sqrt{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](x)\, \text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](x+1)}}, \label{phid1..dsn}\\[2pt] &\tilde{\phi}_{d_1\ldots d_s\,\text{v}}(x)= \frac{(-1)^s\sqrt{\prod_{j=1}^s\alpha B'(x+j-1)} \,\tilde{\phi}_0(x)\, \text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}, \check{\xi}_{\text{v}}](x)} {\sqrt{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](x)\, \text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](x+1)}}. 
\label{phitd1..dsv}\end{aligned}$$ The Casoratian in the virtual state vectors $\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}, \check{\xi}_{\text{v}}](x)$ has definite sign for $x=0,1,\ldots,x_{\text{max}}+1$, and that appearing in the groundstate eigenvector $\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s},\nu](x)$ has definite sign for $x=0,1,\ldots,x_{\text{max}}$, too. The next step begins with rewriting the Hamiltonian $\mathcal{H}_{d_1\ldots d_s}$ by choosing the next virtual state to be deleted, $d_{s+1}\in\mathcal{V}\backslash\{d_1,\ldots,d_s\}$. The potential functions $\hat{B}_{d_1\ldots d_{s+1}}(x)$ and $\hat{D}_{d_1\ldots d_{s+1}}(x)$ are defined as in \eqref{Bdsform}–\eqref{Ddsform} with $s\to s+1$. We have $\hat{B}_{d_1\ldots d_{s+1}}(x)>0$ ($x=0,1,\ldots,x_{\text{max}}$), $\hat{D}_{d_1\ldots d_{s+1}}(0)= \hat{D}_{d_1\ldots d_{s+1}}(x_{\text{max}}+1)=0$, $\hat{D}_{d_1\ldots d_{s+1}}(x)>0$ ($x=1,2,\ldots,x_{\text{max}}$). These functions satisfy the relations $$\begin{aligned} &\hat{B}_{d_1\ldots d_{s+1}}(x)\hat{D}_{d_1\ldots d_{s+1}}(x+1) =\hat{B}_{d_1\ldots d_s}(x+1)\hat{D}_{d_1\ldots d_s}(x+1),\\ &\hat{B}_{d_1\ldots d_{s+1}}(x)+\hat{D}_{d_1\ldots d_{s+1}}(x) +\tilde{\mathcal{E}}_{d_{s+1}} =\hat{B}_{d_1\ldots d_s}(x)+\hat{D}_{d_1\ldots d_s}(x+1) +\tilde{\mathcal{E}}_{d_s}.\end{aligned}$$ The Hamiltonian $\mathcal{H}_{d_1\ldots d_s}$ is rewritten as: $$\begin{gathered} \mathcal{H}_{d_1\ldots d_s} =\hat{\mathcal{A}}_{d_1\ldots d_{s+1}}^{\dagger} \hat{\mathcal{A}}_{d_1\ldots d_{s+1}} +\tilde{\mathcal{E}}_{d_{s+1}},\\ \hat{\mathcal{A}}_{d_1\ldots d_{s+1}}\!{\stackrel{\text{def}}{=}}\! \sqrt{\hat{B}_{d_1\ldots d_{s+1}}(x)} -e^{\partial}\sqrt{\hat{D}_{d_1\ldots d_{s+1}}(x)},\ \hat{\mathcal{A}}_{d_1\ldots d_{s+1}}^{\dagger}\! 
=\!\sqrt{\hat{B}_{d_1\ldots d_{s+1}}(x)} -\sqrt{\hat{D}_{d_1\ldots d_{s+1}}(x)}\,e^{-\partial}.\end{gathered}$$ Now let us define a new Hamiltonian $\mathcal{H}_{d_1\ldots d_{s+1}}$ by changing the order of $\hat{\mathcal{A}}_{d_1\ldots d_{s+1}}^{\dagger}$ and $\hat{\mathcal{A}}_{d_1\ldots d_{s+1}}$ together with the eigenvectors $\phi_{d_1\ldots d_{s+1}\,n}(x)$ and the virtual state vectors $\tilde{\phi}_{d_1\ldots d_{s+1}\,\text{v}}(x)$: $$\begin{aligned} &\mathcal{H}_{d_1\ldots d_{s+1}}{\stackrel{\text{def}}{=}}\hat{\mathcal{A}}_{d_1\ldots d_{s+1}} \hat{\mathcal{A}}_{d_1\ldots d_{s+1}}^{\dagger} +\tilde{\mathcal{E}}_{d_{s+1}},\quad \mathcal{H}_{d_1\ldots d_{s+1}}=(\mathcal{H}_{d_1\ldots d_{s+1}\,x,y}) \ \ (x,y=0,1,\ldots,x_{\text{max}}),\\ &\phi_{d_1\ldots d_{s+1}\,n}(x){\stackrel{\text{def}}{=}}\hat{\mathcal{A}}_{d_1\ldots d_{s+1}}\phi_{d_1\ldots d_s\,n}(x) \ \ (x=0,1,\ldots,x_{\text{max}}; n=0,1,\ldots,n_{\text{max}}),\\ &\tilde{\phi}_{d_1\ldots d_{s+1}\,\text{v}}(x){\stackrel{\text{def}}{=}}\hat{\mathcal{A}}_{d_1\ldots d_{s+1}} \tilde{\phi}_{d_1\ldots d_s\,\text{v}}(x) +\delta_{x,x_{\text{max}}}\varphi_{d_1\ldots d_{s+1}\,\text{v}},\\ &\hspace{40mm} \ (x=0,1,\ldots,x_{\text{max}}; \text{v}\in\mathcal{V}\backslash\{d_1,\ldots,d_{s+1}\}),\\ &\qquad \varphi_{d_1\ldots d_{s+1}\,\text{v}}{\stackrel{\text{def}}{=}}\phi_{d_1\ldots d_{s+1}\,0}(x_{\text{max}}) \frac{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s+1}}](x_{\text{max}}) \text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s}}, \check{\xi}_{\text{v}}](x_{\text{max}}+1)} {\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{{s}}}](x_{\text{max}}+1) \text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s+1}},\nu](x_{\text{max}})}.\end{aligned}$$ The orthogonality relation reads $$\quad(\phi_{d_1\ldots d_{s+1}\,n},\phi_{d_1\ldots d_{s+1}\,m}) =\prod_{j=1}^{s+1}(\mathcal{E}_n-\tilde{\mathcal{E}}_{d_j}) \cdot\frac{1}{d_n^2}\delta_{nm} \ \ (n,m=0,1,\ldots,n_{\text{max}}).$$ The functions $\phi_{d_1\ldots 
d_{s+1}\,n}(x)$ and $\tilde{\phi}_{d_1\ldots d_{s+1}\,\text{v}}(x)$ are expressed as Casoratians as in –. The Casoratian $\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s+1}}, \check{\xi}_{\text{v}}](x)$ has definite sign $$\begin{aligned} &\pm\frac{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](0) (\tilde{\mathcal{E}}_{d_{s+1}}-\tilde{\mathcal{E}}_{\text{v}})} {\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s+1}}](0) \text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}, \check{\xi}_{\text{v}}](0)}>0{\nonumber\\}&\Rightarrow \pm\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s+1}}, \check{\xi}_{\text{v}}](x)>0 \ \ (x=0,1,\ldots,x_{\text{max}}+1).\end{aligned}$$ Likewise $\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s+1}},\nu](x)$ and the lowest eigenvector $\phi_{d_1\ldots d_{s+1}\,0}(x)$ have definite sign $$\begin{aligned} &\mp\frac{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](0)} {\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s+1}}](0) \text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s},\nu](0)}>0{\nonumber\\}&\Rightarrow \pm\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_{s+1}},\nu](x)>0 \ \ (x=0,1,\ldots,x_{\text{max}}).\end{aligned}$$ These establish the $s+1$ case. 
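The sign-propagation arguments above all rest on the first-order relation obeyed by the Casoratian of two solutions of the same difference equation with different energies. The following sketch verifies this relation in exact rational arithmetic for generic coefficients $b(x)$, $d(x)$, $c(x)$ (stand-ins for $\alpha B'(x)$, $\alpha D'(x)$ and the diagonal part; random illustrative data, not a specific ($q$-)Racah model):

```python
from fractions import Fraction as F
import random

# u and v solve  b(x) y(x+1) + d(x) y(x-1) = (c(x) - E) y(x)  with
# energies E1, E2; their Casoratian W(x) = u(x) v(x+1) - u(x+1) v(x)
# then obeys  b(x) W(x) = d(x) W(x-1) + (E1 - E2) u(x) v(x).
random.seed(3)
N = 8
b = [F(random.randint(1, 5)) for _ in range(N)]
d = [F(random.randint(1, 5)) for _ in range(N)]
c = [F(random.randint(-5, 5)) for _ in range(N)]

def solve(E, y0, y1):
    # propagate the three-term recurrence upward from initial data
    y = [y0, y1]
    for x in range(1, N):
        y.append(((c[x] - E) * y[x] - d[x] * y[x - 1]) / b[x])
    return y

E1, E2 = F(2), F(-3)
u = solve(E1, F(1), F(2))
v = solve(E2, F(1), F(-1))
W = [u[x] * v[x + 1] - u[x + 1] * v[x] for x in range(N)]

ok = all(b[x] * W[x] == d[x] * W[x - 1] + (E1 - E2) * u[x] * v[x]
         for x in range(1, N))
print(ok)
```

For equal energies the extra term drops out and one recovers the discrete Abel identity $b(x)\text{W}(x)=d(x)\text{W}(x-1)$, which is what makes the Casoratian sign either all positive or all negative once its value is controlled at one boundary.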
At the end of this subsection we present this deformed Hamiltonian $\mathcal{H}_{d_1\ldots d_s}$ in the standard form, in which the $\mathcal{A}$ operator annihilates the groundstate eigenvector: $$\begin{aligned} &\mathcal{H}_{d_1\ldots d_s} =\mathcal{A}_{d_1\ldots d_s}^{\dagger}\mathcal{A}_{d_1\ldots d_s}, \label{sstanham1}\\ &\mathcal{A}_{d_1\ldots d_s}{\stackrel{\text{def}}{=}}\sqrt{B_{d_1\ldots d_s}(x)} -e^{\partial}\sqrt{D_{d_1\ldots d_s}(x)}, \ \ \mathcal{A}_{d_1\ldots d_s}^{\dagger} =\sqrt{B_{d_1\ldots d_s}(x)} -\sqrt{D_{d_1\ldots d_s}(x)}\,e^{-\partial}, \label{sstanham2}\end{aligned}$$ which satisfies $$\mathcal{A}_{d_1\ldots d_s}\phi_{d_1\ldots d_s\,0}(x)=0 \ \ (x=0,1,\ldots,x_{\text{max}}).$$ The potential functions $B_{d_1\ldots d_s}(x)$ and $D_{d_1\ldots d_s}(x)$ are: $$\begin{aligned} &B_{d_1\ldots d_s}(x){\stackrel{\text{def}}{=}}\alpha B'(x+s) \frac{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](x)} {\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](x+1)}\, \frac{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s},\nu](x+1)} {\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s},\nu](x)}, \label{Bd1..ds}\\ &D_{d_1\ldots d_s}(x){\stackrel{\text{def}}{=}}\alpha D'(x) \frac{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](x+1)} {\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s}](x)}\, \frac{\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s},\nu](x-1)} {\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_s},\nu](x)}. \label{Dd1..ds}\end{aligned}$$ The positivity of $B_{d_1\ldots d_s}(x)$ and $D_{d_1\ldots d_s}(x)$ is shown above and the boundary conditions $B_{d_1\ldots d_s}(x_{\text{max}})=0$ and $D_{d_1\ldots d_s}(0)=0$ are satisfied. They satisfy the relations $$\begin{aligned} &B_{d_1\ldots d_s}(x)D_{d_1\ldots d_s}(x+1) =\hat{B}_{d_1\ldots d_s}(x+1)\hat{D}_{d_1\ldots d_s}(x+1), \\ &B_{d_1\ldots d_s}(x)+D_{d_1\ldots d_s}(x) =\hat{B}_{d_1\ldots d_s}(x)+\hat{D}_{d_1\ldots d_s}(x+1) +\tilde{\mathcal{E}}_{d_s}. 
$$ It should be stressed that the above results after $s$-deletions are independent of the order of the deletions ($\phi_{d_1\ldots d_s\,n}(x)$ and $\tilde{\phi}_{d_1\ldots d_s\,\text{v}}(x)$ may change sign). Explicit forms of multi-indexed ($q$-)Racah polynomials {#sec:ef_miop_qR} ------------------------------------------------------- Here we present the main results of the paper. The multi-indexed ($q$-)Racah polynomials are obtained by applying the method of virtual states deletion to the ($q$-)Racah system. The parameter ${\boldsymbol}{\lambda}=(\lambda_1,\lambda_2,\ldots)$ dependence is now shown explicitly. The eigenvectors of the models in §5 of [@os12] are described by orthogonal polynomials in the sinusoidal coordinate $\eta(x;{\boldsymbol}{\lambda})$. The auxiliary function $\varphi(x;{\boldsymbol}{\lambda})$ is defined by $$\varphi(x;{\boldsymbol}{\lambda}){\stackrel{\text{def}}{=}}\frac{\eta(x+1;{\boldsymbol}{\lambda})-\eta(x;{\boldsymbol}{\lambda})}{\eta(1;{\boldsymbol}{\lambda})}, \label{varphidef}$$ and it satisfies (with ${\boldsymbol}{\delta}$ the shift of the parameters ${\boldsymbol}{\lambda}$ under shape invariance) $$\frac{\varphi(x;{\boldsymbol}{\lambda})}{\varphi(x-1;{\boldsymbol}{\lambda}+2{\boldsymbol}{\delta})} =\varphi(1;{\boldsymbol}{\lambda}). \label{varphiid}$$ All the models in §5 of [@os12] have shape invariance [@genden]. 
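As a quick consistency check of \eqref{varphiid}, one may take a Racah-type sinusoidal coordinate. Here $\eta(x;{\boldsymbol}{\lambda})=x(x+d+1)$ and the rule that ${\boldsymbol}{\lambda}\to{\boldsymbol}{\lambda}+2{\boldsymbol}{\delta}$ shifts $d\to d+2$ are assumptions adopted only for this sketch, not the full parametrisation of the models:

```python
from fractions import Fraction

def eta(x, d):      # assumed sinusoidal coordinate eta(x) = x(x + d + 1)
    return Fraction(x) * (x + d + 1)

def varphi(x, d):   # varphi(x) = (eta(x+1) - eta(x)) / eta(1)
    return (eta(x + 1, d) - eta(x, d)) / eta(1, d)

d = Fraction(7, 3)  # generic rational parameter value
ok = all(varphi(x, d) / varphi(x - 1, d + 2) == varphi(1, d)
         for x in range(1, 9))
print(ok)
```

With exact rational arithmetic the identity is verified as an equality, not merely up to rounding; here $\varphi(x;{\boldsymbol}{\lambda})=(2x+d+2)/(d+2)$, so both sides equal $(d+4)/(d+2)$.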
The following relations are very useful: $$\begin{aligned} &\varphi(x;{\boldsymbol}{\lambda})=\sqrt{\frac{B(0;{\boldsymbol}{\lambda})}{B(x;{\boldsymbol}{\lambda})}} \,\frac{\phi_0(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})}{\phi_0(x;{\boldsymbol}{\lambda})},\quad \varphi(x;{\boldsymbol}{\lambda})=\sqrt{\frac{B(0;{\boldsymbol}{\lambda})}{D(x+1;{\boldsymbol}{\lambda})}} \,\frac{\phi_0(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})}{\phi_0(x+1;{\boldsymbol}{\lambda})}, \label{OS12(4.12,13)}\\ &\frac{B(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})}{B(x+1;{\boldsymbol}{\lambda})} =\kappa^{-1}\frac{\varphi(x+1;{\boldsymbol}{\lambda})}{\varphi(x;{\boldsymbol}{\lambda})},\quad \frac{D(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})}{D(x;{\boldsymbol}{\lambda})} =\kappa^{-1}\frac{\varphi(x-1;{\boldsymbol}{\lambda})}{\varphi(x;{\boldsymbol}{\lambda})}. \label{OS12(5.81etc)}\end{aligned}$$ We delete $M$ virtual state vectors labeled by $$\mathcal{D}=\{d_1,d_2,\ldots,d_M\} \ \ (d_j\in\mathcal{V} : \text{mutually distinct}),$$ and denote $\mathcal{H}_{d_1\ldots d_M}$, $\phi_{d_1\ldots d_M\,n}$, $\mathcal{A}_{d_1\ldots d_M}$, etc. by $\mathcal{H}_{\mathcal{D}}$, $\phi_{\mathcal{D}\,n}$, $\mathcal{A}_{\mathcal{D}}$, etc. Let us denote the eigenvector $\phi_{\mathcal{D}\,n}(x;{\boldsymbol}{\lambda})$ in \eqref{phid1..dsn} after $M$ deletions ($s=M$) by $\phi^{\text{gen}}_{\mathcal{D}\,n}(x;{\boldsymbol}{\lambda})$. 
We define two polynomials $\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda})$ and $\check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda})$, to be called the denominator polynomial and the multi-indexed orthogonal polynomial, respectively, from the Casoratians as follows: $$\begin{aligned} &\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_M}](x;{\boldsymbol}{\lambda}) =\mathcal{C}_{\mathcal{D}}({\boldsymbol}{\lambda})\varphi_M(x;{\boldsymbol}{\lambda}) \check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}), \label{XiDdef}\\ &\text{W}[\check{\xi}_{d_1},\ldots,\check{\xi}_{d_M},\nu\check{P}_n] (x;{\boldsymbol}{\lambda}) =\mathcal{C}_{\mathcal{D},n}({\boldsymbol}{\lambda}) \varphi_{M+1}(x;{\boldsymbol}{\lambda}) \check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda}) \nu(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}}), \label{cPDndef}\\ &\tilde{{\boldsymbol}{\delta}}{\stackrel{\text{def}}{=}}(0,0,1,1),\ \quad \mathfrak{t}({\boldsymbol}{\lambda})+\beta{\boldsymbol}{\delta}= \mathfrak{t}({\boldsymbol}{\lambda}+\beta\tilde{{\boldsymbol}{\delta}}) \ \ (\forall\beta\in\mathbb{R}). \label{deltat}\end{aligned}$$ The constants $\mathcal{C}_{\mathcal{D}}({\boldsymbol}{\lambda})$ and $\mathcal{C}_{\mathcal{D},n}({\boldsymbol}{\lambda})$ are specified later. The auxiliary function $\varphi_M(x;{\boldsymbol}{\lambda})$ is defined by [@os22]: $$\begin{aligned} \varphi_M(x;{\boldsymbol}{\lambda})&{\stackrel{\text{def}}{=}}\prod_{1\leq j<k\leq M} \frac{\eta(x+k-1;{\boldsymbol}{\lambda})-\eta(x+j-1;{\boldsymbol}{\lambda})} {\eta(k-j;{\boldsymbol}{\lambda})}{\nonumber\\}&=\prod_{1\leq j<k\leq M} \varphi\bigl(x+j-1;{\boldsymbol}{\lambda}+(k-j-1){\boldsymbol}{\delta}\bigr), \label{varphiMdef}\end{aligned}$$ and $\varphi_0(x;{\boldsymbol}{\lambda})=\varphi_1(x;{\boldsymbol}{\lambda})=1$. 
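The equality of the two expressions for $\varphi_M$ in \eqref{varphiMdef} can be checked directly; the sketch below again assumes a Racah-type $\eta(x)=x(x+d+1)$ with ${\boldsymbol}{\delta}$ shifting $d\to d+1$, purely for illustration:

```python
from fractions import Fraction

def eta(x, d):      # assumed sinusoidal coordinate eta(x) = x(x + d + 1)
    return Fraction(x) * (x + d + 1)

def varphi(x, d):
    return (eta(x + 1, d) - eta(x, d)) / eta(1, d)

def varphiM_det(x, d, M):
    # product-of-differences form of varphi_M
    p = Fraction(1)
    for j in range(1, M + 1):
        for k in range(j + 1, M + 1):
            p *= (eta(x + k - 1, d) - eta(x + j - 1, d)) / eta(k - j, d)
    return p

def varphiM_prod(x, d, M):
    # product of parameter-shifted varphi's
    p = Fraction(1)
    for j in range(1, M + 1):
        for k in range(j + 1, M + 1):
            p *= varphi(x + j - 1, d + (k - j - 1))
    return p

d = Fraction(5, 2)
ok = all(varphiM_det(x, d, M) == varphiM_prod(x, d, M)
         for M in range(5) for x in range(6))
print(ok)
```

The empty products for $M=0,1$ reproduce $\varphi_0=\varphi_1=1$ automatically.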
The eigenvector is rewritten as $$\begin{aligned} \phi^{\text{gen}}_{\mathcal{D}\,n}(x;{\boldsymbol}{\lambda}) &=(-1)^M\kappa^{\frac14M(M-1)} \frac{\mathcal{C}_{\mathcal{D},n}({\boldsymbol}{\lambda})} {\mathcal{C}_{\mathcal{D}}({\boldsymbol}{\lambda})} \sqrt{\prod_{j=1}^M\alpha({\boldsymbol}{\lambda}) B'\bigl(0;{\boldsymbol}{\lambda}+(j-1)\tilde{{\boldsymbol}{\delta}}\bigr)}{\nonumber\\}&\quad\times \frac{\phi_0(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}})} {\sqrt{\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}) \check{\Xi}_{\mathcal{D}}(x+1;{\boldsymbol}{\lambda})}} \check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda}).\end{aligned}$$ The multi-indexed orthogonal polynomial $\check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda})$ has an expression $$\begin{aligned} \check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda}) &=\mathcal{C}_{\mathcal{D},n}({\boldsymbol}{\lambda})^{-1} \varphi_{M+1}(x;{\boldsymbol}{\lambda})^{-1}{\nonumber\\}&\quad\times\left| \begin{array}{cccc} \check{\xi}_{d_1}(x_1)&\cdots&\check{\xi}_{d_M}(x_1) &r_1(x_1)\check{P}_n(x_1)\\ \check{\xi}_{d_1}(x_2)&\cdots&\check{\xi}_{d_M}(x_2) &r_2(x_2)\check{P}_n(x_2)\\ \vdots&\cdots&\vdots&\vdots\\ \check{\xi}_{d_1}(x_{M+1})&\cdots&\check{\xi}_{d_M}(x_{M+1}) &r_{M+1}(x_{M+1})\check{P}_n(x_{M+1})\\ \end{array}\right|, \label{cPDn}\end{aligned}$$ where $x_j{\stackrel{\text{def}}{=}}x+j-1$ and $r_j(x)=r_j(x;{\boldsymbol}{\lambda},M)$ ($1\leq j\leq M+1$) are given by $$r_j\bigl(x+j-1;{\boldsymbol}{\lambda},M\bigr) {\stackrel{\text{def}}{=}}\left\{ \begin{array}{ll} {\displaystyle \frac{(x+a,x+b)_{j-1}(x+d-a+j,x+d-b+j)_{M+1-j}} {(d-a+1,d-b+1)_M}}&:\text{R}\\[8pt] {\displaystyle \frac{(aq^x,bq^x;q)_{j-1}(a^{-1}dq^{x+j},b^{-1}dq^{x+j};q)_{M+1-j}} {(abd^{-1}q^{-1})^{j-1}q^{Mx}(a^{-1}dq,b^{-1}dq;q)_M}}&:\text{$q$R} \end{array}\right.. 
\label{rj}$$ One can show that $\check{\Xi}_{\mathcal{D}}$ and $\check{P}_{\mathcal{D},n}$ are indeed polynomials in $\eta$: $$\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}){\stackrel{\text{def}}{=}}\Xi_{\mathcal{D}}\bigl(\eta(x;{\boldsymbol}{\lambda}+(M-1)\tilde{{\boldsymbol}{\delta}}); {\boldsymbol}{\lambda}\bigr), \quad \check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda}){\stackrel{\text{def}}{=}}P_{\mathcal{D},n}\bigl(\eta(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}}); {\boldsymbol}{\lambda}\bigr), \label{XiP_poly}$$ and their degrees are generically $\ell$ and $\ell+n$, respectively. Here $\ell$ is $$\ell{\stackrel{\text{def}}{=}}\sum_{j=1}^Md_j-\tfrac12M(M-1). \label{lform}$$ The involution properties of these polynomials are a consequence of those of the basic polynomials $\check{P}_n(x)$ and $\check{\xi}_{d_j}(x)$. We adopt the standard normalisation for $\check{\Xi}_{\mathcal{D}}$ and $\check{P}_{\mathcal{D},n}$: $\check{\Xi}_{\mathcal{D}}(0;{\boldsymbol}{\lambda})=1$, $\check{P}_{\mathcal{D},n}(0;{\boldsymbol}{\lambda})=1$, which determine the constants $\mathcal{C}_{\mathcal{D}}({\boldsymbol}{\lambda})$ and $\mathcal{C}_{\mathcal{D},n}({\boldsymbol}{\lambda})$, $$\begin{aligned} \mathcal{C}_{\mathcal{D}}({\boldsymbol}{\lambda})&{\stackrel{\text{def}}{=}}\frac{1}{\varphi_M(0;{\boldsymbol}{\lambda})} \prod_{1\leq j<k\leq M} \frac{\tilde{\mathcal{E}}_{d_j}({\boldsymbol}{\lambda}) -\tilde{\mathcal{E}}_{d_k}({\boldsymbol}{\lambda})} {\alpha({\boldsymbol}{\lambda})B'(j-1;{\boldsymbol}{\lambda})}, \label{CD}\\ \mathcal{C}_{\mathcal{D},n}({\boldsymbol}{\lambda})&{\stackrel{\text{def}}{=}}(-1)^M\mathcal{C}_{\mathcal{D}}({\boldsymbol}{\lambda}) \tilde{d}_{\mathcal{D},n}({\boldsymbol}{\lambda})^2,\quad \tilde{d}_{\mathcal{D},n}({\boldsymbol}{\lambda})^2{\stackrel{\text{def}}{=}}\frac{\varphi_M(0;{\boldsymbol}{\lambda})}{\varphi_{M+1}(0;{\boldsymbol}{\lambda})} \prod_{j=1}^M\frac{\mathcal{E}_n({\boldsymbol}{\lambda}) 
-\tilde{\mathcal{E}}_{d_j}({\boldsymbol}{\lambda})} {\alpha({\boldsymbol}{\lambda})B'(j-1;{\boldsymbol}{\lambda})}. \label{CDn}\end{aligned}$$ The use of [*dual polynomials*]{} $Q_x(\mathcal{E}_n) {\stackrel{\text{def}}{=}}P_n(\eta(x))$ [@os12] is essential for the derivation of these results. The three-term recurrence relations of $\{Q_x(\mathcal{E})\}$ are specified by $B(x)$ and $D(x)$. The denominator polynomial $\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda})$ is positive for $x=0,1,\ldots,x_{\text{max}}+1$. The lowest degree multi-indexed orthogonal polynomial $\check{P}_{\mathcal{D},0}(x;{\boldsymbol}{\lambda})$ is related to $\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda})$ by the parameter shift ${\boldsymbol}{\lambda}\to{\boldsymbol}{\lambda}+{\boldsymbol}{\delta}$: $$\check{P}_{\mathcal{D},0}(x;{\boldsymbol}{\lambda}) =\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta}). \label{PD0=XiD}$$ The potential functions $B_{\mathcal{D}}$ and $D_{\mathcal{D}}$ \eqref{Bd1..ds}–\eqref{Dd1..ds} after $M$ deletions ($s=M$) can be expressed neatly in terms of the denominator polynomial: $$\begin{aligned} B_{\mathcal{D}}(x;{\boldsymbol}{\lambda})&=B(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}})\, \frac{\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda})} {\check{\Xi}_{\mathcal{D}}(x+1;{\boldsymbol}{\lambda})} \frac{\check{\Xi}_{\mathcal{D}}(x+1;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})} {\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})}, \label{BD2}\\ D_{\mathcal{D}}(x;{\boldsymbol}{\lambda})&=D(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}})\, \frac{\check{\Xi}_{\mathcal{D}}(x+1;{\boldsymbol}{\lambda})} {\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda})} \frac{\check{\Xi}_{\mathcal{D}}(x-1;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})} {\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})}. \label{DD2}\end{aligned}$$ These formulas look similar to those in the exceptional polynomials [@os23]. 
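The role of the dual polynomials can be illustrated on a generic finite Jacobi matrix: the three-term recurrence specified by $B(x)$ and $D(x)$ generates $Q_x(\mathcal{E})$, and $\phi_0(x)Q_x(\mathcal{E}_n)$ is then an eigenvector of $\mathcal{H}$. The data below (random positive $B$, $D$ with the boundary conditions $B(x_{\text{max}})=0$, $D(0)=0$) are illustrative stand-ins, not the ($q$-)Racah potential functions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
B = np.append(rng.uniform(0.5, 2.0, N - 1), 0.0)   # B(x_max) = 0
D = np.append(0.0, rng.uniform(0.5, 2.0, N - 1))   # D(0)     = 0

# H = A^dagger A: diagonal B(x)+D(x), off-diagonal -sqrt(B(x) D(x+1)).
off = -np.sqrt(B[:-1] * D[1:])
H = np.diag(B + D) + np.diag(off, 1) + np.diag(off, -1)
E = np.linalg.eigvalsh(H)

# Groundstate eigenvector: phi_0(x) = prod_{y<x} sqrt(B(y)/D(y+1)).
phi0 = np.ones(N)
for x in range(1, N):
    phi0[x] = phi0[x - 1] * np.sqrt(B[x - 1] / D[x])

def Q(Ev):
    # dual three-term recurrence:
    #   B(x) Q_{x+1} = (B(x) + D(x) - E) Q_x - D(x) Q_{x-1},  Q_0 = 1
    q = np.ones(N)
    for x in range(N - 1):
        prev = D[x] * q[x - 1] if x > 0 else 0.0
        q[x + 1] = ((B[x] + D[x] - Ev) * q[x] - prev) / B[x]
    return q

ok = all(np.allclose(H @ (phi0 * Q(Ev)), Ev * (phi0 * Q(Ev))) for Ev in E)
print(ok)
```

Since $B(x_{\text{max}})=0$ makes $\mathcal{A}$ singular, the lowest eigenvalue is $0$, and the recurrence in $x$ with the eigenvalue as a parameter is exactly the duality between the row index and the spectral variable exploited in the text.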
The groundstate eigenvector $\phi_{\mathcal{D}\,0}$ is expressed by $\phi_0(x)$ and $\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda})$: $$\begin{aligned} \phi_{\mathcal{D}\,0}(x;{\boldsymbol}{\lambda})&= \sqrt{\prod_{y=0}^{x-1}\frac{B_{\mathcal{D}}(y)}{D_{\mathcal{D}}(y+1)}} =\phi_0(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}}) \sqrt{\frac{\check{\Xi}_{\mathcal{D}}(1;{\boldsymbol}{\lambda})} {\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}) \check{\Xi}_{\mathcal{D}}(x+1;{\boldsymbol}{\lambda})}}\, \check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta}){\nonumber\\}&=\psi_{\mathcal{D}}(x;{\boldsymbol}{\lambda}) \check{P}_{\mathcal{D},0}(x;{\boldsymbol}{\lambda}) \propto\phi^{\text{gen}}_{\mathcal{D}\,0}(x;{\boldsymbol}{\lambda}),\\ \psi_{\mathcal{D}}(x;{\boldsymbol}{\lambda})&{\stackrel{\text{def}}{=}}\sqrt{\check{\Xi}_{\mathcal{D}}(1;{\boldsymbol}{\lambda})}\, \frac{\phi_0(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}})} {\sqrt{\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda})\, \check{\Xi}_{\mathcal{D}}(x+1;{\boldsymbol}{\lambda})}},\quad \psi_{\mathcal{D}}(0;{\boldsymbol}{\lambda})=1.\end{aligned}$$ We arrive at the normalised eigenvector $\phi_{\mathcal{D}\,n}(x;{\boldsymbol}{\lambda})$ with the orthogonality relation, $$\begin{aligned} &\phi_{\mathcal{D}\,n}(x;{\boldsymbol}{\lambda}) {\stackrel{\text{def}}{=}}\psi_{\mathcal{D}}(x;{\boldsymbol}{\lambda}) \check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda}) \propto\phi^{\text{gen}}_{\mathcal{D}\,n}(x;{\boldsymbol}{\lambda}),\quad \phi_{\mathcal{D}\,n}(0;{\boldsymbol}{\lambda})=1,\\ &\sum_{x=0}^{x_{\text{max}}} \frac{\psi_{\mathcal{D}}(x;{\boldsymbol}{\lambda})^2} {\check{\Xi}_{\mathcal{D}}(1;{\boldsymbol}{\lambda})} \check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda}) \check{P}_{\mathcal{D},m}(x;{\boldsymbol}{\lambda}) =\frac{\delta_{nm}}{d_n({\boldsymbol}{\lambda})^2 \tilde{d}_{\mathcal{D},n}({\boldsymbol}{\lambda})^2} \ \ 
(n,m=0,1,\ldots,n_{\text{max}}).\end{aligned}$$ It is worthwhile to emphasise that the above orthogonality relation is a rational identity in ${\boldsymbol}{\lambda}$ or $q^{{\boldsymbol}{\lambda}}$; it is valid for any value of ${\boldsymbol}{\lambda}$ (except at the zeros of the denominators), although the weight function may then fail to be positive definite. The shape invariance of the original system is inherited by the deformed systems. The matrix $\hat{\mathcal{A}}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda})$ intertwines the two Hamiltonians $\mathcal{H}_{d_1\ldots d_s}({\boldsymbol}{\lambda})$ and $\mathcal{H}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda})$, $$\begin{aligned} \hat{\mathcal{A}}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda})^{\dagger} \hat{\mathcal{A}}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda}) &=\mathcal{H}_{d_1\ldots d_s}({\boldsymbol}{\lambda}) -\tilde{\mathcal{E}}_{d_{s+1}}({\boldsymbol}{\lambda}),\\ \hat{\mathcal{A}}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda}) \hat{\mathcal{A}}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda})^{\dagger} &=\mathcal{H}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda}) -\tilde{\mathcal{E}}_{d_{s+1}}({\boldsymbol}{\lambda}),\end{aligned}$$ and it is invertible. By the same argument given in §4 of [@os20], the shape invariance of $\mathcal{H}({\boldsymbol}{\lambda})$ is inherited by $\mathcal{H}_{d_1}({\boldsymbol}{\lambda})$, $\mathcal{H}_{d_1d_2}({\boldsymbol}{\lambda})$, $\cdots$. Therefore the Hamiltonian $\mathcal{H}_{\mathcal{D}}({\boldsymbol}{\lambda})$ is shape invariant: $$\mathcal{A}_{\mathcal{D}}({\boldsymbol}{\lambda}) \mathcal{A}_{\mathcal{D}}({\boldsymbol}{\lambda})^{\dagger} =\kappa\mathcal{A}_{\mathcal{D}}({\boldsymbol}{\lambda}+{\boldsymbol}{\delta})^{\dagger} \mathcal{A}_{\mathcal{D}}({\boldsymbol}{\lambda}+{\boldsymbol}{\delta}) +\mathcal{E}_1({\boldsymbol}{\lambda}). 
\label{shapeinvD}$$ As a consequence of the shape invariance and the normalisation, the actions of $\mathcal{A}_{\mathcal{D}}({\boldsymbol}{\lambda})$ and $\mathcal{A}_{\mathcal{D}}({\boldsymbol}{\lambda})^{\dagger}$ on the eigenvectors $\phi_{\mathcal{D}\,n}(x;{\boldsymbol}{\lambda})$ are $$\begin{aligned} &\mathcal{A}_{\mathcal{D}}({\boldsymbol}{\lambda}) \phi_{\mathcal{D}\,n}(x;{\boldsymbol}{\lambda}) =\frac{\mathcal{E}_n({\boldsymbol}{\lambda})} {\sqrt{B_{\mathcal{D}}(0;{\boldsymbol}{\lambda})}}\, \phi_{\mathcal{D}\,n-1}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta}) \ \ (x=0,1,\ldots,x_{\text{max}}-1), \label{ADphiDn=}\\ &\mathcal{A}_{\mathcal{D}}({\boldsymbol}{\lambda})^{\dagger} \phi_{\mathcal{D}\,n-1}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta}) =\sqrt{B_{\mathcal{D}}(0;{\boldsymbol}{\lambda})}\, \phi_{\mathcal{D}\,n}(x;{\boldsymbol}{\lambda}) \ \ (x=0,1,\ldots,x_{\text{max}}). \label{ADdphiDn=}\end{aligned}$$ The forward and backward shift operators are defined by $$\begin{aligned} \mathcal{F}_{\mathcal{D}}({\boldsymbol}{\lambda})&{\stackrel{\text{def}}{=}}\sqrt{B_{\mathcal{D}}(0;{\boldsymbol}{\lambda})}\, \psi_{\mathcal{D}}\,(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})^{-1}\circ \mathcal{A}_{\mathcal{D}}({\boldsymbol}{\lambda})\circ \psi_{\mathcal{D}}\,(x;{\boldsymbol}{\lambda}){\nonumber\\}&=\frac{B(0;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}})} {\varphi(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}}) \check{\Xi}_{\mathcal{D}}(x+1;{\boldsymbol}{\lambda})} \Bigl(\check{\Xi}_{\mathcal{D}}(x+1;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta}) -\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})e^{\partial}\Bigr), \label{calFD}\\ \mathcal{B}_{\mathcal{D}}({\boldsymbol}{\lambda})&{\stackrel{\text{def}}{=}}\frac{1}{\sqrt{B_{\mathcal{D}}(0;{\boldsymbol}{\lambda})}}\, \psi_{\mathcal{D}}\,(x;{\boldsymbol}{\lambda})^{-1}\circ \mathcal{A}_{\mathcal{D}}({\boldsymbol}{\lambda})^{\dagger}\circ 
\psi_{\mathcal{D}}\,(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta}){\nonumber\\}&=\frac{1}{B(0;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}}) \check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})} \label{calBD}\\ &\quad\times \Bigl(B(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}}) \check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}) -D(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}}) \check{\Xi}_{\mathcal{D}}(x+1;{\boldsymbol}{\lambda})e^{-\partial}\Bigr) \varphi(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}}), \nonumber\end{aligned}$$ and their actions on $\check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda})$ are $$\mathcal{F}_{\mathcal{D}}({\boldsymbol}{\lambda}) \check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda}) =\mathcal{E}_n({\boldsymbol}{\lambda}) \check{P}_{\mathcal{D},n-1}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta}),\quad \mathcal{B}_{\mathcal{D}}({\boldsymbol}{\lambda}) \check{P}_{\mathcal{D},n-1}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta}) =\check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda}). \label{BDPDn=}$$ As in the original ($q$-)Racah theory, these formulas are useful for the explicit calculation of the multi-indexed polynomials. 
The similarity transformed Hamiltonian is $$\begin{aligned} \widetilde{\mathcal{H}}_{\mathcal{D}}({\boldsymbol}{\lambda}) &{\stackrel{\text{def}}{=}}\psi_{\mathcal{D}}(x;{\boldsymbol}{\lambda})^{-1}\circ \mathcal{H}_{\mathcal{D}}({\boldsymbol}{\lambda})\circ \psi_{\mathcal{D}}(x;{\boldsymbol}{\lambda}) =\mathcal{B}_{\mathcal{D}}({\boldsymbol}{\lambda}) \mathcal{F}_{\mathcal{D}}({\boldsymbol}{\lambda}){\nonumber\\}&=B(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}})\, \frac{\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda})} {\check{\Xi}_{\mathcal{D}}(x+1;{\boldsymbol}{\lambda})} \biggl(\frac{\check{\Xi}_{\mathcal{D}}(x+1;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})} {\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})}-e^{\partial} \biggr){\nonumber\\}&\quad+D(x;{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}})\, \frac{\check{\Xi}_{\mathcal{D}}(x+1;{\boldsymbol}{\lambda})} {\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda})} \biggl(\frac{\check{\Xi}_{\mathcal{D}}(x-1;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})} {\check{\Xi}_{\mathcal{D}}(x;{\boldsymbol}{\lambda}+{\boldsymbol}{\delta})}-e^{-\partial} \biggr),\end{aligned}$$ and the multi-indexed orthogonal polynomials $\check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda})$ are its eigenpolynomials: $$\widetilde{\mathcal{H}}_{\mathcal{D}}({\boldsymbol}{\lambda}) \check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda})=\mathcal{E}_n({\boldsymbol}{\lambda}) \check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda}). 
\label{tHPDn=}$$ Other intertwining relations are $$\begin{aligned} \kappa^{\frac12} \hat{\mathcal{A}}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda}+{\boldsymbol}{\delta}) \mathcal{A}_{d_1\ldots d_s}({\boldsymbol}{\lambda}) &=\mathcal{A}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda}) \hat{\mathcal{A}}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda}), \\ \kappa^{-\frac12} \hat{\mathcal{A}}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda}) \mathcal{A}_{d_1\ldots d_s}({\boldsymbol}{\lambda})^{\dagger} &=\mathcal{A}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda})^{\dagger} \hat{\mathcal{A}}_{d_1\ldots d_{s+1}}({\boldsymbol}{\lambda}+{\boldsymbol}{\delta}),\end{aligned}$$ with the potential functions given in \eqref{Bdsform}–\eqref{Ddsform} (with $s\to s+1$). Including level $0$ in the deletion reduces it to a deletion of $M-1$ virtual states: $$\check{P}_{\mathcal{D},n}(x;{\boldsymbol}{\lambda})\Bigm|_{d_M=0} =\check{P}_{\mathcal{D}',n}(x;{\boldsymbol}{\lambda}+\tilde{{\boldsymbol}{\delta}}),\quad \mathcal{D}'=\{d_1-1,\ldots,d_{M-1}-1\}. \label{dM=0}$$ This formula is similar to those in the multi-indexed Jacobi theory, eqs.(48)–(49) in [@os25]. The denominator polynomial $\Xi_{\mathcal{D}}$ behaves similarly. This is why we have restricted to $d_j\geq 1$. 
The coefficients of the highest degree terms of the polynomials $\Xi_{\mathcal{D}}$ and $P_{\mathcal{D},n}$, $$\begin{aligned} \Xi_{\mathcal{D}}(y;{\boldsymbol}{\lambda})&=c^{\Xi}_{\mathcal{D}}({\boldsymbol}{\lambda}) y^{\ell}+(\text{lower order terms}),\\ P_{\mathcal{D},n}(y;{\boldsymbol}{\lambda})&=c^{P}_{\mathcal{D},n}({\boldsymbol}{\lambda}) y^{\ell+n}+(\text{lower order terms}),\end{aligned}$$ are $$\begin{aligned} &c^{\Xi}_{\mathcal{D}}({\boldsymbol}{\lambda})=\left\{ \begin{array}{ll} {\displaystyle \frac{\prod_{j=1}^M(-a-b+c+d+d_j+1)_{d_j}} {\prod_{1\leq j<k\leq M}(-a-b+c+d+d_j+d_k+1)} \prod_{j=1}^M\frac{(c,d-a+1,d-b+1)_{j-1}}{(c,d-a+1,d-b+1)_{d_j}} }&:\text{R}\\ {\displaystyle \frac{\prod_{j=1}^M(a^{-1}b^{-1}cdq^{d_j+1};q)_{d_j}} {\prod_{1\leq j<k\leq M}(1-a^{-1}b^{-1}cdq^{d_j+d_k+1})} \prod_{j=1}^M\frac{(c,a^{-1}dq,b^{-1}dq;q)_{j-1}} {(c,a^{-1}dq,b^{-1}dq;q)_{d_j}} }&:\text{$q$R} \end{array}\right.,{\nonumber\\}&c^P_{\mathcal{D},n}({\boldsymbol}{\lambda})=c^{\Xi}_{\mathcal{D}}({\boldsymbol}{\lambda}) \times\left\{ \begin{array}{ll} {\displaystyle \frac{(a+b+c-d+n-1)_n\,(c)_M} {(a,b,c)_n\prod_{j=1}^M(c+n+d_j)} }&:\text{R}\\[12pt] {\displaystyle \frac{(abcd^{-1}q^{n-1};q)_n\,(c;q)_M} {(a,b,c;q)_n\prod_{j=1}^M(1-cq^{n+d_j})} }&:\text{$q$R} \end{array}\right.. 
\label{c_PDn}\end{aligned}$$ The exceptional $X_{\ell}$ ($q$-)Racah orthogonal polynomials presented in [@os23] correspond to the simplest case $M=1$, $\mathcal{D}=\{\ell\}$, $\ell\geq 1$: $$\check{\xi}_{\ell}(x;{\boldsymbol}{\lambda}) =\check{\Xi}_{\{\ell\}} (x;{\boldsymbol}{\lambda}+\ell{\boldsymbol}{\delta}-\tilde{{\boldsymbol}{\delta}}),\quad \check{P}_{\ell,n}(x;{\boldsymbol}{\lambda}) =\check{P}_{\{\ell\},n}(x;{\boldsymbol}{\lambda}+\ell{\boldsymbol}{\delta}-\tilde{{\boldsymbol}{\delta}}).$$ Summary and Comments {#summary} ==================== Following the examples of multi-indexed Laguerre and Jacobi polynomials [@os25], multi-indexed ($q$-)Racah polynomials, their discrete quantum mechanics counterparts, are constructed. These new polynomials could be considered as a further generalisation of Bannai-Ito polynomials [@bannai]. The next stage will be the construction of multi-indexed Askey-Wilson and Wilson polynomials. The basic logic is the same for the ordinary quantum mechanics as well as for the discrete quantum mechanics with real [@os12] or pure imaginary shifts [@os13]. Starting from the factorised Hamiltonians of exactly solvable quantum mechanical systems, a series of new ‘deformed’ exactly solvable quantum systems is generated by applying Crum-Krein-Adler formulas [@crum; @adler] or multiple Darboux transformations [@darb] through deletion of various virtual states instead of eigenstates. The virtual state vectors are polynomial ‘solutions’ of a virtual Hamiltonian which is obtained by twisting the discrete symmetry of the original Hamiltonian. They fail to satisfy the Schrödinger equation of the virtual Hamiltonian at one of the boundaries, at $x=x_{\text{max}}$. When there is only one extra index $\mathcal{D}=\{\ell\}$ ($\ell\ge1$), the multi-indexed ($q$-)Racah polynomials reduce to the exceptional polynomials [@os23; @os24].
Like the exceptional polynomials, the multi-indexed ($q$-)Racah polynomials do not satisfy the three term recurrence relations. On the other hand, their dual polynomials satisfy the three term recurrence relations because of the tri-diagonal form of the Hamiltonian. As for the parameter ranges in which –, – are satisfied, we have taken conservative ones, –, , . It is quite possible that the valid parameter ranges could be enlarged. The difference equations for the multi-indexed ($q$-)Racah polynomials, , are purely algebraic and they hold for any parameter ranges. As in the ordinary Sturm-Liouville case (the oscillation theorem) the multi-indexed orthogonal polynomial $P_{\mathcal{D},n}(y;{\boldsymbol}{\lambda})$ has $n$ zeros in the orthogonality range, $0<y<\eta(x_{\text{max}};{\boldsymbol}{\lambda}+M\tilde{{\boldsymbol}{\delta}})$. This is a general property of the eigenvectors of a Jacobi matrix of the form , etc. See [@gladwell]. A few words on the mirror reflection with respect to the mid point of the $x$-grid, $x\to x_\text{max}-x=N-x$. 
The original ($q$-)Racah polynomial under the reflection is described by the same polynomial with different parameters: $$\begin{aligned} \check{P}_n(N-x;{\boldsymbol}{\lambda}) &=A\times\check{P}_n\bigl(x;(\lambda_1,\lambda_1+\lambda_3-\lambda_4, \lambda_1+\lambda_2-\lambda_4,2\lambda_1-\lambda_4)\bigr),{\nonumber\\}A&=\check{P}_n(N;{\boldsymbol}{\lambda}) =\left\{ \begin{array}{ll} \displaystyle{\frac{(a+b-d,a+c-d)_n}{(b,c)_n}}&:\text{R}\\[10pt] \displaystyle{\bigl(a^{-1}d\bigr)^n\, \frac{(abd^{-1},acd^{-1};q)_n}{(b,c;q)_n}}&:\text{$q$R} \end{array}\right..\end{aligned}$$ This corresponds to the mirror reflection formula of the Jacobi polynomials: $$P_n^{(\alpha,\beta)}(-\eta)=(-1)^nP_n^{(\beta,\alpha)}(\eta),\quad \eta(x)=\cos 2x,\quad \eta(\tfrac{\pi}{2}-x)=-\eta(x).$$ The type ${\text{{\uppercase\expandafter{\romannumeral1}}}}$ and ${\text{{\uppercase\expandafter{\romannumeral2}}}}$ virtual state wavefunctions for the Jacobi case have a twisted boundary condition at $x=\frac{\pi}{2}$ and $x=0$, respectively [@os25], and their polynomial parts are related by this mirror reflection [@os19]. The virtual state vectors, which fail to satisfy the equation at $x=x_\text{max}$, correspond to the type ${\text{{\uppercase\expandafter{\romannumeral1}}}}$. By the mirror reflection, one can consider the type ${\text{{\uppercase\expandafter{\romannumeral2}}}}$ virtual state vectors, which fail to satisfy the equation at $x=0$, instead of $x=x_\text{max}$. By using these type ${\text{{\uppercase\expandafter{\romannumeral2}}}}$ virtual state vectors, the mirror reflected version of the multi-indexed ($q$-)Racah polynomials can be constructed.
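As a quick numerical sanity check (ours, not part of the paper), the Jacobi mirror reflection formula quoted above can be verified in a few lines of Python; the helper `jacobi_poly` implements the standard finite-sum representation of $P_n^{(\alpha,\beta)}$:

```python
from math import comb, gamma

def jacobi_poly(n, alpha, beta, x):
    """Jacobi polynomial P_n^{(alpha,beta)}(x) via its finite-sum
    representation (valid for alpha, beta > -1)."""
    pref = gamma(alpha + n + 1) / (gamma(n + 1) * gamma(alpha + beta + n + 1))
    return pref * sum(
        comb(n, m) * gamma(alpha + beta + n + m + 1) / gamma(alpha + m + 1)
        * ((x - 1) / 2) ** m
        for m in range(n + 1)
    )

# mirror reflection: P_n^{(a,b)}(-eta) = (-1)^n P_n^{(b,a)}(eta)
for n in range(6):
    for eta in (-0.9, -0.4, 0.0, 0.3, 0.8):
        lhs = jacobi_poly(n, 1.5, 0.5, -eta)
        rhs = (-1) ** n * jacobi_poly(n, 0.5, 1.5, eta)
        assert abs(lhs - rhs) < 1e-9
```

The same sign flip, combined with the parameter exchange $\alpha \leftrightarrow \beta$, is what relates the polynomial parts of the type I and type II virtual states.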
They are related to the multi-indexed ($q$-)Racah polynomials by $$\begin{aligned} \check{P}_{\mathcal{D},n}^{\text{mirror}}(x;{\boldsymbol}{\lambda}) &=A\times \check{P}_{\mathcal{D},n}\bigl(N-x; (\lambda_1,\lambda_1+\lambda_3-\lambda_4,\lambda_1+\lambda_2-\lambda_4, 2\lambda_1-\lambda_4)\bigr),{\nonumber\\}A^{-1}&=\check{P}_{\mathcal{D},n}\bigl(N; (\lambda_1,\lambda_1+\lambda_3-\lambda_4,\lambda_1+\lambda_2-\lambda_4, 2\lambda_1-\lambda_4)\bigr).\end{aligned}$$ The normalisation conditions are $\check{P}_{n}(0;{\boldsymbol}{\lambda})= \check{P}_{\mathcal{D},n}(0;{\boldsymbol}{\lambda}) =\check{P}_{\mathcal{D},n}^{\text{mirror}}(0;{\boldsymbol}{\lambda})=1$. Contrary to the Jacobi case [@os25], the type ${\text{{\uppercase\expandafter{\romannumeral1}}}}$ and ${\text{{\uppercase\expandafter{\romannumeral2}}}}$ virtual state vectors cannot be used together to generate new multi-indexed ($q$-)Racah polynomials. Various orthogonal polynomials are obtained from the ($q$-)Racah polynomials in certain limits. Similarly, from the multi-indexed ($q$-)Racah polynomials presented in the previous section, we can obtain the multi-indexed version of various orthogonal polynomials, such as the ($q$-)Hahn, dual ($q$-)Hahn, alternative $q$-Hahn (see §V.C.1 of [@os12]), etc. The infinite dimensional cases, the little $q$-Jacobi, ($q$-)Meixner, etc. will be reported in a separate publication. In certain limiting processes, the mirror reflection does not commute with the limit, as is well known in the Jacobi $\to$ Laguerre limits [@os19; @os25]. The mirror reflected multi-indexed polynomials are expected to play a role in such limits. Let us emphasise that the discrete symmetries of the original ($q$-)Racah systems and their twisting, which are essential for the construction of virtual Hamiltonians and virtual state vectors, are easily recognised in the present parametrisation – [@os12], but rather unclear in the original parametrisation [@askey; @ismail; @koeswart].
This is a good reason to promote the ($q$-)Racah systems in our parametrisation. Acknowledgements {#acknowledgements .unnumbered} ================ We thank a referee for many useful comments. S.O. thanks his late father Zen Odake for warm encouragement. R.S. is supported in part by Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), No.23540303 and No.22540186. [99]{} S.Odake and R.Sasaki, “Exactly Solvable Quantum Mechanics and Infinite Families of Multi-indexed Orthogonal Polynomials," Phys. Lett. [**B702**]{} (2011) 164-170, [arXiv:1105.0508\[math-ph\]]{}. S.Odake and R.Sasaki, “Orthogonal Polynomials from Hermitian Matrices," J. Math. Phys. [**49**]{} (2008) 053503 (43pp), [arXiv:0712.4106\[math.CA\]]{}. (The dual $q$-Meixner polynomial in §5.2.4 and dual $q$-Charlier polynomial in §5.2.8 should be deleted because the hermiticity of the Hamiltonian is lost for these two cases.) G.E.Andrews, R.Askey and R.Roy, [*Special Functions*]{}, vol. 71 of Encyclopedia of mathematics and its applications, Cambridge Univ. Press, Cambridge, (1999). M.E.H.Ismail, [*Classical and quantum orthogonal polynomials in one variable*]{}, vol. 98 of Encyclopedia of mathematics and its applications, Cambridge Univ. Press, Cambridge, (2005). R.Koekoek and R.F.Swarttouw, “The Askey-scheme of hypergeometric orthogonal polynomials and its $q$-analogue,” [arXiv:math.CA/9602214]{}. G.Gasper and M.Rahman, [*Basic Hypergeometric Series*]{} (2nd ed.), vol. 96 of Encyclopedia of mathematics and its applications, Cambridge Univ. Press, Cambridge, (2004). A.F.Nikiforov, S.K.Suslov, and V.B.Uvarov, [*Classical Orthogonal Polynomials of a Discrete Variable*]{}, Springer, Berlin, (1991). D.Gómez-Ullate, N.Kamran and R.Milson, “An extension of Bochner’s problem: exceptional invariant subspaces,” J.
Approx Theory [**162**]{} (2010) 987-1006, [arXiv:0805.3376\[math-ph\]]{}; “An extended class of orthogonal polynomials defined by a Sturm-Liouville problem,” J. Math. Anal. Appl. [**359**]{} (2009) 352-367, [arXiv:0807.3939\[math-ph\]]{}. C.Quesne, “Exceptional orthogonal polynomials, exactly solvable potentials and supersymmetry,” J. Phys. [**A41**]{} (2008) 392001, [arXiv:0807.4087\[quant-ph\]]{}; B.Bagchi, C.Quesne and R.Roychoudhury, “Isospectrality of conventional and new extended potentials, second-order supersymmetry and role of PT symmetry," Pramana J. Phys. [**73**]{} (2009) 337-347, [arXiv:0812.1488\[quant-ph\]]{}. S.Odake and R.Sasaki, “Infinitely many shape invariant potentials and new orthogonal polynomials,” Phys. Lett. [**B679**]{} (2009) 414-417, [arXiv:0906.0142\[math-ph\]]{}. C.Quesne, “Solvable rational potentials and exceptional orthogonal polynomials in supersymmetric quantum mechanics," SIGMA [**5**]{} (2009) 084, [arXiv:0906.2331\[math-ph\]]{}. S.Odake and R.Sasaki, “Infinitely many shape invariant discrete quantum mechanical systems and new exceptional orthogonal polynomials related to the Wilson and Askey-Wilson polynomials," Phys. Lett. [**B682**]{} (2009) 130-136, [arXiv:0909.3668\[math-ph\]]{}. S.Odake and R.Sasaki, “Infinitely many shape invariant potentials and cubic identities of the Laguerre and Jacobi polynomials," J. Math. Phys. [**51**]{} (2010) 053513 (9pp), [arXiv:0911.1585\[math-ph\]]{}. S.Odake and R.Sasaki, “Another set of infinitely many exceptional ($X_{\ell}$) Laguerre polynomials,” Phys. Lett. [**B684**]{} (2010) 173-176, [arXiv:0911.3442\[math-ph\]]{}. (Remark: J1(J2) in this reference corresponds to J2(J1) in later references.) B.Midya and B.Roy, “Exceptional orthogonal polynomials and exactly solvable potentials in position dependent mass Schrödinger Hamiltonians," Phys. Lett. [**A 373**]{} (2009) 4117-4122. 
C.-L.Ho, S.Odake and R.Sasaki, “Properties of the exceptional ($X_{\ell}$) Laguerre and Jacobi polynomials,” SIGMA [**7**]{} (2011) 107 (24pp), [arXiv:0912.5447\[math-ph\]]{}. D.Gómez-Ullate, N.Kamran and R.Milson, “Exceptional orthogonal polynomials and the Darboux transformation," J. Phys. [**A43**]{} (2010) 434016 (16pp), [arXiv:1002.2666\[math-ph\]]{}; “On orthogonal polynomials spanning a nonstandard flag,” Contemp. Math. [**563**]{} (2012) 51–70, [arXiv:1101.5584\[math-ph\]]{}. R.Sasaki, S.Tsujimoto and A.Zhedanov, “Exceptional Laguerre and Jacobi polynomials and the corresponding potentials through Darboux-Crum transformations," J. Phys. [**A43**]{} (2010) 315204 (20pp), [arXiv:1004.4711\[math-ph\]]{}. S.Odake and R.Sasaki, “Exceptional Askey-Wilson type polynomials through Darboux-Crum transformations,” J. Phys. [**A43**]{} (2010) 335201 (18pp), [arXiv:1004.0544\[math-ph\]]{}. S.Odake and R.Sasaki, “A new family of shape invariantly deformed Darboux-Pöschl-Teller potentials with continuous $\ell$," J. Phys. A [**44**]{} (2011) 195203 (14pp), [arXiv:1007.3800\[math-ph\]]{}. C-L.Ho, “Dirac(-Pauli), Fokker-Planck equations and exceptional Laguerre polynomials," Ann. Phys. [**326**]{} (2011) 797-807, [arXiv:1008.0744\[quant-ph\]]{}. C-L.Ho and R.Sasaki, “Zeros of the exceptional Laguerre and Jacobi polynomials," ISRN Mathematical Physics, in press, [arXiv:1102.5669\[math-ph\]]{}. S.Odake and R.Sasaki, “Exceptional ($X_{\ell}$) ($q$-)Racah polynomials," Prog. Theor. Phys. [**125**]{} (2011) 851-870, [arXiv:1102.0813\[math-ph\]]{}. Y.Grandati, “Solvable rational extensions of the isotonic oscillator,” Ann. Phys. [**326**]{} (2011) 2074-2090, [arXiv:1101.0055\[math-ph\]]{}; “Solvable rational extensions of the Morse and Kepler-Coulomb potentials,” [arXiv:1103.5023\[math-ph\]]{}. C-L.Ho, “Prepotential approach to solvable rational potentials and exceptional orthogonal polynomials," Prog. Theor. Phys. [**126**]{} (2011) 185-201, [arXiv:1104.3511\[math-ph\]]{}. 
S.Odake and R.Sasaki, “Discrete quantum mechanics," J. Phys. A: Math. Theor. [**44**]{} (2011) 353001 (47pp), [arXiv:1104.0473\[math-ph\]]{}. C. Quesne, “Higher-order SUSY, exactly solvable potentials, and exceptional orthogonal polynomials," Mod. Phys. Lett. A [**26**]{} (2011) 1843-1852, [arXiv:1106.1990\[math-ph\]]{}; “Rationally-extended radial oscillators and Laguerre exceptional orthogonal polynomials in kth-order SUSYQM," J. Mod. Phys. A [**26**]{} (2011) 5337-5347, [arXiv:1110.3958\[math-ph\]]{}; “Exceptional orthogonal polynomials and new exactly solvable potentials in quantum mechanics," [arXiv:1111.6467\[math-ph\]]{}. S.Odake and R.Sasaki, “Exactly solvable ‘discrete’ quantum mechanics; shape invariance, Heisenberg solutions, annihilation-creation operators and coherent states," Prog. Theor. Phys. [**119**]{} (2008) 663-700, [arXiv:0802.1075\[quant-ph\]]{}. S.Odake and R.Sasaki, “Unified theory of exactly and quasi-exactly solvable ‘discrete’ quantum mechanics: I. Formalism," J. Math. Phys [**51**]{} (2010) 083502 (24pp). [arXiv:0903.2604\[math-ph\]]{}. E.Routh, “On some properties of certain solutions of a differential equation of the second order," Proc. London Math. Soc. [**16**]{} (1884) 245-261; S.Bochner, “Über Sturm-Liouvillesche Polynomsysteme," Math. Zeit. [**29**]{} (1929) 730-736. D.Gómez-Ullate, N.Kamran and R.Milson, “Two-step Darboux transformations and exceptional Laguerre polynomials," J. Math. Anal. Appr. [**387**]{} (2012) 410-418, [arXiv:1103.5724\[math-ph\]]{}. P.R.Parthasarathy and R.B.Lenin, “Birth and death processes (BDP) models with applications," American Sciences Press, Inc. Columbus, Ohio (2004). R.Sasaki, “Exactly solvable birth and death processes," J. Math. Phys. [**50**]{} (2009) 103509 (18pp), [arXiv:0903.3097\[math-ph\]]{}. S.Karlin and J.L.McGregor, “The differential equations of birth-and-death processes," Trans. Amer. Math. Soc. [**85**]{} (1957) 489-546. 
C.Albanese, M.Christandl, N.Datta and A.Ekert, “Mirror Inversion of Quantum States in Linear Registers," Phys. Rev. Lett. [**93**]{} (2004) 230502 (4pp), [arXiv:quant-ph/0405029]{}; R.Chakrabarti and J.Van der Jeugt, “Quantum communication through a spin chain with interaction determined by a Jacobi matrix," J. Phys. [**A43**]{} (2010) 085302 (20pp), [arXiv:0912.0837\[quant-ph\]]{}; L. Vinet and A. Zhedanov, “How to construct spin chains with perfect state transfer," [arXiv:1110.6474\[quant-ph\]]{}. M.M.Crum, “Associated Sturm-Liouville systems," Quart. J. Math. Oxford Ser. (2) [**6**]{} (1955) 121-127, [arXiv:physics/9908019]{}. M.G.Krein, “On continuous analogue of a formula of Christoffel from the theory of orthogonal polynomials," Doklady Acad. Nauk. CCCP, [**113**]{} (1957) 970-973; V.É.Adler, “A modification of Crum’s method,” Theor. Math. Phys. [**101**]{} (1994) 1381-1386. A.A.Andrianov, M.V.Ioffe and V.P.Spiridonov, “Higher-derivative supersymmetry and the Witten index,” Phys. Lett. [**A 174**]{} (1993) 273-279; H.Aoyama, M.Sato and T.Tanaka, “General forms of a $\mathcal{N}$-fold supersymmetric family,” Phys. Lett. [**B 503**]{} (2001) 423-429, [arXiv:quant-ph/0012065]{}; D.J.Fernández and C.V.Hussin, “Higher-order SUSY, linearized nonlinear Heisenberg algebras and coherent states,” J. Phys. [**A 32**]{} (1999) 3603-3619; V.G.Bagrov and B.F.Samsonov, “Supersymmetry of a nonstationary Schrödinger equation,” Phys. Lett. [**A 210**]{} (1996) 60-64. G.Darboux, [*Théorie générale des surfaces*]{} vol 2 (1888) Gauthier-Villars, Paris. S.Odake and R.Sasaki, “Crum’s theorem for ‘discrete’ quantum mechanics,” Prog. Theor. Phys. [**122**]{} (2009) 1067-1079, [arXiv:0902.2593\[math-ph\]]{}. L.García-Gutiérrez, S.Odake and R.Sasaki, “Modification of Crum’s Theorem for ‘Discrete’ Quantum Mechanics,” Prog. Theor. Phys. [**124**]{} (2010) 1-26, [arXiv:1004.0289\[math-ph\]]{}. S.Odake and R.Sasaki, “Dual Christoffel transformations," Prog. Theor. Phys. 
[**126**]{} (2011) 1-34, [arXiv:1101.5468\[math-ph\]]{}. S.Odake and R.Sasaki, “Unified theory of annihilation-creation operators for solvable (‘discrete’) quantum mechanics,” J. Math. Phys. [**47**]{} (2006) 102102 (33pp), [arXiv:quant-ph/0605215]{}; “Exact solution in the Heisenberg picture and annihilation-creation operators," Phys. Lett. [**B641**]{} (2006) 112-117, [arXiv:quant-ph/0605221]{}. L.E.Gendenshtein, “Derivation of exact spectra of the Schroedinger equation by means of supersymmetry,” JETP Lett. [**38**]{} (1983) 356-359. E.Bannai and T.Ito, [*Algebraic Combinatorics I: Association Schemes*]{}, Benjamin/Cummings, Menlo Park, CA, (1984). G.M.L.Gladwell, [*Inverse Problems in Vibration*]{}, Kluwer, Dordrecht (2004).
--- abstract: 'We study the problems of leader election and population size counting for *population protocols*: networks of finite-state anonymous agents that interact randomly under a uniform random scheduler. We show a protocol for leader election that terminates in $O(\log_m(n) \cdot \log_2 n)$ parallel time, where $m$ is a parameter, using $O(\max\{m,\log n\})$ states. By adjusting the parameter $m$ between a constant and $n$, we obtain a single leader election protocol whose time and space can be smoothly traded off between $O(\log^2 n)$ and $O(\log n)$ time and between $O(\log n)$ and $O(n)$ states. Finally, we give a protocol which provides an upper bound $\hat{n}$ of the size $n$ of the population, where $\hat{n}$ is at most $n^a$ for some $a>1$. This protocol assumes the existence of a unique leader in the population and stabilizes in $\Theta{(\log{n})}$ parallel time, using a constant number of states in every node, except the unique leader, which is required to use $\Theta{(\log^2{n})}$ states.' author: - Othon Michail - 'Paul G. Spirakis' - Michail Theofilatos bibliography: - 'FullPaper.bib' title: 'Fast Approximate Counting and Leader Election in Populations[^1]' --- **Keywords:** population protocol, epidemic, leader election, counting, approximate counting, polylogarithmic time protocol Introduction {#sec:intro} ============ *Population protocols* [@AADFP06] are networks that consist of computational entities (also called *nodes* or *agents*) that are very weak regarding their individual capabilities. These networks have been shown to be able to construct complex shapes [@MS16a] and to perform complex computational tasks when they work collectively. Leader Election, which is a fundamental problem in distributed computing, is the process of designating a single agent as the coordinator of some task distributed among several nodes. The nodes communicate among themselves in order to decide which of them will get into the *leader* state.
*Counting* is also a fundamental problem in distributed computing, where nodes must determine the size $n$ of the population. Finally, we call *Approximate Counting* the problem in which nodes must determine an estimation $k$ of the population size $n$. Counting can then be considered as a special case of population size estimation, where $k=n$. Many distributed tasks require the existence of a leader prior to the execution of the protocol and, furthermore, some knowledge about the system (for instance the size of the population) can also help to solve these tasks more efficiently with respect both to time and space. Consider the setting in which an agent is in an initial state *a*, the rest $n-1$ agents are in state *b* and the only existing transition is $(a,b) \rightarrow (a,a)$. This is the *one-way epidemic* process and it can be shown that the expected number of interactions to convergence under the uniform random scheduler is $\Theta(n\log{n})$ (e.g., [@AAE08]), thus $\Theta(\log{n})$ *parallel time*. In this work, we make extensive use of epidemics, which means that information is being spread throughout the population, thus all nodes will obtain this information in $O(\log{n})$ expected parallel time. We use this property to construct an algorithm that solves the *Leader Election* problem. In addition, by observing the rate of the epidemic spreading under the uniform random scheduler, we can extract valuable information about the population. This is the key idea of our *Approximate Counting* algorithm. Related Work {#subsec:related} ------------ The framework of population protocols was first introduced by Angluin et al. [@AADFP06] in order to model the interactions in networks between small resource-limited mobile agents. When operating under a uniform random scheduler, population protocols are formally equivalent to a restricted version of stochastic Chemical Reaction Networks (CRNs), which model chemistry in a well-mixed solution [@SCWB08].
“CRNs are widely used to describe information processing occurring in natural cellular regulatory networks, and with upcoming advances in synthetic biology, CRNs are a promising programming language for the design of artificial molecular control circuitry” [@CDS14; @Do14]. Results in both population protocols and CRNs can be transferred to each other, owing to a formal equivalence between these models. Angluin et al. [@AAER07] showed that all predicates stably computable in population protocols (and certain generalizations of it) are semilinear. Semilinearity persists up to $o(\log\log n)$ local space but not more than this [@CMNPS11]. Moreover, the computational power of population protocols can be increased to the commutative subclass of ${\mathbf}{NSPACE}(n^2)$, if we allow the processes to form connections between each other that can hold a state from a finite domain [@MCS11], or by equipping them with unique identifiers, as in [@GR09]. For introductory texts to population protocols the interested reader is encouraged to consult [@AR09; @MCS11] and [@MS18] (the latter discusses population protocols and related developments as part of a more general overview of the emerging theory of dynamic networks). Optimal algorithms, regarding the time complexity of fundamental tasks in distributed networks, for example leader election and majority, are the key to many distributed problems. For instance, the help of a central coordinator can lead to simpler and more efficient protocols [@AAE08]. There are many solutions to the problem of leader election, such as in networks with nodes having distinct labels or anonymous networks [@An80; @ASW85; @AG15; @GS18; @FJ06]. Although the availability of an initial leader does not increase the computational power of standard population protocols (in contrast, it does in some settings where faults can occur [@DFI17]), still it may allow faster computation.
Specifically, the fastest known population protocols for semilinear predicates without a leader take as long as linear parallel time to converge ($\Theta(n)$). On the other hand, when the process is coordinated by a unique leader, it is known that any semilinear predicate can be stably computed with polylogarithmic expected convergence time ($O(\log^5 n)$) [@AAE06]. For several years, the best known algorithm for leader election in population protocols was the pairwise-elimination protocol of Angluin et al. [@AADFP06], in which all nodes are leaders in state $l$ initially and the only effective transition is $(l,l)\rightarrow (l,f)$. This protocol always stabilizes to a configuration with a unique leader, but this takes on average linear time. Recently, Doty and Soloveichik [@DS15] proved that not only this protocol, but any standard population protocol, requires linear time to solve leader election. This immediately led the research community to look into ways of strengthening the population protocol model in order to enable the development of sub-linear time protocols for leader election and other problems (note that Belleville, Doty, and Soloveichik [@BDS17] recently showed that such linear time lower bounds hold for a larger family of problems and not just for leader election). Fortunately, in the same way that increasing the local space of agents led to a substantial increase of the class of computable predicates [@CMNPS11], it has started to become evident that it can also be exploited to substantially speed up computations. Alistarh and Gelashvili [@AG15] proposed the first sub-linear leader election protocol, which stabilizes in $O(\log^3n)$ parallel time, assuming $O(\log^3n)$ states at each agent. In another recent work, Gasieniec and Stachowiak [@GS18] designed a space optimal ($O(\log\log{n})$ states) leader election protocol, which stabilises in $O(\log^2n)$ parallel time.
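As an aside, the linear-time behaviour of the pairwise-elimination protocol is easy to observe in an aggregated simulation (our sketch, not from the cited works): with $k$ leaders remaining, a uniformly chosen pair eliminates a leader with probability $k(k-1)/(n(n-1))$, so it suffices to track $k$ rather than individual agents.

```python
import random

def pairwise_elimination(n, rng):
    """Parallel time for (l, l) -> (l, f) to reduce n leaders to one."""
    leaders, steps = n, 0
    while leaders > 1:
        steps += 1
        # effective iff the scheduler picks two of the current leaders
        if rng.random() < leaders * (leaders - 1) / (n * (n - 1)):
            leaders -= 1
    return steps / n

rng = random.Random(42)
for n in (100, 200, 400):
    avg = sum(pairwise_elimination(n, rng) for _ in range(10)) / 10
    print(n, round(avg, 1))  # average roughly doubles as n doubles
```

The expected parallel time here is $\sum_{k=2}^{n}\frac{n-1}{k(k-1)} = \frac{(n-1)^2}{n}$, which matches the $\Theta(n)$ behaviour discussed above.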
Gasieniec and Stachowiak use the concept of phase clocks (introduced in [@AAE08] for population protocols), which is a synchronization and coordination tool in distributed computing. General characterizations, including upper and lower bounds, of the trade-offs between time and space in population protocols were recently achieved in [@AAEGR17]. Moreover, some papers [@MOKY12; @DDFSV17] have studied leader election in the mediated population protocol model. For counting, the most studied case is that of *self-stabilization*, which makes the strong adversarial assumption that arbitrary corruption of memory is possible in any agent at any time, and promises only that eventually it will stop. Thus, the protocol must be designed to work from any possible configuration of the memory of each agent. It can be shown that counting is *impossible* without having one agent (the “base station”) that is protected from corruption [@beauquier2007self]. In this scenario $\Theta(n \log n)$ time is sufficient [@beauquier2015space] and necessary [@AspnesBBS2016] for self-stabilizing counting. In the less restrictive setting in which all nodes start from the same state (apart possibly from a unique leader and/or unique ids), not much is known. In a recent work, Michail [@M15] proposed a terminating protocol in which a pre-elected leader equipped with two $n$-counters computes an approximate count between $n/2$ and $n$ in $O(n\log{n})$ parallel time with high probability. The idea is to have the leader implement two competing processes, running in parallel. The first process counts the number of nodes that have been encountered once, the second process counts the number of nodes that have been encountered twice, and the leader terminates when the second counter catches up with the first. In the same paper, a version assuming unique ids instead of a leader was also given. The task of counting has also been studied in the related context of worst-case dynamic networks [@IK14; @KLO10; @MCS13; @LBBC14; @CFQS12].
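The two-counter idea of [@M15] admits a similarly compact sketch (ours, with one modeling assumption: we simulate only the leader's encounters, each of which takes about $n/2$ interactions, i.e., half a unit of parallel time, and we terminate when the second counter catches up with the first):

```python
import random

def leader_census(n, rng):
    """Leader counts nodes met once (c1) and met twice (c2);
    halts when c2 catches up with c1.  Returns (c1, parallel time)."""
    met = [0] * (n - 1)           # encounter count per non-leader node
    c1 = c2 = encounters = 0
    while True:
        encounters += 1
        j = rng.randrange(n - 1)  # leader meets a uniform random node
        met[j] += 1
        if met[j] == 1:
            c1 += 1
        elif met[j] == 2:
            c2 += 1
        if c1 > 0 and c2 >= c1:
            # each leader encounter costs ~n/2 interactions,
            # i.e. 1/2 unit of parallel time
            return c1, encounters / 2

count, t = leader_census(1000, random.Random(11))
print(count, t)
```

According to [@M15], the returned count lies between $n/2$ and $n$ with high probability, and the parallel time is $O(n\log n)$.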
Contribution ------------ In this work we employ simple epidemics in order to provide efficient solutions to approximate counting the size of a population of agents and also to leader election in populations. Our model is that of population protocols. Our goal for both problems is to get polylogarithmic parallel time and to also use small memory per agent. First, we show how to approximately count a population fast (with a leader) and then we show how to elect a leader (very fast) if we have a crude population estimate.\ *(a)* We start by giving a protocol which provides an upper bound $\hat{n}$ of the size $n$ of the population, where $\hat{n}$ is at most $n^a$ for some $a>1$. This protocol assumes the existence of a unique leader in the population. The runtime of the protocol until stabilization is $\Theta(\log{n})$ parallel time. Each node except the unique leader uses only a constant number of states. However, the leader is required to use $\Theta(\log^2{n})$ states.\ *(b)* We then look into the problem of electing a leader. We assume an approximate knowledge of the size of the population (i.e., an estimate $\hat{n}$ of at most $n^a$, where $n$ is the population size) and provide a protocol (parameterized by the size $m$ of a counter for drawing local random numbers) that elects a unique leader w.h.p. in $O(\frac{\log^2{n}}{\log{m}})$ parallel time, with number of states $O(\max\{m,\log{n}\})$ per node. The model {#sec:model} ========= In this work, the system consists of a population *V* of *n* distributed and anonymous (i.e., without unique IDs) *processes*, also called *nodes* or *agents*, that are capable of performing local computations. Each of them executes as a deterministic state machine with a finite set of states $Q$, according to a transition function $\delta: Q \times Q \rightarrow Q \times Q$.
Their interaction is based on the probabilistic (uniform random) scheduler, which picks in every discrete step a random edge from the complete graph $G$ on $n$ vertices. When two agents interact, they mutually access their local states, updating them according to the transition function $\delta$. The transition function is a part of the population protocol which all nodes store and execute locally. *The time is measured as the number of steps until stabilization, divided by $n$ (parallel time)*. The protocols that we propose do not enable or disable connections between nodes, in contrast with [@MS16a], where Michail and Spirakis considered a model where a (virtual or physical) connection between two processes can be in one of a finite number of possible states. The transition function that we present throughout this paper follows the notation $(x,y) \rightarrow (z,w)$, which refers to the process states before *(x and y)* and after *(z and w)* the interaction, that is, the transition function maps pairs of states to pairs of states.\ #### The Leader Election Problem. The problem of leader election in distributed computing is for each node eventually to decide whether it is a leader or not, subject to the condition that exactly one node decides that it is the leader. An algorithm $A$ solves the leader election problem if eventually the states of agents are divided into *leader* and *follower*, a leader remains elected and a follower can never become a leader. In every execution, exactly one agent becomes leader and the rest determine that they are not leaders. All agents start in the same initial state $q$ and the output is $O=\{leader, follower\}$. A randomized algorithm $R$ solves the leader election problem if eventually only one leader remains in the system w.h.p. #### Approximate Counting Problem. We define as *Approximate Counting* the problem in which a leader must determine an estimation $\hat{n}$ of the population size, where $\frac{\hat{n}}{a} < n < \hat{n}$.
We call $a$ the estimation parameter. Fast Counting with a unique leader {#sec:counting\_with\_unique\_leader} ================================== In this section we present our *Approximate Counting* protocol. The protocol is presented in Section \[subsec:counting\_with\_unique\_leader\_abstract\]. In Section \[subsec:counting\_with\_unique\_leader\_analysis\] we prove the correctness of our protocol and finally, in Section \[sec:all\_experiments\], experiments that support our analysis can be found. Abstract description and protocol {#subsec:counting\_with\_unique\_leader\_abstract} --------------------------------- In this section, we construct a protocol which solves the problem of approximate counting. Our probabilistic algorithm for solving the approximate counting problem requires a unique leader, who is responsible for giving an estimation of the number of nodes. It uses the epidemic spreading technique and it stabilizes in $O(\log{n})$ parallel time. There is initially a unique leader $l$ and all other nodes are in state $q$. The leader $l$ stores two counters in its local memory, initially both set to 0. We use the notation $l_{(c_q,c_a)}$, where $c_q$ is the value of the first counter and $c_a$ is the value of the second one. The leader, at its first interaction, starts an epidemic by turning a $q$ node into an $a$ node. Whenever a $q$ node interacts with an $a$ node, its state becomes $a$ $((a, q) \rightarrow (a, a))$. The first counter $c_q$ is used for counting the $q$ nodes and the second counter $c_a$ for the $a$ nodes, that is, whenever the leader $l$ interacts with a $q$ node, the value of the counter $c_q$ is increased by one and whenever $l$ interacts with an $a$ node, $c_a$ is increased by one. The termination condition is $c_q = c_a$; the leader then holds a constant-factor approximation of $\log{n}$, and we prove that with high probability the estimation $2^{c_q+1} = 2^{c_a+1}$ is polynomially close to $n$.
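Before the formal analysis, the protocol can be sanity-checked with an aggregated Monte Carlo simulation (our sketch, not part of the protocol itself): it suffices to track the number of infected nodes and the leader's two counters, drawing each interacting pair according to its exact probability among the $\binom{n}{2}$ pairs.

```python
import random

def approximate_count(n, rng):
    """Simulate the leader's counting protocol; return the estimation
    2^(c_q + 1) and the parallel time until the halting condition."""
    infected = 0                 # nodes in state a (the leader is separate)
    c_q = c_a = steps = 0
    while True:
        steps += 1
        q_nodes = n - 1 - infected
        r = rng.random() * n * (n - 1) / 2    # choose among all C(n,2) pairs
        if r < q_nodes:                        # leader meets a q node
            if infected == 0:
                infected, c_q = 1, 1           # (l_{(0,0)}, q) -> (l_{(1,0)}, a)
            elif c_q > c_a:
                c_q += 1
            else:
                break                          # c_q == c_a: halt
        elif r < q_nodes + infected:           # leader meets an a node
            if c_q > c_a:
                c_a += 1
            else:
                break                          # c_q == c_a: halt
        elif r < q_nodes + infected + infected * q_nodes:
            infected += 1                      # (a, q) -> (a, a)
    return 2 ** (c_q + 1), steps / n

rng = random.Random(9)
print([approximate_count(1000, rng) for _ in range(3)])
```

On typical runs the returned estimation is polynomially close to $n$ and the parallel time is logarithmic, in line with the analysis that follows.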
We first describe a simple terminating protocol that guarantees with high probability $n^{1/a} \leq n_e \leq n^a$, for a constant $a$, i.e., the population size estimate $n_e$ is polynomially close to the actual size. Chernoff bounds then imply that repeating this protocol a constant number of times suffices to obtain $n/2 \leq n_e \leq 2n$ with high probability. $Q = \{q,\; a,\; l_{(c_q, c_a)}\}$ $\delta:$ $ $ $(l_{(0,0)},\; q) \rightarrow (l_{(1,0)},\; a)$ $(a,\; q) \rightarrow (a,\; a)$ $(l_{(c_q,c_a)},\; q) \rightarrow (l_{(c_q+1,c_a)},\; q), \; if \; c_q>c_a$ $(l_{(c_q,c_a)},\; a) \rightarrow (l_{(c_q,c_a+1)},\; a), \; if \; c_q>c_a$ $(l_{(c_q,c_a)},\; \cdot) \rightarrow (halt,\; \cdot), \; if \; c_q=c_a$ Analysis {#subsec:counting_with_unique_leader_analysis} -------- \[lemma:size\_estimation\] When half or less of the population has been infected, with high probability $c_q>c_a$. In fact, $c_q - c_a \approx \ln{(n/2)} - \sqrt{\log{n}} > 0$. We divide the process of the epidemic spreading into rounds $i$, where round $i$ means that there exist $i$ infected nodes in the population. Call an interaction a success if an effective rule applies and a new $a$ appears on some node. Let the random variable $X$ be the total number of interactions between the leader $l$ and non-infected nodes $q$, the random variable $Y$ be the total number of interactions between $l$ and infected nodes $a$, and the r.v. $I$ be the total number of interactions in the population until all nodes become infected. We also define the r.v. $X_i,\; Y_i$ and $I_i$ to be the corresponding numbers in round $i$. Then, it holds that $X = \sum_{i=1}^{n}X_i,\; Y = \sum_{i=1}^{n}Y_i$ and $I = \sum_{i=1}^{n}I_i$. Finally, let the r.v. $X_{ij}$ and $Y_{ij}$ be independent Bernoulli trials such that for $1 \leq j \leq I_i$, $Pr[X_{ij}=1]=p_{Xi}$, $Pr[X_{ij}=0]=1-p_{Xi}$, $Pr[Y_{ij}=1]=p_{Yi}$ and $Pr[Y_{ij}=0]=1-p_{Yi}$.
This means that in every interaction in round $i$, the leader, if chosen, interacts with a $q$ node with probability $p_{Xi}$ and with an $a$ node with probability $p_{Yi}$. Then, it holds that $X_i = \sum_{j=1}^{I_i}X_{ij}$ and $Y_i = \sum_{j=1}^{I_i}Y_{ij}$, where $I_i$ is the number of interactions until a success in round $i$. $$p_{Xi} = \frac{2(n-i)}{n(n-1)}, \; p_{Yi} = \frac{2i}{n(n-1)} \text{ and } p_{Ii} = \frac{2i(n-i)}{n(n-1)}$$ We also divide the whole process into two phases; the first phase ends when half of the population has been infected, that is $1 \leq i \leq \frac{n}{2}$, and for the second phase it holds that $\frac{n}{2} + 1 \leq i \leq n$. We shall argue that if the counter $c_q$ reaches a value which is a function of $n$ before the second counter $c_a$ reaches $c_q$, the leader gives a good estimate. We use $X^a$ and $Y^a$ to denote the r.v. $X$ and $Y$ during the first phase and $X^b$, $Y^b$ for the second phase.\ For $1 \leq i \leq \frac{n}{2}$ (first phase) and by linearity of expectation we have: $$\begin{split} E[X^a] = E[\sum_{i=1}^{n/2}X_i] = E[\sum_{i=1}^{n/2}\sum_{j=1}^{I_i}X_{ij}] = \sum_{i=1}^{n/2}E[\sum_{j=1}^{I_i}X_{ij}] \end{split}$$ and by Wald’s equation, we have that $E[\sum_{j=1}^{I_i}X_{ij}] = E[I_i]E[X_{ij}]$. $$\begin{split} E[X^a] = \sum_{i=1}^{n/2}\frac{n(n-1)}{2i(n-i)}\frac{2(n-i)}{n(n-1)} = \sum_{i=1}^{n/2}\frac{1}{i} = H_{n/2} = \ln{\frac{n}{2}} + a_{n/2} \geq \ln{\frac{n}{2}} \end{split}$$ where $H_{n/2}$ denotes the $(\frac{n}{2})$th Harmonic number and $a_n := H_n - \ln{n}$ satisfies $0 < a_n < 1$ for all $n \in \mathbb{N}$ (it converges to the Euler-Mascheroni constant). $$\begin{split} E[Y^a] & = E[\sum_{i=1}^{n/2}Y_i] = E[\sum_{i=1}^{n/2}\sum_{j=1}^{I_i}Y_{ij}] = \sum_{i=1}^{n/2}E[\sum_{j=1}^{I_i}Y_{ij}] \end{split}$$ and by Wald’s equation, we have that $E[\sum_{j=1}^{I_i}Y_{ij}] = E[I_i]E[Y_{ij}]$.
$$\begin{split} E[Y^a] = \sum_{i=1}^{n/2}\frac{n(n-1)}{2i(n-i)}\frac{2i}{n(n-1)} = \sum_{i=1}^{n/2}\frac{1}{n-i} = \sum_{i=1}^{n-1}\frac{1}{i} - \sum_{i=1}^{n/2-1}\frac{1}{i} = H_{n-1} - H_{n/2-1} \approx \ln{2} \end{split}$$ By the Chernoff bound, the probabilities that the r.v. $X^a$ is less than $(1 - \delta)E(X^a)$ and more than $(1 + \delta)E(X^a)$ are $$\begin{split} Pr[X^a \leq (1 - \delta)E(X^a)] \leq e^{-\frac{\ln{(n/2)}\delta^2}{2}} = \frac{1}{(\frac{n}{2})^{\delta^2/2}} \end{split}$$ $$\begin{split} Pr[X^a \geq (1 + \delta)E(X^a)] \leq e^{-\frac{\ln{(n/2)}\delta^2}{3}} = \frac{1}{(\frac{n}{2})^{\delta^2/3}} \end{split}$$ that is, $X^a$ does not deviate far from its expectation. The probability that the r.v. $Y^a$ is more than $(1 + \delta)E(Y^a)$, for $\delta=\frac{3\sqrt{\log{n}}}{\ln{2}}$, is $$\begin{split} Pr[Y^a \geq (1 + \delta)E(Y^a)] \leq e^{-\frac{\ln{2}\frac{3\sqrt{\log{n}}}{\ln{2}}}{3}} = e^{-\sqrt{\log{n}}} \end{split}$$ Thus, during the first phase (until half of the population is infected), the leader interacts with $a$ nodes a constant number of times in expectation and, w.h.p., fewer than $(1+\delta)E[Y^a]$ times. In addition, it interacts $O(\log{n})$ times with non-infected nodes w.h.p. In Section \[sec:all\_experiments\] we have tested our results, and Figure \[fig:population\_size\_estimation\_counters\] confirms this behavior. During the second phase, the infected nodes are more than the non-infected nodes; thus, eventually, the second counter $c_a$ will reach $c_q$ and the leader terminates. By that time, the first counter will already hold a function of $n$ w.h.p. $(c_q - c_a \approx \ln{(n/2)} - \sqrt{\log{n}} > 0)$. \[corollary:population\_size\_estimation\_1\] W.h.p., *PSE* does not terminate until more than half of the population has been infected. It now suffices to show that the first counter $c_q$ does not continue to rise significantly.
During the second phase, where $\frac{n}{2} + 1 \leq i \leq n$, we have $$\begin{split} E[X^b] & = E[\sum_{i=n/2+1}^{n}X_i] = H_n - H_{n/2} \approx \ln{2} \end{split}$$ By the Chernoff bound, the probability that the r.v. $X^b$ is more than $(1 + \delta)E(X^b)$, for $\delta=\frac{3\log{n}}{\ln{2}}$, is $$\begin{split} Pr[X^b \geq (1 + \delta)E(X^b)] \leq e^{-\frac{\ln{2}\frac{3\log{n}}{\ln{2}}}{3}} = \frac{1}{n} \end{split}$$ $ $ \[lemma:size\_estimation\_2\] Our *Population Size Estimation* protocol terminates after $\Theta(\log{n})$ parallel time w.h.p. After half of the population has been infected, it holds that $|c_a - c_q| = \Theta(\log{n})$. When this difference reaches zero, the unique leader terminates. We focus only on the effective interactions, which are always interactions between the leader $l$ and $a$ or $q$ nodes. The probability that an effective interaction is $(l, a)$ is $p_i = i/n > 1/2$, as more than half of the population is infected. Thus, the probability that an effective interaction is $(l, q)$ is $q_i = 1 - p_i = (n-i)/n < 1/2$. In fact, the probability $p_i$ keeps increasing as the epidemic spreads throughout the population. This process may be viewed as a random walk on a line with positions $[0, \infty)$. The particle starts from position $a\log{n}$ and there is an absorbing barrier at $0$. The position of the particle corresponds to the difference $|c_a - c_q|$ of the two counters, and it moves towards zero with probability $p_i>1/2$. By the basic properties of biased random walks, after $\Theta(\log{n})$ such steps the particle will be absorbed at $0$. Thus, the total parallel time to termination is $\Theta(\log{n})$. \[corollary:population\_size\_estimation\_2\] When $c_q = c_a$, w.h.p. $2^{c_q+1}$ is an upper bound on $n$.
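The counting protocol above is easy to simulate. The sketch below is our own illustration (the convention of making agent 0 the leader, and all names, are assumptions, not part of the protocol listing); it implements the rules $(l_{(0,0)}, q) \rightarrow (l_{(1,0)}, a)$, $(a,q) \rightarrow (a,a)$, the two counter rules, and the halting condition $c_q = c_a$:

```python
import random

def approximate_count(n, seed=0):
    """Simulate the counting protocol: agent 0 is the unique leader,
    all others start in state 'q'. Returns (c_q, parallel_time)."""
    rng = random.Random(seed)
    states = ['l'] + ['q'] * (n - 1)   # leader marker; counters kept separately
    cq = ca = 0
    steps = 0
    while True:
        steps += 1
        i, j = rng.sample(range(n), 2)
        if 'l' in (states[i], states[j]):
            other = j if states[i] == 'l' else i
            if cq == 0 and ca == 0:
                states[other] = 'a'     # (l_(0,0), q) -> (l_(1,0), a): start epidemic
                cq = 1
            elif states[other] == 'q':
                cq += 1                 # count a non-infected node (c_q > c_a holds)
            else:
                ca += 1                 # count an infected node
            if cq == ca:
                return cq, steps / n    # halt; the estimate is 2**(cq + 1)
        elif {states[i], states[j]} == {'a', 'q'}:
            states[i] = states[j] = 'a' # (a, q) -> (a, a)

cq, t = approximate_count(1000)
estimate = 2 ** (cq + 1)
```

Since the guard $c_q > c_a$ holds from the first leader interaction until the halting step, the simulation can increment the counters unconditionally; $c_q$ ends up close to $\ln{(n/2)}$, as the lemma above predicts.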
$ $ Leader Election with approximate knowledge of $n$ {#sec:leaderelection} ================================================= The existence of a *unique leader agent* is a key requirement for many population protocols [@AAE08] and for distributed computing in general; thus, having a fast protocol that elects a unique leader is of high significance. In this section, we present our *Leader Election* protocol, giving first an abstract description (\[subsec:leaderelection\_abstract\]) and the algorithm (\[subsec:leaderelection\_protocol\]), and then presenting its analysis (\[subsec:leaderelection\_analysis\]). Finally, we have measured the stabilization time of this protocol for different population sizes, and the results can be found in Section \[sec:all\_experiments\]. Abstract description {#subsec:leaderelection_abstract} -------------------- We assume that the nodes know *an upper bound $n^b$ on the population size, where $n$ is the number of nodes and $b$ is any large constant*.\ All nodes store three variables: the round $e$, a random number $r$, and a counter $c$, and they are able to generate random numbers within a predefined range $[1,m]$. We define two types of states: the leaders ($l$) and the followers ($f$). Initially, all nodes are in state $l$, indicating that they are all potential leaders. The protocol operates in rounds, and in every round the leaders compete with each other, trying to survive (i.e., not become followers). The followers just copy the *tuple* $(r, e)$ from the leaders and try to spread it throughout the population. During the first interaction of two $l$ nodes, one of them becomes a follower, a random number between $1$ and $m$ is generated, the leader enters the first round, and the follower copies the round $e$ and the random number $r$ from the leader to its local memory. The followers are used only for spreading information among the potential leaders, and they cannot become leaders again.
Throughout this paper, $n$ denotes the *population size* and $m$ *the maximum number that nodes can generate*.\ **Information spreading.** It has been shown that the epidemic spreading of information can reduce the convergence time of a population protocol. In this work, we adopt this notion and use the followers as the means of competition and communication among the potential leaders. All leaders try to spread their information (i.e., their round and random number) throughout the population, but w.h.p. all of them except one eventually become followers. We say that a node $x$ wins during an interaction if one of the following holds: - Node $x$ is in a bigger round $e$. - If both nodes are in the same round, node $x$ has a bigger random number $r$. One or more leaders $L$ are in the *dominant state* if their tuple $(r_1, e_1)$ wins every other tuple in the population. Then, the tuple $(r_1, e_1)$ spreads as an epidemic throughout the population, independently of the other leaders’ tuples (all leaders or followers with the tuple $(r_1, e_1)$ always win their competitors). We also call the leaders $L$ the *dominant leaders*.\ **Transition to next round.** After its first interaction, a leader $l$ enters the first round. We can group all the other nodes that $l$ can interact with into three disjoint sets. - The first group contains the nodes that are in a bigger round, or have a bigger random number while being in the same round as $l$. If the leader $l$ interacts with such a node, it becomes a follower. - The second group contains the nodes that are in a smaller round, or have a smaller random number while being in the same round as $l$. After an interaction with a node in this group, the other node becomes a follower and the leader increases its counter $c$ by one. - The third group contains the followers that have the same tuple $(r, e)$ as $l$. After an interaction with a node in this group, $l$ increases its counter $c$ by one.
As long as the leader $l$ survives (i.e., does not become a follower), it increases or resets its counter $c$, according to the transition function $\delta$. When the counter $c$ reaches $b\log{n}$, where $n^b$ is the upper bound on the population size, the leader resets it and its round $e$ is increased by one. The followers can never increase their round or generate random numbers.\ **Stabilization.** The protocol that we present stabilizes, as the whole population eventually reaches a final configuration of states. To achieve this, when the round of a leader $l$ reaches $\ceil{\frac{2b\log{n}-\log (b\log^2{n})}{\log{m}}}$, $l$ stops increasing its round $e$, unless it interacts with another leader. This rule guarantees the stabilization of our protocol. The protocol {#subsec:leaderelection_protocol} ------------ In this section, we present our *Leader Election* protocol. We use the notation $p_{r,e}$ to indicate that node $p$ has the random number $r$ and is in the round $e$. Also, we say that $(r_1,e_1)>(r_2,e_2)$ if the tuple $(r_1,e_1)$ wins the tuple $(r_2,e_2)$. A tuple $(r_1,e_1)$ wins the tuple $(r_2,e_2)$ if $e_1>e_2$, or if they are in the same round $(e_1=e_2)$ and $r_1>r_2$.
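The winning relation on tuples is simply a lexicographic comparison with the round taking precedence over the random number; a minimal sketch of it (our own illustration, relying on Python's built-in tuple ordering):

```python
def wins(t1, t2):
    """Return True iff tuple t1 = (r1, e1) wins t2 = (r2, e2):
    a bigger round e wins; on equal rounds, the bigger random number r wins."""
    r1, e1 = t1
    r2, e2 = t2
    return (e1, r1) > (e2, r2)   # lexicographic: round first, then random number

# A node in a bigger round beats any random number from a smaller round.
assert wins((1, 5), (9, 4))
# Same round: the bigger random number wins.
assert wins((7, 3), (4, 3))
# A tuple never strictly wins itself (equal tuples are handled by other rules).
assert not wins((4, 3), (4, 3))
```

This strict ordering is exactly what the transition rules below test when two nodes compare their tuples.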
$Q = \{l, f_{r,e}, l_{r,e}\}: r \in [1, m]$ $\delta:$ $(l, l) \rightarrow (l_{r,1}, f_{r,1})$ $(f_{r,e}, l) \rightarrow (f_{r,e}, f_{r,e})$ $(l_{r,e}, l) \rightarrow (l_{r,e}, f_{r,e}), \; l_{counter}=l_{counter}+1$ $(f_{r,i}, f_{s,j}) \rightarrow (f_{k,l}, f_{k,l}), \; \textbf{if}\; (r,i) > (s,j)\; \textbf{then}\; (k,l)=(r,i)\; \textbf{else}\; (k,l)=(s,j) $ $(l_{r,i}, l_{s,j}) \rightarrow (l_{k,l}, f_{k,l}), \; l_{counter}=l_{counter}+1, \; \textbf{if}\; (r,i) \geq (s,j)\; \textbf{then}\; (k,l)=(r,i)\; \textbf{else}\; (k,l)=(s,j) $ $(l_{r,i}, f_{s,j}) \rightarrow (f_{s,j}, f_{s,j}), \; \textbf{if}\; (s,j)>(r,i)$ $(l_{r,i}, f_{s,j}) \rightarrow (l_{r,i}, f_{r,i}), \; l_{counter}=l_{counter}+1, \; \textbf{if}\; (r,i)>(s,j)$ $(l_{r,e}, f_{r,e}) \rightarrow (l_{r,e}, f_{r,e}), \; l_{counter}=l_{counter}+1$ $\textbf{if}\; (l_{counter}=b\log{n}) \; \textbf{then}\{$ Increase round; Generate a new random number between 1 and m; Reset counter to zero; $\textbf{if}\; (Round=\ceil{\frac{2b\log{n}-\log (b\log^2{n})}{\log{m}}}) \;\textbf{Stop increasing the round, unless you interact with a leader;}$ $\}$ Analysis {#subsec:leaderelection_analysis} -------- The leader election algorithm that we propose elects a unique leader after $O(\frac{\log^2{n}}{\log{m}})$ parallel time w.h.p. To achieve this, the algorithm works in stages, called *epochs* throughout this paper, and the number of potential leaders decreases exponentially between the epochs. An epoch $i$ starts when any leader enters the $i$th round $(e=i)$ and ends when any leader enters the $(i+1)$th round $(e=i+1)$. Here we do the exact analysis for $m=\log n$. This can be generalized to any $m$ between a constant and $n$. \[lemma:1\] During the execution of the protocol, at least one leader will always exist in the population. Assume an epoch $e$, in which only one leader $l_1$ with the tuple $(r_1,e_1)$ exists in the population and the rest of the nodes have become followers.
In order for $l_1$ to become a follower, there should be a follower with a tuple $(r_2,e_2)$, where $(r_2,e_2)>(r_1,e_1)$. But since the followers can never increase their round or generate a new random number, this would imply that there exists another leader $l_2$ with the tuple $(r_2,e_2)$, a contradiction. \[lemma:2\] Assume an epoch $e$ and $k$ leaders with the dominant tuple $(r, e)$ in this epoch. The expected parallel time to convergence of their epidemic in epoch $e$ is $\Theta(\log n)$. Let the random variable $X$ be the total number of interactions until all nodes have the dominant tuple $(r,e)$. We divide the interactions of the protocol into rounds, where round $i$ means that the epidemic has been spread to $i$ nodes. Initially, $i=k$, that is, the $k$ leaders are already infected by the epidemic, but we study the worst case where $k=1$. Call an interaction a success if the epidemic spreads to a new node. Let also the random variables $X_i, 1 \leq i \leq n-1$, be the number of interactions in the $i$th round. Then, $X = \sum_{i=1}^{n-1}X_i$. The probability $p_i$ of success at any interaction during the $i$th round is:\ $$p_i = \frac{2i(n-i)}{n(n-1)}$$ where $i(n-i)$ is the number of effective interactions and $\frac{n(n-1)}{2}$ is the number of all possible interactions. By linearity of expectation we have: $$\begin{split} E[X] & = E[\sum_{i=1}^{n-1}X_i] = \sum_{i=1}^{n-1}E[X_i] =\sum_{i=1}^{n-1} \frac{1}{p_i} = \sum_{i=1}^{n-1} \frac{n(n-1)}{2i(n-i)} \\ & = \frac{n(n-1)}{2} \sum_{i=1}^{n-1} \frac{1}{i(n-i)} \\ & = \frac{n(n-1)}{2} \sum_{i=1}^{n-1} \frac{1}{n}(\frac{1}{i} + \frac{1}{n-i}) \\ & = \frac{(n-1)}{2} [\sum_{i=1}^{n-1} \frac{1}{i} + \sum_{i=1}^{n-1} \frac{1}{n-i}] \\ & = \frac{(n-1)}{2}2H_{n-1} \\ & = (n-1)[\ln(n-1)+ a_{n-1}] = \Theta(n\log n) \end{split}$$ where $H_n$ denotes the $n$th Harmonic number and $a_n:=H_n-\ln n~(n \in \mathbb{N})$ is a decreasing sequence with $0<a_n<1$ for all $n \in \mathbb{N}$ (it converges to the *Euler-Mascheroni constant*).
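The expectation $E[X] = (n-1)H_{n-1}$ derived above can be checked numerically. The sketch below (our own illustration) simulates only the round structure, drawing a geometric waiting time with the success probability $p_i$ per interaction instead of sampling pairs, which is equivalent for counting interactions:

```python
import random

def epidemic_interactions(n, rng):
    """Total interactions until the epidemic covers all n agents, using
    the per-round success probability p_i = 2 i (n - i) / (n (n - 1))."""
    interactions = 0
    for infected in range(1, n):            # round i: i agents infected
        p = 2 * infected * (n - infected) / (n * (n - 1))
        while True:                         # geometric waiting time for a success
            interactions += 1
            if rng.random() < p:
                break
    return interactions

rng = random.Random(1)
n = 200
avg = sum(epidemic_interactions(n, rng) for _ in range(200)) / 200
predicted = (n - 1) * sum(1 / i for i in range(1, n))  # (n - 1) * H_{n-1}
ratio = avg / predicted
```

For $n = 200$ the empirical average stays within a few percent of $(n-1)H_{n-1}$, consistent with the $\Theta(n\log n)$ bound.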
In terms of parallel time, it holds that $E[\frac{X}{n}] = \frac{E[X]}{n} = \Theta{(\log{n})}$. \[lemma:4\] If a counter $c$ of a leader $l$ reaches $b\log{n}$, its epidemic will have already been spread throughout the population w.h.p. Let the r.v. $X$ be the total number of interactions until all nodes have been infected by the dominant tuple. By Lemma \[lemma:2\], the expected number of interactions until the epidemic spreads throughout the whole population is $\mu = (n-1)\ln{(n-1)} + \Theta(n)$. By the Chernoff bound and for $\delta = 1/2$, it holds that $$\begin{split} & Pr[X \leq (1-\delta)\mu] \leq e^{\frac{-\delta^2\mu}{2}} \leq e^{-\frac{(n-1)\ln{(n-1)}}{8}} \leq \left( \frac{1}{n-1} \right)^{(n-1)/8} \end{split}$$ Thus, the number of interactions per node under the uniform random scheduler until all nodes become infected is w.h.p. at most $\frac{(n-1)\ln{(n-1)}}{n} < \frac{n\ln{n}}{n} = \ln{n}$. Thus, after $b\log{n}$ interactions of the leader, where $n^b$ is the upper bound on the population size and $b$ a large constant, there are no non-infected nodes w.h.p. \[theorem:1\] After $O(\frac{\log{n}}{\log{m}})$ epochs, there is a unique leader in the population w.h.p. Assume an epoch $e$, in which there are $k$ leaders with the dominant tuple $(r,e)$, and let $m$ be the biggest number that the leaders can generate. We shall argue that by the end of the next epoch $e+1$, approximately $\frac{k(m-1)}{m}$ leaders will have become followers and approximately $\frac{k}{m}$ leaders will have a new dominant tuple $(r_2,e_2)$. Whenever the $k$ leaders enter the next epoch $e+1$, they generate a new random number between $1$ and $m$. Let the random variable $X_e$ be the number of leaders that have randomly generated the biggest number in epoch $e$. We view the possible values of the random choices as $m$ bins and we investigate how many leaders shall go to each bin. Assume the sequence of random numbers $C_i^e, 1 \leq i \leq k$, that the leaders generate in epoch $e$.
Let the random variables $X_i^e$ be independent Bernoulli trials such that, for $1 \leq i \leq k$, $Pr[X_i^e = 1] = p_i$ and $Pr[X_i^e = 0] = 1-p_i$, and $X_e = \sum_{i=1}^{k}X_i^e$. The probability that a leader randomly generates any particular number is $$p_i = \frac{1}{m}$$ Then, the expected number of balls in each bin, and thus also in the bin of the biggest generated number ($X_e$), is $$\begin{split} & \mu = E(X_e) = E(\sum_{i=1}^{k}X_i^e) = \sum_{i=1}^{k}E(X_i^e) = \sum_{i=1}^{k}p_i = \sum_{i=1}^{k}\frac{1}{m} = \frac{k}{m} \end{split}$$ Assume now inductively that $k \geq a\log^2{n}$, where $a>0$ and $m=\log{n}$. By the Chernoff bound and observing that $k \geq ma\log{n} \Rightarrow \frac{k}{m} \geq a\log{n} \Rightarrow \mu \geq a\log{n}$, we prove that the number of the new dominant leaders exceeds $\frac{k}{m}(1+\delta)$ only with negligible probability. $$Pr[X_e \geq (1+\delta)\mu] \leq e^{-\frac{\mu\delta^2}{3}} \leq e^{-\frac{a\log{n}\delta^2}{3}} = n^{-\frac{a\delta^2}{3}} = n^{-\phi}$$ For $a \geq \frac{9}{\delta^2}$ it holds that $Pr[X_e \geq (1+\delta)\mu] \leq n^{-3}$. Consequently, if we had $X_e$ leaders in epoch $e$, we shall have no more than $X_{e+1} \leq (1+\delta)\frac{X_e}{m}$ leaders in epoch $e+1$ with probability $Pr[X_{e+1} \leq (1+\delta)\frac{X_e}{m}] \geq 1 - \frac{1}{n^3}$. We can now assume that the expected number of leaders across the epochs is described by the following recurrence. $$G_e= \begin{cases} \frac{G_{e-1}}{m}, & e \geq 1 \\ n, & e=0 \end{cases}$$ where $G_e = (1+\delta)X_e$. Then, $$G_e = \frac{G_{e-1}}{m} = \frac{G_{e-2}}{m^2} = ... = \frac{n}{m^e}$$ The number of the expected epochs until at most $a\log^2{n}$ leaders remain in the population is $$\begin{split} & G_t = a\log^2{n} \Rightarrow \frac{G_{t-1}}{m} = a\log^2{n} \Rightarrow \frac{G_{t-2}}{m^2} = a\log^2{n} \Rightarrow ...
\Rightarrow \frac{n}{m^t} = a\log^2{n} \Rightarrow \\ & m^t = \frac{n}{a\log^2{n}} \Rightarrow \log_m(m^t) = \log_m(\frac{n}{a\log^2{n}}) \Rightarrow t = \log_m{n} - \log_m(a\log^2{n}) \Rightarrow \\ & t = \frac{\log{n} - \log{(a\log^2{n})}}{\log{m}} \Rightarrow t = \frac{\log{n} - \log{(a\log^2{n})}}{\log{\log{n}}} \end{split}$$ Let $E(e)$ be the event that in epoch $e$, there are at most $G_e$ dominant leaders. We consider a success when $(E(e) {\:\vert\:}E(1) \cap E(2) \cap \ldots \cap E(e-1))$ occurs, until we have at most $a\log^2{n}$ leaders. By taking the union bound, the probability of failure after $t = \frac{\log{n} - \log{(a\log^2{n})}}{\log{\log{n}}}$ epochs is given by $$\begin{split} & Pr(\text{fail after } t \text{ epochs}) \leq \sum_{i=0}^{t}Pr[\text{fail in epoch i} {\:\vert\:}\text{success until (i-1)th epoch}] \\ & \leq \sum_{i=0}^{t}\frac{1}{n^{\phi}} = \frac{\frac{\log{n} - \log{(a\log^2{n})}}{\log{\log{n}}}}{n^{\phi}} \leq \frac{1}{n^{\phi - 1}} \leq \frac{1}{n^2} \end{split}$$\ $ $ \[corollary:1\] After $t = \frac{\log{n} - \log{(a\log^2{n})}}{\log{\log{n}}}$ epochs, the remaining leaders are at most $a\log^2{n}$ w.h.p. We argue that the number of leaders can be reduced from $a\log^2{n}$ to $a\log{n}$ in one epoch w.h.p. The expected number of dominant leaders is now $E[X_{t+1}] = a\log{n}$; thus, by the Chernoff bound it holds that $Pr[X_{t+1} \geq (1+\delta)\mu] \leq e^{-\frac{a\log{n}\delta^2}{3}}$, and for $a \geq \frac{9}{\delta^2}$, $Pr[X_{t+1} \geq (1+\delta)\mu] \leq n^{-3}$.\ Assume w.l.o.g. that $m=a\log{n}$ and, according to the previous analysis, that there exist $k=a\log{n}$ leaders after $t'=\frac{\log{n} - \log{(a\log^2{n})}}{\log{\log{n}}} + 1$ epochs. The expected value of $X_{t'+1}$ is now $\mu = E[X_{t'+1}] = 1$.
Thus, by the Markov Inequality, the probability that the number of dominant leaders in the next epoch is at least $2$ is $$\begin{split} & P(X_{t'+1} \geq 2) \leq \frac{E[X_{t'+1}]}{2} = \frac{1}{2} \end{split}$$ The probability that after $\log_m{n}$ additional epochs there is still no unique leader in the population is $$\begin{split} & P[\text{at least } 2 \text{ leaders exist after } \log_m{n} \text{ epochs}] \leq (\frac{1}{2})^{\log_m{n}} = \frac{1}{2^{\log_m{n}}} \end{split}$$ The total number of epochs until there exists a unique leader in the population is w.h.p. $\frac{2\log{n} - \log{(a\log^2{n})}}{\log{m}} + 1 = O(\frac{\log{n}}{\log{m}})$.\ \[theorem:2\] Our *Leader Election* protocol elects a unique leader in $O(\frac{\log^2{n}}{\log{\log{n}}})$ parallel time w.h.p. There are initially $n$ leaders in the population. During an epoch $e$, by Lemma \[lemma:2\] the dominant tuple spreads throughout the population in $\Theta(\log{n})$ parallel time, by Lemma \[lemma:4\] no (dominant) leader can enter the next epoch before its epidemic has spread throughout the whole population, and by Theorem \[theorem:1\] there will exist a unique leader after $O(\frac{\log{n}}{\log{m}})$ epochs w.h.p.; thus, for $m = b\log{n}$ the overall parallel time is $O(\frac{\log^2{n}}{\log{\log{n}}})$. Finally, by Lemma \[lemma:1\], this unique leader can never become a follower and, according to the transition function in Protocol \[protocol:leader\_election\], a follower can never become a leader again.\ The rule by which the leaders stop increasing their rounds if $e \geq \frac{2b\log{n} - \log{(b\log^2{n})}}{\log{m}}$, unless they interact with another leader, implies that the population stabilizes in $O(\frac{\log^2{n}}{\log\log{n}})$ parallel time w.h.p.; when this happens, there exists only one leader in the population, and hence our protocol always elects a unique leader.
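The epoch count from the recurrence $G_e = n/m^e$ is simple arithmetic; the sketch below computes it (our own illustration; base-2 logarithms are assumed for concreteness, since the paper leaves the base of $\log$ implicit):

```python
import math

def epochs_until_few_leaders(n, m, a=1.0):
    """Solve n / m**t = a * log(n)**2 for t, i.e. the number of epochs
    until at most a*log^2(n) dominant leaders remain (from G_e = n / m**e)."""
    return math.log(n / (a * math.log(n, 2) ** 2), m)

n = 2 ** 20
m = 20                          # m = log2(n), as in the analysis
t = epochs_until_few_leaders(n, m)
# Sanity check: after t epochs, G_t = n / m**t equals a * log^2(n).
assert math.isclose(n / m ** t, math.log(n, 2) ** 2)
```

For $n = 2^{20}$ and $m = \log_2{n} = 20$, only a handful of epochs are needed before the second, Markov-based phase of the argument takes over.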
By adjusting $m$ to be any number between a constant and $n$ and conducting a very similar analysis, we may obtain a single leader election protocol whose time and space can be smoothly traded off between $O(\log^2 n)$ and $O(\log n)$ time, and between $O(\log n)$ and $O(n)$ space. Experiments {#sec:all_experiments} =========== We have also measured the stabilization time of our *Leader Election* and *Population Size Estimation using a unique leader* algorithms for different network sizes. We have executed our protocols $100$ times for each population size $n$, where $n=2^i$ and $i \in [3,14]$. Regarding the *Leader Election* algorithm, which assumes some knowledge of the population size, the results (Figure \[fig:leader\_election\]) support our analysis and confirm its polylogarithmic behavior. In these experiments, the maximum number that the nodes could generate was $m=10$. Finally, all executions elected a unique leader within $a\frac{\log^2{n}}{\log{10}}$ parallel time, except one in which two leaders still existed by that time (eventually, only one leader remained). [0.47]{} ![Leader Election with approximate knowledge of $n$. Both axes are logarithmic. In $(a)$ the dots represent the results of individual experiments and the line represents the average values for each network size.[]{data-label="fig:leader_election"}]("leader_election_time".jpg "fig:"){width="\linewidth"} [0.53]{} ![Leader Election with approximate knowledge of $n$. Both axes are logarithmic. In $(a)$ the dots represent the results of individual experiments and the line represents the average values for each network size.[]{data-label="fig:leader_election"}]("leader_election_leaders".jpg "fig:"){width="\linewidth"} The stabilization time of our *Approximate Counting with a unique leader* algorithm is shown in Figure \[fig:population\_size\_estimation\_time\]. The algorithm always gives estimates very close to the actual size of the population (Figure \[fig:population\_size\_estimation\_estimations\]).
Moreover, in Figure \[fig:population\_size\_estimation\_counters\], we show the values of the counters $c_q$ and $c_a$ when half of the population has been infected by the epidemic. These experiments support our analysis, as the counter of infected nodes reaches a constant value and the counter of non-infected nodes reaches a value proportional to $\log{n}$. [0.5]{} ![Approximate Counting with a unique leader. Both axes are logarithmic. In $(a)$ the dots represent the results of individual experiments and the line represents the average values for each network size.]("population_size_estimation_algorithm_time".jpg "fig:"){width="\linewidth"} [0.5]{} ![Approximate Counting with a unique leader. Both axes are logarithmic. In $(a)$ the dots represent the results of individual experiments and the line represents the average values for each network size.]("population_size_estimation_algorithm_estimations".jpg "fig:"){width="\linewidth"} ![Counters $c_q$ and $c_a$ when half of the population has been infected by the epidemic.[]{data-label="fig:population_size_estimation_counters"}]("population_size_estimation_counters".jpg){width="80.00000%"} Open Problems {#sec:conclusions} ============= Call a population protocol *size-oblivious* if its transition function does not depend on the population size. Our leader election protocol requires a rough estimate of the size of the population in order to elect a leader in polylogarithmic time. In addition, our approximate counting protocol requires a unique leader, who initiates the epidemic process and then gives an upper bound on the population size. Is it possible to completely drop these assumptions by composing our protocols (i.e., to design a size-oblivious and leaderless protocol)? Moreover, in our leader election protocol, when two nodes interact with each other, the amount of data that is transferred is $O(\max\{\log{\log{n}}, \log{m}\})$ bits.
In certain applications of population protocols, the processes are not able to transfer arbitrarily large amounts of data during an interaction. Can we design a polylogarithmic-time population protocol for the problem of leader election that satisfies this requirement? **Acknowledgments** We would like to thank David Doty and Mahsa Eftekhari for their valuable comments and suggestions during the development of this research work. [^1]: All authors were supported by the EEE/CS initiative NeST. The last author was also supported by the Leverhulme Research Centre for Functional Materials Design.
--- abstract: 'We study the quantization of systems with local particle-ghost symmetries. The systems contain ordinary particles including gauge bosons and their counterparts obeying different statistics. The particle-ghost symmetry is a kind of fermionic symmetry, different from the space-time supersymmetry and the BRST symmetry. Subsidiary conditions on states guarantee the unitarity of systems.' author: - | Yoshiharu <span style="font-variant:small-caps;">Kawamura</span>[^1]\ [*Department of Physics, Shinshu University,* ]{}\ [*Matsumoto 390-8621, Japan*]{}\ date: 'May 20, 2015' title: 'Local particle-ghost symmetry' --- Introduction ============ Graded Lie algebras or Lie superalgebras have been frequently used to formulate theories and construct models in particle physics. Typical examples are supersymmetry (SUSY) [@NS; @R; @ACS; @GS] and the BRST symmetry [@BRS1; @BRS2; @T]. The space-time SUSY is a symmetry between ordinary particles with integer spin and those with half-integer spin, and the generators, called supercharges, are space-time spinors that obey anti-commutation relations [@SS; @BLS]. The BRST symmetry is a symmetry concerning unphysical modes in gauge fields and abnormal fields called Faddeev-Popov ghost fields. Though both gauge fields and abnormal fields contain negative norm states, theories become unitary on the physical subspace, thanks to the BRST invariance. The BRST and anti-BRST charges are anti-commuting space-time scalars. Recently, models that contain both ordinary particles with a positive norm and their counterparts obeying different statistics have been constructed, and their features have been studied [@YK1; @YK2; @YK3; @YK4]. These models have fermionic symmetries different from the space-time SUSY and the BRST symmetry. We refer to this type of novel symmetry as $\lq\lq$particle-ghost symmetries”. The particle-ghost symmetries have been introduced as global symmetries, but we do not need to restrict them to the global ones.
Rather, it would be meaningful to examine systems with local particle-ghost symmetries for the following reasons. It is known that any global continuous symmetry can be broken by effects of quantum gravity such as wormholes. Hence, it is expected that a fundamental theory possesses local symmetries, and that global continuous symmetries appear as accidental ones at lower energy scales. In the system with global particle-ghost symmetries, unitarity holds by imposing subsidiary conditions on states by hand. In contrast, there is a possibility that the conditions are realized as remnants of local symmetries in a specific situation. We study the quantization of systems with local particle-ghost symmetries. The systems contain ordinary particles, including gauge bosons, and their counterparts obeying different statistics. Subsidiary conditions on states guarantee the unitarity of the systems. The conditions can originate from constraints in the case where the gauge fields have no dynamical degrees of freedom. The contents of this paper are as follows. We construct models with local fermionic symmetries in Sect. II and carry out the quantization of the system containing scalar and gauge fields in Sect. III. Section IV is devoted to conclusions and discussions. In Appendix A, we study the system in which the gauge fields are auxiliary.
Systems with local fermionic symmetries ======================================= Scalar fields with local fermionic symmetries --------------------------------------------- Recently, the system described by the following Lagrangian density has been studied [@YK1; @YK2; @YK3; @YK4], $$\begin{aligned} \mathcal{L}_{\varphi, c_{\varphi}} = \partial_{\mu} \varphi^{\dagger} \partial^{\mu} \varphi - m^2 \varphi^{\dagger} \varphi + \partial_{\mu} c_{\varphi}^{\dagger} \partial^{\mu} c_{\varphi} - m^2 c_{\varphi}^{\dagger} c_{\varphi}, \label{L-varphi-c}\end{aligned}$$ where $\varphi$ is an ordinary complex scalar field and $c_{\varphi}$ is the fermionic counterpart obeying the anti-commutation relations. The system has a global $OSp(2|2)$ symmetry that consists of $U(1)$ and fermionic symmetries. The unitarity holds by imposing suitable subsidiary conditions relating the conserved charges on states. Starting from (\[L-varphi-c\]), the model with the local $OSp(2|2)$ symmetry is constructed by introducing gauge fields. 
The resultant Lagrangian density is given by $$\begin{aligned} &~& \mathcal{L}= \mathcal{L}_{\rm M} + \mathcal{L}_{\rm G}, \nonumber\\ &~& \mathcal{L}_{\rm M} = \bigl\{(\partial_{\mu} - i g A_{\mu} - i g B_{\mu}) \varphi^{\dagger} - g C_{\mu}^{-} c_{\varphi}^{\dagger}\bigr\} \bigl\{(\partial^{\mu} + i g A^{\mu} + i g B^{\mu}) \varphi + g C^{+\mu} c_{\varphi}\bigr\} \nonumber\\ &~& ~~~~~~~~~~~ + \bigl\{(\partial_{\mu} - ig A_{\mu} + i g B_{\mu}) c_{\varphi}^{\dagger} - g C_{\mu}^{+} \varphi^{\dagger}\bigr\} \bigl\{(\partial^{\mu} + i g A^{\mu} - i g B^{\mu}) c_{\varphi} - g C^{-\mu} \varphi\bigr\} \nonumber\\ &~& ~~~~~~~~~~~ - m^2 \varphi^{\dagger} \varphi - m^2 c_{\varphi}^{\dagger} c_{\varphi}, \label{L-M}\\ \hspace{-1cm}&~& \mathcal{L}_{\rm G} = - \bigl\{\partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu} + i g (C_{\mu}^{+} C_{\nu}^{-} - C_{\nu}^{+} C_{\mu}^{-})\bigr\} \bigl\{\partial^{\mu} B^{\nu} - \partial^{\nu} B^{\mu}\bigr\} \nonumber\\ &~& ~~~~~~~~~~~ - \frac{1}{2} \bigl\{\partial_{\mu} C_{\nu}^{+} - \partial_{\nu} C_{\mu}^{+} + 2 i g (B_{\mu} C_{\nu}^+ - B_{\nu} C_{\mu}^+)\bigr\} \nonumber\\ &~& ~~~~~~~~~~~~~~~~~~~ \cdot \bigl\{\partial^{\mu} C^{-\nu} - \partial^{\nu} C^{-\mu} - 2 i g (B^{\mu} C^{-\nu} - B^{\nu} C^{-\mu})\bigr\}, \label{L-G}\end{aligned}$$ where $A_{\mu}$ and $B_{\mu}$ are the gauge fields associated with the (diagonal) $U(1)$ symmetries, $C_{\mu}^{+}$ and $C_{\mu}^{-}$ are the gauge fields associated with the fermionic symmetries, and $g$ is the gauge coupling constant. The quantized fields $C_{\mu}^{\pm}$ obey anti-commutation relations. 
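As a consistency check of our own (not part of the derivation), one can verify with `sympy` that the supertrace form $-\frac{1}{4}\,{\rm Str}(F_{\mu\nu}F^{\mu\nu})$ used below reproduces the component expression (\[L-G\]), suppressing Lorentz indices and treating $C^{\pm}$ as anticommuting symbols:

```python
import sympy as sp

# Field strengths with Lorentz indices suppressed: A, B are Grassmann-even
# (ordinary commuting symbols); C^+ and C^- are Grassmann-odd, modeled as
# noncommutative symbols with Cm*Cp -> -Cp*Cm imposed by substitution below.
A, B = sp.symbols('A B')
Cp, Cm = sp.symbols('Cp Cm', commutative=False)

# F_{mu nu} as a 2x2 supermatrix (cf. the definition of F_{mu nu} below)
F = sp.Matrix([[A + B, -sp.I * Cp],
               [sp.I * Cm, A - B]])

F2 = (F * F).expand()
Str = sp.expand(F2[0, 0] - F2[1, 1])      # supertrace: Str M = a - d
Str = Str.subs(Cm * Cp, -Cp * Cm)         # anticommutativity of C^+, C^-

LG = sp.expand(-sp.Rational(1, 4) * Str)
# Expect the component form of L_G: -A*B - (1/2)*Cp*Cm
assert sp.expand(LG + A * B + sp.Rational(1, 2) * Cp * Cm) == 0
```

Here the products `A*B` and `Cp*Cm` stand for the contracted terms $A_{\mu\nu}B^{\mu\nu}$ and $C^{+}_{\mu\nu}C^{-\mu\nu}$.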
The $\mathcal{L}$ is invariant under the local $U(1)$ transformations, $$\begin{aligned} &~& \delta_{A} \varphi = - i \epsilon \varphi,~~ \delta_{A} \varphi^{\dagger} = i \epsilon \varphi^{\dagger},~~ \delta_{A} c_{\varphi} = - i \epsilon c_{\varphi},~~ \delta_{A} c_{\varphi}^{\dagger} = i \epsilon c_{\varphi}^{\dagger}, \nonumber \\ &~& \delta_{A} A_{\mu} = \frac{1}{g} \partial_{\mu} \epsilon,~~ \delta_{A} B_{\mu} = 0,~~ \delta_{A} C_{\mu}^{+} = 0,~~ \delta_{A} C_{\mu}^{-} = 0, \label{delta-A-local}\\ &~& \delta_{B} \varphi = - i \xi \varphi,~~ \delta_{B} \varphi^{\dagger} = i \xi \varphi^{\dagger},~~ \delta_{B} c_{\varphi} = i \xi c_{\varphi},~~ \delta_{B} c_{\varphi}^{\dagger} = - i \xi c_{\varphi}^{\dagger}, \nonumber \\ &~& \delta_{B} A_{\mu} = 0,~~ \delta_{B} B_{\mu} = \frac{1}{g} \partial_{\mu} \xi,~~ \delta_{B} C_{\mu}^{+} = - 2 i \xi C_{\mu}^+,~~ \delta_{B} C_{\mu}^{-} = 2 i \xi C_{\mu}^- \label{delta-B-local}\end{aligned}$$ and the local fermionic transformations, $$\begin{aligned} &~& \delta_{\rm F} \varphi = - \zeta c_{\varphi},~~\delta_{\rm F} \varphi^{\dagger} = 0,~~ \delta_{\rm F} c_{\varphi} = 0,~~\delta_{\rm F} c_{\varphi}^{\dagger} = \zeta \varphi^{\dagger},~~ \nonumber \\ &~& \delta_{\rm F} A_{\mu} = - i \zeta C_{\mu}^{-},~~ \delta_{\rm F} B_{\mu} = 0,~~ \delta_{\rm F} C_{\mu}^{+} = 2 i \zeta B_{\mu} + \frac{1}{g} \partial_{\mu} \zeta,~~ \delta_{\rm F} C_{\mu}^{-} =0,~~ \label{delta-F-local}\\ &~& \delta_{\rm F}^{\dagger} \varphi = 0,~~ \delta_{\rm F}^{\dagger} \varphi^{\dagger} = \zeta^{\dagger} c_{\varphi}^{\dagger},~~ \delta_{\rm F}^{\dagger} c_{\varphi} = \zeta^{\dagger} \varphi,~~ \delta_{\rm F}^{\dagger} c_{\varphi}^{\dagger} = 0,~~ \nonumber \\ &~& \delta_{\rm F}^{\dagger} A_{\mu} = - i \zeta^{\dagger} C_{\mu}^{+},~~ \delta_{\rm F}^{\dagger} B_{\mu} = 0,~~ \delta_{\rm F}^{\dagger} C_{\mu}^{+} = 0,~~ \delta_{\rm F}^{\dagger} C_{\mu}^{-} = - 2 i \zeta^{\dagger} B_{\mu} + \frac{1}{g} \partial_{\mu} \zeta^{\dagger},~~ 
\label{delta-Fdagger-local}\end{aligned}$$ where $\epsilon$ and $\xi$ are infinitesimal real functions of $x$, and $\zeta$ and $\zeta^{\dagger}$ are Grassmann-valued functions of $x$. The $\mathcal{L}_{\rm M}$ and $\mathcal{L}_{\rm G}$ are simply written as $$\begin{aligned} \mathcal{L}_{\rm M} = (D_{\mu} \Phi)^{\dagger} (D^{\mu} \Phi) - m^2 \Phi^{\dagger} \Phi~~~~{\rm and}~~~~ \mathcal{L}_{\rm G} = -\frac{1}{4} {\rm Str} (F_{\mu\nu}F^{\mu\nu}), \label{L-again}\end{aligned}$$ respectively. In $\mathcal{L}_{\rm M}$, $D_{\mu}$ and $\Phi$ are the covariant derivative and the doublet of fermionic transformation defined by $$\begin{aligned} D_{\mu} \equiv \left( \begin{array}{cc} \partial_{\mu} + i g A_{\mu} + i g B_{\mu} & g C_{\mu}^{+} \\ - g C_{\mu}^{-} & \partial_{\mu} + i g A_{\mu} - i g B_{\mu} \end{array} \right)~~~~{\rm and}~~~~ \Phi \equiv \left( \begin{array}{c} \varphi \\ c_{\varphi} \end{array} \right), \label{DmuPhi}\end{aligned}$$ respectively. In $\mathcal{L}_{\rm G}$, ${\rm Str}$ is the supertrace defined by ${\rm Str}M = a - d$ where $M$ is the $2 \times 2$ matrix given by $$\begin{aligned} M = \left( \begin{array}{cc} a & b \\ c & d \end{array} \right). 
\label{M}\end{aligned}$$ The $F_{\mu\nu}$ is defined by $$\begin{aligned} F_{\mu\nu} \equiv \frac{1}{i g} [D_{\mu}, D_{\nu}] = \left( \begin{array}{cc} A_{\mu\nu} + B_{\mu\nu} & - i C_{\mu\nu}^{+} \\ i C_{\mu\nu}^{-} & A_{\mu\nu} - B_{\mu\nu} \end{array} \right), \label{Fmunu}\end{aligned}$$ where $A_{\mu\nu}$, $B_{\mu\nu}$, $C_{\mu\nu}^{+}$ and $C_{\mu\nu}^{-}$ are the field strengths given by $$\begin{aligned} &~& A_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu} + i g (C_{\mu}^{+} C_{\nu}^{-} - C_{\nu}^{+} C_{\mu}^{-}), \label{Amunu}\\ &~& B_{\mu\nu} = \partial_{\mu} B_{\nu} - \partial_{\nu} B_{\mu}, \label{Bmunu}\\ &~& C_{\mu\nu}^{+} = \partial_{\mu} C_{\nu}^{+} - \partial_{\nu} C_{\mu}^{+} + 2 i g (B_{\mu} C_{\nu}^+ - B_{\nu} C_{\mu}^+),~~ \label{C+munu}\\ &~& C_{\mu\nu}^{-} = \partial_{\mu} C_{\nu}^{-} - \partial_{\nu} C_{\mu}^{-} - 2 i g (B_{\mu} C_{\nu}^{-} - B_{\nu} C_{\mu}^{-}). \label{C-munu}\end{aligned}$$ Under the transformations (\[delta-A-local\]) – (\[delta-Fdagger-local\]), the field strengths are transformed as $$\begin{aligned} &~& \delta_{A} A_{\mu\nu} = 0,~~\delta_{A} B_{\mu\nu} = 0,~~ \delta_{A} C_{\mu\nu}^{+} = 0,~~ \delta_{A} C_{\mu\nu}^{-} = 0, \label{fs-A}\\ &~& \delta_{B} A_{\mu\nu} = 0,~~\delta_{B} B_{\mu\nu} = 0,~~ \delta_{B} C_{\mu\nu}^{+} = - 2 i \xi C_{\mu\nu}^{+},~~ \delta_{B} C_{\mu\nu}^{-} = 2 i \xi C_{\mu\nu}^{-}, \label{fs-B}\\ &~& \delta_{\rm F} A_{\mu\nu} = - i \zeta C_{\mu\nu}^{-},~~ \delta_{\rm F} B_{\mu\nu} = 0,~~ \delta_{\rm F} C_{\mu\nu}^{+} = 2 i \zeta B_{\mu\nu},~~ \delta_{\rm F} C_{\mu\nu}^{-} = 0, \label{fs-F}\\ &~& \delta_{\rm F}^{\dagger} A_{\mu\nu} = -i \zeta^{\dagger} C_{\mu\nu}^{+},~~ \delta_{\rm F}^{\dagger} B_{\mu\nu} = 0,~~ \delta_{\rm F}^{\dagger} C_{\mu\nu}^{+} = 0,~~ \delta_{\rm F}^{\dagger} C_{\mu\nu}^{-} = - 2 i \zeta^{\dagger} B_{\mu\nu}. 
\label{fs-F-dagger}\end{aligned}$$ Using the global fermionic transformations, $$\begin{aligned} &~& \tilde{\bm{\delta}}_{\rm F} \varphi = - c_{\varphi},~~\tilde{\bm{\delta}}_{\rm F}\varphi^{\dagger} = 0,~~ \tilde{\bm{\delta}}_{\rm F} c_{\varphi} = 0,~~ \tilde{\bm{\delta}}_{\rm F} c_{\varphi}^{\dagger} = \varphi^{\dagger},~~ \nonumber \\ &~& \tilde{\bm{\delta}}_{\rm F} A_{\mu} = - i C_{\mu}^{-},~~ \tilde{\bm{\delta}}_{\rm F} B_{\mu} = 0,~~ \tilde{\bm{\delta}}_{\rm F} C_{\mu}^{+} = 2 i B_{\mu},~~ \tilde{\bm{\delta}}_{\rm F} C_{\mu}^{-} = 0,~~ \label{delta-F}\\ &~& \tilde{\bm{\delta}}^{\dagger}_{\rm F} \varphi = 0,~~ \tilde{\bm{\delta}}^{\dagger}_{\rm F} \varphi^{\dagger} = c_{\varphi}^{\dagger},~~ \tilde{\bm{\delta}}^{\dagger}_{\rm F} c_{\varphi} = \varphi,~~ \tilde{\bm{\delta}}^{\dagger}_{\rm F} c_{\varphi}^{\dagger} = 0,~~ \nonumber \\ &~& \tilde{\bm{\delta}}^{\dagger}_{\rm F} A_{\mu} = - i C_{\mu}^{+},~~ \tilde{\bm{\delta}}^{\dagger}_{\rm F} B_{\mu} = 0,~~ \tilde{\bm{\delta}}^{\dagger}_{\rm F} C_{\mu}^{+} = 0,~~ \tilde{\bm{\delta}}^{\dagger}_{\rm F} C_{\mu}^{-} = - 2 i B_{\mu},~~ \label{delta-Fdagger}\end{aligned}$$ $\mathcal{L}$ is rewritten as $$\begin{aligned} \mathcal{L}= \tilde{\bm{\delta}}_{\rm F} \tilde{\bm{\delta}}^{\dagger}_{\rm F} \mathcal{L}_{\varphi, A} = - \tilde{\bm{\delta}}^{\dagger}_{\rm F} \tilde{\bm{\delta}}_{\rm F} \mathcal{L}_{\varphi, A}, \label{L-exact}\end{aligned}$$ where $\mathcal{L}_{\varphi, A}$ is given by $$\begin{aligned} &~& \mathcal{L}_{\varphi, A} = \bigl\{(\partial_{\mu} - i g A_{\mu} - i g B_{\mu}) \varphi^{\dagger} - g C_{\mu}^{-} c_{\varphi}^{\dagger}\bigr\} \bigl\{(\partial^{\mu} + i g A^{\mu} + i g B^{\mu}) \varphi + g C^{+\mu} c_{\varphi}\bigr\} \nonumber \\ &~& ~~~~~~~~~~~~~~ - m^2 \varphi^{\dagger} \varphi - \frac{1}{4} A_{\mu\nu} A^{\mu\nu}. 
\label{L-varphiA}\end{aligned}$$ Spinor fields with local fermionic symmetries --------------------------------------------- For spinor fields, we consider the Lagrangian density, $$\begin{aligned} \mathcal{L}_{\psi, c_{\psi}} = i \overline{\psi} \gamma^{\mu} \partial_{\mu} \psi - m \overline{\psi} \psi + i \overline{c}_{\psi} \gamma^{\mu} \partial_{\mu} c_{\psi} - m \overline{c}_{\psi}c_{\psi}, \label{L-psi-c}\end{aligned}$$ where $\psi$ is an ordinary spinor field and $c_{\psi}$ is its bosonic counterpart obeying commutation relations. This system also has global $U(1)$ and fermionic symmetries, and the unitarity holds by imposing suitable subsidiary conditions on states. Starting from (\[L-psi-c\]), the Lagrangian density with local symmetries is constructed as $$\begin{aligned} &~& \mathcal{L}^{\rm sp}= \mathcal{L}_{\rm M}^{\rm sp} + \mathcal{L}_{\rm G}, \nonumber\\ &~& \mathcal{L}_{\rm M}^{\rm sp} = i \overline{\psi} \gamma^{\mu} \bigl\{(\partial_{\mu} + i g A_{\mu} + i g B_{\mu}) \psi + g C_{\mu}^{+} c_{\psi}\bigr\} - m \overline{\psi} \psi \nonumber\\ &~& ~~~~~~~~~~~ + i \overline{c}_{\psi} \gamma^{\mu} \bigl\{(\partial_{\mu} + i g A_{\mu} - i g B_{\mu}) c_{\psi} - g C_{\mu}^{-} \psi\bigr\} - m \overline{c}_{\psi} c_{\psi}, \label{L-M-sp}\end{aligned}$$ where $\mathcal{L}_{\rm G}$ is given by (\[L-G\]), $\overline{\psi} \equiv \psi^{\dagger} \gamma^0$, $\overline{c}_{\psi} \equiv c_{\psi}^{\dagger} \gamma^0$ and $\gamma^{\mu}$ are the $\gamma$ matrices satisfying $\{\gamma^{\mu}, \gamma^{\nu}\} = 2 \eta^{\mu\nu}$. 
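The Clifford algebra $\{\gamma^{\mu}, \gamma^{\nu}\} = 2 \eta^{\mu\nu}$ can be verified numerically in, for example, the Dirac representation (the explicit representation is our choice for illustration; the text does not fix one):

```python
import numpy as np

# Dirac representation of the gamma matrices (an illustrative choice; the
# text only requires {gamma^mu, gamma^nu} = 2 eta^{mu nu})
Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gammas = [np.block([[I2, Z], [Z, -I2]])]                 # gamma^0
gammas += [np.block([[Z, s], [-s, Z]]) for s in sigma]   # gamma^1, gamma^2, gamma^3

eta = np.diag([1.0, -1.0, -1.0, -1.0])                   # mostly-minus metric

for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
```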
The $\mathcal{L}_{\rm M}^{\rm sp}$ is rewritten as $$\begin{aligned} \mathcal{L}_{\rm M}^{\rm sp} = i \overline{\Psi} \Gamma^{\mu} D_{\mu} \Psi - m \overline{\Psi} \Psi, \label{L-M-sp2}\end{aligned}$$ where $\Gamma^{\mu}$ and $\Psi$ are the extension of $\gamma$-matrices and the doublet of fermionic transformation defined by $$\begin{aligned} \Gamma^{\mu} \equiv \left( \begin{array}{cc} \gamma^{\mu} & 0 \\ 0 & \gamma^{\mu} \end{array} \right)~~~~ {\rm and}~~~~ \Psi \equiv \left( \begin{array}{c} \psi \\ c_{\psi} \end{array} \right), \label{DmuPsi}\end{aligned}$$ respectively. The $\mathcal{L}^{\rm sp}$ is invariant under the local $U(1)$ transformations, $$\begin{aligned} &~& \delta_{A} \psi = - i \epsilon \psi,~~ \delta_{A} \psi^{\dagger} = i \epsilon \psi^{\dagger},~~ \delta_{A} c_{\psi} = - i \epsilon c_{\psi},~~ \delta_{A} c_{\psi}^{\dagger} = i \epsilon c_{\psi}^{\dagger}, \nonumber \\ &~& \delta_{A} A_{\mu} = \frac{1}{g} \partial_{\mu} \epsilon,~~ \delta_{A} B_{\mu} = 0,~~ \delta_{A} C_{\mu}^{+} = 0,~~ \delta_{A} C_{\mu}^{-} = 0, \label{delta-A-local-sp}\\ &~& \delta_{B} \psi = - i \xi \psi,~~ \delta_{B} \psi^{\dagger} = i \xi \psi^{\dagger},~~ \delta_{B} c_{\psi} = i \xi c_{\psi},~~ \delta_{B} c_{\psi}^{\dagger} = -i \xi c_{\psi}^{\dagger}, \nonumber \\ &~& \delta_{B} A_{\mu} = 0,~~ \delta_{B} B_{\mu} = \frac{1}{g} \partial_{\mu} \xi,~~ \delta_{B} C_{\mu}^{+} = - 2 i \xi C_{\mu}^+,~~ \delta_{B} C_{\mu}^{-} = 2 i \xi C_{\mu}^- \label{delta-B-local-sp}\end{aligned}$$ and the local fermionic transformations, $$\begin{aligned} &~& \delta_{\rm F} \psi = - \zeta c_{\psi},~~\delta_{\rm F} \psi^{\dagger} = 0,~~ \delta_{\rm F} c_{\psi} = 0,~~\delta_{\rm F} c_{\psi}^{\dagger} = - \zeta \psi^{\dagger},~~ \nonumber \\ &~& \delta_{\rm F} A_{\mu} = - i \zeta C_{\mu}^{-},~~ \delta_{\rm F} B_{\mu} = 0,~~ \delta_{\rm F} C_{\mu}^{+} = 2 i \zeta B_{\mu} + \frac{1}{g} \partial_{\mu} \zeta,~~ \delta_{\rm F} C_{\mu}^{-} =0,~~ \label{delta-F-local-sp}\\ &~& \delta_{\rm 
F}^{\dagger} \psi = 0,~~ \delta_{\rm F}^{\dagger} \psi^{\dagger} = - \zeta^{\dagger} c_{\psi}^{\dagger},~~ \delta_{\rm F}^{\dagger} c_{\psi} = \zeta^{\dagger} \psi,~~ \delta_{\rm F}^{\dagger} c_{\psi}^{\dagger} = 0,~~ \nonumber \\ &~& \delta_{\rm F}^{\dagger} A_{\mu} = - i \zeta^{\dagger} C_{\mu}^{+},~~ \delta_{\rm F}^{\dagger} B_{\mu} = 0,~~ \delta_{\rm F}^{\dagger} C_{\mu}^{+} = 0,~~ \delta_{\rm F}^{\dagger} C_{\mu}^{-} = - 2 i \zeta^{\dagger} B_{\mu} + \frac{1}{g} \partial_{\mu} \zeta^{\dagger},~~ \label{delta-Fdagger-local-sp}\end{aligned}$$ where $\epsilon$ and $\xi$ are infinitesimal real functions of $x$, and $\zeta$ and $\zeta^{\dagger}$ are Grassmann-valued functions of $x$. Using the global fermionic transformations, $$\begin{aligned} &~& \tilde{\bm{\delta}}_{\rm F} \psi = - c_{\psi},~~ \tilde{\bm{\delta}}_{\rm F} \psi^{\dagger} = 0,~~ \tilde{\bm{\delta}}_{\rm F} c_{\psi} = 0,~~ \tilde{\bm{\delta}}_{\rm F} c_{\psi}^{\dagger} = - \psi^{\dagger},~~ \nonumber \\ &~& \tilde{\bm{\delta}}_{\rm F} A_{\mu} = - i C_{\mu}^{-},~~ \tilde{\bm{\delta}}_{\rm F} B_{\mu} = 0,~~ \tilde{\bm{\delta}}_{\rm F} C_{\mu}^{+} = 2 i B_{\mu},~~ \tilde{\bm{\delta}}_{\rm F} C_{\mu}^{-} = 0,~~ \label{delta-F-sp}\\ &~& \tilde{\bm{\delta}}^{\dagger}_{\rm F} \psi = 0,~~ \tilde{\bm{\delta}}^{\dagger}_{\rm F} \psi^{\dagger} = - c_{\psi}^{\dagger},~~ \tilde{\bm{\delta}}^{\dagger}_{\rm F} c_{\psi} = \psi,~~ \tilde{\bm{\delta}}^{\dagger}_{\rm F} c_{\psi}^{\dagger} = 0,~~ \nonumber \\ &~& \tilde{\bm{\delta}}^{\dagger}_{\rm F} A_{\mu} = - i C_{\mu}^{+},~~ \tilde{\bm{\delta}}^{\dagger}_{\rm F} B_{\mu} = 0,~~ \tilde{\bm{\delta}}^{\dagger}_{\rm F} C_{\mu}^{+} = 0,~~ \tilde{\bm{\delta}}^{\dagger}_{\rm F} C_{\mu}^{-} = - 2 i B_{\mu},~~ \label{delta-Fdagger-sp}\end{aligned}$$ $\mathcal{L}^{\rm sp}$ is rewritten as $$\begin{aligned} \mathcal{L}= \tilde{\bm{\delta}}_{\rm F} \tilde{\bm{\delta}}^{\dagger}_{\rm F} \mathcal{L}_{\psi, A} = - \tilde{\bm{\delta}}^{\dagger}_{\rm F} \tilde{\bm{\delta}}_{\rm F} 
\mathcal{L}_{\psi, A}, \label{L-exact-sp}\end{aligned}$$ where $\mathcal{L}_{\psi, A}$ is given by $$\begin{aligned} &~& \mathcal{L}_{\psi, A} = i \overline{\psi} \gamma^{\mu} \bigl\{(\partial_{\mu} + i g A_{\mu} + i g B_{\mu}) \psi + g C_{\mu}^{+} c_{\psi}\bigr\} - m \overline{\psi} \psi - \frac{1}{4} A_{\mu\nu} A^{\mu\nu}. \label{L-psiA}\end{aligned}$$ Quantization ============ We carry out the quantization of the system with scalar and gauge fields described by $\mathcal{L} = \mathcal{L}_{\rm M} + \mathcal{L}_{\rm G}$. Canonical quantization ---------------------- Based on the formulation with the property that [*the hermitian conjugate of canonical momentum for a variable is just the canonical momentum for the hermitian conjugate of the variable*]{} [@YK2], the conjugate momenta are given by $$\begin{aligned} &~& \pi \equiv \left(\frac{\partial \mathcal{L}}{\partial \dot{\varphi}}\right)_{\rm R} = (\partial_{0} - ig A_0 - i g B_{0} ) {\varphi}^{\dagger} - g C_{0}^{-} c_{\varphi}^{\dagger},~~ \label{pi}\\ &~& \pi^{\dagger} \equiv \left(\frac{\partial \mathcal{L}}{\partial \dot{\varphi}^{\dagger}}\right)_{\rm L} = (\partial_{0} + ig A_0 + i g B_{0}) {\varphi} + g C_{0}^{+} c_{\varphi}, \label{pi-dagger}\\ &~& \pi_{c_{\varphi}} \equiv \left(\frac{\partial \mathcal{L}}{\partial \dot{c}_{\varphi}}\right)_{\rm R} = (\partial_{0} - ig A_0 + i g B_{0}) c_{\varphi}^{\dagger} - g C_{0}^{+} \varphi^{\dagger},~~ \label{pi-c}\\ &~& \pi_{c_{\varphi}}^{\dagger} \equiv \left(\frac{\partial \mathcal{L}}{\partial \dot{c}_{\varphi}^{\dagger}}\right)_{\rm L} = (\partial_{0} + ig A_0 - i g B_{0}) c_{\varphi} - g C_{0}^{-} \varphi,~~ \label{pi-c-dagger}\\ &~& \Pi_{A}^{\mu} \equiv \left(\frac{\partial \mathcal{L}}{\partial \dot{A}_{\mu}}\right)_{\rm L} = 2 B^{\mu 0},~~ \Pi_{B}^{\mu} \equiv \left(\frac{\partial \mathcal{L}}{\partial \dot{B}_{\mu}}\right)_{\rm R} = 2 A^{\mu 0},~~ \label{Pi-AB}\\ &~& \Pi_{C}^{+\mu} \equiv \left(\frac{\partial \mathcal{L}}{\partial 
\dot{C}_{\mu}^{+}}\right)_{\rm L} = C^{-\mu 0},~~ \Pi_{C}^{-\mu} \equiv \left(\frac{\partial \mathcal{L}}{\partial \dot{C}_{\mu}^{-}}\right)_{\rm R} = C^{+\mu 0},~~ \label{Pi-C}\end{aligned}$$ where $\dot{\mathcal{O}} = \partial \mathcal{O}/\partial t$, and R and L stand for the right-differentiation and the left-differentiation, respectively. From (\[Pi-AB\]) and (\[Pi-C\]), we obtain the primary constraints, $$\begin{aligned} \Pi_{A}^{0} = 0,~~ \Pi_{B}^{0} = 0,~~ \Pi_{C}^{+0} = 0,~~ \Pi_{C}^{-0} = 0. \label{primary}\end{aligned}$$ Using the Legendre transformation, the Hamiltonian density is obtained as $$\begin{aligned} &~& \mathcal{H} = \pi \dot{\varphi} + \dot{\varphi}^{\dagger}\pi^{\dagger} + \pi_{c_{\varphi}} \dot{c}_{\varphi} + \dot{c}_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger} + \dot{A}_{\mu} \Pi_{A}^{\mu} + \Pi_{B}^{\mu} \dot{B}_{\mu} + \dot{C}_{\mu}^{+} \Pi_{C}^{+\mu} + \Pi_{C}^{-\mu} \dot{C}_{\mu}^{-} - \mathcal{L} \nonumber \\ &~& ~~~~~~~~~ + \lambda_{A} \Pi_{A}^{0} + \Pi_{B}^{0} \lambda_{B} + \lambda_{C}^{+} \Pi_{C}^{+0} + \Pi_{C}^{-0} \lambda_{C}^{-} \nonumber \\ &~& ~~~~~~ = \pi \pi^{\dagger} + \pi_{c_{\varphi}} \pi_{c_{\varphi}}^{\dagger} + (D_i \Phi)^{\dagger} (D^i \Phi) + m^2 \Phi^{\dagger} \Phi \nonumber \\ &~& ~~~~~~~~~ - i g A_0 (\pi \varphi - \varphi^{\dagger} \pi^{\dagger} + \pi_{c_{\varphi}} c_{\varphi} - c_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger}) \nonumber \\ &~& ~~~~~~~~~ - i g B_0 (\pi \varphi - \varphi^{\dagger} \pi^{\dagger} - \pi_{c_{\varphi}} c_{\varphi} + c_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger} + 2 C_i^{+} \Pi_{C}^{+i} - 2 \Pi_{C}^{-i} C_i^{-}) \nonumber \\ &~& ~~~~~~~~~ - g C_0^{+} (\pi c_{\varphi} - \varphi^{\dagger} \pi_{c_{\varphi}}^{\dagger} + i C_i^{-} \Pi_A^i - 2 i B_i \Pi_C^{+i}) \nonumber \\ &~& ~~~~~~~~~ - g (c_{\varphi}^{\dagger} \pi^{\dagger} - \pi_{c_{\varphi}} \varphi - i C_i^{+} \Pi_A^i + 2 i B_i \Pi_C^{-i}) C_0^{-} \nonumber \\ &~& ~~~~~~~~~ + \Pi_{A i} \Pi_{B}^{i} + A_{ij} B^{ij} + \partial_i 
A_0~\Pi_{A}^{i} + \Pi_{B}^{i} \partial_i B_0 \nonumber \\ &~& ~~~~~~~~~ + \Pi_{C i}^{+} \Pi_{C}^{-i} + \frac{1}{2} C_{ij} C^{ij} + \partial_i C_0^{+}~\Pi_{C}^{+i} + \Pi_{C}^{-i} \partial_i C_0^{-} \nonumber \\ &~& ~~~~~~~~~ + \lambda_{A} \Pi_{A}^{0} + \Pi_{B}^{0} \lambda_{B} + \lambda_{C}^{+} \Pi_{C}^{+0} + \Pi_{C}^{-0} \lambda_{C}^{-}, \label{H}\end{aligned}$$ where Roman indices $i$ and $j$ denote the spatial components and run from 1 to 3, $\lambda_{A}$, $\lambda_{B}$, $\lambda_{C}^{+}$ and $\lambda_{C}^{-}$ are Lagrange multipliers, and $\dot{A}_{0} + \lambda_{A}$, $\dot{B}_{0} + \lambda_{B}$, $\dot{C}_{0}^{+} + \lambda_{C}^{+}$ and $\dot{C}_{0}^{-} + \lambda_{C}^{-}$ are rewritten as $\lambda_{A}$, $\lambda_{B}$, $\lambda_{C}^{+}$ and $\lambda_{C}^{-}$ in the final expression. Secondary constraints are obtained as follows, $$\begin{aligned} &~& \frac{d\Pi_{A}^{0}}{dt} = \left\{\Pi_{A}^{0}, H\right\}_{\rm PB} = i g \left(\pi \varphi - \varphi^{\dagger} \pi^{\dagger} + \pi_{c_\varphi} c_{\varphi} - c_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger}\right) + \partial_i \Pi_{A}^{i} = 0,~~ \label{secondaryA}\\ &~& \frac{d \Pi_{B}^{0}}{dt} = \left\{\Pi_{B}^{0}, H\right\}_{\rm PB} = i g \left(\pi \varphi - \varphi^{\dagger} \pi^{\dagger} - \pi_{c_\varphi} c_{\varphi} + c_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger} \right. \nonumber \\ &~& ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \left. 
+ 2 C_i^{+} \Pi_{C}^{+i} - 2 \Pi_{C}^{-i} C_i^{-}\right) + \partial_i \Pi_{B}^{i} = 0,~~ \label{secondaryB}\\ &~& \frac{d \Pi_{C}^{+0}}{dt} = \left\{\Pi_{C}^{+0}, H\right\}_{\rm PB} = g \left(\pi c_{\varphi} - \varphi^{\dagger} \pi_{c_{\varphi}}^{\dagger} + i C_i^{-} \Pi_A^i - 2 i B_i \Pi_C^{+i}\right) + \partial_i \Pi_C^{+i} = 0,~~ \label{secondaryC+}\\ &~& \frac{d \Pi_{C}^{-0}}{dt} = \left\{\Pi_{C}^{-0}, H\right\}_{\rm PB} = g \left(c_{\varphi}^{\dagger} \pi^{\dagger} - \pi_{c_{\varphi}} \varphi - i C_i^{+} \Pi_A^i + 2 i B_i \Pi_C^{-i}\right) + \partial_i \Pi_C^{-i} = 0, \label{secondaryC-}\end{aligned}$$ where $H$ is the Hamiltonian $H = \int \mathcal{H} d^3x$ and $\{A, B\}_{\rm PB}$ is the Poisson bracket. The Poisson bracket for the system with canonical variables $(Q_k, P_k)$ and $(Q_k^{\dagger}, P_k^{\dagger})$ is defined by [@YK2] $$\begin{aligned} &~& \left\{f, g\right\}_{\rm PB} \equiv \sum_{k} \left[\left(\frac{\partial f}{\partial Q_k}\right)_{\rm R} \left(\frac{\partial g}{\partial P_k}\right)_{\rm L} - (-)^{|Q_k|} \left(\frac{\partial f}{\partial P_k}\right)_{\rm R} \left(\frac{\partial g}{\partial Q_k}\right)_{\rm L} \right. \nonumber \\ &~& ~~~~~~~~~~~~~~~~~~~~~~~ \left. + (-)^{|Q_k|} \left(\frac{\partial f}{\partial Q_k^{\dagger}}\right)_{\rm R} \left(\frac{\partial g}{\partial P_k^{\dagger}}\right)_{\rm L} - \left(\frac{\partial f}{\partial P_k^{\dagger}}\right)_{\rm R} \left(\frac{\partial g}{\partial Q_k^{\dagger}}\right)_{\rm L} \right], \label{Poisson}\end{aligned}$$ where $|Q_k|$ is the number representing the Grassmann parity of $Q_k$, i.e., $|Q_k|=1$ for the Grassmann odd $Q_k$ and $|Q_k|=0$ for the Grassmann even $Q_k$. There appear no other constraints, and all constraints are first class ones and generate local transformations. We take the gauge fixing conditions, $$\begin{aligned} A^{0} = 0,~~ B^{0} = 0,~~ {C}^{+0} = 0,~~ {C}^{-0} = 0,~~ \partial_i A^{i} = 0,~~ \partial_i B^{i} = 0,~~ \partial_i C^{+i} = 0,~~ \partial_i C^{-i} = 0. 
\label{gf}\end{aligned}$$ The system is quantized by regarding variables as operators and imposing the following relations on the canonical pairs, $$\begin{aligned} &~& [\varphi(\bm{x}, t), \pi(\bm{y}, t)] = i \delta^3(\bm{x}-\bm{y}),~~ [\varphi^{\dagger}(\bm{x}, t), \pi^{\dagger}(\bm{y}, t)] = i \delta^3(\bm{x}-\bm{y}), \label{CCR-varphi}\\ &~& \{c_{\varphi}(\bm{x}, t), \pi_{c_{\varphi}}(\bm{y}, t)\} = i \delta^3(\bm{x}-\bm{y}),~~ \{c_{\varphi}^{\dagger}(\bm{x}, t), \pi_{c_{\varphi}}^{\dagger}(\bm{y}, t)\} = -i \delta^3(\bm{x}-\bm{y}), \label{CCR-c}\\ &~& [A_{i}(\bm{x}, t), \Pi_{A}^j(\bm{y}, t)] = i \left(\delta_{i}^{j} - \frac{\partial_i \partial^j}{\varDelta}\right) \delta^3(\bm{x}-\bm{y}),~~ \label{CCR-A}\\ &~& [B_{i}(\bm{x}, t), \Pi_{B}^j(\bm{y}, t)] = i \left(\delta_{i}^{j} - \frac{\partial_i \partial^j}{\varDelta}\right) \delta^3(\bm{x}-\bm{y}),~~ \label{CCR-B}\\ &~& \{C_{i}^{+}(\bm{x}, t), \Pi_{C}^{+j}(\bm{y}, t)\} = - i \left(\delta_{i}^{j} - \frac{\partial_i \partial^j}{\varDelta}\right) \delta^3(\bm{x}-\bm{y}),~~ \label{CCR-C+}\\ &~& \{C_{i}^{-}(\bm{x}, t), \Pi_{C}^{-j}(\bm{y}, t)\} = i \left(\delta_{i}^{j} - \frac{\partial_i \partial^j}{\varDelta}\right) \delta^3(\bm{x}-\bm{y}), \label{CCR-C-}\end{aligned}$$ where $[\mathcal{O}_1, \mathcal{O}_2] \equiv \mathcal{O}_1 \mathcal{O}_2 - \mathcal{O}_2 \mathcal{O}_1$, $\{\mathcal{O}_1, \mathcal{O}_2\} \equiv \mathcal{O}_1 \mathcal{O}_2 + \mathcal{O}_2 \mathcal{O}_1$, and only the non-vanishing ones are denoted. Here, we define the Dirac bracket using the first class constraints and the gauge fixing conditions, and replace the bracket with the commutator or the anti-commutator. 
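In momentum space, the operator $\delta_i^{\,j} - \partial_i \partial^j / \varDelta$ appearing on the right-hand sides of (\[CCR-A\]) – (\[CCR-C-\]) becomes the transverse projector $\delta_i^{\,j} - k_i k^j / \bm{k}^2$. A short numerical check (our own illustration, with an arbitrarily chosen momentum) confirms its projector properties:

```python
import numpy as np

# Momentum-space form of the transverse projector delta_ij - k_i k_j / k^2
k = np.array([1.0, 2.0, 2.0])          # arbitrary nonzero momentum (an example)
P = np.eye(3) - np.outer(k, k) / (k @ k)

assert np.allclose(P @ P, P)            # idempotent: a projector
assert np.allclose(P @ k, 0)            # annihilates the longitudinal part
assert np.isclose(np.trace(P), 2.0)     # two transverse (helicity) modes survive
```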
On the reduced phase space, the conserved $U(1)$ charges $N_{A}$ and $N_{B}$ and the conserved fermionic charges $Q_{\rm F}$ and $Q_{\rm F}^{\dagger}$ are constructed as $$\begin{aligned} &~& N_{A} = - i \int d^3x~\left(\pi \varphi - \varphi^{\dagger} \pi^{\dagger} + \pi_{c_\varphi} c_{\varphi} - c_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger}\right), \label{NA}\\ &~& N_{B} = - i \int d^3x~\left(\pi \varphi - \varphi^{\dagger} \pi^{\dagger} - \pi_{c_\varphi} c_{\varphi} + c_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger} + 2 C_i^{+} \Pi_{C}^{+i} - 2 \Pi_{C}^{-i} C_i^{-}\right), \label{NB}\\ &~& Q_{\rm F} = - \int d^3x~\left(\pi c_{\varphi} - \varphi^{\dagger} \pi_{c_{\varphi}}^{\dagger} + i C_i^{-} \Pi_A^i - 2 i B_i \Pi_C^{+i}\right),~~ \label{QF}\\ &~& Q_{\rm F}^{\dagger} = - \int d^3x~\left(c_{\varphi}^{\dagger} \pi^{\dagger} - \pi_{c_{\varphi}} \varphi - i C_i^{+} \Pi_A^i + 2 i B_i \Pi_C^{-i}\right). \label{QF-dagger}\end{aligned}$$ The following algebraic relations hold: $$\begin{aligned} &~& {Q_{\rm F}}^2 = 0,~~{Q_{\rm F}^{\dagger}}^2 = 0,~~ \{Q_{\rm F}, Q_{\rm F}^{\dagger}\} = N_{A},~~ [N_{A}, Q_{\rm F}] = 0,~~ [N_{A}, Q_{\rm F}^{\dagger}] = 0,~~ \nonumber\\ &~& [N_{B}, Q_{\rm F}] = - 2 Q_{\rm F},~~ [N_{B}, Q_{\rm F}^{\dagger}] = 2 Q_{\rm F}^{\dagger},~~ [N_{A}, N_{B}] = 0. \label{QQdagger-varphi}\end{aligned}$$ The above charges are generators of global $U(1)$ and fermionic transformations such that $$\begin{aligned} \tilde{\delta}_{A} \mathcal{O} = i[\epsilon_{0} N_{A}, \mathcal{O}],~~ \tilde{\delta}_{B} \mathcal{O} = i[\xi_{0} N_{B}, \mathcal{O}],~~ \tilde{\delta}_{\rm F} \mathcal{O} = i[\zeta_{0} Q_{\rm F}, \mathcal{O}],~~ \tilde{\delta}_{\rm F}^{\dagger} \mathcal{O} = i[Q_{\rm F}^{\dagger}\zeta^{\dagger}_{0}, \mathcal{O}], \label{delta}\end{aligned}$$ where $\epsilon_{0}$ and $\xi_{0}$ are real parameters, and $\zeta_{0}$ and $\zeta^{\dagger}_{0}$ are Grassmann parameters. 
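As an illustrative aside (a toy model of our own, not the field-theoretical charges themselves), the algebra (\[QQdagger-varphi\]) admits a minimal $2 \times 2$ matrix realization that can be verified numerically:

```python
import numpy as np

# Minimal 2x2 toy realization of the charge algebra (illustrative only;
# the actual charges are the field-space integrals N_A, N_B, Q_F above)
QF  = np.array([[0.0, 1.0], [0.0, 0.0]])   # Q_F
QFd = QF.T                                  # Q_F^dagger
NA  = np.eye(2)                             # N_A = {Q_F, Q_F^dagger}
NB  = np.diag([-1.0, 1.0])                  # N_B

comm  = lambda X, Y: X @ Y - Y @ X
acomm = lambda X, Y: X @ Y + Y @ X

assert np.allclose(QF @ QF, 0) and np.allclose(QFd @ QFd, 0)
assert np.allclose(acomm(QF, QFd), NA)
assert np.allclose(comm(NA, QF), 0) and np.allclose(comm(NA, QFd), 0)
assert np.allclose(comm(NB, QF), -2 * QF)
assert np.allclose(comm(NB, QFd), 2 * QFd)
assert np.allclose(comm(NA, NB), 0)
```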
Note that $\tilde{\bm{\delta}}_{\rm F}$ and $\tilde{\bm{\delta}}^{\dagger}_{\rm F}$ in (\[delta-F\]) and (\[delta-Fdagger\]) are related to $\tilde{\delta}_{\rm F}$ and $\tilde{\delta}_{\rm F}^{\dagger}$ as $\tilde{\delta}_{\rm F} = \zeta_{0} \tilde{\bm{\delta}}_{\rm F}$, $\tilde{\delta}^{\dagger}_{\rm F} = \zeta^{\dagger}_{0} \tilde{\bm{\delta}}^{\dagger}_{\rm F}$. The system contains negative norm states originating from $c_{\varphi}$, $c_{\varphi}^{\dagger}$ and $C_i^{\pm}$. In the presence of negative norm states, the probability interpretation cannot be maintained. To formulate our model in a consistent manner, we use the feature that [*conserved charges can, in general, be set to zero as subsidiary conditions*]{}. We impose the following subsidiary conditions on states by hand, $$\begin{aligned} N_{A} |{\rm phys}\rangle = 0,~~ N_{B} |{\rm phys}\rangle = 0,~~ Q_{\rm F} |{\rm phys}\rangle = 0,~~ Q_{\rm F}^{\dagger} |{\rm phys}\rangle = 0. \label{Phys}\end{aligned}$$ In appendix A, we point out that subsidiary conditions corresponding to (\[Phys\]) can be realized as remnants of local symmetries in a specific case. Unitarity --------- Let us study the unitarity of the physical $S$-matrix in our system, using the Lagrangian density of free fields, $$\begin{aligned} \mathcal{L}_{0} = \partial_{\mu} \varphi^{\dagger} \partial^{\mu} \varphi - m^2 \varphi^{\dagger} \varphi + \partial_{\mu} c_{\varphi}^{\dagger} \partial^{\mu} c_{\varphi} - m^2 c_{\varphi}^{\dagger} c_{\varphi} - 2 \partial_{\mu} A_i \partial^{\mu} B^{i} - \partial_{\mu} C_i^{+} \partial^{\mu} C^{-i}, \label{L-0}\end{aligned}$$ on which the gauge fixing conditions (\[gf\]) are imposed. The $\mathcal{L}_{0}$ describes the behavior of the asymptotic fields of the Heisenberg operators in $\mathcal{L} = \mathcal{L}_{\rm M} + \mathcal{L}_{\rm G}$. From (\[L-0\]), the free field equations for $\varphi$, $\varphi^{\dagger}$, $c_{\varphi}$, $c_{\varphi}^{\dagger}$, $A_i$, $B_i$ and $C_i^{\pm}$ are derived. 
By solving the Klein-Gordon equations, we obtain the solutions $$\begin{aligned} &~& \varphi(x) = \int \frac{d^3k}{\sqrt{(2\pi)^3 2k_0}} \left(a(\bm{k}) e^{-i k x} + b^{\dagger}(\bm{k}) e^{i k x}\right), \label{varphi-sol}\\ &~& \varphi^{\dagger}(x) = \int \frac{d^3k}{\sqrt{(2\pi)^3 2k_0}} \left(a^{\dagger}(\bm{k}) e^{i k x} + b (\bm{k}) e^{-i k x}\right), \label{varphi-dagger-sol}\\ &~& \pi(x) = i \int d^3k \sqrt{\frac{k_0}{2 (2\pi)^3}} \left(a^{\dagger}(\bm{k}) e^{i k x} - b (\bm{k}) e^{-i k x}\right), \label{pi-sol}\\ &~& \pi^{\dagger}(x) = - i \int d^3k \sqrt{\frac{k_0}{2 (2\pi)^3}} \left(a(\bm{k}) e^{-i k x} - b^{\dagger} (\bm{k}) e^{i k x}\right), \label{pi-dagger-sol}\\ &~& c_{\varphi}(x) = \int \frac{d^3k}{\sqrt{(2\pi)^3 2k_0}} \left(c(\bm{k}) e^{-i k x} + d^{\dagger}(\bm{k}) e^{i k x}\right), \label{c-sol}\\ &~& c_{\varphi}^{\dagger}(x) = \int \frac{d^3k}{\sqrt{(2\pi)^3 2k_0}} \left(c^{\dagger}(\bm{k}) e^{i k x} + d (\bm{k}) e^{-i k x}\right), \label{c-dagger-sol}\\ &~& \pi_{c_{\varphi}}(x) = i \int d^3k \sqrt{\frac{k_0}{2 (2\pi)^3}} \left(c^{\dagger}(\bm{k}) e^{i k x} - d (\bm{k}) e^{-i k x}\right), \label{pi-c-sol}\\ &~& \pi_{c_{\varphi}}^{\dagger}(x) = - i \int d^3k \sqrt{\frac{k_0}{2 (2\pi)^3}} \left(c(\bm{k}) e^{-i k x} - d^{\dagger} (\bm{k}) e^{i k x}\right), \label{pi-c-dagger-sol}\end{aligned}$$ where $k_0 = \sqrt{\bm{k}^2 + m^2}$ and $kx = k^{\mu} x_{\mu}$. 
In the same way, by solving the free Maxwell equations, we obtain the solutions, $$\begin{aligned} &~& A_{i}(x) = \int \frac{d^3k}{\sqrt{(2\pi)^3 2k_0}} \left(\varepsilon_i^{\alpha} a_{\alpha}(\bm{k}) e^{-i k x} + \varepsilon_i^{*\alpha} a_{\alpha}^{\dagger}(\bm{k}) e^{i k x}\right), \label{Ai-sol}\\ &~& B_{i}(x) = \int \frac{d^3k}{\sqrt{(2\pi)^3 2k_0}} \left(\varepsilon_i^{\alpha} b_{\alpha}(\bm{k}) e^{-i k x} + \varepsilon_i^{*\alpha} b_{\alpha}^{\dagger}(\bm{k}) e^{i k x}\right), \label{Bi-sol}\\ &~& C_{i}^{+}(x) = \int \frac{d^3k}{\sqrt{(2\pi)^3 2k_0}} \left(\varepsilon_i^{\alpha} c_{\alpha}(\bm{k}) e^{-i k x} + \varepsilon_i^{*\alpha} d_{\alpha}^{\dagger}(\bm{k}) e^{i k x}\right), \label{Ci+-sol}\\ &~& C_{i}^{-}(x) = \int \frac{d^3k}{\sqrt{(2\pi)^3 2k_0}} \left(\varepsilon_i^{*\alpha} c_{\alpha}^{\dagger}(\bm{k}) e^{i k x} + \varepsilon_i^{\alpha} d_{\alpha}(\bm{k}) e^{-i k x}\right), \label{Ci--sol}\\ &~& \Pi_{A}^{i}(x) = 2 i \int d^3k \sqrt{\frac{k_0}{2 (2\pi)^3}} \left(\varepsilon_i^{*\alpha} b_{\alpha}^{\dagger}(\bm{k}) e^{i k x} - \varepsilon_i^{\alpha} b_{\alpha}(\bm{k}) e^{-i k x}\right), \label{PiAi-sol}\\ &~& \Pi_{B}^{i}(x) = 2 i \int d^3k \sqrt{\frac{k_0}{2 (2\pi)^3}} \left(\varepsilon_i^{*\alpha} a_{\alpha}^{\dagger}(\bm{k}) e^{i k x} - \varepsilon_i^{\alpha} a_{\alpha}(\bm{k}) e^{-i k x}\right), \label{PiBi-sol}\\ &~& \Pi_{C}^{+i}(x) = i \int d^3k \sqrt{\frac{k_0}{2 (2\pi)^3}} \left(\varepsilon_i^{*\alpha} c_{\alpha}^{\dagger}(\bm{k}) e^{i k x} - \varepsilon_i^{\alpha} d_{\alpha}(\bm{k}) e^{-i k x}\right), \label{Pii+-sol}\\ &~& \Pi_{C}^{-i}(x) = - i \int d^3k \sqrt{\frac{k_0}{2 (2\pi)^3}} \left(\varepsilon_i^{\alpha} c_{\alpha}(\bm{k}) e^{-i k x} - \varepsilon_i^{*\alpha} d_{\alpha}^{\dagger}(\bm{k}) e^{i k x}\right), \label{Pii--sol}\end{aligned}$$ where $k_0 = |\bm{k}|$ and $\varepsilon_i^{\alpha}$ are polarization vectors satisfying the relations, $$\begin{aligned} k_i \varepsilon_i^{\alpha} = 0,~~ \varepsilon_i^{\alpha} \varepsilon^{*i\alpha'} 
= \delta_{\alpha\alpha'},~~ \sum_{\alpha} \varepsilon_i^{\alpha} \varepsilon^{*j\alpha} = \delta_i^j - \frac{k_i k^j}{\bm{k}^2}~. \label{polarization}\end{aligned}$$ The index $\alpha$ represents the helicity of the gauge fields. By imposing the same type of relations as (\[CCR-varphi\]) – (\[CCR-C-\]), we obtain the relations, $$\begin{aligned} &~& [a(\bm{k}), a^{\dagger}(\bm{l})] = \delta^3(\bm{k}-\bm{l}),~~ [b(\bm{k}), b^{\dagger}(\bm{l})] = \delta^3(\bm{k}-\bm{l}), \label{CCR-ab-varphi}\\ &~& \{c(\bm{k}), c^{\dagger}(\bm{l})\} = \delta^3(\bm{k}-\bm{l}),~~ \{d(\bm{k}), d^{\dagger}(\bm{l})\} = - \delta^3(\bm{k}-\bm{l}), \label{CCR-cd-c}\\ &~& [a_{\alpha}(\bm{k}), b_{\alpha'}^{\dagger}(\bm{l})] = - \frac{1}{2}\delta_{\alpha\alpha'} \delta^3(\bm{k}-\bm{l}),~~ [b_{\alpha}(\bm{k}), a_{\alpha'}^{\dagger}(\bm{l})] = - \frac{1}{2}\delta_{\alpha\alpha'} \delta^3(\bm{k}-\bm{l}),~~ \label{CCR-ab-AB}\\ &~& \{c_{\alpha}(\bm{k}), c_{\alpha'}^{\dagger}(\bm{l})\} = \delta_{\alpha\alpha'} \delta^3(\bm{k}-\bm{l}),~~ \{d_{\alpha}(\bm{k}), d_{\alpha'}^{\dagger}(\bm{l})\} = - \delta_{\alpha\alpha'} \delta^3(\bm{k}-\bm{l}), \label{CCR-cd-C}\end{aligned}$$ and all other brackets vanish. The states in the Fock space are constructed by acting with the creation operators $a^{\dagger}(\bm{k})$, $b^{\dagger}(\bm{k})$, $c^{\dagger}(\bm{k})$, $d^{\dagger}(\bm{k})$, $a_{\alpha}^{\dagger}(\bm{k})$, $b_{\alpha}^{\dagger}(\bm{k})$, $c_{\alpha}^{\dagger}(\bm{k})$ and $d_{\alpha}^{\dagger}(\bm{k})$ on the vacuum state $| 0 \rangle$, where $| 0 \rangle$ is defined by the conditions $a(\bm{k})| 0 \rangle = 0$, $b(\bm{k})| 0 \rangle = 0$, $c(\bm{k})| 0 \rangle = 0$, $d(\bm{k})| 0 \rangle = 0$, $a_{\alpha}(\bm{k}) | 0 \rangle = 0$, $b_{\alpha}(\bm{k}) | 0 \rangle = 0$, $c_{\alpha}(\bm{k}) | 0 \rangle = 0$ and $d_{\alpha}(\bm{k}) | 0 \rangle = 0$. 
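The minus signs in (\[CCR-cd-c\]) and (\[CCR-cd-C\]) signal negative-norm states. For a single fermionic mode with $\{d, d^{\dagger}\} = -1$, a two-dimensional sketch of our own makes this explicit:

```python
import numpy as np

# Single fermionic mode with {d, d^dagger} = -1, realized on the span of
# {|0>, d^dagger|0>} with the indefinite metric eta = diag(1, -1).
eta = np.diag([1.0, -1.0])
d = np.array([[0.0, 1.0], [0.0, 0.0]])     # annihilation operator
d_dag = eta @ d.T @ eta                     # eta-adjoint; equals -d^T here

# Anticommutation relation with the "wrong" sign
assert np.allclose(d @ d_dag + d_dag @ d, -np.eye(2))

vac = np.array([1.0, 0.0])
one = d_dag @ vac                           # one-particle state d^dagger|0>

norm = one @ eta @ one                      # <1|1> with the indefinite metric
print(norm)                                 # -1.0: a negative-norm state
assert norm < 0
```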
We impose the following subsidiary conditions on states to select physical states, $$\begin{aligned} N_{A} |{\rm phys}\rangle = 0,~~ N_{B} |{\rm phys}\rangle = 0,~~ Q_{\rm F} |{\rm phys}\rangle = 0,~~ Q_{\rm F}^{\dagger} |{\rm phys}\rangle = 0. \label{Phys2}\end{aligned}$$ Note that $Q_{\rm F}^{\dagger} |{\rm phys}\rangle = 0$ means $\langle {\rm phys}|Q_{\rm F}=0$. We find that all states, except for the vacuum state, are unphysical because they do not satisfy (\[Phys2\]). This feature is understood as a counterpart of the quartet mechanism. The projection operator $P^{(n)}$ onto states with $n$ particles is given by $$\begin{aligned} &~& P^{(n)} = \frac{1}{n} \left\{a^{\dagger} P^{(n-1)} a + b^{\dagger} P^{(n-1)} b + c^{\dagger} P^{(n-1)} c - d^{\dagger} P^{(n-1)} d \right. \nonumber \\ &~& ~~~~~~~~~~~~ \left. + \sum_{\alpha}\left(-2 a_{\alpha}^{\dagger} P^{(n-1)} b_{\alpha} - 2 b_{\alpha}^{\dagger} P^{(n-1)} a_{\alpha} + c_{\alpha}^{\dagger} P^{(n-1)} c_{\alpha} - d_{\alpha}^{\dagger} P^{(n-1)} d_{\alpha} \right)\right\}, \label{P(n)}\end{aligned}$$ where $n \ge 1$ and we omit the argument $\bm{k}$ for simplicity.
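The recursion in (\[P(n)\]) is that of a number projector. As a minimal sketch, the analogous recursion $P^{(n)} = \frac{1}{n}\, a^{\dagger} P^{(n-1)} a$ for a single ordinary bosonic mode (omitting the ghost partners and the relative signs of the full expression) can be checked in a truncated Fock space:

```python
import numpy as np

# Toy check (single ordinary bosonic mode, no ghost partners) of the
# projector recursion P(n) = (1/n) a_dag P(n-1) a in a truncated Fock space.
dim = 8
adag = np.diag(np.sqrt(np.arange(1, dim)), -1)   # truncated creation operator
a = adag.T                                        # annihilation operator

P = np.zeros((dim, dim))
P[0, 0] = 1.0                                     # P(0) = |0><0|
for n in range(1, 5):
    P = (adag @ P @ a) / n
    expected = np.zeros((dim, dim))
    expected[n, n] = 1.0
    assert np.allclose(P, expected)               # P(n) = |n><n|
print("recursion reproduces the n-particle projectors")
```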
Using the transformation properties, $$\begin{aligned} &~& \tilde{\bm{\delta}}_{\rm F} a = - c,~~ \tilde{\bm{\delta}}_{\rm F} a^{\dagger} = 0,~~ \tilde{\bm{\delta}}_{\rm F} b = 0,~~ \tilde{\bm{\delta}}_{\rm F} b^{\dagger} = - d^{\dagger},~~ \nonumber \\ &~& \tilde{\bm{\delta}}_{\rm F} c = 0,~~ \tilde{\bm{\delta}}_{\rm F} c^{\dagger} = a^{\dagger},~~ \tilde{\bm{\delta}}_{\rm F} d = b,~~ \tilde{\bm{\delta}}_{\rm F} d^{\dagger} = 0, \nonumber \\ &~& \tilde{\bm{\delta}}_{\rm F} a_{\alpha} = - i d_{\alpha},~~ \tilde{\bm{\delta}}_{\rm F} a_{\alpha}^{\dagger} = - i c_{\alpha}^{\dagger},~~ \tilde{\bm{\delta}}_{\rm F} b_{\alpha} = 0,~~ \tilde{\bm{\delta}}_{\rm F} b_{\alpha}^{\dagger} = 0,~~ \nonumber \\ &~& \tilde{\bm{\delta}}_{\rm F} c_{\alpha} = 2 i b_{\alpha},~~ \tilde{\bm{\delta}}_{\rm F} c_{\alpha}^{\dagger} = 0,~~ \tilde{\bm{\delta}}_{\rm F} d_{\alpha} = 0,~~ \tilde{\bm{\delta}}_{\rm F} d_{\alpha}^{\dagger} = 2 i b_{\alpha}^{\dagger}, \label{delta-F-abcd}\end{aligned}$$ $P^{(n)}$ is written in a simple form as $$\begin{aligned} P^{(n)} = i \left\{Q_{\rm F}, R^{(n)}\right\}~, \label{P(n)2}\end{aligned}$$ where $R^{(n)}$ is given by $$\begin{aligned} R^{(n)} = \frac{1}{n} \left\{ c^{\dagger} P^{(n-1)} a + b^{\dagger} P^{(n-1)} d + i \sum_{\alpha} \left(a_{\alpha}^{\dagger} P^{(n-1)} c_{\alpha} + d_{\alpha}^{\dagger} P^{(n-1)} a_{\alpha}\right)\right\}. \label{R(n)}\end{aligned}$$ From (\[P(n)2\]), we find that any state with $n \ge 1$ is unphysical, since $\langle {\rm phys}|P^{(n)}|{\rm phys}\rangle = 0$. Hence every field becomes unphysical, and only $|0 \rangle$ remains as the physical state. This is also regarded as a field theoretical version of the Parisi-Sourlas mechanism. The system can also be formulated using the hermitian fermionic charges defined by $Q_1 \equiv Q_{\rm F} + Q_{\rm F}^{\dagger}$ and $Q_2 \equiv i(Q_{\rm F} - Q_{\rm F}^{\dagger})$. They satisfy the relations $Q_1 Q_2 + Q_2 Q_1 = 0$, ${Q_1}^2 = N_{A}$ and ${Q_2}^2 = N_{A}$.
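These algebraic relations hold for any nilpotent $Q_{\rm F}$, and can be checked in the smallest matrix realization, with $N = \{Q_{\rm F}, Q_{\rm F}^{\dagger}\}$ standing in for $N_{A}$. The following is a toy $2 \times 2$ model, not the field-theory charges:

```python
import numpy as np

# Toy 2x2 realization (not the field-theory charges): any nilpotent QF
# reproduces Q1^2 = Q2^2 = N and Q1 Q2 + Q2 Q1 = 0, with N = {QF, QF^dag}.
QF = np.array([[0.0, 1.0],
               [0.0, 0.0]])                 # a single fermionic doublet, QF^2 = 0
QFd = QF.conj().T

Q1 = QF + QFd
Q2 = 1j * (QF - QFd)
N = QF @ QFd + QFd @ QF                     # plays the role of N_A

assert np.allclose(QF @ QF, 0)              # nilpotency
assert np.allclose(Q1 @ Q1, N)              # Q1^2 = N
assert np.allclose(Q2 @ Q2, N)              # Q2^2 = N
assert np.allclose(Q1 @ Q2 + Q2 @ Q1, 0)    # Q1 and Q2 anticommute
print("N = 2 SUSY quantum-mechanical algebra verified")
```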
Though $Q_1$, $Q_2$ and $N_{A}$ form elements of the $N=2$ (quantum mechanical) SUSY algebra [@Witten], our system does not possess space-time SUSY, because the role of the Hamiltonian in this algebra is played not by our Hamiltonian but by the $U(1)$ charge $N_{A}$. Only the vacuum state is selected as the physical state by imposing the following subsidiary conditions on states, in place of (\[Phys2\]), $$\begin{aligned} N_{A} |{\rm phys}\rangle = 0,~~ N_{B} |{\rm phys}\rangle = 0,~~ Q_{1} |{\rm phys}\rangle = 0,~~ Q_{2} |{\rm phys}\rangle = 0. \label{Phys-Q12}\end{aligned}$$ That our fermionic symmetries differ from space-time SUSY is also seen from the fact that $Q_1$ and $Q_2$ are scalar charges. They also differ from the BRST symmetry, as seen from the algebraic relations among the charges. The system with spinor and gauge fields described by $\mathcal{L}^{\rm sp} = \mathcal{L}_{\rm M}^{\rm sp} + \mathcal{L}_{\rm G}$ can be quantized in a similar way. We find that the theory becomes harmless but empty, leaving the vacuum state alone as the physical state, after imposing subsidiary conditions corresponding to (\[Phys\]). BRST symmetry ------------- Our system has local symmetries, and it is quantized by the Faddeev-Popov (FP) method. To add the gauge fixing conditions to the Lagrangian, FP ghost and anti-ghost fields and auxiliary fields, called Nakanishi-Lautrup (NL) fields, are introduced. The system is then described on an extended phase space and has a global symmetry called the BRST symmetry. We present the gauge-fixed Lagrangian density and study the BRST transformation properties.
According to the usual procedure, the Lagrangian density containing the gauge fixing terms and FP ghost terms is constructed as $$\begin{aligned} &~& \mathcal{L}_{\rm T}= \mathcal{L}_{\rm M} + \mathcal{L}_{\rm G} + \mathcal{L}_{\rm gf} + \mathcal{L}_{\rm FP}, \nonumber\\ &~& \mathcal{L}_{\rm gf} = - \partial_{\mu}b_A~A^{\mu} - \partial_{\mu}b_B~B^{\mu} + C^{+\mu} \partial_{\mu}\phi_c + \partial_{\mu}\phi_c^{\dagger}~C^{-\mu} \nonumber \\ &~& ~~~~~~~~~~~ + \frac{1}{2}\alpha (b_A^2 + b_B^2 + 2 \phi_c^{\dagger} \phi_c), \label{L-gf}\\ &~& \mathcal{L}_{\rm FP} =- i \partial_{\mu} \overline{c}_A (\partial^{\mu} c_A - i g \phi C^{-\mu} + i g \phi^{\dagger} C^{+\mu}) - i \partial_{\mu} \overline{c}_B~\partial^{\mu} c_B \nonumber \\ &~& ~~~~~~~~~~~~ + i (\partial^{\mu} \phi - 2 i g c_B C^{+\mu} + 2 i g \phi B^{\mu}) \partial_{\mu} \overline{\phi} \nonumber \\ &~& ~~~~~~~~~~~~ - i \partial_{\mu} \overline{\phi}^{\dagger} (\partial^{\mu} \phi^{\dagger} - 2 i g c_B C^{-\mu} - 2 i g \phi^{\dagger} B^{\mu}), \label{L-FP}\end{aligned}$$ where $c_A$, $c_B$, $\phi$ and $\phi^{\dagger}$ are FP ghosts, $\overline{c}_A$, $\overline{c}_B$, $\overline{\phi}$ and $\overline{\phi}^{\dagger}$ are FP anti-ghosts, $b_A$, $b_B$, $\phi_c$ and $\phi_c^{\dagger}$ are NL fields, and $\alpha$ is a gauge parameter. These fields are scalar fields. $c_A$, $c_B$, $\overline{c}_A$ and $\overline{c}_B$ are fermionic, and $b_A$ and $b_B$ are bosonic. In contrast, $\phi$, $\phi^{\dagger}$, $\overline{\phi}$ and $\overline{\phi}^{\dagger}$ are bosonic, and $\phi_c$ and $\phi_c^{\dagger}$ are fermionic because the relevant symmetries are fermionic. 
$\mathcal{L}_{\rm T}$ is invariant under the BRST transformation, $$\begin{aligned} &~& \bm{\delta}_{\mbox{\tiny BRST}} \varphi = - i g c_A \varphi - i g c_B \varphi - g \phi c_{\varphi},~~ \bm{\delta}_{\mbox{\tiny BRST}} \varphi^{\dagger} = i g c_A \varphi^{\dagger} + i g c_B \varphi^{\dagger} - g \phi^{\dagger} c_{\varphi}^{\dagger},~~ \nonumber \\ &~& \bm{\delta}_{\mbox{\tiny BRST}} c_{\varphi} = - i g c_A c_{\varphi} + i g c_B c_{\varphi} - g \phi^{\dagger} \varphi,~~ \bm{\delta}_{\mbox{\tiny BRST}} c_{\varphi}^{\dagger} = i g c_A c_{\varphi}^{\dagger} - i g c_B c_{\varphi}^{\dagger} + g \phi \varphi^{\dagger}, \nonumber \\ &~& \bm{\delta}_{\mbox{\tiny BRST}} c_A = -i g \phi^{\dagger} \phi,~~ \bm{\delta}_{\mbox{\tiny BRST}} c_B = 0,~~ \bm{\delta}_{\mbox{\tiny BRST}} \phi = - 2 i g c_B \phi,~~ \bm{\delta}_{\mbox{\tiny BRST}} \phi^{\dagger} = 2 i g c_B \phi^{\dagger},~~ \nonumber \\ &~& \bm{\delta}_{\mbox{\tiny BRST}} A_{\mu} = \partial_{\mu} c_A - i g \phi C_{\mu}^{-} + i g \phi^{\dagger} C_{\mu}^{+},~~ \bm{\delta}_{\mbox{\tiny BRST}} B_{\mu} = \partial_{\mu} c_B,~~ \nonumber \\ &~& \bm{\delta}_{\mbox{\tiny BRST}} C_{\mu}^{+} = - 2 i g c_B C_{\mu}^+ + 2 i g \phi B_{\mu} + \partial_{\mu} \phi,~~ \bm{\delta}_{\mbox{\tiny BRST}} C_{\mu}^{-} = 2 i g c_B C_{\mu}^- + 2 i g \phi^{\dagger} B_{\mu} - \partial_{\mu} \phi^{\dagger}, \nonumber \\ &~& \bm{\delta}_{\mbox{\tiny BRST}} \overline{c}_A = i b_A,~~ \bm{\delta}_{\mbox{\tiny BRST}} \overline{c}_B = i b_B,~~ \bm{\delta}_{\mbox{\tiny BRST}} \overline{\phi} = i \phi_c,~~ \bm{\delta}_{\mbox{\tiny BRST}} \overline{\phi}^{\dagger} = - i {\phi}_c^{\dagger},~~ \nonumber \\ &~& \bm{\delta}_{\mbox{\tiny BRST}} b_A = 0,~~ \bm{\delta}_{\mbox{\tiny BRST}} b_B = 0,~~ \bm{\delta}_{\mbox{\tiny BRST}} \phi_c = 0,~~ \bm{\delta}_{\mbox{\tiny BRST}} \phi_c^{\dagger} = 0, \label{delta-BRST}\end{aligned}$$ where the transformations for $\varphi$, $\varphi^{\dagger}$, $c_{\varphi}$, $c_{\varphi}^{\dagger}$, $A_{\mu}$, $B_{\mu}$, $C_{\mu}^+$ and
$C_{\mu}^-$ are obtained by regarding the sum of transformations $\delta_{A} + \delta_{B} + \delta_{\rm F} + \delta_{\rm F}^{\dagger}$ as $\bm{\delta}_{\mbox{\tiny BRST}}$ and replacing $\epsilon$, $\xi$, $\zeta$ and $\zeta^{\dagger}$ with $g c_A$, $g c_B$, $g \phi$ and $-g \phi^{\dagger}$, and those for $c_A$, $c_B$, $\phi$ and $\phi^{\dagger}$ are determined by the requirement that $\bm{\delta}_{\mbox{\tiny BRST}}$ has a nilpotency property, i.e., ${\bm{\delta}_{\mbox{\tiny BRST}}}^2 \mathcal{O} = 0$. The sum of the gauge fixing terms and FP ghost terms is simply written as $$\begin{aligned} &~& \mathcal{L}_{\rm gf} + \mathcal{L}_{\rm FP} = i \bm{\delta}_{\mbox{\tiny BRST}} \bigl\{\partial_{\mu}\overline{c}_A~A^{\mu} + \partial_{\mu}\overline{c}_B~B^{\mu} + C^{+\mu} \partial_{\mu}\overline{\phi} + \partial_{\mu}\overline{\phi}^{\dagger}~C^{-\mu} \nonumber\\ &~& ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - \frac{1}{2}\alpha (\overline{c}_A b_A + \overline{c}_B b_B - \phi_c^{\dagger} \overline{\phi} - \overline{\phi}^{\dagger} \phi_c)\bigr\}. 
\label{L-fgFP}\end{aligned}$$ According to the Noether procedure, the BRST current $J_{\mbox{\tiny BRST}}^{\mu}$ and the BRST charge $Q_{\mbox{\tiny BRST}}$ are obtained as $$\begin{aligned} &~& J_{\mbox{\tiny BRST}}^{\mu} = b_A (\partial^{\mu} c_{A} - i g \phi C^{-\mu} + i g \phi^{\dagger} C^{+\mu}) - c_{A} \partial^{\mu} b_A + b_B \partial^{\mu} c_{B}~ - c_{B} \partial^{\mu} b_B \nonumber \\ &~& ~~~~~~~~~~~~ - \phi_c (\partial^{\mu} \phi - 2 i g c_B C^{+\mu} + 2 i g \phi B^{\mu}) + \phi \partial^{\mu} \phi_c \nonumber \\ &~& ~~~~~~~~~~~~ - \phi_c^{\dagger} (\partial^{\mu} \phi^{\dagger} - 2 i g c_B C^{-\mu} - 2 i g \phi^{\dagger} B^{\mu}) + \phi^{\dagger} \partial^{\mu} \phi_c^{\dagger} \nonumber \\ &~& ~~~~~~~~~~~~ - 2 g c_B \phi \partial^{\mu} \overline{\phi} - 2 g c_B \phi^{\dagger} \partial^{\mu} \overline{\phi}^{\dagger} - g \phi^{\dagger} \phi \partial^{\mu} \overline{c}_A \nonumber \\ &~& ~~~~~~~~~~~~ - 2 \partial_{\nu} (c_A B^{\mu\nu}) - 2 \partial_{\nu} (c_B A^{\mu\nu}) - \partial_{\nu} (\phi C^{-\mu\nu}) - \partial_{\nu} (\phi^{\dagger} C^{+\mu\nu}) \label{J-BRST}\end{aligned}$$ and $$\begin{aligned} &~& Q_{\mbox{\tiny BRST}} \equiv \int d^3x J_{\mbox{\tiny BRST}}^{0} = \int d^3x \bigl\{b_A (\partial^{0} c_{A} - i g \phi C^{-0} + i g \phi^{\dagger} C^{+0}) - c_{A} \partial^{0} b_A \nonumber \\ &~& ~~~~~~~~~~~~ + b_B \partial^{0} c_{B}~ - \partial^{0} b_B~c_{B} - \phi_c (\partial^{0} \phi - 2 i g c_B C^{+0} + 2 i g \phi B^{0}) + \phi \partial^{0} \phi_c \nonumber \\ &~& ~~~~~~~~~~~~ - \phi_c^{\dagger} (\partial^{0} \phi^{\dagger} - 2 i g c_B C^{-0} - 2 i g \phi^{\dagger} B^{0}) + \phi^{\dagger} \partial^{0} \phi_c^{\dagger} \nonumber\\ &~& ~~~~~~~~~~~~ - 2 g c_B \phi \partial^{0} \overline{\phi} - 2 g c_B \phi^{\dagger} \partial^{0} \overline{\phi}^{\dagger} - g \phi^{\dagger} \phi \partial^{0} \overline{c}_A\bigr\}, \label{Q-BRST}\end{aligned}$$ respectively. Here we use the field equations. 
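The nilpotency of such a charge is what allows zero-norm pairs to decouple from physical amplitudes. As a minimal illustration (a hypothetical two-state model with an indefinite inner product, not the field-theoretic operators above), one can check the pattern numerically:

```python
import numpy as np

# Minimal indefinite-metric toy (two states, illustrative only): a nilpotent
# charge Q with Q|1> = |2>, Q|2> = 0 and inner product <m|n> = eta_mn.
Q = np.array([[0.0, 0.0],
              [1.0, 0.0]])                  # Q maps |1> -> |2>, annihilates |2>
eta = np.array([[0.0, 1.0],
                [1.0, 0.0]])                # <1|1> = <2|2> = 0, <1|2> = 1

assert np.allclose(Q @ Q, 0)                # nilpotency, Q^2 = 0

# ker Q is spanned by |2> = Q|1>: it satisfies the physical-state condition
# but is Q-exact, hence has zero norm and decouples from physical amplitudes.
v2 = np.array([0.0, 1.0])
assert np.allclose(Q @ v2, 0)
assert np.isclose(v2 @ eta @ v2, 0.0)
print("Q-exact state has zero norm and decouples")
```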
The BRST charge is conserved ($d Q_{\mbox{\tiny BRST}}/dt = 0$) and satisfies the nilpotency property ${Q_{\mbox{\tiny BRST}}}^2 = 0$. By imposing the following subsidiary condition on states, $$\begin{aligned} Q_{\mbox{\tiny BRST}} |{\rm phys}\rangle = 0, \label{Phys-BRST}\end{aligned}$$ it is shown that negative norm states originating from the time and longitudinal components of the gauge fields, as well as from the FP ghost and anti-ghost fields and the NL fields, do not appear in the physical subspace, through the quartet mechanism. There still exist negative norm states coming from $c_{\varphi}$, $c_{\varphi}^{\dagger}$ and $C_{\mu}^{\pm}$, and it is necessary to impose additional conditions corresponding to (\[Phys\]) on states in order to project out such harmful states. Conclusions and discussions =========================== We have studied the quantization of systems with local particle-ghost symmetries. The systems contain ordinary particles, including gauge bosons, and their counterparts obeying different statistics. There exist negative norm states coming from the fermionic scalar fields (or bosonic spinor fields) and the transverse components of the fermionic gauge fields, even after reducing the phase space by the first class constraints and the gauge fixing conditions, or after imposing the subsidiary condition concerning the BRST charge on states. By imposing additional subsidiary conditions on states, such negative norm states are projected out of the physical subspace and the unitarity of the systems holds. The additional conditions can originate from constraints in the case that the gauge fields have no dynamical degrees of freedom. As they stand, the systems considered are unrealistic, because they are empty, leaving the vacuum state alone as the physical state. One might then think it better not to get deeply involved with them.
Although these ideas are still up in the air at present, there is a possibility that the formalism or concept itself is basically correct and useful for explaining phenomena of elementary particles at a more fundamental level. It is necessary to fully understand the features of our particle-ghost symmetries, in order to apply them appropriately to a more microscopic system. We make conjectures on some applications. We suppose that particle-ghost symmetries exist and that, at an ultimate level, the system contains only a few states, including the vacuum, as physical states. Most physical particles might be released from unphysical doublets that consist of particles and their ghost partners. A release mechanism has been proposed based on dimensional reduction by orbifolding [@YK3]. After the appearance of physical fields, $Q_{\rm F}$-singlets and $Q_{\rm F}$-doublets coexist with exact fermionic symmetries. The Lagrangian density is, in general, written in the form $\mathcal{L}_{\rm Total} = \mathcal{L}_{\rm S} + \mathcal{L}_{\rm D} + \mathcal{L}_{\rm mix} = \mathcal{L}_{\rm S} + \tilde{\bm{\delta}}_{\rm F} \tilde{\bm{\delta}}_{\rm F}^{\dagger} (\Delta \mathcal{L})$. Here, $\mathcal{L}_{\rm S}$, $\mathcal{L}_{\rm D}$ and $\mathcal{L}_{\rm mix}$ stand for the Lagrangian densities for the $Q_{\rm F}$-singlets, the $Q_{\rm F}$-doublets, and the interactions between $Q_{\rm F}$-singlets and $Q_{\rm F}$-doublets. Under the subsidiary conditions $N_{A} |{\rm phys}\rangle =0$, $N_{B} |{\rm phys}\rangle =0$, $Q_{\rm F} |{\rm phys}\rangle =0$ and $Q_{\rm F}^{\dagger} |{\rm phys}\rangle =0$ on states, all $Q_{\rm F}$-doublets become unphysical and would not give any physical effects on the $Q_{\rm F}$-singlets. Because $Q_{\rm F}$-singlets would not receive any radiative corrections from $Q_{\rm F}$-doublets, the theory is free from the gauge hierarchy problem if all heavy fields form $Q_{\rm F}$-doublets [@YK1].
The system then seems to be the same as that described by $\mathcal{L}_{\rm S}$ alone, making it seemingly impossible to show the existence of $Q_{\rm F}$-doublets. However, in a very special case, an indirect proof would be possible through fingerprints left by symmetries in a fundamental theory. The fingerprints are specific relations among parameters, such as a unification of coupling constants, reflecting underlying symmetries [@YK1; @YK5]. In most cases, our ghost fields require non-local interactions [@YK1], and the change of degrees of freedom can occur in systems with infinite numbers of fields [@YK3]. These features might suggest that the fundamental objects are not point particles but extended objects such as strings and membranes. Hence, it would be interesting to explore systems with particle-ghost symmetries and their applications in the framework of string theories.[^2] Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported in part by scientific grants from the Ministry of Education, Culture, Sports, Science and Technology under Grant No. 22540272. System with auxiliary gauge fields ================================== Let us study the system without $\mathcal{L}_{\rm G}$, described by $$\begin{aligned} &~& \mathcal{L}_{\rm M} = \bigl\{(\partial_{\mu} - i g A_{\mu} - i g B_{\mu}) \varphi^{\dagger} - g C_{\mu}^{-} c_{\varphi}^{\dagger}\bigr\} \bigl\{(\partial^{\mu} + i g A^{\mu} + i g B^{\mu}) \varphi + g C^{+\mu} c_{\varphi}\bigr\} \nonumber\\ &~& ~~~~~~~~~~~ + \bigl\{(\partial_{\mu} - ig A_{\mu} + i g B_{\mu}) c_{\varphi}^{\dagger} - g C_{\mu}^{+} \varphi^{\dagger}\bigr\} \bigl\{(\partial^{\mu} + i g A^{\mu} - i g B^{\mu}) c_{\varphi} - g C^{-\mu} \varphi\bigr\} \nonumber\\ &~& ~~~~~~~~~~~ - m^2 \varphi^{\dagger} \varphi - m^2 c_{\varphi}^{\dagger} c_{\varphi}. \label{L-M-again}\end{aligned}$$ In this case, the gauge fields do not have any dynamical degrees of freedom and are regarded as auxiliary fields.
The conjugate momenta of $\varphi$, $\varphi^{\dagger}$, $c_{\varphi}$ and $c_{\varphi}^{\dagger}$ are the same as those obtained in (\[pi\]) – (\[pi-c-dagger\]). The conjugate momenta of $A_{\mu}$, $B_{\mu}$, $C_{\mu}^+$ and $C_{\mu}^-$ become constraints, $$\begin{aligned} \Pi_{A}^{\mu} = 0,~~ \Pi_{B}^{\mu} = 0,~~ \Pi_{C}^{+\mu} = 0,~~ \Pi_{C}^{-\mu} = 0. \label{Pi-ABC}\end{aligned}$$ Using the Legendre transformation, the Hamiltonian density is obtained as $$\begin{aligned} &~& \mathcal{H}_{\rm M} = \pi \dot{\varphi} + \dot{\varphi}^{\dagger}\pi^{\dagger} + \pi_{c_{\varphi}} \dot{c}_{\varphi} + \dot{c}_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger} + \dot{A}_{\mu} \Pi_{A}^{\mu} + \Pi_{B}^{\mu} \dot{B}_{\mu} + \dot{C}_{\mu}^{+} \Pi_{C}^{+\mu} + \Pi_{C}^{-\mu} \dot{C}_{\mu}^{-} - \mathcal{L} \nonumber \\ &~& ~~~~~~~~~~~~ + \lambda_{A\mu} \Pi_{A}^{\mu} + \Pi_{B}^{\mu} \lambda_{B\mu} + \lambda_{C\mu}^{+} \Pi_{C}^{+\mu} + \Pi_{C}^{-\mu} \lambda_{C\mu}^{-} \nonumber \\ &~& ~~~~~~~~~ = \pi \pi^{\dagger} + \pi_{c_{\varphi}} \pi_{c_{\varphi}}^{\dagger} + (D_i \Phi)^{\dagger} (D^i \Phi) + m^2 \Phi^{\dagger} \Phi \nonumber \\ &~& ~~~~~~~~~~~~ - i g A_0 (\pi \varphi - \varphi^{\dagger} \pi^{\dagger} + \pi_{c_{\varphi}} c_{\varphi} - c_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger}) - i g B_0 (\pi \varphi - \varphi^{\dagger} \pi^{\dagger} - \pi_{c_{\varphi}} c_{\varphi} + c_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger}) \nonumber \\ &~& ~~~~~~~~~~~~ - g C_0^{+} (\pi c_{\varphi} - \varphi^{\dagger} \pi_{c_{\varphi}}^{\dagger}) - g (c_{\varphi}^{\dagger} \pi^{\dagger} - \pi_{c_{\varphi}} \varphi) C_0^{-} \nonumber \\ &~& ~~~~~~~~~~~~ + \lambda_{A\mu} \Pi_{A}^{\mu} + \Pi_{B}^{\mu} \lambda_{B\mu} + \lambda_{C\mu}^{+} \Pi_{C}^{+\mu} + \Pi_{C}^{-\mu} \lambda_{C\mu}^{-}, \label{H-non}\end{aligned}$$ where $\lambda_{A\mu}$, $\lambda_{B\mu}$, $\lambda_{C\mu}^{+}$ and $\lambda_{C\mu}^{-}$ are Lagrange multipliers.
Secondary constraints are obtained as $$\begin{aligned} &~& \frac{d\Pi_{A}^{\mu}}{dt} = \left\{\Pi_{A}^{\mu}, H_{\rm M}\right\}_{\rm PB} = g j_A^{\mu} = 0,~~ \label{secondaryA-non}\\ &~& \frac{d \Pi_{B}^{\mu}}{dt} = \left\{\Pi_{B}^{\mu}, H_{\rm M}\right\}_{\rm PB} = g j_B^{\mu} = 0,~~ \label{secondaryB-non}\\ &~& \frac{d \Pi_{C}^{+\mu}}{dt} = \left\{\Pi_{C}^{+\mu}, H_{\rm M}\right\}_{\rm PB} = g j_{C}^{+\mu} = 0,~~ \label{secondaryC+-non}\\ &~& \frac{d \Pi_{C}^{-\mu}}{dt} = \left\{\Pi_{C}^{-\mu}, H_{\rm M}\right\}_{\rm PB} = g j_{C}^{-\mu} = 0, \label{secondaryC--non}\end{aligned}$$ where $H_{\rm M}$ is the Hamiltonian $H_{\rm M} = \int \mathcal{H}_{\rm M} d^3x$, and $j_A^{\mu}$, $j_{B}^{\mu}$, $j_{C}^{+\mu}$ and $j_{C}^{-\mu}$ are the currents of $U(1)$ and fermionic symmetries given by $$\begin{aligned} \hspace{-1cm}&~& j_{A}^{0} = i \left(\pi \varphi - \varphi^{\dagger} \pi^{\dagger} + \pi_{c_\varphi} c_{\varphi} - c_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger}\right),~~ \nonumber \\ \hspace{-1cm}&~& j_{A}^{i} = i \bigl[ \bigl\{(\partial^{i} - i g A^{i} - i g B^{i}) \varphi^{\dagger} - g C^{-i} c_{\varphi}^{\dagger}\bigr\} \varphi - \varphi^{\dagger} \bigl\{(\partial^{i} + i g A^{i} + i g B^{i}) \varphi + g C^{+i} c_{\varphi}\bigr\} \nonumber \\ \hspace{-1cm}&~& ~~~~~~~~ + \bigl\{(\partial^{i} - i g A^{i} + i g B^{i}) c_{\varphi}^{\dagger} - g C^{+i} \varphi^{\dagger}\bigr\} c_{\varphi} - c_{\varphi}^{\dagger} \bigl\{(\partial^{i} + i g A^{i} - i g B^{i}) c_{\varphi} - g C^{-i} \varphi\bigr\}\bigr], \label{jAmu}\\ \hspace{-1cm}&~& j_{B}^{0} = i \left(\pi \varphi - \varphi^{\dagger} \pi^{\dagger} - \pi_{c_\varphi} c_{\varphi} + c_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger}\right),~~ \nonumber \\ \hspace{-1cm}&~& j_{B}^{i} = i \bigl[ \bigl\{(\partial^{i} - i g A^{i} - i g B^{i}) \varphi^{\dagger} - g C^{-i} c_{\varphi}^{\dagger}\bigr\} \varphi - \varphi^{\dagger} \bigl\{(\partial^{i} + i g A^{i} + i g B^{i}) \varphi + g C^{+i} c_{\varphi}\bigr\} \nonumber \\ \hspace{-1cm}&~&
~~~~~~~~ - \bigl\{(\partial^{i} - i g A^{i} + i g B^{i}) c_{\varphi}^{\dagger} - g C^{+i} \varphi^{\dagger}\bigr\} c_{\varphi} + c_{\varphi}^{\dagger} \bigl\{(\partial^{i} + i g A^{i} - i g B^{i}) c_{\varphi} - g C^{-i} \varphi\bigr\}\bigr], \label{jBmu}\\ \hspace{-1cm}&~& j_{C}^{+0} = \pi c_{\varphi} - \varphi^{\dagger} \pi_{c_{\varphi}}^{\dagger},~~ \nonumber \\ \hspace{-1cm}&~& j_{C}^{+i} = \bigl\{(\partial^{i} - i g A^{i} - i g B^{i}) \varphi^{\dagger} - g C^{-i} c_{\varphi}^{\dagger}\bigr\} c_{\varphi} - \varphi^{\dagger} \bigl\{(\partial^{i} + i g A^{i} - i g B^{i}) c_{\varphi} - g C^{-i} \varphi\bigr\}, \label{jC+mu}\\ \hspace{-1cm}&~& j_{C}^{-0} = c_{\varphi}^{\dagger} \pi^{\dagger} - \pi_{c_{\varphi}} \varphi,~~ \nonumber \\ \hspace{-1cm}&~& j_{C}^{-i} = c_{\varphi}^{\dagger} \bigl\{(\partial^{i} + i g A^{i} + i g B^{i}) \varphi + g C^{+i} c_{\varphi}\bigr\} - \bigl\{(\partial^{i} - i g A^{i} + i g B^{i}) c_{\varphi}^{\dagger} - g C^{+i} \varphi^{\dagger}\bigr\} \varphi. \label{jC-mu}\end{aligned}$$ In the same way, tertiary constraints are obtained as $$\begin{aligned} &~& \frac{d j_{A}^{0}}{dt} = \left\{j_{A}^{0}, H_{\rm M}\right\}_{\rm PB} = - \partial_i j_{A}^{i} = 0, \label{tertiaryA}\\ &~& \frac{d j_{B}^{0}}{dt} = \left\{j_{B}^{0}, H_{\rm M}\right\}_{\rm PB} = - \partial_i j_{B}^{i} = 0, \label{tertiaryB}\\ &~& \frac{d j_{C}^{+0}}{dt} = \left\{j_{C}^{+0}, H_{\rm M}\right\}_{\rm PB} = - \partial_i j_{C}^{+i} = 0, \label{tertiaryC+}\\ &~& \frac{d j_{C}^{-0}}{dt} = \left\{j_{C}^{-0}, H_{\rm M}\right\}_{\rm PB} = - \partial_i j_{C}^{-i} = 0, \label{tertiaryC-}\end{aligned}$$ by requiring that $j_{A}^{0} = 0$, $j_{B}^{0} = 0$, $j_{C}^{+0} = 0$ and $j_{C}^{-0} = 0$ be preserved under time evolution.
On the other hand, the conditions $d j_{A}^{i}/dt = \left\{j_{A}^{i}, H_{\rm M}\right\}_{\rm PB} = 0$, $d j_{B}^{i}/dt = \left\{j_{B}^{i}, H_{\rm M}\right\}_{\rm PB} = 0$, $d j_{C}^{+i}/dt = \left\{j_{C}^{+i}, H_{\rm M}\right\}_{\rm PB} = 0$ and $d j_{C}^{-i}/dt = \left\{j_{C}^{-i}, H_{\rm M}\right\}_{\rm PB} = 0$ are not new constraints but rather relations that determine $\lambda_{Ai}$, $\lambda_{Bi}$, $\lambda_{Ci}^{+}$ and $\lambda_{Ci}^{-}$. Furthermore, no new constraints appear from the conditions $d (\partial_i j_{A}^{i})/dt = 0$, $d (\partial_i j_{B}^{i})/dt = 0$, $d (\partial_i j_{C}^{+i})/dt = 0$ and $d (\partial_i j_{C}^{-i})/dt = 0$. The constraints are classified into the first class ones $$\begin{aligned} \Pi_{A}^{0} = 0,~~ \Pi_{B}^{0} = 0,~~ \Pi_{C}^{+0} = 0,~~ \Pi_{C}^{-0} = 0 \label{first-non}\end{aligned}$$ and the second class ones $$\begin{aligned} &~&\Pi_{A}^{i} = 0,~~ \Pi_{B}^{i} = 0,~~ \Pi_{C}^{+i} = 0,~~ \Pi_{C}^{-i} = 0,~~ \nonumber\\ &~& j_{A}^{i} = 0,~~ j_{B}^{i} = 0,~~ j_{C}^{+i} = 0,~~ j_{C}^{-i} = 0,~~ \nonumber\\ &~& j_{A}^0 = 0,~~ j_{B}^{0} = 0,~~ j_{C}^{+0} = 0,~~ j_{C}^{-0} = 0,~~ \nonumber\\ &~& \partial_i j_{A}^{i} = 0,~~ \partial_i j_{B}^{i} = 0,~~ \partial_i j_{C}^{+i} = 0,~~ \partial_i j_{C}^{-i} = 0. \label{second-non}\end{aligned}$$ The determinant of the matrix of Poisson brackets among the second class constraints $\{\phi_{\rm 2nd}^{a}\}$ does not vanish on the constraint surface.
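The classification criterion can be illustrated on a textbook example rather than the full set (\[second-non\]): for a pair of constraints whose Poisson-bracket matrix has nonvanishing determinant on the constraint surface, the pair is second class. A short symbolic sketch for a single canonical pair:

```python
import sympy as sp

# Textbook illustration of the criterion (one canonical pair, not the full
# constraint set): phi1 = q = 0 and phi2 = p = 0 are second class because the
# matrix of their Poisson brackets has nonvanishing determinant.
q, p = sp.symbols('q p')

def poisson_bracket(f, g):
    """Poisson bracket {f, g} for a single canonical pair (q, p)."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

constraints = [q, p]
C = sp.Matrix(2, 2, lambda a, b: poisson_bracket(constraints[a], constraints[b]))

print(C)           # Matrix([[0, 1], [-1, 0]])
print(sp.det(C))   # 1, nonzero even on the constraint surface
```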
Using $j_{A}^0$, $j_{B}^{0}$, $j_{C}^{+0}$ and $j_{C}^{-0}$, the conserved $U(1)$ and fermionic charges are constructed as $$\begin{aligned} &~& N_{A} \equiv - i \int d^3x~j_{A}^0 = - i \int d^3x~\left(\pi \varphi - \varphi^{\dagger} \pi^{\dagger} + \pi_{c_\varphi} c_{\varphi} - c_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger}\right), \label{NA-non}\\ &~& N_{B} \equiv - i \int d^3x~j_{B}^0 = - i \int d^3x~\left(\pi \varphi - \varphi^{\dagger} \pi^{\dagger} - \pi_{c_\varphi} c_{\varphi} + c_{\varphi}^{\dagger} \pi_{c_{\varphi}}^{\dagger}\right), \label{NB-non}\\ &~& Q_{\rm F} \equiv - \int d^3x~j_{C}^{+0} = - \int d^3x~\left(\pi c_{\varphi} - \varphi^{\dagger} \pi_{c_{\varphi}}^{\dagger}\right),~~ \label{QF-non}\\ &~& Q_{\rm F}^{\dagger} \equiv - \int d^3x~j_{C}^{-0} = - \int d^3x~\left(c_{\varphi}^{\dagger} \pi^{\dagger} - \pi_{c_{\varphi}} \varphi\right). \label{QF-dagger-non}\end{aligned}$$ The same algebraic relations as those in (\[QQdagger-varphi\]) hold. The above charges are conserved and generate the global $U(1)$ and fermionic transformations of the scalar fields. They satisfy the relations, $$\begin{aligned} \left\{N_{A}, \phi^{\hat{a}}\right\}_{\rm PB} = 0,~~ \left\{N_{B}, \phi^{\hat{a}}\right\}_{\rm PB} = 0,~~ \left\{Q_{\rm F}, \phi^{\hat{a}}\right\}_{\rm PB} = 0,~~ \left\{Q_{\rm F}^{\dagger}, \phi^{\hat{a}}\right\}_{\rm PB} = 0, \label{phi-PB}\end{aligned}$$ where $\phi^{\hat{a}}$ denote the first class constraints (\[first-non\]) and the Hamiltonian $H_{\rm M}$. From (\[secondaryA-non\]) – (\[secondaryC--non\]) and (\[phi-PB\]), the following relations can be regarded as first class constraints, $$\begin{aligned} N_{A} = 0,~~ N_{B} = 0,~~ Q_{\rm F} = 0,~~ Q_{\rm F}^{\dagger} = 0.
\label{NA=0}\end{aligned}$$ After taking the following gauge fixing conditions for the first class ones (\[first-non\]), $$\begin{aligned} A^{0} = 0,~~ B^{0} = 0,~~ {C}^{+0} = 0,~~ {C}^{-0} = 0, \label{gf-non}\end{aligned}$$ the system is quantized by regarding variables as operators and imposing the same type of relations (\[CCR-varphi\]) and (\[CCR-c\]) on the canonical pairs. From (\[NA=0\]), it is reasonable to impose the following subsidiary conditions on states, $$\begin{aligned} N_{A} |{\rm phys}\rangle = 0,~~ N_{B} |{\rm phys}\rangle = 0,~~ Q_{\rm F} |{\rm phys}\rangle = 0,~~ Q_{\rm F}^{\dagger} |{\rm phys}\rangle = 0. \label{Phys-non}\end{aligned}$$ They then guarantee the unitarity of our system, though it contains negative norm states originating from $c_{\varphi}$ and $c_{\varphi}^{\dagger}$. [99]{} A. Neveu and J. H. Schwarz, Nucl. Phys. B**31**, 86 (1971). P. Ramond, Phys. Rev. D**3**, 2415 (1971). Y. Aharonov, A. Casher and L. Susskind, Phys. Lett. B**35**, 512 (1971). J. L. Gervais and B. Sakita, Nucl. Phys. B**34**, 632 (1971). C. Becchi, A. Rouet and R. Stora, Comm. Math. Phys. **42**, 127 (1975). C. Becchi, A. Rouet and R. Stora, Ann. Phys. **98**, 287 (1976). I. V. Tyutin, Lebedev Physical Institute Report No. 39 (1975), arXiv:0812.0580 \[hep-th\]. J. Wess and B. Zumino, Nucl. Phys. B**70**, 39 (1974). J. Wess and J. Bagger, [*Supersymmetry and Supergravity*]{}, second edition (Princeton, 1991). A. Salam and J. Strathdee, Phys. Rev. D**11**, 1521 (1975). R. Haag, J. T. Lopuszański and M. Sohnius, Nucl. Phys. B**88**, 257 (1975). L. D. Faddeev and V. N. Popov, Phys. Lett. B**25**, 29 (1967). T. Kugo and I. Ojima, Phys. Lett. B**73**, 459 (1978). T. Kugo and I. Ojima, Prog. Theor. Phys. Supplement **66**, 1 (1979). Y. Kawamura, arXiv:1311.2365 \[hep-ph\]. Y. Kawamura, arXiv:1406.6155 \[hep-th\]. Y. Kawamura, Int. J. Mod. Phys. A**30**, 1550056 (2015), arXiv:1409.0276 \[hep-th\]. Y. Kawamura, arXiv:1502.00751 \[hep-th\]. S. Giddings and A.
Strominger, Nucl. Phys. B**306**, 890 (1988). G. Parisi and N. Sourlas, Phys. Rev. Lett. **43**, 744 (1979). E. Witten, J. Diff. Geom. [**17**]{}, 661 (1982). Y. Kawamura, to appear in Int. J. Mod. Phys. A, arXiv:1503.03960 \[hep-ph\]. T. Okuda and T. Takayanagi, JHEP **03**, 062 (2006). S. Terashima, JHEP **05**, 067 (2006). [^1]: E-mail: haru@azusa.shinshu-u.ac.jp [^2]: Objects called ghost D-branes have been introduced as an extension of D-branes, and their properties have been studied [@OT; @Tera].
--- author: - 'Xumeng Wang[^1]\' - 'Wei Chen[^2]\' - 'Jiazhi Xia[^3]\' - '[Zexian Chen]{}[^4]\' - '[Dongshi Xu]{}[^5]\' - '[Xiangyang Wu]{}[^6]\' - | [Mingliang Xu]{}[^7]\ [Zhengzhou University]{} - | [Tobias Schreck]{}[^8]\ [Graz University of Technology]{} bibliography: - 'template.bib' title: | ConceptExplorer: Visual Analysis of Concept Drifts\ in Multi-source Time-series Data --- [^1]: e-mail: wangxumeng@zju.edu.cn [^2]: e-mail: chenwei@cad.zju.edu.cn [^3]: e-mail: xiajiazhi@csu.edu.cn [^4]: e-mail: zexianchen@zju.edu.cn [^5]: e-mail: p1703085223@stu.cjlu.edu.cn [^6]: e-mail: wuxy@hdu.edu.cn [^7]: e-mail: iexumingliang@zzu.edu.cn [^8]: e-mail: tobias.schreck@cgv.tugraz.at Wei Chen and Jiazhi Xia are corresponding authors.
--- author: - | Matt Visser[^1]\ School of Mathematical and Computing Sciences, Victoria University of Wellington, New Zealand.\ E-mail: title: 'Physical wavelets: Lorentz covariant, singularity-free, finite energy, zero action, localized solutions to the wave equation' --- Motivation ========== While the particle physics community has for some time made extensive use of extended field configurations such as solitons, instantons, and sphalerons, no direct use has yet been made of the quite extensive literature on “localized wave” configurations developed by the engineering, optics, and mathematics communities. (For selected references see [@zed; @tippet; @LG; @LZG; @Kaiser; @Lekner].) These localized waves are classical solutions of the wave equation that are partially localized in space or time, this localization generally coming at a cost such as infinite total energy and/or instability (leading to dispersion or diffraction). The catalogue of known localized waves is large and growing [@key], but most of the known examples are not in a form that would be easy to apply to particle physics problems. In this article I will exhibit a particularly simple “physical wavelet” that is more promising from a particle physics standpoint. It satisfies the following properties: - It is a localized wave that solves the wave equation. - It is a Lorentz covariant classical field configuration that lives in physical Minkowski space. - The field is everywhere finite and nonsingular, and has quadratic falloff in both space and time. - The total energy is finite, depending on the peak field and the width of the pulse. - The total action is zero. These physical wavelets can be constructed for both complex and real scalar fields. Extending these ideas to the Maxwell and Yang-Mills fields is straightforward. The simplest case is that of the complex scalar field, and it is to that case that I first turn.
Complex scalar field ==================== Let $\eta_{ab} = \hbox{diag}[+1,-1,-1,-1]$ be the Minkowski metric \[particle physics signature\], let $x_0$ be an arbitrary 4-vector, and let $\zeta^a$ be an arbitrary timelike 4-vector; then $$\phi(x) = - {\phi_0 \; (\eta_{ab} \; \zeta^a \; \zeta^b) \over \eta_{ab} \;[x^a-x_0^a-i\zeta^a] \; [x^b-x_0^b-i \zeta^b] } \label{E:mother}$$ is a Lorentz covariant, finite energy, zero-action solution of the d’Alembertian wave equation $\Delta \phi=0$. The “center” of the pulse is at $x_0$ and its “width” is $a = \sqrt{\eta_{ab} \; \zeta^a \; \zeta^b}$. The field is everywhere finite and in fact $$|\phi(x)| \leq |\phi_0|.$$ To see this, use the fact that $\zeta$ is timelike. Then, using the manifest Lorentz covariance of the field configuration, we can without loss of generality first translate $x_0\to 0$, and then go into the zero-momentum frame where $$\zeta^a = (a,0,0,0).$$ Then the field configuration is $$\phi(x) = - {\phi_0 \; a^2 \over [t-ia]^2 - x^2-y^2-z^2 }.$$ That is $$\phi(x) = {\phi_0 \; a^2 \over r^2-t^2+a^2+2iat }.$$ Once written in this form it is a straightforward exercise to verify that the wave equation is satisfied. To see that the field is everywhere bounded note $$\begin{aligned} |\phi|^2 &=& {|\phi_0|^2 \; a^4 \over (r^2-t^2+a^2)^2 + 4 a^2 t^2} = {|\phi_0|^2 \; a^4 \over (r^2+t^2+a^2)^2 - 4 r^2 t^2} \nonumber\\ & \leq & {|\phi_0|^2 \; a^4 \over (r^2+t^2+a^2)^2 - (r^2 +t^2)^2} = {|\phi_0|^2 \; a^4 \over a^4 + 2a^2(r^2+t^2)} \leq |\phi_0|^2.\end{aligned}$$ From the penultimate inequality we also derive $$|\phi|^2 \leq {1\over2} \; |\phi_0|^2 \; { a^2 \over r^2+t^2},$$ demonstrating the promised quadratic falloff in both space and time. Indeed for fixed $t$ the magnitude of the field is maximized when $$r^2 = \max \{ t^2-a^2, 0 \},$$ showing that the configuration disperses to spatial infinity at both $t\to\pm\infty$.
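Both claims, the wave equation and the bound $|\phi| \leq |\phi_0|$, can be checked symbolically in the zero-momentum frame (setting $\phi_0 = 1$; the sample point below is an arbitrary choice of ours):

```python
import sympy as sp

# Symbolic check, in the zero-momentum frame, that the configuration solves
# the wave equation and obeys |phi| <= |phi0| (with phi0 set to 1).
t, x, y, z = sp.symbols('t x y z', real=True)
a = sp.symbols('a', positive=True)

phi = -a**2 / ((t - sp.I * a)**2 - x**2 - y**2 - z**2)

box_phi = (sp.diff(phi, t, 2) - sp.diff(phi, x, 2)
           - sp.diff(phi, y, 2) - sp.diff(phi, z, 2))
assert sp.simplify(box_phi) == 0           # d'Alembertian wave equation holds

# Bound |phi| <= 1, checked at an arbitrary sample point
sample = {t: sp.Rational(3, 2), x: 1, y: sp.Rational(1, 2), z: 2, a: 1}
assert sp.Abs(phi.subs(sample)) <= 1
print("wave equation and bound verified")
```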
To calculate the 4-momentum, we remind the reader that the stress-energy tensor for a massless complex scalar is $$T_{ab} = {1\over2}\left[\nabla_a \phi^* \; \nabla_b \phi+ \nabla_a \phi \; \nabla_b \phi^*\right] - {1\over2} \eta_{ab} |\nabla \phi|^2.$$ Then $$\begin{aligned} \nabla_a T^{ab} &\equiv& {1\over2} \Delta \phi^* \; \nabla^b \phi + {1\over2} \nabla_a \phi^* \;\nabla^a \nabla^b \phi + {1\over2} \Delta \phi \; \nabla^b \phi^* + {1\over2} \nabla_a \phi \;\nabla^a \nabla^b \phi^* \nonumber \\ &&\qquad\qquad -{1\over2} \nabla_a \phi^* \;\nabla^b \nabla^a \phi -{1\over2} \nabla_a \phi \;\nabla^b \nabla^a \phi^*, \\ &\equiv & {1\over2} \left[ \Delta \phi^* \; \nabla^b \phi + \Delta \phi \; \nabla^b \phi^* \right],\end{aligned}$$ which vanishes by the equations of motion. But this means that $$P^a = \oint T^{ab} \; \d \Sigma_b$$ is a conserved quantity, the 4-momentum of the configuration, which is independent of the particular spacelike hypersurface $\Sigma$ chosen to do the integration. By simple dimensional analysis $$P^a = C \; |\phi_0|^2 \zeta^a,$$ where $C$ is a dimensionless number to be calculated. \[Note that $\zeta^a$ has the dimensions of a position vector — a distance.\] The energy density is $$\rho = {1\over2} \left[ |\partial_t \phi|^2 + |\partial_r \phi|^2 \right],$$ and in the zero-momentum frame is easily calculated to be $$\rho = { 2 a^4 |\phi_0|^2 (r^2+t^2+a^2) \over (r^2-t^2+a^2+2iat)^2(r^2-t^2+a^2-2iat)^2 }.$$ For arbitrary $t$ this integrates to $${\mathcal E} = \oint \d^3 r \; \rho = \int_0^\infty 4\pi\; r^2 \; \rho \; \d r = {1\over2} \pi^2 |\phi_0|^2 a.$$ (This is independent of $t$ as it should be.) This is the invariant mass of the field configuration. By spherical symmetry, the total momentum is zero.
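The value ${\mathcal E} = {1\over2}\pi^2|\phi_0|^2 a$ and its time-independence can be spot-checked by direct radial quadrature of the energy density just given (a sketch; the cutoff $R$, step count, and tolerances are ad hoc choices of mine):

```python
import math

def energy(t, a=1.0, phi0=1.0, R=200.0, n=200000):
    """Simpson quadrature of E(t) = int_0^R 4 pi r^2 rho dr, with
    rho = 2 a^4 |phi0|^2 (r^2 + t^2 + a^2) / |D|^4 and D = r^2 - t^2 + a^2 + 2iat."""
    def integrand(r):
        D2 = (r*r - t*t + a*a)**2 + 4.0*a*a*t*t      # |D|^2
        rho = 2.0 * a**4 * phi0**2 * (r*r + t*t + a*a) / (D2*D2)
        return 4.0 * math.pi * r*r * rho
    h = R / n
    s = integrand(0.0) + integrand(R)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * integrand(i*h)
    return s * h / 3.0

exact = 0.5 * math.pi**2   # (1/2) pi^2 |phi0|^2 a, here with phi0 = a = 1
```

Since the integrand falls off like $1/r^4$, the tail beyond $R = 200\,a$ is far below the tolerance used here.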
Thus, for any timelike $\zeta^a$ $$P^a = {1\over2} \pi^2 |\phi_0|^2 \; \zeta^a.$$ Furthermore the Lagrangian is $$\L = {1\over2} \left[ |\partial_t \phi|^2 - |\partial_r \phi|^2 \right],$$ which evaluates (in the zero-momentum frame) to $$\L = { 2 a^4 |\phi_0|^2 (t^2+a^2-r^2) \over (r^2-t^2+a^2+2iat)^2(r^2-t^2+a^2-2iat)^2 }.$$ It is easy to check that $$\oint \d^4 x \; \L = 0,$$ so that the configuration is zero action. In summary, what we have is a Lorentz covariant, singularity-free, finite energy, zero action, exact localized solution to the d’Alembertian equation. In many ways this configuration has more right to be called an “instanton” than do the instantons of QFT; those instantons live in Euclidean signature. This field configuration lives in real physical time. Now the fact that there are finite energy solutions to the wave equation is not a surprise; that these finite energy solutions can coalesce, bounce, and disperse without producing field singularities is more interesting. One way of guessing that the field configuration in equation (\[E:mother\]) is worth investigating is the following: It is easy to convince oneself that in 4 Euclidean dimensions the solution to Laplace’s equation with a delta function source at the origin is $$\phi(x) \propto {1\over x^2+y^2+z^2+t^2}.$$ Thus in (3+1) Lorentzian dimensions the \[singular\] solution to the wave equation with a delta function source at the origin is $$\phi(x) \propto {1\over x^2+y^2+z^2-t^2}.$$ If the source is now moved to a real position $x_0^a$ we have $$\phi(x) \propto {1\over (x-x_0)^2+(y-y_0)^2+(z-z_0)^2-(t-t_0)^2},$$ which is still a singular field configuration. Finally, move the source away from physical Minkowski space to the complex position $x_0^a - i\zeta^a$, then $$\phi(x) \propto {1\over (x-x_0+i\zeta^1)^2+(y-y_0+i\zeta^2)^2+(z-z_0+i\zeta^3)^2-(t-t_0+i\zeta^0)^2},$$ which is essentially equation (\[E:mother\]) above.
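Since the denominator of equation (\[E:mother\]) is built purely from Lorentz invariants, evaluating the covariant formula with a boosted $\zeta$ at boosted coordinates must reproduce the rest-frame values exactly. A small numerical sketch (the boost velocity and sample point are arbitrary choices of mine):

```python
import math

def eta(u, v):
    # Minkowski inner product, signature (+,-,-,-); works for real or complex entries
    return u[0]*v[0] - u[1]*v[1] - u[2]*v[2] - u[3]*v[3]

def phi_cov(x, zeta, phi0=1.0):
    # equation (E:mother) with x0 = 0: phi = -phi0 (zeta.zeta) / ((x - i zeta).(x - i zeta))
    w = [complex(x[i], -zeta[i]) for i in range(4)]
    return -phi0 * eta(zeta, zeta) / eta(w, w)

def boost_z(u, v):
    # Lorentz boost along z with velocity v
    g = 1.0 / math.sqrt(1.0 - v*v)
    return [g*(u[0] + v*u[3]), u[1], u[2], g*(u[3] + v*u[0])]

a, v = 1.0, 0.6
zeta_rest = [a, 0.0, 0.0, 0.0]
zeta_boost = boost_z(zeta_rest, v)
x_rest = [0.7, 0.3, -0.2, 0.5]
x_boost = boost_z(x_rest, v)

# covariance: the boosted configuration at the boosted point equals
# the rest-frame configuration at the original point
delta = abs(phi_cov(x_boost, zeta_boost) - phi_cov(x_rest, zeta_rest))
```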
This style of approach has been particularly advocated by Kaiser [@Kaiser]. As we have just seen, if $\zeta$ is timelike the resulting field configuration is singularity free. However, for null and spacelike $\zeta^a$, while the field is still a solution of the wave equation, the field is not bounded. Because of the singularities the energy and action integrals then diverge. Details are deferred for now and will be presented in sections \[S:null\] and \[S:spacelike\] below. One should also note that in the optics and engineering literature the most commonly used notations are not manifestly Lorentz covariant. Thus it is common to see expressions such as $$\phi \propto {1\over x^2+y^2+[b_1-i(z+t)]\;[b_2+i(z-t)]} \label{E:noncovariant}$$ (see, for instance, [@Lekner]) whose Lorentz transformation properties are less than obvious — in fact this field configuration is equivalent to equation (\[E:mother\]) with the identification $$\zeta^a = -\left({b_1+b_2\over2},0,0, {b_1-b_2\over2}\right); \qquad ||\zeta||^2 = b_1 b_2.$$ Real scalar field ================= By taking real and imaginary parts of the complex solution above we can write down two solutions for the real scalar field.
Namely $$\phi_1 = {\phi_0 \; (\eta_{ab} \; \zeta^a \; \zeta^b) \; \left\{\eta_{ab} \;[x^a-x_0^a] \; [x^b-x_0^b] - \eta_{ab} \; \zeta^a \; \zeta^b \right\} \over (\eta_{ab} \;[x^a-x_0^a] \; [x^b-x_0^b] - \eta_{ab} \; \zeta^a \; \zeta^b )^2 + 4 (\eta_{ab} \;[x^a-x_0^a] \; \zeta^b)^2};$$ $$\phi_2 = {2 \phi_0 \; (\eta_{ab} \; \zeta^a \; \zeta^b) \; \left\{\eta_{ab} \;[x^a-x_0^a] \; \zeta^b \right\} \over (\eta_{ab} \;[x^a-x_0^a] \; [x^b-x_0^b] - \eta_{ab} \; \zeta^a \; \zeta^b )^2 + 4 (\eta_{ab} \;[x^a-x_0^a] \; \zeta^b)^2}.$$ As previously, we can without loss of generality translate $x_0\to0$ and go to the zero-momentum frame $\zeta^a=(a,0,0,0)$, then $$\phi_1 = {\phi_0 \; a^2 \; \left\{ t^2-r^2-a^2 \right\} \over (t^2-r^2-a^2)^2 + 4 a^2 t^2};$$ $$\phi_2 = {\phi_0 \; a^2 \; 2 a t \over (t^2-r^2-a^2)^2 + 4 a^2 t^2}.$$ The stress-energy tensor for a real scalar field simplifies to $$T_{ab} = \nabla_a \phi \; \nabla_b \phi - {1\over2} \eta_{ab} |\nabla \phi|^2.$$ Then $$\begin{aligned} \nabla_a T^{ab} &\equiv& \Delta \phi \; \nabla^b \phi + \nabla_a \phi \;\nabla^a \nabla^b \phi - \nabla_a \phi \;\nabla^b \nabla^a \phi \\ &\equiv & \Delta \phi \; \nabla^b \phi,\end{aligned}$$ which vanishes by the equations of motion. The calculation for the energy-momentum 4-vector now yields: $$P^a_1 = {1\over4} \pi^2 |\phi_0|^2 \; \zeta^a = P^a_2.$$ The action integral for both of these field configurations is still zero. \[$\oint \,\d^3 x \,\d t \; {\cal L}(\phi_{1,2}) = 0$.\] Maxwell field ============= A similar construction can be performed for the Maxwell field. There are a number of choices one could make, and I will simply pick one that leads to a relatively simple field configuration. Start by picking a timelike 4-velocity $V^a$ and a spacelike unit vector $m^a$ orthogonal to it.
Now adopt the ansatz $$A^a = \left\{ V^a \; m^b - m^a \; V^b \right\} \nabla_b \psi.$$ Here one is automatically in Lorenz gauge [@Lorenz], $\nabla_a A^a = 0$, and the Maxwell equations reduce to the wave equation $\Delta\psi=0$ for the scalar potential $\psi$. The field configuration is manifestly Lorentz covariant and we can without loss of generality go to the inertial frame where $$V^a=(1,0,0,0); \qquad m^a= (0; \vec m).$$ In this inertial frame $$A_a = (\varphi,\vec A) = \left( [\vec m\cdot\vec\nabla] \psi, \vec m\; \dot\psi\right),$$ where $\vec m$ is a constant unit vector. We shall soon see that this is tantamount to working in the zero-momentum frame of the field configuration. The electric field is $$\vec E = - \vec\nabla\left([\vec m\cdot\vec\nabla] \psi\right) + \vec m \; \ddot \psi = - \vec\nabla\left([\vec m\cdot\vec\nabla] \psi\right) + \vec m \; \nabla^2 \psi = \vec\nabla\times(\vec\nabla\times[\vec m\; \psi]),$$ where we have used the wave equation for $\psi$. The magnetic field is $$\vec B = \vec\nabla\times(\vec m\; \dot\psi) = - \vec m \times \vec \nabla \dot\psi.$$ The energy density and momentum flux (Poynting vector) are $$\rho = {1\over2} [\vec E^2+\vec B^2]; \qquad \vec S = \vec E \times \vec B.$$ To evaluate total energy and momentum a useful integration by parts (subject to suitable falloff at spatial infinity) is $$\begin{aligned} \oint \d^3 x \; (\vec\nabla \times \vec X_1)\cdot(\vec\nabla \times \vec X_2) &=& \oint \d^3 x \; (\vec X_1 \times \vec\nabla )\cdot(\vec\nabla \times \vec X_2) \\ &=& \oint \d^3 x \; \vec X_1 \cdot \left[ \vec\nabla \times (\vec\nabla \times \vec X_2) \right] \\ &=& \oint \d^3 x \; \vec X_1 \cdot \left[ -\vec\nabla^2 \vec X_2 + \vec\nabla ( \vec\nabla\cdot X_2) \right] \\ &=& - \oint \d^3 x \; \left\{ \vec X_1 \cdot \vec\nabla^2 \vec X_2 + (\vec\nabla\cdot \vec X_1) ( \vec \nabla \cdot\vec X_2) \right\}.\end{aligned}$$ Consequently $$\begin{aligned} \oint \d^3 x \; \vec E^2 &=& - \oint \d^3 x \left\{ 
[\vec\nabla\times(\vec m\psi)] \cdot \nabla^2 [\vec\nabla\times(\vec m\psi)] \right\} \\ &=& + \oint \d^3 x \left\{ (\vec m\psi) \cdot \nabla^2 \nabla^2 (\vec m\psi) + ( \vec \nabla \cdot[\vec m\psi]) \nabla^2 ( \vec \nabla \cdot[\vec m\psi]) \right\} \\ &=& + \oint \d^3 x \left\{ \psi \nabla^2 \nabla^2 \psi + ( \vec m \cdot \vec \nabla\psi) \nabla^2 ( \vec m \cdot \vec \nabla\psi) \right\}.\end{aligned}$$ Now let us assume the potential $\psi$ is spherically symmetric $\psi(r,t)$. Then averaging over angular variables is the same as averaging over orientations of the unit vector $\vec m$ and under the angular integral we can effectively replace $$m_i \; m_j \to {1\over3} \delta_{ij}.$$ Consequently $$\begin{aligned} \oint \d^3 x \; \vec E^2 &=&+ \oint \d^3 x \left\{ \nabla^2 \psi \nabla^2 \psi + {1\over3} ( \vec \nabla\psi) \cdot \nabla^2 ( \vec \nabla\psi) \right\} \\ &=& + \oint \d^3 x \left\{ (\nabla^2 \psi)^2 - {1\over3} (\nabla^2 \psi)^2 \right\} \\ &=& + {2\over3} \oint \d^3 x \left\{ (\nabla^2 \psi)^2 \right\}.\end{aligned}$$ Similarly $$\begin{aligned} \oint \d^3 x \; \vec B^2 &=&- \oint \d^3 x \left\{ (\vec m\dot\psi) \cdot \nabla^2 (\vec m\dot\psi) + ( \vec m \cdot \vec \nabla\dot\psi) \; ( \vec m \cdot \vec \nabla\dot\psi) \right\}.\end{aligned}$$ Again invoking spherical symmetry for $\psi$ we have $$\begin{aligned} \oint \d^3 x \; \vec B^2 &=&- \oint \d^3 x \left\{ \dot\psi \nabla^2 \dot\psi + {1\over3} (\vec \nabla\dot\psi)\cdot(\vec \nabla\dot\psi) \right\} \\ &=&+ {2\over3} \oint \d^3 x \left\{ (\vec \nabla\dot\psi)\cdot(\vec \nabla\dot\psi) \right\}.\end{aligned}$$ Therefore $${\mathcal E} = \oint \d^3 x \; \rho = {1\over3} \oint \d^3 x \left\{ (\nabla^2 \psi)^2 + (\vec \nabla\dot\psi)\cdot(\vec \nabla\dot\psi) \right\}, \label{E:energy1}$$ so the total energy is sensibly positive.
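The replacement $m_i\,m_j \to {1\over3}\delta_{ij}$ is just the statement that the average of $m_i m_j$ over orientations of the unit vector $\vec m$ is ${1\over3}\delta_{ij}$; a quick quadrature sketch confirming it (the grid resolution and tolerances are arbitrary choices):

```python
import math

def sphere_average_mm(i, j, n=400):
    """Average of m_i m_j over the unit sphere, by midpoint quadrature in (theta, phi)."""
    total = 0.0
    for p in range(n):
        theta = math.pi * (p + 0.5) / n
        for q in range(n):
            phi = 2.0 * math.pi * (q + 0.5) / n
            m = (math.sin(theta)*math.cos(phi),
                 math.sin(theta)*math.sin(phi),
                 math.cos(theta))
            total += m[i]*m[j] * math.sin(theta)
    # measure: sin(theta) dtheta dphi / (4 pi)
    return total * (math.pi/n) * (2.0*math.pi/n) / (4.0*math.pi)
```

Diagonal entries come out as $1/3$ and off-diagonal entries vanish, as the isotropy argument requires.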
Before making further choices regarding the potential $\psi$, consider the total momentum $$\vec \wp = \oint \d^3 x \; \vec S = \oint \d^3 x \; \left[\vec\nabla\times(\vec\nabla\times[\vec m\; \psi])\right] \times \left[ \vec\nabla\times(\vec m\; \dot\psi) \right].$$ Integration by parts implies $$\vec \wp = \oint \d^3 x \left\{ (\vec m\; \dot\psi) \cdot \nabla^2 (\vec\nabla\times[\vec m\; \psi]) \right\} = -\oint \d^3 x \left\{ \dot\psi \vec m \cdot [\vec m \times \vec\nabla (\nabla^2 \psi)] \right\} = 0,$$ so that the net momentum is zero. For the action, an integration by parts together with the equations of motion yields $$\oint \d^4 x \; {1\over2} \left[ \vec E^2 - \vec B^2 \right] = {1\over3} \oint \d^4 x \left\{ (\nabla^2 \psi)^2 - (\vec \nabla\dot\psi)\cdot(\vec \nabla\dot\psi) \right\} = 0.$$ So we still have a zero action solution. Up to now the potential $\psi$ has only needed to be spherically symmetric and to satisfy the wave equation (plus some falloff constraint at spatial infinity to allow the integration by parts).
The general solution to the wave equation in spherical symmetry is $$\psi(r,t) = {1\over r} \left[ f(r+t) - f(r-t) \right].$$ Returning to the energy ${\mathcal E}$ as given in equation (\[E:energy1\]), a further integration by parts, together with the wave equation for $\psi$ yields the computationally convenient form $${\mathcal E} = {1\over3} \oint \d^3 x \left\{ \partial_t^2 \psi \; \partial_t^2 \psi - \partial_t\psi \; \partial_t^3 \psi \right\}.$$ Whence $$\begin{aligned} {\mathcal E} &=& {4\pi\over3} \int_0^\infty \d r \Big\{ (\partial_t^2[ f(r+t) - f(r-t) ] )^2 \\ &&\qquad \qquad - (\partial_t[ f(r+t) - f(r-t) ] ) \; (\partial_t^3[ f(r+t) - f(r-t) ] ) \Big\} \nonumber \\ &=& {4\pi\over3} \int_0^\infty \d r \Big\{ (\partial_r^2[ f(r+t) - f(r-t) ] )^2 \\ &&\qquad\qquad - (\partial_r[ f(r+t) + f(r-t) ] ) \; (\partial_r^3[ f(r+t) + f(r-t) ] ) \Big\} \nonumber \\ &=& {4\pi\over3} \int_0^\infty \d r \Big\{ (\partial_r^2[ f(r+t) - f(r-t) ] )^2 \\ &&\qquad\qquad + (\partial_r^2[ f(r+t) + f(r-t) ] )^2 \Big\} \nonumber \\ &=& {8\pi\over3} \int_0^\infty \d r \left\{ [\partial_r^2 f(r+t) ]^2 + [\partial_r^2 f(r-t) ]^2 \right\} \\ &=& {8\pi\over3} \int_{-\infty}^{+\infty} \d s \; [\partial_s^2 f(s) ]^2.\end{aligned}$$ This verifies that the energy is constant in a model-independent manner. Furthermore if $f(s)$ is smooth and satisfies suitable falloff conditions at $s\to\pm\infty$ then the wavelet will be nonsingular and of finite energy. To obtain a specific example it remains only to complete the specification of the potential $\psi(r,t)$. One particularly simple choice is to take one of the real scalar wavelets of the previous section $\psi\to\phi_{1,2}$, with $\zeta^a = ||\zeta|| \; V^a = a\; V^a$. Since the electric and magnetic fields are now specified in terms of derivatives of a smooth bounded function, the electromagnetic field is similarly smooth and bounded.
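For a concrete profile one can take $f(s)=e^{-s^2}$ (even, so that $\psi$ is regular at $r=0$) and check numerically both the constancy of the energy and the closed-form constant; keeping the overall factor of ${1\over3}$ from equation (\[E:energy1\]) throughout, the model-independent value is ${8\pi\over3}\int_{-\infty}^{+\infty}[f''(s)]^2\,\d s$, which for this Gaussian equals $8\pi\sqrt{\pi/2}$. A sketch (the cutoffs and tolerances are ad hoc choices of mine):

```python
import math

f   = lambda s: math.exp(-s*s)                     # even profile => psi regular at r = 0
fp  = lambda s: -2.0*s*math.exp(-s*s)              # f'
fpp = lambda s: (4.0*s*s - 2.0)*math.exp(-s*s)     # f''

def energy(t, eps=1e-3, R=12.0, n=40000):
    """E(t) = (1/3) int 4 pi r^2 [ (lap psi)^2 + (d_r d_t psi)^2 ] dr for
    psi = [f(r+t) - f(r-t)]/r, by Simpson's rule on [eps, R]."""
    def integrand(r):
        lap  = (fpp(r+t) - fpp(r-t)) / r                              # laplacian of psi
        drdt = -(fp(r+t) + fp(r-t))/(r*r) + (fpp(r+t) + fpp(r-t))/r   # d_r (d_t psi)
        return (4.0*math.pi/3.0) * r*r * (lap*lap + drdt*drdt)
    h = (R - eps) / n
    s = integrand(eps) + integrand(R)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * integrand(eps + i*h)
    return s * h / 3.0

# model-independent constant: (8 pi/3) * int (f'')^2 ds = (8 pi/3) * 3 sqrt(pi/2)
exact = 8.0 * math.pi * math.sqrt(math.pi / 2.0)
```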
For the energy, write it in the form $${\mathcal E} = {1\over3} \oint \d^3 x \left\{ \psi \; \partial_t^4 \psi - \partial_t\psi \; \partial_t^3 \psi \right\}.$$ Whence, for either $\psi\to\phi_{1,2}$, an integration carried out at arbitrary $t$ yields the time-independent quantity $${\mathcal E}_{1,2} = {1\over2} \;{\phi_0^2\over a}.$$ The calculation has for convenience been carried out in the zero-momentum frame of the wavelet. In a general Lorentz frame we would have $$P^a_{1,2} = {1\over2}\; {\phi_0^2\over a} \;\; V^a = {1\over2}\; {\phi_0^2\over a^2} \;\; \zeta^a.$$ Thus this field configuration, as for the scalar case, is Lorentz covariant, bounded, finite energy and zero action. The vector nature of the Maxwell field has added technical complications, but there is no real change in basic principles. In closing this section, I should mention one other wavelet that is particularly attractive. If one makes use of the pseudo-differential operator $(-\nabla^2)^{-1/2}$ one could write $$\psi = (-\nabla^2)^{-1/2} \; \phi_{1,2} = {1\over\Gamma(1/2)} \int_0^\infty {\d t\over t^{1/2}} \exp[t \nabla^2] \; \phi_{1,2}.$$ With this choice of $\psi$ the energy integral simplifies to $${\mathcal E}_{1,2} = {1\over3} \oint \d^3 x \left\{ (\partial_t\phi_{1,2})^2 + (\nabla \phi_{1,2})^2 \right\} = {1\over 6} \; \pi^2 \; |\phi_0|^2 \; a.$$ The price paid for making the energy look simple is that the electric and magnetic fields are much more complicated to calculate. Because the optics and engineering literature generally does not use manifestly Lorentz covariant notation, it can be very time consuming to calculate the 4-momentum of a specific pulse. Indeed only very recently [@Lekner] has Lekner provided specific and explicit computations of both ${\mathcal E}$ and $\vec\wp$ (as well as the angular momentum) for a pulse similar to that considered above.
As expected (once one has the covariant perspective advocated in this article) $||\vec\wp|| < {\mathcal E}$, indicating the existence of a zero-momentum frame for the pulses of this type [@Lekner]. Yang-Mills field ================ Once we have the Maxwell wavelet above, a Yang–Mills wavelet is straightforward, indeed trivial. Let $\mathbf{\Lambda}$ be any constant matrix in the Lie algebra of the gauge group and set $\mathbf{A}^a = A^a \; \mathbf{\Lambda}$. (For a single fixed generator the commutator terms in the field strength vanish identically, so the Yang–Mills equations reduce to the Maxwell equations.) The construction is so simple that there is really nothing extra beyond the Maxwell wavelet considered above. Null $\zeta$ {#S:null} ============ Let us now return to the original scalar complex wavelet. Suppose the 4-vector $\zeta$ is null. Then because the numerator vanishes identically the original definition above gives $\phi\equiv 0$. We should at a minimum change our field definition to read $$\phi(x) = - {\psi_0 \over \eta_{ab} \;[x^a-x_0^a-i\zeta^a] \; [x^b-x_0^b-i \zeta^b] }.$$ Then without loss of generality we go into the frame $$\zeta^a = (a,0,0,a),$$ and then $$\phi(x) = - {\psi_0 \over [t-ia]^2 - x^2-y^2-[z-ia]^2 }.$$ That is $$\phi(x) = {\psi_0 \over r^2-t^2+2ia(t-z) }.$$ Note $$|\phi(x)| = {|\psi_0| \over \sqrt{(r^2-t^2)^2 +4 a^2 (t-z)^2 } }.$$ The denominator now vanishes when $z=t$ and $x=y=0$, so that the field is divergent on the beam axis. Suppose we write $R = \sqrt{x^2+y^2}$ then $$\phi(x) = {\psi_0 \over R^2+ z^2-t^2+2ia(t-z) }.$$ So we see that the field drops off as $1/R^2$ as we move away from the beam axis. \[And more critically, the field blows up as $1/R^2$ as we approach the beam axis.\] Attempts at calculating the energy and action now lead to divergent integrals. In other words, despite the fact that it still solves the wave equation, for null $\zeta$ this is not a particularly useful field configuration. Spacelike $\zeta$ {#S:spacelike} ================= For the complex scalar wavelet, suppose the 4-vector $\zeta$ is spacelike.
Then without loss of generality we go into the infinite velocity frame where $$\zeta^a = (0,0,0,a),$$ and then $$\phi(x) = {\phi_0 \; a^2 \over t^2 - x^2-y^2-[z-ia]^2 }.$$ That is $$\phi(x) = -{\phi_0 \; a^2 \over r^2-t^2-a^2-2iaz }.$$ Note $$|\phi(x)| = {\phi_0 \; a^2 \over \sqrt{(r^2-t^2-a^2)^2 +4 a^2 z^2 } }.$$ The denominator now vanishes when $z=0$ and $x^2+y^2=a^2+t^2$. That is, the field is divergent on a time-dependent circle orthogonal to the beam axis. There is a short distance singularity as one approaches this circle, and the energy and action integrals diverge. In other words, despite the fact that it still solves the wave equation, for spacelike $\zeta$ this is not a useful field configuration. Discussion ========== The physical wavelet discussed in this article is important because it represents a qualitatively different extended field configuration of a type not normally encountered in particle physics. The wavelet is neither a soliton, nor an instanton, nor a sphaleron though it shares properties with all three of these extended objects: - Like the soliton it lives in physical time (Minkowski space, not Euclidean space), and possesses a well-defined 4-velocity. - Like the instanton it “dies away” in the infinite past and future. - Like the instanton it possesses a continuously adjustable scale parameter. - Like the sphaleron it is unstable to dispersal. Because the wavelet fields are bounded and finite energy, wavelet configurations [*will*]{} be classically excited at any finite temperature. Because the wavelet configuration has zero action, arbitrarily complicated combinations of these physical wavelets can be added to the field configurations appearing in Feynman’s path integral without modifying the phase — quantum mechanically there is no “cost” in adding these configurations to the Lorentzian path integral and they [*will*]{} contribute. 
Other “localized waves” might be interesting in specific applications but the particular example discussed in this article is important because of its extreme simplicity and pleasant behaviour. I wish to thank John Lekner for stimulating my interest in these issues. I also wish to thank Damien Martin for bringing the whole Lorenz/Lorentz issue to my attention. [99]{} R. W. Ziolkowski, “Exact solutions of the wave equation with complex source locations”, J. Math. Phys. [**26**]{} (1985) 861–863. M. K. Tippet and R. W. Ziolkowski, “A bidirectional wave transformation of the cold plasma equations”, J. Math. Phys. [**32**]{} (1991) 488–492. J. Lu and J. F. Greenleaf, “Nondiffracting X waves — exact solutions to free-space scalar wave equation and their finite aperture realizations”, IEEE Transactions on ultrasonics, ferroelectrics, and frequency control, [**39**]{} (1992) 19–31. J. Lu, H. Zou, and J. F. Greenleaf, “A new approach to obtain limited diffraction beams”, IEEE Transactions on ultrasonics, ferroelectrics, and frequency control, [**42**]{} (1995) 850–853. G. Kaiser, “Physical wavelets and their sources: Real physics in complex spacetime,” arXiv:math-ph/0303027. J. Lekner, “Electromagnetic pulses which have a zero momentum frame”, submitted to J. Opt. A, 21 November 2002; arXiv:physics/0304022. Field configurations similar to the one considered in this article may variously be encountered under names such as: “localized waves”, “focus wave modes”, “pulses”, “X-waves”, “limited diffraction beams”, “wavelets”, and “physical wavelets”. The Lorenz gauge was apparently first used by the Danish physicist Ludwig Lorenz (1829-1891), though it is commonly misattributed to the Dutch physicist Hendrik Antoon Lorentz (1853-1928).\ L. Lorenz, “On the Identity of the Vibrations of Light with Electrical Currents”, Philos. Mag. [**34**]{} (1867) 287-301.\ J. van Bladel, “Lorenz or Lorentz?”, IEEE Antennas Prop. Mag. [**33**]{} (1991) 69. 
[^1]: Research supported by the Marsden Fund administered by the Royal Society of New Zealand
--- abstract: 'We study the topology of toric maps. We show that if $f\colon X\to Y$ is a proper toric morphism, with $X$ simplicial, then the cohomology of every fiber of $f$ is pure and of Hodge-Tate type. When the map is a fibration, we give an explicit formula for the Betti numbers of the fibers in terms of a relative version of the $f$-vector, extending the usual formula for the Betti numbers of a simplicial complete toric variety. We then describe the Decomposition Theorem for a toric fibration, giving in particular a nonnegative combinatorial invariant attached to each cone in the fan of $Y$, which is positive precisely when the corresponding closed subset of $Y$ appears as a support in the Decomposition Theorem. The description of this invariant involves the stalks of the intersection cohomology complexes on $X$ and $Y$, but in the case when both $X$ and $Y$ are simplicial, there is a simple formula in terms of the relative $f$-vector.' address: - 'Department of Mathematics, Stony Brook University, Stony Brook, NY 11794-3651, USA' - 'Dipartimento di Matematica, Universita di Bologna, Piazza di Porta S. Donato 5, 40127 Bologna, Italy' - 'Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA' author: - 'Mark Andrea A. de Cataldo' - Luca Migliorini - Mircea Mustaţă title: Combinatorics and topology of proper toric maps --- [^1] Introduction ============ A complex toric variety is a normal complex algebraic variety $X$ that carries an action of a torus $T=({\mathbf C}^*)^n$, such that $X$ has an open orbit isomorphic to $T$. Toric varieties can be described in terms of convex-geometric objects, namely fans and polytopes, and part of their appeal comes from the fact that algebro-geometric properties of the varieties translate into combinatorial properties of the fans or polytopes, see [@Ful93]. 
For example, given a complete simplicial toric variety $X$ (recall that *simplicial* translates as having quotient singularities), the information given by the Betti numbers of $X$ is equivalent to that encoded by the $f$-vector of $X$, which records the number of cones of each dimension in the fan defining $X$. This has been famously used by Stanley in [@Stanley1], together with the Poincaré duality and Hard Lefschetz theorems to prove a conjecture of McMullen concerning the $f$-vector of a simple polytope. When $X$ is not-necessarily-simplicial, it turns out that the right cohomological invariant to consider is not the cohomology of $X$ (which is *not* a combinatorial invariant), but the intersection cohomology (see [@Stanley2] and also [@DL] and [@Fieseler]). In this paper we are concerned with a relative version of this story. More precisely, we consider a proper (equivariant) morphism of toric varieties $f\colon X\to Y$ and study two related questions. We first study the cohomology of the fibers of $f$ and then apply this study to describe the Decomposition Theorem for $f.$ Our results are most precise when $f$ is a fibration, that is, when it is proper, surjective and with connected fibers. We begin with the following result about simplicial toric varieties that admit a proper toric map to an affine toric variety that has a fixed point. The case of complete simplicial toric varieties is well-known. Recall that a pure Hodge structure of weight $q$ is *of Hodge-Tate type* if all Hodge numbers $h^{i,j}$ are $0$, unless $i=j$ (in particular, the underlying vector space is $0$ if $q$ is odd). If $X$ is a simplicial toric variety that admits a proper toric map to an affine toric variety that has a fixed point, then the following hold for every $q$: 1. 
The canonical map $A_q(X)_{{{\Bbb Q}}}\to H_{2q}^{BM}(X)_{{{\Bbb Q}}}$ is an isomorphism, where $A_q(X)_{{{\Bbb Q}}}$ is the Chow group of $q$-dimensional cycle classes and $H^{BM}_{2q}(X)_{{{\Bbb Q}}}$ is the $(2q)^{\rm th}$ Borel-Moore homology group of $X$ (both with ${{\Bbb Q}}$-coefficients). 2. The mixed Hodge structures on each of $H^q_c(X,{{\Bbb Q}})$ and $H^q(X,{{\Bbb Q}})$ are pure, of weight $q$, and of Hodge-Tate type. As a consequence, we deduce the following: Let $f\colon X\to Y$ be a proper toric map between complex toric varieties, with $X$ simplicial. For every $y\in Y$ and every $q$, the mixed Hodge structure on $H^q(f^{-1}(y),{\mathbf Q})$ is pure, of weight $q$, and of Hodge-Tate type. By virtue of Proposition \[str\_fibers\] below, under the assumptions of Theorem B, every irreducible component of $f^{-1}(y)$ is a complete, simplicial toric variety, for which the properties in the statement are well-known. However, that the union still satisfies these properties is not automatic. In order to prove Theorem B, we first reduce to the case when $X$ is smooth, $f$ is a projective fibration, $Y$ is affine, and has a torus-fixed point which is equal to $y$. In this case, there is an isomorphism of mixed Hodge structures $$H^q(f^{-1}(y),{{\Bbb Q}})\simeq H^q(X,{{\Bbb Q}})$$ and therefore Theorem A implies Theorem B. In order to prove Theorem A, we prove Corollary \[cor\_filtration\], which yields a filtration of $X$ by open subsets $\emptyset=U_0\subset U_1\subset\ldots\subset U_r=X$ such that each difference $U_i\smallsetminus U_{i-1}$ is isomorphic to a quotient of an affine space by a finite group. The existence of such a filtration is well-known when $X$ is smooth and projective (see [@Ful93 Chapter 5.2]) and a similar argument gives it in the context we need. It is then easy to deduce the assertions about the cohomology of $X$ from the existence of such a filtration.
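For orientation, the absolute case of this Betti-number count works as follows: for a complete simplicial toric variety, the cell filtration gives $\dim_{\mathbf Q} H^{2m}(X,{\mathbf Q})=\sum_{\ell\geq m}(-1)^{\ell-m}\binom{\ell}{m}d_\ell$, where $d_\ell$ is the number of irreducible torus-invariant closed subsets of dimension $\ell$. A quick illustrative sketch for $X={\mathbf P}^2$, with $d_0=3$ (fixed points), $d_1=3$ (invariant lines), $d_2=1$ ($X$ itself), where the Betti numbers should be $1,1,1$ in degrees $0,2,4$:

```python
from math import comb

# f-vector data for P^2: number of invariant irreducible closed subsets by dimension
d = {0: 3, 1: 3, 2: 1}

def betti_2m(m, d):
    """dim H^{2m}(X) = sum_{ell >= m} (-1)^{ell - m} C(ell, m) d_ell."""
    return sum((-1)**(l - m) * comb(l, m) * d[l] for l in d if l >= m)

b = [betti_2m(m, d) for m in range(3)]   # Betti numbers in degrees 0, 2, 4
euler = d[0]                             # Euler characteristic = number of fixed points
```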
We remark that the purity statements in both theorems can also be deduced from the above isomorphism of mixed Hodge structures via some general properties of the weight filtration (see Remark \[weights\_trick\]). However, we have preferred to give the argument described above in order to emphasize the elementary nature of the results. For every toric fibration $f\colon X\to Y$, it is easy to compute the Hodge-Deligne polynomial of the fibers of $f$. This is done in terms of a relative version of the familiar notion of $f$-vector of a toric variety. Recall that the orbits of the torus action on a toric variety $Y$ are in bijection with the cones in the fan $\Delta_Y$ defining $Y$, with the orbit $O(\tau)$ corresponding to $\tau\in\Delta_Y$ having dimension equal to ${\rm codim}(\tau)$. Moreover, the irreducible torus-invariant closed subsets of $Y$ are precisely the orbit closures $V(\tau)=\overline{O(\tau)}$. With this notation, for every cone $\tau\in\Delta_Y$, we denote by $d_{\ell}(X/\tau)$ the number of irreducible torus-invariant closed subsets $V(\sigma)$ of $X$ such that $f(V(\sigma))=V(\tau)$ and $\dim(V(\sigma))-\dim(V(\tau))=\ell$. The following corollary (Corollary \[Betti\_fib\]) generalizes the well-known formula for the Betti numbers of a complete, simplicial toric variety. Note that Theorem B implies that in what follows, the odd cohomology groups are trivial. If $f\colon X\to Y$ is a toric fibration, with $X$ simplicial, and $y\in O(\tau)$ for some cone $\tau\in \Delta_Y$, then for every $m\in{\mathbf Z}_{\geq 0}$, we have $$\dim_{\mathbf Q}H^{2m}(f^{-1}(y),{\mathbf Q})=\sum_{\ell\geq m}(-1)^{\ell-m}{{\ell}\choose m}d_{\ell}(X/\tau).$$ Moreover, the Euler-Poincaré characteristic of $f^{-1}(y)$ is given by $\chi(f^{-1}(y))=d_0(X/\tau)$. Our initial goal was to understand the decomposition theorem of [@BBD] for proper toric morphisms. In the setting of finite fields, this is carried out in [@deC2]. In our setting, i.e.
over the complex numbers, the theorem takes the following form (for a variety $X$, we denote by $IC_X$ the intersection complex on $X;$ for nonsingular $X,$ this is ${\mathbf Q}_X [\dim{X}]$). If $X$ and $Y$ are complex toric varieties and $f\colon X\to Y$ is a toric fibration, then we have a decomposition $$\label{eq_toric_dec_thm_intro} Rf_*(IC_X)\simeq \bigoplus_{\tau\in\Delta_Y}\bigoplus_{b\in{{\Bbb Z}}}IC_{V(\tau)}^{\oplus s_{\tau,b}}[-b],$$ where the nonnegative integers $s_{\tau,b}$ satisfy $s_{\tau,b}=0$ if $b+\dim(X)-\dim(V(\tau))$ is odd. In fact, the integers $s_{\tau,b}$ in the above theorem satisfy further constraints coming from Poincaré duality and the relative Hard Lefschetz theorem; see Theorem \[toric\_dec\_thm\] below for the precise statement. By comparison with the general statement of the decomposition theorem from [@BBD], there are two special features. Firstly, the subvarieties that appear in the decomposition (\[eq\_toric\_dec\_thm\_intro\]) are torus-invariant and, secondly, the intersection complexes that appear have constant coefficients (this is due to the fact that the map is a fibration). We are interested in the *supports* of toric fibrations, that is, the subvarieties $V(\tau)$ that appear in the decomposition (\[eq\_toric\_dec\_thm\_intro\]). More precisely, in the setting of Theorem D, for every $\tau\in\Delta_Y$ we define the key invariant of this paper $\delta_{\tau}:=\sum_{b\in{{\Bbb Z}}}s_{\tau,b}$. Clearly, in view of (\[eq\_toric\_dec\_thm\_intro\]), we have that the closure of an orbit $V(\tau)$ is a support if and only if $\delta_{\tau}>0$. Theorems E and F below relate the topological invariant $\delta$ of the toric fibration $f$ to the associated combinatorial data. Let us start by discussing the simpler case of Theorem E, where both varieties are simplicial. By building on Corollary C, we obtain the following description.
If $f\colon X\to Y$ is a toric fibration, where $X$ and $Y$ are simplicial toric varieties, then for every cone $\tau\in\Delta_Y$, we have $$\label{eq_thmD} \delta_{\tau}=\sum_{\sigma\subseteq\tau}(-1)^{\dim(\tau)-\dim(\sigma)}d_0(X/\sigma).$$ In fact, in this simplicial case, we obtain an explicit formula (see Theorem \[form\_both\_simplicial\]) for each of the numbers $s_{\tau,b}$. An interesting consequence of Theorem E is that the expression on the right-hand side of (\[eq\_thmD\]) is nonnegative. It would be desirable to find a direct combinatorial argument for this fact. When $f$ is birational and $\dim(\tau)\leq 3$, we give a combinatorial description of $\delta_{\tau}$ which implies that it is nonnegative (see Remark \[rmk\_special\_case\]). However, we do not know of such a formula when $\tau$ has higher dimension. It is worth noting that the explicit formula for the invariants $s_{\tau,b}$ in the simplicial case, in combination with Poincaré duality and Hard Lefschetz, leads to interesting constraints on the combinatorics of the morphism (see Remark \[rmk\_rel\_f\_vector\] for the precise statement). These constraints extend to the relative setting the well-known conditions on the $f$-vector of a simplicial, projective toric variety. In order to treat the case of fibrations between not-necessarily-simplicial toric varieties, we need to introduce some notation. Given a toric variety $Y$ and two cones $\tau\subseteq\sigma$ in the fan $\Delta_Y$ defining $Y$, we put $r_{\tau,\sigma}:=\dim_{{{\Bbb Q}}}{\mathcal H}^*(IC_{V(\tau)})_{x_{\sigma}}$, where $x_{\sigma}$ can be taken to be any point in the orbit $O(\sigma)\subseteq V(\tau)$. It is a consequence of the results in [@Fieseler] and [@DL], independently, that $r_{\tau,\sigma}$ is a combinatorial invariant.
In turn, we have the invariant $\widetilde{r}_{\tau,\sigma}$ for cones $\tau\subseteq\sigma$ in $\Delta_Y$, uniquely determined by the property that for every $\tau\subseteq\sigma$, the sum $\sum_{\tau\subseteq\gamma\subseteq\sigma}r_{\tau,\gamma}\cdot \widetilde{r}_{\gamma,\sigma}$ is equal to $1$ if $\tau=\sigma$ and equal to $0$ otherwise. By [@Stanley4], $\widetilde{r}_{\tau,\sigma}$ coincides, up to sign, with the $r_{\tau, \sigma}$-function of the dual poset. Suppose now that $f\colon X\to Y$ is a toric fibration. For a cone $\sigma\in\Delta_Y$, we put $p_{f,\sigma}:=\dim_{{{\Bbb Q}}}{\mathcal H}^*(f^{-1}(x_{\sigma}),IC_X)$, where again we may take $x_{\sigma}$ to be any point in the orbit $O(\sigma)$. The next result gives a description of $\delta_{\sigma}$ in terms of the above invariants. The second part implies that the invariants $p_{f,\sigma}$, hence also the $\delta_{\sigma}$, are combinatorial. With the above notation, if $f\colon X\to Y$ is a toric fibration, then the following hold: 1. For every cone $\tau\in\Delta_Y$, we have $\delta_{\tau}=\sum_{\sigma\subseteq\tau}\widetilde{r}_{\sigma,\tau}\cdot p_{f,\sigma}$. 2. For every cone $\sigma\in\Delta_Y$, we have $p_{f,\sigma}=\sum_ir_{0,\sigma_i}$, where the $\sigma_i$ are the cones in the fan $\Delta_X$ defining $X$ with the property that $f(V(\sigma_i))=V(\sigma)$ and $\dim(V(\sigma_i))=\dim(V(\sigma))$. Just as for Theorem E, an explicit formula for each of the numbers $s_{\tau,b}$ is obtained by upgrading $r_{\sigma, \tau}$ and $p_{f,\sigma}$ to Laurent polynomials with integral coefficients in order to keep track of the cohomological grading (see Theorem \[form\_general\] for the precise result). Again, as in Theorem E, note that while $\delta \geq 0$ by definition, the right-hand side of Theorem F.i) contains the factors $\widetilde{r}$, which have “alternating signs”; nevertheless, the identity shows that this right-hand side is nonnegative as well.
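In concrete terms, the defining relation for $\widetilde{r}$ is the inversion of a triangular system over the poset of cones, and Theorem F.i) is then a finite sum. The following Python sketch (an illustration, not part of the paper's arguments) implements this recursion on a toy face poset; the labels and the values of $r$ and $p$ below are made up, modeled on the blowup of the affine plane at the origin, where all stalks of the intersection complexes are $1$-dimensional and the fiber over the fixed point is a ${\mathbf P}^1$.

```python
# Face poset of a 2-dimensional cone "sigma": faces[b] lists the faces of b.
faces = {
    "0": ["0"],
    "ray1": ["0", "ray1"],
    "ray2": ["0", "ray2"],
    "sigma": ["0", "ray1", "ray2", "sigma"],
}

# Illustrative r[tau, gamma] = dim H^*(IC_{V(tau)})_{x_gamma}; in this
# smooth toy case every stalk is 1-dimensional.
r = {(a, b): 1 for b in faces for a in faces[b]}

def rtilde(tau, sigma, _cache={}):
    """Invert the triangular system
       sum_{tau <= gamma <= sigma} r[tau, gamma] * rtilde(gamma, sigma)
         = 1 if tau == sigma else 0,
    a Moebius-inversion-style recursion (using r[tau, tau] = 1)."""
    if (tau, sigma) not in _cache:
        if tau == sigma:
            _cache[(tau, sigma)] = 1
        else:
            _cache[(tau, sigma)] = -sum(
                r[(tau, g)] * rtilde(g, sigma)
                for g in faces[sigma] if tau in faces[g] and g != tau)
    return _cache[(tau, sigma)]

# Illustrative p[sigma] = dim H^*(f^{-1}(x_sigma), IC_X): the fiber is a
# point except over the fixed point, where it is a P^1.
p = {"0": 1, "ray1": 1, "ray2": 1, "sigma": 2}

def delta(tau):
    """The formula of Theorem F.i)."""
    return sum(rtilde(s, tau) * p[s] for s in faces[tau])

print(delta("0"), delta("ray1"), delta("sigma"))  # 1 0 1
```

In this toy case $\delta_{\sigma}=1$ and $\delta$ vanishes on the rays, matching the expectation for the blowup: the dense orbit and the fixed point are the only supports. Note the cancellations among the summands; the total is nonnegative nonetheless.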
In this paper, this is proved as a consequence of the decomposition theorem. As in the simplicial case, it would be desirable to find a combinatorial description for $\delta$ that implies its nonnegativity. We mention that in their recent preprint [@KS], Katz and Stapledon undertake a related study in a more combinatorial framework, focused on invariants associated to certain maps between posets. In the case of a toric fibration $f\colon X\to Y$, their results apply to the map $f_*\colon\Delta_X\to\Delta_Y$. In particular, our invariants $s_{\tau,b}$ appear in their setting as the coefficients of a *local h-polynomial*. For some comparisons between their results and ours, see Remarks \[rmk\_KS1\], \[rmk\_KS2\], and \[rmk\_KS3\]. The paper is organized as follows. In Section 2 we review the basics of toric geometry that we use. We pay special attention to some facts concerning toric morphisms that seem to be less well known, such as the description of toric fibrations and Stein factorizations in the toric setting, and the description of the irreducible components of fibers of toric maps. In Section 3 we study the cohomology of toric varieties that admit proper toric maps to affine toric varieties that have a fixed point. In particular, we prove Theorem A (see Theorems \[BM\] and \[pure\_coh\]). In Section 4, we apply the results in the previous section to obtain Theorem B (see Theorem \[pure\_coh\]). We use this and the computation of Hodge-Deligne polynomials for fibers of toric fibrations to give the formula for the Betti numbers of such fibers in Corollary C (see Corollary \[Betti\_fib\]). In Section 5 we deduce Theorem D from the decomposition theorem in [@BBD] (see Theorem \[toric\_dec\_thm\]).
Section 6 is devoted to the description of the invariants $\delta_{\sigma}$ in the case of a toric fibration between simplicial toric varieties (see Theorem \[form\_both\_simplicial\]), while in Section 7 we treat the general case (see Theorems \[form\_general\] and \[thm\_p\_sigma\]). Acknowledgments {#acknowledgments .unnumbered} --------------- The first-named author is grateful to the Max Planck Institute of Mathematics in Bonn for the perfect working conditions. We are grateful to Tom Braden, William Fulton and Vivek Shende for helpful discussions. Laurentiu Maxim informed us that related results are contained in [@CMS], especially Thm. 3.2. During the preparation of this paper we learned that E. Katz and A. Stapledon are investigating, although from a rather different viewpoint, similar questions. We thank them for informing us about their results and sending us a draft of their paper [@KS]. Last but not least, we are grateful to the anonymous referees for their comments and suggestions. Basic toric algebraic geometry ============================== [sec\_basic]{} In this section we review some basic facts about toric varieties and toric maps. For all assertions that we do not prove in this section, as well as for all the standard notation for toric varieties that we employ, we refer to [@Ful93]. We work over an algebraically closed field $k$, of arbitrary characteristic, in the hope that some of the facts that we prove might be useful somewhere else, in this more general setting. Toric varieties and their orbits {#toric-varieties-and-their-orbits .unnumbered} -------------------------------- A toric variety $X$ is associated with a lattice $N$ and a fan $\Delta$ in $N_{{{\Bbb R}}}=N\otimes_{\mathbf Z}{{\Bbb R}}$. If $M$ is the dual lattice of $N$, then the torus $T_N:={\rm Spec}\,k[M]$ embeds as an open subset in $X$ and its standard action on itself extends to an action on $X$. 
We often write $N_X$, $M_X$, $\Delta_X$, and $T_X$ for the objects corresponding to a fixed toric variety $X$. For every cone $\sigma\in\Delta$, there is an affine open subset $U_{\sigma}$ of $X$, with $U_{\sigma}={\rm Spec}\,k[\sigma^{\vee}\cap M]$. These affine open subsets cover $X$. The support of a fan $\Delta$ is the subset $|\Delta|=\bigcup_{\sigma\in \Delta}\sigma$ of $N_{{{\Bbb R}}}$. The toric variety $X$ is complete if and only if $|\Delta_X|=N_{{{\Bbb R}}}$. The orbits of the $T_X$-action on $X$ are in bijection with the cones in $\Delta_X$. The orbit $O({\sigma}):={\rm Spec}\,k[M_X\cap\sigma^{\perp}]$ corresponding to $\sigma\in\Delta_X$ is a torus of dimension equal to ${\rm codim}(\sigma)$. The distinguished element of $O({\sigma})$ (the identity of the group) is denoted by $x_{\sigma}$. In particular, the smallest cone $\{0\}$ corresponds to the open orbit $T_X$. The irreducible torus-invariant closed subsets of $X$ are precisely the orbit closures $V(\sigma):=\overline{O({\sigma})}$. The lattice corresponding to $V(\sigma)$ is $N/N_{\sigma}$, where $N_{\sigma}$ is the intersection of $N$ with the linear span of $\sigma$. We always view $\Delta$ as a poset, ordered by the inclusion of cones. Note that $\sigma\subseteq\tau$ if and only if $V(\sigma)\supseteq V({\tau})$. The open subset $U_{\sigma}$ is the union of those $O(\tau)$ with $\tau\subseteq\sigma$. Each $V(\sigma)$ is a toric variety, with corresponding torus $O({\sigma})$. There is a surjective morphism of algebraic groups $T_X\to O({\sigma})$ such that the $T_X$-action on $V(\sigma)$ induces the $O({\sigma})$-action. This morphism corresponds to the split inclusion $M_X\cap\sigma^{\perp}\hookrightarrow M_X$. If $X$ is smooth or simplicial, then each $V(\sigma)$ has the same property. We say that an affine toric variety $U_{\sigma}$ is *of contractible type* if $\sigma$ is a cone that spans $N_{{{\Bbb R}}}$. 
Note that in this case $x_{\sigma}\in U_{\sigma}$ is the unique fixed point for the torus action. Toric maps {#toric-maps .unnumbered} ---------- Let $X$ and $Y$ be toric varieties corresponding, respectively, to the lattices $N_X$ and $N_Y$ and to the fans $\Delta_X$ and $\Delta_Y$. A toric map is a morphism $f\colon X\to Y$ that induces a morphism of algebraic groups $g\colon T_X\to T_Y$ such that $f$ is $T_X$-equivariant with respect to the $T_X$-action on $Y$ induced by $g$. Such $f$ corresponds to a unique linear map $f_{N_{{{\Bbb R}}}}\colon (N_X)_{{{\Bbb R}}}\to (N_Y)_{{{\Bbb R}}}$ inducing $f_N\colon N_X\to N_Y$ such that for every cone $\sigma\in\Delta_X$, there is a cone $\tau\in\Delta_Y$ with $f_{N_{{{\Bbb R}}}}(\sigma)\subseteq \tau$. We write $f_M\colon M_Y\to M_X$ for the dual of $f_N$. Note that for $\sigma$ and $\tau$ as above, we have a $k$-algebra homomorphism $k[M_Y\cap\tau^{\vee}]\to k[M_X\cap\sigma^{\vee}]$ mapping $\chi^u$ to $\chi^{f_M(u)}$. This induces a morphism $U_{\sigma}\to U_{\tau}$, which is the restriction of $f$ to $U_{\sigma}$. In general, we have $f_{N_{{{\Bbb R}}}}^{-1}(|\Delta_Y|)\supseteq |\Delta_X|$ and the map $f$ is proper if and only if $f_{N_{{{\Bbb R}}}}^{-1}(|\Delta_Y|)=|\Delta_X|$. Note that by definition, we have a map $f_*\colon\Delta_X\to\Delta_Y$ such that $f_*(\sigma)$ is the smallest cone in $\Delta_Y$ that contains $f_N(\sigma)$. It is clear that $f_*$ is a map of posets. Suppose now that $f\colon X\to Y$ is a toric map such that the lattice map $f_N\colon N_X\to N_Y$ is surjective. In this case the morphism of tori $T_X\to T_Y$ is surjective. More generally, if $\sigma$ is a cone in $\Delta_X$, then $f$ induces a surjective morphism of tori $O({\sigma})\to O(\tau)$, where $\tau=f_*(\sigma)$. We denote the kernel of this morphism by $O(\sigma/\tau)$. Note that this is again a torus, of dimension ${\rm codim}(\sigma)-{\rm codim}(\tau)$.
It is clear that $f(V({\sigma}))\subseteq V(\tau)$ (with equality if $f$ is proper). In fact, the induced map $V(\sigma)\to V(\tau)$ is again a toric map of toric varieties with the property that the corresponding lattice map is surjective. Toric Stein factorizations {#toric-stein-factorizations .unnumbered} -------------------------- Recall that a proper morphism $f\colon X\to Y$ is a *fibration* if $f_*({\mathcal O}_X)={\mathcal O}_Y$. This implies that $f$ is surjective and has connected fibers; the converse holds if ${\rm char}(k)=0$. The Stein factorization of a proper map $f\colon X\to Y$ is the unique factorization as $X\overset{g}\to Z\overset{h}\to Y$ such that $g$ is a fibration and $h$ is a finite map. The following proposition gives the description of fibrations and Stein factorizations in the toric setting. \[Stein\_fact\] Let $f\colon X\to Y$ be a proper toric map corresponding to the lattice map $f_N\colon N_X\to N_Y$ and let $\Delta_X$ and $\Delta_Y$ be the corresponding fans. 1. The map $f$ is surjective if and only if ${\rm Coker}(f_N)$ is finite. 2. The map $f$ is a fibration if and only if $f_N$ is surjective. 3. Suppose that $f$ is surjective. If we let $N_Z=f_N(N_X)$ and $\Delta_Z=\Delta_Y$, then the factorization $N_X\overset{g_N}\to N_Z\overset{h_N}\to N_Y$ of $f_N$ induces the Stein factorization of $f$. Since $f$ is proper, it is surjective if and only if the induced morphism of tori $T_X\to T_Y$ is dominant. This is the case if and only if $f_M\colon M_Y\to M_X$ is injective, which is equivalent to ${\rm Coker}(f_N)$ being finite. This proves i). Suppose now that $f_N$ is surjective. In order to show that $f$ is a fibration, it is enough to prove that for every $\sigma\in\Delta_Y$, the natural map $$\Gamma(U_{\sigma}, {\mathcal O}_Y)=k[\sigma^{\vee}\cap M_Y]\to \Gamma(f^{-1}(U_{\sigma}), {\mathcal O}_X)$$ is an isomorphism. 
Note that $f^{-1}(U_{\sigma})$ is the union of $U_{\tau}$, where $\tau$ varies over the set $\Lambda_{\sigma}$ of those cones in $\Delta_X$ such that $f_N(\tau)\subseteq\sigma$. Therefore $$\Gamma(f^{-1}(U_{\sigma}),{\mathcal O}_X)=\bigcap_{\tau\in\Lambda_{\sigma}}k[\tau^{\vee}\cap M_X]=\bigoplus_{u} k\cdot\chi^u,$$ where the direct sum is over those $u\in M_X$ such that $u\in\tau^{\vee}$ for every $\tau\in \Lambda_{\sigma}$. It is enough to show that for every such $u\in M_X$, there is $w\in M_Y\cap \sigma^{\vee}$ such that $f_M(w)=u$ (note that $w$ is clearly unique since $f_M$ is injective). It is clear that there is $w\in M_Y$ such that $f_M(w)=u$. Indeed, since $f_{N_{{{\Bbb R}}}}^{-1}(|\Delta_Y|)=|\Delta_X|$, we deduce that ${\rm Ker}(f_{N_{{{\Bbb R}}}})$ is a union of cones in $\Delta_X$, which automatically lie in $\Lambda_{\sigma}$. Therefore $u\in ({\rm Ker}(f_N))^{\perp}={\rm Im}(f_M)$. We need to show that $w$ lies in $\sigma^{\vee}$. Let $v\in\sigma\cap N_Y$. Since $f_N$ is surjective, we can write $v=f_N(\widetilde{v})$ for some $\widetilde{v}\in N_X$. Since $\widetilde{v}\in f_{N_{{{\Bbb R}}}}^{-1}(|\Delta_Y|)=|\Delta_X|$, we may choose $\tau\in \Delta_X$ smallest such that $\widetilde{v}\in\tau$. In this case $f_{N_{{{\Bbb R}}}}(\tau)\subseteq\sigma$. Therefore $u\in\tau^{\vee}$, which implies $$0\leq \langle u,\widetilde{v}\rangle=\langle f_M(w),\widetilde{v}\rangle=\langle w,f_N(\widetilde{v})\rangle=\langle w,v\rangle$$ and we see that indeed $w\in\sigma^{\vee}$. Suppose now that $f$ is surjective and consider the decomposition $f=h\circ g$ in iii). It is straightforward to see that $h$ is finite, while we have already shown that $g$ is a fibration. Therefore this decomposition is the Stein factorization of $f$. We also deduce from the uniqueness of the Stein factorization that if $f$ is a fibration, then $f_N$ is surjective. This completes the proof of the proposition. 
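The lattice-theoretic criteria of the proposition are easy to test in examples. The sketch below (an illustration, not from the paper) uses two elementary reformulations via the Smith normal form, which are not stated in the text: an integer matrix $A$ representing $f_N\colon {\mathbf Z}^n\to{\mathbf Z}^m$ has finite cokernel if and only if it has a nonzero maximal minor, and $f_N$ is surjective if and only if the gcd of its $m\times m$ minors equals $1$.

```python
from itertools import combinations
from math import gcd

def det(M):
    """Determinant of a small square integer matrix, by Laplace expansion."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def maximal_minors(A):
    """All m x m minors of the m x n matrix A (assuming m <= n)."""
    m, n = len(A), len(A[0])
    for cols in combinations(range(n), m):
        yield det([[A[i][j] for j in cols] for i in range(m)])

def has_finite_cokernel(A):
    """Criterion of Proposition i): the proper toric map is surjective."""
    m, n = len(A), len(A[0])
    return m <= n and any(mu != 0 for mu in maximal_minors(A))

def is_fibration(A):
    """Criterion of Proposition ii): f_N is surjective, i.e. the gcd of the
    maximal minors is 1."""
    m, n = len(A), len(A[0])
    if m > n:
        return False
    g = 0
    for mu in maximal_minors(A):
        g = gcd(g, abs(mu))
    return g == 1

# For instance, t -> t^2 on A^1 corresponds to multiplication by 2 on Z:
# finite cokernel (so surjective), but not a fibration; Proposition iii)
# replaces N_Y by f_N(N_X) = 2Z in its Stein factorization.
print(has_finite_cokernel([[2]]), is_fibration([[2]]))  # True False
```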
\[rmk\_fin\_map\] A finite, surjective toric morphism $h\colon Z\to Y$ can be described as follows. Note that $h_N$ is injective, with finite cokernel. Suppose first that ${\rm char}(k)$ does not divide the order of ${\rm Coker}(h_N)$, which is equal to the order of $A:=M_Z/M_Y$. In this case, $h$ is the quotient by a faithful action of $G={\rm Spec}\,k[A]$. In particular, $h$ is generically a Galois cover. Indeed, note first that the assumption on the characteristic of $k$ implies that $G$ is a reduced scheme, isomorphic to the finite group ${\rm Hom}(A, k^*)$. We claim that $G$ acts faithfully on $Z$ such that $Y$ is the quotient by this group action. It is enough to check this on affine open subsets, hence we may assume that $Y=U_{\sigma}$. The morphism $h$ corresponds to $$k[\sigma^{\vee}\cap M_Y]\hookrightarrow k[\sigma^{\vee}\cap M_Z].$$ The $G$-action on $Z$ corresponds to $$k[\sigma^{\vee}\cap M_Z]\to k[M_Z/M_Y]\otimes_k k[\sigma^{\vee}\cap M_Z],\,\,\chi^u\mapsto \chi^{\overline{u}}\otimes\chi^u,$$ where $\overline{u}$ is the class of $u$ in $M_Z/M_Y$. It is clear from this definition that the action is faithful and, furthermore, that $$k[\sigma^{\vee}\cap M_Z]^G=k[\sigma^{\vee}\cap M_Y],$$ as claimed. When ${\rm char}(k)=p>0$, given an arbitrary surjective, finite toric morphism $h\colon Z\to Y$, we can uniquely factor $h_M\colon M_Y\hookrightarrow M_Z$ as $M_Y\hookrightarrow M_{\widetilde{Z}}\hookrightarrow M_Z$, such that the order of $M_Z/M_{\widetilde{Z}}$ is relatively prime to $p$ and $M_{\widetilde{Z}}/M_Y$ is an abelian $p$-group. This corresponds to a factorization of $h$ as $Z\to\widetilde{Z}\overset{\alpha}\to Y$, with $Z\to\widetilde{Z}$ a quotient as described above and $\alpha$ a universal homeomorphism.
In fact, there is $\beta\colon Y\to\widetilde{Z}$ and $m\geq 1$ such that $\beta\circ\alpha={\rm Frob}_{\widetilde{Z}}^m$ and $\alpha\circ\beta={\rm Frob}_Y^m$ (note that a toric variety $W$ is defined over the prime field, hence it is endowed with a Frobenius morphism ${\rm Frob}_W$ that is linear over the ground field). In order to see this, let us choose a basis $v_1,\ldots,v_n$ for $N_Y$ such that $p^{m_1}v_1,\ldots,p^{m_n}v_n$ is a basis for $N_{\widetilde{Z}}$, for some positive integers $m_1,\ldots,m_n$. If $m=\max_im_i$ and $Z'$ is the toric variety corresponding to the lattice spanned by $p^{m_1-m}v_1,\ldots,p^{m_n-m}v_n$ (the fan being the same as that of the toric varieties $Z$, $\widetilde{Z}$, and $Y$), then multiplication by $p^m$ induces an isomorphism of toric varieties $Z'\simeq\widetilde{Z}$. If $\beta\colon Y\to Z'\simeq \widetilde{Z}$ is the induced morphism, then it is easy to check that it has the desired properties. \[rmk\_nonsurj\] Suppose that $f\colon X\to Y$ is any proper toric map, possibly not surjective. In this case $f$ has a canonical factorization $X\overset{u}\to W\overset{w}\to Y$ such that both $u$ and $w$ are proper toric maps, with $u$ surjective and $w$ finite, with $T_W\to T_Y$ a closed immersion. Indeed, if $w\colon W\to Y$ is the normalization of $f(X)$, then since $X$ is normal, there is a unique morphism $u\colon X\to W$ such that $f=w\circ u$. If $$N_W:=\{v\in N_Y\mid mv\in f_N(N_X)\,\,\, \text{for some}\,\,\,m\in {\mathbf Z}_{>0}\}$$ and $T$ is the torus corresponding to $N_W$, then the restriction of $f$ to $T_X$ factors as $T_X\overset{\phi}\to T\overset{\psi}\to T_Y$, with $\phi$ surjective and $\psi$ a closed immersion. Therefore $T$ is equal to $f(T_X)$. It is easy to deduce from Chevalley’s constructibility theorem that $T$ is an open dense subset of $f(X)$, hence $T$ admits an open immersion in $W$.
Moreover, the map $T\times f(X)\to f(X)$ given by the $T_X$-action on $Y$ induces a map $T\times W\to W$ giving an action of $T$ on $W$ that extends the standard action of $T$ on itself. Since by construction $W$ is separated and normal, it follows that $W$ is a toric variety with torus $T$. Moreover, both $u$ and $w$ are toric maps. \[factorization\_maps\_tori\] Let us consider the above decompositions of toric maps in the case of a morphism of algebraic groups $f\colon T_1\to T_2$ between tori. We have a decomposition $$T_1\overset{\phi_1}\to A\overset{\phi_2}\to B\overset{\phi_3}\to C \overset{\phi_4}\to T_2$$ such that the following hold: 1. There is an isomorphism $T_1\simeq A\times A'$, with $A'$ a torus, such that $\phi_1$ corresponds to the projection onto the first component. 2. $\phi_2$ is finite, surjective, and étale, the quotient by the action of a finite group. 3. If ${\rm char}(k)=0$, then $\phi_3$ is an isomorphism. If ${\rm char}(k)=p>0$, then there is $\beta\colon C\to B$ and $m\geq 1$ such that $\beta\circ\phi_3={\rm Frob}_{B}^m$ and $\phi_3\circ\beta={\rm Frob}_{C}^m$. 4. $\phi_4$ is a closed immersion. \[canonical\_fibration\] We say that a toric variety $X$ has *convex, full-dimensional fan support* if $|\Delta_X|$ is a convex cone in $(N_X)_{{{\Bbb R}}}$ (automatically rational polyhedral), of maximal dimension. Note that if $X$ admits a proper morphism $f\colon X\to Y$, where $Y$ is an affine toric variety of contractible type, then $X$ has convex, full-dimensional fan support. Conversely, if this is the case, then we can find $f$ as above. In fact, we may take $f$ to be a fibration: if $\Lambda$ is the largest linear subspace contained in $|\Delta_X|$ and $\sigma=|\Delta_X|/\Lambda$, considered as a cone with respect to the lattice $N_X/N_X\cap \Lambda$, then $Y=U_{\sigma}$ is of contractible type and we have a canonical toric fibration $f\colon X\to Y$.
Finally, we note that given a proper morphism $f\colon X\to Y$, with $Y$ affine, $f$ is projective if and only if $X$ is quasi-projective. By Proposition \[Stein\_fact\] and Remark \[rmk\_nonsurj\], we can reduce studying the fibers of an arbitrary proper toric map $f\colon X\to Y$ to the case of a fibration. Because of this, we will mostly restrict to this case. Fibers of toric maps {#fibers-of-toric-maps .unnumbered} -------------------- We want to show that the irreducible components of the fibers of a toric map are toric varieties. We begin by reviewing some basic facts that will also be used later, when reducing to the case of affine toric varieties of contractible type. Recall that if $X$ is a toric variety defined by the fan $\Delta$ in $N_{{{\Bbb R}}}$ and $N'$ is a finitely generated subgroup of $N$ such that $N/N'$ is free and the linear span of $|\Delta|$ is contained in $N'_{{{\Bbb R}}}$, then we have a toric variety $X'$ corresponding to $\Delta$, considered as a fan in $N'_{{{\Bbb R}}}$. The inclusion $\iota\colon N'\hookrightarrow N$ induces a toric map $X'\to X$. In fact, if we choose a splitting of $\iota$, we get an isomorphism $N\simeq N'\times N/N'$ and a corresponding isomorphism $T_N\simeq T_{N'}\times T_{N/N'}$. Moreover, we get an isomorphism $X\simeq X'\times T_{N/N'}$ compatible with the decomposition of $T_N$. A special case that we will often use is that in which $X=X(\Delta)$ is an arbitrary toric variety, $\sigma$ is a cone in $\Delta$, and $N'=N_{\sigma}$. Applying the previous considerations to $U_{\sigma}$, we obtain an isomorphism of tori and an isomorphism of affine toric varieties $$\label{eq_desc_aff} T_N\simeq T_{N_{\sigma}}\times O(\sigma),\,\,\,U_{\sigma}\simeq U_{\sigma'}\times O(\sigma),$$ where $\sigma'$ is the same as $\sigma$, but considered in $N'_{{{\Bbb R}}}$. We now turn to a version of such product decompositions in the relative setting. \[lm\_prod\_str\] Let $f\colon X\to Y$ be a toric fibration.
Given $\tau\in\Delta_Y$, let us choose a splitting of $(N_Y)_{\tau}\hookrightarrow N_Y$ and then a splitting of $f_N$. These determine equivariant isomorphisms $$U_{\tau}\simeq U_{\tau'}\times O(\tau)\,\,\,\text{and}\,\,\,f^{-1}(U_{\tau})\simeq f^{-1}(U_{\tau'})\times O(\tau)$$ such that $f^{-1}(U_{\tau})\to U_{\tau}$ gets identified with $f_{\tau'}\times {\rm Id}$, where $f_{\tau'}\colon f^{-1}(U_{\tau'})\to U_{\tau'}$ is the restriction of $f$ over $U_{\tau'}$. Moreover, the following hold: 1. We have an induced isomorphism of tori $T_{U_{\tau}}\simeq T_{U_{\tau'}}\times O(\tau)$. 2. We have an isomorphism $f^{-1}(O(\tau))\simeq f^{-1}(x_{\tau})\times O(\tau)$ such that the restriction of $f$ to $f^{-1}(O(\tau))$ corresponds to the projection onto the second component. In particular, $f^{-1}(y)\simeq f^{-1}(x_{\tau})$ for every $y\in O(\tau)$. 3. The map $f_{\tau'}$ is a toric fibration over an affine toric variety of contractible type and we have an isomorphism $$f^{-1}(x_{\tau})\simeq f_{\tau'}^{-1}(x_{\tau'}).$$ Let $N'_Y=(N_Y)_{\tau}$ and $N'_X=f_N^{-1}(N'_Y)$. Note that $f^{-1}(U_{\tau})=\bigcup_{\sigma}U_{\sigma}$, where the union is over those $\sigma\in\Delta_X$ such that $f_*(\sigma)\subseteq\tau$. Therefore the linear span of the support of the fan defining $f^{-1}(U_{\tau})$ is contained in $(N'_X)_{{{\Bbb R}}}$. The two splittings in the lemma induce isomorphisms $$N_Y\simeq N'_Y\times N_Y/N'_Y\,\,\,\text{and}\,\,\,N_X\simeq N'_X\times N_Y/N'_Y$$ such that $f_N$ corresponds to $f'_N\times {\rm Id}$, where $f'_N\colon N'_X\to N'_Y$ is the induced lattice map. By applying (\[eq\_desc\_aff\]) to both $U_{\tau}$ and $f^{-1}(U_{\tau})$ with respect to the subgroups $N'_Y$ and $N'_X$, respectively, we obtain the assertions in the lemma. The following proposition describes the irreducible components of the fibers of proper toric maps. \[str\_fibers\] Let $f\colon X\to Y$ be a proper toric map. 1.
Every irreducible component of a fiber $f^{-1}(y)$ is a toric variety. Moreover, this is smooth or simplicial if $X$ has this property. 2. If $f$ is a fibration and $y\in O(\tau)$ for some $\tau\in\Delta_Y$, then $f^{-1}(y)$ is a disjoint union of locally closed subsets parametrized by the cones $\sigma\in\Delta_X$ such that $f_*(\sigma)=\tau$, with the subset corresponding to $\sigma$ being isomorphic to the torus $O(\sigma/\tau)$. It follows from Proposition \[Stein\_fact\] and Remark \[rmk\_nonsurj\] that we may write $f$ as a composition $X\overset{g}\to Z\overset{h}\to Y$, with $g$ a fibration and $h$ finite. Since a fiber of $f$ is either empty or a disjoint union of fibers of $g$, it follows that in order to prove i), we may and will assume that $f$ is a fibration. Let $\tau\in\Delta_Y$ and suppose that $y\in O(\tau)$. It follows from Lemma \[lm\_prod\_str\] that $f^{-1}(y)\simeq f^{-1}(x_{\tau})$, hence we may assume that $y=x_{\tau}$. Moreover, we have an isomorphism $$\label{eq_str_fib} f^{-1}(O(\tau))\simeq f^{-1}(x_{\tau})\times O(\tau).$$ It follows that if $W$ is an irreducible component of $f^{-1}(x_{\tau})$, then we get an induced isomorphism $V\simeq W\times O(\tau)$, where $V$ is an irreducible component of $f^{-1}(O(\tau))$. Note that $f^{-1}(O(\tau))$ is preserved by the $T_X$-action, hence $V$ has the same property since $T_X$ is connected. We conclude that the closure $\overline{V}$ is equal to $V(\gamma)$ for some cone $\gamma\in\Delta_X$. Note that $V$ is locally closed in $X$, hence it is open in $\overline{V}$; therefore it is a toric variety with torus $O(\gamma)$. Since $O(\gamma)\subseteq f^{-1}(O(\tau))$, we see that $f_*(\gamma)=\tau$. Moreover, the isomorphism (\[eq\_str\_fib\]) induces an isomorphism $O(\gamma)\simeq O(\gamma/\tau)\times O(\tau)$ and it is now clear that $W$ is a toric variety with torus $O(\gamma/\tau)$. Note that if $X$ is smooth or simplicial, then $V(\gamma)$ has the same property and so does $V$. 
In this case $W$ is smooth, respectively simplicial, as well. This completes the proof of i). In order to check the assertion in ii), note that $f^{-1}(O({\tau}))$ is the union of the orbits $O(\sigma)$, where $\sigma$ runs over the cones of $\Delta_X$ such that $f_*(\sigma)=\tau$. For every such $\sigma$, the isomorphism (\[eq\_str\_fib\]) induces an isomorphism $O(\sigma)\simeq Z_{\sigma}\times O(\tau)$, such that $Z_{\sigma}$ is isomorphic to $O(\sigma/\tau)$. Since $f^{-1}(x_{\tau})$ is the disjoint union of the locally closed subsets $Z_{\sigma}$, this completes the proof of the proposition. The toric Chow lemma {#the-toric-chow-lemma .unnumbered} -------------------- We will make use of the following toric version of Chow’s lemma. For a proof in the case when $X$ is complete, see [@CLS Theorem 6.1.18]. For the general case, see [@Sumihiro Theorem 2]. \[Chow\] If $X$ is a toric variety, then there is a projective toric birational morphism $f\colon \widetilde{X}\to X$ such that $\widetilde{X}$ is quasi-projective. \[rmk\_toric\_resolution\] By combining the toric Chow lemma with toric resolution of singularities, we deduce that for every toric variety $X$, there is a projective birational morphism $\pi\colon \widetilde{X}\to X$ such that $\widetilde{X}$ is smooth and quasi-projective. Cohomology of simplicial toric varieties with convex, full-dimensional fan support ================================================================================== Our goal in this section is to show that if $X$ is a complex simplicial toric variety such that the support $|\Delta_X|$ is a full-dimensional, convex cone, then the cohomology of $X$ behaves similarly to the case when $X$ is complete (we refer to [@Ful93 Chapter 5.2] for that case). Recall that by Remark \[canonical\_fibration\], saying that $|\Delta_X|$ is convex and full-dimensional is equivalent to saying that $X$ admits a proper morphism to an affine toric variety of contractible type.
A good filtration by open subsets {#a-good-filtration-by-open-subsets .unnumbered} --------------------------------- The key ingredient in this study is a certain filtration of $X$ by open subsets, in the case when $X$ is quasi-projective. This filtration is well understood and has been used in the case when $X$ is projective (see for example [@Kirwan] and [@Fieseler]). We begin by showing that such a filtration also exists when $X$ is quasi-projective and has convex, full-dimensional fan support. While we work over ${\mathbf C}$, we remark that the results of this subsection hold over any algebraically closed field. Let $X$ be a complex quasi-projective toric variety, with convex, full-dimensional fan support. We fix an ample torus-invariant Cartier divisor $D$ on $X$ and consider the corresponding polyhedron $P=P_D$ (note that $P$ might not be bounded since $X$ might not be complete). We first recall some basic facts related to this setting. Given $D$, to each maximal cone $\sigma\in\Delta_X$ one associates an element $u({\sigma})$ in the lattice $M_X$ such that $D\vert_{U_{\sigma}}={\rm div}(\chi^{-u(\sigma)})$. If $P_0$ is the convex hull of the $u(\sigma)$, where $\sigma$ varies over the maximal cones of $\Delta_X$, then $P=P_0+|\Delta_X|^{\vee}$. If $Q$ is a face of $P$, then we get a cone $\sigma_Q\in\Delta_X$ defined by $$\sigma_Q=\{w\in (N_X)_{{{\Bbb R}}}\mid \langle u,w\rangle \leq\langle u',w\rangle\,\,\text{for all}\,\,u\in Q,u'\in P\}.$$ Moreover, each cone in $\Delta_X$ corresponds in this way to a unique face of $P$. Note that for every maximal cone $\sigma\in\Delta_X$, the corresponding $u({\sigma})$ is a vertex of $P$ and the cone corresponding to $u({\sigma})$ is $\sigma$, that is, $\sigma^{\vee}$ is generated by $\{u-u(\sigma)\mid u\in P\}$. All these facts are well-known when $X$ is complete. The proofs easily extend to our setting, see [@Mustata Chapter 6].
Suppose now that $v\in |\Delta_X|\cap N_X$ is such that the following conditions are satisfied: 1. (C1) $v$ is not orthogonal to any minimal generator of the pointed cone $|\Delta_X|^{\vee}$ (equivalently, $v$ does not lie on any facet of $|\Delta_X|$), and 2. (C2) The integers $\langle u(\sigma),v\rangle$, as $\sigma$ varies over the maximal cones in $\Delta_X$, are mutually distinct. We denote by $\gamma_v\colon {{\Bbb C}}^*\to T_X$ the one-parameter subgroup of $T_X$ corresponding to $v$. Suppose that $\sigma_1,\ldots,\sigma_r$ are the maximal cones in $\Delta$, ordered such that $$\label{order} \langle u(\sigma_1),v\rangle<\ldots<\langle u(\sigma_r),v\rangle$$ (condition C2) above implies that such an ordering exists and is unique). For every $i$, with $1\leq i\leq r$, we denote by $x_i$ the torus-fixed point in $U_{\sigma_i}$. \[prop\_filtration\] With the above notation, the following hold: 1. The only fixed points for the ${{\Bbb C}}^*$-action on $X$ induced by $\gamma_v$ are $x_1,\ldots,x_r$. 2. For every $i$, let $X_i$ consist of those $x\in X$ such that the map $\gamma_{v,x}\colon {{\Bbb C}}^*\to X$, given by $\gamma_{v,x}(t)=\gamma_v(t)\cdot x$, extends to a map $\widetilde{\gamma}_{v,x}\colon {\mathbf A}^{\!1}\to X$ with $\widetilde{\gamma}_{v,x}(0)=x_i$. If $U_i:=\bigcup_{j\leq i}X_j$, then $U_i$ is open in $X$ for every $i$ and $U_r=X$. It is clear that $x_1,\ldots,x_r$ are fixed points for the ${{\Bbb C}}^*$-action since they are fixed by the $T_X$-action. Therefore in order to prove i) it is enough to show that if $x\in U_{\sigma_i}$ is a ${{\Bbb C}}^*$-fixed point, then $x=x_i$. By definition, the action of $\gamma_v$ on $U_{\sigma_i}$ is such that $$\label{formula_action} \chi^u(\gamma_v(t)\cdot x)=t^{\langle u,v\rangle}\chi^u(x)$$ for every $t\in {{\Bbb C}}^*$ and $u\in\sigma_i^{\vee}\cap M_X$.
Since $x$ is a fixed point for the ${{\Bbb C}}^*$-action, it follows that $\langle u,v\rangle=0$ for all $u\in\sigma_i^{\vee}\cap M_X$ such that $\chi^u(x)\neq 0$. We need to show that $S:=\{u\in\sigma_i^{\vee}\cap M_X\mid \chi^u(x)\neq 0\}$ is equal to $\{0\}$. We have seen that $S\subseteq v^{\perp}$. Note that for $u_1,u_2\in\sigma_i^{\vee}\cap M_X$, we have $u_1+u_2\in S$ if and only if $u_1,u_2\in S$. Since $\sigma_i^{\vee}$ is generated as a convex cone by $|\Delta_X|^{\vee}$ and by $\{u(\sigma_j)-u(\sigma_i)\mid j\neq i\}$, it follows that if $S\neq\{0\}$, then either $v$ is orthogonal to a ray of $|\Delta_X|^{\vee}$ or it is orthogonal to some $u(\sigma_j)-u(\sigma_i)$, with $j\neq i$. Since both these conclusions contradict conditions C1) and C2) above, we conclude that $S=\{0\}$, completing the proof of i). We note that for every $x\in X$, the map $\gamma_{v,x}\colon {{\Bbb C}}^*\to X$ extends to a map ${\mathbf A}^{\!1}\to X$. Indeed, consider the canonical toric fibration $f\colon X\to Y$, where $Y$ is an affine toric variety of contractible type (see Remark \[canonical\_fibration\]). Since $Y$ is affine and the image of $v$ in $N_Y$ lies in the cone defining $Y$, the composition $f\circ\gamma_{v,x}$ extends to a map ${\mathbf A}^{\!1}\to Y$ (see [@Ful93 Chapter 2.3]). Since $f$ is proper, the valuative criterion for properness implies that $\gamma_{v,x}$ extends to a map $\widetilde{\gamma}_{v,x}\colon {\mathbf A}^{\!1}\to X$. Since $\widetilde{\gamma}_{v,x}(0)$ is clearly fixed by the ${{\Bbb C}}^*$-action induced by $\gamma_v$, it follows from i) that it is equal to one of the $x_i$. Therefore $\bigcup_{i=1}^rX_i=X$. Let us now describe $X_i$. If $x\in X_i$, then it is clear that $x\in U_{\sigma_i}$. Suppose now that $x\in U_{\sigma_i}$ is arbitrary.
Using again (\[formula\_action\]), we conclude that $\widetilde{\gamma}_{v,x}(0)=x_i$ if and only if for every $u\in (\sigma_i^{\vee}\cap M_X)\smallsetminus\{0\}$ with $\chi^u(x)\neq 0$, we have $\langle u,v\rangle>0$. Let $\tau$ be the face of $\sigma_i$ such that $x\in O({\tau})$. In this case, for $u\in \sigma_i^{\vee}\cap M_X$ we have $\chi^u(x)\neq 0$ if and only if $u\in\sigma_i^{\vee}\cap\tau^{\perp}\cap M_X$. We thus conclude that $X_i=\bigcup_{\tau}O({\tau})$, where the union is over the faces $\tau$ of $\sigma_i$ such that $\langle u,v\rangle>0$ for every $u\in (\sigma_i^{\vee}\cap\tau^{\perp}\cap M_X)\smallsetminus\{0\}$. On the other hand, a face $\tau$ of $\sigma_i$ is of the form $\sigma_Q$ for a unique face $Q$ of $P$ such that $u({\sigma_i})\in Q$. In this case $\sigma_i^{\vee}\cap\tau^{\perp}$ is generated by $\{u-u(\sigma_i)\mid u\in Q\}$. We deduce that $X_i$ is the union of those $O({\sigma_Q})$, over the faces $Q$ of $P$ containing $u({\sigma_i})$ and having the property that $\langle u,v\rangle >\langle u(\sigma_i),v\rangle$ for every $u\in Q$, with $u\neq u(\sigma_i)$. On the other hand, since $P=|\Delta_X|^{\vee}+P_0$, we see that for every face $Q$ of $P$, an element $u\in Q$ can be written as $u=u'+\sum_j\lambda_ju(\sigma_j)$, with $u'\in|\Delta_X|^{\vee}$ and $\lambda_j\in{{\Bbb R}}_{\geq 0}$, with $\sum_j\lambda_j=1$. Since $v\in |\Delta_X|$, it follows from the way we have ordered the maximal cones that $$O(\sigma_Q)\subseteq X_i\quad \text{if and only if}\quad i=\min\{j\mid u(\sigma_j)\in Q\}.$$ We conclude that $U_i$ is the union of those orbits $O(\sigma_Q)$ (with $Q$ a face of $P$) such that some $u(\sigma_j)$, with $j\leq i$, lies in $Q$. It is clear that if $Q$ has this property, then any face of $P$ that contains $Q$ also has this property. Equivalently, $U_i$ is a union of orbits with the property that if $O(\sigma)\subseteq U_i$, then $O(\tau)\subseteq U_i$ for every face $\tau$ of $\sigma$. This implies that $U_i$ is open.
Since $U_r=X_1\cup\ldots\cup X_r=X$, this completes the proof of ii). \[cor\_filtration\] If $X$ is a quasi-projective toric variety, with convex, full-dimensional fan support, then there are open torus-invariant subsets $$\emptyset=U_0\subseteq U_1\subseteq\ldots\subseteq U_r=X$$ such that for every $i\geq 1$, the set $U_i\smallsetminus U_{i-1}$ is a closed subset of some $U_{\sigma}$, where $\sigma$ is a maximal cone in $\Delta_X$. Moreover, if $X$ is smooth (resp. simplicial), then each $U_i\smallsetminus U_{i-1}$ is an affine space (resp. the quotient of an affine space by a finite group). We consider the $U_i$ given by Proposition \[prop\_filtration\], hence $U_i\smallsetminus U_{i-1}=X_i$. We have seen in the proof of the proposition that $X_i\subseteq U_{\sigma_i}$ and it follows from the definition that $$\label{eq_cor_filtration} X_i=\{x\in U_{\sigma_i}\mid\chi^u(x)=0\,\,\text{if}\,\,u\in(\sigma_i^{\vee}\cap M_X)\smallsetminus\{0\}, \langle u,v\rangle\leq 0\}.$$ Therefore $X_i$ is closed in $U_{\sigma_i}$. Suppose now that $X$ is simplicial and let $v_1,\ldots,v_n$ be the primitive generators of the rays in $\sigma_i$. If $w_1,\ldots,w_n\in M_X$ are such that $\langle w_j,v_j\rangle>0$ for all $j$ and $\langle w_j,v_k\rangle=0$ for $j\neq k$, then it is easy to deduce from (\[eq\_cor\_filtration\]) that $$X_i=\{x\in U_{\sigma_i}\mid \chi^{w_j}(x)=0\,\,\text{if}\,\,\langle w_j,v\rangle\leq 0\}$$ (note that for every nonzero $u\in \sigma_i^{\vee}\cap M_X$, some positive multiple of $u$ can be written as $\sum_ja_jw_j$, with $a_j\in{\mathbf Z}_{\geq 0}$, and if $\langle u,v\rangle\leq 0$, then there is $j$ with $a_j>0$ and $\langle w_j,v\rangle\leq 0$). This implies that $X_i=U_{\sigma_i}\cap V(\tau_i)$, where $\tau_i$ is the face of $\sigma_i$ spanned by those $v_j$, with $j$ such that $\langle w_j,v\rangle\leq 0$. In particular, $X_i$ is a simplicial affine toric variety of contractible type (smooth, if $X$ is smooth). This completes the proof of the corollary.
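To make the cell structure in the corollary concrete, here is a small computational sketch (ours, not part of the text) for $X={\mathbf P}^2$ with its standard fan; the ray data and the generic choice $v=(1,2)$ are our own. In the smooth case the proof exhibits $X_i\simeq {\mathbf A}^{\!c_i}$ with $c_i=\#\{j\mid \langle w_j,v\rangle>0\}$, where $w_1,\ldots,w_n$ is the basis dual to the rays of $\sigma_i$:

```python
from fractions import Fraction

# Maximal cones of the fan of P^2, each given by its two ray generators
# (a basis of Z^2, since P^2 is smooth).
cones = {
    "s12": [(1, 0), (0, 1)],
    "s23": [(0, 1), (-1, -1)],
    "s13": [(1, 0), (-1, -1)],
}

def dual_basis(r1, r2):
    """Return w1, w2 with <w_i, r_j> = delta_ij (rows of the inverse transpose)."""
    (a, b), (c, d) = r1, r2
    det = a * d - b * c
    w1 = (Fraction(d, det), Fraction(-c, det))
    w2 = (Fraction(-b, det), Fraction(a, det))
    return w1, w2

def cell_dim(cone, v):
    """dim X_i = #{j : <w_j, v> > 0}, as in the proof of the corollary."""
    ws = dual_basis(*cone)
    return sum(1 for w in ws if w[0] * v[0] + w[1] * v[1] > 0)

v = (1, 2)  # generic: no pairing <w_j, v> vanishes
dims = {name: cell_dim(c, v) for name, c in cones.items()}
print(dims)  # one cell of each dimension 0, 1, 2
```

The three cells of dimensions $0,1,2$ recover the Betti numbers $1,1,1$ of ${\mathbf P}^2$.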
The hypothesis that $X$ is quasi-projective is crucial for the construction of the filtration in Corollary \[cor\_filtration\]. For an example of a complete variety for which the construction does not lead to a filtration by open subsets, see [@Jur77]. One could also construct the filtration in Corollary \[cor\_filtration\] by following the approach in [@Ful93 Chapter 5.2]. Applications to cohomology {#applications-to-cohomology .unnumbered} -------------------------- Recall that by work of Deligne [@Deligne], for every complex algebraic variety $X$ (assumed to be reduced, but possibly reducible), the cohomology groups $H^q(X,{\mathbf Q})$ and $H^q_c(X,{\mathbf Q})$ (singular cohomology and cohomology with compact support) carry natural (rational) mixed Hodge structures. A pure Hodge structure of weight $q$ is *of Hodge-Tate type* if all Hodge numbers $h^{i,j}$ vanish, unless $i=j$. In particular, if the underlying vector space is nonzero, then $q$ is even. We can now give the main results concerning the cohomology of simplicial toric varieties that have convex, full-dimensional fan support. \[BM\] If $X$ is a smooth, quasi-projective, complex toric variety, which has convex, full-dimensional fan support, then all maps $A_m(X)\to H_{2m}^{BM}(X)$ are isomorphisms, where $A_m(X)$ is the Chow group of $m$-dimensional cycle classes and $H^{BM}_{2m}(X)$ is the $(2m)^{\rm th}$ Borel-Moore homology group of $X$. If $X$ is not smooth, but simplicial, then the maps are isomorphisms after tensoring with ${{\Bbb Q}}$. We omit the proof, as it follows verbatim the one in [@Ful93 Chapter 5.2], using Corollary \[cor\_filtration\]. \[pure\_coh\_global\] Let $X$ be a simplicial complex toric variety which has convex, full-dimensional fan support. For every $q$, the mixed Hodge structures on $H^q(X,{{\Bbb Q}})$ and $H^q_c(X,{{\Bbb Q}})$ are pure, of weight $q$, and of Hodge-Tate type. In particular, both $H^*(X,{{\Bbb Q}})$ and $H_c^*(X,{{\Bbb Q}})$ are even[^2].
We first prove the following lemma. \[lem\_surj\_coh\] Let $X$ be a smooth, quasi-projective, complex toric variety, with convex, full-dimensional fan support. If $U_0\subseteq\ldots\subseteq U_r=X$ is a filtration as in Corollary \[cor\_filtration\] and if $Z_i=X\smallsetminus U_i$, then the following hold: (a) If $q$ is odd and $0\leq i\leq r$, then $H^q_c(Z_i,{{\Bbb Q}})=0$. (b) If $q$ is even and $0\leq i\leq r-1$, then we have an exact sequence $$0\to H_c^q(Z_i\smallsetminus Z_{i+1},{{\Bbb Q}})\to H_c^q(Z_i,{{\Bbb Q}})\to H_c^q(Z_{i+1},{{\Bbb Q}})\to 0.$$ In particular, for every $q$, the mixed Hodge structure on $H^q_c(X,{{\Bbb Q}})$ is pure, of weight $q$, and of Hodge-Tate type. We prove the assertions in (a) and (b) by descending induction on $i$. For $i=r$, the assertion in (a) is trivial. We now assume that (a) holds for $i+1$ and show that both (a) and (b) hold for $i$. Consider the long exact sequence for the cohomology with compact support $$H^{q-1}_c(Z_{i+1},{{\Bbb Q}})\to H^{q}_c(Z_i\smallsetminus Z_{i+1},{{\Bbb Q}})\to H^{q}_c(Z_i,{{\Bbb Q}})\to H^q_c(Z_{i+1},{{\Bbb Q}})\to H^{q+1}_c(Z_i\smallsetminus Z_{i+1},{{\Bbb Q}}).$$ The key point is that by Corollary \[cor\_filtration\], $Z_i\smallsetminus Z_{i+1}$ is isomorphic to an affine space, hence $H_c^j(Z_i\smallsetminus Z_{i+1},{{\Bbb Q}})\simeq {{\Bbb Q}}$ if $j=2\cdot\dim(Z_i\smallsetminus Z_{i+1})$ and $H_c^j(Z_i\smallsetminus Z_{i+1},{{\Bbb Q}})=0$, otherwise. The above exact sequence implies that $H^q_c(Z_i,{{\Bbb Q}})=0$ if $q$ is odd and, using also the fact that (a) holds for $i+1$, we see that the sequence in (b) is exact when $q$ is even. Note that the maps in the exact sequence in (b) are maps of mixed Hodge structures. In particular, they are strict with respect to both the weight and Hodge filtrations.
We thus obtain by descending induction on $i$ that the mixed Hodge structure on $H^q_c(Z_i,{{\Bbb Q}})$ is pure, of weight $q$, and of Hodge-Tate type (recall that $H^{2d}_c({\mathbf A}^{\!d})$ is pure, of weight $2d$, and of Hodge-Tate type). By taking $i=0$, we obtain the last assertion in the lemma. Note first that since $X$ is simplicial, Poincaré duality implies that we have an isomorphism of mixed Hodge structures $$H^q(X,{{\Bbb Q}})\simeq H_c^{2d-q}(X,{{\Bbb Q}})^*\otimes {\mathbf Q}(-d),$$ where $d=\dim(X)$. This shows that the assertions about $H^*(X,{{\Bbb Q}})$ and $H^*_c(X,{{\Bbb Q}})$ are equivalent. Lemma \[lem\_surj\_coh\] implies that the assertions about $H^*_c(X,{{\Bbb Q}})$ hold if $X$ is smooth and quasi-projective, hence we are done in this case. In the general case, it is enough to show that the assertions about $H^*(X,{{\Bbb Q}})$ hold. We use Chow’s lemma and toric resolution of singularities (see Remark \[rmk\_toric\_resolution\]) to get a projective, birational toric map $g\colon \widetilde{X}\to X$ such that $\widetilde{X}$ is smooth and quasi-projective. Since the canonical map $g^*\colon H^q(X,{{\Bbb Q}})\to H^q(\widetilde{X},{{\Bbb Q}})$ is a morphism of mixed Hodge structures (hence strict with respect to both the weight and Hodge filtrations) and since we know that $H^q(\widetilde{X},{{\Bbb Q}})$ is pure, of weight $q$, and of Hodge-Tate type, it is enough to show that $g^*$ is injective. In turn, injectivity follows easily from Poincaré duality on the ${\mathbf Q}$-manifold $X$, coupled with the projection formula (see [@iversen IX.3.7]): for every $\eta \in H^*(X, {{\Bbb Q}})$, we have $$g_* \left( \{\widetilde{X}\} \cap g^* \eta \right) = \{X\} \cap \eta,$$ where the equality holds in the Borel-Moore homology of $X$. This completes the proof of the theorem.
We remark that the injectivity of $g^*$ also follows from the easy-to-prove fact that, in our situation, ${\mathbf Q}_X$ is a direct summand of $Rg_* \left({\mathbf Q}_{\widetilde{X}} \right)$. \[mhs\_intcoh\] The conclusions of Theorem \[pure\_coh\_global\] hold for $X$ not necessarily simplicial, provided we replace cohomology (with compact supports) with intersection cohomology (with compact supports). This can be seen by applying the theorem to a toric resolution of $X$ and by observing that the intersection cohomology of $X$ is a natural subquotient of the cohomology of any resolution (see, for example, [@decjag Thm. 4.3.1]). Betti numbers of simplicial toric varieties with convex, full-dimensional fan support {#betti-numbers-of-simplicial-toric-varieties-with-convex-full-dimensional-fan-support .unnumbered} ------------------------------------------------------------------------------------- We are now ready to compute the Betti numbers of simplicial toric varieties that admit a proper map to an affine toric variety, of contractible type. This makes use, as in the complete case (see [@Ful93 Chapter 4.5]), of the Hodge-Deligne polynomial that we now briefly review. Given a complex algebraic variety $X$ of dimension $n$, one considers the mixed Hodge structure on the groups $H_c^i(X,{\mathbf Q})$. For every $i$ and $m$, the $m^{\rm th}$ graded piece ${\rm gr}^W_{m}H^i_{c}(X,{\mathbf Q})$ with respect to the weight filtration is a pure Hodge structure of weight $m$. Therefore the Hodge numbers $h^{p,q}({\rm gr}^W_{m}H^i_{c}(X,{\mathbf Q}))$ are defined whenever $p+q=m$. The Hodge-Deligne polynomial of $X$ is given by $$E(X; u,v)=\sum_{p,q\geq 0}e_{p,q}u^pv^q\in{\mathbf Z}[u,v],\,\,\text{where}\,\,e_{p,q}=\sum_{i=0}^{2n} (-1)^ih^{p,q}({\rm gr}^W_{p+q}H^i_{c}(X,{\mathbf Q})).$$ In fact, the Hodge-Deligne polynomial is uniquely characterized by the following two properties: a) If $X$ is smooth and projective, then $E(X; u,v)$ is the usual Hodge polynomial of $X$. b) If $Y$ is a closed subset of $X$ and $U=X\smallsetminus Y$, then $$E(X;u,v)=E(Y;u,v)+E(U;u,v).$$ The Hodge-Deligne polynomial is also multiplicative, that is, if $X$ and $Y$ are complex algebraic varieties, then $$E(X\times Y;u,v)=E(X;u,v)\cdot E(Y;u,v).$$ Due to the additivity property in b) above, it is easy to compute the Hodge-Deligne polynomial of varieties that can be written as disjoint unions of simple varieties, such as affine spaces or tori. Note that by property a) above, we have $E({\mathbf P}^1; u,v)=uv+1$, hence property b) implies $E({\mathbf A}^{\!1}; u,v)=uv$ and $E({\mathbf C}^*; u,v)=uv-1$. Using the fact that the Hodge-Deligne polynomial is multiplicative, one obtains $E({\mathbf A}^{\!n};u,v)=(uv)^n$ and $E(({\mathbf C}^*)^n;u,v)=(uv-1)^n$. If $X$ is smooth and projective, of dimension $n$, then $$E(X; t,t)=\sum_{i=0}^{2n}(-1)^ib_i(X)t^i,\,\,\,\text{where}\,\,\,b_i(X)=\dim_{{\mathbf Q}}H^i(X,{\mathbf Q}),$$ that is, $E(X;t,t)=P_X(-t)$, where $P_X(t)$ is the Poincaré polynomial. This is a consequence of a) and of the Hodge decomposition. However, for an arbitrary $X$, we cannot recover $\dim_{\mathbf Q}H^i_c(X,{\mathbf Q})$ from the Hodge-Deligne polynomial. One case when this can be done is when we know that the mixed Hodge structure on each $H^i_c(X,{\mathbf Q})$ is pure of weight $i$. In this case, it follows from the definition that $$E(X;u,v)=\sum_{p,q\geq 0}(-1)^{p+q}h^{p,q}(H^{p+q}_c(X,{\mathbf Q}))u^pv^q$$ and therefore that $$E(X; t,t)=\sum_{i=0}^{2n}(-1)^i{\rm dim}_{{\mathbf Q}}H^i_c(X,{\mathbf Q})t^i.$$ \[HodgeDeligneforsheaves\] The formalism of the Hodge-Deligne polynomial extends to constructible complexes $K^\bullet$ of sheaves with the property that the stalks of their cohomology sheaves carry a mixed Hodge structure, or just a weight filtration.
In the proof of Theorem \[thm\_p\_sigma\] below we will use this with $K^\bullet$ the restriction of the intersection complex $IC_X$ of a toric variety $X$ to a union of torus orbits. Let $X$ be an $n$-dimensional toric variety. Let us denote by $d_{\ell}(X)$ the number of cones in $\Delta_X$ of codimension $\ell$. In other words, $(d_n(X),\ldots,d_0(X))$ is the $f$-vector of $X$ (see [@Stanley1; @Stanley2]) (the reason we label the entries differently from the usual way is for convenience in the relative setting). Since a toric variety $X$ is the disjoint union of its orbits, we obtain the following well-known formula $$\label{eq_Hodge_Deligne_general} E(X;u,v)=\sum_{\sigma\in\Delta_X}(uv-1)^{\rm codim(\sigma)}=\sum_{i=0}^nd_i(X)\cdot (uv-1)^i.$$ \[Betti\_convex\_support\] Let $X$ be a simplicial toric variety of dimension $n$, with convex, full-dimensional fan support. The odd cohomology (ordinary and with compact supports) vanishes and we have that $$\dim_{{{\Bbb Q}}}H^{2m}_c(X,{{\Bbb Q}})=\dim_{{{\Bbb Q}}}H^{2n-2m}(X,{{\Bbb Q}})=\sum_{i=m}^n(-1)^{i-m}{{i}\choose{m}}d_i(X).$$ By Theorem \[pure\_coh\_global\], the mixed Hodge structure on each $H^i_c(X,{{\Bbb Q}})$ is pure of weight $i$, hence the formula for the dimension of the cohomology with compact support follows from (\[eq\_Hodge\_Deligne\_general\]). The formula for the dimension of the usual cohomology then follows by Poincaré duality. Cohomology of fibers of toric maps with simplicial source ========================================================= Our goal in this section is to compute the Betti numbers of the fibers of toric fibrations $f\colon X\to Y$, when $X$ is simplicial. As in the previous section, the main ingredient is the following purity result. \[pure\_coh\] Let $f\colon X\to Y$ be a proper toric map between toric varieties, with $X$ simplicial. For every $y\in Y$ and every $q$, the mixed Hodge structure on $H^q(f^{-1}(y),{{\Bbb Q}})$ is pure, of weight $q$, and of Hodge-Tate type. 
In particular, $H^*(f^{-1}(y),{{\Bbb Q}})$ is even. We will deduce the theorem from Theorem \[pure\_coh\_global\], by showing that if $f\colon X\to Y$ is a toric fibration to an affine toric variety of contractible type, then the restriction map in cohomology from $X$ to the fiber over the torus-fixed point of $Y$ is an isomorphism. In what follows, we prove this in a slightly more general setting that we will need in the next section. Suppose that $X$ is a complex algebraic variety, considered with the analytic topology. Let $G$ be an algebraic group acting on $X$ and let ${\rm act}\colon G\times X\to X$ be the corresponding morphism. By an *equivariant complex of sheaves[^3]* on $X$ we mean a complex of sheaves ${\mathcal E}$ on $X$ with the property that there is an isomorphism[^4] ${\rm act}^*({\mathcal E})\simeq {\rm pr}_2^*({\mathcal E})$ of complexes on $G\times X$. With slight abuse of language, we say that an object in the derived category of sheaves is equivariant if it can be represented by an equivariant complex. An important example is that of the constant sheaf ${{\Bbb Q}}_X$. It is easy to see that if $f\colon X\to Y$ is a $G$-equivariant morphism and ${\mathcal E}$ is equivariant on $X$, then $Rf_*({\mathcal E})$ is equivariant on $Y$. The following lemma was proved in [@DL Lemma 6.5] in the case of the intersection complex. The general case follows in the same way, but we reproduce the argument for the benefit of the reader. \[retraction\_lemma\] Let $Y=U_{\sigma}$ be an affine toric variety with fixed point $y$. If $v\in N_Y\cap {\rm Int}(\sigma)$ and we consider the action of ${\mathbf C}^*$ on $Y$ induced by the 1-parameter subgroup $\gamma_v$, then for every ${\mathbf C}^*$-equivariant complex ${\mathcal E}$ on $Y$, the natural graded map $H^*(Y,{\mathcal E})\to H^*({\mathcal E}_y)$ is an isomorphism. 
The assertion in the lemma is equivalent to the fact that $H^*(Y, j_!({\mathcal E}))=0$, where $j\colon Y_0=Y\smallsetminus\{y\}\hookrightarrow Y$ is the inclusion. The hypothesis on $v$ implies that the map ${\mathbf C}^*\times Y\to Y$ given by $(t,x)\mapsto\gamma_v(t)\cdot x$ extends to a map $h\colon {\mathbf A}^{\!1}\times Y\to Y$ such that $$h^{-1}(y)=({\mathbf A}^{\!1}\times\{y\})\cup (\{0\}\times Y).$$ Consider the morphism $g\colon Y\to {\mathbf A}^{\!1}\times Y$ given by $g(x)=(1,x)$, so that $h\circ g$ is an isomorphism. Therefore the composition $$H^*(Y,j_!({\mathcal E}))\to H^*({\mathbf A}^{\!1}\times Y,h^*j_!({\mathcal E}))\to H^*(Y, g^*h^*j_!({\mathcal E}))$$ is an isomorphism, hence it is enough to prove that $H^*({\mathbf A}^{\!1}\times Y,h^*j_!({\mathcal E}))=0$. On the other hand, we have $h^{-1}(Y_0)={\mathbf C}^*\times Y_0$ and it is easy to see that $h^*j_!({\mathcal E})\simeq j'_!h_0^*({\mathcal E})$, where $j'\colon {\mathbf C}^*\times Y_0\hookrightarrow {\mathbf A}^1\times Y$ is the inclusion and $h_0\colon {\mathbf C}^*\times Y_0\to Y_0$ is induced by $h$. Since ${\mathcal E}$ is equivariant with respect to the ${\mathbf C}^*$-action, it follows that we have an isomorphism $h_0^*({\mathcal E})\simeq {\rm pr}_2^*({\mathcal E})$, hence $$j'_!h_0^*({\mathcal E})\simeq j''_!({{\Bbb Q}}_{{\mathbf C}^*})\boxtimes j_!({\mathcal E}),$$ where $j''\colon {\mathbf C}^*\hookrightarrow {\mathbf A}^{\!1}$ is the inclusion. Since $H^*({\mathbf A}^{\!1}, j''_!({\mathbf Q}_{{\mathbf C}^*}))=0$, the Künneth formula implies $H^*({\mathbf A}^{\!1}\times Y, h^*j_!({\mathcal E}))=0$. \[rmk\_retraction\_lemma\] Suppose that $f\colon X\to Y$ is a proper surjective toric map, where $Y$ is as in Lemma \[retraction\_lemma\]. If ${\mathcal E}$ is a $T_X$-equivariant complex on $X$, then we may choose $v$ as in the lemma such that $v$ lies in the image of $f_N$. Therefore we get actions of ${\mathbf C}^*$ on $X$ and $Y$ such that $f$ is ${\mathbf C}^*$-equivariant.
In this case we may apply the lemma to the complex $Rf_*({\mathcal E})$ and conclude that we have canonical isomorphisms $$H^i(X,{\mathcal E})\simeq H^i(Y, Rf_*({\mathcal E}))\simeq H^i(Rf_*({\mathcal E})_y)\simeq H^i(f^{-1}(y),{\mathcal E}),$$ where the last isomorphism follows from proper base-change. We include the next remark for future reference. \[rmk2\_retraction\_lemma\] In the setting of Lemma \[retraction\_lemma\], we may take ${\mathcal E}={\mathcal I}_Y:=IC_Y[-\dim(Y)]$, where $IC_Y$ is the intersection cohomology complex on $Y$ to conclude $$IH^i(Y,{{\Bbb Q}})\simeq {\mathcal H}^i({\mathcal I}_Y)_y.$$ If $f\colon X\to Y$ is a proper, surjective, toric map, then we may take ${\mathcal E}={\mathcal I}_X$ to conclude $$IH^i(X,{{\Bbb Q}})\simeq H^i(f^{-1}(y),{\mathcal I}_X).$$ We can now prove the main result of this section. We first note that we may assume that $f$ is a fibration, $Y$ is affine and of contractible type, and $y\in Y$ is the fixed point. Indeed, it follows from Proposition \[Stein\_fact\] and Remark \[rmk\_nonsurj\] that we may write $f$ as a composition $X\overset{g}\to Z\overset{h}\to Y$, with $g$ a fibration and $h$ finite. Since every fiber of $f$ is a disjoint union of fibers of $g$, it follows that after replacing $f$ by $g$, we may assume that $f$ is a fibration. Suppose now that $\tau\in\Delta_Y$ is such that $y\in O(\tau)$. After replacing $f$ by $f^{-1}(U_{\tau})\to U_{\tau}$, we may assume that $Y=U_{\tau}$. Furthermore, by Lemma \[lm\_prod\_str\], we may assume that the linear span of $\tau$ is the ambient space and that $y=x_{\tau}$. In this case we may apply Lemma \[retraction\_lemma\] to $Rf_*({{\Bbb Q}}_X)$ (see Remark \[rmk\_retraction\_lemma\]) to conclude that the canonical restriction map $$H^i(X,{{\Bbb Q}})\to H^i(f^{-1}(y),{{\Bbb Q}})$$ is an isomorphism. Since this is a morphism of mixed Hodge structures, the assertions in the theorem follow from those in Theorem \[pure\_coh\_global\]. 
As in the case of toric varieties, it is easy to compute the Hodge-Deligne polynomial of fibers of toric fibrations. Given a toric fibration $f\colon X\to Y$ and a cone $\tau\in\Delta_Y$, we put $$d_{\ell}(X/\tau)=\#\{\sigma\in\Delta_X\mid f_*(\sigma)=\tau,{\rm codim}(\sigma)-{\rm codim}(\tau)=\ell\}.$$ Note that when $Y$ is a point, $d_{\ell}(X/\{0\})$ is the same as the invariant $d_{\ell}(X)$ introduced in the previous section. Therefore, if the relative dimension of $f$ is $r=\dim(X)-\dim(Y)$, then the vector $(d_r(X/\tau),\ldots,d_0(X/\tau))$ is a relative version of the $f$-vector of a toric variety. \[HD\_fiber\] Let $X$, $Y$ be toric varieties and let $f\colon X\to Y$ be a toric fibration. If $\tau$ is a cone in $\Delta_Y$ and $y\in O(\tau)$, then $$E(f^{-1}(y); u,v)=\sum_{\ell\geq 0} d_{\ell}(X/\tau)\cdot (uv-1)^{\ell}.$$ In particular, we have $\chi(f^{-1}(y))=d_0(X/\tau)$. Recall that by Proposition \[str\_fibers\], we can write $f^{-1}(y)$ as a disjoint union of locally closed subsets, with the subsets in one-to-one correspondence with the cones $\sigma\in\Delta_X$ that satisfy $f_*(\sigma)=\tau$. Moreover, the subset corresponding to $\sigma$ is isomorphic to $({\mathbf C}^*)^{{\rm codim}(\sigma)-{\rm codim}(\tau)}$. By additivity of the Hodge-Deligne polynomial, we thus get $$E(f^{-1}(y); u,v)=\sum_{f_*(\sigma)=\tau}(uv-1)^{{\rm codim}(\sigma)-{\rm codim}(\tau)}=\sum_{\ell\geq 0}d_{\ell}(X/\tau)\cdot (uv-1)^{\ell}.$$ The last assertion follows from the usual identity $\chi(f^{-1}(y))=E(f^{-1}(y);1,1)$. \[Betti\_fib\] Let $X$ and $Y$ be toric varieties, with $X$ simplicial, and let $f\colon X\to Y$ be a toric fibration. If $\tau$ is a cone in $\Delta_Y$ and $y\in O(\tau)$, then $$\dim_{\mathbf Q}H^{2m}(f^{-1}(y),{\mathbf Q})=\sum_{\ell\geq m}(-1)^{\ell-m}{{\ell}\choose m}d_{\ell}(X/\tau).$$ Note that since $f$ is proper, $f^{-1}(y)$ is compact, hence the usual cohomology agrees with the cohomology with compact support.
By Theorem \[pure\_coh\], the mixed Hodge structure on $H^i(f^{-1}(y),{\mathbf Q})$ is pure, hence $$E(f^{-1}(y);t,t)=\sum_{i\geq 0}(-1)^ib_i(f^{-1}(y))t^i,\quad\text{where}\quad b_i(f^{-1}(y))=\dim_{\mathbf Q}H^{i}(f^{-1}(y),{\mathbf Q}).$$ On the other hand, it follows from Proposition \[HD\_fiber\] that $$E(f^{-1}(y);t,t)=\sum_{\ell\geq 0}d_{\ell}(X/\tau)(t^2-1)^{\ell} =\sum_{\ell\geq 0}d_{\ell}(X/\tau)\cdot\sum_{m=0}^{\ell}(-1)^{\ell-m}{{\ell}\choose m}t^{2m}$$ $$=\sum_{m\geq 0}\left(\sum_{\ell\geq m}(-1)^{\ell-m}{{\ell}\choose m}d_{\ell}(X/\tau)\right)t^{2m}.$$ The formula in the corollary now follows by equating the coefficients of the powers of $t$ in the two expressions for $E(f^{-1}(y);t,t)$. If in Corollary \[Betti\_fib\] we consider the case when $Y$ is a point, then we recover the familiar formula for the Betti numbers of a simplicial, complete toric variety ${\rm (}$see [@Ful93 Chapter 4.5]${\rm )}$. \[weights\_trick\] In fact, one can prove the assertions in Theorems \[pure\_coh\_global\] and \[pure\_coh\] without making use of the filtration constructed in the previous section, arguing as follows. We have seen that it is enough to prove that when $f\colon X\to Y$ is a proper toric fibration, with $X$ smooth and $Y$ affine, of contractible type, and with $y\in Y$ being the torus-fixed point, then the mixed Hodge structures on $H^i(X,{{\Bbb Q}})$ and $H^i(f^{-1}(y),{{\Bbb Q}})$ are pure of weight $i$ (the fact that the Hodge structures are of Hodge-Tate type then follows from the computation of the Hodge-Deligne polynomials). As we have seen, we have an isomorphism of mixed Hodge structures $$H^i(X,{{\Bbb Q}})\to H^i(f^{-1}(y),{{\Bbb Q}}).$$ Since $X$ is smooth, all weights on $H^i(X,{{\Bbb Q}})$ are $\geq i$, while since $f^{-1}(y)$ is compact, all weights on $H^i(f^{-1}(y),{{\Bbb Q}})$ are $\leq i$. We conclude that the mixed Hodge structures on each of $H^i(X,{{\Bbb Q}})$ and $H^i(f^{-1}(y),{{\Bbb Q}})$ are pure of weight $i$. 
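As a numerical sanity check (ours, not from the text) of the binomial extraction above, one can compare the closed formula for $b_{2m}$ with a direct expansion of $\sum_{\ell}d_{\ell}(t^2-1)^{\ell}$, in the absolute case where $Y$ is a point. For ${\mathbf P}^2$ the $f$-vector is $(d_0,d_1,d_2)=(3,3,1)$, and for ${\mathbf P}^1\times{\mathbf P}^1$ it is $(4,4,1)$:

```python
from math import comb

def betti_numbers(d):
    """b_{2m} = sum_{l >= m} (-1)^(l-m) C(l, m) d_l  (the closed formula)."""
    n = len(d) - 1
    return [sum((-1) ** (l - m) * comb(l, m) * d[l] for l in range(m, n + 1))
            for m in range(n + 1)]

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = degree)."""
    res = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            res[i + j] += a * b
    return res

def betti_by_expansion(d):
    """Expand sum_l d_l (s - 1)^l with s = t^2; the coefficient of s^m is b_{2m}."""
    n = len(d) - 1
    total = [0] * (n + 1)
    for l, dl in enumerate(d):
        p = [1]
        for _ in range(l):
            p = poly_mul(p, [-1, 1])  # multiply by (s - 1)
        for m, c in enumerate(p):
            total[m] += dl * c
    return total

print(betti_numbers([3, 3, 1]))  # P^2:       [1, 1, 1]
print(betti_numbers([4, 4, 1]))  # P^1 x P^1: [1, 2, 1]
```

Both computations agree, as they must, since they differ only in whether the binomial theorem is applied symbolically or numerically.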
The Decomposition Theorem for toric maps ======================================== \[sec\_dec\] The main result of this section is the following version of the Decomposition Theorem in the case of toric fibrations. Recall that for an algebraic variety $X$, we denote by $IC_X$ the intersection complex on $X$. \[toric\_dec\_thm\] If $X$ and $Y$ are complex toric varieties and $f\colon X\to Y$ is a toric fibration, then we have a decomposition $$\label{eq_toric_dec_thm} Rf_*(IC_X)\simeq \bigoplus_{\tau\in\Delta_Y}\bigoplus_{b\in{{\Bbb Z}}}IC_{V(\tau)}^{\oplus s_{\tau,b}}[-b].$$ Furthermore, the nonnegative integers $s_{\tau,b}$ satisfy the following conditions: i) $s_{\tau,b}=s_{\tau,-b}$ for every $\tau\in\Delta_Y$ and every $b\in{{\Bbb Z}}$. ii) If $f$ is projective, then $s_{\tau,b}\geq s_{\tau,b+2\ell}$ for every $\tau\in\Delta_Y$ and every $b, \ell \in{{\Bbb Z}}_{\geq 0}$. iii) $s_{\tau,b}=0$ if $b+\dim(X)-\dim(V(\tau))$ is odd. With the notation in Theorem \[toric\_dec\_thm\], for every $\tau\in \Delta_Y$ we put $\delta_{\tau}=\sum_bs_{\tau,b}$. It is clear that $\delta_{\tau}$ is a nonnegative integer. The subvariety $V(\tau)$ is a *support* for $f$ if $\delta_{\tau}>0$. In Sections \[sec\_simpl\] and \[sec\_gen\] we will give combinatorial descriptions of the invariants $\delta_{\tau}$, determining in particular the supports of $f$. \[DT\_non\_fibration\] Suppose that $X$ and $Y$ are toric varieties and $f\colon X\to Y$ is any proper toric map. It follows from Proposition \[Stein\_fact\] and Remark \[rmk\_nonsurj\] that we can factor $f$ as $X\overset{g}\to Z\overset{h}\to Y$, with $g$ a toric fibration and $h$ a finite toric map. For every $\tau\in\Delta_Z$, we have an induced finite morphism of algebraic groups $O(\tau)\to O(h_*(\tau))$.
If we denote by $O'(\tau)$ the image of this map, then $O'(\tau)$ is a torus and we have a finite, surjective, étale map $h_{\tau}\colon O(\tau)\to O'(\tau)$, which is the quotient by a finite group ${\rm (}$see Remark \[factorization\_maps\_tori\]${\rm )}$. It follows that if ${\mathcal L}_{\tau}:=(h_{\tau})_*({\mathbf Q}_{O(\tau)})$, then ${\mathcal L}_{\tau}$ is a local system on $O'(\tau)$. Moreover, the induced map $\overline{h}_\tau:V(\tau)\to\overline{O'(\tau)}$ is finite, hence the direct image by this map is $t$-exact for the middle perversity $t$-structure, and therefore it preserves intersection complexes with twisted coefficients. This implies that ${\overline{h}_\tau}_*(IC_{V(\tau)})\simeq IC_{\overline{O'(\tau)}}({\mathcal L}_{\tau})$. We thus obtain from Theorem \[toric\_dec\_thm\] the following decomposition in this general toric setting: $$Rf_*(IC_X)\simeq \bigoplus_{\tau\in\Delta_Z}\bigoplus_{b\in{{\Bbb Z}}}IC_{\overline{O'(\tau)}}^{\oplus s_{\tau,b}}({\mathcal L}_{\tau})[-b].$$ Clearly, the supports of $f$ are the images via $h$ of the supports of the toric fibration $g.$ Before giving the proof of Theorem \[toric\_dec\_thm\] we make some preparations. We begin by recalling the following general statement of the Decomposition Theorem, see [@BBD Théorème 6.2.5]. \[gen\_dec\_thm\] Let $X$ and $Y$ be complex algebraic varieties and consider a proper morphism $f\colon X\to Y$. We have a finite direct sum decomposition $$Rf_*(IC_X)\simeq \bigoplus_{\alpha}IC_{\overline{Y_{\alpha}}}(L_{\alpha})[-d_{\alpha}],$$ where each $Y_{\alpha}$ is a smooth, irreducible, locally closed subset of $Y,$ $L_{\alpha}$ is a local system on $Y_{\alpha}$ and $d_\alpha \in {{\Bbb Z}}.$ In order to prove Theorem \[toric\_dec\_thm\], we will need to show that under the assumptions of the theorem, all varieties $\overline{Y_{\alpha}}$ are torus-invariant and the local systems $L_{\alpha}$ are trivial. 
Properties i) and ii) will then follow from Poincaré duality and relative Hard Lefschetz. Finally, property iii) will be a consequence of the following proposition. \[even\_IC\] If $f\colon X\to Y$ is a proper toric map, then for every $y\in Y$, we have $H^i(f^{-1}(y),IC_X)=0$ whenever $i+\dim(X)$ is odd. By Proposition \[Stein\_fact\] and Remark \[rmk\_nonsurj\], we may factor $f$ as a composition $X\overset{g}\to Z\overset{h}\to Y$, with $g$ a toric fibration and $h$ finite. Since a fiber of $f$ is a disjoint union of fibers of $g$, it follows that we may assume that $f$ is a fibration. Let $\pi\colon X'\to X$ be a proper, birational, toric morphism such that $X'$ is smooth and let $f'=f\circ\pi$. Since $\pi$ is birational, it follows from Theorem \[gen\_dec\_thm\] that $IC_X$ is a direct summand of $R\pi_*(IC_{X'})$. By restricting over $f^{-1}(y)$, applying base-change, and taking the $i^{\rm th}$ cohomology, we conclude that $H^i(f^{-1}(y),IC_X)$ is a summand of $$H^i(f^{-1}(y), R\pi_*(IC_{X'}))=H^i({f'}^{-1}(y),IC_{X'})=H^{i+n}({f'}^{-1}(y),{{\Bbb Q}}),$$ where $n=\dim(X)$ and the second equality follows from the fact that, $X'$ being smooth, we have $IC_{X'}={{\Bbb Q}}_{X'}[n]$. On the other hand, since $X'$ is smooth and $f\circ\pi$ is a fibration, we may apply Theorem \[pure\_coh\] to get $H^{i+n}({f'}^{-1}(y),{{\Bbb Q}})=0$ when $i+n$ is odd. This completes the proof of the proposition. \[mhs\_Saito\] It follows from Saito’s theory of mixed Hodge modules [@Saito] that for every proper morphism $f\colon X\to Y$ of complex algebraic varieties and every $y\in Y$, the cohomology groups $H^{i-\dim X}(f^{-1}(y),IC_X)=H^{i}(f^{-1}(y),\mathcal{I}_X)$ carry a natural mixed Hodge structure. The argument in the proof of Proposition \[even\_IC\], together with Theorem \[pure\_coh\], implies that if $f$ is a toric map, then this mixed Hodge structure is pure of weight $i$.
The fact that in the toric decomposition theorem we only have torus-invariant subvarieties as supports will follow from the next lemma. Given a toric variety $Y$, we denote by $\Omega_Y$ the stratification by the $T_Y$-orbits. Recall that a complex of sheaves ${\mathcal E}$ on $Y$ is *$\Omega_Y$-constructible* if for every $i$ and every $O\in \Omega_Y$, the restriction ${\mathcal H}^i({\mathcal E})\vert_O$ is a local system on $O$. The intersection complex of a toric variety $Y$ is $\Omega_Y$-constructible in a strong sense, i.e. each of these restrictions is a constant (torus) equivariant sheaf on the orbit (see [@BL Lemma 5.15]). The same is true for the direct image complex in (\[eq\_toric\_dec\_thm\]), and the lemma that follows is a step in proving Theorem \[toric\_dec\_thm\]. \[Omega\_constructibility\] If $f\colon X\to Y$ is a proper toric fibration, then $Rf_*(IC_X)$ is $\Omega_Y$-constructible. In fact, the restriction of each ${\mathcal H}^i(Rf_*(IC_X))$ to a $T_Y$-orbit is a constant sheaf. In particular, $IC_Y$ is $\Omega_Y$-constructible. Let $\tau\in\Delta_Y$ be fixed. In order to describe the restriction of some ${\mathcal H}^i(Rf_*(IC_X))$ to the orbit $O(\tau)$, we may restrict to the affine open subset $U_{\tau}$ and thus assume that $Y=U_{\tau}$. Lemma \[lm\_prod\_str\] implies that we have isomorphisms $Y\simeq Y'\times O(\tau)$ and $X\simeq X'\times O(\tau)$, with $Y'$ having a fixed point $y$, such that $f$ gets identified with $f'\times {\rm Id}_{O(\tau)}$, where $f'\colon X'\to Y'$ is a toric fibration. In this case $IC_{X}$ gets identified with ${\rm pr}_1^*(IC_{X'})[r]$, where $r=\dim(O(\tau))$. It is then clear that $${\mathcal H}^i(Rf_*(IC_X))\vert_{O(\tau)}\simeq {\mathcal H}^{i+r}(Rf'_*(IC_{X'}))_y\otimes {{\Bbb Q}}_{O(\tau)}.$$ This gives the first two assertions in the lemma. The third is the special case $f={\rm Id}.$ We can now prove the main result of this section.
Given a toric fibration $f\colon X\to Y$, we obtain via Theorem \[gen\_dec\_thm\] a finite direct sum decomposition $$Rf_*(IC_X)\simeq \bigoplus_{\alpha}IC_{\overline{Y_{\alpha}}}(L_{\alpha})[-d_{\alpha}],$$ with each $Y_{\alpha}$ a smooth, irreducible, locally closed subset of $Y$ and $L_{\alpha}$ a local system on $Y_{\alpha}$. We may assume that each $L_{\alpha}$ is nonzero and indecomposable. Since $Rf_*(IC_X)$ is $\Omega_Y$-constructible by Lemma \[Omega\_constructibility\], so is each term $IC_{\overline{Y_{\alpha}}}(L_{\alpha}).$ It follows that $\overline{Y_{\alpha}}= V(\sigma_\alpha)$ for a unique cone ${\sigma}_\alpha \in \Delta_Y$ and that we can take $Y_\alpha= O({\sigma}_\alpha).$ The proof of the same lemma implies that the restriction of ${\mathcal H}^{-\dim {O({\sigma}_\alpha)}}(IC_{\overline{V({\sigma}_\alpha)}} (L_{\alpha}))$ to $ O(\sigma_\alpha)$ is constant. This restriction is $L_\alpha,$ which is thus constant, hence isomorphic to ${{\Bbb Q}}_{O({\sigma}_{\alpha})}$ (being indecomposable). Therefore $IC_{\overline{Y_{\alpha}}}(L_{\alpha})=IC_{V(\sigma_\alpha)}$. The assertions in i) and ii) now follow from Poincaré duality and relative Hard Lefschetz (see [@BBD Théorème 5.4.10]). In order to check the assertion in iii), we consider for $\tau\in \Delta_Y$ the stalk at $x_{\tau}$ for both sides of (\[eq\_toric\_dec\_thm\]). By taking the $i^{\rm th}$ cohomology and applying base-change, we conclude that ${\mathcal H}^{i-b}(IC_{V(\tau)})_{x_{\tau}}^{\oplus s_{\tau,b}}$ is a direct summand of $H^i(f^{-1}(x_{\tau}),IC_X)$. If we have $b+\dim(X)-\dim(V(\tau))$ odd, then Proposition \[even\_IC\] implies that $H^i(f^{-1}(x_{\tau}),IC_X)=0$ when $i=b-\dim(V(\tau))$. However, in this case ${\mathcal H}^{i-b}(IC_{V(\tau)})_{x_{\tau}}\simeq {{\Bbb Q}}$, which implies $s_{\tau,b}=0$. \[rmk\_KS1\] As we have mentioned in the Introduction, Katz and Stapledon associate certain invariants, the *local $h$-polynomials*, to maps between posets.
This is done in a purely combinatorial way. Given a toric fibration $f\colon X\to Y$, their framework can be applied to the map $f_*\colon\Delta_X\to\Delta_Y$ and it turns out that our invariants $s_{\tau,b}$ can be interpreted as coefficients of the local $h$-polynomial. Due to the combinatorial definition, the nonnegativity of these coefficients is not a priori clear. However, one of the main results in [@KS] says that in the case of a regular rational polyhedral subdivision of a rational polytope ${\rm (}$which corresponds to a projective toric birational morphism between toric varieties${\rm )}$, the coefficients of the local $h$-polynomial are nonnegative, symmetric, and unimodal ${\rm (}$see [@KS Theorem 6.1]${\rm )}$. Combinatorics of the toric Decomposition Theorem: the simplicial case ===================================================================== [sec\_simpl]{} In this section we study toric fibrations $f\colon X\to Y$, with $X$ and $Y$ simplicial toric varieties. In this case we can determine explicitly all multiplicities $s_{\tau,b}$ that appear in Theorem \[toric\_dec\_thm\]. We begin by recalling some elementary combinatorial definitions and facts about the incidence algebra associated with a partially ordered set. For an introduction to incidence algebras and applications, see [@Rota64] or [@Stanley4 §3.6]. If $({\mathcal P},\leq)$ is a finite poset and $K$ is a commutative ring with identity, we have the *incidence algebra* ${\mathbb I}({\mathcal P},K)$ consisting of the set of functions $$f\colon \{(a,b)\in {\mathcal P}\times {\mathcal P}\mid a\leq b\}\to K$$ with a *convolution* operation defined by $$f\star g (a,b):=\sum_{a\leq c\leq b}f(a,c)\cdot g(c,b).$$ This is an associative operation with an identity element given by the *delta function* $\delta$[^5], with $\delta(a,b)=1$ if $a=b$ and $\delta(a,b)=0$, otherwise. 
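The convolution just defined, together with the convolution inverse of the zeta function (the Möbius function discussed next), can be checked numerically. The following sketch is our own illustration, with hypothetical names (`convolve`, `mobius`); it verifies, on the Boolean lattice of subsets of a three-element set, the formula $\mu_{\mathcal P}(A,B)=(-1)^{\#(B\smallsetminus A)}$ and the identity $\zeta\star\mu=\delta$.

```python
# Minimal sketch of the incidence algebra of a finite poset over the integers.
# The poset is given by a list of elements and a comparability test `leq`.
from itertools import combinations

def convolve(f, g, elems, leq):
    """Convolution (f*g)(a,b) = sum over a <= c <= b of f(a,c) g(c,b)."""
    return {(a, b): sum(f[(a, c)] * g[(c, b)]
                        for c in elems if leq(a, c) and leq(c, b))
            for a in elems for b in elems if leq(a, b)}

def mobius(elems, leq):
    """Möbius function, the convolution inverse of zeta, computed recursively."""
    mu = {}
    # process intervals by increasing size, so smaller subintervals are known
    intervals = sorted(((a, b) for a in elems for b in elems if leq(a, b)),
                       key=lambda ab: sum(1 for c in elems
                                          if leq(ab[0], c) and leq(c, ab[1])))
    for a, b in intervals:
        if a == b:
            mu[(a, b)] = 1
        else:
            mu[(a, b)] = -sum(mu[(a, c)] for c in elems
                              if leq(a, c) and leq(c, b) and c != b)
    return mu

# Example: subsets of {0, 1, 2}, ordered by inclusion.
elems = [frozenset(s) for r in range(4) for s in combinations(range(3), r)]
leq = lambda a, b: a <= b
mu = mobius(elems, leq)
# mu(A, B) = (-1)^{#(B \ A)}:
assert all(mu[(a, b)] == (-1) ** len(b - a) for (a, b) in mu)
# zeta * mu = delta:
zeta = {(a, b): 1 for a in elems for b in elems if leq(a, b)}
delta = convolve(zeta, mu, elems, leq)
assert all(v == (1 if a == b else 0) for (a, b), v in delta.items())
```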
It is easy to check, see [@Stanley4 Proposition 3.6.2], that $f$ has a (unique) inverse with respect to convolution if and only if $f(a,a)$ is a unit of $K$ for every $a\in {\mathcal P}$ (the proof in [@Stanley4], given for $K={{\Bbb Z}}$, holds without any change for any commutative ring with identity). An important example is that of the function $\zeta$ given by $\zeta(a,b)=1$ whenever $a\leq b$. The inverse of $\zeta$ with respect to convolution is the *Möbius function* $\mu_{\mathcal P}$ of ${\mathcal P}$. For example, if $({\mathcal P},\leq)$ is the set of all subsets of a finite set, ordered by inclusion, then it is easy to see that $\mu_{\mathcal P}(A,B)=(-1)^{\#(B\smallsetminus A)}$ whenever $A\subseteq B$. Another fact that follows easily from the definition is that if $f$ is a function as above with inverse $g$ with respect to convolution and $\phi,\psi\colon {\mathcal P}\to K$ are functions such that $\phi(x)=\sum_{y\leq x}f(y,x)\psi(y)$ for every $x\in {\mathcal P}$, then $\psi(y)=\sum_{x\leq y}g(x,y)\phi(x)$ for every $y\in {\mathcal P}$. When $f=\zeta$, this is the *Möbius inversion formula*. After these preparations, we return to toric maps. In the following theorem we consider a toric fibration $f\colon X\to Y$, in which both $X$ and $Y$ are simplicial. In this case the decomposition (\[eq\_toric\_dec\_thm\]) becomes easier to describe, since $IC_X={{\Bbb Q}}_X[\dim(X)]$ and $IC_{V(\tau)}={{\Bbb Q}}_{V(\tau)}[\dim(V(\tau))]$ for every cone $\tau\in\Delta_Y$ (we use the fact that both $X$ and $V(\tau)$ are ${{\Bbb Q}}$-manifolds). Recall that if $f\colon X\to Y$ is a toric fibration, then for every $\tau\in\Delta_Y$ we put $$d_{\ell}(X/\tau)=\#\{\alpha\in\Delta_X\mid f_*(\alpha)=\tau, {\rm codim}(\alpha)-{\rm codim}(\tau)=\ell\}.$$ \[form\_both\_simplicial\] Suppose we are in the setting of Theorem \[toric\_dec\_thm\], with both $X$ and $Y$ simplicial. 1.
For every $\tau\in\Delta_Y$, we have $$\delta_{\tau}:=\sum_bs_{\tau,b}=\sum_{\sigma\subseteq\tau}(-1)^{\dim(\tau)-\dim(\sigma)}d_0(X/\sigma).$$ 2. For every $m\in{{\Bbb Z}}$ and every $\tau\in\Delta_Y$, we have $$s_{\tau,2m +\dim(V(\tau))-\dim(X)}=\sum_{\sigma\subseteq\tau}(-1)^{\dim(\tau)-\dim(\sigma)}\cdot \sum_{\ell\geq m}(-1)^{\ell-m}{{\ell}\choose m}d_{\ell}(X/\sigma)$$ ${\rm (}$where the right-hand side is understood to be $0$ if $m<0$${\rm )}$, while $s_{\tau,i+\dim(V(\tau))-\dim(X)}=0$ if $i$ is odd. Let $d_X=\dim(X)$. Since $X$ and $Y$ are simplicial, it follows from Theorem \[toric\_dec\_thm\] that we have a decomposition $$\label{eq_form_both_simplicial} Rf_*({{\Bbb Q}}_X[d_X])\simeq\bigoplus_{\tau\in\Delta_Y}\bigoplus_{b\in{{\Bbb Z}}}{{\Bbb Q}}_{V(\tau)}^{\oplus s_{\tau,b}}[\dim(V(\tau))-b].$$ If $\sigma\in\Delta_Y$, by taking the stalk at $x_{\sigma}$ and computing the $(i-d_X)^{\rm th}$ cohomology, we obtain via base-change $$\label{eq3_form_both_simplicial} H^{i}(f^{-1}(x_{\sigma}),{{\Bbb Q}})\simeq \bigoplus_{\tau\subseteq\sigma}{{\Bbb Q}}^{\oplus s_{\tau,i+\dim(V(\tau))-d_X}}.$$ The second assertion in ii) follows directly from Theorem \[toric\_dec\_thm\], while Theorem \[pure\_coh\] implies $H^j(f^{-1}(x_{\sigma}),{{\Bbb Q}})=0$ for $j$ odd. We conclude that $$\chi(f^{-1}(x_{\sigma}))=\sum_{i\geq 0}\dim_{{{\Bbb Q}}}H^i(f^{-1}(x_{\sigma}),{{\Bbb Q}})=\sum_{\tau\subseteq\sigma}\delta_{\tau}.$$ The Möbius inversion formula for the poset $\Delta_Y$ implies $$\label{eq2_form_both_simplicial} \delta_{\tau}=\sum_{\sigma\subseteq\tau}\mu_{\Delta_Y}(\sigma,\tau)\chi(f^{-1}(x_{\sigma}))=\sum_{\sigma\subseteq\tau}\mu_{\Delta_Y}(\sigma,\tau)d_0(X/\sigma),$$ where the second equality follows from Proposition \[HD\_fiber\].
On the other hand, $\mu_{\Delta_Y}(\sigma,\tau)$ only depends on the interval $[\sigma,\tau]$ in $\Delta_Y$ and since $\tau$ is simplicial, this interval is in order-preserving bijection with the poset of all subsets of a set with $\dim(\tau)-\dim(\sigma)$ elements. Therefore $\mu_{\Delta_Y}(\sigma,\tau)=(-1)^{\dim(\tau)-\dim(\sigma)}$. The formula in (\[eq2\_form\_both\_simplicial\]) thus gives the assertion in i). We proceed similarly to prove ii). Let $m$ be a fixed integer. It follows from (\[eq3\_form\_both\_simplicial\]) that for every $\sigma\in\Delta_Y$ we have $$\dim_{{{\Bbb Q}}}H^{2m}(f^{-1}(x_{\sigma}),{{\Bbb Q}})=\sum_{\tau\subseteq\sigma}s_{\tau,2m+\dim(V(\tau))-d_X}.$$ The Möbius inversion formula and Corollary \[Betti\_fib\] imply $$s_{\tau,2m+\dim(V(\tau))-d_X}=\sum_{\sigma\subseteq\tau}(-1)^{\dim(\tau)-\dim(\sigma)}\dim_{{{\Bbb Q}}}H^{2m}(f^{-1}(x_{\sigma}),{{\Bbb Q}})$$ $$= \sum_{\sigma\subseteq\tau}(-1)^{\dim(\tau)-\dim(\sigma)}\cdot \sum_{\ell\geq m}(-1)^{\ell-m}{{\ell}\choose m}d_{\ell}(X/\sigma).$$ This completes the proof of the theorem. \[rmk\_KS2\] The reader can compare the formula for the invariants $s_{\tau,b}$ in Theorem \[form\_both\_simplicial\] with the formula in [@KS Lemma 4.12] for the local $h$-polynomial of a map of posets $\Gamma\to B$, in which $\Gamma$ is simplicial and $B$ is a Boolean algebra. Let $f\colon X\to Y$ be a toric fibration, with both $X$ and $Y$ simplicial. It follows from Theorem \[form\_both\_simplicial\] that for every $\tau\in\Delta_Y$, the expression $$\sum_{\sigma\subseteq\tau}(-1)^{\dim(\tau)-\dim(\sigma)}d_0(X/\sigma)$$ is nonnegative ${\rm (}$and it is positive if and only if $V(\tau)$ is a support for $f$${\rm )}$. We do not know a direct combinatorial argument that would imply that the expression is nonnegative. A similar remark can be made in the not-necessarily-simplicial case, following the combination of Theorems \[form\_general\] and \[thm\_p\_sigma\].
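The alternating binomial sum appearing in Theorem \[form\_both\_simplicial\] ii), which by Corollary \[Betti\_fib\] computes the even Betti numbers of the fiber, can be illustrated numerically. The following sketch is our own toy example (the blow-up of ${\mathbf A}^2$ at the origin, with its $d_{\ell}$ counted by hand); it also checks that binomial inversion recovers the counts $d_{\ell}$ from the Betti numbers.

```python
# widetilde{d}_m = sum_{l >= m} (-1)^{l-m} C(l, m) d_l  (Corollary Betti_fib)
from math import comb

def d_tilde(d, m):
    """Alternating binomial transform of the list d = [d_0, d_1, ...]."""
    return sum((-1) ** (l - m) * comb(l, m) * dl
               for l, dl in enumerate(d) if l >= m)

# Blow-up of A^2 at the origin, over the two-dimensional cone sigma:
# the two maximal cones of Delta_X map onto sigma with codimension drop 0,
# the exceptional ray with drop 1, so d_0 = 2 and d_1 = 1.
d = [2, 1]
# The fiber over the fixed point is P^1, with dim H^0 = dim H^2 = 1:
assert d_tilde(d, 0) == 1
assert d_tilde(d, 1) == 1
# Binomial inversion: d_l = sum_{m >= l} C(m, l) * widetilde{d}_m.
assert all(sum(comb(m, l) * d_tilde(d, m) for m in range(len(d))) == d[l]
           for l in range(len(d)))
```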
In Remark \[rmk\_special\_case\] below we give such an argument when $f$ is birational between simplicial toric varieties and $\dim(\tau)\leq 3$. \[rmk\_rel\_f\_vector\] Note that the invariants $s_{\tau,b}$ in Theorem \[toric\_dec\_thm\] satisfy the conditions ${\rm i)}$ and ${\rm ii)}$ coming from Poincaré duality and relative Hard Lefschetz. In the setting of Theorem \[form\_both\_simplicial\], these translate into interesting conditions satisfied by the invariants $d_{\ell}(X/\tau)$, for the cones $\tau\in\Delta_Y$. More precisely, suppose that $f\colon X\to Y$ is a projective toric fibration between simplicial toric varieties. For a cone $\sigma\in\Delta_Y$ and $m\geq 0$, let us put $$\widetilde{d}_m(X/\sigma)=\sum_{\ell\geq m}(-1)^{\ell-m}{\ell\choose m}d_{\ell}(X/\sigma).$$ Recall that by Corollary \[Betti\_fib\], we have $\widetilde{d}_m(X/\sigma)=\dim_{{{\Bbb Q}}}H^{2m}(f^{-1}(y);{{\Bbb Q}})$ for any $y\in O(\sigma)$. In particular, we have $\widetilde{d}_m(X/\sigma)\geq 0$. With this notation, Poincaré duality says that for every $m$ with $0\leq m\leq \dim(X)-\dim(V(\tau))$, if $m'=\dim(X)-\dim(V(\tau))-m$, then $$\sum_{\sigma\subseteq\tau}(-1)^{\dim(\tau)-\dim(\sigma)}\widetilde{d}_m(X/\sigma)= \sum_{\sigma\subseteq\tau}(-1)^{\dim(\tau)-\dim(\sigma)}\widetilde{d}_{m'}(X/\sigma).$$ Similarly, relative Hard Lefschetz says that if $0\leq m\leq \frac{1}{2}(\dim(X)-\dim(V(\tau)))$, then $$\sum_{\sigma\subseteq\tau}(-1)^{\dim(\tau)-\dim(\sigma)}\widetilde{d}_m(X/\sigma)\geq \sum_{\sigma\subseteq\tau}(-1)^{\dim(\tau)-\dim(\sigma)}\widetilde{d}_{m+1}(X/\sigma).$$ These conditions generalize to the relative setting the famous restrictions on the $f$-vector of a simplicial toric variety that come from Poincaré duality and Hard Lefschetz ${\rm (}$see [@Ful93 Chapter 5.6]${\rm )}$. \[rmk\_special\_case\] Suppose that $f\colon X\to Y$ is a proper, birational toric map. 
We may assume that $N_X=N_Y$ and $f_N$ is the identity, hence $\Delta_X$ gives a fan refinement of $\Delta_Y$. If for a cone $\tau\in\Delta_Y$ we define $\delta_{\tau}$ by $$\label{eq_rmk_special_case} \delta_{\tau}=\sum_{\sigma\subseteq\tau}(-1)^{\dim(\tau)-\dim(\sigma)}d_0(X/\sigma),$$ then we want to give a “nonnegative expression" for $\delta_{\tau}$. For every cone $\tau\in\Delta_Y$, let $\iota(\tau)$ denote the number of rays in $\Delta_X$ that are contained in the relative interior of $\tau$. If $\dim(\tau)\leq 3$, then we have the following formulas: 1. $\delta_{\tau}=1$ if $\tau=\{0\}$ and $\delta_{\tau}=0$ if $\dim(\tau)=1$. 2. $\delta_{\tau}=\iota(\tau)$ if $\dim(\tau)=2$. 3. $\delta_{\tau}=2\iota(\tau)$ if $\dim(\tau)=3$. The assertions in ${\rm i)}$ and ${\rm ii)}$ follow easily from (\[eq\_rmk\_special\_case\]), hence we only prove ${\rm iii)}$. In order to check this, it is convenient to consider a transversal section $T$ of $\tau$. This is a triangle such that $\Delta_X$ induces a triangulation $\Lambda$ of $T$. Let us consider the following invariants: 1. $a_3$ is the number of triangles in $\Lambda$, 2. $a_2$ is the number of segments in $\Lambda$ that are contained in the boundary of $T$, 3. $a'_2$ is the number of segments in $\Lambda$ not contained in the boundary of $T$, 4. $a_1$ is the number of points in $\Lambda$ in the boundary of $T$, 5. $a'_1$ is the number of points in the interior of $T$ ${\rm (}$hence $a'_1=\iota(\tau)$${\rm )}$. Note that we have the following relations between these invariants: 1. $(a_1+a'_1)-(a_2+a'_2)+a_3=1$ ${\rm (}$by considering the Euler-Poincaré characteristic of $T$${\rm )}$. 2. $a_1=a_2$. 3. $3a_3=a_2+2a'_2$ ${\rm (}$by counting the segments in the boundaries of all triangles, and noting that a segment appears in 2 triangles if it is not contained in the boundary of $T$, and in 1, otherwise${\rm )}$.
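The relations ${\rm R1)}$–${\rm R3)}$ and the identity $\delta_{\tau}=2\iota(\tau)$ can be sanity-checked on a concrete triangulation. The following sketch is our own toy example: the subdivision of the triangle $T$ obtained by joining a single interior point to its three vertices, using the count $\delta_{\tau}=a_3-a_2+2$ that follows from (\[eq\_rmk\_special\_case\]) in this configuration.

```python
# Triangulation of T by one interior point joined to the three vertices.
a3 = 3        # triangles
a2 = 3        # segments in the boundary of T (the three sides)
a2p = 3       # segments not in the boundary (interior point to each vertex)
a1 = 3        # points in the boundary of T (the three vertices)
a1p = 1       # interior points, i.e. iota(tau)

# R1): Euler-Poincaré characteristic of T equals 1
assert (a1 + a1p) - (a2 + a2p) + a3 == 1
# R2): boundary points equal boundary segments
assert a1 == a2
# R3): double count of the sides of all triangles
assert 3 * a3 == a2 + 2 * a2p
# delta_tau = a3 - a2 + 2, and it equals 2 * iota(tau):
assert a3 - a2 + 2 == 2 * a1p
```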
By combining ${\rm R1)}$ and ${\rm R3)}$, we see that $$3a_2+3a'_2-3a_1-3a'_1+3=a_2+2a'_2.$$ Simplifying and using also ${\rm R2)}$, we obtain: $$\label{eq3_rmk_special_case} a'_2-a_1-3a'_1+3=0.$$ On the other hand, it follows from ${\rm (}$\[eq\_rmk\_special\_case\]${\rm )}$ that $$\delta_{\tau}=a_3-a_2+2.$$ By using ${\rm R1)}$ and ${\rm (}$\[eq3\_rmk\_special\_case\]${\rm )}$, we obtain the desired conclusion: $$\delta_{\tau}=a'_2-(a_1+a'_1)+3=3a'_1-a'_1=2a'_1=2\iota(\tau).$$ It is worth noting that if $\dim(\tau)=4$, then $\delta_{\tau}$ is not a multiple of $\iota(\tau)$. Indeed, by considering the blow-up of ${\mathbf A}^{\!4}$ at the origin, we see that the only possibility would be $\delta_{\tau}=3\iota(\tau)$. On the other hand, consider $f=g\circ h$, where $g\colon Z\to {\mathbf A}^{\!4}$ is the blow-up of an invariant line $L$, with exceptional divisor $E\simeq {\mathbf P}^2\times L$ and $h$ is the blow-up of $Z$ along the subset ${\mathbf P}^2\times\{0\}\subset E$. An easy computation shows that in this case $\iota(\tau)=1$ but $\delta_{\tau}=4$. Combinatorics of the toric Decomposition Theorem: the general case ================================================================== [sec\_gen]{} Our goal in this section is to determine the supports of an arbitrary toric fibration $f\colon X\to Y$ and to show that they are combinatorially determined. In this case there are two difficulties, compared with the setting in the previous section: on the one hand, the poset structure of $\Delta_Y$ is more complicated; on the other hand, and more crucially, we need to take into account the singularities of $X$ and $Y$. These will come up through the local behavior of the intersection cohomology complexes. In order to deal with the latter issue we begin by introducing the following invariant of an arbitrary toric variety. Let ${{\Bbb Z}}[T,T^{-1}]$ denote the ring of Laurent polynomials with integer coefficients.
Given a toric variety $Y$ and two cones $\tau\subseteq\sigma$ in $\Delta_Y$, we define $$R_{\tau, \sigma}(T)= \sum_{k \in {{\Bbb Z}}} \dim_{{{\Bbb Q}}}{\mathcal H}^{k}(IC_{V(\tau)})_{x_{\sigma}}T^k \in {{\Bbb Z}}[T,T^{-1}]$$ and $$r_{\tau,\sigma}=R_{\tau, \sigma}(1)= \sum_{k\in{{\Bbb Z}}} \dim_{{{\Bbb Q}}}{\mathcal H}^{k}(IC_{V(\tau)})_{x_{\sigma}}.$$ Note that since the restriction of ${\mathcal H}^k(IC_{V(\tau)})$ to each torus-orbit is constant by Lemma \[Omega\_constructibility\], we could have replaced in the above definition $x_{\sigma}$ by any other point in $O(\sigma)$. \[rmk\_combinatorial\_invariant\] The function $R\colon \{(\tau,\sigma)\in\Delta_Y\times\Delta_Y\mid \tau\subseteq\sigma\}\to {{\Bbb Z}}[T,T^{-1}]$ only depends on the combinatorics of $\Delta_Y$. Indeed, in order to see that $R_{\tau,\sigma}$ is combinatorially determined, we may replace $Y$ by $V(\tau)$ and thus assume that $\tau=\{0\}$. In this case, the assertion is a consequence of [@Fieseler Theorems 1.1, 1.2] and [@DL Theorem 6.2]. We also note that $R_{\tau,\sigma}(T)=T^{\dim (\tau )-n}$ whenever $\dim(\sigma)-\dim(\tau)\leq 2$, where $n=\dim(Y)$. Indeed, in this case $V:=V(\tau)\cap U_{\sigma}$ is a simplicial toric variety, hence $IC_V={{\Bbb Q}}_V[\dim(V)]$. In particular, since $R_{\tau, \tau}(T) =T^{\dim (\tau)-n}$ is invertible, it follows that the function $R$ has an inverse $$\widetilde{R} \colon \{(\tau,\sigma)\in\Delta_Y\times\Delta_Y\mid \tau\subseteq\sigma\} \to {{\Bbb Z}}[T,T^{-1}]$$ with respect to the convolution on the incidence algebra corresponding to the poset $\Delta_Y$. We set $\widetilde{r}_{\tau,\sigma}=\widetilde{R}_{\tau,\sigma}(1).$ The function $\widetilde{R}$ will feature in the description of the supports of a toric fibration. \[rmk\_combinatorics\] It follows from [@Stanley3 Proposition 8.1] that, up to signs and powers of $T$, the function $\widetilde{R}$ is just the function $R$ associated with the dual poset. We are grateful to T. 
Braden for pointing this out to us. Given a toric fibration $f\colon X \to Y$, we define the functions $$P_f\colon \Delta_Y \to {{\Bbb Z}}[T,T^{-1}] \quad\text{and}\quad p_f\colon \Delta_Y \to {{\Bbb Z}}$$ by $$P_{f, \sigma}(T):=\sum_{k\in{{\Bbb Z}}} \dim_{{{\Bbb Q}}}H^k(f^{-1}(x_{\sigma}), IC_X)T^k \mbox{ and } p_{f, \sigma }:=P_{f, \sigma}(1)=\dim_{{{\Bbb Q}}}H^*(f^{-1}(x_{\sigma}), IC_X).$$ It is not a priori clear that $ P_{f, \sigma}(T)$ and $p_{f,\sigma}$ are combinatorially determined, but this follows from Theorem \[thm\_p\_sigma\] below. Finally, we define $$S\colon \Delta_Y \to {{\Bbb Z}}[T,T^{-1}] \quad\text{as}\quad S_{\tau}(T)=\sum_{b\in{{\Bbb Z}}} s_{\tau,b}T^b,$$ where the $s_{\tau,b}$ are the multiplicities defined in Theorem \[toric\_dec\_thm\]. It follows from i) in Theorem \[toric\_dec\_thm\] that $S_{\tau}(T)=S_{\tau}(T^{-1})$. Using the invariants $P_{f, \sigma}$ and $\widetilde{R}_{\tau,\sigma}$, we can now describe the supports of any toric fibration. \[form\_general\] Suppose that we are in the setting of Theorem \[toric\_dec\_thm\]. With the above definitions, for every $\tau\in\Delta_Y$, we have $$S_\tau(T)= \sum_{\sigma \subseteq \tau }\widetilde{R}_{\sigma, \tau}(T)P_{f,\sigma} (T).$$ We proceed as in the proof of Theorem \[form\_both\_simplicial\]. Consider the decomposition given by Theorem \[toric\_dec\_thm\]: $$Rf_*(IC_X)\simeq\bigoplus_{\tau\in\Delta_Y}\bigoplus_{b\in{{\Bbb Z}}}IC_{V(\tau)}^{\oplus s_{\tau,b}}[-b].$$ Let $\sigma\in\Delta_Y$. By taking the stalk at $x_{\sigma}$ and computing the $i^{\rm th}$ cohomology, we obtain $$\dim_{{{\Bbb Q}}}H^i(f^{-1}(x_{\sigma}),IC_X)=\sum_{\tau\subseteq\sigma}\sum_{b\in{{\Bbb Z}}}s_{\tau,b}\cdot\dim_{{{\Bbb Q}}}{\mathcal H}^{i-b}(IC_{V(\tau)})_{x_{\sigma}},$$ which gives the equality $$P_{f, \sigma}(T)=\sum_{\tau\subseteq\sigma} R_{\tau,\sigma}(T)S_{\tau}(T)$$ in ${{\Bbb Z}}[T,T^{-1}]$.
Since $\widetilde{R}$ is the inverse of $R$ with respect to convolution, we conclude $$S_{\tau}(T)=\sum_{\sigma\subseteq\tau}\widetilde{R}_{\sigma,\tau}(T)\cdot P_{f, \sigma}(T).$$ This completes the proof. Evaluating at $T=1$ we find the following useful criterion for a stratum to be a support of the map $f$. \[useful-formula\] In the setting of Theorem \[toric\_dec\_thm\], we have $$\delta_{\tau}=\sum_{\sigma\subseteq\tau}\widetilde{r}_{\sigma,\tau}\cdot p_{f, \sigma}.$$ \[form\_general\_simpl\_source\] If we are in the setting of Theorem \[toric\_dec\_thm\] and $X$ is simplicial with $\dim(X)=d_X$, then for every $\tau\in\Delta_Y$, we have $$S_{\tau}(T)=\sum_{\sigma\subseteq\tau}\widetilde{R}_{\sigma,\tau}(T)\cdot \left( \sum_{m\geq 0} \left(\sum_{\ell\geq m}(-1)^{\ell-m}{{\ell}\choose m}d_{\ell}(X/\sigma)\right)T^{2m-d_X}\right)$$ and $$\delta_{\tau}=\sum_{\sigma\subseteq\tau}\widetilde{r}_{\sigma,\tau}\cdot d_0(X/\sigma).$$ Since $X$ is simplicial, we have $IC_X={{\Bbb Q}}_X[d_X]$. The first equality follows immediately from Theorem \[form\_general\] and Corollary \[Betti\_fib\]. The second equality is then obtained evaluating at $T=1$: $$p_{f, \sigma}=\sum_{i\in{{\Bbb Z}}}\dim_{{{\Bbb Q}}}H^{i+d_X}(f^{-1}(x_{\sigma}),{{\Bbb Q}})=\chi(f^{-1}(x_{\sigma}))=d_0(X/\sigma).$$ By Remark \[rmk\_combinatorial\_invariant\], in order to show that the formulas for the invariants $S_\tau$ and $\delta_{\tau}$ in Theorem \[form\_general\] and Corollary \[useful-formula\] only depend on combinatorics, it is enough to show that the invariants $P_{f, \sigma}$ only depend on combinatorics. This is implied by the following theorem. When $Y$ is a point and $X$ is projective, this is a consequence of the results in [@Fieseler]. 
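As a concrete illustration of the convolution inversion defining $\widetilde{R}$, here is a small sketch of ours. Laurent polynomials are represented as dictionaries mapping exponents to coefficients; the recursion assumes, as in the text, that each diagonal value $R_{\tau,\tau}$ is a single invertible monomial. The example poset and the values $R_{\tau,\sigma}(T)=T^{\dim(\tau)-n}$ correspond to the faces of a two-dimensional simplicial cone; all names are our own.

```python
# Convolution inversion for Laurent-polynomial-valued functions on a poset.
from fractions import Fraction

def pmul(p, q):
    """Multiply Laurent polynomials given as {exponent: coefficient} dicts."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

def invert(R, elems, leq):
    """Rtilde with Rtilde * R = delta, by induction on interval size;
    assumes each diagonal value R(t, t) is a single invertible monomial."""
    Rt = {}
    for a, b in sorted(R, key=lambda ab: sum(1 for c in elems
                                             if leq(ab[0], c) and leq(c, ab[1]))):
        (e, c), = R[(b, b)].items()               # diagonal monomial c * T^e
        diag_inv = {-e: Fraction(1) / c}
        if a == b:
            Rt[(a, b)] = diag_inv
        else:
            s = {}
            for mid in elems:
                if leq(a, mid) and leq(mid, b) and mid != b:
                    for ee, cc in pmul(Rt[(a, mid)], R[(mid, b)]).items():
                        s[ee] = s.get(ee, 0) + cc
            Rt[(a, b)] = pmul({ee: -cc for ee, cc in s.items() if cc != 0},
                              diag_inv)
    return Rt

# Faces of a two-dimensional simplicial cone (n = 2), with
# R(tau, sigma)(T) = T^{dim(tau) - n} as in the simplicial case:
elems = ['0', 'r1', 'r2', 's']
dims = {'0': 0, 'r1': 1, 'r2': 1, 's': 2}
above = {'0': {'0', 'r1', 'r2', 's'}, 'r1': {'r1', 's'},
         'r2': {'r2', 's'}, 's': {'s'}}
leq = lambda a, b: b in above[a]
R = {(a, b): {dims[a] - 2: Fraction(1)}
     for a in elems for b in elems if leq(a, b)}
Rt = invert(R, elems, leq)
# check the convolution identity Rtilde * R = delta on every interval
for (a, b) in R:
    conv = {}
    for mid in elems:
        if leq(a, mid) and leq(mid, b):
            for e, c in pmul(Rt[(a, mid)], R[(mid, b)]).items():
                conv[e] = conv.get(e, 0) + c
    conv = {e: c for e, c in conv.items() if c != 0}
    assert conv == ({0: 1} if a == b else {})
```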
\[thm\_p\_sigma\] If $f\colon X\to Y$ is a toric fibration, then for every $\sigma\in\Delta_Y$, we have $$\label{eq_thm_p_sigma} P_{f, \sigma }(T)=\sum_{\tau} R_{0, \tau}(T)(T^2-1)^{{\rm codim}(\tau)-{\rm codim}(\sigma)} \quad \text{and}\quad p_{f, \sigma}=\sum_{\tau}r_{0,\tau},$$ where in the first formula the sum is over all the cones $\tau$ in $\Delta_X$ with $f_*(\tau)=\sigma$, while in the second formula the $\tau$ are only those which, in addition, satisfy ${\rm codim}(\tau)={\rm codim}(\sigma)$. The second statement follows from the first by evaluating it at $T=1$, so it is enough to prove the first statement. First we prove that $H^i(f^{-1}(x_{\sigma}),IC_X)$ is pure, so that it is enough to determine its Hodge-Deligne polynomial. After replacing $Y$ by $U_{\sigma}$, we may assume that $Y=U_{\sigma}$. Moreover, it is easy to see using Lemma \[lm\_prod\_str\] that we may assume that $x_\sigma$ is a fixed point. In this case, it follows from Lemma \[retraction\_lemma\] (see also Remark \[rmk2\_retraction\_lemma\]) that $$H^i(f^{-1}(x_{\sigma}),IC_X) =H^i(X,IC_X),$$ therefore, as discussed in Remark \[mhs\_intcoh\], $H^i(f^{-1}(x_{\sigma}),IC_X)$ is pure. We proceed as in the proof of Corollary \[Betti\_fib\]. Set $n= \dim X$. By Lemma \[Omega\_constructibility\], the restriction of the complex $IC_X$ to every torus orbit $O(\tau)$ in $f^{-1}(x_{\sigma})$ is a complex with [*constant cohomology sheaves*]{} ${\mathcal H}^{k}(IC_{X})_{x_\tau}$, underlying a pure Hodge-Tate structure of weight $k+n$. Set $t=\dim O(\tau)$. Since $H^p_c(O(\tau))\cong {{\Bbb Q}}(p-t)^{\oplus { t\choose{p}}}$, the compact cohomology group $$H^p_c(O(\tau), {\mathcal H}^{q}(IC_{X})_{x_\tau} )$$ has a Hodge structure of Hodge-Tate type and weight $2(p-t)+q+n$. As is well known ${\rm (}$see [@dM p. 571] or [@CLMS Example 5.2(1)]${\rm )}$, the differentials in the hypercohomology spectral sequence $$E_2^{p \, q}=H^p_c(O(\tau), {\mathcal H}^{q}(IC_{X})_{x_\tau} ) \Rightarrow H^{p+q}_c(O(\tau), IC_{X} )$$ are compatible with the Hodge structures, and they are therefore forced to vanish. It follows that the Hodge-Deligne polynomial of $H^{p+q}_c(O(\tau), IC_{X} )$ is $R_{0, \tau}(T)(uv-1)^t$. Adding over all torus orbits contained in $f^{-1}(x_{\sigma})$ and setting $u=v=T$, we obtain the statement. \[rmk\_KS3\] Due to the combinatorial nature of the definition of the local $h$-polynomial in [@KS], the proofs of the analogues of Theorems \[form\_general\] and \[thm\_p\_sigma\] in that setting are more elementary. One can then use the results of this section to write our invariants $s_{\tau,b}$ as coefficients of local $h$-polynomials. [Ful93]{} A. A. Beilinson, J. Bernstein, and P. Deligne, Faisceaux pervers, in *Analysis and topology on singular spaces, I (Luminy, 1981)*, 5–171, Astérisque, 100, Soc. Math. France, Paris, 1982. J. Bernstein, V. Lunts, *Equivariant sheaves and functors*, Lecture Notes in Mathematics, 1578, Springer-Verlag, Berlin, 1994. D. Cox, J. Little, and H. Schenck, *Toric varieties*, Graduate Studies in Mathematics, 124, American Mathematical Society, Providence, RI, 2011. S. E. Cappell, L. Maxim, J. L. Shaneson, Hodge genera of algebraic varieties I, Comm. Pure Appl. Math. 61 (2008), 422–449. S. E. Cappell, A. Libgober, L. Maxim, J. L. Shaneson, Hodge genera of algebraic varieties II, Math. Ann. 345 (2009), 925–972. M. A. de Cataldo, The perverse filtration and the Lefschetz Hyperplane Theorem, II, J. Algebraic Geom. 21 (2012), 305–345. M. A. de Cataldo, Proper toric maps over finite fields, Internat. Math. Res. Notices, to appear. M. A. de Cataldo, L. Migliorini, The decomposition theorem, perverse sheaves and the topology of algebraic maps, Bull. Amer. Math. Soc. (N.S.) 46 (2009), 535–633. P.
Deligne, Théorie de Hodge, III, Inst. Hautes Études Sci. Publ. Math. No. 44 (1974), 5–77. J. Denef and F. Loeser, Weights of exponential sums, intersection cohomology, and Newton polyhedra, Invent. Math. 106 (1991), 275–294. K.-H. Fieseler, Rational intersection cohomology of projective toric varieties, J. Reine Angew. Math. 413 (1991), 88–98. W. Fulton, *Introduction to toric varieties*, Ann. of Math. Stud. **131**, The William H. Roever Lectures in Geometry, Princeton Univ. Press, Princeton, NJ, 1993. B. Iversen, [*Cohomology of Sheaves*]{}, Universitext, Springer-Verlag, Berlin Heidelberg, 1986. J. Jurkiewicz, An example of algebraic torus action which determines the nonfiltrable decomposition, Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys. 25 (1977), 1089–1092. E. Katz, A. Stapledon, Local $h$-polynomials, invariants of subdivisions, and mixed Ehrhart theory, Adv. Math. 286 (2016), 181–239. F. Kirwan, Intersection homology and torus actions, J. Amer. Math. Soc. 1 (1988), 385–400. M. Mustaţă, Lecture notes on toric varieties, available at *http://www-personal.umich.edu/$\tilde{}$mmustata/toric$\_$var.html*. G. Rota, On the foundations of combinatorial theory, I, Theory of Möbius functions, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 2 (1964), 340–368. M. Saito, Mixed Hodge modules, Publ. Res. Inst. Math. Sci. 26 (1990), 221–333. H. Sumihiro, Equivariant completion, J. Math. Kyoto Univ. 14 (1974), 1–28. R. Stanley, The number of faces of a simplicial convex polytope, Adv. in Math. 35 (1980), 236–238. R. Stanley, Generalized $H$-vectors, intersection cohomology of toric varieties, and related results, in *Commutative algebra and combinatorics (Kyoto, 1985)*, 187–213, Adv. Stud. Pure Math., 11, North-Holland, Amsterdam, 1987. R. Stanley, Subdivisions and local h-vectors, J. Amer. Math.
Soc. 5 (1992), 805–851. R. Stanley, [*Enumerative combinatorics I,*]{} Cambridge Studies in Advanced Mathematics 49, Cambridge University Press, 1997. [^1]: The research of de Cataldo was partially supported by NSF grant DMS-1301761 and by a grant from the Simons Foundation (\#296737 to Mark de Cataldo). The research of Migliorini was partially supported by PRIN project 2012 “Spazi di moduli e teoria di Lie”. The research of Mustaţă was partially supported by NSF grant DMS-1401227. [^2]: We say that a graded vector space $\bigoplus_{i\in{{\Bbb Z}}}V^i$ is *even* if $V^i=0$ for all odd $i$. [^3]: The sheaves are assumed to be sheaves of ${{\Bbb Q}}$-vector spaces, with respect to the analytic topology. [^4]: The usual definition also requires the isomorphism to satisfy a cocycle condition. However, we do not need this extra condition. [^5]: This should not be confused with our key invariant $\delta$, a function of a single variable.
--- abstract: 'In this paper $q$-ary Raptor codes under maximum likelihood (ML) decoding are considered. An upper bound on the probability of decoding failure is derived using the weight enumerator of the outer code, or its expected weight enumerator if the outer code is drawn randomly from some ensemble of codes. The bound is shown to be tight by means of simulations. This bound provides a new insight into Raptor codes since it shows how Raptor codes can be analyzed similarly to a classical fixed-rate serial concatenation.' author: - | \ \ \ [^1] [^2] bibliography: - 'IEEEabrv.bib' - 'references.bib' title: Bounds on the Error Probability of Raptor Codes --- Introduction {#sec:Intro} ============ Fountain codes [@byers02:fountain] are a class of erasure codes that have the property of being rateless. Thus, they are potentially able to generate an endless amount of encoded (or output) symbols. This property makes them suitable for application in situations where the channel erasure rate is not known a priori. The first class of practical fountain codes, LT codes, was introduced in [@luby02:LT] together with an iterative decoding algorithm that achieves a good performance when the number of input symbols $k$ is large. In [@luby02:LT] it was shown that, in order to achieve a low probability of decoding error, the encoding and iterative decoding cost per output symbol is $O \left(\ln(k)\right)$. Raptor codes were introduced in [@shokrollahi06:raptor] and outperform LT codes in many aspects. They consist of a serial concatenation of an outer code $\mathcal C$ (or precode) with an inner LT code. On erasure channels, this construction allows relaxing the design of the inner code, requiring only the recovery of a fraction $1-\gamma$ of the intermediate symbols with $\gamma$ small. This can be achieved with linear encoding complexity and also linear decoding complexity using iterative decoding. The outer code is responsible for recovering the remaining fraction of intermediate symbols, $\gamma$.
If the outer code $\mathcal C$ is linear-time encodable and decodable then the Raptor code has linear encoding and iterative decoding complexity over erasure channels. Most of the existing works on LT and Raptor codes consider iterative decoding and assume large input block lengths ($k$ at least in the order of a few tens of thousands). However, in practice, smaller values of $k$ are more commonly used. For example, for the binary Raptor codes standardized in [@MBMS16:raptor] and [@luby2007rfc] the recommended values of $k$ range from $1024$ to $8192$. For these input block lengths, iterative decoding performance degrades considerably. In this context, a different decoding algorithm is adopted: an efficient maximum likelihood (ML) decoder in the form of inactivation decoding [@shokrollahi2005systems]. An inactivation decoder solves a system of equations in several stages. First a set of variables is declared *inactive*. Next a system of equations involving the set of inactive variables needs to be solved, for example using Gaussian elimination. Finally, once the values of the inactive variables are known, all other variables are recovered using iterative decoding. Recently there have been several works addressing the complexity of inactivation decoding for Raptor and LT codes [@lazaro:ITW; @lazaro:scc2015; @lazaro:Allerton2015; @mahdaviani2012raptor]. The probability of decoding failure of LT and Raptor codes under ML decoding has also been the subject of study in several works. In [@Rahnavard:07] upper and lower bounds to the intermediate symbol erasure rate were derived for LT codes and for Raptor codes with outer codes in which every element of the parity-check matrix is a Bernoulli random variable with parameter $p$. This work was extended in [@schotsch:2013], where lower and upper bounds to the performance of LT codes under ML decoding were derived.
A further extension was presented in [@Schotsch:14], where an approximation to the performance of Raptor codes under ML decoding is derived under the assumption that the number of erasures correctable by the outer code is small. Hence, this approximation holds only if the rate of the outer code is sufficiently high. In [@Liva10:fountain] it was shown by means of simulations that the error probability of $q$-ary Raptor codes is very close to that of linear random fountain codes. In [@wang:2015] upper and lower bounds to the probability of decoding failure of Raptor codes were derived. The outer codes considered in [@wang:2015] are binary linear random codes with a systematic encoder. Recently, ensembles of Raptor codes with linear random outer codes were also studied in a fixed-rate setting in [@lazaro:ISIT2015],[@lazaro:JSAC]. Although a number of works have studied the probability of decoding failure of Raptor codes, to the best of the knowledge of the authors, up to now the results hold only for specific binary outer codes (see [@Rahnavard:07; @wang:2015; @lazaro:ISIT2015; @lazaro:JSAC]). In this paper an upper bound on the probability of decoding failure of Raptor codes under ML decoding is derived, based on the weight enumerator of their outer codes. The bound is extended to ensembles of Raptor codes where the outer code is drawn randomly from an ensemble. In this case, it is necessary to know the average weight enumerator for the outer code ensemble. By means of simulations, the derived bound is shown to be tight, especially in the error floor region, for Raptor codes with Hamming and linear random outer codes. In contrast to [@Rahnavard:07; @wang:2015; @lazaro:ISIT2015; @lazaro:JSAC] not only binary Raptor codes are considered, but also $q$-ary Raptor codes. The bounds presented in this paper can be seen as an extension of the upper bound in [@schotsch:2013] to Raptor codes. The rest of the paper is organized as follows.
In Section \[sec:prelim\] some preliminary definitions are presented. Section \[sec:perf\_bound\] presents the upper bounds on the probability of decoding failure for the case in which the outer code is deterministic. In Section \[sec:ensemble\] these bounds are extended to the case in which the outer code is drawn from a linear parity-check based ensemble. Numerical results are presented in Section \[sec:numres\]. Section \[sec:Conclusions\] presents the conclusions of our work. Preliminaries {#sec:prelim} ============= We consider Raptor codes constructed over $\mathbb {F}_{q}$ with an $(h,k)$ outer linear block code ${\mathcal{C}}$. We shall denote the $k$ input (or source) symbols of a Raptor code as ${{\mathbf{{u}}}=({u}_1,~{u}_2,~\ldots, {u}_k)}$. The elements of ${\mathbf{{u}}}$ belong to $\mathbb {F}_{q}$. From the $k$ input symbols, the outer code generates a vector of $h$ intermediate symbols ${{\mathbf{{v}}}=({v}_1,~{v}_2,~\ldots, {v}_h)} \in {\mathcal{C}}$. Denoting by ${\G_{\text{o}}}$ the employed generator matrix of the outer code, of dimension $(k \times h)$ and with elements in $\mathbb {F}_{q}$, the intermediate symbols can be expressed as $${\mathbf{{v}}}= {\mathbf{{u}}}{\G_{\text{o}}}.$$ The intermediate symbols serve as input to an LT encoder, which can generate an unlimited number of output symbols ${\mathbf{{c}}=({c}_1, {c}_2, \ldots, {c}_n)}$, where $n$ can grow unbounded. Again, the elements of $\mathbf{{c}}$ belong to $\mathbb {F}_{q}$. For any $n$ the output symbols can be expressed as $$\mathbf{{c}} = {\mathbf{{v}}}{\G_{\text{LT}}}= {\mathbf{{u}}}{\G_{\text{o}}}{\G_{\text{LT}}}$$ where ${\G_{\text{LT}}}$ is an $(h \times n)$ matrix whose elements belong to $\mathbb {F}_{q}$. The $i$-th column of ${\G_{\text{LT}}}$ is associated with the output symbol ${c}_i$. 
More specifically, each column of ${\G_{\text{LT}}}$ is generated by first selecting an output degree $d$ according to the degree distribution ${\Omega= (\Omega_1, \Omega_2, \ldots, \Omega_{{ d_{\max}}})}$, and then selecting $d$ distinct indices uniformly at random between $1$ and $h$. Finally, the elements of the column corresponding to these indices are drawn independently and uniformly at random from $\mathbb {F}_{q} \backslash \{0\}$, while all other elements of the column are set to zero. The output symbols $\mathbf{{c}}$ are transmitted over an erasure channel, at the output of which each transmitted symbol is either correctly received or erased.[^3] We denote by $m$ the number of output symbols collected by the receiver of interest, and we express it as $m=k+{\delta}$. Let us denote by ${\mathbf{{{y}}}=({{y}}_1, {{y}}_2, \ldots, {{y}}_m)}$ the $m$ received output symbols. Denoting by $\mathcal{I} = \{i_1, i_2, \hdots, i_m \}$ the set of indices corresponding to the $m$ non-erased symbols, we have $${{y}}_j = {c}_{i_j}.$$ An ML decoder (for example, an inactivation decoder) proceeds by solving the linear system of equations $$\mathbf{{{y}}} = {\mathbf{{u}}}{\tilde{\mathbf{G}}}$$ where $$\begin{aligned} {\tilde{\mathbf{G}}}= {\G_{\text{o}}}{{\tilde{\mathbf{G}}}_{LT}}\label{eq:sys_eq}\end{aligned}$$ with ${{\tilde{\mathbf{G}}}_{LT}}$ given by the $m$ columns of ${\G_{\text{LT}}}$ with indices in $\mathcal{I}$. Given a block code ${\mathcal{C}}$ of length $h$ we shall denote its weight enumerator as ${A}= \{{A}_0, {A}_1 \hdots {A}_h\}$, where ${A}_i$ denotes the multiplicity of codewords of weight $i$. 
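The column-generation procedure just described is straightforward to prototype. Below is a minimal sketch for the binary case ($q=2$); the function names are ours and not part of any standardized code:

```python
import random

def sample_lt_column(h, omega, rng):
    """Sample one column of G_LT over GF(2): draw an output degree d from
    the degree distribution omega = [(degree, probability), ...], then pick
    d distinct indices in {0, ..., h-1} and set those entries to 1."""
    r = rng.random()
    d = omega[-1][0]                      # fallback guards against float round-off
    for degree, prob in omega:
        r -= prob
        if r <= 0:
            d = degree
            break
    col = [0] * h
    for i in rng.sample(range(h), d):
        col[i] = 1
    return col

def lt_output_symbol(v, col):
    """Output symbol c_i = <v, col> over GF(2): XOR of the selected entries of v."""
    return sum(vi & ci for vi, ci in zip(v, col)) % 2
```

For $q>2$ each nonzero entry of the column would additionally be drawn uniformly from $\mathbb {F}_{q} \backslash \{0\}$, as described above.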
Similarly, given an ensemble of block codes, all with the same length $h$, along with a probability distribution on the codes in the ensemble, we shall denote its average weight enumerator as ${{\mathsf{A}}= \{{\mathsf{A}}_0, {\mathsf{A}}_1 \hdots {\mathsf{A}}_h\}}$, where ${\mathsf{A}}_i$ denotes the expected multiplicity of codewords of weight $i$ of a code drawn randomly from the ensemble. Upper Bounds on the Error Probability {#sec:perf_bound} ===================================== The following theorem establishes an upper bound on the probability of decoding failure $\Pf$ under ML decoding of a Raptor code constructed over $\mathbb {F}_{q}$ as a function of the receiver overhead ${\delta}$. \[theorem:rateless\] Consider a Raptor code constructed over $\mathbb {F}_{q}$ with an $(h,k)$ outer code ${\mathcal{C}}$ characterized by a weight enumerator ${A}$, and an inner code with output degree distribution $\Omega$. The probability of decoding failure under optimum erasure decoding given that ${m=k+{\delta}}$ output symbols have been collected by the receiver can be upper bounded as $${ \mathsf{P}_{\mathsf{F}}}\leq \sum_{l=1}^h {A}_{{l}} {\pi_{{l}}}^{k+{\delta}}$$ where ${\pi_{{l}}}$ is the probability that a generic output symbol is equal to $0$ given that the vector ${\mathbf{{v}}}$ of intermediate symbols has Hamming weight $l$. The expression of ${\pi_{{l}}}$ is [@schotsch:2013] $$\begin{aligned} {\pi_{{l}}}&= \frac{1}{q} + \frac{q-1}{q} \sum_{j=1}^{{ d_{\max}}} \Omega_j \frac{{\mathcal{K}}_j(l; h,q)}{{\mathcal{K}}_j(0; h,q)} \label{eq:pl}\end{aligned}$$ where ${\mathcal{K}}_j(l; h,q)$ is the Krawtchouk polynomial of degree $j$ with parameters $h$ and $q$.[^4] An optimum (e.g. inactivation) decoder solves the linear system of equations in (\[eq:sys\_eq\]). Decoding fails whenever the system does not admit a unique solution, that is, if and only if ${\mathsf{rank}}({\tilde{\mathbf{G}}})<k$, i.e. 
if ${\exists\, {\mathbf{{u}}}\in \mathbb {F}_q^k \backslash \{ \textbf{0}\} \,\, \text{s.t.} \,\, {\mathbf{{u}}}{\tilde{\mathbf{G}}}= \textbf{0}}$. Consider two vectors ${\mathbf{{u}}}\in \mathbb {F}_q^k$ and ${\mathbf{{v}}}\in \mathbb {F}_q^h$. Let us define $E_{{\mathbf{{u}}}}$ as the event ${\mathbf{{u}}}\mathbf{G}_o {{\tilde{\mathbf{G}}}_{LT}}= \mathbf{0}$. Similarly, we define $E_{{\mathbf{{v}}}}$ as the event ${\mathbf{{v}}}{{\tilde{\mathbf{G}}}_{LT}}= \mathbf{0}$. We have $$\begin{aligned} { \mathsf{P}_{\mathsf{F}}}& = \Pr\left\{ \small{\bigcup_{{\mathbf{{u}}}\in \mathbb {F}_q^k \backslash \{ \textbf{0}\}}} E_{{\mathbf{{u}}}} \right\} = \Pr\left\{ \small{\bigcup_{{\mathbf{{v}}}\in {\mathcal{C}}\backslash \{ \textbf{0}\} }} E_{{\mathbf{{v}}}} \right\} \label{eq:existence}\end{aligned}$$ where we made use of the fact that, due to linearity, the all-zero intermediate word is generated only by the all-zero input vector. Developing (\[eq:existence\]) we have $$\begin{aligned} { \mathsf{P}_{\mathsf{F}}}& = \Pr \left\{ \small{\bigcup_{l=1}^h} \,\, \small{ \bigcup_{{\mathbf{{v}}}\in \mathbb {\mathcal{C}}_l } } E_{{\mathbf{{v}}}} \right\} \label{eq:existence2}\end{aligned}$$ where, by definition, $${\mathcal{C}}_l = \left\{ {\mathbf{{v}}}\in {\mathcal{C}}: w_H({\mathbf{{v}}}) = l \right\}$$ is the set of codewords in ${\mathcal{C}}$ of Hamming weight $l$. Let $L$ be a discrete random variable representing the Hamming weight of vector ${\mathbf{{v}}}\in {\mathcal{C}}$. Moreover, let $J$ and $I$ be discrete random variables representing the number of intermediate symbols which are linearly combined to generate the generic output symbol $y$, and the number of non-zero such intermediate symbols, respectively. Note that $I \leq L$. 
We can upper bound (\[eq:existence2\]) as $$\begin{aligned} { \mathsf{P}_{\mathsf{F}}}& \leq \sum_{l=1}^{h} \Pr \left\{ \small{ \bigcup_{{\mathbf{{v}}}\in \mathbb {\mathcal{C}}_l } } E_{{\mathbf{{v}}}} \right\} \leq \sum_{l=1}^{h} {A}_{{l}} \Pr \left\{ E_{{\mathbf{{v}}}} | L=l \right\} \, . \label{eq:existence3}\end{aligned}$$ Observing that the output symbols are independent of each other, we have $$\Pr \left\{ E_{{\mathbf{{v}}}} | L=l \right\} = {\pi_{{l}}}^{k+{\delta}}$$ where ${\pi_{{l}}}= \Pr \{ y=0 | L=l\}$. An expression for ${\pi_{{l}}}$ may be obtained observing that $$\begin{aligned} {\pi_{{l}}}&= \sum_{j=1}^{{ d_{\max}}} \Pr \{ y=0 | L=l,J=j \} \Pr \{ J=j | L=l \} \\ &\stackrel{(\mathrm{a})}{=} \sum_{j=1}^{{ d_{\max}}} \Omega_j \Pr \{ y=0 | L=l,J=j \} \\ &\stackrel{(\mathrm{b})}{=} \sum_{j=1}^{{ d_{\max}}} \Omega_j \sum_{i=0}^{\min\{j,l\}} \Pr \{ y=0 | I=i \} \! \Pr \{ I=i | L=l, J=j \}\end{aligned}$$ where equality ‘$(\mathrm{a})$’ is due to $\Pr \{ J=j | L=l \} = \Omega_j$ and equality ‘$(\mathrm{b})$’ to $\Pr \{ y=0 | L=l, J=j, I=i \} = \Pr \{ y=0 | I=i \}$. Letting ${\vartheta_{i,l,j} }= \Pr \{ I=i | L=l, J=j \}$, since the $j$ intermediate symbols are chosen uniformly at random by the LT encoder we have $$\begin{aligned} \label{eq:neighbors} {\vartheta_{i,l,j} }= \frac{ \binom{{l}}{i} \binom{h-{l}}{j-i} } { \binom{h}{j}} \, .\end{aligned}$$ Let us denote $\Pr \{ y=0 | I=i \}$ by ${\varphi_i }$ and let us observe that, due to the non-zero elements of ${{\tilde{\mathbf{G}}}_{LT}}$ being independently and uniformly drawn in $\mathbb {F}_{q} \setminus \{0\}$, on invoking Lemma \[lemma:galois\] in the Appendix[^5] we have $$\begin{aligned} \label{eq:sum} {\varphi_i }=\frac{1}{q} \left( 1 + \frac{(-1)^i}{(q-1)^{i-1}}\right).\end{aligned}$$ We conclude that ${\pi_{{l}}}$ is given by $$\begin{aligned} {\pi_{{l}}}&= \sum_{j=1}^{{ d_{\max}}} \Omega_j \sum_{i=0}^{\min\{j,l\}} {\vartheta_{i,l,j} }\, {\varphi_i }\end{aligned}$$ where ${\vartheta_{i,l,j} }$ and ${\varphi_i }$ are given by (\[eq:neighbors\]) and (\[eq:sum\]), respectively. 
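As a sanity check, the direct expression for ${\pi_{{l}}}$ in terms of ${\vartheta_{i,l,j} }$ and ${\varphi_i }$ can be compared numerically with the Krawtchouk form in Theorem \[theorem:rateless\]. The sketch below uses exact rational arithmetic; the function names and the toy degree distribution are ours:

```python
from fractions import Fraction
from math import comb

def phi(i, q):
    """Pr{y = 0 | I = i}: probability that the sum of i uniform nonzero
    field elements is zero, (1/q) * (1 + (-1)^i / (q-1)^(i-1))."""
    return Fraction(1, q) * (1 + Fraction((-1)**i * (q - 1), (q - 1)**i))

def theta(i, l, j, h):
    """Pr{I = i | L = l, J = j}: hypergeometric overlap of the j chosen
    positions with the l nonzero positions of the intermediate word."""
    return Fraction(comb(l, i) * comb(h - l, j - i), comb(h, j))

def pi_direct(l, h, q, omega):
    """pi_l = sum_j Omega_j sum_i theta * phi (omega: dict degree -> prob)."""
    return sum(p * sum(theta(i, l, j, h) * phi(i, q)
                       for i in range(min(j, l) + 1))
               for j, p in omega.items())

def krawtchouk(k, x, n, q):
    return sum((-1)**j * comb(x, j) * comb(n - x, k - j) * (q - 1)**(k - j)
               for j in range(k + 1))

def pi_krawtchouk(l, h, q, omega):
    """pi_l = 1/q + (q-1)/q * sum_j Omega_j K_j(l; h, q) / K_j(0; h, q)."""
    s = sum(p * Fraction(krawtchouk(j, l, h, q), krawtchouk(j, 0, h, q))
            for j, p in omega.items())
    return Fraction(1, q) + Fraction(q - 1, q) * s
```

The two forms agree exactly for all $1 \le l \le h$, which is the content of the Chu-Vandermonde step in the proof.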
Expanding this expression, rewriting it in terms of Krawtchouk polynomials and making use of the Chu-Vandermonde identity, one obtains (\[eq:pl\]). This completes the proof. The following theorem makes the bound in Theorem \[theorem:rateless\] tighter for $q>2$. It is equivalent to Theorem \[theorem:rateless\] for $q=2$. \[lemma:bound\_tight\] Consider a Raptor code constructed over $\mathbb {F}_{q}$ with an $(h,k)$ outer code ${\mathcal{C}}$ characterized by a weight enumerator ${A}$, and an inner LT code with output degree distribution $\Omega$. The probability of decoding failure under optimum erasure decoding given that ${m=k+{\delta}}$ output symbols have been collected by the receiver can be upper bounded as $${ \mathsf{P}_{\mathsf{F}}}\leq \sum_{l=1}^h \frac{{A}_{{l}}}{q-1} {\pi_{{l}}}^{k+{\delta}}$$ The bound can be tightened by a factor $q-1$ by exploiting the fact that for a linear block code ${\mathcal{C}}$ constructed over $\mathbb {F}_{q}$, if $\mathbf{{c}}$ is a codeword, then $\alpha \mathbf{{c}}$ is also a codeword, $\forall \alpha \in \mathbb F_{q} \backslash \{0\}$ [@Liva2013]. The upper bound in Theorem \[lemma:bound\_tight\] also applies to LT codes. In that case, ${A}_{{l}}$ is simply the total number of sequences of Hamming weight $l$ and length $k$, $${A}_{{l}}= \binom{k}{l} (q-1)^{l}.$$ The upper bound obtained for LT codes coincides with the bound in [@schotsch:2013] (Theorem 1). Case of Random Outer Codes from Linear Parity-Check Based Ensembles {#sec:ensemble} =================================================================== Both Theorem \[theorem:rateless\] and Theorem \[lemma:bound\_tight\] apply to the case of a specific outer code. Next we extend these results to the case of a random outer code drawn from an ensemble of codes. Specifically, we consider a parity-check based ensemble of outer codes, denoted by $\msr{C}^\text{o}$, defined by a random matrix of size $(h - k) \times h$ whose elements belong to $\mathbb F_q$. 
A linear block code of length $h$ belongs to $\msr{C}^\text{o}$ if and only if at least one of the instances of the random matrix is a valid parity-check matrix for it. Moreover, the probability measure of each code in the ensemble is the sum of the probabilities of all instances of the random matrix which are valid parity-check matrices for that code. Note that all codes in $\msr{C}^\text{o}$ are linear, have length $h$, and have dimension $k_{\mathcal{C}}\geq k$. In the following we use the expression “Raptor code ensemble” to refer to the set of Raptor codes obtained by concatenating an outer code belonging to the ensemble $\msr{C}^\text{o}$ with an LT encoder having output degree distribution $\Omega$. We shall denote this ensemble as $(\msr{C}^\text{o}, \Omega)$. \[corollary:rateless\] Consider a Raptor code ensemble $(\msr{C}^o, \Omega)$ and let ${\mathsf{A}}= \{ {\mathsf{A}}_0,{\mathsf{A}}_1,\dots,{\mathsf{A}}_h \}$ be the expected weight enumerator of a code ${\mathcal{C}}$ that is randomly drawn from $\msr{C}^o$, i.e., let ${{\mathsf{A}}_{l} = {\mathbb{E}}_{ \msr{C}^o }[A_l({\mathcal{C}})]}$ for all $l \in \{0,1,\dots,h\}$. Let $$\begin{aligned} { \bar {\mathsf{P}}_{\mathsf{F}}}= {\mathbb{E}}_{ \msr{C}^o } [ \Pf({\mathcal{C}})] \label{eq:ensemble}\end{aligned}$$ be the average probability of decoding failure of the Raptor code obtained by concatenating an instance of ${\mathcal{C}}$ with the LT encoder, under optimum erasure decoding and given that ${m=k+{\delta}}$ output symbols have been collected by the receiver. Then $${ \bar {\mathsf{P}}_{\mathsf{F}}}\leq \sum_{l=1}^h \frac{{\mathsf{A}}_{{l}}}{q-1} {\pi_{{l}}}^{k+{\delta}} \, .$$ Due to Theorem \[lemma:bound\_tight\] we may write $$\begin{aligned} { \bar {\mathsf{P}}_{\mathsf{F}}}\leq {\mathbb{E}}_{ \msr{C}^o } \left[ \sum_{l=1}^h \frac{{A}_{{l}}({\mathcal{C}}) }{q-1} {\pi_{{l}}}^{k_{\mathcal{C}}+{\delta}} \right]. 
\label{eq:ensemble2}\end{aligned}$$ For all outer codes ${\mathcal{C}}\in \msr{C}^\text{o}$ we have $k_{\mathcal{C}}\geq k$. Since ${\pi_{{l}}}\leq 1$ we can write $${\pi_{{l}}}^{k_{\mathcal{C}}+{\delta}} \leq {\pi_{{l}}}^{k+{\delta}}$$ which allows us to upper bound (\[eq:ensemble2\]) as $${ \bar {\mathsf{P}}_{\mathsf{F}}}\leq {\mathbb{E}}_{ \msr{C}^o } \left[ \sum_{l=1}^h \frac{{A}_{{l}}({\mathcal{C}}) }{q-1} {\pi_{{l}}}^{k+{\delta}} \right]= \sum_{l=1}^h \frac{{\mathsf{A}}_{{l}}}{q-1} {\pi_{{l}}}^{k+{\delta}}$$ where the last equality follows from the linearity of expectation. Numerical Results {#sec:numres} ================= All results presented in this section use the output degree distribution employed by standard R10 Raptor codes [@MBMS16:raptor; @luby2007rfc], $$\begin{aligned} \Omega({\mathtt{x}}) &= \sum_{j=1}^{{ d_{\max}}} \Omega_j {\mathtt{x}}^j \\ &= 0.0098{\mathtt{x}}+ 0.4590{\mathtt{x}}^2+ 0.2110{\mathtt{x}}^3+0.1134{\mathtt{x}}^4 \\ &+ 0.1113{\mathtt{x}}^{10} + 0.0799{\mathtt{x}}^{11} + 0.0156{\mathtt{x}}^{40}. \label{eq:dist_mbms}\end{aligned}$$ Binary Raptor Codes with Hamming Outer Codes -------------------------------------------- In this section we consider binary Raptor codes with (deterministically known) Hamming outer codes. The weight enumerator of a binary Hamming code of length $h=2^t-1$ and dimension $k=h-t$ can be derived easily using the recursion $$(i+1)\, A_{i+1} + A_i + (h-i+1)\, A_{i-1}= \binom{h}{i}$$ with $A_0=1$ and $A_1=0$ [@MacWillimas77:Book]. The weight distribution obtained from this recursion can then be incorporated in Theorem \[theorem:rateless\] to derive the corresponding upper bound on the probability of Raptor decoding failure under optimum decoding. ![Probability of decoding failure $\Pf$ versus the absolute overhead for a binary Raptor code with a $(63,57)$ Hamming outer code. The solid line denotes the upper bound on the probability of decoding failure expressed by Theorem \[theorem:rateless\]. 
The markers denote simulation results.[]{data-label="fig:Hamming_sim"}](Hamming.pdf){height="7.8cm"} Figure \[fig:Hamming\_sim\] shows the decoding failure rate for a binary Raptor code using a $(63,57)$ binary Hamming outer code as a function of the absolute overhead ${\delta}$. The upper bound established in Theorem \[theorem:rateless\] is also shown. In order to obtain the failure rate values, for each value of ${\delta}$ Monte Carlo simulations were run until $200$ errors were collected using inactivation decoding. It can be observed that the upper bound is tight. Linear Random Outer Code ------------------------ In this subsection we consider a $(\msr{C}^o, \Omega)$ Raptor code ensemble constructed over $\mathbb F_q$, where the distribution $\Omega$ is the one defined in (\[eq:dist\_mbms\]) and where $\msr{C}^o$ is the uniform parity-check ensemble, with parity-check matrix of size $(h-k) \times h$ and entries uniformly distributed in $\mathbb F_q$. The expected multiplicity of codewords of weight $l$ for an outer code drawn randomly from $\msr{C}^o$ is known to be $$\begin{aligned} {\mathsf{A}}_{{l}} = \binom{h}{{l}} q^{-(h-k)} (q-1)^l. \label{eq:wef_random}\end{aligned}$$ In order to obtain the experimental values of the decoding failure rate, $6000$ different outer codes were generated; each outer code was selected by generating an $(h-k)\times h$ parity-check matrix at random, with every element drawn according to a uniform distribution in $\mathbb F_{q}$. For each outer code and for each overhead value $10^3$ inactivation decoding attempts were carried out, and the average failure rate was calculated by averaging the failure rates of the individual Raptor codes. In Figure \[fig:pf\_k\_64\_h70\] we show simulation results for $k=64$ and $h=70$. Two different $(\msr{C}^o, \Omega)$ Raptor code ensembles were considered, one constructed over $\mathbb F_{2}$ and one constructed over $\mathbb{F}_4$. 
We can observe that in both cases the bounds hold and are tight except for very small values of ${\delta}$. ![Expected probability of decoding failure $\bar{\mathsf{P}}_{\mathsf{F}}$ vs absolute overhead for Raptor code ensembles where the outer code is drawn randomly from the uniform parity-check ensemble. The solid and dashed lines denote the upper bounds on the average probability of decoding failure for the ensembles constructed over $\mathbb{F}_2$ and $\mathbb{F}_4$, respectively. The markers denote simulation results.[]{data-label="fig:pf_k_64_h70"}](pf_k_64_h_70.pdf){height="7.8cm"} Conclusions {#sec:Conclusions} =========== In this paper we have considered Raptor codes under ML decoding. We have derived an upper bound on the probability of decoding failure of Raptor codes with generic $q$-ary outer codes. This bound is general and only requires knowledge of the weight enumerator of the outer code. The bound also applies to ensembles of Raptor codes where the outer code is randomly selected from an ensemble. The bound is shown by means of simulations to be tight, especially in the error floor region. \[sec:appendix\] The following lemma is used in the proof of Theorem \[theorem:rateless\]. \[lemma:galois\] Let $X_1, X_2, \hdots, X_l$ be discrete i.i.d. random variables uniformly distributed over $\mathbb F_{2^m} \backslash \{0\}$. Then $$\Pr \{X_1 + X_2+ \hdots + X_l = 0 \}= \frac{1}{q} \left( 1 + \frac{(-1)^l}{(q-1)^{l-1}}\right)$$ where $q=2^m$. Observe that the additive group of $\mathbb F_{2^m}$ is isomorphic to the vector space $\mathbb Z_2^m$. Thus, we may let $X_1, X_2, \hdots, X_l$ be i.i.d. random variables with uniform probability mass function over the vector space $\mathbb Z_2^m \backslash \{0\}$. 
Let us introduce the auxiliary random variable $$W := X_1 + X_2+ \hdots + X_l$$ and let us denote by $P_W(w)$ and by $P_X(x)$ the probability mass functions of $W$ and $X_i$, respectively, where $$P_X(x) = \begin{cases} 0 & \text{if } x=0 \\ \frac{1}{q-1} & \text{otherwise.} \end{cases}$$ Due to independence we have the $l$-fold convolution $$P_W = P_X \ast P_X \ast \hdots \ast P_X$$ which, taking the $m$-dimensional two-point transform $\msr{J} \{\cdot\}$ (the Walsh–Hadamard transform over $\mathbb Z_2^m$) of both sides, yields $$\msr J \{ P_W(w) \} = \left( \msr J \{ P_X(x) \} \right)^l.$$ Next, since $$\hat P_X(t) := \msr J \{ P_X(x) \}= \begin{cases} 1 & \text{if } t=0 \\ \frac{-1}{q-1} & \text{otherwise} \end{cases}$$ we have $$\hat P_W(t) := \msr J \{ P_W(w) \} = \begin{cases} 1 & \text{if } t=0 \\ \frac{(-1)^l}{(q-1)^l} & \text{otherwise.} \end{cases}$$ We are interested in $P_W(0)$, whose expression corresponds to $$P_W(0) = \frac{1}{q} \sum_t \hat P_W(t) = \frac{1}{q} + \frac{1}{q} (q-1) \frac{(-1)^l}{(q-1)^l}$$ from which the statement follows. The result in this lemma appears in [@schotsch:2013]. However, the proof in [@schotsch:2013] uses a different approach, based on a known result on the number of closed walks of length $l$ in a complete graph of size $q$ from a fixed but arbitrary vertex back to itself. Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported in part by [ESA/ESTEC]{} under Contract No. [4000111690/14/NL/FE]{} “NEXCODE”. [^1]: This work has been accepted for publication at IEEE Globecom 2017. [^2]: ©2016 IEEE. Personal use of this material is permitted. 
Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. [^3]: The results developed in this paper remain valid regardless of the statistics of the erasures introduced by the channel. [^4]: The Krawtchouk polynomial of degree $k$ with parameters $n$ and $q$ is defined as [@MacWillimas77:Book] $${\mathcal{K}}_k(x;n,q) = \sum_{j=0}^k (-1)^j \binom{x}{j} \binom{n-x}{k-j} (q-1)^{k-j}.$$ [^5]: The proof in the Appendix is only valid for fields of characteristic $2$, the case of most interest for practical purposes. The extension of Lemma \[lemma:galois\] to the general case is straightforward.
--- abstract: 'Very special relativity (VSR) keeps the main features of special relativity but breaks rotational invariance. We will show how VSR-like terms, which depend on a fixed null vector, can be generated systematically. We start with a formulation for a spinning particle which incorporates VSR. We then use this formulation to derive the VSR modifications to the Maxwell equations. Next we consider VSR corrections to Thomas precession. We start with the coupling of the spinning particle to the electromagnetic field, adding a gyromagnetic factor which gives rise to a magnetic moment. We then propose a spin vector in terms of the spinning particle variables and show that it obeys the BMT equation. All this is generalized to the VSR context and we find the VSR contributions to the BMT equation.' author: - Jorge Alfaro - 'Victor O. Rivelles' title: Very Special Relativity and Lorentz Violating Theories --- Introduction ============ The standard model of particle physics is a well established and experimentally confirmed theory, but it needs to be extended in order to incorporate some known phenomena such as neutrino masses and dark matter. Possibly the standard model is the low energy limit of a larger theory which hopefully includes gravity. Since the present experimental data are not enough to point out how to extend the standard model, we must seek small deviations from it which could be detected at low energies. One possibility is that fields from a more complete theory couple to the standard model fields as constant background fields, causing deviations from Lorentz symmetry [@Colladay:1998fq]. This is a very active line of investigation with many theoretical results awaiting experimental confirmation (for a review see [@Liberati:2013xla]). Usually such proposals have as a consequence that the dispersion relation for light is modified. 
A more conservative alternative would keep the essential features of special relativity, like the constancy of the velocity of light, but leave aside rotational symmetry, for instance. This can be achieved by taking subgroups of the Lorentz group which preserve the constancy of the velocity of light. Such subgroups were identified and used to build what is called very special relativity (VSR) [@Cohen:2006ky]. One of its main features is that the inclusion of $P$, $T$ or $CP$ symmetries enlarges VSR to the full Lorentz group, so that VSR is only relevant in theories where one of the discrete symmetries is broken. Two subgroups of the Lorentz group, $SIM(2)$ and $HOM(2)$, have the property of rescaling a fixed null vector $n^\mu$. Then terms containing ratios of contractions of $n^\mu$ with other kinematic vectors will be invariant under transformations of these subgroups. A proposal to generate mass for neutrinos along these lines was presented in [@Cohen:2006ir], where an equation for a left-handed fermion incorporating VSR was given $$\label{1.1} \left( \slash\!\!\! p - \frac{1}{2} m^2 \frac{\slash\!\!\!n}{n^\mu p_\mu} \right) \psi_L = 0,$$ where $m$ sets the VSR scale. When the equation of motion is squared we find that it describes a free fermion of mass $m$. The price to be paid is the presence of non-local operators as well as the lack of rotational symmetry. In this way it is possible to keep some of the important features of special relativity and consider possible violations of the isotropy of space. Several aspects of VSR have been considered, like the inclusion of supersymmetry [@Cohen:2006sc; @Vohanka:2011aa], curved spaces [@Gibbons:2007iu; @Muck:2008bd], noncommutativity [@SheikhJabbari:2008nc; @Das:2010cn], dark matter [@Ahluwalia:2010zn] and also cosmology [@Chang:2013xwa]. We can take this specific realization of VSR and consider the addition of interaction terms in the context of the usual Lorentz violating theories. 
We can regard the inclusion of operators containing a constant null vector $n^\mu$ as determining a preferred direction in space. It breaks Lorentz symmetry to $ISO(2)$ but, allowing a scale transformation on $n^\mu$, the symmetry can be enlarged to $SIM(2)$. So the inclusion of terms containing ratios of $n^\mu$ contracted with other kinematic vectors leads to the consideration of VSR-like terms such as that in (\[1.1\]). When a Lorentz invariant action is extended by the addition of Lorentz violating operators, the coefficients of such operators are in general arbitrary and unrelated to each other. In this paper we will show that Lorentz violating terms, like the one present in (\[1.1\]), can be derived in a systematic way. To do that we start in Section (\[s1\]) with the model of a massive spinning particle describing a free fermion. It is characterized by its worldline reparametrization invariance and worldline supersymmetry. To give rise to a VSR term similar to that in (\[1.1\]) the supersymmetry constraint is modified. This is the only point where a Lorentz violating term is added by hand. In Section (\[s3\]) we consider a spinning particle with ${\cal N}=2$ supersymmetries, which describes an abelian gauge field. In order to derive the Lorentz violating terms contributing to the Maxwell equations we consider the modified supersymmetry constraint from the previous section. We find a massive photon, in agreement with the VSR construction done in [@Cheon:2009zx]. This approach provides a systematic way of generating Lorentz violating terms like the one in (\[1.1\]). We then apply this approach to interacting theories. To be concrete, we consider the relativistic equation describing Thomas precession, also known as the BMT equation [@Bargmann:1959gz; @Jackson]. 
It describes the dynamics of an axial 4-vector $S^\mu$, associated to the spin of the electron, in the presence of a uniform electromagnetic field, and from it one can derive the precession angular velocity of the spin in the electron rest frame. In order to apply our formalism we have to construct $S^\mu$ in terms of the spinning particle variables. This is done in the next section, where we consider the coupling of the usual spinning particle to the Maxwell field and define a Grassmannian spin vector $S^\mu$ for the spinning particle. We then show how it naturally leads to the BMT equation. Then in Section \[s5\] we use the VSR spinning particle obtained in Section \[s1\] to derive corrections to the BMT equation. We find that the spin vector $S^\mu$ must have additional terms depending on $n^\mu$. We work in the limit where $m$ is much smaller than the electron mass and find that many new terms contribute to the BMT equation. As expected, the coefficients of the Lorentz breaking terms are not arbitrary, the only arbitrariness being the value of $m$. Finally, in Section \[s6\] we present some conclusions. VSR Spinning Particle {#s1} ===================== The spinning particle [@Berezin:1976eg] provides a particle description for a Dirac field, in the same way as the relativistic particle is associated to the Klein-Gordon field. 
Besides the particle coordinates $X^\mu(\tau)$ and its momentum $P^\mu(\tau)$ we also need Grassmannian coordinates $\Psi^\mu(\tau)$ and $\Psi_5(\tau)$ satisfying Poisson brackets $$\{ X^\mu, P_\nu\} = \delta^\mu_\nu, \qquad \{ \Psi^\mu, \Psi^\nu \} = \frac{i}{2}\eta^{\mu\nu}, \qquad \{\Psi_5, \Psi_5\} = -\frac{i}{2}.$$ We assume the existence of a first class constraint ${\cal S}$ which generates worldline supersymmetry $$\label{2.2} {\cal S} = P_\mu \Psi^\mu - M \Psi_5.$$ We then find that the Poisson bracket algebra closes on the Hamiltonian constraint $$\{ {\cal S}, {\cal S} \} = i {\cal H}, \qquad {\cal H} = \frac{1}{2} (P^2 - M^2).$$ The quantization is performed by promoting the Poisson brackets to commutators or anticommutators $$[ X^\mu, P_\nu ] = -i \delta^\mu_\nu, \qquad \{\Psi^\mu, \Psi^\nu \} = \frac{1}{2} \eta^{\mu\nu}, \qquad \{ \Psi_5, \Psi_5 \} = - \frac{1}{2},$$ so that $P_\mu = i \partial_\mu$ and the Grassmannian variables are proportional to the Dirac gamma matrices $\Psi^\mu = \frac{1}{2} \gamma^\mu \gamma_5, \Psi_5 = \frac{1}{2} \gamma_5$. Then the physical states $\varphi(x)$ must satisfy the supersymmetry constraint ${\cal S} \varphi(x) = 0$ and we get the massive Dirac equation. In VSR the massive Dirac equation for a fermion is modified to [@Cohen:2006sc] $$\label{2.5} \left( i \slash\!\!\!\partial + \frac{i}{2} m^2 \frac{\slash\!\!\!n}{n \partial} - M \right) \varphi(x) = 0,$$ where $n^2=0$, $m$ is the VSR mass scale and $n \partial = n^\mu \partial_\mu$. We will use the notation that for two vectors $A^\mu$ and $B^\mu$, $AB = A^\mu B_\mu$. 
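The statement that the VSR term only shifts the squared mass can be verified directly at the operator level. In momentum space the operator in (\[2.5\]) becomes $\slash\!\!\!p - \frac{1}{2} m^2 \slash\!\!\!n/np - M$, and multiplying it by its conjugate must give $p^2 - m^2 - M^2$, since $\slash\!\!\!n^2 = n^2 = 0$ and $\{\slash\!\!\!p, \slash\!\!\!n\} = 2\, np$. A small numerical sketch of this check follows (explicit Dirac matrices in the Dirac representation; the numerical values of $p$, $m$ and $M$ are arbitrary choices of ours):

```python
import numpy as np

# Dirac matrices in the Dirac representation, metric (+,-,-,-)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
gam = [np.block([[I2, Z], [Z, -I2]])] + \
      [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]

def slash(a):
    """a-slash = gamma^mu a_mu for contravariant components (a^0, a^1, a^2, a^3)."""
    return a[0] * gam[0] - a[1] * gam[1] - a[2] * gam[2] - a[3] * gam[3]

def dot(a, b):
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

n = np.array([1.0, 0.0, 0.0, 1.0])    # fixed null vector, n^2 = 0
p = np.array([2.0, 0.3, -0.7, 0.5])   # generic momentum with n.p != 0
m, M = 0.4, 1.3                       # VSR scale and Dirac mass (arbitrary)

X = slash(p) - 0.5 * m**2 * slash(n) / dot(n, p)
# (X - M)(X + M) = (p^2 - m^2 - M^2) * 1: the VSR term shifts the squared mass by m^2
lhs = (X - M * np.eye(4)) @ (X + M * np.eye(4))
rhs = (dot(p, p) - m**2 - M**2) * np.eye(4)
```

The cross term between $\slash\!\!\!p$ and $\slash\!\!\!n$ produces exactly $-m^2$, which is the mechanism behind the shift of the mass shell.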
This strongly suggests that we modify the supersymmetry constraint (\[2.2\]) to $$\label{2.6} {\cal S} = P \Psi - \frac{1}{2} m^2 \frac{\Psi n}{P n} - M \Psi_5,$$ so that the Poisson bracket algebra still closes on the Hamiltonian constraint, which is modified to $${\cal H} = \frac{1}{2} ( P^2 - m^2 - M^2).$$ Then the effect of the VSR-like term in the supersymmetry constraint is just a shift of the squared particle mass. Notice also that the supersymmetry constraint is still well behaved with respect to VSR transformations, since $n^\mu$ appears both in the numerator and in the denominator of the new term. The action for the VSR spinning particle has the standard form $$\label{2.8} S = \int \, d\tau \, ( P \dot{X} - i \Psi \dot{\Psi} + i \Psi_5 \dot{\Psi_5} + e {\cal H} + i \chi {\cal S} ),$$ where $e(\tau)$ and $\chi(\tau)$ are Lagrange multipliers. From the constraints we can derive the infinitesimal worldline supersymmetry transformations $$\begin{aligned} \delta X^\mu &=& -i \epsilon \left( \Psi^\mu + \frac{1}{2} m^2 \frac{\Psi n}{(P n)^2} n^\mu \right), \\ \delta P^\mu &=& 0, \\ \delta \Psi^\mu &=& - \frac{1}{2} \epsilon \left( P^\mu - \frac{1}{2} m^2 \frac{n^\mu}{P n} \right), \\ \delta \Psi_5 &=& - \frac{1}{2} M \epsilon, \\ \delta e &=& -i \epsilon \chi, \\ \delta \chi &=& \dot{\epsilon},\end{aligned}$$ and worldline reparametrization transformations $$\begin{aligned} \delta X^\mu &=& \xi P^\mu, \\ \delta e &=& \dot{\xi} \\ \delta P^\mu &=& \delta \Psi^\mu = \delta \Psi_5 = \delta \chi = 0,\end{aligned}$$ where $\epsilon$ is a Grassmannian supersymmetry parameter and $\xi$ is the reparametrization parameter. The action is invariant under these transformations up to a total derivative term. Upon quantization the wave function has to satisfy the supersymmetry constraint (\[2.6\]) and we get the VSR Dirac equation (\[2.5\]). Maxwell Equations in VSR {#s3} ======================== Since the Dirac equation is modified in VSR, the same must happen to the Maxwell equations. 
To show this we can use the spinning particle with extended supersymmetry. The general case was treated in [@Howe:1988ft], where it was shown that a spinning particle with ${\cal N}$ supersymmetries describes the massless field equations for particles of spin ${\cal N}/2$. A path integral analysis was performed in [@Pierri:1990rp]. Here we will consider the case ${\cal N}=2$ in the context of VSR. We consider two Grassmannian variables $\Psi^\mu_i$, $i=1,2$, and the following constraints $$\begin{aligned} {\cal H} &=& \frac{1}{2} (P^2 - m^2) \\ {\cal S}_i &=& P \Psi_i - \frac{1}{2} m^2 \frac{\Psi_i n}{Pn},\\ {\phi}_{ij} &=& \Psi_i \Psi_j.\end{aligned}$$ The constraint ${ \phi}_{ij}$ generates $SO(2)$ rotations, so we have two supersymmetries. The constraint algebra is $$\begin{aligned} \{ {\cal S}_i, {\cal S}_j \} &=& \delta_{ij} {\cal H}, \\ \{ \phi_{ij}, {\cal S}_k \} &=& {\cal S}_i \delta_{jk} - {\cal S}_j \delta_{ik}, \\ \{ \phi_{ij}, \phi_{kl} \} &=& \delta_{ik} \phi_{jl} - \delta_{il} \phi_{jk} + \delta_{jk} \phi_{il} - \delta_{jl} \phi_{ik}. \end{aligned}$$ The physical states $\varphi$ must satisfy all constraints. The anticommutation relations $$\{ \Psi^\mu_i, \Psi^\nu_j \} = \eta^{\mu\nu} \delta_{ij},$$ can be realized in terms of gamma matrices as [@Howe:1988ft] $$\Psi^\mu_1 = \gamma^\mu \otimes 1, \qquad \Psi^\mu_2 = \gamma_5 \otimes \gamma^\mu.$$ This means that the physical states are bispinors $\varphi_{\alpha\beta}$. Then the $SO(2)$ constraint implies that $\varphi_{\alpha\beta} = (\sigma^{\mu\nu} C)_{\alpha\beta} F_{\mu\nu}(x)$, where $C$ is the charge conjugation matrix and $(\sigma^{\mu\nu} C)_{\alpha\beta}$ is symmetric in the spinor indices. 
The constraint ${\cal S}_i$ implies that $$\displaystyle{\not} \partial_\alpha^\beta \varphi_{\beta\gamma} + \frac{1}{2} m^2 \frac{\displaystyle{\not} n_\alpha^\beta}{n \partial} \varphi_{\beta\gamma} = 0.$$ We can rewrite this equation for $F_{\mu\nu}$ getting $$\left( \partial_\mu F_{\nu\lambda} + \frac{1}{2} m^2 \frac{n_\mu}{n \partial} F_{\nu\lambda} \right) (\gamma^\mu \sigma^{\nu\lambda})_\alpha^\beta = 0.$$ Since $\gamma^\mu \sigma^{\nu\lambda}$ is a linear combination of $\epsilon^{\mu\nu\lambda\rho} \gamma_\rho \gamma_5$ and $\eta^{\mu[\nu} \gamma^{\lambda]}$ we can take the trace to get $$\label{29} \partial_{[\mu} F_{\nu\lambda]} + \frac{1}{2} \frac{m^2}{n\partial} n_{[\mu} F_{\nu\lambda]} = 0,$$ while multiplying by $\gamma_5$ and taking the trace we get $$\label{30} \partial^\mu F_{\mu\nu} + \frac{1}{2} \frac{m^2}{n\partial} n^\mu F_{\mu\nu} = 0.$$ In special relativity when $m^2=0$ we recover the Bianchi identities and the Maxwell equations. In VSR they are modified. They also imply that $$\square F_{\mu\nu} + m^2 F_{\mu\nu} = 0,$$ showing that $F_{\mu\nu}$ has mass $m$. We can try to solve the VSR Bianchi identities (\[29\]) and remarkably there is a solution $$F_{\mu\nu} = D_{[\mu} A_{\nu]}, \qquad D_\mu = \partial_\mu + \frac{1}{2} \frac{m^2}{n \partial} n_\mu.$$ Notice that the operators $D_\mu$ commute among themselves but do not satisfy the Leibniz rule. Notice also that $F_{\mu\nu}$ is not invariant under the usual gauge transformation but it is invariant under $$\label{33} \delta A_\mu = D_\mu \Lambda.$$ Then the VSR Maxwell equations (\[30\]) can be written as $$D^\mu F_{\mu\nu} = 0,$$ and we have a massive field described by a field equation with a modified gauge invariance (\[33\]). Our results agree with those found in [@Cheon:2009zx]. The non-Abelian extension of VSR gauge fields was done in [@Alfaro:2013uva]. It was found that since all gauge fields in a given multiplet acquire a common mass $m$ it cannot be used as a replacement for the Higgs mechanism.
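Both the solution of the VSR Bianchi identity and the mass of the field strength can be checked in momentum space, where $D_\mu$ acts by multiplication with $i d_\mu$, $d_\mu = k_\mu - \frac{m^2}{2\,k\cdot n}\, n_\mu$. The sympy sketch below assumes a null preferred direction $n^\mu = (1,0,0,1)$ and metric $(+,-,-,-)$; the component values of $n^\mu$ are a representative choice, not fixed by the text.

```python
import sympy as sp

# lower-index momentum k_mu and gauge field A_mu; null VSR vector n
k = sp.symbols('k0:4')
A = sp.symbols('A0:4')
m = sp.symbols('m', positive=True)
n_up = (1, 0, 0, 1)                 # n^mu, null: n.n = 0
n_lo = (1, 0, 0, -1)                # n_mu in signature (+,-,-,-)
inv_eta = (1, -1, -1, -1)           # diagonal of eta^{mu nu}

kn = sum(k[i] * n_up[i] for i in range(4))            # k . n
# momentum-space symbol of D_mu (up to an overall factor of i)
d = [k[i] - m**2 / (2 * kn) * n_lo[i] for i in range(4)]
F = [[d[i] * A[j] - d[j] * A[i] for j in range(4)] for i in range(4)]

# VSR Bianchi identity d_[mu F_nu lam] = 0 holds identically
for mu in range(4):
    for nu in range(4):
        for lam in range(4):
            cyc = d[mu] * F[nu][lam] + d[nu] * F[lam][mu] + d[lam] * F[mu][nu]
            assert sp.simplify(cyc) == 0

# d^mu d_mu = k^2 - m^2: the field strength indeed carries mass m
d2 = sum(inv_eta[i] * d[i]**2 for i in range(4))
k2 = sum(inv_eta[i] * k[i]**2 for i in range(4))
assert sp.simplify(d2 - (k2 - m**2)) == 0
print("VSR Bianchi identity and mass-shell relation verified")
```

The Bianchi identity holds for purely algebraic reasons (it is the antisymmetrization of a product of commuting factors), while the mass relation uses $n^2 = 0$, which kills the $(m^2/2k\cdot n)^2$ term.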
Coupling the Spinning Particle to the Maxwell Field and the BMT Equation {#s2} ======================================================================== In this section we show how to derive the BMT equation in special relativity using the spinning particle variables. Firstly we couple the spinning particle to a background electromagnetic field $A_\mu$ using the minimal substitution $P^\mu \rightarrow P^\mu - q A^\mu$ in the supersymmetry constraint (\[2.2\]). To introduce the gyromagnetic factor $g$ we consider the proposal for a spinning particle with “anomalous" magnetic moment [@Barducci:1982yw]. There is a more detailed treatment in [@Gitman:1992an] where the expressions are more explicit. Besides the minimal substitution we also have to replace $M \rightarrow M + 2i \mu F\Psi\Psi$, in the supersymmetry constraint, where the magnetic moment is $$\mu = \frac{q}{2M} ( \frac{g}{2} - 1),$$ so we get $$\label{4.2} {\cal S} = \Psi (P - q A) - (M + 2i\mu F\Psi\Psi) \Psi_5.$$ The notation $FAB = F_{\mu\nu} A^\mu B^\nu$ is used throughout the rest of the paper. The Hamiltonian constraint is now $${\cal H} = \frac{1}{2} [ (P - qA)^2 - M^2 ] - i( q + 2 \mu M)F\Psi\Psi - 4i\mu F(P- qA)\Psi \Psi_5 + 2\mu^2 (F\Psi\Psi)^2.$$ It can also be checked that $\{ {\cal H}, {\cal S} \}=0$. 
The action has the same form as before (\[2.8\]) and the equations of motion are $$\begin{aligned} P^\mu &=& qA^\mu - \frac{1}{e} \dot{X}^\mu + 4i\mu F^{\mu\nu}\Psi_\nu \Psi_5 - \frac{i}{e} \chi\Psi^\mu, \label{38} \\ \dot{P}^\mu &=& e [ -q (P^\nu - qA^\nu) \partial^\mu A_\nu - i (q + 2\mu M) \partial^\mu F\Psi\Psi - 4i\mu (\partial^\mu F)(P-qA)\Psi \Psi_5 \nonumber \\ &-& 4i\mu q (\partial^\mu A^\nu) F_{\rho\nu}\Psi^\rho\Psi_5 + 4 \mu^2 F\Psi\Psi \partial^\mu F\Psi\Psi ] - iq\chi \Psi\partial^\mu A, \\ \dot{\Psi}^\mu &=& e[ -(q + 2\mu M) F^{\mu\nu}\Psi_\nu + 2\mu F^{\mu\nu}(P-qA)_\nu \Psi_5 - 4 i \mu^2 F\Psi\Psi F^{\mu\nu} \Psi_\nu ] \nonumber \\ &-& \frac{1}{2} \chi [ (P-qA)^\mu - 8i\mu F^{\mu\nu}\Psi_\nu \Psi_5], \\ \dot{\Psi}_5 &=& -2\mu e F(P-qA)\Psi - \frac{1}{2} \chi (M + 2i \mu F\Psi\Psi), \end{aligned}$$ plus the constraints. We now choose the gauge $e = -1/M$ and $\chi =0$. Since we are interested only in weak and uniform background fields we can linearize the above equations. Also eliminating the momentum we find $$\begin{aligned} \ddot{X}^\mu &=& \frac{q}{M} F^{\mu\nu} \dot{X}_\nu, \label{4.8} \\ \dot{\Psi}^\mu &=& (\frac{q}{M} + 2\mu) F^{\mu\nu}\Psi_\nu - 2\mu F^{\mu\nu}\dot{X}_\nu\Psi_5\label{4.9}, \\ \dot{\Psi}_5 &=& 2\mu F\dot{X}\Psi, \\ \dot{X}\Psi &+& 2i\frac{\mu }{M} F\Psi\Psi \Psi_5 - \Psi_5 = 0, \label{45} \\ \dot{X}^2 &-& 1 - 2 \frac{i}{M} ( \frac{q}{M} + 2\mu) F\Psi\Psi = 0. \label{4.12}\end{aligned}$$ The next step is to define an axial spin vector $S^\mu(\tau)$ which generalizes the rest frame spin of the electron. Requiring that its time component vanishes in the rest frame it must satisfy $\dot{X} S = 0$. There are several proposals to describe the relativistic spin through some particle model (see for instance [@Deriglazov:2011gy]). 
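Before proceeding it is a useful sanity check that this dynamics preserves the constraints along the flow. The sketch below integrates the Lorentz force law (4.8), together with the BMT-type precession this section derives, with a fourth-order Runge-Kutta scheme; the field strengths, $g$, and $q/M$ are illustrative values only, not taken from the text. The quantities $\dot X^2$, $S\cdot\dot X$, and $S^2$ should all be conserved.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
q_over_M, g = 1.0, 2.2                 # illustrative parameter values

# constant background F_{mu nu}: one electric and one magnetic entry (arbitrary)
F = np.zeros((4, 4))
F[0, 1], F[1, 0] = 0.3, -0.3
F[1, 2], F[2, 1] = -0.7, 0.7
Fud = eta @ F                          # F^mu_nu = eta^{mu a} F_{a nu}

def deriv(y):
    u, S = y[:4], y[4:]
    FSu = S @ F @ u                    # F_{ab} S^a u^b
    du = q_over_M * Fud @ u            # Lorentz force, eq. (4.8)
    dS = q_over_M * (0.5 * g * Fud @ S + (0.5 * g - 1.0) * FSu * u)
    return np.concatenate([du, dS])

# initial data: u^2 = 1, S.u = 0, S^2 = -1
y, h = np.array([1, 0, 0, 0, 0, 0, 0, 1.0]), 0.005
for _ in range(2000):                  # one RK4 step per iteration
    k1 = deriv(y); k2 = deriv(y + h/2 * k1)
    k3 = deriv(y + h/2 * k2); k4 = deriv(y + h * k3)
    y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

u, S = y[:4], y[4:]
for val, target in [(u @ eta @ u, 1.0), (S @ eta @ u, 0.0), (S @ eta @ S, -1.0)]:
    assert abs(val - target) < 1e-5
print("u^2, S.u and S^2 conserved along the flow")
```

Conservation of $S\cdot\dot X$ holds for any $g$ once $\dot X^2 = 1$ is maintained, and $S^2$ is then conserved because $F_{\mu\nu}S^\mu S^\nu$ vanishes by antisymmetry.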
Here we assume that $S^\mu$ is a pseudo-vector that is even in the Grassmannian variables, and the natural choice is $$S^\mu = \epsilon^{\mu\nu\rho\sigma} \dot{X}_\nu \Psi_\rho \Psi_\sigma.$$ When computing $\dot{S}^\mu$ we have to rewrite all terms quadratic in $\Psi$ in terms of $S$. To do that we use the identity $$\label{4.14} \Psi^\mu \Psi^\nu = \frac{1}{2\dot{X}^2} \epsilon^{\mu\nu\rho\sigma} \dot{X}_\rho S_\sigma - \dot{X}^{[\mu} \Psi^{\nu]} \frac{\dot{X}\Psi}{\dot{X}^2},$$ where $A_{[\mu} B_{\nu]} = A_\mu B_\nu - A_\nu B_\mu$ with no factor of 1/2. We also have to use the field equations (\[4.9\]-\[4.12\]) noting that $\dot{\Psi}^\mu, \dot{\Psi}_5, \dot{X}^2 - 1$ and $\dot{X}\Psi - \Psi_5$ are all of ${\cal O}(F)$. The calculation is lengthy and tedious. There are several terms proportional to $\epsilon^{\mu\nu\rho\sigma}\dot{X}_\nu \Psi_\sigma \Psi_5$ which cannot be rewritten in terms of $S^\mu$ but cancel against each other. At the end the result is $$\dot{S}^\mu = \frac{qg}{2M} \left( F^{\mu\nu}S_\nu + \dot{X}^\mu FS\dot{X} \right) - \dot{X}^\mu S\ddot{X}.$$ Using now the equation of motion (\[4.8\]) we get the BMT equation $$\dot{S}^\mu = \frac{q}{M} \left( \frac{g}{2} F^{\mu\nu}S_\nu + (\frac{g}{2} - 1) \dot{X}^\mu FS\dot{X} \right).$$ Having obtained the BMT equation from the spinning particle, the next step is to generalize it to VSR since we already know the supersymmetry constraint (\[2.6\]). Coupling the VSR Spinning Particle {#s5} ================================== We go along the same lines as in the previous section.
The simplest choice for the supersymmetry constraint which reduces to (\[4.2\]) and (\[2.6\]) is $${\cal S} = \Psi (P - q A) - (M + 2i\mu F\Psi\Psi) \Psi_5 - \frac{1}{2} m^2 \frac{\Psi n}{(P-qA)n}.$$ The Poisson bracket algebra of two supersymmetry constraints closes on $$\begin{aligned} \label{52} {\cal H} &= \frac{1}{2} [ (P - qA)^2 - m^2 - M^2 ] - i( q + 2 \mu M)F\Psi\Psi - 4i\mu F(P- qA)\Psi \Psi_5 + 2\mu^2 (F\Psi\Psi)^2 \nonumber \\ &+ iq m^2 \Psi n \frac{ F\Psi n}{[(P-qA)n]^2} - 2i\mu m^2 \frac{F\Psi n}{(P-qA)n} \Psi_5 + 2\mu m^2 \Psi n \frac{ n^\rho \partial_\rho F\Psi\Psi}{[(P-qA)n]^2} \Psi_5.\end{aligned}$$ Again it is possible to show that its Poisson bracket with ${\cal S}$ vanishes. When deriving the equations of motion we have to deal with $(P-qA)n$ in several denominators. To do that we take the field equation obtained by varying $P^\mu$ $$\begin{aligned} (P-qA)^\mu &=& - \frac{1}{e} \dot{X}^\mu + 4i\mu F^{\mu\nu}\Psi_\nu\Psi_5 + 2m^2 \left( iq \Psi n \frac{F\Psi n}{[(P-qA)n]^3} - i\mu \frac{F\Psi n \Psi_5}{[(P-qA)n]^2} \right. \nonumber \\ &+& \left. 2\mu \Psi n \frac{n^\rho \partial_\rho F\Psi\Psi}{[(P-qA)n]^3} \Psi_5 \right) n^\mu - \frac{i}{e} \chi \left( \Psi^\mu + \frac{1}{2} m^2 \frac{\Psi n}{[(P-qA)n]^2} n^\mu \right),\end{aligned}$$ and contract it with $n^\mu$ so that $$(P-qA)n = - \frac{\dot{X} n}{e} \left( 1 + 4i\mu e \frac{F\Psi n}{\dot{X}n} \Psi_5 + i \chi \frac{ \Psi n}{\dot{X} n} \right).$$ We can now invert this equation taking into account that we have Grassmannian variables inside the parentheses $$\frac{1}{(P-qA)n} = - \frac{e}{\dot{X}n} \left( 1 - 4i\mu e \frac{F\Psi n}{\dot{X}n} \Psi_5 - i \chi \frac{\Psi n}{\dot{X} n} + 8\mu e \frac{F\Psi n\Psi_5}{(\dot{X}n)^2} \chi \Psi n \right).$$ Since the particle has mass $\sqrt{m^2 + M^2}$ we now choose the gauge $e = -1/\sqrt{m^2 + M^2}$ and $\chi =0$.
The linearized equations of motion become $$\begin{aligned} \ddot{X}^\mu &=&\frac{q}{\sqrt{m^2 + M^2}} F^{\mu\nu} \dot{X}_\nu, \label{5.6}\\ \dot{\Psi}^\mu &=& \frac{q + 2\mu M}{\sqrt{m^2 + M^2}} F^{\mu\nu}\Psi_\nu - 2\mu F^{\mu\nu}\dot{X}_\nu\Psi_5 - \frac{q}{2} \frac{m^2}{(m^2 + M^2)^{3/2}} \frac{1}{(\dot{X} n)^2} \left( F\Psi n \,\, n^\mu - \Psi n \,\, F^{\mu\nu}n_\nu \right) \nonumber \\ &+& \mu \frac{m^2}{m^2 + M^2} \frac{F^{\mu\nu} n_\nu}{\dot{X}n} \Psi_5, \\ \dot{\Psi}_5 &=& 2\mu F\dot{X}\Psi + 2 \mu \frac{m^2}{m^2 + M^2} \frac{F\Psi n}{\dot{X}n}, \\ \dot{X}\Psi &+& 2 \frac{\mu }{\sqrt{m^2 + M^2}} F\Psi\Psi \Psi_5 - \frac{M}{\sqrt{m^2 + M^2}}\Psi_5 - \frac{1}{2} \frac{m^2}{m^2 + M^2} \frac{\Psi n}{\dot{X} n} = 0, \\ \dot{X}^2 &-& 1 - \frac{2i}{m^2 + M^2} ( q + 2\mu M) F\Psi\Psi + 6iq \frac{m^2}{m^2 + M^2} \frac{\Psi n}{(\dot{X} n)^2} F\Psi n = 0.\end{aligned}$$ Notice that the Lorentz force law in VSR (\[5.6\]) keeps the same form as in special relativity and does not depend on $n^\mu$. Only the mass has changed to include the VSR mass scale $m$. The next step is to compute $\dot{S}^\mu$. Besides the identity (\[4.14\]) we will need another identity obtained from the former one by contracting it with $n^\mu$. After using the equations of motion it reads $$\Psi_\mu \Psi n = \frac{2(m^2+M^2)}{m^2+2M^2} \left( \frac{1}{2(\dot{X})^2} \epsilon_{\mu\nu\rho\sigma} n^\nu \dot{X}^\rho S^\sigma + \frac{M}{\sqrt{m^2+M^2}} \dot{X}n \Psi_\mu \Psi_5 \right) + \dots,$$ where $\dots$ are terms proportional to $\dot{X}^\mu$ which do not contribute to the relevant calculations. It is then found that the cancellation among the $\epsilon^{\mu\nu\rho\sigma}\dot{X}_\nu \Psi_\sigma \Psi_5$ terms no longer occurs and $\dot{S}$ cannot be written in terms of $S$. The only way out is to modify the definition of $S^\mu$. In fact having a new vector $n^\mu$ allows the construction of other vectors out of a bilinear in the Grassmannians.
For instance $$\tilde{S}^\mu = \frac{1}{\dot{X} n} \epsilon^{\mu\nu\rho\sigma} \dot{X}_\nu n_\rho \Psi_\sigma \Psi_5$$ satisfies $\dot{X}\tilde{S}=0$ so it is a candidate. Another possibility is $\epsilon^{\mu\nu\rho\sigma} n_\nu \Psi_\rho \Psi_\sigma$, which vanishes when contracted with $n$ but not with $\dot{X}$. It is possible to multiply it by a projector so that it vanishes when contracted with $\dot{X}$ $$\hat{S}^\mu = \frac{1}{\dot{X} n} \epsilon^{\mu\nu\rho\sigma} n_\nu \Psi_\rho \Psi_\sigma - \frac{\dot{X}^\mu}{\dot{X}^2} \frac{1}{\dot{X}n} \epsilon^{\lambda\nu\rho\sigma} \dot{X}_\lambda n_\nu \Psi_\rho \Psi_\sigma.$$ It turns out that $\hat{S}$ can be written as a combination of $S$ and $\tilde{S}$ as $$\hat{S}^\mu = 2 \frac{m^2+M^2}{m^2+2M^2} S^\mu + 4 \frac{M\sqrt{m^2+M^2}}{m^2+2M^2} \tilde{S}^\mu -\frac{m^2}{m^2+2M^2} \frac{S n}{\dot{X} n} \dot{X}^\mu + \frac{m^2}{m^2+2M^2} \frac{S n}{(\dot{X} n)^2} n^\mu.$$ We then find that the only combination of $S$ and $\tilde{S}$ that gets rid of the $\epsilon^{\mu\nu\rho\sigma}\dot{X}_\nu \Psi_\sigma \Psi_5$ terms mentioned above is given by $$\label{65} S_T^\mu = S^\mu - \frac{m^2}{M\sqrt{m^2+M^2}} \tilde{S}^\mu = \epsilon^{\mu\nu\rho\sigma} \dot{X}_\nu \left( \Psi_\rho\Psi_\sigma - \frac{m^2}{M\sqrt{m^2+M^2}} \frac{1}{\dot{X}n} n_\rho\Psi_\sigma\Psi_5 \right).$$ The factor $- \frac{m^2}{M\sqrt{m^2+M^2}}$ is essential for the cancellation. To show that we need further identities like $$\begin{aligned} \epsilon_{\mu\nu\rho\sigma} \dot{X}^\rho \tilde{S}^\sigma &=& \frac{ \dot{X}_{[\mu} n_{\nu]} }{\dot{X} n} \dot{X}\Psi \Psi_5 - \dot{X}_{[\mu} \Psi_{\nu]} \Psi_5 + \dot{X}^2 \frac{ n_{[\mu} \Psi_{\nu]} }{\dot{X}n} \Psi_5, \\ \epsilon_{\mu\nu\rho\sigma} n^\rho \tilde{S}^\sigma &=& \frac{ \dot{X}_{[\mu} n_{\nu]} }{\dot{X}n} \Psi n \Psi_5 + n_{[\mu} \Psi_{\nu]} \Psi_5.\end{aligned}$$ Since the VSR scale is very small we can consider only the case $m^2 \ll M^2$ and keep terms up to order $m^2/M^2$.
In this case we get after a long calculation $$\begin{aligned} \label{68} \dot{S}^\mu_T &=& \frac{1}{M} \left(1-\frac{1}{2}\frac{m^2}{M^2} \right) (q+2\mu M) F^{\mu\nu}S_{T\nu} + 2\mu \left(\dot{X}^\mu - \frac{1}{2}\frac{m^2}{M^2}\frac{n^\mu}{\dot{X}n}\right) FS_T\dot{X} \nonumber \\ &+& \mu \frac{m^2}{M^2} F^{\mu\nu}\dot{X}_\nu \frac{S_Tn}{\dot{X}n} + \frac{q}{2} \frac{m^2}{M^3} F^{\mu\nu} n_\nu \frac{S_Tn}{(\dot{X}n)^2} + \frac{q}{2} \frac{m^2}{M^3} \left(\dot{X}^\mu - \frac{n^\mu}{\dot{X}n} \right) \frac{FS_Tn}{\dot{X}n} \nonumber \\ &-& \frac{q}{2} \frac{m^2}{M^3} \dot{X}^\mu F\dot{X}n \frac{S_Tn}{(\dot{X}n)^2}.\end{aligned}$$ This is the generalization of the BMT equation to VSR. As anticipated there are several terms that can be built out of $n^\mu$ but all the coefficients are determined. A consistency check is to notice that $$\dot{X} \dot{S}_T = \frac{q}{\sqrt{m^2+M^2}} F\dot{X}S_T.$$ We now have to go to the electron rest frame by a Lorentz boost and $n^\mu$ has to be transformed as well. It can be checked that $S_T \dot{S}_{T}=0$ so that in the rest frame $\vec{S}_T \cdot \dot{\vec{S}}_T =0$ which means that the spin is precessing in that frame. This has been explicitly verified. Then it is possible to compute the VSR corrections to Thomas precession and also the VSR contribution to the anomalous magnetic moment of the electron. An alternative way to derive the extension of the BMT equation to VSR is by making use of the distribution function for the spinning particle. In order to relate quantities depending on the Grassmannian variables with observable quantities it is usual to define a distribution function in phase space [@Berezin:1976eg]. 
The distribution $\rho(\Psi,\Psi_5,t)$ must satisfy a Liouville equation $$\label{70} \frac{\partial \rho}{\partial t} + \{ H, \rho \} = 0,$$ and must be normalized to one $$\int d\Psi_5 d\Psi_3 d\Psi_2 d\Psi_1 d\Psi_0 \,\, \rho(\Psi,\Psi_5) = 1.$$ It is used to define the averaged value of a dynamical variable $F(\Psi,\Psi_5)$ as $$<F> \, = \int d\Psi_5 d\Psi_3 d\Psi_2 d\Psi_1 d\Psi_0 \,\, F(\Psi,\Psi_5) \, \rho(\Psi,\Psi_5).$$ In this context the Grassmannian variables are regarded as independent variables so that the supersymmetry constraint ${\cal S}$ is used only at the end of all calculations. In the relativistic case $P \Psi$ and $\Psi_5$ are gauge degrees of freedom so that the distribution function is given by [@Berezin:1976eg] $$\label{73} \rho = \frac{1}{2} \left( v(t) \Psi + \frac{1}{3} \epsilon^{\mu\nu\rho\sigma} \frac{P_\mu}{M} \Psi_\nu \Psi_\rho \Psi_\sigma \right) \delta\left(\frac{P \Psi}{M}\right) \delta\left(\Psi_5\right),$$ where $v(t)$ satisfies $P v= 0$ and the coefficient $1/3$ is required by normalization. The distribution function is defined for the free spinning particle and interactions are introduced in the Hamiltonian in (\[70\]). Then $P^\mu = M \dot{X}^\mu$ and (\[73\]) reduces to $$\label{74} \rho = \frac{1}{2} \left( v(t) \Psi + \frac{1}{3} \epsilon^{\mu\nu\rho\sigma} \dot{X}_\mu \Psi_\nu \Psi_\rho \Psi_\sigma \right) \dot{X} \Psi \, \Psi_5,$$ with $\dot{X} v = 0$. If we consider the VSR contributions to the spinning particle as part of the interactions then our distribution function is (\[74\]) and we can use it to compute the averaged value of $S^\mu_T$ (\[65\]). We then find that $<S^\mu_T> = v^\mu$. We now use (\[70\]) to find the equation satisfied by $<S^\mu_T>$. To this end we get the Hamiltonian $H$ from (\[2.8\]) as $H = - e {\cal H} = {\cal H}/\sqrt{m^2+M^2}$ with ${\cal H}$ given by (\[52\]). 
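The statement that the coefficient $1/3$ in (73) is fixed by normalization can be verified with a small Berezin-calculus implementation. The sketch below assumes the rest frame $P^\mu = (M,0,0,0)$ on shell, metric $(+,-,-,-)$, and $\epsilon_{0123}=+1$; since the overall sign of the top-form integral depends on the ordering convention for the Berezin measure, only $|\int \rho| = 1$ is asserted.

```python
from itertools import permutations

def gmul(a, b):
    """Multiply two Grassmann elements stored as {sorted index tuple: coeff}."""
    out = {}
    for ka, ca in a.items():
        for kb, cb in b.items():
            seq = ka + kb
            if len(set(seq)) < len(seq):
                continue                      # psi^2 = 0
            inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
                      if seq[i] > seq[j])     # sign from sorting the generators
            key = tuple(sorted(seq))
            out[key] = out.get(key, 0.0) + (-1) ** inv * ca * cb
    return out

def sgn(p):                                   # parity of a 4-permutation
    return (-1) ** sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])

M = 2.0                                       # any on-shell mass
psi = [{(i,): 1.0} for i in range(5)]         # psi[0..3] = Psi_mu, psi[4] = Psi_5
P_lo = [M, 0.0, 0.0, 0.0]                     # rest frame, P^2 = M^2
P_up = [M, 0.0, 0.0, 0.0]
v_up = [0.0, 0.0, 1.0, 0.0]                   # any v with P.v = 0

# v.Psi + (1/3) eps^{mu nu rho sig} (P_mu/M) Psi_nu Psi_rho Psi_sig
lin = {(mu,): v_up[mu] for mu in range(4)}
cub = {}
for p in permutations(range(4)):
    mu, nu, rho, sig = p
    eps_up = -sgn(p)                          # eps_{0123} = +1  =>  eps^{0123} = -1
    for kk, cc in gmul(gmul(psi[nu], psi[rho]), psi[sig]).items():
        cub[kk] = cub.get(kk, 0.0) + eps_up * (P_lo[mu] / M) * cc / 3.0
PPsi = {(lam,): P_up[lam] / M for lam in range(4)}    # delta(P.Psi/M) = P.Psi/M

half = {kk: 0.5 * cc for kk, cc in {**lin, **cub}.items()}
rho_el = gmul(gmul(half, PPsi), psi[4])
# Berezin integral dPsi5 dPsi3 dPsi2 dPsi1 dPsi0: top-form coefficient
norm = rho_el.get((0, 1, 2, 3, 4), 0.0)
print(f"|integral of rho| = {abs(norm):.6f}")
assert abs(abs(norm) - 1.0) < 1e-9
```

Replacing $1/3$ by any other coefficient makes the top-form coefficient differ from $\pm 1$, which is the sense in which normalization fixes it. Note the merge `{**lin, **cub}` is safe here because `lin` holds 1-index keys and `cub` 3-index keys.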
Computing the Poisson brackets and using the constraints (\[45\]) and (\[4.12\]) and eliminating the momentum using (\[38\]) we find that the equation satisfied by $<S^\mu_T>$ when $m^2 \ll M^2$ is exactly (\[68\]). This provides a powerful check that our extension of the BMT equation to VSR is in the right direction. Alternatively we could have started with a distribution function for the VSR spinning particle which already takes into account the VSR effects as described in Section \[s1\]. Now the gauge degrees of freedom are $\Pi \Psi$ and $\Psi_5$, where $$\Pi^\mu = P^\mu - \frac{1}{2} m^2 \frac{n^\mu}{Pn}$$ so that the distribution function is $$\rho = \frac{1}{2} \left( v \Psi + \frac{1}{3} \epsilon^{\mu\nu\rho\sigma} \frac{\Pi_\mu}{M} \Psi_\nu \Psi_\rho \Psi_\sigma \right) \frac{\Pi \Psi}{M} \Psi_5,$$ with $\Pi v =0$. We again replace the momenta getting $$\Pi^\mu = \sqrt{m^2 + M^2} \dot{X}^\mu - \frac{1}{2} \frac{m^2}{\sqrt{m^2 + M^2}} \frac{n^\mu}{\dot{X} n}.$$ Now the averaged value of $S^\mu_T$ is given by a more complicated expression $$\label{78} <S^\mu_T> = \frac{\sqrt{m^2 + M^2}}{M} \left[ \left(1 - \frac{1}{2} \frac{m^2}{m^2+M^2} \right) v^\mu - \frac{1}{2} \frac{m^2}{m^2+M^2} \left( \dot{X}^\mu - \frac{1}{2} \frac{m^2}{m^2+M^2} \frac{n^\mu}{\dot{X}n} \right) \frac{v n}{\dot{X}n} \right],$$ which is a consequence of the fact that $v^\mu$ no longer satisfies $\dot{X} v= 0$ but $$\dot{X}v - \frac{1}{2} \frac{m^2}{\sqrt{m^2+M^2}} \frac{n v}{\dot{X}n} = 0.$$ Notice that we still have $\dot{X} <S_T> = 0$.
In the limit $m^2 \ll M^2$ the Liouville equation now gives $$\begin{aligned} \label{80} \dot{v}^\mu &=& \frac{1}{M} \left( 1 - \frac{1}{2} \frac{m^2}{M^2} \right) (q + 2\mu M) F^{\mu\nu} v_\nu + 2\mu \left[ \left(1 + \frac{1}{2} \frac{m^2}{M^2} \right) \dot{X}^\mu - \frac{1}{2} \frac{m^2}{M^2} \frac{n^\mu}{\dot{X}n} \right] Fv\dot{X} + \nonumber \\ &+& \frac{m^2}{M^2} \frac{q}{2M} F^{\mu\nu}n_\nu \frac{vn}{\dot{X}n} - \frac{1}{2M} \frac{m^2}{M^2} \left( 2\mu M \dot{X}^\mu + q \frac{n^\mu}{\dot{X}n} \right) \frac{Fvn}{\dot{X}n}.\end{aligned}$$ We then take the time derivative in (\[78\]) and use (\[80\]) to find that $<\dot{S}^\mu_T>$ again obeys (\[68\]). As a last remark we want to mention that the distribution function is also required to satisfy some sort of positivity condition [@Berezin:1976eg] like $$\int d\Psi_5 d\Psi_3 d\Psi_2 d\Psi_1 d\Psi_0 \,\, \rho \,\, F^\star F \ge 0,$$ for any phase space function $F$. As in the classical relativistic case [@Berezin:1976eg] our distribution functions do not satisfy a positivity condition. It seems that this can only be implemented when the spinning particle has internal degrees of freedom [@Barducci:1980xk]. Conclusions {#s6} =========== We discussed the inclusion of VSR like terms in a Lorentz invariant theory starting with the spinning particle model for a fermion. It provides a way to generate a class of Lorentz violating theories which have a preferred direction in space but at the same time keep many essential elements of special relativity. Its effects appear at a scale $m$ where the anisotropy becomes relevant. Many terms invariant by VSR can be added to relativistic invariant equations and we developed a systematic way to generate such terms. In particular we determined how the BMT equation, which describes the electron spin precession in an electromagnetic field, is modified by VSR. We showed that in the rest frame the spin still precesses but VSR will now produce new contributions.
It has been argued that VSR is not consistent with Thomas precession [@Das:2009fi] but our analysis does not support this view. It is well known that for a particle with $g=2$ in a magnetic field the spin precesses in such a way that the longitudinal polarization is constant, while the presence of an electric field in the relativistic limit makes the spin precess very slowly. It would be interesting to find how VSR changes these properties. Acknowledgments =============== The work of J.A. was partially supported by Fondecyt \# 1110378 and Anillo ACT 1102. He also wants to thank the Instituto de Física, USP and the IFT/SAIFR for its kind hospitality during his visits to São Paulo. The work of V.O.R. is supported by CNPq grant 304116/2010-6 and FAPESP grant 2008/05343-5. He also wants to thank Facultad de Fisica, PUC Chile for its kind hospitality during his visits to Santiago. [999]{} D. Colladay and V. A. Kostelecky, [*[Lorentz violating extension of the standard model]{}*]{}, [*Phys.Rev.*]{} [**D58**]{} (1998) 116002, \[[[hep-ph/9809521]{}](http://xxx.lanl.gov/abs/hep-ph/9809521)\]. S. Liberati, [*[Tests of Lorentz invariance: a 2013 update]{}*]{}, [[arXiv:1304.5795]{}](http://xxx.lanl.gov/abs/1304.5795). A. G. Cohen and S. L. Glashow, [*[Very special relativity]{}*]{}, [ *Phys.Rev.Lett.*]{} [**97**]{} (2006) 021601, \[[[hep-ph/0601236]{}](http://xxx.lanl.gov/abs/hep-ph/0601236)\]. A. G. Cohen and S. L. Glashow, [*[A Lorentz-Violating Origin of Neutrino Mass?]{}*]{} \[[[hep-ph/0605036]{}](http://xxx.lanl.gov/abs/hep-ph/0605036)\]. A. G. Cohen and D. Z. Freedman, [*[SIM(2) and SUSY]{}*]{}, [*JHEP*]{} [**0707**]{} (2007) 039, \[[[ hep-th/0605172]{}](http://xxx.lanl.gov/abs/hep-th/0605172)\]. J. Vohanka, [*[Gauge Theory and SIM(2) Superspace]{}*]{}, [*Phys.Rev.*]{} [ **D85**]{} (2012) 105009, \[[[ arXiv:1112.1797]{}](http://xxx.lanl.gov/abs/1112.1797)\]. G. Gibbons, J. Gomis, and C.
Pope, [*[General very special relativity is Finsler geometry]{}*]{}, [*Phys.Rev.*]{} [**D76**]{} (2007) 081701, \[[[arXiv:0707.2174]{}](http://xxx.lanl.gov/abs/0707.2174)\]. W. Muck, [*[Very Special Relativity in Curved Space-Times]{}*]{}, [ *Phys.Lett.*]{} [**B670**]{} (2008) 95–98, \[[[arXiv:0806.0737]{}](http://xxx.lanl.gov/abs/0806.0737)\]. M. Sheikh-Jabbari and A. Tureanu, [*[Realization of Cohen-Glashow Very Special Relativity on Noncommutative Space-Time]{}*]{}, [*Phys.Rev.Lett.*]{} [ **101**]{} (2008) 261601, \[[[ arXiv:0806.3699]{}](http://xxx.lanl.gov/abs/0806.3699)\]. S. Das, S. Ghosh, and S. Mignemi, [*[Noncommutative Spacetime in Very Special Relativity]{}*]{}, [*Phys.Lett.*]{} [**A375**]{} (2011) 3237–3242, \[[[arXiv:1004.5356]{}](http://xxx.lanl.gov/abs/1004.5356)\]. D. Ahluwalia and S. Horvath, [*[Very special relativity as relativity of dark matter: The Elko connection]{}*]{}, [*JHEP*]{} [**1011**]{} (2010) 078, \[[[arXiv:1008.0436]{}](http://xxx.lanl.gov/abs/1008.0436)\]. Z. Chang, M.-H. Li, X. Li, and S. Wang, [*[Cosmological model with local symmetry of very special relativity and constraints on it from supernovae]{}*]{}, [[arXiv:1303.1593]{}](http://xxx.lanl.gov/abs/1303.1593). S. Cheon, C. Lee, and S. J. Lee, [*[SIM(2)-invariant Modifications of Electrodynamic Theory]{}*]{}, [*Phys.Lett.*]{} [**B679**]{} (2009) 73–76, \[[[arXiv:0904.2065]{}](http://xxx.lanl.gov/abs/0904.2065)\]. V. Bargmann, L. Michel, and V. Telegdi, [*[Precession of the polarization of particles moving in a homogeneous electromagnetic field]{}*]{}, [ *Phys.Rev.Lett.*]{} [**2**]{} (1959) 435. J. Jackson, [*[Classical Electrodynamics]{}*]{}. , [2nd]{} ed., [1975]{}, Ch. 11. F. A. Berezin and M. S. Marinov, “Particle Spin Dynamics as the Grassmann Variant of Classical Mechanics,” Annals Phys.  [**104**]{}, 336 (1977). P. S. Howe, S. Penati, M. Pernici, and P. K. 
Townsend, [*[Wave Equations for arbitrary spin from quantization of the extended supersymmetric spinning particle]{}*]{}, [*Phys.Lett.*]{} [**B215**]{} (1988) 555. M. Pierri and V. O. Rivelles, [*[BRST Quantization of Spinning Relativistic Particles with Extended Supersymmetries]{}*]{}, [*Phys.Lett.*]{} [**B251**]{} (1990) 421–426. J. Alfaro and V. O. Rivelles, [*[Non Abelian Fields in Very Special Relativity]{}*]{}, Phys. Rev. D [**88**]{}, 085023 (2013) \[arXiv:1305.1577 \[hep-th\]\]. A. Barducci, [*[Pseudoclassical description of relativisitic spinning particles with anomalous magnetic moment]{}*]{}, [*Phys.Lett.*]{} [**B118**]{} (1982) 112. D. Gitman and A. Saa, [*[Quantization of spinning particle with anomalous magnetic momentum]{}*]{}, [*Class.Quant.Grav.*]{} [**10**]{} (1993) 1447–1460, \[[[hep-th/9209086]{}](http://xxx.lanl.gov/abs/hep-th/9209086)\]. A. Deriglazov, [*[Semiclassical Description of Relativistic Spin without use of Grassmann variables and the Dirac equation]{}*]{}, [*Annals Phys.*]{} [ **327**]{} (2012) 398–406, \[[[ arXiv:1107.0273]{}](http://xxx.lanl.gov/abs/1107.0273)\]. S. Das and S. Mohanty, [*[Very Special Relativity is incompatible with Thomas precession]{}*]{}, [*Mod.Phys.Lett.*]{} [**A26**]{} (2011) 139–150, \[[[arXiv:0902.4549]{}](http://xxx.lanl.gov/abs/0902.4549)\]. A. Barducci, R. Casalbuoni and L. Lusanna, “Anticommuting Variables, Internal Degrees of Freedom, and the Wilson Loop,” Nucl. Phys. B [**180**]{}, 141 (1981).
--- abstract: 'In order to annihilate in the early Universe to levels well below the measured dark matter density, asymmetric dark matter must possess large couplings to the Standard Model. In this paper, we consider effective operators which allow asymmetric dark matter to annihilate into quarks. In addition to a bound from requiring sufficient annihilation, the energy scale of such operators can be constrained by limits from direct detection and monojet searches at colliders. We show that the allowed parameter space for these operators is highly constrained, leading to non-trivial requirements that any model of asymmetric dark matter must satisfy.' author: - 'Matthew R. Buckley$^{1}$' bibliography: - 'effectiveops.bib' title: Asymmetric Dark Matter and Effective Operators --- Despite decades of experimental effort, remarkably little is known about the nature of dark matter. For many years, the leading theoretical class of candidates for dark matter has been a Weakly Interacting Massive Particle (WIMP). The success of this paradigm is due in large part to the surprising fact that a thermal relic with a weak-scale mass and interaction strength will have the correct dark matter abundance. However, it should be noticed that in most phenomenologically viable models, some level of fine-tuning is necessary, weakening the motivations behind the ‘WIMP miracle’ (see, for example Ref. [@Feng:2009qf]). Recently, a proposal for an alternative origin of dark matter has gained in prominence: that of asymmetric dark matter (ADM) [@Kribs:2009fy; @Cohen:2009fz; @An:2009vq; @Cohen:2010kn; @Kaplan:2009ag; @Buckley:2010ui; @Davoudiasl:2010am; @Belyaev:2010vn; @Graesser:2011wi; @Haba:2010bm; @Shelton:2010ta; @Blennow:2010qf; @Frandsen:2011kx] (for earlier works along similar lines, see Refs. [@Agashe:2004bm; @Banks:2006xr; @Cosme:2005sb; @Farrar:2005zd; @Hooper:2004dc; @Kaplan:1991ah; @Kitano:2004sv; @Kitano:2008tk; @Suematsu:2005kp; @Thomas:1995ze; @Tytgat:2006wy]). 
In this class of models, the coincidence of energy densities of baryons and dark matter (which differ only by a factor of $\sim 6$) is taken as the driving motivation. This leads to the conclusion that dark matter, like baryons, should be composed of a particle $\chi$ with a quantum number $X$ which is conserved at low energies and generated through some $X$-violating process, rather than consisting of a thermal bath of $\chi/\bar{\chi}$ particles with the $X$ number of the Universe equal to zero. The similarity of the baryon and dark matter densities suggests that the $X$-violating process should somehow be connected to $B$ or $L$ number violating processes that must have occurred in early Universe baryogenesis. The larger density of dark matter can then be explained either through a dark matter mass $m_\chi$ of the order $4-10$ GeV (such models include darkogenesis [@Shelton:2010ta] and hylogenesis [@Davoudiasl:2010am]), or by a much heavier dark matter mass (weak scale or above) combined with a mass suppression during the era of $X-B$ transfer (Xogenesis [@Buckley:2010ui]). The large number of proposed ADM models differ wildly in their explanation of the origin of the $X$ asymmetry, the mechanism of transfer of asymmetry from the dark to the visible sectors, and the required mass $m_\chi$. However, there is one universal requirement that every model must meet: the thermal relic density of $\chi/\bar{\chi}$ (the symmetric component of dark matter) must make up only a small fraction of dark matter’s total contribution to the Universe’s energy budget.[^1] Since the contribution to the matter density of the symmetric component is much less than $\Omega_{\rm DM}$, the thermal cross section in the early Universe must be significantly larger than that usually assumed for a WIMP.
Thus, in ADM either there must be large couplings between the dark matter and some visible sector particles, or additional very light states in the dark sector into which the dark matter can annihilate without over-closing the Universe. In the former scenario, the required large interactions with the Standard Model may result in direct detection cross sections that can be probed by current experiments. In this paper, we consider effective operators between two dark matter particles and two quarks. The effective operator formalism allows us to remain agnostic as to the particle content at high energy scales, by considering only operators that respect Standard Model gauge invariance after electro-weak symmetry breaking and couple the dark matter directly to the Standard Model fields. In order to include the low-energy effects of any unknown high-mass particles, we add operators to the Lagrangian that are of dimension greater than four. Such operators must be suppressed by an energy scale $\Lambda$, which is roughly equivalent to the mass of the mediating particle over the coupling at the high scale. A familiar example of effective operators is the four-fermion interaction, which accounts for the weak interaction at scales much less than the mass of the $W$ and $Z$ bosons. In this case, the dimension six operator is suppressed by the Fermi constant, which in the language of this paper would be expressed as $G_F= \Lambda^{-2}$. In this particular example $\Lambda$ would be defined as $2^{5/4}m_W/g$, where $m_W$ is the $W$ boson mass, and $g$ is the weak coupling constant. Such effective operators preserve both $X$ and $B$, and so are not related to the origin of dark matter or baryons; however, they are necessary components for any successful ADM model.
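The four-fermion matching quoted above can be checked numerically: the tree-level relation $G_F/\sqrt{2} = g^2/8m_W^2$ gives $\Lambda = G_F^{-1/2} = 2^{5/4} m_W/g$. A quick sketch with approximate electroweak inputs (the numerical values are indicative, not precision fits):

```python
import math

# approximate tree-level electroweak inputs (illustrative values)
m_W = 80.4          # GeV
g = 0.653           # SU(2) gauge coupling, from g^2 = 4*sqrt(2)*G_F*m_W^2
G_F = 1.1664e-5     # GeV^-2, Fermi constant

# matching the four-fermion operator: Lambda = 2^(5/4) m_W / g
Lam = 2 ** 1.25 * m_W / g
print(f"Lambda = {Lam:.1f} GeV, Lambda^-2 = {Lam ** -2:.4e} GeV^-2")

# Lambda^-2 should reproduce G_F at tree level (to better than a percent here)
assert abs(Lam ** -2 - G_F) / G_F < 0.01
```

This gives $\Lambda \approx 290$ GeV, illustrating that the suppression scale of an effective operator tracks mediator mass over coupling rather than the mediator mass alone.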
Assuming that the symmetric component of dark matter makes up less than $10\%$ of the total $\Omega_{\rm DM}$, we place upper bounds on the suppression scale $\Lambda$ for each operator for both complex scalar and Dirac fermion dark matter.[^2] Comparison of the predicted direct detection cross section with the current experimental bounds can then be used to place lower bounds on $\Lambda$ for many of the operators. Monojet plus missing energy ($\slashed{E}_T$) searches at the Tevatron, which would arise from pair production of dark matter plus a jet (used for the event trigger), can also place lower limits on the suppression scale [@Bai:2010ys; @Goodman:2010ly; @Goodman:2010zr]. As we shall show, these bounds place severe restrictions on the allowed range of the scale $\Lambda$; in fact, they completely exclude the entire parameter space for several classes of operators. From this, we can greatly constrain the possible interactions for any asymmetric dark matter model. In using the effective operator formalism, this paper has similarities to the work of Refs. [@Bai:2010ys; @Goodman:2010ly; @Goodman:2010zr; @Fox:2011tg], which consider the bounds on effective operators for symmetric dark matter. As we will show, the application of these bounds to the asymmetric dark matter leads to some very interesting conclusions: namely that (outside some tightly constrained regions of parameter space) a successful model of asymmetric dark matter must contain new light states, leptophilic couplings, or new confining gauge interactions. These conclusions should be taken into account when considering motivations for asymmetric dark matter model-building. There are, of course, two major assumptions underlying this approach which deserve to be stressed at this point. First, that the same operator that over-annihilates dark matter in the early Universe is active today, and second that the annihilation operator allows for couplings of dark matter to quarks.
The latter assumption allows the operators to be bounded by results from direct detection and hadronic collider experiments, though leptophilic dark matter can be probed by LEP searches [@Fox:2011tg] instead. Dark matter which annihilates into some new light state of the dark sector is much more difficult to probe, though by the assumption of ADM such states must be light enough not to dominate the matter density, while also evading BBN constraints on relativistic degrees of freedom. The assumption that a single operator is responsible for both direct detection and over-annihilation is primarily made for simplicity: the derived bounds on operators would not apply to scenarios with (for example) composite dark matter [@Nussinov:1985xr; @Chivukula:1989qb; @Bagnasco:vn; @Khlopov:2008ly; @Frandsen:2011kx; @Gudnason:zr] or dark atoms [@Kaplan:2009kx]. In both cases the present-day direct detection cross sections are suppressed by form factors (though the collider bounds would be unaffected). These assumptions are fairly strong, but – as will be demonstrated – the operators considered in this paper are highly constrained. Therefore, should future direct detection and collider bounds completely rule out the operator parameter space, we can conclude that the dark sector in ADM models is either leptophilic, composite, or contains some additional light states into which the dark matter can annihilate (but which do not contribute greatly to the present-day energy density). In this paper, we consider eight possible effective operators linking dark matter with quarks through a weakly coupled UV completion. We ignore some possible additional operators which contain mixed axial/vector or pseudoscalar/scalar interactions ([*e.g.*]{} we consider $\bar{\chi}_F\gamma^5 \chi_F \bar{q} \gamma^5 q$ but not $\bar{\chi}_F \chi_F \bar{q} \gamma^5 q$), as the derived bounds are very similar to the ones placed on the operators written below. 
The operators of interest for complex scalar dark matter (denoted $\chi_S$) are $$\begin{aligned} {\cal L}_{S,S} & = & \frac{m_q}{\Lambda^2} \chi_S^* \chi_S \bar{q}q \label{eq:lagSS} \\ {\cal L}_{S,P} & = & \frac{im_q}{\Lambda^2} \chi_S^* \chi_S \bar{q}\gamma^5 q \label{eq:lagSP} \\ {\cal L}_{S,V} & = & \frac{1}{\Lambda^2} \chi_S^*\partial_\mu \chi_S \bar{q} \gamma^\mu q. \label{eq:lagSV}\end{aligned}$$ Dark matter composed of Dirac fermions is denoted $\chi_F$, and the effective operators under consideration are: $$\begin{aligned} {\cal L}_{F,S} & = & \frac{m_q}{\Lambda^3} \bar{\chi}_F \chi_F \bar{q} q \label{eq:lagFS} \\ {\cal L}_{F,P} & = & \frac{m_q}{\Lambda^3} \bar{\chi}_F \gamma^5 \chi_F \bar{q} \gamma^5 q \label{eq:lagFP} \\ {\cal L}_{F,V} & = & \frac{1}{\Lambda^2} \bar{\chi}_F \gamma^\mu \chi_F \bar{q} \gamma_\mu q \label{eq:lagFV} \\ {\cal L}_{F,A} & = & \frac{1}{\Lambda^2} \bar{\chi}_F \gamma^5 \gamma^\mu \chi_F \bar{q} \gamma^5 \gamma_\mu q \label{eq:lagFA} \\ {\cal L}_{F,T} & = & \frac{1}{\Lambda^2} \bar{\chi}_F\sigma^{\mu\nu}\chi_F \bar{q}\sigma_{\mu\nu}q \label{eq:lagFT}\end{aligned}$$ The second subscript ($S$, $P$, $V$, $A$, or $T$) refers to scalar, pseudoscalar, vector, axial-vector, and tensor interactions respectively, while the first ($S$ or $F$) refers to the spin of the dark matter (scalar or fermion). We have assumed that the coupling to quarks is flavor-blind, and so Eqs. - should be thought of as including an implicit sum over all six quark flavors. We shall comment later on the implications of relaxing this constraint. Annihilation in the early Universe can proceed through either $s$- or $p$-wave processes (or some combination thereof). The latter case is velocity suppressed, while the former contains terms that are independent of $v$. The interactions in Eqs.  and are exclusively $p$-wave. For each operator, we can calculate the cross section times velocity, expanding out to second order in $v$ (see Refs. 
[@Beltran:2008xg; @Fitzpatrick:2010uq] for details): $$\begin{aligned} (\sigma |v|)_{S,S} & = & \frac{3}{8\pi\Lambda^4} \sum_q m_q^2 \left(1-\frac{m_q^2}{m_\chi^2}\right)^{3/2} \label{eq:sigmavSS} \\ (\sigma |v|)_{S,P} & = & \frac{3}{8\pi\Lambda^4} \sum_q m_q^2 \sqrt{1-\frac{m^2_q}{m_\chi^2}}\label{eq:sigmavSP} \\ (\sigma |v|)_{S,V} & = & \frac{3m_\chi^2}{6\pi\Lambda^4} \sum_q \sqrt{1-\frac{m^2_q}{m_\chi^2}} \left(2+\frac{m_q^2}{m_\chi^2}\right)v^2 \label{eq:sigmavSV}\end{aligned}$$ $$\begin{aligned} (\sigma |v|)_{F,S} & = & \frac{3m_\chi^2}{8\pi\Lambda^6} \sum_q m_q^2\left(1-\frac{m_q^2}{m_\chi^2}\right)^{3/2} v^2 \label{eq:sigmavFS} \\ (\sigma |v|)_{F,P} & = & \frac{3m_\chi^2}{2\pi\Lambda^6} \sum_q m_q^2 \sqrt{1-\frac{m^2_q}{m_\chi^2}} \times\label{eq:sigmavFP} \\ & & \left[1+\left(\frac{2m_\chi^2-m_q^2}{8(m_\chi^2-m_q^2)}\right) v^2 \right] \nonumber \\ (\sigma |v|)_{F,V} & = & \frac{3m_\chi^2}{2\pi\Lambda^4} \sum_q \sqrt{1-\frac{m^2_q}{m_\chi^2}} \times \label{eq:sigmavFV} \\ & & \left[ \left(2+\frac{m_q^2}{m_\chi^2}\right)+\left(\frac{8m_\chi^4-4m_q^2m_\chi^2+5m_q^4}{24m_\chi^2(m_\chi^2-m_q^2)}\right)v^2 \right] \nonumber\\ (\sigma |v|)_{F,A} & = & \frac{3m_\chi^2}{2\pi\Lambda^4} \sum_q \sqrt{1-\frac{m^2_q}{m_\chi^2}} \times \nonumber \\ & & \left[\frac{m_q^2}{m_\chi^2}+\left(\frac{8m_\chi^4-22m_q^2m_\chi^2+17m_q^4}{24m_\chi^2(m_\chi^2-m_q^2)}\right)v^2 \right]\label{eq:sigmavFA} \\ (\sigma |v|)_{F,T} & = & \frac{3m_\chi^2}{2\pi\Lambda^4} \sum_q \sqrt{1-\frac{m^2_q}{m_\chi^2}} \times\left[16 \left(1+ \frac{m_q^2}{m_\chi^2}\right)\right. \nonumber \\ & & \left.+\frac{2}{3}\left(4+\frac{7m_q^2(m_\chi^2+16m_q^2)}{m_\chi^2(m_\chi^2-m_q^2)}\right)v^2 \right]. \label{eq:sigmavFT}\end{aligned}$$ Effective operators involving leptons rather than quarks would give similar results for $\sigma|v|$, divided by an overall factor of $3$ to account for the quark color. 
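As a minimal numerical sketch of how these flavor sums behave (the quark masses below are rough illustrative values, and the function name is ours rather than the paper's), the $s$-wave scalar-scalar cross section above can be evaluated by summing over the kinematically open flavors:

```python
import math

# Sketch: s-wave piece of the scalar-scalar annihilation cross section,
# summing over quark flavors with m_q < m_chi.  Quark masses in GeV are
# rough illustrative values.
M_Q = [0.002, 0.005, 0.095, 1.27, 4.18, 173.0]  # u, d, s, c, b, t

def sigma_v_SS(m_chi, lam):
    """s-wave (sigma|v|)_{S,S} in GeV^-2 for dark matter mass m_chi, scale lam."""
    total = 0.0
    for m_q in M_Q:
        if m_q < m_chi:  # only kinematically open channels contribute
            total += m_q**2 * (1.0 - m_q**2 / m_chi**2)**1.5
    return 3.0 / (8.0 * math.pi * lam**4) * total

# the b quark dominates for a ~10 GeV candidate, and the rate scales as 1/Lambda^4
print(sigma_v_SS(10.0, 100.0))
```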
Defining $\sigma |v| \equiv a+bv^2 + {\cal O}(v^3)$, the relic abundance of the symmetric dark matter component after thermal freeze-out is $$\Omega_{\rm DM} h^2 \approx \frac{(1.04 \times 10^9 ~\mbox{GeV}) x_f}{M_{\rm Pl} \sqrt{g_*} (a+3 b/x_f)}.$$ Here, $M_{\rm Pl}$ is the reduced Planck mass, $x_f$ is the ratio of dark matter mass to temperature at freeze-out (detailed calculation shows that $x_f \sim 20-30$ [@Kolb:1990vq]), and $g_*$ is the effective number of relativistic degrees of freedom at the time of freeze-out. Requiring that the symmetric dark matter contributes less than $10\%$ of the total, we can place an upper bound on the scale $\Lambda$ of the higher-dimensional operators. This choice is somewhat arbitrary, but without significant dilution of the symmetric dark matter relative to the asymmetric component, there would be little hope of experimentally differentiating the two (and indeed, little reason to refer to the model as “asymmetric”). The resulting constraints on $\Lambda$ as a function of $m_\chi$ are shown in Fig. \[fig:lambdabounds\], along with the limits for a thermal WIMP ([*i.e.*]{} dark matter whose symmetric component makes up $100\%$ of the dark matter in the Universe). These latter limits are equivalent to those of Ref. [@Beltran:2008xg]. Even before considering bounds on the couplings from direct detection, we can already place significant constraints on the scale $\Lambda$ by requiring that the effective operators arise from a weakly coupled UV completion. In that case, we require that any exchanged particle must have a mass greater than $2m_\chi$. With the additional requirement of perturbative couplings, we find that $m_\chi < 2\pi \Lambda$. As can be seen in Fig. \[fig:lambdabounds\], this requirement severely limits the range of $\Lambda$ and $m_\chi$ that can provide sufficient annihilation for many of the operators, and effectively places an upper bound on the mass of dark matter in these scenarios. 
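The freeze-out estimate above can be turned into a one-line numerical sketch (the values chosen for $x_f$ and $g_*$ are typical assumed inputs; this illustrates the scaling, not a fit):

```python
import math

# Sketch of Omega h^2 ~ 1.04e9 GeV * x_f / (M_Pl sqrt(g_*) (a + 3b/x_f))
# for sigma|v| = a + b v^2.  x_f and g_* are assumed typical values.
M_PL = 2.435e18   # reduced Planck mass in GeV, as quoted in the text
X_F = 25.0        # m_chi / T at freeze-out
G_STAR = 86.25    # effective degrees of freedom at freeze-out

def omega_h2(a, b):
    """Symmetric relic abundance for annihilation coefficients a, b in GeV^-2."""
    return 1.04e9 * X_F / (M_PL * math.sqrt(G_STAR) * (a + 3.0 * b / X_F))

# doubling the s-wave cross section halves the symmetric relic density,
# which is how the 10% requirement translates into an upper bound on Lambda
print(omega_h2(2.0e-9, 0.0) / omega_h2(4.0e-9, 0.0))
```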
However, this bound is somewhat porous, as the factor of $2\pi$ is not a hard limit. Other ${\cal O}(1)$ factors may reasonably be adopted; however, this does not qualitatively change our conclusions. While one may certainly imagine non-perturbatively coupled dark matter scenarios ([*e.g.*]{} technicolor or composite dark matter [@Nussinov:1985xr; @Chivukula:1989qb; @Bagnasco:vn; @Khlopov:2008ly; @Frandsen:2011kx; @Gudnason:zr]), in those cases it is not possible to calculate the relevant cross sections, and so to make quantitative predictions we must insist that $\Lambda \gtrsim m_\chi /2 \pi$. In any event, a strongly coupled theory would contain additional states, which, as we have noted, are a possible method of evading the bounds derived in this paper. We next consider the constraints on $\Lambda$ from direct detection. For each operator in Eqs. -, we calculate the resulting spin-dependent or spin-independent elastic scattering cross section as a function of dark matter mass and scale $\Lambda$ [@Beltran:2008xg; @Fitzpatrick:2010uq; @G.Belanger:2008fk]. Comparison to the experimental upper limits on the nucleon-DM scattering cross section $\sigma_{\chi N}$ allows us to place lower limits on $\Lambda$ as a function of $m_\chi$. Note that for $m_\chi \lesssim 5$ GeV, no bounds are set by the current experiments. The strength of the direct detection bounds depends greatly on whether the dark matter interacts with nucleons via spin-dependent or spin-independent interactions. Of the effective operators of interest in this paper, the scalar and vector interactions (Eqs. , , , and ) induce spin-independent scattering, while the fermionic axial and tensor interactions (Eqs.  and ) result in spin-dependent scattering [@Jungman:1995df]. Note that the pseudoscalar interactions (Eqs.  and ) do not lead to either spin-dependent or spin-independent couplings that are velocity independent. 
We include the derived bounds from the resulting spin-dependent direct detection cross section [@Bai:2010ys; @Cheng:1988im], which is proportional to powers of the momentum transfer $q = \sqrt{2m_\chi E_R}$ (here $E_R$ is the energy of the recoiling nucleon; we assume $E_R \sim 50$ keV). As can be seen in Fig. \[fig:lambdabounds\], the resulting bounds on $\Lambda$ from these $q$-dependent interactions are extremely weak; in fact, they require a mediator mass typically less than the mass of the dark matter. The spin-independent constraints are taken from CDMS [@Kamaev:2009vn; @Collaboration:2010nx], CoGeNT [@Aalseth:2008rx], CRESST [@Altmann:fk], XENON-10 [@Angle:2007uj], and XENON-100 [@Aprile:2010um]. For dark matter with mass between $\sim 5-80$ GeV, XENON-10 and XENON-100 provide the best limits at the present time. Above this range, CDMS has the best constraint. For very low masses, near $\sim 1$ GeV, the CRESST detector has the most stringent bounds. In the intermediate region, CoGeNT and the CDMS low-threshold analysis [@Collaboration:2010nx] dominate. Spin-dependent constraints are a combination of COUPP [@Behnke:2008kl], CRESST [@Altmann:fk], and PICASSO [@Archambault:2009oq], the last of these providing the best limits between $\sim 5-100$ GeV, while CRESST dominates in the very low mass window. The underlying assumption should again be noted: we are requiring that the same operators responsible for the thermal relic abundance will be responsible for any direct detection interaction. If, for example, the dark matter had large couplings to leptons ([*i.e.*]{} small $\Lambda$ for $\bar{\chi}\chi \ell \bar{\ell}$-type operators), this could provide sufficient suppression of the symmetric component, while a separate operator would be responsible for direct detection. We will have more to say on this in the conclusion. 
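To see why the $q$-dependent pseudoscalar bounds are so weak, it helps to note how small the momentum transfer is at direct detection energies. A back-of-envelope sketch, using the $q = \sqrt{2 m_\chi E_R}$ expression quoted above with illustrative numbers:

```python
import math

def recoil_momentum(m, e_r):
    """Momentum transfer q = sqrt(2 m E_R); inputs and output in GeV."""
    return math.sqrt(2.0 * m * e_r)

# for E_R ~ 50 keV and a mass of order 10 GeV, q is only ~30 MeV, so any
# cross section proportional to powers of q is strongly suppressed
q = recoil_momentum(10.0, 50.0e-6)
print(q)   # ~0.032 GeV
```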
![image](./SS1.pdf){width="0.8\columnwidth"}![image](./SP2.pdf){width="0.8\columnwidth"} ![image](./SV1.pdf){width="0.8\columnwidth"}![image](./FS2.pdf){width="0.8\columnwidth"} ![image](./FP2.pdf){width="0.8\columnwidth"}![image](./FV2.pdf){width="0.8\columnwidth"} ![image](./FA2.pdf){width="0.8\columnwidth"}![image](./FT2.pdf){width="0.8\columnwidth"} Finally, we can consider constraints on the effective operators from searches at the Tevatron [@Bai:2010ys; @Goodman:2010ly; @Goodman:2010zr]. We use the bounds from Ref. [@Goodman:2010zr], as all the operators in Eqs. - are considered there. For each operator, the resulting cross section for dark matter pair production plus an extra jet was compared to the searches performed in the monojet $+\slashed{E}_T$ channel at CDF [@CDFmonojet]. As the experimental data are in agreement with the Standard Model, lower bounds are placed on $\Lambda$ for each operator. While the range of dark matter masses that can be probed by this method is limited, these constraints do provide a bound which is in several cases complementary to that provided by direct detection. We show the remaining allowed regions in Fig. \[fig:lambdabounds\] in grey. In total, we see that the four requirements (over-annihilation, direct detection, Tevatron monojets, and consistency of the effective operator expansion) completely exclude four classes of asymmetric dark matter over the entire mass range. It is interesting to note that this is also true for many effective operator expansions for WIMP thermal dark matter, as has been noted before [@Beltran:2008xg], and can be seen in Fig. \[fig:lambdabounds\] by considering the $\Omega_{\rm DM}h^2 =0.1$ line for $\Lambda$. Several windows in $\Lambda$ vs. $m_\chi$ space remain open for pseudoscalar, axial, and tensor operators. Many of these are in the low mass ($\sim5-10$ GeV) region, which is especially intriguing in the context of ADM. 
Monojet searches at the LHC with 100 fb$^{-1}$ of data may reasonably be expected to yield a factor of ${\cal O}(10)$ improvement over the Tevatron bounds and extend them to higher $m_\chi$ [@Goodman:2010zr]. This would allow most of the remaining pseudoscalar and axial windows to be closed, and would greatly reduce the allowed parameters for the tensor case. It is clear, then, that asymmetric dark matter coupled to quarks via effective operators is highly constrained by the data. We now briefly mention the impact of generation- or flavor-dependent couplings on the bounds for $\Lambda$. Clearly, we cannot perform an exhaustive analysis, as there is a limitless set of possible flavor dependencies that can be added to the effective operators. In general, coupling to fewer generations will make the upper bounds coming from the over-annihilation constraint much more restrictive. As there are fewer Standard Model particles involved in the annihilation process, each particle which does interact must do so more efficiently. Thus, $\Lambda$ must be lower. Combined with the $\Lambda > m_\chi/2\pi$ bound, this can completely exclude scalar and pseudoscalar mediators, for example if the dark matter couples only to $u$ and $d$ quarks. The changes to the direct detection bounds in a flavor-dependent scenario are more complicated. Vector mediators depend only on the couplings to $u$ and $d$ quarks, so restricting couplings to these two flavors will not change the bounds, while coupling only to the heavier generations will completely eliminate them. Scalar, axial, and tensor interactions depend most heavily on the $u$, $d$, and $s$ couplings; eliminating these will loosen the bounds considerably. Flavor-dependent constraints from monojets are investigated in Ref. [@Bai:2010ys]. Couplings to $u$ quarks are the most constrained, followed by $d$ and $s$, as expected for results from a proton-antiproton collider. 
Reducing the number of quarks that couple to dark matter allows each coupling to be larger, thus setting a less restrictive lower limit on $\Lambda$, as can be seen by comparing the results of Ref. [@Bai:2010ys] and Ref. [@Goodman:2010zr]. Returning to the general case, even when certain operators are completely excluded, many possibilities clearly exist which would allow us to escape the conclusions of the analysis presented above. For example, the annihilation might not proceed through dark matter-quark interactions, or the assumption that the annihilation proceeds through an effective operator could be incorrect. In both cases, this communicates valuable information about the structure of any asymmetric dark matter model. Let us consider each in turn. If asymmetric dark matter avoids the constraints derived in this paper by annihilating primarily into some other light field, then one possible explanation is that the fundamental field which was integrated out in the effective operator is leptophilic. This is an intriguing possibility in light of the models of leptophilic dark matter (see, for example, Refs. [@Arkani-Hamed:2008vn; @Cholis:2008kx; @Cholis:2008uq; @Fox:2008fk; @Essig:2009ys; @Kohri:2009ys]) which attempt to explain anomalies in the PAMELA positron fraction [@Adriani:2008bh] and the Fermi Gamma-Ray Space Telescope $e^++e^-$ spectrum [@Collaboration:2009dq]. Effective operators involving leptons would not be greatly constrained by direct detection; however, the over-annihilation requirement would remain, as would the $\Lambda > m_\chi/2\pi$ constraint. The monojet search could be replaced by a monophoton search at LEP [@Fox:2011tg], though the mass range would be limited. Alternatively, the dark matter could annihilate efficiently into some new dark state that is either very light or unstable, decaying into Standard Model particles before Big Bang Nucleosynthesis (BBN) (see for example Ref. [@Hall:2010uq]). 
In the former case, CMB and BBN constraints on the number of relativistic species (usually stated in terms of the number of neutrino flavors) must be avoided. This could be achieved through significant entropy injection into the thermal bath after dark matter annihilation decouples [@Ackerman:2008zr]. In any event, this possibility requires an extended dark sector in addition to the dark matter and the high-scale mediator. Finally, the results of this paper could be interpreted to mean that the annihilation and scattering of asymmetric dark matter cannot be written in terms of an effective field theory. This may mean that the coupling to the mediator is non-perturbative, as in the case of quirky [@Kribs:2009ve] or composite [@Nussinov:1985xr; @Chivukula:1989qb; @Bagnasco:vn; @Khlopov:2008ly; @Frandsen:2011kx; @Gudnason:zr] asymmetric dark matter. Alternatively, the mediator mass could simply be lower than the cutoff of $m_\chi/2\pi$. This is an interesting possibility, because it requires that ADM not be “maverick” [@Beltran:2010cr]. That is, additional light states would be required in order to satisfy all the experimental constraints. Should this possibility be borne out, it would again be very interesting in the context of the light mediator solutions [@Arkani-Hamed:2008vn; @Bjorken:2009mm; @Cholis:2008qq; @Cholis:2008vb; @Cholis:2008wq; @Hisano:2003ec; @Pospelov:2008jd] of the PAMELA and Fermi anomalies. In this paper, we have investigated effective operators suppressed by a scale $\Lambda$ connecting dark matter to quarks in the context of asymmetric dark matter models. The large couplings (compared to those of a WIMP model) required by the over-annihilation of ADM in the early Universe, combined with experimental constraints from direct detection and Tevatron searches, greatly constrain the parameter space of $\Lambda$ as a function of dark matter mass $m_\chi$. 
Only relatively narrow windows – including several at low mass, where many ADM models prefer the dark matter to be – remain for most of the operators, and future LHC data can be expected to close many of these. As efficient thermal annihilation is the one universal requirement of ADM, we consider it useful to clearly set forth the bounds which any such model must satisfy. Most currently existing models of ADM evade these constraints through various methods. In this paper, we have outlined several general techniques for doing so:

- annihilation into leptons,

- annihilation into additional dark states which are very light or unstable,

- low-mass mediators that cannot be written as effective operators,

- new confining gauge groups in the dark sector.

It is interesting to consider the implications of these possibilities. Outside a narrow range of parameters, the experimental constraints seem to push asymmetric dark matter into scenarios which are either leptophilic or contain additional light states. Acknowledgements {#acknowledgements .unnumbered} ================ The author thanks Scott Dodelson, Graham Kribs, Roni Harnik, Dan Hooper, Patrick Fox, Hugh Lippincott, Ethan Neil, Will Shepherd, Ian Shoemaker, and Tim Tait for their advice and suggestions. [^1]: While it is certainly possible for both symmetric and asymmetric components to contribute significantly, this requires multiple coincidences in the operators responsible for both transfer and annihilation. Such a model may be found in Ref. [@Graesser:2011wi]. While in this paper we shall not consider this possibility in more depth, we include results applicable to pure symmetric dark matter, allowing the reader to interpolate the results for a mixed scenario. [^2]: Majorana fermions and real scalars possess no conserved global current $X$, and so are not good candidates for ADM. 
Small Majorana masses – leading to $\chi-\bar{\chi}$ oscillations on cosmological timescales – are not ruled out in ADM models, but can be ignored for the purpose of this paper.
--- author: - 'Kasper Peeters,' - Maciej Matuszewski - and Marija Zamaklar title: Holographic meson decays via worldsheet instantons --- Introduction ============ The holographic approach offers a framework to address some of the most challenging questions in strongly coupled gauge theories in a (semi-)analytic way. While most of the work in the holographic approach has taken place in the context of supersymmetric theories, the expectation is that similar methods can be applied to the study of strongly coupled phenomena in QCD. At the moment the geometry which is dual to QCD is not yet known. However, there are proposals for dual geometries which capture various qualitative features of QCD. One of the most successful dual models is the Sakai-Sugimoto model [@Sakai:2005yt], which is special in the sense that it incorporates chiral symmetry breaking in the dual description. In this paper we use the Sakai-Sugimoto model to compute probabilities for decays of mesonic particles via the breaking of flux tubes. As this process is a strongly coupled phenomenon, its computation in QCD is not easily performed. Yet, knowing the probability for a flux tube to break is crucial for understanding both the decay widths of mesons and the hadronisation phase in high-energy scattering processes. A long time ago, a very successful phenomenological model, the Lund fragmentation model [@Sjostrand:1982fn; @Andersson:1983ia], was developed in order to model hadronisation in event generators for high-energy collisions. In this model, mesons are modelled by two (massive) particles which are connected by a relativistic string, which models the QCD flux tube. In a high-energy collision, the pair-produced quark and antiquark move away from each other, with the colour string stretching between them. As the string becomes longer, it eventually snaps, producing a new quark-antiquark pair, and so on, leading to a shower of mesonic particles. 
The probability for a string to break at a particular point was “derived” by Casher, Neuberger and Nussinov (CNN) in the early days of QCD [@Casher:1978wy]. The formula was written down by making an analogy with electromagnetism: the electric field in the Schwinger formula was replaced with an (abelianised) chromoelectric field, and quarks were treated as free charged particles which are minimally coupled to this field. While this model agrees qualitatively with experimental data, it contains several free parameters which need to be fixed by comparison with experiment. A holographic approach may potentially shed some light on the origin of these free parameters.[^1] The probability for a QCD string to break by producing a quark-antiquark pair is also relevant when computing the lifetime of mesons. In the Sakai-Sugimoto model, large-spin mesons are modelled by macroscopic, rotating, U-shaped strings whose endpoints are stabilised against collapse by the centrifugal force and are constrained to “move” on probe D8-branes, see figure \[backgroundprobe\]. The probability for such a string to split can be computed in two ways. In our previous work [@Peeters:2005fq; @Sonnenschein:2017ylo] we used a string bit model, in which we computed the probability for the string to fluctuate in the holographic direction and hit the probe brane. As it hits the probe, the string can split with some probability. The resulting decay width $\Gamma$ was found to exhibit exponential suppression in the masses of the pair-produced quarks and linear dependence on the effective length of the QCD string flux tube. Wave-function based approaches as in [@Peeters:2005fq] are, however, numerically hard to handle in the continuum limit, where the number of string beads is large. In addition, the computations of string fluctuations in [@Peeters:2005fq] were for computational reasons restricted to the near-wall region, where the background metric is linearised around flat space. 
In order to improve on these points, we initiate in the present paper an alternative, instanton approach to the study of holographic breaking of the QCD flux tube. That is, we will construct a string worldsheet instanton which interpolates between the unsplit and split U-shaped mesonic strings. As in [@Peeters:2005fq], we consider a simplified system, which is represented by a hanging U-shaped string, which does *not* rotate but is prevented from collapse by a Dirichlet boundary condition. Such a system is similar to the strings used in the original Lund model, which was initially also applied to non-rotating systems. The instanton configuration has the geometry of a cylindrical surface, with circular boundaries which are concentric in the field theory directions, and separated from each other in the holographic direction of the dual geometry. A generic instanton configuration would take into account the backreaction of the produced quarks on the flux tube, through the bending of the flux tube in the holographic direction. Our instanton describes the decay of a finite-size flux tube and finite-volume mesonic particle in which the endpoint quarks accelerate from each other. The CNN formula does not take into account backreaction of the pair-produced quarks and it also deals with an infinitely long flux tube. In the large volume limit, the QCD flux tube is much longer than the radius at which the quarks are pair-produced and one expects that the dynamics of the external quarks is decoupled from the string breaking process. The probability is then fully determined by the property of the tube and does not depend on the quarks in the original meson. 
In order to compare our findings with the results of CNN, we have therefore also investigated the large-volume limit of our result, in which we indeed reproduce the simple exponential suppression of the decay probability with the square of the quark mass, $\exp(-m_q^2/T)$, where $T$ is the tension of the string [@Casher:1978wy]. Our paper is organised as follows. In section \[s:flat\_breaking\] we first review the key features of the worldline derivation of the Schwinger pair production formula [@Affleck:1981bma] and its generalisation to QCD [@Casher:1978wy]. In section \[s:flat\_instanton\] we consider, as a warm-up exercise, the old string model with massive endpoints in flat space, and construct the instanton configuration which reproduces results of [@Casher:1978wy]. In section \[s:ss\_instanton\] we construct a similar instanton configuration in the Sakai-Sugimoto model and compute from it the probability for meson decay. Our main findings and open questions are discussed in the last section. QCD string breaking à la Schwinger in flat spacetime {#s:flat_breaking} ==================================================== It has been known for a long time that the presence of an external electric field leads to the production of electrically charged particle-antiparticle pairs [@Schwinger:1951nm]. While the original computation of Schwinger was done by perturbatively summing a class of one-loop diagrams in quantum field theory, the same result was later rederived in the worldline approach, by construction of a worldline instanton [@Affleck:1981bma]. The same worldline instanton approach was also used to describe the production of monopole/anti-monopole pairs in an external magnetic field [@Affleck:1981ag]. In this section we will briefly review the basic derivation of the Schwinger result for a production of particle-antiparticle pairs in an external electric field, using the worldline instanton approach [@Affleck:1981bma]. 
We then review an application of this formula to the pair production of quark-antiquark pairs inside the QCD flux tube following the seminal work of Casher et al. [@Casher:1978wy]. Assume that a non-vanishing electric field $E$ is turned on in the $X^1$ direction. In order to construct the worldline instanton describing the production of a particle-antiparticle pair of masses $m$ and charge $q$, one needs to consider the Wick rotated system obtained by $\tau \rightarrow -i \tau$, $A_0 \rightarrow -i A_0$ and solving the *classical* equations of motion of a particle in the Euclidean background with $F_{01}=-iE$. The action of the particle is given by $$\label{actionpp} S_{E} = \int\!{\rm d}\tau\, \bigg( m \sqrt{ \dot{X}^\nu \dot{X}_\nu} - i q A_{\nu} \dot{X}^\nu \bigg)\,.$$ It is not hard to see that the solution for the particle worldline is given by $$\label{instantonpp} X^0(\tau) = R \cos(2 \pi n \tau) \, , \quad X^1(\tau) = R \sin(2 \pi n \tau) \, , \quad X^2 =0 \, , \quad X^3 = 0 \,,$$ where $X^0$ is the Wick rotated target space time direction and $R$ is fixed in terms of $E$ by the equation of motion, see below. We see that the worldline instanton looks like a loop of radius $R$. The parameter $n$ labels different instantons, and describes how many times the particle “winds” around the loop. As usual, the particle propagating “backwards” in the (Euclidean) time $X^0$ is interpreted as an antiparticle. Hence, the left-hand side of the loop can be interpreted as the worldline of the antiparticle, while the right-hand side as the worldline of the particle, see figure \[instantonoriginal\]. Substituting the solution  into the particle action  and integrating over the worldline gives $$\label{e:worldlineaction} S_{\text{class}} = 2 \pi n R m - \pi n q E R^2 \, .$$ The extrema of the action will give classical solutions, and one finds that the radius of the loop is fixed to be $R=m/(qE)$ for which the action reduces to $S_{\text{class}}= \pi \frac{m^2}{qE} n$. 
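The extremisation can be checked with a small numerical sketch (the values of $m$, $q$ and $E$ below are arbitrary illustrative choices): the action $S(R)=2\pi n R m - \pi n q E R^2$ is stationary at $R=m/(qE)$, where it equals $\pi n m^2/(qE)$.

```python
import math

# Sketch: numerically verify that S(R) = 2*pi*n*R*m - pi*n*q*E*R^2 is
# stationary at R = m/(qE) with value pi*n*m^2/(qE).  Parameter values
# are arbitrary illustrative choices.
m, q, E, n = 1.0, 0.5, 0.2, 1

def action(R):
    return 2.0 * math.pi * n * R * m - math.pi * n * q * E * R**2

R_star = m / (q * E)                     # extremal loop radius
S_star = math.pi * n * m**2 / (q * E)    # extremal action

# central finite difference for dS/dR at the extremum
h = 1e-6
dS = (action(R_star + h) - action(R_star - h)) / (2.0 * h)
print(abs(dS) < 1e-6, math.isclose(action(R_star), S_star))
```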
![\[instantonoriginal\] Worldline instanton for particle-antiparticle pair production in an external electric field $E$.](FinalFigure1){width="40.00000%"} So the full loop  describes a particle-antiparticle pair which is produced at the Euclidean time $X^0=-R$, after which the particles move away from each other. Once the particle and antiparticle go on-shell, i.e. once they reach a distance $2m/(qE)$, one can analytically continue the solution  back to Lorentzian time. The Lorentzian solution describes a pair of particles accelerating away from each other with proper acceleration $qE/m$. Exponentiation of the Euclidean action $S_{\text{class}}$ with winding $n=1$ gives the dominant contribution to the probability of particle production in the saddle point approximation. Looking at the fluctuations around this classical path , and summing over their contributions in the path integral [@Affleck:1981bma], produces a pre-factor to the exponent $e^{-S_{\text{class}}}$, and one obtains the celebrated Schwinger formula for the probability of particle production. The probability for the pair production of particles of spin half and charge $q$, per unit volume and per unit time, is given by [@Schwinger:1951nm] $$\label{exponent} P_{\text{pp}} = \frac{E^2}{8 \pi^3} \sum_{n=1}^\infty \frac{1}{n^2} e^{-\frac{\pi m^2}{q |E|} n } \, .$$ A long time ago, Casher et al. [@Casher:1978wy] argued that the Schwinger pair production formula  can be directly applied to QCD in order to derive a formula for the decay of mesons. In their set-up, Casher et al. assumed that at the hadronic energy scale of $1$ GeV the quarks inside mesons can be treated as Dirac particles with constituent masses $m$ and charge $q$. 
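A short numerical sketch of the series (with illustrative parameter values) shows that the $n=1$ winding dominates whenever $m^2 \gtrsim qE$, which is why only the leading exponential is usually kept:

```python
import math

# Sketch: evaluate the Schwinger series P = E^2/(8 pi^3) sum_n exp(-x n)/n^2,
# x = pi m^2/(q|E|), and compare it to its n = 1 term.  Parameter values
# are illustrative.
def schwinger_rate(m, q, E, nmax=200):
    """Pair-production probability per unit volume and time."""
    x = math.pi * m**2 / (q * abs(E))
    return E**2 / (8.0 * math.pi**3) * sum(
        math.exp(-x * n) / n**2 for n in range(1, nmax + 1))

P = schwinger_rate(m=1.0, q=1.0, E=0.5)
n1_term = 0.5**2 / (8.0 * math.pi**3) * math.exp(-2.0 * math.pi)
ratio = P / n1_term
print(ratio)   # just above 1: higher windings are exponentially suppressed
```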
They also assumed that at timescales which are short compared to the hadronic timescale, mesons can be modelled as chromo-electric flux tubes (“thick strings”) of universal thickness, such that the chromo-electric field can be treated as a classical, constant, longitudinal *abelian* field. Hence the process of meson decay can be seen as Schwinger pair production of a quark-antiquark pair by the (abelianised) QCD field. The flux tube is parametrised by the radius $r_t$, the “abelianised” QCD field strength $\mathcal{E}_t$ and the gauge coupling $g$, which is related to the charge of the quarks $q$ as $q=g/2$. It has been argued that the reason for the factor $1/2$ between $g$ and $q$ is that quarks couple to the gauge field through the $SU(3)$ generators $\lambda^a/2$. The energy per unit length stored in the tube is the *effective* tube (string) tension and is given by $$\gamma_{\text{QCD}} = \frac{1}{2 \pi \alpha'}= \frac{1}{2}\mathcal{E}_t^2 \pi r_t^2 \, .$$ Numerically, $\gamma_{\text{QCD}} \sim 0.177\, (\text{GeV})^2$ and the radius of the tube is $r_t \sim 2.5\, \text{GeV}^{-1}$. On the other hand, using the (abelian) Gauss law and the fact that the flux lines are non-vanishing only between the quarks (as for a capacitor), one has $\mathcal{E}_t \pi r_t^2 = q = g/2$, which implies that the effective tension of the flux tube is $$\label{effectivestring} \gamma_{\text{QCD}} = \frac{1}{2}\mathcal{E}_t q \, .$$ Hence the Schwinger formula (\[exponent\]) can be rewritten in terms of the natural QCD variables as $$\label{QCDSchwinger} P_{\text{QCD}} = \frac{\gamma_{\text{QCD}}}{ \pi^3} \sum_{n=1}^\infty \frac{1}{n^2} e^{-\frac{\pi m_{q}^2 }{ 2 \gamma_{\text{QCD}}} n } \, .$$ We should comment at this stage that the factor of $1/2$ in the exponent is a consequence of the fact that the QCD field in the flux tube has been treated as an abelian field.
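For orientation, the internal consistency of these flux-tube relations can be checked numerically. The sketch below uses the values quoted in the text, $\gamma_{\text{QCD}} \sim 0.177\,\text{GeV}^2$ and $r_t \sim 2.5\,\text{GeV}^{-1}$, solves for $\mathcal{E}_t$ and the quark charge $q$, and verifies $\gamma_{\text{QCD}} = \tfrac12 \mathcal{E}_t q$:

```python
import math

gamma_qcd = 0.177   # GeV^2, effective tube tension (value quoted in the text)
r_t = 2.5           # GeV^-1, tube radius (value quoted in the text)

# From gamma = (1/2) E_t^2 * pi * r_t^2, solve for the abelianised field strength
E_t = math.sqrt(2 * gamma_qcd / (math.pi * r_t ** 2))

# Gauss law for the tube (capacitor-like flux): q = E_t * pi * r_t^2
q = E_t * math.pi * r_t ** 2
g = 2 * q           # gauge coupling, with q = g/2

# Consistency: the two expressions for the effective tension must agree
tension_check = 0.5 * E_t * q
```

The agreement of `tension_check` with `gamma_qcd` is guaranteed algebraically; the point of the sketch is to extract the implied values of $\mathcal{E}_t$ and $g$ from the two quoted inputs.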
In the more recent paper [@Nayak:2005pf], a proper generalisation of the Schwinger formula to a non-abelian field has been derived, and the production rate has been shown to depend on two independent Casimir gauge invariants, $E^aE^a$ and $d_{abc}E^a E^b E^c$. In what follows we will see that the holographic model reproduces the exponential dependence of the production rate in (\[QCDSchwinger\]), up to this numerical factor. From the formula (\[QCDSchwinger\]), the probability for a meson to decay after time $t$, measured in the meson rest frame, is $1- e^{-V_4(t) P_{\text{QCD}}}$, where $V_4(t)$ is the four-volume spanned by the system until time $t$. For a meson which is modelled by a rotating flux tube of length $L$, this volume is $V_4(t) = \pi r_t^2 L t$. Therefore, the decay width (probability per unit time) is $\Gamma= \pi r_t^2 L P_{\text{QCD}}$. Because the meson mass is $M = \pi \gamma_{\text{QCD}} L$, one finds that the ratio of the decay width $\Gamma$ to the meson mass $M$ is independent of the effective length of the string, $$\left(\frac{\Gamma}{M} \right)_{\text{rot}} = \frac{2 r_t^2}{\gamma_{\text{QCD}}} P_{\text{QCD}}\,.$$ Similarly, modelling mesons as one-dimensional oscillators implies $(\Gamma/M)_{\text{osc}} = (\pi/4)(\Gamma/M)_{\text{rot}}$, so that the ratio is again independent of the effective size of the system. At the moment, the experimental data on meson decays do not agree with this prediction for lighter mesons (see the discussion in [@Peeters:2005fq; @Sonnenschein:2017ylo]), while for high-spin mesons, where one would expect this model to work better, the data are not accurate enough to confirm or reject such a prediction. Worldsheet instanton in flat space and string splitting {#s:flat_instanton} ======================================================= While our main goal is to study meson decays in the holographic setup, as a warm-up exercise we will first consider the process of meson decay in flat space, without using the analogy with the Schwinger formula.
This will provide an alternative, new derivation of the formula (\[QCDSchwinger\]) which, to the best of our knowledge, has so far not been presented elsewhere. In order to model mesons, including their flux tube as well as the endpoint quarks, we will use an action for the relativistic string supplemented with two massive particles which are attached to the string endpoints. This is the “old” string model, as discussed and reviewed in [@Barbashov:1990ce]. We want to find the Euclidean worldsheet configuration which interpolates between the unsplit and split string with massive endpoints. After performing a Wick rotation in the target and worldsheet space-time, the string action becomes [^2] $$\label{actionflat} \begin{aligned} S &= \gamma \int_{\tau_1}^{\tau_2}\!{\rm d} \tau\, \int_{0}^{\pi} d\sigma \sqrt{-(\dot{X} \cdot X')^2 + \dot{X}^2 X'^2 }\\[1ex] &\qquad\qquad + m \int_{ \tau_1}^{ \tau_2}\! {\rm d}\tau \bigg(\sqrt{\dot{X}^{2}(\tau , \sigma=0)} + \sqrt{\dot{X}^2(\tau, \sigma=\pi)}\bigg) \\[1ex] &\equiv S_{\text{bulk}} + S_{\partial, \sigma=0} + S_{\partial, \sigma=\pi }\,. \end{aligned}$$ Here $X \cdot X \equiv X^{\mu} X^{\nu} g_{\mu\nu}(X)$ and $g_{\mu\nu}(X)$ denotes the Euclidean metric in the target space, which at this stage is just the flat metric, $g_{\mu\nu} = \delta_{\mu\nu}$. The tension of the string is denoted by $\gamma= 1/(2\pi \alpha')$ and $m$ is the mass of the particles attached to the string endpoints. We are interested in finding a Euclidean, two-dimensional string configuration which interpolates between a single and a double string. The initial string has only two quarks at its endpoints. In the fully dynamical string model, the position of these outer quarks is fixed by the total angular momentum of the meson, which prevents the string from collapsing, see for example [@Barbashov:1990ce].
To simplify the discussion, in this paper we will take the quarks in the original meson to satisfy Dirichlet boundary conditions and confine them to move on a line in the Euclidean target space. At some point in the Euclidean “time”, a pair of massive quarks is pair-produced in the interior of the string. These particles represent new “internal” endpoints of the string, which can move freely, i.e. satisfy Neumann boundary conditions, see figure \[generalisedinstanton\]. In order to account for the pair-produced quarks, one needs to modify the action (\[actionflat\]) by adding to it the worldline action of the pair-produced quarks. Adding this extra term to the action is in spirit the same as what one does to describe the pair production of a charged particle-antiparticle pair in an external electric field using the instanton approach in the worldline formalism, see e.g. [@Semenoff:2011ng]. The main difference is that the role of the electric field is now played by the tension of the split string, which pulls apart the pair-produced particles. For simplicity, we will also assume that the particles have no transverse momentum, so that the whole process of string splitting is planar, i.e. both the in- and outgoing strings are in the same two-dimensional plane. As the variation of the action (\[actionflat\]) leads to bulk and boundary equations of motion, we have to make sure that both are satisfied. In the two-dimensional target space, the bulk equations of motion are always satisfied, thanks to the reparametrisation invariance of the action. So we just need to make sure that the boundary equations of motion for the Neumann boundary conditions hold.
Note that the boundary equations of motion receive a nontrivial contribution from the surface terms of the bulk part of the action, $$\label{equationNeumann} \frac{\partial\,\,\,}{\partial\tau}\left(\frac{\partial \mathcal{L}_{\partial, \sigma_B}}{\partial \dot{X}^0}\right)-\frac{\partial \mathcal{L}_{\text{bulk}}}{\partial {X'}^0}\Bigg|_{\sigma_B} = 0\,,$$ where $\sigma_B$ is the position of the boundary (or boundaries) of the string for which Neumann conditions are imposed. Before the quarks were pair-produced, the string was straight and stretched between $x^1=0$ and $x^1=2L$ in the target space. To describe this string configuration we choose the parametrisation $$\label{background} X^0 = \tau\,, \quad \quad X^1 = 2 L \frac{\sigma}{\pi} \,, \quad\quad \sigma \in [0, \pi)\,.$$ The instanton configuration for a splitting string is plotted in figure \[generalisedinstanton\], and is given by $$\label{solusplit} \begin{aligned} X_L^0(\tau,\sigma)&=X_R^0(\tau,\sigma)=\tau\,, \\[1ex] X_L^1(\tau,\sigma)&=x_L(\tau,\sigma)=-\frac{\sigma}{\pi}\left(\sqrt{-\tau^2+ \kappa^2}- a \right)\,, & \sigma \in [0, \pi) \,, \\[1ex] X_R^1(\tau,\sigma)&=x_R(\tau,\sigma)=\left(1-\frac{\sigma}{\pi}\right)\left(\sqrt{-\tau^2+\kappa^2}+ a \right)+2 a \frac{\sigma}{\pi}\,, &\sigma \in [0, \pi)\,, \end{aligned}$$ where $\kappa$ and $a$ are arbitrary constants, and $X_L$ and $X_R$ describe the left and right halves of the instanton (the red and blue areas in figure \[generalisedinstanton\]). Note that while we have written the solution piece-wise, the two “sides” of the instanton, $X_L$ and $X_R$, are glued together in a smooth way. The solution above is a Euclidean version of a solution found in [@Bardeen:1975gx].
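As a sanity check on (\[solusplit\]), note that the inner endpoint $\sigma \to \pi$ of the left branch traces the circle $(x-a)^2 + \tau^2 = \kappa^2$ in the Euclidean $(\tau, x^1)$ plane; a small sketch (with arbitrary illustrative values of $\kappa$ and $a$) confirms this:

```python
import math

# Illustrative values of the constants (assumed, not from the text)
kappa, a = 1.3, 0.4

def x_left_endpoint(tau):
    # x_L(tau, sigma=pi) from the splitting solution: a - sqrt(kappa^2 - tau^2)
    return -(math.sqrt(kappa ** 2 - tau ** 2) - a)

# The inner endpoint should lie on a circle of radius kappa centred at (0, a)
taus = [(-0.9 + 0.3 * k) * kappa for k in range(7)]
on_circle = all(
    math.isclose((x_left_endpoint(t) - a) ** 2 + t ** 2, kappa ** 2,
                 rel_tol=1e-12)
    for t in taus
)
```

The same algebra applied to $x_R(\tau,\sigma=0)$ shows the right inner endpoint lies on the same circle, so the produced quark-antiquark pair moves on a single Euclidean circle of radius $\kappa$.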
It is easy to see that the ansatz (\[solusplit\]) satisfies the Neumann boundary equations of motion (\[equationNeumann\]), $$\label{fullinstanton} -m\frac{\partial\,\,\,}{\partial\tau}\left(\frac{\dot{X}_L^\mu(\tau, \sigma=\pi)}{\sqrt{1+\dot{x}_L^2(\tau,\sigma=\pi)}}\right)+\gamma\left(-\dot{x}_L\dot{X}_L^\mu+\left(\frac{1+\dot{x}_L^2}{x_L'}\right)X_L'^\mu\right)\bigg |_{\sigma=\pi}=0\,,$$ (where $\mu=0,1$) provided that $\kappa= m/\gamma$. A similar expression holds for the right-hand piece $X_R^\mu$. Note also that the solution (\[solusplit\]), where the outer quarks are moving on straight lines, can be generalised to a solution for which the endpoints do not follow straight lines but move on arbitrary curves $f_{L}(\tau)$ and $f_R(\tau)$, see figure \[generalisedinstanton\]b: $$\label{solusplitV2} \begin{aligned} X_L^0(\tau,\sigma)&=X_R^0(\tau,\sigma)=\tau\,, \\[1ex] X_L^1(\tau,\sigma)&=x_L(\tau,\sigma)=-\frac{\sigma}{\pi}\left(\sqrt{-\tau^2+ \kappa^2}- a \right) + f_L(\tau) \left( 1- \frac{\sigma}{\pi} \right)\,, \quad& \sigma & \in (0, \pi)\,, \\[1ex] X_R^1(\tau,\sigma)&=x_R(\tau,\sigma)=\left(1-\frac{\sigma}{\pi}\right)\left(\sqrt{-\tau^2+\kappa^2}+ a \right) + f_R(\tau) \frac{\sigma}{\pi} \,, \quad& \sigma &\in (0, \pi) \,,\\ \end{aligned}$$ with $\kappa = m/\gamma$ as above. We therefore see that however the outer quarks move, the dynamics of the inner free (Neumann) quarks is unaffected. The “motion” of the inner quarks is always circular, with a radius of curvature determined by the ratio of the particle mass to the string tension which pulls the produced quarks apart. ![\[generalisedinstanton\]Flat space instantons, describing the breaking of the string flux tube.
The instanton on the left describes breaking of the flux tube where the external quarks are at a fixed distance, while the instanton on the right depicts external quarks which “move” on arbitrary paths.](FinalFigure2Version2){width="80.00000%"} In order to evaluate the probability for a single event of particle pair production, we need to evaluate the action of the instanton. The string configuration (\[solusplit\]) describes the process of pair production inside the string, as well as the propagation of the outer, background quarks. Hence, in order to isolate the part which describes particle production, we need to subtract the contribution corresponding to the background. In other words, the quantity of interest which gives us the probability for the particle production is $$\begin{gathered} \label{actionproduction} S_{\text{pp}} = S_{\text{full}} - S_{\text{background}} = m(\mathrm{Circumference}\,\,\mathrm{of}\,\,\mathrm{circle})-\gamma(\mathrm{Area}\,\,\mathrm{of}\,\,\mathrm{circle}) \\[1ex] = m(2\pi \kappa)-\gamma(\pi \kappa^2)= \frac{\pi m^2}{\gamma}\,.\end{gathered}$$ Here $S_{\text{background}}$ is the action of the background configuration (\[background\]) with no particle production, $S_{\text{full}}$ is the action of the instanton configuration (\[solusplit\]) and $\gamma$ is the string tension. It is easy to see that the same result is obtained with the more general solution (\[solusplitV2\]), where the outer quarks move on arbitrary, non-straight paths. In other words, the dynamics of the external quarks is fully decoupled from the production process inside the flux tube. We thus find that the probability for a single pair-production event to happen is given by $$\label{particlestuff} P_{\text{pp}} = e^{- S_{\text{pp}}} = e^{-\frac{\pi m_q^2}{\gamma}} \, .$$ We see that this is the same as the contribution of a single instanton in the Casher-Neuberger-Nussinov (CNN) formula (\[QCDSchwinger\]), up to a numerical factor of one half.
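The subtraction in (\[actionproduction\]) is simple enough to verify numerically. The sketch below (with illustrative values of $m$ and $\gamma$, not from the text) checks that with $\kappa=m/\gamma$ the circumference and area terms combine to $\pi m^2/\gamma$, i.e. the $n=1$ Schwinger exponent with the replacement $qE \to \gamma$:

```python
import math

# Illustrative quark mass and string tension (assumed values)
m, gamma = 0.35, 0.18

kappa = m / gamma                      # radius of the instanton circle
S_pp = m * (2 * math.pi * kappa) - gamma * (math.pi * kappa ** 2)

# n=1 Schwinger exponent with the electric field qE replaced by the tension
S_schwinger = math.pi * m ** 2 / gamma
```

Algebraically, $m\cdot 2\pi\kappa = 2\pi m^2/\gamma$ and $\gamma\pi\kappa^2 = \pi m^2/\gamma$, so their difference is exactly $\pi m^2/\gamma$ for any $m$ and $\gamma$.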
So we see that in the process of string splitting, the string tension plays the same role as the electric field in the Schwinger process, that is, it pulls the produced particles away from each other. However, note again that there is a difference with respect to the Schwinger process, as quarks couple to the string endpoints in a different way than described by minimal coupling to electromagnetism (as used in the CNN approach). The position of the instanton can be at any arbitrary point on the string worldsheet, as long as the instanton is not too close to the boundary of the string worldsheet, so that the instanton circle “fits” into the string worldsheet. One can therefore compute the probability per unit time $\Gamma$ from the probability per unit volume and unit time (\[particlestuff\]) as $\Gamma =P_{\text{pp}}L$, where $L$ is the string length. On the other hand, the mass of the initial mesonic “particle” is $M = \gamma L + 2m$, where $m$ is the mass of the initial quarks. In the limit of long strings ($L\gg m/\gamma$) one can approximate $M\approx \gamma L$ and also ignore subtleties related to whether the instanton fits into the worldsheet or not. In this limit one recovers the results from the previous section as well as from the holographic computation of [@Peeters:2005fq], namely that $\Gamma/M$ is a constant for all mesonic particles. In summary, we have constructed a flat-space instanton configuration which describes the splitting of an open relativistic string with massive endpoints into two strings with massive endpoints. The probability for such a process to occur is the same as the probability for pair production of charged massive particles in an external electric field whose strength is proportional to the tension of the string.
String splitting in the Sakai-Sugimoto holographic model {#s:ss_instanton} ======================================================== Our discussion of splitting strings was so far carried out in flat space, using the old string model of mesons. In holographic models of QCD, like the Sakai-Sugimoto model, mesons are incorporated by adding one or more flavour D8 probe branes in the holographic background which is dual to the confining, pure-glue theory at strong coupling [@Sakai:2005yt]. In this setup the mesons appear either as light fluctuations of the probe D8-brane in the supergravity (DBI) approximation, or as semi-classical configurations of relativistic strings whose worldsheets end on the probe brane. Light DBI excitations of the probe brane describe mesons up to spin one, while higher-spin mesons are described by semi-classical strings. In what follows we will focus on the phenomenologically more interesting case of higher-spin mesons and their decay, by constructing string worldsheet instanton configurations in the holographic background. Review of high-spin mesons in the Sakai-Sugimoto model ------------------------------------------------------ The background geometry in which the probe D8-branes and long strings are embedded is given by $$\begin{gathered} \label{backgroundgeometry} {\rm d}s^2 = \left(\frac{u}{R_{D_4}}\right)^{3/2}\left(-{\rm d} t^2+ \delta_{ij}{\rm d} x^i {\rm d} x^j + f_\Lambda(u){\rm d} x_4^2 \right) + \left(\frac{R_{D_4}}{u}\right)^{3/2} \bigg(\frac{{\rm d} u^2 }{f_{\Lambda}(u)} + u^2 {\rm d} \Omega_4 \bigg)\,, \\[1ex] f_\Lambda(u) = 1-\frac{{u_\Lambda}^3}{u^3}\,, \quad \quad i=1,2,3 \, , \end{gathered}$$ where ${\rm d}\Omega_4$ is the metric on a round four-sphere.
There is also a non-constant dilaton and an RR four-form field strength, $$\begin{aligned} e^\phi = g_s \left( \frac{u}{R_{D_{4}}} \right)^{3/4} \quad \quad F_4 = \frac{2 \pi N_c}{V_4} \epsilon_4 \, .\end{aligned}$$ Here $R_{D_4}^3= \pi g_s N_c l_s^3$, $g_s$ is the string coupling and $l_s^2=\alpha'$ is the string length. The coordinate $u$ is the “holographic” direction, which is bounded from below by $u \geq u_{\Lambda}$. The worldvolume, non-holographic, directions in which the gauge theory lives are $t,x_1,x_2,x_3$. One of the main properties of this background is the cigar-like submanifold spanned by the periodic coordinate $x_4$ and the holographic direction $u$. The tip of the cigar is positioned at $u=u_{\Lambda}$, where the $x_4$ circle (smoothly) shrinks to zero size.[^3] One usually refers to the region near $u_{\Lambda}$ as the “wall”. In order to incorporate quark/flavour degrees of freedom into this pure-glue theory, one needs to place probe flavour $D8$ brane(s) in this geometry. There are different ways in which one can embed flavour $D8$ branes in this background. For us, the relevant embedding of the probe flavour $D8$ brane is the one in which the $D8$ brane fills out all directions except the cigar $(u,x_4)$ submanifold. In the cigar submanifold, the flavour $D8$ brane has a U-shape, see figure \[backgroundprobe\], with the tip of the probe brane at some distance $m_q$ from the wall $u_{\Lambda}$, see [@Sakai:2005yt]. In principle $m_q$ is a free parameter of the brane embedding, and can be changed by varying the asymptotic separation between the endpoints of the probe, see figure \[backgroundprobe\]. ![\[backgroundprobe\]Sakai-Sugimoto background with the probe D8-brane and the U-shaped, mesonic string which hangs in the holographic direction towards the wall at $u=u_{\Lambda}$.](LoopAndSpacetime3){width="70.00000%"} Large-spin mesons correspond to rotating strings whose endpoints are fixed on the flavour D8-brane.
The strings are prevented from collapsing by the centrifugal force [@Kruczenski:2004me]. As the spin of the string is increased, the distance between the string endpoints increases as well, i.e. the string becomes longer and its worldsheet becomes more and more U-shaped. The two “vertical” parts of the string stretch almost vertically from the probe brane to the wall, and the horizontal part of the string stretches almost parallel to the wall, see figure \[backgroundprobe\]. It was shown in [@Kruczenski:2004me] that this string configuration is holographically equivalent to the system of two quarks connected by a flux tube, i.e. to the same model we have analysed in the previous section. By analysing the mass of this string configuration, it was shown that the vertical parts of the string correspond to the (bare) quark masses of the meson, while the horizontal part of the string corresponds to the energy stored in the QCD flux tube. In order to model a system with different masses for the two quarks, one needs to introduce more than one flavour D8-brane, each hanging at a different distance from the wall. The positions of these probes in the holographic direction specify the different quark masses. A meson with different quark masses is then a string with endpoints ending on these two different flavour D8-branes. We would now like to study the decay of such a string configuration. The hanging string is subject to quantum fluctuations, and when a part of the string worldsheet touches one of the flavour branes it can split and attach new endpoints to that flavour brane. The probability for a string to touch the flavour brane due to quantum fluctuations was computed in [@Peeters:2005fq] by constructing the string wave-function using a string bead model for a discretised string worldsheet. Our approach here will be different.
Here we will construct a configuration of the Euclidean worldsheet which interpolates between the single and double U-shaped strings, that is, a worldsheet instanton. String worldsheet instanton --------------------------- Our main goal is to holographically compute the probability per unit volume and time for a QCD flux tube to break. As in our previous analysis [@Peeters:2005fq], and as in the flat-space construction from the previous section, we simplify the problem by looking at a U-shaped string with endpoints which are “forced by hand” to follow a circular path of some radius. Imposing these boundary conditions is not unreasonable, as in the Lorentzian picture they correspond to quarks which accelerate away from each other with constant acceleration, as happens in the hadronisation phase of high-energy scattering processes.[^4] Generically, the string breaking process will be sensitive to the precise boundary conditions one imposes on the external quarks. However, one would expect that there is a limit in which the exact dynamics of the external quarks decouples from the breaking process (a sort of large-volume limit), so that the quark production process in this limit can be treated as a Schwinger process in a constant external field, as in [@Casher:1978wy]. As it is a priori not clear what this limit is, or whether it exists in our setup, our approach will be to first construct the general solution and compute the decay probability for an arbitrary mesonic particle, and then see if there is a limit in which this probability reduces to the Schwinger probability. A string can break only at a point where the interior of the worldsheet touches the probe D8-brane.
In real time this happens when, under quantum fluctuations, parts of the string worldsheet touch the probe brane.[^5] In the Euclidean setup, in order to construct the instanton configuration for a splitting string, one needs to start with a string worldsheet which is “pinned” to the D8 probe at some internal worldsheet point. So we impose Dirichlet boundary conditions in the $u$ and $x_4$ directions both for the string endpoints and for the “pinning point”. Once the string has split at the pinning point, the newly generated string endpoints are free to “move” in the D8 worldvolume directions ($x_0,x_1,x_2,x_3$ and $S^4$), i.e. they satisfy Neumann boundary conditions. In order to construct the worldsheet instanton, we need to solve the string equations of motion in the Wick-rotated background (\[backgroundgeometry\]). It will be convenient to change the background coordinates as follows, $$z = \frac{1}{u}\,, \quad \quad z_{D_4} = \frac{1}{R_{D_4}}\,, \quad \quad z_{\Lambda} = \frac{1}{u_{\Lambda}} \, ,$$ which turns the metric into $$\label{backgroundgeometryV2} \begin{aligned} {\rm d}s^2 = \left(\frac{z_{D_4}}{z}\right)^{3/2}\bigg({\rm d} x_0^2 + {\rm d}\rho^2 + \rho^2 {\rm d}\theta^2 &+ \rho^2 \sin^2 \theta\, {\rm d} \phi^2 + f_\Lambda(z){\rm d} x_4^2 \bigg) \\[1ex] &+ \frac{1}{z_{D_4}^{3/2} z^{5/2}} \bigg(f_{\Lambda}^{-1}(z) {\rm d} z^2 + z^2 {\rm d} \Omega_4 \bigg)\,,\\[1ex] f_\Lambda(z) &\equiv 1-\frac{z^3}{z_{\Lambda}^3} \, , \end{aligned}$$ and $0 \leq z \leq z_{\Lambda}$. We have Wick-rotated time and we have also introduced spherical coordinates in the $(x_1,x_2,x_3)$ directions. The string worldsheet extends in the radial direction $z$, has cylindrical symmetry in the worldvolume directions, and hangs from a fixed position in the $x_4$ direction, which is at the tip of the D8-probe.
A standard coordinate choice on the worldsheet is the static gauge, in which one makes the following ansatz for the string worldsheet, $$\label{parametrisation} z = \sigma\,, \quad \quad \rho= \rho(z)\,, \quad \quad \theta = \frac{\pi}{2}\,, \quad \quad \phi = \tau \, .$$ Plugging this into the string action one gets $${\mathcal L} = \gamma \sqrt{- ({\dot X} \cdot X')^2 + (X')^2 {\dot X}^2} = \gamma \frac{\rho}{z^{3/2}} \sqrt{z_{D4}^3 \rho'^2 + \frac{1}{z f_{\Lambda}(z)} }\,,$$ which leads to the equations of motion $$\begin{gathered} \label{equationsofmotionV1} 2z_{\Lambda}^3 \bigg(z_{\Lambda}^3 + z_{D_4}^3z(-z^3 + z_{\Lambda}^3)\rho'^2 \bigg) + \\[1ex] + z_{D_4}^3 \rho \bigg( z_{\Lambda}^3(z^3 + 2 z_{\Lambda}^3)\rho' + 3 z_{D_4}^3 z (z^3 - z_{\Lambda}^3)^2 \rho'^3 + 2 z z_{\Lambda}^3 (z^3 -z_{\Lambda}^3)\rho'' \bigg) = 0 \, .\end{gathered}$$ The above choice of worldsheet coordinate is, however, not very well suited to constructing numerical solutions. The U-shaped strings we are after have, in this coordinate system, parts in which either the $z'$ or the $\rho'$ derivative is large. In fact, because of the combination of almost vertical and almost horizontal segments, no single coordinate system turned out to be particularly well suited to finding reliable solutions in all regions of the parameter space which we have explored. We have therefore used a numerical solution method which automatically switches between three different coordinate systems on the worldsheet ($\sigma=z$, $\sigma = z+\rho$ and $\sigma=\rho$) so as to keep the solution regular. The equations of motion (\[equationsofmotionV1\]) admit two types of solutions, which have different topologies and satisfy different boundary conditions at the string endpoints. The first solution corresponds to the single U-shaped string: it describes the (original) quark and antiquark which are forced to “move” on a circular orbit, and are connected by a flux tube.
If one was to Wick rotate this configuration to Lorentzian time, it would correspond to a quark and antiquark which accelerate away from each other while being connected by a flux tube. We will refer to this solution as solution ([**I)**]{}; see the left-hand plot in figure \[VariousSolutions\]. The second solution is a string with two disconnected boundaries, which describes the process of flux tube breaking. The outer boundary of the string is forced by a Dirichlet boundary condition to be on a circle of a fixed radius $R_{2}$. The inner boundary is forced to be on a particular D8 probe (with Dirichlet boundary conditions in the $x_4$ and $z$ directions), but the internal ends of this string are free to “move” arbitrarily along the D8 probe (Neumann boundary conditions). The physical reason why we impose “free” Neumann boundary conditions on the inner edge of the string is that this part of the worldsheet corresponds to the pair-produced quarks, which “move” only under the influence of the flux tube and are not coupled to any external source. We will refer to this solution as solution ([ **II**]{}), see the right-hand plot in figure \[VariousSolutions\]. ![\[VariousSolutions\] Typical single loop solution [( **I)**]{}, left and a double loop solution [(**II)**]{}, right.](3DSingleLoopEx "fig:"){width="40.00000%"} ![\[VariousSolutions\] Typical single loop solution [( **I)**]{}, left and a double loop solution [(**II)**]{}, right.](3DDoubleLoopEx "fig:"){width="50.00000%"} It is not hard to see that if one imposes a Dirichlet boundary condition in the $z$ direction, then the inner boundary of the hanging string has to end orthogonally on the D8-brane worldvolume.
Namely, in the gauge $z=\sigma$, the Neumann boundary condition in the $\rho$ direction yields $$\label{orthogonal} \frac{\partial {\mathcal L}}{\partial \rho'} = \gamma \left(\frac{z_{D4}}{z}\right)^\frac{3}{2} \bigg(z_{D_4}^3 \rho'^2 + (z f_{\Lambda})^{-1} \bigg)^{- \frac{1}{2}} \rho \rho' = 0 \quad \Rightarrow \quad \frac{{\rm d} \rho}{{\rm d} z} = 0 \, ,$$ i.e. the string hangs orthogonally from the D8 probe. For both string configurations, the constituent quark masses are given by [@Kinar:1998vq] $$\label{quarkmasses} m_{Q} = \frac{1}{2 \pi \alpha'} \int_{z_\Lambda}^{z_{m_Q}} {\rm d} z \, \sqrt{g_{zz} g_{00} } = \frac{1}{2 \pi \alpha'} \int_{z_\Lambda}^{z_{m_Q}} {\rm d} z \, \frac{1}{f_{\Lambda}^{1/2} z^{2}} \, ,$$ which is just the proper distance from the tip of the probe D8-brane at $z_{m_Q}$ to the IR wall at $z_{\Lambda}$. Note that if there is more than one probe D8-brane, each ending at a different $z_{{m_Q}_i}$, then one has a system with different quark masses $m_{Q_i}$. Let us first look at the solution of type ([**I**]{}). The equations of motion (\[equationsofmotionV1\]) are second-order differential equations, and as such have two undetermined constants of integration. In order to see which parameters characterise a solution, let us work in the $z=\sigma$ gauge, since this is the simplest choice and the results are gauge independent. In this gauge $\sigma$ takes values in $(z_{m_{Q}},z_B)$, where $z_B$ is the position of the bottom of the string loop, see figure \[loopssingle\]. For the solution ([**I**]{}) we require that the tip of the loop is at the coordinate origin, $\rho(z_B)=0$. Also, as we are interested only in smooth loops, we require that at the bottom ${\rm d} z /{\rm d} \rho\big|_B=0$. For a given position of the probe brane $z_{m_Q}$, these two requirements uniquely fix the solution ([**I**]{}).
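The integral (\[quarkmasses\]) has an integrable inverse-square-root singularity at the wall $z=z_\Lambda$. A minimal numerical sketch (in units where $2\pi\alpha'=1$ and $z_\Lambda=1$, with a hypothetical brane position $z_{m_Q}$) evaluates it after the substitution $t=\sqrt{z_\Lambda - z}$, which removes the singularity:

```python
import math

z_L = 1.0      # IR wall position (units chosen so z_Lambda = 1)
z_mQ = 0.5     # tip of the probe D8-brane (hypothetical value)

def f_lambda(z):
    return 1.0 - (z / z_L) ** 3

def quark_mass(n=100000):
    # m_Q = \int_{z_mQ}^{z_L} dz / (sqrt(f_Lambda(z)) z^2), with 2 pi alpha' = 1.
    # Substituting z = z_L - t^2 (so dz = -2 t dt) turns the 1/sqrt(f_Lambda)
    # endpoint singularity at z = z_L into a finite integrand.
    t_max = math.sqrt(z_L - z_mQ)
    h = t_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h          # midpoint rule
        z = z_L - t * t
        total += 2 * t / (math.sqrt(f_lambda(z)) * z * z) * h
    return total
```

After the substitution the integrand is smooth on the whole interval, so the midpoint rule converges at its usual quadratic rate; comparing two resolutions gives a cheap convergence check.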
We are therefore left with two parameters which specify the solution ([**I**]{}): $z_B$, the position of the bottom of the loop, and $z_{m_Q}$, specifying the position of the top of the loop. In what follows we will usually work with fixed masses of the outer quarks $m_{Q}$, or equivalently we will fix $z_{m_Q}$. If one shifts the bottom of the string $z_{B}$, this will change the distance between the string endpoints, i.e. the distance $R$ between the outer quarks on the probe D8. As the position of the bottom of the loop comes closer to the wall ($z_B \rightarrow z_{\Lambda}$), the distance $R$ between the quarks becomes larger and larger, see figure \[outervsepsilon\]. In this near-wall limit, the single string loop looks more and more like a U-shaped string, see figure \[loopssingle\]. Only in this limit $z_B \rightarrow z_{\Lambda}$ is the identification of the “vertical” parts of the loop with the quark masses (\[quarkmasses\]) fully justified [@Kinar:1998vq]. Also, only for this kind of U-shaped string is the effective tension of the horizontal part of the string identified with $\sim \Lambda_{QCD}$, as the position of the wall specifies $\Lambda_{QCD}$ in this model. As the bottom of the loop approaches the IR wall, one discovers that the action scales more and more quadratically with the size of the loop, which is the expected behaviour for a single circular Wilson loop in a confining theory. ![\[outervsepsilon\] Distance between the quarks (string endpoints) for a single loop $({\bf I})$, as a function of the dimensionless separation of the tip of the loop from the wall, $\epsilon=(z_\Lambda - z_{B})/z_{\Lambda}$. Plots are given for different values of the quark masses: from bottom to top, black $m=1.31$, blue $m=2.03$, green $m=3.55$ and red $m=9.5$.](LoopradiusVSepsilon2){width="60.00000%"} ![\[loopssingle\] One-parameter family of solutions of type ([**I**]{}) plotted for a mass $m_Q=9.57$ of the external quarks.
Different solutions are parametrised by different distances between the quarks $R$, or equivalently, different distances from the bottom of the loop to the wall. For all the plots we have set $z_{\Lambda}=1$.](ElongatedSingle){width="90.00000%"} For the double loop solution (${\bf II}$), one needs to introduce two probe D8-branes. The outer boundary of the string worldsheet will end on the brane at position $z=z_{m_Q}$. The position of this brane fixes the mass $m_{Q}$ of the original (heavy) quark pair. Before the split, the original flux tube stretches between these two outer quarks. When the flux tube breaks, an inner boundary is formed on the string worldsheet. As argued earlier, the inner boundary of the loop ends orthogonally on the second brane, which is at position $z=z_{m_q}$, and this position fixes the mass of the *produced* quark-antiquark pair, $m_{q}$. When considering solutions of type ([**II**]{}), we will fix these two parameters $m_Q$ and $m_q$, as they are the parameters which are given in the dual gauge theory. Note that the solution (${\bf II}$) consists of two branches, outer and inner, which are glued in a smooth way at the bottom of the loop, $(z_B, R_B)$; these are coloured blue and red in figure \[looplabelled\]. In contrast to the loop ${\bf (I)}$, the bottom of the loop $({\bf II})$ is no longer at the origin $\rho=0$, but is placed at some point $( z_B, \rho = R_B \neq 0)$. Smoothness of the solution $({\bf II})$ at the bottom, as before, implies $(d z / d \rho)|_{(z_B,R_B)} =0$. In principle, the bottom of the loop $z_B$ can be anywhere in the range $0 \leq z_{m_Q} < z_{m_q}<z_B \leq z_{\Lambda}$. However, we will be mainly interested in loops whose bottom is near the wall, $z_B \rightarrow z_{\Lambda}$, since newly generated flux tubes are IR objects which exist at energies $\sim \Lambda_{QCD}$. It is also useful to introduce a dimensionless parameter $\epsilon= (z_{\Lambda}- z_B)/z_{\Lambda}\ll1$.
![Slice of a generic double loop solution [(**II)**]{} for $\phi=\text{const}$. The labels $z_{m_Q}$ and $z_{m_q}$ give the positions of two D8-branes and also specify the masses of the original (outer) quarks $m_Q$ and pair-produced (inner) quarks $m_q$. \[looplabelled\] ](FinalFigure8){width="90.00000%"} Once the bottom of the loop (${\bf II}$) is fixed to some $z=z_B$, for a given mass $m_q$, the inner (blue) branch of the solution is fully fixed by the requirement of orthogonality of the string to the $m_q$ probe brane (see ) and the condition of smoothness of the loop at the bottom. Therefore, the radius of the inner loop $R_1$ (on the $m_q$ brane), as well as the radius $R_B$ of the bottom of the loop, are fixed once $z_B$ and $m_q$ are specified. The outer (red) branch of the solution is fully fixed once the mass of the original quarks $m_Q$ is specified and one requires that this branch is glued in a smooth way to the inner branch. Note that the outer branch of the solution ([**II**]{}) need not end orthogonally on the probe brane $m_Q$, as the position of the outer quarks is fixed by Dirichlet boundary conditions. So in summary, solution ([**II**]{}) is fixed by specifying the quark masses $m_Q$, $m_q$ and the bottom of the loop $z_B$, or equivalently, $m_Q$, $m_q$ and the inner radius $R_1$ of the loop. ![Plots of the loops of type ([**II**]{}) for increasing values of the inner radius $R_1$. The left panel shows region (i) until the value for which $R_2$ is maximal. The middle panel shows loops in region (ii), where $R_2$ decreases as $R_1$ increases, until $R_2$ reaches its local minimum. The right panel shows the squashed loops of region (iii) which occur beyond that. \[limitlimit\] ](region1loops "fig:"){width="30.00000%"}  ![Plots of the loops of type ([**II**]{}) for increasing values of the inner radius $R_1$. The left panel shows region (i) until the value for which $R_2$ is maximal. 
The middle panel shows loops in region (ii), where $R_2$ decreases as $R_1$ increases, until $R_2$ reaches its local minimum. The right panel shows the squashed loops of region (iii) which occur beyond that. \[limitlimit\] ](region2loops "fig:"){width="30.00000%"}  ![Plots of the loops of type ([**II**]{}) for increasing values of the inner radius $R_1$. The left panel shows region (i) until the value for which $R_2$ is maximal. The middle panel shows loops in region (ii), where $R_2$ decreases as $R_1$ increases, until $R_2$ reaches its local minimum. The right panel shows the squashed loops of region (iii) which occur beyond that. \[limitlimit\] ](region3loops "fig:"){width="30.00000%"} ![The left panel shows the behaviour of the outer radius as inner radius varies, and the three different regions. Note that there are two critical points on this graph and the “squashing” of loops takes place both in region (ii) and (iii). The right panel shows the behaviour for different masses  (blue, gray, red) and fixed mass of the outer quarks $m_Q=9.5$. \[varyingboth\]](region123_edited "fig:"){width="45.00000%"} ![The left panel shows the behaviour of the outer radius as inner radius varies, and the three different regions. Note that there are two critical points on this graph and the “squashing” of loops takes place both in region (ii) and (iii). The right panel shows the behaviour for different masses  (blue, gray, red) and fixed mass of the outer quarks $m_Q=9.5$. \[varyingboth\]](LoopsR2vsR1_different_masses "fig:"){width="45.00000%"} Let us now try to understand the moduli space of these double loops. First we observe that for fixed $m_q$ and $m_Q$, as the inner radius $R_1$ is increased, the loop extends deeper and deeper into the bulk, towards the IR wall, or in other words, $\epsilon$ decreases. As this happens, *initially* the loop becomes more and more U-shaped and wider, with increasingly longer horizontal part and with larger and larger outer radius $R_2$. 
In this region, the effective size of the system (the ratio of the outer versus the inner radius) grows. We will refer to this region as region (i). The left panel in figure \[limitlimit\] shows a series of loops in this region. When the inner radius $R_1$ becomes larger than a particular critical value $R_{\text{crit-1}}$ the outer radius $R_2$ starts to decrease as the inner radius grows, so loops become more and more *squashed*, see the middle panel in figure \[limitlimit\]. Note that while the squashing happens, the bottoms of all the squashed loops stay in the region which is very close to the IR wall ($\epsilon \sim 10^{-5}$). We will refer to this region as region (ii). Finally, when the radius $R_1$ becomes larger than another critical value $R_{\text{crit-2}}$, both the inner and outer radii start to grow, but the loop retains its squashed shape and starts moving outwards as a whole; see the right panel in figure \[limitlimit\]. We will refer to this as region (iii). Figure \[varyingboth\] shows the relation between the inner and outer radius as the inner radius varies, in all three regions. Note that as the inner quark mass is increased the position of the peak moves to the right, but in such a way that the ratio $R_{1}/R_{2}$ increases, so that the system is effectively at smaller volume. Put differently, systems with smaller inner quark mass $m_q$ have larger effective size in the sense discussed above, and we expect them to reproduce the Schwinger results more accurately. The peak in the $R_2$ vs. $R_1$ plot persists as $m_q \rightarrow m_Q$, but increasing the masses of the outer and inner quarks to larger values reduces the height of the peak and shifts its location to larger values of $R_1$, so that eventually, for $m_q,m_Q \rightarrow \infty$, only region (i) remains and one is left with a simple linear relation between $R_1$ and $R_2$, as expected from e.g. [@Armoni:2013qda].
![Shapes of double loops (in region (i)) for fixed outer radius $R_2=2.4$ and varying inner quark mass. Note that as the inner quark mass decreases, the radius at which these quarks are produced also decreases. \[shapefixed\] ](LoopShapesFixedR2new){width="40.00000%"} ![ The relation between produced quark mass $m_q$ and radius $R_1$ at which the pair is produced for various loops in the region (i). The plots are made for $m_Q=9.5$ and for different outer radii $R_2= \{1.6, 1.98, 2.37, 2.78\}$. The largest value of $R_2$ corresponds to the bottom (red) curve.\[shapefixed2\]](FixedR2massVsR1){width="50.00000%"} In order to find the decay width of a given meson, we will need to keep $R_2$ and $m_Q$ fixed and look at the decay probability for varying inner quark mass $m_q$. Figure \[shapefixed\] illustrates that, for strings in region (i), the radius $R_1$ at which the inner quarks are produced decreases as $m_q$ is decreased. Figure \[shapefixed2\] shows the relation between $m_q$ and $R_1$ quantitatively, for different values of the outer radius $R_2$. It shows that when the total system is smaller ($R_2$ is smaller), quarks of the same mass $m_q$ are produced at a radius $R_1$ which is also smaller. Figure \[shapefixed2\] in addition suggests that if $R_1\ll R_2$ the relation between $R_1$ and $m_q$ becomes linear, as was the case for the Schwinger approximation (although the slopes of these lines depend on the size of the system, unlike for Schwinger). It is harder, and as we will argue below, less relevant, to produce similar plots for regions (ii) and (iii), as for the “squashed” loops in these regions the dependence of $R_1$ on $m_q$ is rather weak (small variations of $m_q$ lead to almost no change in $R_1$). Extracting the probability for decay ------------------------------------ Once the double loop solution is constructed we want to extract from it the probability for the flux tube to break.
As in the case of the flat space instanton \[actionproduction\], in order to compute this probability, one first needs to compute the action of the solution (${\bf II}$) and then subtract from it the action of the solution (${\bf I}$) $$\begin{aligned} \label{actioninstanton} \Delta S(m_q,m_Q,R_2) = S_{II}(m_q,m_Q,R_2) - S_{I}(m_Q, R_2)\, \, . \end{aligned}$$ Both these actions are evaluated for the loops ${\bf (I)}$ and ${\bf (II)}$ which have identical outer radius $R_2$, as this corresponds to the physical “size” of the initial system (cf. footnote \[f:nonsymmetric\]). Generically, the probability for a meson decay will depend on the size of the system $R_2$ and on the mass of the initial quarks $m_Q$, in addition to the mass of the pair produced quarks $m_q$. The dependence of the decay rate on the radius $R_2$ is something that one expects for realistic mesons, as the size of a meson is typically related to its angular momentum. As noted before, for a given system of fixed and large enough $R_2$, generically there are three possible radii at which quarks can be pair produced, see figure \[varyingboth\]. Each possible radius belongs to one of the regions (i), (ii) or (iii). Let us first analyse instantons in the region (i). Figure \[actionsvarious\] shows the instanton action \[actioninstanton\] in the region (i) as a function of quark mass. As the quark mass $m_q \rightarrow 0$, the instanton action goes to zero, i.e. lighter quarks are more likely to be produced, as expected. We also see that the probability per unit time and volume depends on the “size” of the system, and that the probability for a larger meson to split is smaller than for smaller mesons. This may sound unintuitive, as one may expect larger mesons to be more unstable. However, one should keep in mind that, when computing the lifetime of mesons, one still needs to multiply this probability with the meson volume.
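Given numerical profiles for the two loops, the subtraction in the instanton action reduces to simple quadrature of the Nambu-Goto area element. The sketch below (a minimal illustration, not the production code) uses a midpoint rule for an axially symmetric surface with a hypothetical constant warp factor $g\equiv 1$, so that the exact answer is known and can be checked; for the real computation $g(z)$ would be the relevant Sakai-Sugimoto metric factor and the profiles would come from the numerical solutions (${\bf I}$) and (${\bf II}$).

```python
import math

def nambu_goto_action(rho, z, g=lambda z: 1.0, T=1.0):
    """Midpoint-rule quadrature of
    S = 2 pi T * integral of rho * g(z) * sqrt(1 + z'^2) drho
    for an axially symmetric worldsheet profile z(rho) given as samples."""
    S = 0.0
    for i in range(len(rho) - 1):
        dr = rho[i+1] - rho[i]
        zp = (z[i+1] - z[i]) / dr            # finite-difference slope z'
        rho_mid = 0.5 * (rho[i] + rho[i+1])
        z_mid = 0.5 * (z[i] + z[i+1])
        S += 2*math.pi*T * rho_mid * g(z_mid) * math.sqrt(1 + zp*zp) * dr
    return S

# sanity check on a flat annulus 1 <= rho <= 2, z = const:
# the exact area is pi * (2**2 - 1**2) = 3 pi
n = 2000
rho = [1.0 + i*(1.0/n) for i in range(n+1)]
z = [0.5] * (n+1)
S_flat = nambu_goto_action(rho, z)

# Delta S of eq. (actioninstanton) is then S_II - S_I for two profiles
# sharing the same outer radius R_2.
```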
We should also note that when evaluating the action for instantons using double loops, one always needs to make sure that the instanton action is smaller than the action of two *disconnected* loops which have the same radii $R_1$ and $R_2$ [@Olesen:2000ji; @Gross:1998gk]. For all the loops discussed in this paper, we have always checked that this holds so that no Gross-Ooguri-Olesen-Zarembo type phase transition takes place. ![\[actionsvarious\] Instanton action as a function of quark mass $m_q$, plotted for different size systems $R_2=\{1.6, 1.98, 2.37, 2.78\}$. The upper curve (red) with the largest action corresponds to the largest $R_2$.](ActionDeltaS){width="50.00000%"} Evaluating the instanton action for radius $R_1$ in regions (ii) and (iii) one gets that $S(R_{1\text{(i)}},R_2) <S(R_{1\text{(ii)}},R_2)<S(R_{\text{1(iii)}},R_2)$. We note, however, that the difference between these three actions is minimal, less than a percent. Hence it seems that decay in the region (i) is the most dominant, although only marginally. Shapes of generic loops in regions (i), (ii) and (iii) which have the same $R_2$ are plotted in figure \[loopsregions\].[^6] It is at present unclear to us what the physical relevance of the squashed loops is, in particular of the very squashed loops in region (iii). These loops seem to suggest the existence of “exotic” decay channels for meson decay, where the pair-produced quarks remove most of the flux tube in the decay. It could be that, once the angular momentum is properly taken into account, these decays are forbidden due to selection rules. ![\[loopsregions\] Double loops in the regions (i), (ii) and (iii) for the same outer radius $R_2=3.0$ and inner quark mass $m_q=0.62$ (corresponding to $z_{m_q}=0.8$). The blue curve extending all the way to $\rho=0$ is a single loop with the same radius $R_2=3.0$.
](ThreeLoopsSameR2){width="55.00000%"} So in summary, the computation outlined above produces the probability for the decay of mesons of a particular “size”. It is hard to compare our findings, even at a qualitative level, with experimental data, as these are mostly not known for higher spin mesons. However, one expects that in a particular limit, the holographic computation should reproduce the computations of CNN and Schwinger which were outlined in section \[s:flat\_breaking\]. Both these computations work with a constant (chromo)electric field in infinite volume and in the approximation where the produced quarks do not back-react on the field. ![\[largevolume\] Relative radius $R_1/R_2$ of the system as a function of the inner radius $R_1$ for a fixed chosen mass $m_q=1.5$. The largest effective volume is achieved for configurations near the boundary between region (i) and (ii) in the notation of figure \[varyingboth\].](RelativeSize){width="50.00000%"} ![U-shaped loop approximation (red rectangular straight lines for the double loop, green straight lines for the single loop which is to be subtracted). The dashed parts are the same for the single and double loop solutions. For comparison we have also displayed an actual solution (blue curve). \[loopapproximate\] ](UShapedLoop2){width="90.00000%"} In order to achieve a large-volume limit in the holographic set up, one needs to look at long strings, with a horizontal part which is as large as possible in comparison to the radius $R_1$. Figure \[largevolume\] shows the effective size of the system, i.e. the distance $R_1$ at which the quarks are produced versus the full size of the system $R_2$, for quarks of fixed mass $m_q$. We see that the largest effective volume is achieved for loops at the boundary between the regions (i) and (ii). Note that having the largest effective volume does not mean that the instanton action for these loops is the smallest for a given fixed $m_q$.
By looking at shapes of these large-volume loops, see figure \[loopsregions\], we see that they are the most stretched and, moreover, look most like rectangular U-shaped loops. In order to get an idea of what we should expect the action for these loops to look like in the full numerical solution, let us consider an approximation of these large-volume loops with rectangular U-shaped loops, as indicated in figure \[loopapproximate\]. In this approximation, the action of the outer part of the loop (the dashed red segments in the figure) is the same for single and double loops and does not contribute to the instanton action. We therefore find that $$\Delta S_{\text{inst}} \sim S_{\text{tube}} - S_{\text{disc}} \sim 2 \pi R_1 m_q - R_1^2 \pi T\,,$$ where we have used that $m_q$ is proportional to the height of the tube, see . This expression is similar to the one obtained in the world-line derivation , and similar to the flat space expression , except that in those expressions the mass $m_q$ and the radius $R_1$ are linearly related through the equations of motion. In the present case, however, the mass $m_q$ and the radius $R_1$ are *independent* since the outer radius is (by construction) decoupled from the rest and is hence arbitrary. Our crude approximation therefore almost, but not completely, reproduces the Schwinger approximation. Motivated by this discussion, we will now focus attention on strings with the largest possible volume (largest ratio of $R_1/R_2$), which corresponds to the peak between regions (i) and (ii) of figure \[varyingboth\], for small quark mass $m_q$. For these strings we find a nice linear relation between $R_1$ and $m_q$, see figure \[alsovarious\]. Note, however, that these strings do not all have the same outer radius $R_2$. In order to nevertheless compare their actions, we will need to focus only on the inner part of the loop (from $\rho=R_1$ to $\rho=R_B$).
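The crude estimate $\Delta S_{\text{inst}} \sim 2\pi R_1 m_q - \pi T R_1^2$ can be extremized directly: the stationary point sits at $R_1 = m_q/T$, a linear relation between pair-creation radius and quark mass, with $\Delta S = \pi m_q^2/T$, quadratic in the mass as in the Schwinger formula. A minimal numerical check (the tension $T=1$ is an arbitrary illustrative choice, not a value taken from this model):

```python
import math

def delta_S(R1, m_q, T=1.0):
    # crude U-shaped-loop estimate: tube cost minus disc gain
    return 2*math.pi*R1*m_q - math.pi*T*R1**2

m_q, T = 0.5, 1.0
# scan for the stationary point of Delta S over R1 in (0, 2]
R1s = [i*1e-4 for i in range(1, 20001)]
R1_star = max(R1s, key=lambda r: delta_S(r, m_q, T))
S_star = delta_S(R1_star, m_q, T)

# analytic extremum: R1* = m_q / T, Delta S* = pi * m_q**2 / T
```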
The outer parts of the single and double loop (from $\rho=R_B$ to $\rho=R_2$) are approximately equal (as they are in the caricature) and would therefore cancel in the instanton action after subtracting the single loop background (see the loop in region (i) in figure \[loopsregions\]). As in the caricature, we will furthermore approximate the shape of the single loop between $\rho=0$ and $\rho=R_B$ by a straight line segment at constant $z=z_B$. The above is an ‘infinite volume approximation’ in the sense that it holds when . When we plot the instanton action $\Delta S$ versus the produced quark mass $m_q$ for these loops, we recover a quadratic relation, see figure \[Fittinglines\]. While the best fit is quadratic, as for Schwinger, there is a nonvanishing constant present. One could remove such a term by modifying the normalisation, and it would anyhow be cancelled in a computation of the ratio of any two probabilities, so it is irrelevant. We should also comment that the numerical factor in front of $m_q^2$ in this fit differs from the factor of $\pi/4$ in the Schwinger/CNN formula , which is what one expects. The expression  is valid only qualitatively in QCD and also our holographic model is not a dual of real QCD, so one should expect these kinds of differences between the two results. The probability $P_{\text{pp}}$ for a flux tube to break, per unit length and unit time, is obtained by exponentiation of the instanton action. In the approximation of large mesons with an (infinitely long) flux tube, translation invariance implies that the total probability for a meson to split will be given by $P_{\text{pp}}L$, where $L$ is the length of the flux tube. In finite-size systems however, $P_{\text{pp}}$ will in general depend on the position along the flux tube as well as the size of the system.
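The irrelevance of the additive constant in the quadratic fit can be made concrete: it drops out of any ratio of pair-production probabilities $P_{\text{pp}} = e^{-\Delta S}$. The sketch below fits the model $\Delta S = a\,m_q^2 + c$ to synthetic data (the actual numerical values behind figure \[Fittinglines\] are not reproduced here; the coefficients $a=0.8$, $c=0.1$ are arbitrary illustrations) and checks the cancellation of $c$ in a ratio.

```python
import math

def fit_quadratic_plus_const(ms, Ss):
    """Least-squares fit of S = a*m**2 + c via the 2x2 normal equations."""
    x = [m*m for m in ms]
    n = len(ms)
    Sxx = sum(v*v for v in x); Sx = sum(x)
    Sxy = sum(v*s for v, s in zip(x, Ss)); Sy = sum(Ss)
    det = Sxx*n - Sx*Sx
    a = (Sxy*n - Sx*Sy) / det
    c = (Sxx*Sy - Sx*Sxy) / det
    return a, c

# synthetic 'instanton actions' generated from a = 0.8, c = 0.1
ms = [0.2, 0.4, 0.6, 0.8, 1.0]
Ss = [0.8*m*m + 0.1 for m in ms]
a, c = fit_quadratic_plus_const(ms, Ss)

# the constant cancels in probability ratios P_pp = exp(-Delta S):
ratio = math.exp(-Ss[3]) / math.exp(-Ss[1])
ratio_no_c = math.exp(-0.8*ms[3]**2) / math.exp(-0.8*ms[1]**2)
```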
In order to evaluate the full probability we would first need to construct the non-axially symmetric instantons, and then integrate contributions of these instantons over the full length of the flux tube. We leave such an investigation for another project. ![Linear relation between the radius at which quarks are produced and their mass, obtained from the series of “maximal” loops on the boundary between regions (i) and (ii). \[alsovarious\]](LargeVsmass){width="50.00000%"} ![\[Fittinglines\] Quadratic dependence of the instanton action $\Delta S$ on the mass $m_q$ of the pair-produced quarks in the ‘infinite volume limit approximation’ in which one considers only loops near the boundary between region (i) and (ii).](ActionVsmass){width="50.00000%"} Discussion and outlook ====================== In this paper we have studied the decay of mesons using instanton techniques in two models: the old string model and the holographic model of Sakai-Sugimoto. In the first model mesons are represented by a pair of *massive* particles connected with a relativistic string (flux tube) in *flat space*. In order to study their decays, we have analytically constructed the worldsheet instanton configuration which interpolates between the initial and final mesonic configurations. Using this instanton, we were able to reproduce, up to an overall numerical factor, the formula  for the probability of breaking the QCD flux tube, derived a long time ago by Casher et al. [@Casher:1978wy]. They derived their formula by making a direct replacement of various quantities in the QED formula with the analogous quantities in QCD. In contrast, in our approach the connections between the theories followed from the derivation, rather than being postulated. Comparing the results for the decay probabilities showed that the chromoelectric field plays the same role as an elastic string.
However, this string does not couple minimally to the massive particles at the string endpoints, unlike the electric or chromo-electric field in the worldline derivation of the Schwinger-like formula. Our derivation is very simple, yet it produces quite a non-trivial field theory result. However, it was rudimentary in the sense that we have restricted our attention to planar processes, where both in- and outgoing particles lie in the same plane. It would be interesting to generalise this derivation to allow for the presence of transverse momenta of the outgoing particles. All of the flat space models (whether Schwinger, CNN or old string) share one “feature”: in order to incorporate (pair production of) quarks one has to introduce an extra term in the action, by hand. In the holographic model, in contrast, there is a unified treatment of the flux tube and quarks. In the second part of the paper we have developed a framework for studying the decay of large-spin mesons using the holographic Sakai-Sugimoto model. We have constructed a family of worldsheet instanton configurations, which interpolate between an incoming large-spin meson and two outgoing large-spin mesons. The generic instanton describes a decay in which both the finite size of the system and the backreaction of the pair-produced particles are taken into account. In this sense, the set up is more powerful than either the CNN or Schwinger computations, which only give the probability in the large volume limit and with no backreaction taken into account. A shortcoming of our computation is that it is restricted to (almost) cylindrically symmetric decay channels. In the infinite volume limit the probability is the same for breaking at all points, but in a finite-size system we expect the probability to be different along the flux tube. It would be very important to study less symmetric decays.
Another restriction of our computation is that the outer (original) quarks were considered only on circular orbits, whereas the boundary condition suitable for event generators is one in which quarks accelerate away from each other. In order to compute decays for mesonic particles in general, one would need to generalise our computations to systems in which the external quarks follow straight lines, and construct instantons which are positioned at an arbitrary point along the flux tube. Only then can total decay rates and lifetimes of the mesonic particles be computed, and one could investigate the interesting flat-space prediction that $\Gamma/L$ is universal. When constructing instanton configurations for finite-size mesons, we have discovered an interesting decay channel in which the pair-produced quarks “eat” most of the flux tube, leading to very short outgoing mesons. At the moment it is not clear to us what the physical significance of such a decay channel is. One possibility is that once the angular momentum is properly taken into account (as it is not in our Euclidean framework) these exotic decay channels will be forbidden by a selection rule. It would be interesting to investigate this question in the future. Related to this is the question of the proper treatment of the angular momentum of mesons in the holographic setup. One expects that angular momentum modifies decay rates, as it provides an extra centrifugal potential for pair-produced quarks. Some of these effects have been investigated for the Schwinger process in [@Gupta:1994tx]. In order to cross check our computation, we have also investigated meson decays in the large-volume limit. As expected, we have rediscovered the qualitative form of the Schwinger/CNN formula, up to a numerical factor. The long strings which were used in this large-volume limit offer a natural playground in which one could try to set up a systematic study of finite-size effects.
Namely, for large, but finite-size systems, the probability for a string to decay should have an expansion in powers of $R_1/R_2$, where $R_1$ is the radius at which quarks are produced and $R_2$ is the size of the system. It would be interesting to quantitatively study this expansion using the holographic set up. It would also be interesting to extend our analysis to finite temperature field theory. By introducing a horizon in the Sakai-Sugimoto setup one could generalise our instanton configuration to this background, and obtain the thermal probability for a flux tube to split. Finally, one could ask to what extent the results we obtain depend on the exact form of the metric. In particular, one might ask if for instantons in other confining geometries the quadratic dependence on the quark mass persists. We plan to investigate these issues in future work. T. Sakai and S. Sugimoto, “More on a holographic dual of [QCD]{}”, [*Prog. Theor. Phys.*]{} [**114**]{} (2006) 1083–1118, [[hep-th/0507073]{}](http://arxiv.org/abs/hep-th/0507073). T. [Sjöstrand]{}, “[The Lund Monte Carlo for jet fragmentation]{}”, [*Comp. Phys. Commun.*]{} [**27**]{} (1982) 243. B. Andersson, G. Gustafson, G. Ingelman, and T. [Sjöstrand]{}, “Parton fragmentation and string dynamics”, [*Phys. Rep.*]{} [**97**]{} (1983) 31. A. Casher, H. Neuberger, and S. Nussinov, “Chromoelectric flux tube model of particle production”, [*Phys. Rev.*]{} [**D20**]{} (1979) 179–188. A. Armoni, “[Beyond The Quenched (or Probe Brane) Approximation in Lattice (or Holographic) QCD]{}”, [*Phys. Rev.*]{} [**D78**]{} (2008) 065017, [[arXiv:0805.1339]{}](http://arxiv.org/abs/0805.1339). K. Peeters, J. Sonnenschein, and M. Zamaklar, “Holographic decays of large-spin mesons”, [*JHEP*]{} [**02**]{} (2006) 009, [[hep-th/0511044]{}](http://arxiv.org/abs/hep-th/0511044). J. Sonnenschein and D. Weissman, “[The decay width of stringy hadrons]{}”, [ *Nucl.
Phys.*]{} [**B927**]{} (2018) 368–454, [[arXiv:1705.10329]{}](http://arxiv.org/abs/1705.10329). I. K. Affleck, O. Alvarez, and N. S. Manton, “[Pair Production at Strong Coupling in Weak External Fields]{}”, [*Nucl. Phys.*]{} [**B197**]{} (1982) 509–519. J. S. Schwinger, “On gauge invariance and vacuum polarization”, [*Phys. Rev.*]{} [**82**]{} (1951) 664–679. I. K. Affleck and N. S. Manton, “[Monopole Pair Production in a Magnetic Field]{}”, [*Nucl. Phys.*]{} [**B194**]{} (1982) 38–64. G. C. Nayak, “[Non-perturbative quark-antiquark production from a constant chromo-electric field via the Schwinger mechanism]{}”, [[hep-ph/0510052]{}](http://arxiv.org/abs/hep-ph/0510052). B. M. Barbashov and V. V. Nesterenko, “[Introduction to the relativistic string theory]{}”, 1990. G. W. Semenoff and K. Zarembo, “[Holographic Schwinger Effect]{}”, [*Phys. Rev. Lett.*]{} [**107**]{} (2011) 171601, [[arXiv:1109.2920]{}](http://arxiv.org/abs/1109.2920). W. A. Bardeen, I. Bars, A. J. Hanson, and R. D. Peccei, “A study of the longitudinal kink modes of the string”, [*Phys. Rev.*]{} [**D13**]{} (1976) 2364–2382. M. Kruczenski, L. A. P. Zayas, J. Sonnenschein, and D. Vaman, “[Regge trajectories for mesons in the holographic dual of large-$N_c$ QCD]{}”, [ *JHEP*]{} [**06**]{} (2005) 046, [[hep-th/0410035]{}](http://arxiv.org/abs/hep-th/0410035). Y. Kinar, E. Schreiber, and J. Sonnenschein, “[$Q \bar{Q}$ potential from strings in curved spacetime – classical results]{}”, [*Nucl. Phys.*]{} [**B566**]{} (2000) 103–125, [[hep-th/9811192]{}](http://arxiv.org/abs/hep-th/9811192). A. Armoni, M. Piai, and A. Teimouri, “[Correlators of Circular Wilson Loops from Holography]{}”, [*Phys. Rev.*]{} [**D88**]{} (2013), no. 6, 066008, [[arXiv:1307.7773]{}](http://arxiv.org/abs/1307.7773). P. Olesen and K. Zarembo, “[Phase transition in Wilson loop correlator from AdS / CFT correspondence]{}”, [[arXiv:hep-th/0009210]{}](http://arxiv.org/abs/hep-th/0009210). D. J. Gross and H. 
Ooguri, “[Aspects of large N gauge theory dynamics as seen by string theory]{}”, [*Phys. Rev.*]{} [**D58**]{} (1998) 106002, [[arXiv:hep-th/9805129]{}](http://arxiv.org/abs/hep-th/9805129). K. S. Gupta and C. Rosenzweig, “Semiclassical decay of excited string states on leading regge trajectories”, [*Phys. Rev.*]{} [**D50**]{} (1994) 3368–3376, [[hep-ph/9402263]{}](http://arxiv.org/abs/hep-ph/9402263). [^1]: We should note that the probe-brane approximation, in which back-reaction of the flavour brane is not taken into account, corresponds to the quenched approximation in QCD, in which dynamical features of quarks are neglected. One might thus say that in this approximation one cannot see any dynamical features of quarks, in particular one should not be able to see quark pair production as these correspond to $N_f/N_c$ corrections. However, generically the situation is more subtle than this. See in particular [@Armoni:2008jy] which puts forward a proposal to compute signals of string breaking through the computation of the correlators of connected Wilson loops. The computation of [@Armoni:2008jy] is in spirit very similar to what we do in the present paper. [^2]: Note that we are using a mostly-plus Lorentzian metric. [^3]: In order to ensure that tip of the cigar is non-singular, the periodicity of $x_4$ has to be $$\begin{aligned} \delta x_4 = \frac{4 \pi}{3}\left(\frac{R_{D_4}^3}{u_{\Lambda}}\right)^{1/2} \equiv 2 \pi R \, . \end{aligned}$$ [^4]: \[f:nonsymmetric\] A better description of meson decays would involve string surfaces with straight external boundaries and a circular inner boundary. However, this breaks the rotational symmetry and the computation is thus numerically much more involved. In what follows we will nevertheless often loosely refer to the size of the outer radius as “the size of the meson”. We intend to return to the non-symmetrical problem in a followup paper. 
[^5]: Here we are considering the breaking of an open string into two other open strings. Note, however, that there is an alternative decay channel in which the open string radiates closed loops. In this process one does not require that the string worldsheet touches the probe; rather, any self-intersection of the string worldsheet can lead to the emission of closed strings. However, this process does not describe the decay of a meson into two mesons, but the decay of a meson into another meson plus a glueball. Such a process is suppressed by additional powers of $g_s$, which suppress open-to-closed string amplitudes with respect to open-to-open string amplitudes, and it will not be analysed here. [^6]: At first glance it may look strange that the loops in regions (ii) and (iii) have a larger action than the loop in region (i), given that loops (ii) and (iii) look like squashed versions of loop (i). However, none of these loops is strictly rectangular, and for given fixed $R_2$ they do not have tips which are at the same distance from the wall. So in order to compare the actions one needs to evaluate them explicitly.
--- abstract: | Scalar perturbations of Friedmann-Lemaître cosmologies can be analyzed in a variety of ways using Einstein’s field equations, the Ricci and Bianchi identities, or the conservation equations for the stress-energy tensor, and possibly introducing a timelike reference congruence. The common ground is the use of gauge invariants derived from the metric tensor, the stress-energy tensor, or from vectors associated with a reference congruence, as basic variables. Although there is a complication in that there is no unique choice of gauge invariants, we will show that this can be used to advantage. With this in mind our first goal is to present an efficient way of constructing dimensionless gauge invariants associated with the tensors that are involved, and of determining their inter-relationships. Our second goal is to give a unified treatment of the various ways of writing the governing equations in dimensionless form using gauge-invariant variables, showing how simplicity can be achieved by a suitable choice of variables and normalization factors. Our third goal is to elucidate the connection between the metric-based approach and the so-called $1+3$ gauge-invariant approach to cosmological perturbations. We restrict our considerations to linear perturbations, but our intent is to set the stage for the extension to second order perturbations.
author: - | \ [Claes Uggla]{}[^1]\ Department of Physics,\ University of Karlstad, S-651 88 Karlstad, Sweden - | \ [John Wainwright]{}[^2]\ Department of Applied Mathematics,\ University of Waterloo, Waterloo, ON, N2L 3G1, Canada\ title: Scalar Cosmological Perturbations --- PACS numbers: 04.20.-q, 98.80.-k, 98.80.Bp, 98.80.Jk Introduction ============ Currently, increasingly accurate observations are driving theoretical cosmology towards more sophisticated models of matter and the study of possible nonlinear deviations from FL cosmology.[^3] Motivated by this state of affairs, in a recent paper (Uggla and Wainwright (2011), hereafter referred to as UW), we initiated a program of research whose long term goal is to provide a general but concise description of [*nonlinear*]{} perturbations of FL cosmologies that will reveal the structure of the governing equations, and hence facilitate their analysis. In furthering this goal one is faced with making three choices. First, there is the choice of gauge-invariant variables: the work of Bardeen (1980) made clear that there is no unique choice. Second, the use of dimensionless variables invariably leads to physical insight via the choice of a suitable normalizing factor or factors. Third, there is the choice of how to formulate the governing equations: Einstein’s field equations, the Ricci and Bianchi identities, the conservation equation for stress-energy, or other matter equations. In this paper we systematically consider these three choices, working for the moment within the framework of linear perturbation theory. Our first goal in this paper is to present an efficient way of constructing dimensionless gauge invariants associated with the metric tensor, the stress-energy tensor, or other structures that may be introduced, and of determining their inter-relationships. We use the method of Nakamura (2003), adapted as in UW to create dimensionless gauge invariants.
Our second goal is to give a unified treatment of the various ways of writing the governing equations in dimensionless form using gauge-invariant variables within the framework of the [*metric-based approach*]{} to cosmological perturbations.[^4] In UW we gave the linearized Einstein equations in two forms, which we referred to as the Poisson form, associated with the work of Bardeen (1980), and the uniform curvature form, associated with the work of Kodama and Sasaki (1984). In the present paper we derive the linearized conservation equations for the stress-energy tensor and, by expressing them in terms of suitable gauge invariants, give an alternative description of the dynamics of scalar perturbations as a system of two first order (in time) partial differential equations. We also include the case where the source has multiple components. In addition, by using the inter-relationships between the different gauge invariants, we are able to give a unified description of the various “conserved quantities” that are associated with long wavelength scalar perturbations. Our third goal is to elucidate the connection between the metric-based approach and the so-called [*1+3 gauge-invariant approach*]{} to cosmological perturbations[^5], which was developed with the goal of circumventing the gauge difficulties associated with scalar perturbations (Hawking (1966), Ellis and Bruni (1989) and Ellis, Bruni and Hwang (1990)). The $1+3$ approach is formulated independently of the metric-based approach,[^6] and indeed there is a significant gap between the two approaches. In the metric-based approach it is customary to expand the metric and other basic variables in terms of a power series in a perturbation parameter as in UW, since this clarifies the linearization procedure and permits one to extend the analysis to higher order perturbations. In this respect the metric-based approach is analogous to standard elementary perturbation procedures in physics and engineering. 
On the other hand, the $1+3$ approach is not formulated as a conventional perturbation procedure, and relies instead on deriving exact evolution equations which are then linearized by dropping products of first order terms. In this paper we will reformulate the $1+3$ approach so as to bridge the above-mentioned gap. The plan of the paper is as follows. In section \[sec:gauge\] we give the metric and stress-energy gauge invariants and specify the four gauge choices and the two normalizations that we will use. In section \[sec:equations\] we discuss the equations for scalar perturbations that arise from the linearization of the conservation law for the stress-energy tensor, and their relation with the linearized Einstein equations. The details of the derivation, which makes use of the Replacement Principle in Appendix \[app:repl\], are given in Appendix \[app:derconserved\]. In section \[sec:cons\] we give a concise derivation of the so-called conserved quantities in gauge-invariant form. In section \[sec:1+3\] we introduce the basic variables in the $1+3$ gauge-invariant approach to scalar perturbations and derive the governing equations, which we then relate to the corresponding equations in the metric-based approach. The details are relegated to Appendix \[app:1+3\]. Section \[sec:discuss\] contains a brief summary and discussion. Gauge invariants and gauge fields {#sec:gauge} ================================= We begin by describing a dimensionless version of Nakamura’s method for constructing gauge invariants (see Nakamura (2007), equations (2.19), (2.23) and (2.26), and UW, section 2.1, for a brief introduction). Consider a family of tensor fields $A(\epsilon)$ and a background scalar $\lambda$ having dimension *length* such that $\lambda^n A(\epsilon)$ is dimensionless[^7]. 
The change induced in the first order perturbation ${}^{(1)}\!A$ by a gauge transformation generated by a dimensionless vector field $\xi$ on the background can be expressed using the Lie derivative $\pounds$: \[delta\_A\] $$\Delta\, {}^{(1)}\!A = \pounds_\xi\, {}^{(0)}\!A,$$ (see, for example, Bruni [*et al*]{} (1997), equation (1.2)). Let $X$ be a dimensionless vector field that satisfies \[delta\_X\] $$\Delta X^a = \xi^a.$$ It follows that the dimensionless object defined by \[bold\_A\] $${\bf A}[X] := \lambda^n \left({}^{(1)}\!A - \pounds_{X}\, {}^{(0)}\!A \right),$$ is gauge-invariant. We say that ${\bf A}[X]$ is the *gauge invariant associated with ${}^{(1)}\!A$ by $X$-compensation*. Since a choice of $X$ yields a set of gauge-invariant variables that are associated with a specific fully fixed gauge we refer to $X$ as the [*gauge field*]{}. In this paper we will use two choices for the normalization factor $\lambda$: if $A$ is a geometric quantity, we will use $\lambda=a$, where $a$ is the background scale factor, while if $A$ is a matter quantity we will use $\lambda={\cal M}$, where ${\cal M}$ is defined[^8] by . In the latter case we will denote ${\bf A}[X]$ by ${\mathbb A}[X]$. Metric gauge invariants ----------------------- Given a 1-parameter family of metrics $g_{ab}(\epsilon)$, where $\epsilon$ is a perturbation parameter and $g_{ab}(0)$ is a Robertson-Walker (RW) metric, we define a dimensionless conformal metric ${\bar g}_{ab}(\epsilon)$ according to \[bar\_g\] $$g_{ab}(\epsilon) = a^2\, {\bar g}_{ab}(\epsilon),$$ where $a$ is the scale factor of the RW metric. We expand ${\bar g}_{ab}(\epsilon)$ in powers of $\epsilon$: $${\bar g}_{ab}(\epsilon) = {}^{(0)} {\bar g}_{ab} + \epsilon\, {}^{(1)} {\bar g}_{ab} + \dots\, ,$$ and label the unperturbed metric and (linear) metric perturbation according to \[gamma,f\] $$\gamma_{ab} := {}^{(0)} {\bar g}_{ab} = {\bar g}_{ab}(0), \qquad f_{ab} := {}^{(1)} {\bar g}_{ab} = \left.\frac{\partial {\bar g}_{ab}(\epsilon)}{\partial \epsilon}\right|_{\epsilon=0}. $$
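The gauge invariance of ${\bf A}[X]$ can be checked in one line; the following sketch (our rendering, using only the two transformation rules just stated and the linearity of $\pounds$ in its vector field argument at the background level) makes the cancellation explicit:

```latex
\Delta {\bf A}[X]
  = \lambda^{n}\left(\Delta\,{}^{(1)}\!A \;-\; \pounds_{\Delta X}\,{}^{(0)}\!A\right)
  = \lambda^{n}\left(\pounds_{\xi}\,{}^{(0)}\!A \;-\; \pounds_{\xi}\,{}^{(0)}\!A\right)
  = 0 .
```

Note that $\lambda$, being a background quantity, is unaffected by the gauge transformation.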
In order to construct a gauge field $X$ that satisfies , *using only the metric*, we need to decompose the metric perturbation $f_{ab}$ into scalar, vector and tensor modes. Relative to a local coordinate system[^9] we introduce the notation \[split\_f\] $$\begin{aligned} f_{00} &= -2\varphi, \\ f_{0i} &= {\bf D}_i B + B_i,\\ f_{ij} &= -2\psi \gamma_{ij} + 2{\bf D}_i {\bf D}_j C + 2{\bf D}_{(i} C_{j)} + 2C_{ij},\end{aligned}$$ where the vectors $B_i$ and $C_i$ and the tensor $C_{ij}$ satisfy $${\bf D}^i B_i = 0, \qquad {\bf D}^i C_i = 0, \qquad C^i\!_i = 0, \qquad {\bf D}^i C_{ij} = 0,$$ where ${\bf D}_i$ is the spatial covariant derivative associated with $\gamma_{ij}$. We can satisfy the *spatial part* $\Delta X^i = \xi^i$ of the requirement  by choosing \[X\_i\] $$X_i = {\bf D}_i C + C_i$$ (UW, section 2.2), which we will take to be our default choice for $X_i$. With this choice, the components of the gauge invariant ${\bf f}_{ab}[X]$ associated with the metric perturbation $f_{ab}$ by $X$-compensation, are given by (UW, equations (21), (23) and (25)) \[bold\_f\_split\] $$\begin{aligned} {\bf f}_{00}[X] &= -2 \Phi[X] \, ,\\ {\bf f}_{0 i}[X] &= {\bf D}_i {\bf B}[X] + {\bf B}_i\, ,\\ {\bf f}_{ij}[X] &= -2\Psi[X] \gamma_{ij} + 2{\bf C}_{ij}\, ,\end{aligned}$$ where \[metric\_gi\] $$\Phi[X] := \varphi - (\partial_\eta + {\cal H})X^{0}, \qquad \Psi[X] := \psi + {\cal H}X^0, \qquad {\bf B}[X] := B - \partial_\eta C + X^0 ,$$ \[boldB,C\] $${\bf B}_i := B_i - \partial_\eta C_i, \qquad {\bf C}_{ij} := C_{ij}.$$ In equation , ${\cal H}$ is the dimensionless Hubble scalar, defined by[^10] \[calH\] $${\cal H} := \frac{a'}{a}.$$ The quantities ${\bf B}_i$ and ${\bf C}_{ij}$, which describe the vector mode and tensor mode of the perturbation respectively, are intrinsic metric gauge invariants[^11] that are independent of the gauge field $X$. In contrast the gauge invariants $\Phi[X]$, $\Psi[X]$ and ${\bf B}[X]$, which describe the scalar mode, depend on the choice of $X^0$ but not on the choice of the spatial gauge field $X^i$. 
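As a consistency check (a sketch in our notation; the transformation $\Delta\psi = -{\cal H}\xi^0$ follows from applying the gauge transformation rule to the conformal metric and reading off the trace of the spatial scalar part), the invariance of $\Psi[X]$ works out as follows:

```latex
\Delta\psi = -\,\mathcal{H}\,\xi^{0},
\qquad
\Delta X^{0} = \xi^{0},
\qquad\Longrightarrow\qquad
\Delta\Psi[X] = \Delta\psi + \mathcal{H}\,\Delta X^{0}
              = -\,\mathcal{H}\,\xi^{0} + \mathcal{H}\,\xi^{0} = 0 .
```

The same mechanism underlies the invariance of $\Phi[X]$ and ${\bf B}[X]$.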
Of course, if we leave $X^i$ arbitrary then ${\bf f}_{ab}[X]$ contains additional terms and its components are given by \[bold\_f\_split1\] $$\begin{aligned} {\bf f}_{00}[X] &= -2 \Phi[X] \, ,\\ {\bf f}_{0 i}[X] &= {\bf D}_i {\bf B}[X] + {\bf B}_i + \partial_{\eta}{\bf Z}_i[X]\, ,\\ {\bf f}_{ij}[X] &= -2\Psi[X] \gamma_{ij} + 2{\bf C}_{ij} + 2{\bf D}_{(i} {\bf Z}_{j)}[X]\, ,\end{aligned}$$ where $${\bf Z}_i[X] := {\bf D}_i C + C_i - X_i.$$ Our default choice  for $X^i$ corresponds to ${\bf Z}_i[X]=0$. Stress-energy gauge invariants {#sec:stress-energy} ------------------------------ Consider a stress-energy tensor $T^a\!_b(\epsilon)$ that obeys the background symmetries, [*i.e.*]{}, it is spatially homogeneous and isotropic: $$\label{T_0} {\bf D}_i{}^{(0)}\!T^\alpha\!_\beta = 0, \qquad {}^{(0)}\!T^0\!_i = {}^{(0)}\!T^i\!_0 = 0 ,\qquad {}^{(0)}\!T^i\!_j = {{\textstyle{1\over3}}}\,\delta^i\!_j\,{}^{(0)}\!T^k\!_k .$$ We assume that $T^a\!_b(\epsilon)$ satisfies the conservation law $\,{}\!^\epsilon\bna\!_b T^b\!_a(\epsilon) = 0$, which at zeroth order yields \[conserved0\] $${}^{(0)}\!\rho^\prime = -3{\cal H}\left({}^{(0)}\!\rho\, + {}^{(0)}\!p\right) ,$$ where $${}^{(0)}\!\rho = -{}^{(0)}\!T^0\!_0,\qquad {}^{(0)}\!p = {{\textstyle{1\over3}}} {}^{(0)}\!T^k\!_k .$$ When constructing dimensionless gauge invariants it is necessary to choose a normalization factor. In Newtonian theory the dimensionless quantity $\delta\rho/\rho\equiv{}^{(1)}\!\rho/{}^{(0)}\!\rho$, where $\rho$ is the mass density, is used to describe structure formation. By analogy the same quantity is usually used in GR, but with $\rho$ being the mass-energy density instead. We propose that in GR a more natural normalization factor is the inertial mass-energy density ${}^{(0)}\!\rho + {}^{(0)}\!p$, since this is the quantity that replaces $^{(0)}\!\rho$ in the relativistic energy-momentum conservation equations. 
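As an aside, the zeroth order conservation law integrates immediately for a linear equation of state ${}^{(0)}\!p = w\,{}^{(0)}\!\rho$ with constant $w$, giving ${}^{(0)}\!\rho \propto a^{-3(1+w)}$. A quick symbolic check (a sketch using sympy; the symbol names are ours, not the paper's):

```python
import sympy as sp

eta, w, C = sp.symbols('eta w C', positive=True)
a = sp.Function('a', positive=True)(eta)

H = sp.diff(a, eta) / a                  # dimensionless Hubble scalar calH = a'/a
rho = C * a**(-3*(1 + w))                # candidate solution rho ∝ a^{-3(1+w)}
p = w * rho                              # linear equation of state

# conservation law: rho' + 3 calH (rho + p) should vanish identically
residual = sp.diff(rho, eta) + 3*H*(rho + p)
print(sp.simplify(residual))             # 0
```

The check holds for an arbitrary scale factor $a(\eta)$, as it must, since only the conservation law itself was used.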
As we will show in this paper normalizing with ${}^{(0)}\!\rho + {}^{(0)}\!p$ leads to a simpler description of scalar density perturbations when using matter variables. We shall refer to this type of normalization as *inertial mass-density normalization* or more briefly, as [*${\cal M}$-normalization*]{}. In order to implement the above idea we assume that the inertial mass-density ${}^{(0)}\!\rho\, + {}^{(0)}\!p$ in  is positive, and introduce a normalization factor with dimension $length$, defined by $$\label{c_M} {\cal M} := \left({}^{(0)}\!\rho\, + {}^{(0)}\!p\right)^{-1/2} .$$ As in UW we introduce the notation \[A\_T\] $${\cal A}_T := a^2\left({}^{(0)}\!\rho\, + {}^{(0)}\!p\right), \qquad {\cal C}_T^2 := \frac{{}^{(0)}\!p'}{{}^{(0)}\!\rho'},$$ and in analogy with UW[^12] we define the following intrinsic gauge invariants associated with the stress-energy tensor[^13], using $\cal M$-normalization: \[GT\] $$\begin{aligned} \hat{\mathbb T}^i\!_j &:= {\cal M}^2\,{}^{(1)}\!{\hat T}^i\!_j \\ {\mathbb T}_i &:= - {\cal M}^2\left( {\bf D}_i {}^{(1)}\!T^0\!_0 + 3{\cal H}{}^{(1)}\!T^0\!_i \right), \label{GiTi}\\ {\mathbb T} &:= {\cal M}^2({\cal C}_T^2 {}^{(1)}\!T^0\!_0 + {{\textstyle{1\over3}}} {}^{(1)}\!T^k\!_k) \label{bf_T1},\end{aligned}$$ where \[GT\_hat\] $${}^{(1)}\!{\hat T}^i\!_j := {}^{(1)}\!T^i\!_j - {{\textstyle{1\over3}}}\,\delta^i\!_j\, {}^{(1)}\!T^k\!_k.$$ We also introduce the following gauge invariants by $X$-compensation[^14] using $\cal M$-normalization: \[TXcomp\] $$\begin{aligned} \mathbb{T}^0\!_0[X] &:= {\cal M}^2\left({}^{(1)}\!T^0\!_0\right) - 3{\mathcal{H}}X^0 \label{DX}\\ \mathbb{T}^0\!_i[X] & := {\cal M}^2\,{}^{(1)}\!T^0\!_i + {\bf D}_i X^0. \label{TiX}\end{aligned}$$ It follows from  and  that \[BbbT\_i\] $${\mathbb T}_i = - \left( {\bf D}_i\, {\mathbb T}^0\!_0[X] + 3{\cal H}\, {\mathbb T}^0\!_i[X] \right).$$ Note that stress-energy gauge invariants depend only on the choice of $X^0$, not on $X^i$. 
In analogy with UW (see equations (50)) we decompose the matter gauge invariants ${\hat{\mathbb T}}{}^i\!_j, {\mathbb T}_i, {\mathbb T}, {\mathbb T}^0\!_i[X]$ and ${\mathbb T}^0\!_0$ into scalar, vector, and tensor modes and label them as follows: \[T\_i\] $$\begin{aligned} {\hat{\mathbb T}}{}^i\!_j &= {\bf D}^i\!_j{\bar\Pi} +\, 2\gamma^{ik}{\bf D}_{(k}{\bar\Pi}_{j)} + {\bar{\Pi}}^i\!_j , \label{bf_T_hat}\\ {\mathbb T}_i & = {\bf D}_i {\mathbb D}+ {\mathbb D}_i, \label{bf_T_i} \\ {\mathbb T} & = {\bar \Gamma}, \label{bf_T}\\ {\mathbb T}^0\!_i[X] & = {\bf D}_i {\mathbb V}[X] + {\mathbb V}_i, \label{hybrid_T} \\ {\mathbb T}^0\!_0[X] &= -{\mathbb D}[X], \label{T_00}\end{aligned}$$ where $$\label{restrict} {\bf D}^i{\bar \Pi}_i =0 ,\qquad {\bar {\Pi}}^k\!_k = 0 ,\qquad {\bf D}_i{\bar {\Pi}}^i\!_j = 0, \qquad {\bf D}^i {\mathbb D}_i = 0, \qquad {\bf D}^i {\mathbb V}_i=0,$$ and \[Dij\] $${\bf D}_{ij} := {\bf D}_{(i}{\bf D}_{j)} - {{\textstyle{1\over3}}}\gamma_{ij}{\bf D}^2, \qquad {\bf D}^2 := {\bf D}^i{\bf D}_i.$$ It follows from  and  that $$\mathbb{D} = \mathbb{D}[X] - 3{\mathcal{H}}\mathbb{V}[X], \qquad \mathbb{D}_i = -3{\mathcal{H}}\mathbb{V}_i .$$ Standard choices of the gauge field {#subsection:choice} ----------------------------------- In order to eliminate the gauge freedom in the scalar mode, thereby determining the perturbed metric uniquely, we have to fully specify the gauge field $X$. We fix the spatial part $X^i $ of the gauge field [*ab initio*]{} as in equation , leaving the temporal part $X^0$ to be specified. We observe that $X^0$ appears [*linearly and algebraically*]{} in the definitions of the gauge invariants: \[basic\_gi\] $$\Psi[X], \qquad {\bf B}[X], \qquad {\mathbb D}[X], \qquad {\mathbb V}[X].$$ Note that $\Psi[X]$ and ${\bf B}[X]$ are defined by , while ${\mathbb D}[X]$ and ${\mathbb V}[X]$ are given by[^15] \[bbDV\] $${\mathbb D}[X] = - {\cal M}^2\, {}^{(1)}\!T^0\!_0 + 3{\cal H}X^0, \qquad {\mathbb V}[X] = {\cal M}^2\, {\bf D}^{-2} {\bf D}^i\, {}^{(1)}\!T^0\!_i + X^0,$$ as follows from , ,  and . 
We can thus determine $X^0$ [*uniquely*]{} by requiring that one of these four variables be zero. These choices in fact correspond to four of the commonly used gauges in cosmological perturbation theory.[^16] \[gaugechoices\] Poisson gauge: $$\qquad {\bf B}[X_\mathrm{p}] = 0.$$ Uniform curvature gauge: $$\label{X_c} \qquad \Psi[X_\mathrm{c}] = 0.$$ Total matter gauge: $$\label{X_V} \qquad\,\, \mathbb{V}[X_\mathrm{v}] = 0.$$ Uniform density gauge: $$\label{X_rho} \qquad\,\,\, \mathbb{D}[X_\rho] = 0.$$ Determining $X^0$ in this way does in fact satisfy condition , $\Delta X^0=\xi^0$. This has been verified for $X_{\mathrm p}$ and $X_{\mathrm c}$ in UW (see equation (26)). For the other two cases, we need the transformation laws: \[trans\] $$\Delta\left({}^{(1)}\!T^0\!_0\right) = 3{\cal H}{\cal M}^{-2}\, \xi^0, \qquad \Delta\left({}^{(1)}\!T^0\!_i\right) = -{\cal M}^{-2}\, {\bf D}_i \xi^0,$$ which are a consequence of .[^17] Condition  now follows immediately from , ,  and . In practice, we will not use the explicit expressions for $X^0$ that are defined implicitly by equations . Instead, in order to be able to relate gauge invariants associated with different choices of gauge field we introduce a set of [*transition rules*]{}. Let $X^0_{\bullet}$ be a specific choice of the temporal gauge field and let $X^0$ be an arbitrary choice. The difference $$\label{Zbullet1} {\bf Z}^0_\bullet[X] := X^0_\bullet - X^0,$$ is gauge-invariant on account of . The desired transition rules are as follows: \[transition\] $$\begin{alignedat}{2} \Psi[X] &= \Psi[X_\bullet] - {\cal H}{\bf Z}^0_\bullet, &\qquad {\bf B}[X] &= {\bf B}[X_\bullet] - {\bf Z}^0_\bullet, \\ {\mathbb D}[X] &= {\mathbb D}[X_\bullet] - 3{\cal H}{\bf Z}^0_\bullet, &\qquad {\mathbb V}[X] &= {\mathbb V}[X_\bullet] - {\bf Z}^0_\bullet. \end{alignedat}$$ The rules in the first line follow immediately from , while those in the second line are a consequence of ,  and . 
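To illustrate how these transformation laws are used, consider the total matter gauge: solving ${\mathbb V}[X_{\mathrm v}] = 0$ for the temporal gauge field and applying the second transformation law gives (a sketch; ${\cal M}$, being a background scalar, passes through the spatial derivatives):

```latex
X^{0}_{\mathrm v}
  = -\,\mathcal{M}^{2}\,\mathbf{D}^{-2}\mathbf{D}^{i}\,{}^{(1)}\!T^{0}{}_{i}\,,
\qquad
\Delta X^{0}_{\mathrm v}
  = -\,\mathcal{M}^{2}\,\mathbf{D}^{-2}\mathbf{D}^{i}
     \left(-\,\mathcal{M}^{-2}\,\mathbf{D}_{i}\,\xi^{0}\right)
  = \mathbf{D}^{-2}\mathbf{D}^{2}\,\xi^{0}
  = \xi^{0},
```

which is precisely the condition $\Delta X^0 = \xi^0$.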
By inspection of  we see that the following linear combinations of the variables  are [*independent of the choice of $X$*]{}: \[indep\_X\] $$\begin{alignedat}{2} [\Psi, {\bf B}] &:= \Psi[X] - {\cal H}{\bf B}[X], &\qquad [\Psi, {\mathbb V}] &:= \Psi[X] - {\cal H}{\mathbb V}[X],\\ [{\mathbb V}, {\bf B}] &:= {\mathbb V}[X] - {\bf B}[X], &\qquad [{\mathbb D}, {\mathbb V}] &:= {\mathbb D}[X] - 3{\cal H}{\mathbb V}[X],\\ [{\mathbb D}, {\bf B}] &:= {\mathbb D}[X] - 3{\cal H}{\bf B}[X], &\qquad [{\mathbb D}, \Psi] &:= {\mathbb D}[X] - 3\Psi[X]. \end{alignedat}$$ We can thus substitute two different choices of $X^0$ into any of the $X$-independent expressions in  and equate the results, thereby relating different gauge invariants. For example, if we first choose $X^0 = X^0_{\mathrm c}$ in $[{\mathbb D},\Psi]$ and then keep $X^0$ arbitrary as the second choice, we obtain $${\mathbb D}[X_{\mathrm c}] = {\mathbb D}[X] - 3\Psi[X] ,$$ on account of . If we set $X^0 = X^0_{\rho}$ in this equation it follows that \[DcPsirho\] $${\mathbb D}[X_{\mathrm c}] = - 3\Psi[X_{\rho}] ,$$ on account of . The gauge invariant $\Phi[X]$ is on a different footing from the gauge invariants  since it depends on the derivative of $X^0$ through equation . Thus requiring $\Phi[X]=0$ does not determine $X^0$ uniquely and hence does not lead to a fully fixed gauge[^18]. Nevertheless, $\Phi[X]$ does have a well-defined transition rule analogous to , namely \[transition2\] $$\Phi[X] = \Phi[X_\bullet] + (\partial_\eta + {\cal H}){\bf Z}^0_\bullet[X],$$ as follows from . By comparing  with  one can construct $X$-independent linear combinations of $\Phi[X]$ and the variables in , analogous to . For example, \[indep\_X2\] $$[\Phi, {\mathbb V}] := \Phi[X] + (\partial_\eta + {\cal H}){\mathbb V}[X],$$ is independent of $X$. There are three other expressions linking $\Phi[X]$ with $\Psi[X], {\bf B}[X]$ and ${\mathbb D}[X]$ that can be written if needed. If we set $X=X_{\mathrm v}$ in  and use  we obtain \[Phi\_V\] $$\Phi[X_{\mathrm v}] = \Phi[X] + (\partial_\eta + {\cal H}){\mathbb V}[X],$$ which we will use later. 
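As a further worked example, evaluating the $X$-independent combination $[\Psi,{\bf B}] := \Psi[X] - {\cal H}{\bf B}[X]$ with the two choices $X^0 = X^0_{\mathrm p}$ and $X^0 = X^0_{\mathrm c}$, and using ${\bf B}[X_{\mathrm p}] = 0$ and $\Psi[X_{\mathrm c}] = 0$, links the Poisson and uniform curvature gauge invariants (a sketch in our notation):

```latex
[\Psi,\mathbf{B}] = \Psi[X_{\mathrm p}] - \mathcal{H}\,\mathbf{B}[X_{\mathrm p}]
                  = \Psi[X_{\mathrm p}],
\qquad
[\Psi,\mathbf{B}] = \Psi[X_{\mathrm c}] - \mathcal{H}\,\mathbf{B}[X_{\mathrm c}]
                  = -\,\mathcal{H}\,\mathbf{B}[X_{\mathrm c}],
```

so that $\Psi[X_{\mathrm p}] = -\,{\cal H}\,{\bf B}[X_{\mathrm c}]$.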
To conclude this section we note that the gauge invariants and $X$-independent combinations that we have introduced do not exhaust all possibilities, but do serve to illustrate an efficient way of defining gauge invariants and determining their inter-relationships, which constitutes one of the main results of this paper. A further example arises in appendix \[app:1+3metric\], where we make use of another gauge invariant, namely the linear perturbation of the Hubble scalar of a timelike reference congruence, denoted by ${\bf H}[X]$. In working with this gauge invariant we find it necessary to introduce an $X$-independent combination involving three gauge invariants (see equation ). ### Notation {#notation .unnumbered} Our general notation for dimensionless gauge invariants is exemplified by $\Psi[X_\mathrm{c}]$ and $\mathbb{V}[X_{\mathrm p}]$: a capital letter, a bold face letter, or a special font (e.g. $\mathbb{V}$, which denotes inertial mass-density normalization) replaces the symbol for the associated gauge-variant variable, with the choice of the gauge vector field indicated by a subscript on the symbol $X$. For convenience we will often simplify the notation by setting $\Psi[X_\bullet]=\Psi_\bullet$, [*etc.*]{}. For some of the commonly used gauge invariants we will use unsubscripted symbols: \[gi\_notation\] $$\begin{alignedat}{3} \Phi &:= \Phi[X_{\mathrm p}], &\qquad \Psi &:= \Psi[X_{\mathrm p}], &\qquad {\mathbb V} &:= {\mathbb V}[X_{\mathrm p}],\\ {\bf A} &:= \Phi[X_{\mathrm c}], &\qquad {\bf B} &:= {\bf B}[X_{\mathrm c}], &\qquad {\mathbb D} &:= {\mathbb D}[X_{\mathrm v}]. \end{alignedat}$$ Structure of the linearized governing equations {#sec:equations} ================================================ In this section we give different forms for the governing equations for scalar perturbations, first using the metric gauge invariants as basic variables, and then using the stress-energy gauge invariants. 
Linearized Einstein field equations {#sec:lin_einst} ----------------------------------- As shown in UW there are two natural choices of intrinsic *metric* gauge invariants when formulating the linearized Einstein equations for scalar perturbations, the [*uniform curvature*]{} gauge invariants and the [*Poisson*]{} gauge invariants. As in UW we introduce the geometric background scalars ${\cal A}_G$ and ${\cal C}_G^2$, with ${\cal C}_G^2$ defined in terms of the derivative of ${\cal A}_G$: \[A\_G\] $${\cal A}_G := 2\left(-{\cal H}' + {\cal H}^2 + K\right), \qquad {\cal A}_G' = -(1 + 3{\cal C}_G^2)\,{\cal H}{\cal A}_G ,$$ (see UW, equation (42)). With ${\cal A}_T$ and ${\cal C}_T^2$ defined by , the background Einstein equations imply that ${\mathcal{A}}_G = {\mathcal{A}}_T$ and ${\cal C}_G^2 = {\cal C}_T^2$. We denote their common values by ${\mathcal{A}}$ and ${\cal C}^2$: \[cal\_C\] $${\mathcal{A}} = {\mathcal{A}}_G = {\mathcal{A}}_T, \qquad {\cal C}^2 = {\cal C}_G^2 = {\cal C}_T^2.$$ ### The uniform curvature formulation {#the-uniform-curvature-formulation .unnumbered} The governing equations in the uniform curvature formulation are[^19]: \[scalar\_eq\_curv\] $$\begin{aligned} {2} {\mathcal{L}}_B{\bf B} + {\bf A} &=&\,\, - \Pi\qquad &= - {\mathcal{A}}_T\bar{\Pi}\,\label{bfB_evol}\\ {\mathcal{H}}\!\left({\mathcal{L}}_A{\bf A} + {\cal C}_G^2 {\bf D}^2{\bf B}\right) &=&\,\, {{\textstyle{1\over2}}}\Gamma + {{\textstyle{1\over3}}}{\bf D}^2\Pi &= {\mathcal{A}}_T({{\textstyle{1\over2}}}\bar{\Gamma} + {{\textstyle{1\over3}}}{\bf D}^2\bar{\Pi}), \label{Phicurv_evol}\\ {\cal H}\!\left({\bf D}^2 + 3K\right){\bf B} &=&\,\, - {{\textstyle{1\over2}}}\Delta \qquad &= - {{\textstyle{1\over2}}}{\mathcal{A}}_T\mathbb{D}, \label{Poisson_C} \\ {\mathcal{H}}{\bf A} + ({{\textstyle{1\over2}}}{\mathcal{A}}_G - K){\bf B} &=&\,\, - {{\textstyle{1\over2}}} V \qquad &= - {{\textstyle{1\over2}}}{\mathcal{A}}_T\mathbb{V}, \label{V_F}\end{aligned}$$ where the first order differential operators ${\bf{\cal L}}_A$ and ${\bf{\cal L}}_B$ are defined by \[L\_B\] $${\cal L}_A := \partial_\eta + {\cal B}{\cal H}, \qquad {\cal L}_B := \partial_\eta + 2{\cal H},$$ with $${\cal B} := \frac{2{\cal H}'}{{\cal H}^2} + 1 + 3{\cal C}_G^2 .$$
For future reference we note that \[cB\] $${\cal B}{\cal H} = -\left(\frac{{\cal A}_G}{{\cal H}^2}\right)^{\!-1} \left(\frac{{\cal A}_G}{{\cal H}^2}\right)^{\!\prime},$$ as follows from . ### The Poisson formulation {#the-poisson-formulation .unnumbered} The governing equations in the Poisson formulation are (UW, equations (54)): \[scalar\_eq\_Poisson\] $$\begin{aligned} {2} \Psi - \Phi &=&\,\, \Pi\qquad\qquad &= {\mathcal{A}}_T\bar{\Pi}, \label{phi} \\ \left({\bf {\cal L}} - {\cal C}_G^2{\bf D}^2\right)\!\Psi &=&\,\, {{\textstyle{1\over2}}}\Gamma + \left({{\textstyle{1\over3}}}{\bf D}^2 + {\cal H}{\mathcal{L}}_A \right)\!\Pi &= {\mathcal{A}}_T\!\left({{\textstyle{1\over2}}}\bar{\Gamma} + \!\left({{\textstyle{1\over3}}}{\bf D}^2 + {\cal H}\partial_\eta +2{\mathcal{H}}'\right)\!\bar{\Pi}\right), \label{bardeen} \\ ({\bf D}^2 + 3K)\Psi &=& \,\, {{\textstyle{1\over2}}} \Delta \qquad\qquad &= {{\textstyle{1\over2}}}{\mathcal{A}}_T\mathbb{D}, \label{Poisson_P} \\ \partial_{\eta}\Psi+ {\cal H}\Phi &=&\,\, - {{\textstyle{1\over2}}}{V} \qquad\qquad &= -{{\textstyle{1\over2}}}{\mathcal{A}}_T\mathbb{V}, \label{V_P}\end{aligned}$$ where the second order differential operator ${\bf{\cal L}}$ is defined by \[factorL\_s\] $${\bf {\cal L}}(\,\cdot\,) := {\cal H}\,{\mathcal{L}}_A{\mathcal{L}}_B\!\left({\cal H}^{-1}\,\cdot\,\right),$$ or equivalently \[L\_s\] $${\bf {\cal L}} = \partial_\eta^2 + 3(1 + {\cal C}_G^2){\cal H}\partial_\eta + {\cal H}^2{\cal B} - (1 + 3{\cal C}_G^2)K,$$ (UW, equation (56)). For a discussion of these two systems of governing equations, and the ways in which they differ, we refer to UW, section 3.2, following equation (56). 
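The factorized and expanded forms of ${\bf{\cal L}}$ can be compared symbolically. The sketch below (sympy, with our own symbol names) takes ${\cal L}_A = \partial_\eta + {\cal B}{\cal H}$, ${\cal L}_B = \partial_\eta + 2{\cal H}$ and ${\cal B} = 2{\cal H}'/{\cal H}^2 + 1 + 3{\cal C}_G^2$ as stated above, eliminates ${\cal C}_G^2$ via the background identity ${\cal A}_G' = -(1+3{\cal C}_G^2){\cal H}{\cal A}_G$, and confirms that ${\cal H}{\cal L}_A{\cal L}_B({\cal H}^{-1}f)$ reproduces the expanded operator for an arbitrary scale factor:

```python
import sympy as sp

eta, K = sp.symbols('eta K')
a = sp.Function('a', positive=True)(eta)
f = sp.Function('f')(eta)

H = sp.diff(a, eta) / a                          # calH = a'/a
AG = 2*(-sp.diff(H, eta) + H**2 + K)             # calA_G
C2 = (-sp.diff(AG, eta)/(H*AG) - 1) / 3          # calC_G^2 from A_G' = -(1+3C^2) H A_G
B = 2*sp.diff(H, eta)/H**2 + 1 + 3*C2            # calB

LB = lambda u: sp.diff(u, eta) + 2*H*u           # L_B = d_eta + 2 calH
LA = lambda u: sp.diff(u, eta) + B*H*u           # L_A = d_eta + calB calH

lhs = H * LA(LB(f / H))                          # factorized operator acting on f
rhs = (sp.diff(f, eta, 2) + 3*(1 + C2)*H*sp.diff(f, eta)
       + (H**2*B - (1 + 3*C2)*K)*f)              # expanded form of L
print(sp.simplify(lhs - rhs))                    # 0
```

The cancellation of the zeroth order terms relies on the background identity defining ${\cal C}_G^2$; without it the two forms would not agree.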
Linearized conservation equations, without Einstein’s field equations {#sec:lin_cons} --------------------------------------------------------------------- As shown in Appendix \[app:derconserved\], linearizing the conservation law $\bna\!_b T^b\!_a = 0$ leads to the following gauge-invariant equations: \[cons\_X\] $$\begin{aligned} \partial_\eta(\mathbb{D}[X] - 3\Psi[X]) + {\bf D}^2(\mathbb{V}[X] - {\bf B}[X]) &= - 3{\cal H}\bar{\Gamma},\label{cons0} \\ (\partial_\eta + {\cal H})\mathbb{V}[X] + \Phi[X] + {\mathcal{C}}_T^2\mathbb{D} &= -\bar{\Gamma} - {\bar{\Xi}} ,\label{consi}\end{aligned}$$ where \[Xi\] $${\bar{\Xi}} := {{\textstyle{2\over3}}}({\bf D}^2 + 3K){\bar \Pi},$$ and $\mathbb{D}[X], \mathbb{V}[X], \bar{\Gamma}$ and $\bar{\Pi}$ are defined by equations  and . These equations are valid for any choice of temporal gauge field $X^0$. Referring to  and  we recognize the three groups of terms on the left side as the $X$-independent expressions $[{\mathbb D}, \Psi]$, $[{\mathbb V}, {\bf B}]$ and $[\Phi, {\mathbb V}]$ in . We choose $X^0=X^0_{{\mathrm{c}}}$ and $X^0=X^0_{\mathrm p}$ in the first two, and $X^0=X^0_{\mathrm v}$ in the third, and use . Equations  then assume the concise form $$\begin{aligned} \partial_\eta\mathbb{D}_\mathrm{c} + {\bf D}^2\mathbb{V} &= - 3{\cal H}\bar{\Gamma} ,\label{Dcevol}\\ \Phi_\mathrm{v} + {\mathcal{C}}_T^2\mathbb{D} &= -\bar{\Gamma} - {\bar{\Xi}},\end{aligned}$$ where $\mathbb{D}_\mathrm{c}=\mathbb{D}[X_\mathrm{c}]$, ${\mathbb V} = {\mathbb V}[X_{\mathrm p}]$, $\mathbb{D} = \mathbb{D}[X_\mathrm{v}]$ and $\Phi_\mathrm{v} = \Phi[X_\mathrm{v}]$, in accordance with our convention for labeling gauge invariants. In certain circumstances, the first equation can be interpreted as a conservation law for $\mathbb{D}_\mathrm{c}$, as will be discussed in section \[sec:cons\]. The second equation[^20] shows that for a barotropic perfect fluid $\Phi_\mathrm{v}$ is proportional to $\mathbb{D}$, and is in fact zero for dust. 
One would like to use equations  to obtain a system of evolution equations for the stress-energy gauge invariants $ \mathbb{D}[X]$ and $ \mathbb{V}[X]$, for some choice of the gauge field $X$. This is not possible due to the presence of the metric variables $\partial_\eta \Psi[X]$ and $\Phi[X]$. However, in the case that the stress-energy tensor is the total stress-energy tensor one can use the linearized Einstein equations to eliminate these terms and achieve the desired goal, as we will show in section \[sec:cons\_plus\_einstein\]. On the other hand equations  are valid for each (non-interacting) individual stress-energy tensor of a multi-component source, and as such they form a convenient starting point for the derivation of a simple system of governing equations for scalar perturbations of such a source. We will derive these equations in section \[subsec:multi\]. Linearized conservation equations in conjunction\ with Einstein’s field equations {#sec:cons_plus_einstein} ------------------------------------------------- We now use the results of sections \[sec:lin\_einst\] and \[sec:lin\_cons\] to derive a system of governing equations in the form of a first order (in time) system of partial differential equations with the stress-energy gauge invariants as basic variables. Choose $X^0 = X^0_{\mathrm p}$ in , and eliminate ${\mathbb D}_{\mathrm p}$ using ${\mathbb D} = {\mathbb D}_{\mathrm p} - 3{\mathcal{H}}{\mathbb V}$, which is obtained from the $X$-independent invariant $[{\mathbb D}, {\mathbb V}]$ in . The resulting equations are \[conservedcompX2\] $$\begin{aligned} \partial_\eta(\mathbb{D} + 3{\mathcal{H}}{\mathbb V}) - 3\partial_\eta \Psi + {\bf D}^2\mathbb{V} &= - 3{\cal H}\bar{\Gamma},\label{cons02} \\ (\partial_\eta + {\cal H})\mathbb{V} + \Phi + {\mathcal{C}}_T^2\mathbb{D} &= -\bar{\Gamma} - {\bar{\Xi}}, \label{DV2}\end{aligned}$$ using the notation . 
Subtracting $3{\mathcal{H}}$ times equation  from equation  and rearranging, we obtain $$(\partial_\eta - 3{\mathcal{H}}{\mathcal{C}}_T^2)\mathbb{D} + ({\bf D}^2+3K)\mathbb{V} - 3(\partial_\eta\Psi + {\mathcal{H}}\Phi + {{\textstyle{1\over2}}}{\mathcal{A}}_G\mathbb{V}) = 3{\mathcal{H}}{\bar{\Xi}},$$ where ${\mathcal{A}}_G$ is given by . If the stress-energy tensor is the total stress-energy tensor, and if we impose Einstein’s field equation  and the background field equation ${\mathcal{A}}_G={\mathcal{A}}_T$, then the above equation simplifies to $$\label{DV1} (\partial_\eta - 3{\mathcal{H}}{\mathcal{C}}_T^2)\mathbb{D} + ({\bf D}^2+3K)\mathbb{V} = 3{\mathcal{H}}{\bar{\Xi}} .$$ Equations  and  form a coupled first order system of evolution equations for $\mathbb{D}$ and $\mathbb{V}$. However due to the appearance of the metric potential $\Phi$ the system is not closed. We can remedy this deficiency by applying the operator ${\bf D}^2 + 3K$ to  and using \[Bbb\_Z\] $${\mathbb Z} := ({\bf D}^2 + 3K){\mathbb V},$$ as a new variable to replace ${\mathbb V}$ in the system. On using the Einstein equations  and , which yield $$({\bf D}^2 + 3K)\Phi = ({\bf D}^2 + 3K)\left(\Psi - {\mathcal{A}}_T{\bar \Pi}\right) = {{\textstyle{1\over2}}}{\mathcal{A}}_T\left({\mathbb D} - 3{\bar{\Xi}}\right),$$ equations ,  and $({\bf D}^2 + 3K)(\eqref{DV2})$ result in \[DZ\_evol1\] $$\begin{aligned} (\partial_\eta - 3{\mathcal{C}}^2_T{\mathcal{H}}){\mathbb D} + {\mathbb Z} &= 3{\mathcal{H}}{\bar{\Xi}}, \label{D_evol1}\\ \left(\partial_\eta + {\cal H}\right) {\mathbb Z} + \left({{\textstyle{1\over2}}}{\mathcal{A}}+ {\cal C}_T^2({\bf D}^2 + 3K)\right)\!{\mathbb D} &= - ({\bf D}^2 + 3K)(\bar{\Gamma} + {\bar{\Xi}}) + {{\textstyle{3\over2}}}{\mathcal{A}}{\bar{\Xi}}, \label{Z_evol1}\end{aligned}$$ where ${\mathcal{A}}\equiv{\mathcal{A}}_G={\mathcal{A}}_T$. For the reader’s convenience we note that the variables in these equations are defined by equations , ,  and . Equations  constitute one of the main results of this paper. 
They form a coupled system of first order (in time) partial differential equations for $({\mathbb D}, {\mathbb Z})$, assuming that the stress-energy terms $\bar{\Gamma}$ and ${\bar{\Xi}}$ are given. They determine the behaviour of the scalar mode of linear perturbations of an FL cosmology with arbitrary stress-energy content. The structure of this system is similar to the structure of the system of evolution equations  and  for the uniform curvature metric gauge invariants ${\bf A}$ and ${\bf B}$, and can be derived from them as follows. First use  to express ${\bf A}$ in terms of ${\mathbb V}$. Then apply the operator ${\bf D}^2 + 3K$ to both equations and use  to express $({\bf D}^2 + 3K){\bf B}$ in terms of ${\mathcal{A}}_T {\mathbb D}$, after which some obvious manipulations lead to equations . ### The evolution equation for ${\mathbb D}$ {#the-evolution-equation-for-mathbb-d .unnumbered} By eliminating ${\mathbb Z}$ from equations  one can obtain a second order evolution equation for the gauge-invariant density perturbation ${\mathbb D}$. We apply the operator $\partial_\eta + {\mathcal{H}}$ to the first of equations  and use the second equation to eliminate ${\mathbb Z}$. The resulting equation can be written in the form \[evol\_D\] $$\left({\bf {\cal L}}_{\mathbb D} - {\cal C}^2 {\bf D}^2\right)\!{\mathbb D} = 2({\bf D}^2 + 3K)\!\left({{\textstyle{1\over2}}}{\bar \Gamma} + \left({{\textstyle{1\over3}}} {\bf D}^2 + {\cal H}\partial_\eta + 2{\cal H}' \right)\!{\bar \Pi}\right) ,$$ where \[L\_D\] $${\bf {\cal L}}_{\mathbb D} := \partial_\eta^2 + (1 - 3{\cal C}^2){\cal H}\partial_\eta + (1 - 3{\cal C}^2){\cal H}' - (1 + 3{\cal C}^2)({\cal H}^2 + K) - 3{\cal H}({\cal C}^2)' .$$ Equation  can also be derived from the governing equations  in Poisson form. We apply ${\bf D}^2 + 3K$ to  and use  to relate $({\bf D}^2 + 3K)\Psi$ to ${\mathbb D}$. By comparing the resulting evolution equation with , we can conclude that the operator ${\bf {\cal L}}_{\mathbb D}$ is related to the operator ${\bf {\cal L}}$ according to \[Lbullet\_A\] $${\bf {\cal L}}({\mathcal{A}}\,\cdot\,) = {\mathcal{A}}\, {\bf {\cal L}}_{\mathbb D}(\,\cdot\,) .$$ This result can also be verified by direct calculation. 
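The direct calculation can be delegated to a computer algebra system. The sketch below (sympy, our own symbol names) spells out both operators explicitly, taking ${\bf{\cal L}} = \partial_\eta^2 + 3(1+{\cal C}^2){\cal H}\partial_\eta + {\cal H}^2{\cal B} - (1+3{\cal C}^2)K$ with ${\cal B} = 2{\cal H}'/{\cal H}^2 + 1 + 3{\cal C}^2$, and ${\bf{\cal L}}_{\mathbb D}$ as above, with ${\cal C}^2$ eliminated via ${\cal A}' = -(1+3{\cal C}^2){\cal H}{\cal A}$:

```python
import sympy as sp

eta, K = sp.symbols('eta K')
a = sp.Function('a', positive=True)(eta)
f = sp.Function('f')(eta)

H = sp.diff(a, eta) / a                          # calH
A = 2*(-sp.diff(H, eta) + H**2 + K)              # calA
C2 = (-sp.diff(A, eta)/(H*A) - 1) / 3            # calC^2 from A' = -(1+3C^2) H A
B = 2*sp.diff(H, eta)/H**2 + 1 + 3*C2            # calB

L = lambda u: (sp.diff(u, eta, 2) + 3*(1 + C2)*H*sp.diff(u, eta)
               + (H**2*B - (1 + 3*C2)*K)*u)      # operator L
LD = lambda u: (sp.diff(u, eta, 2) + (1 - 3*C2)*H*sp.diff(u, eta)
                + ((1 - 3*C2)*sp.diff(H, eta) - (1 + 3*C2)*(H**2 + K)
                   - 3*H*sp.diff(C2, eta))*u)    # operator L_D

print(sp.simplify(L(A*f) - A*LD(f)))             # 0
```

The identity holds for an arbitrary scale factor, since only the background relation defining ${\cal C}^2$ enters.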
Equation  is a second order linear partial differential equation for ${\mathbb D}$, assuming that the stress-energy terms ${\bar \Gamma}$ and ${\bar \Pi}$ are given. It differs from other related equations in the literature, for example, Ellis [*et al*]{} (1990) (see their equation (48), with the coefficients given by equations (19) and (20)), and Hwang and Noh (1999) (see their equation (45)), since we have defined ${\mathbb D}$ by normalizing with ${\cal M}^{-2} = {}^{(0)}\!\rho + {}^{(0)}\!p$, while the usual practice is to use ${}^{(0)}\!\rho$. If the source is a perfect fluid with a linear equation of state and a cosmological constant, [*i.e.*]{} $$\rho = \rho_m + \Lambda, \qquad p = p_m - \Lambda, \qquad p_m = w\rho_m,$$ then the right side of  is zero, and one can use Einstein’s equations in the background model (UW equations (41)) to write the expression  in the form[^21] \[L\_A2\] $${\bf {\cal L}}_{\mathbb D} = \partial_\eta^2 + (1 - 3w){\cal H}\partial_\eta - {{\textstyle{1\over2}}}\left[ (1+3w)(1-w)\rho_m + 4w\Lambda\right]a^2 .$$ In this case  is compatible with those equations cited above. Governing equations for a multi-component source {#subsec:multi} ------------------------------------------------ The governing equations for a perturbed FL cosmology with a multi-component source were first derived by Kodama and Sasaki (1984), and subsequently considered by various authors including Hwang (1991), Dunsby [*et al*]{} (1992) and Durrer (2008). These authors, as is customary, use normalization with ${}^{(0)}\!\rho$ when defining the density perturbation. We have found that the derivation and the form of the governing equations is significantly simpler if one uses ${\cal M}$-normalization, as introduced in section \[sec:stress-energy\]. In this section we thus give a brief derivation of the relevant equations. We consider a multi-component source with $n$ separate stress-energy tensors denoted by $_{A}T^a\!_b,$ with $A = 1,\dots,n$, which sum to form the total stress-energy tensor: $$T^a\!_b = \sum_{A=1}^n {}_{A}T^a\!_b.$$
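The reduction of ${\bf{\cal L}}_{\mathbb D}$ quoted above for a linear equation of state can be checked by substituting the background Einstein equations, taken in the form ${\cal H}^2 + K = {{\textstyle{1\over3}}}a^2\,{}^{(0)}\!\rho$ and ${\cal H}' = -{{\textstyle{1\over6}}}a^2({}^{(0)}\!\rho + 3\,{}^{(0)}\!p)$ (our reading of UW equations (41)), into the zeroth order coefficient of ${\bf{\cal L}}_{\mathbb D}$, noting that $({\cal C}^2)' = 0$ when ${\cal C}^2 = w$ is constant. A symbolic sketch (sympy, our symbol names):

```python
import sympy as sp

w, Lam, rho_m, a2, K = sp.symbols('w Lambda rho_m a2 K')

rho = rho_m + Lam                        # total energy density
p = w*rho_m - Lam                        # total pressure, with p_m = w rho_m
H2 = sp.Rational(1, 3)*a2*rho - K        # H^2 from the Friedmann equation
Hp = -sp.Rational(1, 6)*a2*(rho + 3*p)   # H' from the background Einstein equations

# zeroth order coefficient of L_D for constant C^2 = w:
coeff = (1 - 3*w)*Hp - (1 + 3*w)*(H2 + K)
target = -sp.Rational(1, 2)*((1 + 3*w)*(1 - w)*rho_m + 4*w*Lam)*a2
print(sp.simplify(coeff - target))       # 0
```

Note that the curvature terms cancel, so the reduced operator contains $K$ only through the background solution for $a(\eta)$.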
\[T\_sum\] For simplicity we assume that the individual components are *non-interacting*. As shown in Appendix \[app:derconserved\], linearizing the conservation equation $\bna\!_b\, {}_{A}T^b\!_a = 0$ for an arbitrary component labeled $A$ leads to the following equations[^22]: \[conserv\_A\] $$\begin{aligned} \partial_\eta(\mathbb{D}_A[X] - 3\Psi[X]) + {\bf D}^2(\mathbb{V}_A[X] - {\bf B}[X]) &= - 3{\cal H}\bar{\Gamma}_A, \label{conserv1_A}\\ (\partial_\eta + {\cal H})\mathbb{V}_A[X] + \Phi[X] + {\mathcal{C}}_A^2\mathbb{D}_A &= -\bar{\Gamma}_A - {\bar{\Xi}}_A, \label{conserv2_A}\end{aligned}$$ where $${\bar{\Xi}}_A := {{\textstyle{2\over3}}}({\bf D}^2 + 3K){\bar \Pi}_A.$$ We assume that the gauge field $X$ does not depend on the labeling index. A quantity ${\mathbb F}_A$ associated with $_{A}T^a\!_b$ that satisfies $$\sum_{A=1}^n {\mathcal B}_A {\mathbb F}_A = {\mathbb F},$$ where $ {\mathbb F}$ is the corresponding quantity associated with $T^a\!_b$, will be called *additive*. Here the coefficients ${\mathcal B}_A$ are defined by $${\mathcal B}_A := \frac{{}^{(0)}\!\rho_A + {}^{(0)}\!p_A}{{}^{(0)}\!\rho\, + {}^{(0)}\!p}, \qquad \sum_{A=1}^n {\mathcal B}_A = 1.$$ We note that ${\mathbb D}_A[X]$, ${\mathbb V}_A[X]$, ${\mathbb D}_A$, ${\bar \Xi}_A$, ${\bar \Gamma}_A + {\mathcal{C}}_A^2 {\mathbb D}_A[X]$ and ${\mathcal{C}}_A^2$ are additive[^23]. In order to obtain a closed system of evolution equations we introduce the “difference variables” $${\mathbb D}_{AB} := {\mathbb D}_A[X] - {\mathbb D}_B[X], \qquad {\mathbb V}_{AB} := {\mathbb V}_A[X] - {\mathbb V}_B[X].$$ It follows from , ,  and  that ${\mathbb D}_{AB}$ and ${\mathbb V}_{AB}$ are $X$-independent. In order to obtain evolution equations for ${\mathbb D}_{AB}$ and ${\mathbb V}_{AB}$ we form the difference of two copies of equations , labeled $A$ and $B$. 
In this way we obtain the following equations[^24]: \[conserv\_AB\] $$\begin{aligned} \partial_\eta {\mathbb D}_{AB} + {\bf D}^2 {\mathbb V}_{AB} &= -3{\mathcal{H}}{\bar \Gamma}_{AB} ,\\ (\partial_\eta + {\mathcal{H}}) {\mathbb V}_{AB} \, + {\mathbb K}_{AB} + ({\mathcal{C}}_A^2 - {\mathcal{C}}_B^2) {\mathbb D} &= - {\bar \Gamma}_{AB} - {\bar \Xi}_{AB},\end{aligned}$$ where $$\begin{aligned} {\mathbb K}_{AB} &:= {\sum_{C=1}^n { {\mathcal B}_C \left({\mathcal{C}}_A^2({\mathbb D}_{AC} -3{\mathcal{H}}{\mathbb V}_{AC}) - {\mathcal{C}}_B^2 ({\mathbb D}_{BC} -3{\mathcal{H}}{\mathbb V}_{BC})\right)}}, \\ {\bar \Gamma}_{AB} &:= {\bar \Gamma}_A - {\bar \Gamma}_B, \qquad {\bar \Xi}_{AB} := {\bar \Xi}_A - {\bar \Xi}_B.\end{aligned}$$ Equations  do not form a closed system for ${\mathbb D}_{AB}$ and ${\mathbb V}_{AB}$, however, due to the appearance of the total intrinsic matter gauge invariant ${\mathbb D}$. The evolution equation  for ${\mathbb D}$ contains ${\bar \Gamma}$ and ${\bar \Xi}$ as source terms[^25], which have to be expressed in terms of ${\bar \Gamma}_{A}$, ${\bar \Xi}_{A}$ and ${\mathbb D}_{AB}$. The term ${\bar \Gamma}$ is not additive, whereas ${\bar \Gamma} + {\mathcal{C}}^2 {\mathbb D}[X]$ is, [*i.e.*]{} \[gamma\_sum\] $${\bar \Gamma} + {\mathcal{C}}^2\, {\mathbb D}[X] = {\sum_{A=1}^n}\, {\mathcal B}_A\left({\bar \Gamma}_A + {\mathcal{C}}_A^2\, {\mathbb D}_A[X]\right).$$ It follows that ${\bar \Gamma}$ can be expressed[^26] as a sum involving ${\bar \Gamma}_A$ and ${\mathbb D}_{AB}$: \[sum\] \[sum\_gamma\] $${\bar \Gamma} = {\sum_{A=1}^n}\, {\mathcal B}_A\, {\bar \Gamma}_A + {{\textstyle{1\over2}}} {\sum_{A,B=1}^n} {\mathcal B}_A {\mathcal B}_B\, ({\mathcal{C}}_A^2 - {\mathcal{C}}_B^2)\, {\mathbb D}_{AB}.$$ On the other hand, the term ${\bar \Xi}$ is additive: \[sum\_xi\] $${\bar \Xi} = {\sum_{A=1}^n}\, {\mathcal B}_A\, {\bar \Xi}_A.$$ In conclusion, equations  and the evolution equation  for ${\mathbb D}$, in conjunction with equations , form a closed system for ${\mathbb D}_{AB}, {\mathbb V}_{AB}$ and ${\mathbb D}$ that describes the scalar mode of a perturbed FL cosmology with a multi-component source[^27]. 
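The algebra behind the expression of $\bar\Gamma$ in terms of $\bar\Gamma_A$ and ${\mathbb D}_{AB}$ is the polynomial identity $\sum_A {\mathcal B}_A {\mathcal{C}}_A^2 {\mathbb D}_A - {\mathcal{C}}^2 {\mathbb D} = \frac12\sum_{A,B} {\mathcal B}_A {\mathcal B}_B ({\mathcal{C}}_A^2 - {\mathcal{C}}_B^2) {\mathbb D}_{AB}$, which holds for any weights satisfying $\sum_A {\mathcal B}_A = 1$ together with the additivity relations ${\mathbb D} = \sum_A {\mathcal B}_A {\mathbb D}_A$ and ${\mathcal{C}}^2 = \sum_A {\mathcal B}_A {\mathcal{C}}_A^2$. A short symbolic check (a sketch; the variable names are ours, not the paper's):

```python
import sympy as sp

# Symbolic check of the identity used to express Gamma_bar as a sum over
# components: for weights B_A with sum(B_A) = 1,
#   sum_A B_A c_A D_A - (sum_A B_A c_A)(sum_A B_A D_A)
#     = (1/2) sum_{A,B} B_A B_B (c_A - c_B)(D_A - D_B),
# where c_A stands for the squared sound speeds C_A^2.
n = 3
B = sp.symbols('B1:%d' % (n + 1))
c = sp.symbols('c1:%d' % (n + 1))
D = sp.symbols('D1:%d' % (n + 1))

lhs = sum(B[A]*c[A]*D[A] for A in range(n)) \
    - sum(B[A]*c[A] for A in range(n))*sum(B[A]*D[A] for A in range(n))
rhs = sp.Rational(1, 2)*sum(B[A]*B[E]*(c[A] - c[E])*(D[A] - D[E])
                            for A in range(n) for E in range(n))

# Impose sum_A B_A = 1 by eliminating the last weight.
sub = {B[n - 1]: 1 - sum(B[:n - 1])}
print(sp.simplify((lhs - rhs).subs(sub)))   # expect 0
```

Expanding the double sum and using $\sum_A {\mathcal B}_A = 1$ reduces the right side to $\sum_A {\mathcal B}_A {\mathcal{C}}_A^2 {\mathbb D}_A - {\mathcal{C}}^2 {\mathbb D}$, which is exactly the combination appearing in the sum formula for $\bar\Gamma$.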
We note that all gauge invariants in these equations are $X$-independent. Gauge-fixing versus gauge-invariance ------------------------------------ To conclude this section we comment briefly on the two points of view as regards formulating the governing equations for scalar perturbations. In the gauge-fixing approach one chooses a gauge [*ab initio*]{}, in which case all the gauge invariants that appear in the governing equations are associated with this particular gauge. Recent references that use this traditional approach are Mukhanov (2005) and Weinberg (2008). In contrast, one can work with a variety of gauge invariants in the spirit of Bardeen (1980), in which case one has the flexibility to use gauge invariants associated with different gauges in formulating the governing equations. A notable example is the generalized Poisson equation , which relates the Bardeen potential $\Psi\equiv\Psi[X_{\mathrm p}]$ to the matter density gauge invariant ${\mathbb D}\equiv{\mathbb D}[X_{\mathrm v}]$. A variety of other examples occur in this paper, including the system of equations for ${\mathbb D}_{\mathrm v}$ and ${\mathbb Z}_{\mathrm p}$, equations  and  in the uniform curvature formulation, and the conservation law . Conserved quantities {#sec:cons} ==================== Two “conserved quantities” that are associated with scalar perturbations of FL have been defined in the literature. These quantities, often denoted by $\zeta$, satisfy an evolution equation of the form \[cons\] $$\partial_\eta\, \zeta_{\,\bullet} = {\bf D}^2\, {\bf C}_{\,\bullet} + {\mathcal{H}}\bar{\Gamma},$$ where ${\bf C}_{\,\bullet}$ is an expression involving the primary gauge invariants such as $\Psi$ or ${\mathbb V}$ and the background variables. 
This equation is referred to as a “conservation law”, since if spatial derivatives are negligible (“perturbations outside the horizon”) and if $\bar{\Gamma}$ is zero or negligible in some epoch, then  is approximated by $\partial_{\eta}\zeta_{ \,\bullet} = 0$, [*i.e.*]{} $\zeta_{ \,\bullet}$ is approximately constant in time during that epoch. Two of the evolution equations that we have presented in section \[sec:equations\], namely equations  and , can be written in the form  simply by multiplying by a suitable factor. The conserved quantity can then be identified by inspection. We consider each in turn. ### The conserved quantity $\zeta_\rho$ {#the-conserved-quantity-zeta_rho .unnumbered} The evolution equation  for ${\mathbb D}_\mathrm{c}$ has the form of a conservation law . We multiply by the numerical factor $-\frac13$ to agree with current convention, and comparison with  leads to the following definition of the first conserved quantity: \_ := -13 [D]{}\_. \[zeta\_rho\] Equation  assumes the form \[zeta\_evol1\] \_\_ = 13 [**D**]{}\^2 [V]{} + |. Our motivation for using the notation $\zeta_\rho$ is that on account of equation  we have \[zeta\_rho2\] \_= \_. This conservation law was apparently first given in a form closely related to the above by Wands *et al* (2000),[^28] who emphasized that it depends only on the conservation equation for the stress-energy tensor, *i.e.* it is independent of Einstein’s equations. They denoted our ${\zeta_\rho}$ by ${\zeta}$ and because of  they referred to it as “the curvature perturbation on uniform density surfaces.” The quantity $\zeta_\rho$ has its origins in the paper Bardeen, Steinhardt and Turner (1983), and was further studied from a different point of view by Brandenberger and Khan (1984)[^29]. A major step in understanding the significance of $\zeta_{\rho}$ was taken by Langlois and Vernizzi (2005). 
Motivated by the $1 + 3$ covariant approach, they showed that this quantity could be obtained as the linearization of an exact nonlinear evolution equation for a certain covector. This approach enabled them to extend the definition of $\zeta_{\rho}$ to second-order (and higher order) perturbations. We refer to their equations (20) and (25) for the general situation and note that their equations (41) and (42) correspond to our equations  and . ### The conserved quantity $\zeta_{\mathrm v}$ {#the-conserved-quantity-zeta_mathrm-v .unnumbered} The second conservation equation arises from the evolution equation  for ${\bf A}\equiv\Phi_{{\mathrm{c}}}$. Using  we can write the differential operator ${\bf{\cal L}}_A$ in the form: [**[L]{}**]{}\_A() = \_(). Thus on multiplying  by $2{\mathcal{H}}/{\mathcal{A}}_G$ we can write it in the form of a conservation law[^30] \[Phi\_c\_evol\] \_( \_[c]{} ) = 2[H]{}[**D**]{}\^2 ( [[C]{}\^2]{} + 13| ) + [H]{}|, where we have chosen to replace ${\bf B}$ by $\Psi$, using the relation $\Psi = - {\mathcal{H}}{\bf B}$. Comparing  with  leads to the following definition of the second conserved quantity: \_[v]{} := \_[c]{}. \[zeta\_v\] Equation , which is equivalent to , is the conservation equation for $\zeta_{\mathrm v}$. An immediate consequence of this equation is that *if the source is pressure-free matter plus possibly a cosmological constant, then $\zeta_{\mathrm v}$ is constant in time.* We can derive an alternate expression for the conserved quantity $\zeta_{\mathrm v}$ as follows. Using the relation $\Psi = -{\mathcal{H}}{\bf B}$ and the definition , the velocity equation  can be written in the form: \[zeta\_v1\] \_[v]{} = ( 1 - )- [V]{} . We use the $X$-independent gauge invariant $[\Psi,{\mathbb V}]$ in  to obtain \_ = - , which when inserted in  leads to \[zeta\_v2\] \_[v]{} = \_ - . 
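The constancy statement above can be illustrated symbolically. Under our stated assumptions we use the standard flat-space ($K=0$) form of the comoving curvature perturbation, $\zeta_{\mathrm v} = \Psi + \frac{2}{3(1+w)}\left({\mathcal{H}}^{-1}\Psi' + \Psi\right)$ (cf. Mukhanov (2005), equation (7.73)), and evaluate it on a matter-dominated background ($w=0$, $a \propto \eta^2$, ${\mathcal{H}} = 2/\eta$) with a constant Bardeen potential:

```python
import sympy as sp

# Sketch (our assumptions): flat matter-dominated background, w = 0,
# conformal Hubble H = 2/eta, Bardeen potential Psi constant in time.
# Evaluate the standard K=0 expression
#   zeta_v = Psi + (2/(3(1+w))) * (Psi'/H + Psi).
eta, Psi0 = sp.symbols('eta Psi0', positive=True)
w = 0
H = 2/eta
Psi = Psi0                                  # constant Bardeen potential

zeta_v = Psi + sp.Rational(2, 3)/(1 + w)*(sp.diff(Psi, eta)/H + Psi)
print(sp.simplify(zeta_v))                  # expect 5*Psi0/3
```

The result $\zeta_{\mathrm v} = \frac53\Psi_0$ is indeed a temporal constant, consistent with the statement that $\zeta_{\mathrm v}$ is constant for pressure-free matter.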
The quantity $\zeta_{\mathrm v}$ is most commonly used when the background curvature is zero ($K = 0$) in which case \[zeta\_v3\] $$\zeta_{\mathrm v}\,\big|_{K=0} = \Psi_{\mathrm v}.$$ This expression motivates our use of the notation $\zeta_{\mathrm v}$. Malik and Wands (2009) use the notation $\cal R$ for $\zeta_{\mathrm v}$ in the case $K=0$ (see equation (7.46)), and refer to it as “the curvature perturbation in the comoving gauge.” This quantity has its origin in the paper Bardeen (1980) (see equations (5.19) and (5.21)). There is another commonly used expression for $\zeta_{\mathrm v}$, in terms of the Bardeen potential $\Psi$ and its time derivative, which we can quickly derive. On solving  for ${\bf A}\equiv \Phi_{{\mathrm{c}}}$, the definition  yields \_[v]{} = ([**[L]{}**]{}\_B() - ), where we used ${\mathcal{H}}{\bf B} = - \Psi$ and where ${\bf{\cal L}}_B$ is defined by . We expand the operator and use  to express ${\mathcal{H}}'$ in terms of ${\mathcal{A}}_G$. On specializing to flat FL ([*i.e.*]{} $K=0$) and setting $\Pi=0$ we obtain the expression \[zeta\_familiar\] $$\zeta_{\mathrm v}\,\big|_{K=0} = \Psi + \frac{2}{3(1+w)}\left({\mathcal{H}}^{-1}\Psi' + \Psi\right),$$ where $w:= {}^{(0)}\!p/^{(0)}\!\rho$. Here we have used the background field equation ${\mathcal{A}}_G = {\mathcal{A}}_T$, and the fact that ${\mathcal{A}}_T = 3(1 + w){\mathcal{H}}^2$ when $K=0$, as follows from UW (see equations (41a) and (42a)). The familiar expression  can be found, for example, in Mukhanov [*et al*]{} (1992), equation (5.23), and Mukhanov (2005), equation (7.73). We conclude this section by showing that $\zeta_\rho$ and $\zeta_{\mathrm v}$, despite their different origins, are closely related. On account of  and  \_ - \_[v]{} = \_- \_[v]{} + = -13 + , where we have used the $X$-independent invariant $[{\mathbb D},\Psi]$ in  to obtain the second equality. The generalized Poisson equation  can be used to eliminate $\mathbb{D}$ yielding \_ - \_[v]{} = - \^2 . 
\[3.3\] This equation suggests that *if spatial derivatives are negligible in some epoch, then* $\zeta_{\rho} \thickapprox \zeta_{\mathrm v}$ in that epoch. ### A coupled system for $(\zeta_{\mathrm v}, \Psi)$ {#a-coupled-system-for-zeta_mathrm-v-psi .unnumbered} The conservation equation  for $\zeta_{\mathrm v}$ is one of the linearized Einstein equations in uniform curvature form, namely, the evolution equation for ${\bf A} = \Phi_{{\mathrm{c}}}$. It is helpful to also write the evolution equation  for ${\bf B}$ in terms of $\zeta_{\mathrm v}$, replacing ${\bf B}$ by $\Psi$ using the relation $\Psi = - {\mathcal{H}}{\bf B}$. The resulting pair of equations has the following form: \[zeta\_psi\] $$\begin{aligned} \partial_\eta\left(\frac{x^2}{{\mathcal{H}}}\Psi \right) - {\mathcal F}\zeta_{\mathrm v} &= x^2 {\mathcal{A}}\bar{\Pi}, \\ \partial_\eta \zeta_{\mathrm v} - \frac{{\mathcal{C}}^2}{\mathcal F} {\bf D}^2\left(\frac{x^2}{{\mathcal{H}}} \Psi\right) &= {\mathcal{H}}\!\left(\bar{\Gamma} + {{\textstyle{2\over3}}}{\bf D}^2 \bar{\Pi} \right),\end{aligned}$$ where $x := a/a_*$ is the dimensionless scale factor, with $a_*$ being the scale factor at some reference time, and := . The system of equations  is a particularly useful form of the governing equations for scalar perturbations of FL. The second equation is the “conservation equation” for $\zeta_{\mathrm v}$, while the first equation enables one to express the Bardeen potential $\Psi$ as a quadrature, in situations in which $\zeta_{\mathrm v}$ is a temporal constant (or can be treated as such) and $\bar{\Pi}$ is negligible: $$\Psi(\eta, x^i) = \frac{{\mathcal{H}}}{x^2}\left( \zeta_{\mathrm v}(x^i) \int_{\eta_0}^{\eta} {\mathcal F}\, d\eta + C_{-}(x^i)\right) . $$ 
\[psi\] In particular this equation gives the [*exact*]{} solution where the source is pressure-free matter and, possibly, a cosmological constant (since then $\zeta_{\mathrm v}$ is a temporal constant), and the [*approximate*]{} solution in the long wavelength limit, in both cases without restriction on the background spatial curvature $K$. The $1+3$ gauge-invariant approach {#sec:1+3} ================================== In this section we first give a concise derivation of the governing equations for linear scalar perturbations in the $1+3$ approach, combining the formulation of Bruni, Dunsby and Ellis (1992a) (hereafter referred to as BDE)[^31] with our overall strategy of creating dimensionless quantities by normalizing with $\cal M$ and $a$. We then introduce the dependence of the $1+3$ variables and differential operators on a perturbation parameter $\epsilon$, which enables us to relate the variables and governing equations of the two approaches in a precise manner. In this way we set the stage for extending the $1+3$ approach to second order. The $1+3$ gauge-invariant approach to cosmological perturbations is based on choosing a preferred unit timelike vector field $u^a$ and decomposing the stress-energy tensor relative to this vector field: \[T\_ab\] T\^a\_b = (+ p)u\^a u\_b + p\^a\_b + (q\^au\_b + u\^aq\_b) + \^a\_b, where u\^a q\_b = 0, \^a\_a = 0, u\_a \^a\_b = 0. One distinguishes between physical and geometrical quantities which are non-zero in the background spacetime, namely $\rho, p$ and the Hubble scalar $H$ associated with $u^a$, and quantities which are zero in the background, such as the stress-energy quantities $q_a, \pi^a\!_b$, the shear of the preferred congruence, the Weyl curvature tensor and the spatial gradients of $\rho, p$ and $H$ orthogonal to $u^a$. The $1+3$ approach focusses on the latter quantities, which are gauge-invariant on account of the Stewart-Walker Lemma. 
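Before turning to the $1+3$ approach we note that the quadrature for $\Psi$ above can be checked symbolically: with $\bar\Pi = 0$ and $\zeta_{\mathrm v}$ a temporal constant, it satisfies $\partial_\eta(x^2\Psi/{\mathcal{H}}) = {\mathcal F}\zeta_{\mathrm v}$, the first equation of the $(\zeta_{\mathrm v},\Psi)$ system, for arbitrary background functions (a sketch; the function names are ours):

```python
import sympy as sp

# Sketch: with Pi_bar = 0 and zeta_v a temporal constant, the quadrature
#   Psi = (H/x^2) * (zeta_v * Integral(F, eta) + C)
# satisfies  d/deta (x^2 Psi / H) - F zeta_v = 0
# for arbitrary background functions x(eta), H(eta), F(eta).
eta, zeta_v, C = sp.symbols('eta zeta_v C')
x = sp.Function('x', positive=True)(eta)
H = sp.Function('H', positive=True)(eta)
F = sp.Function('F')(eta)

Psi = (H/x**2)*(zeta_v*sp.Integral(F, eta) + C)
residual = sp.diff(x**2*Psi/H, eta) - F*zeta_v
print(sp.simplify(residual))               # expect 0
```

The check is pure calculus: $x^2\Psi/{\mathcal{H}}$ collapses to $\zeta_{\mathrm v}\int{\mathcal F}\,d\eta + C$, whose $\eta$-derivative is ${\mathcal F}\zeta_{\mathrm v}$.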
Evolution equations ------------------- The spatial gradients of $\rho$ and $H$ describe the perturbation in a gauge-invariant way at the linear level. However, in order to extract the scalar mode of the perturbation it is necessary to form scalar quantities. We thus take the spatial divergence of these spatial gradients and form the [*dimensionless spatial Laplacian*]{}: \[DZ\_defn\] $$D:= \bigl(a^2\, {}^{(3)}{\bna}^2 \rho\bigr)\, {\cal M}^2, \qquad Z:= 3\, a^2\, {}^{(3)}{\bna}^2 (aH),$$ where \[laplacian\] $${}^{(3)}{\bna}\!_a := h_a\!^b\, {\bna}\!_b, \qquad {}^{(3)}{\bna}^2 := g^{ab}\, {}^{(3)}{\bna}\!_a\, {}^{(3)}{\bna}\!_b,$$ and $$h_a\!^b := \delta_a\!^b + u_a u^b.$$ Note that in introducing dimensionless variables we are normalizing the energy density $\rho$ with ${\cal M}^2$ and the geometric quantity $H$ with the background scale factor $a$. Likewise we normalize the geometric operator $^{(3)}{\bna}^2$ with $a^2$. The governing equations for the scalar mode take the form of a coupled system of first order (in time) partial differential equations for $D$ and $Z$. These equations arise from the energy conservation equation (the exact evolution equation for $\rho$) and the Raychaudhuri equation (the exact evolution equation for $H$). To derive the governing equations one simply applies the differential operator $a^2\,{^{(3)}{\bna}^2}$ to the linearized versions of the evolution equations for $\rho$ and $H$, which are obtained by dropping products of first order quantities. The linearized evolution equations for $D$ and $Z$, derived in Appendix \[app:1+3\], are as follows[^32]: \[1+3\_evol\] $$\begin{aligned} D' - 3{\mathcal{H}}{\cal C}_T^2 D + Z &= 3{\mathcal{H}}(\tilde{\Pi} + \tilde{ \Upsilon}) - {\tilde {\bf D}}^2 \tilde {Q}, \label{D_evol}\\ \!\!\!\!\!\! 
Z' + {\cal H}Z + \left[{{\textstyle{1\over2}}}{\mathcal{A}}+ {\cal C}_T^2({\tilde {\bf D}}^2 + 3K)\right]\!\!D &= - ( {\tilde {\bf D}}^2 + 3K)(\tilde{\Gamma} + \tilde{\Pi} + \tilde{\Upsilon}) + {{\textstyle{3\over2}}}{\mathcal{A}}(\tilde{\Pi} + \tilde{\Upsilon}),\label{Z_evol}\end{aligned}$$ In these equations the dimensionless operators $'$ and ${\tilde {\bf D}}^2$ are defined by \[operators1\] $$A' := a\, u^a\, {\bna}\!_a A, \qquad {\tilde {\bf D}}^2 A := a^2\, {}^{(3)}{\bna}^2 A,$$ where $A$ is a scalar. The source terms $\tilde {\Pi}, \tilde {Q}$ and $\tilde{\Upsilon}$ are first order dimensionless scalars formed by taking the spatial divergence of $q^a$ and $\pi^a\!_b$ after normalizing with ${\cal M}^2$: \[tildeQ\] $$\tilde{Q} := {\tilde {\bf D}}^a({\cal M}^2 q_a), \qquad \tilde{\Pi} := {\tilde {\bf D}}\!_a {\tilde {\bf D}}^b ({\cal M}^2 \pi^a\!_b), \qquad \tilde{\Upsilon} := \tilde{Q}' - (3{\cal C}_T^2 - 1)\, {\mathcal{H}}\tilde{Q},$$ where \[operators4\] $${\tilde {\bf D}}\!_a := a\, {}^{(3)}{\bna}\!_a.$$ The entropy perturbation ${\tilde \Gamma}$ is given by[^33] \[tildeGamma\] $${\tilde \Gamma} = P - {\cal C}_T^2 D, \qquad \text{where} \qquad P := {\tilde {\bf D}}^2 ({\cal M}^2 p).$$ We conclude this section by relating our approach to that of BDE[^34]. The variables $D$ and $Z$ differ from those introduced by BDE as regards the normalization of $\rho_m$ and $H$. Specifically, BDE define \[BDE1\] $$\Delta := a\, {}^{(3)}{\bna}^a\!\left(\frac{a\, {}^{(3)}{\bna}\!_a \rho_m}{\rho_m}\right), \qquad {\cal Z} := 3\, a\, {}^{(3)}{\bna}^a\bigl(a\, {}^{(3)}{\bna}\!_a H\bigr).$$ At the linear level, the factor $\rho_m$ in the denominator can be replaced by ${}^{(0)}\!\rho_m$. In the $1+3$ approach the scale factor is usually defined using the Hubble scalar $H$ of the preferred congruence $u^a$, according to $(u^a{\bna}\!_a\, a)/a= H$, in which case $^{(3)}{\bna}^a a\neq 0$. At the linear level, however, the factor of $a$ can be taken outside ${}^{(3)}{\bna}^a$, since the term $^{(3)}{\bna}^a a$ appears as a product with ${}^{(3)}{\bna}^a \rho_m$ or $^{(3)}{\bna}\!_a H$, and hence can be dropped. To avoid this complication we have chosen the scale factor $a$ to be the scale factor in the background model so that $^{(3)}{\bna}^a a= 0$. This choice also facilitates the link with the metric-based approach. 
In view of these remarks, the BDE variables  can be written in the form: $$\Delta = \frac{{\tilde {\bf D}}^2 \rho_m}{{}^{(0)}\!\rho_m}, \qquad {\cal Z} = 3\, {\tilde {\bf D}}^2 H,$$ which implies that the BDE variables are related to ours according to[^35] $$\Delta = (1+w)\, D, \qquad a\, {\cal Z} = Z,$$ since $1/{\cal M}^2 = (1+w){}^{(0)}\!\rho_m.$ Our evolution equations  for $D$ and $Z$ are equivalent to equations (68) and (69) for $\Delta$ and ${\cal Z}$ in BDE, but are simpler in form due to our use of dimensionless variables, in particular our use of ${\cal M}$-normalization[^36]. Relation with the metric-based approach --------------------------------------- Equations  are closely related to the governing equations in the form  for the variables ${\mathbb D}$ and ${\mathbb Z},$ that arise in the metric-based approach. Indeed a formal similarity is obvious on inspection. However, in order to relate the two sets of equations we have to regard each of the variables $D, Z, {\tilde Q}, {\tilde \Pi}$ and ${\tilde \Gamma}$ in  as being a function of the perturbation parameter $\epsilon$, which can be expanded in a power series of the form: \[D\_eps\] $$F(\epsilon) = {}^{(0)}\!F + \epsilon\, {}^{(1)}\!F + \dots.$$ Since each variable is zero in the background we have $^{(0)}\!F = 0$, while $^{(1)}\!F$ is the linear perturbation of $F$. We also need to consider the dependence of the differential operators on $\epsilon$, which is as follows: \[operators2\] $$(A')(\epsilon) = a\, u^a(\epsilon)\, {}^\epsilon{\bna}\!_a A(\epsilon), \qquad ({\tilde {\bf D}}^2 A)(\epsilon) = a^2\, h^{ab}(\epsilon)\, {}^\epsilon{\bna}\!_a\, {}^\epsilon{\bna}\!_b A(\epsilon).$$ Assuming that $A$ is a scalar such that $ {}^{(0)}\!A = 0,$ it follows that \[operators3\] $${}^{(1)}\!(A') = \partial_\eta\, {}^{(1)}\!A, \qquad {}^{(1)}\!({\tilde {\bf D}}^2 A) = {\bf D}^2\, {}^{(1)}\!A,$$ as is shown in Appendix \[app:1+3\]. If we now differentiate equations  with respect to $\epsilon$ and set $\epsilon=0$ the resulting equations have precisely the same form but with each variable replaced by its linear perturbation and each differential operator replaced by the corresponding zeroth order operator. We finally have to do a calculation using  to specifically relate the variables in the two sets of equations  and . 
The details are given in Appendix \[app:1+3\], where it is shown that: \[link\] $$\begin{aligned} {\bf D}^2{\mathbb D} &= {}^{(1)}\!D - 3{\mathcal{H}}{}^{(1)}\!\tilde {Q}, \label{link_D}\\ {\bf D}^2{\mathbb Z} &= {}^{(1)}\!Z + ({\bf D}^2 + 3K - {{\textstyle{3\over2}}}{\cal A}_G){}^{(1)}\!\tilde {Q}, \label{link_Z}\\ {\bf D}^2{\bar{\Xi}} &= {}^{(1)}\!{\tilde \Pi}, \label{link_Xi}\\ {\bf D}^2\bar{\Gamma} &= {}^{(1)}\!\tilde{\Gamma} .\end{aligned}$$ If we now apply the operator ${\bf D}^2$ to equations  and use equations  then we obtain precisely equations , with each variable replaced by its linear perturbation and each differential operator replaced by its zeroth order perturbation. Discussion {#sec:discuss} ========== In this paper we have presented an efficient way of defining dimensionless gauge invariants and determining their inter-relationships, which we have applied to give a unified account of the various ways of formulating the governing equations for scalar perturbations of FL cosmologies. In defining gauge invariants we use our version of Nakamura’s geometrical method, as described in UW (see section 2.1), which is based on specifying a so-called gauge field $X$, and normalizing so as to obtain dimensionless quantities. It turns out that the choice of the spatial part $X^i$ of the gauge field does not affect the form of the governing equations for linear perturbations given in UW and in the present paper.[^37] In these papers, however, we find it convenient to fix the spatial part $X^i$ of the gauge field according to , which leads to a simple form  for the metric gauge invariant ${\bf f}_{ab}[X]$. 
This in turn shortens the calculation of the Riemann gauge invariants in UW (see (B.23)) and of the gauge-invariant form of the divergence of the stress-energy tensor in the present paper, equation .[^38] The remaining gauge freedom is then described by the temporal part $X^0$, which we specify uniquely by requiring that one of the four basic gauge invariants $ \Psi[X], {\bf B}[X], {\mathbb D}[X]$ and ${\mathbb V}[X]$ be zero. This approach eliminates the need to express $X^0$ explicitly in terms of gauge-variant variables, and in particular enables one to perform a change of gauge without using $X^0$, as in subsection \[subsection:choice\]. Indeed, although the gauge field $X^a$, which is gauge-variant, plays an important role in establishing our formalism, we do not use it in performing calculations, in keeping with our goal of working exclusively with gauge invariants. Because it simplifies calculations this approach will facilitate the extension to second order perturbations. The coupled system of first order partial differential equations for ${\mathbb D}$ and ${\mathbb Z}$, given by equations , plays a central role in this paper. To the best of our knowledge they have not been given in the literature. They arise first of all from the linearized conservation equations in conjunction with the linearized Einstein equations, but can also be derived directly from the latter equations, when they are written in terms of the uniform curvature gauge invariants as in . Further, as shown in section \[sec:1+3\] these equations are essentially equivalent to the first order governing equations for $D$ and $Z$ that arise in the $1+3$ gauge-invariant approach (see equations ). Our derivation of the expressions for the conserved quantities $\zeta_\rho$ and $\zeta_{\mathrm v}$ deserves comment. 
We have shown that the conservation equations for $\zeta_\rho$ and $\zeta_{\mathrm v}$ are simply two of the first order governing equations for scalar perturbations when they are written in terms of the appropriate gauge invariants: the gauge-invariant expression for $\zeta_\rho$ arises directly from the linearized conservation equations for the stress-energy tensor, while that for $\zeta_{\mathrm v}$ arises from the linearized Einstein equations for the uniform curvature metric gauge invariants. Other expressions for the conserved quantities are derived using the method for finding inter-relationships between gauge invariants given in section \[subsection:choice\]. We mention that $\zeta_{\mathrm v}$ is usually introduced by rewriting the second order evolution equation  for the Bardeen potential $\Psi$ in a first order form, a procedure that involves a tedious calculation.[^39] Our use of the uniform curvature metric gauge invariants avoids the need for any calculation. Our discussion of the $1+3$ gauge-invariant approach has several novel features. An advantage of this approach is that it is coordinate-free, so that calculations require only standard operations from differential geometry. This feature has enabled us to give a particularly concise derivation of the first order system of governing equations for scalar perturbations, as given by equations  (see Appendix \[app:1+3\]).[^40] We have also derived the relation between the variables $({\mathbb D}, {\mathbb Z})$ in the metric-based approach and the $1+3$ variables $(D, Z)$ (see equation ). A drawback of the $1+3$ approach is that the linearization process is conceptually less clear than in the metric-based approach, relying as it does on “dropping products of first order terms”. 
In relating the $1+3$ approach to the metric-based approach it was necessary to regard $D, Z$ and the differential operators as functions of the perturbation parameter $\epsilon$ and explicitly calculate their dependence on $\epsilon$ to linear order. Introducing the perturbation parameter clarifies the linearization process and points the way to extending the $1+3$ approach to second order perturbations. Acknowledgments {#acknowledgments .unnumbered} --------------- CU is supported by the Swedish Research Council (VR grant 621-2009-4163). CU also thanks the Department of Applied Mathematics at the University of Waterloo for kind hospitality. JW acknowledges financial support from the University of Waterloo. The Replacement Principle {#app:repl} ========================= We define $$I_a(\epsilon) := {\cal M}^2\, {}^\epsilon{\bna}\!_b T^b\!_a(\epsilon).$$ The linear perturbation of $ I_a$, given in equation , can be written symbolically in the form \[I\] $${}^{(1)}\!I_a = {\mathsf L}_a\bigl( {\cal M}^2\, {}^{(1)}\!T^b\!_c,\; {}^{(1)}\!f_{bc}\bigr),$$ where $ {\mathsf L}_a$ is a linear operator. The replacement principle for the divergence of the stress-energy tensor states that the gauge invariants associated with $^{(1)}I_a, {^{(1)}}T^a\!_b$ and $ {^{(1)}}f_{ab}$ by $X$-compensation are related by the [*same*]{} linear operator: \[I\_X\] $${\bf I}_a[X] = {\mathsf L}_a\bigl( {\mathbb T}^b\!_c[X],\; {\bf f}_{bc}[X]\bigr),$$ for any gauge field $X$. If the stress-energy tensor is conserved at zero order ([*i.e.*]{} $I_a(0)=0$) then $^{(1)}I_a$ is a gauge invariant, and the left sides of  and  are equal. This result is adapted from Nakamura (2005) (see equations (3.90), (3.91) and (3.20)). Use of the Replacement Principle in Appendix \[app:derconserved\] makes the transition from gauge-variant to gauge-invariant equations particularly easy and transparent. 
Derivation of the conservation equations {#app:derconserved} ======================================== In this Appendix we give the derivation of the linearized conservation equations in the form , using the methods developed in UW (see in particular Section 2 and Appendix B). We express the covariant derivative ${}^\epsilon\bna\!_a$ of the metric $g_{ab}(\epsilon)$ in terms of the covariant derivative ${}^0\!\bar{\bna}\!_a$ of the conformal background metric $\gamma_{ab}$ as follows: \[def\_Q\] $${}^\epsilon{\bna}\!_a A^b\!_c(\epsilon) = {}^0\!\bar{\bna}\!_a A^b\!_c(\epsilon) + Q^b\!_{ad}(\epsilon)\, A^d\!_c(\epsilon) - Q^d\!_{ac}(\epsilon)\, A^b\!_d(\epsilon).$$ The object $Q^a\!_{bc}(\epsilon)$ is written as the sum of two parts: \[Qcosmo\] $$Q^a\!_{bc}(\epsilon) = \bar{Q}^a\!_{bc}(\epsilon) + \tilde{Q}^a\!_{bc}(\epsilon),$$ where $$\begin{aligned} \bar{Q}^a\!_{bc} (\epsilon) &:= 2\delta^a\!_{(b} r_{c)} - \bar{g}^{ad}(\epsilon)\bar{g}_{bc}(\epsilon) r_d, \qquad \text{with} \qquad r_a: = {}^0\!\bar{\bna}\!_a (\ln a), \label{Qbar}\\ \tilde{Q}^a\!_{bc}(\epsilon) &:= {{\textstyle{1\over2}}}\,\bar{g}^{ad}(\epsilon)\left({}^0\!\bar{\bna}\!_{c}\,\bar{g}_{db}(\epsilon) - {}^0\!\bar{\bna}\!_{d}\,\bar{g}_{bc}(\epsilon) + {}^0\!\bar{\bna}\!_{b}\,\bar{g}_{cd}(\epsilon)\right). \label{Qtilde}\end{aligned}$$ It follows from  and , in conjunction with ${}^0\!\bar{\bna}\!_a \gamma_{bc}=0$, that at zeroth and first order we obtain \[Q\] $$\begin{aligned} {}^{(0)}\!\bar{Q}^a\!_{bc} &= 2\delta^a\!_{(b} r_{c)} - \gamma^{ad}\gamma_{bc} r_d, &\qquad {}^{(0)}\!\tilde{Q}^a\!_{bc} &= 0, \label{Q0}\\ {}^{(1)}\!\bar{Q}^a\!_{bc} &= (f^{ad}\gamma_{bc} - \gamma^{ad}f_{bc})\, r_d, &\qquad {}^{(1)}\!\tilde{Q}^a\!_{bc} &= {{\textstyle{1\over2}}}\,\gamma^{ad}\bigl({}^0\!\bar{\bna}\!_{c} f_{db} - {}^0\!\bar{\bna}\!_{d} f_{bc} + {}^0\!\bar{\bna}\!_{b} f_{cd}\bigr). \label{Q1}\end{aligned}$$ Consider tensors $A^a\!_b(\epsilon)$ such that $\lambda^2 A^a\!_b(\epsilon)$ is dimensionless, where $\lambda>0$ is a background quantity with dimension $length$. 
As follows from , the equation $0 = \lambda^{2}\,\,{}\!^\epsilon\bna_b A^b\!_a(\epsilon)$ can be written as $$0 = \left({}^0\!\bar{\bna}\!_b - 2s_b\right)\lambda^2 A^b\!_a(\epsilon) + 2Q^c\!_{b[c}(\epsilon)\lambda^2 A^b\!_{a]}(\epsilon),$$ where $$s_a := {}^0\!\bar{\bna}\!_a (\ln \lambda),$$ which yields the following zeroth and first order expressions \[conserv\] $$\begin{aligned} 0 &= \left({}^0\!\bar{\bna}\!_b - 2s_b\right)\lambda^2 {}^{(0)}\!A^b\!_a + 2{}^{(0)}\!\bar{Q}^c\!_{b[c}\lambda^2 {}^{(0)}\!A^b\!_{a]},\label{conserva}\\ 0 &= \left({}^0\!\bar{\bna}\!_b - 2s_b\right)\lambda^2 {}^{(1)}\!A^b\!_a + 2{}^{(0)}\!\bar{Q}^c\!_{b[c}\lambda^2 {}^{(1)}\!A^b\!_{a]} + 2{}^{(1)}\!Q^c\!_{b[c}\lambda^2 {}^{(0)}\!A^b\!_{a]}.\label{conservb}\end{aligned}$$ We now specialize $A^a\!_b(\epsilon)$ to a stress-energy tensor $T^a\!_b(\epsilon)$ that obeys the background symmetries, [*i.e.*]{} that satisfies , and choose the normalizing factor $\lambda$ as in equation , [*i.e.*]{} $\lambda = {\cal M}$. We also assume that $T^a\!_b(\epsilon)$ satisfies a conservation law of the form ${}^\epsilon\bna\!_b T^b\!_a(\epsilon) = 0$. 
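The zeroth-order expression for $\bar Q$ can be checked directly in the flat ($K=0$) case, where $\gamma_{ab}$ is the Minkowski metric and its connection vanishes: the Christoffel symbols of $g_{ab} = a(\eta)^2\gamma_{ab}$ should then coincide with $2\delta^a\!_{(b}r_{c)} - \gamma^{ad}\gamma_{bc}r_d$, with $r_a = \partial_a \ln a$. A symbolic sketch (our toy setup, not the paper's computation):

```python
import sympy as sp

# Sketch, flat background only: compare the Christoffel symbols of
# g_ab = a(eta)^2 * diag(-1,1,1,1) with
#   Qbar^a_bc = 2 delta^a_(b r_c) - gamma^{ad} gamma_{bc} r_d,
# where r_a = partial_a ln(a). The gamma-connection vanishes here.
eta, x1, x2, x3 = coords = sp.symbols('eta x1 x2 x3')
a = sp.Function('a', positive=True)(eta)

gamma = sp.diag(-1, 1, 1, 1)
gammainv = gamma.inv()
g = a**2 * gamma
ginv = g.inv()
r = [sp.diff(sp.log(a), c) for c in coords]   # r_a = (a'/a) delta^0_a

def christoffel(i, b, c):
    """Gamma^i_bc of g_ab, from the standard metric formula."""
    return sp.Rational(1, 2)*sum(
        ginv[i, d]*(sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
                    - sp.diff(g[b, c], coords[d])) for d in range(4))

ok = all(
    sp.simplify(
        christoffel(i, b, c)
        - (sp.eye(4)[i, b]*r[c] + sp.eye(4)[i, c]*r[b]
           - sum(gammainv[i, d]*gamma[b, c]*r[d] for d in range(4)))
    ) == 0
    for i in range(4) for b in range(4) for c in range(4))
print(ok)   # expect True
```

This is just the standard conformal-transformation formula for the connection, specialized to a spatially constant conformal factor $a(\eta)$.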
Relative to local coordinates we obtain $$r_\alpha ={\mathcal{H}}\delta^0\!_\alpha, \qquad s_\alpha = {{\textstyle{3\over2}}}{\cal H}(1+{\mathcal{C}}_T^2)\delta^0\!_\alpha.$$ On substituting from  the zeroth order expression  yields equations , and the temporal and spatial components of the first order expression  assume the following form: \[conservcompi\] $$\begin{aligned} 0 &= \partial_\eta({\cal M}^2\,{}^{(1)}\!T^0\!_0 - {{\textstyle{1\over2}}} f^i\!_i) + {\bf D}_i ({\cal M}^2\,{}^{(1)}\!T^i\!_0) - {\cal H}{\cal M}^2\,\left({}^{(1)}\!T^i\!_i + 3 {\mathcal{C}}_T^2 {}^{(1)}\!T^0\!_0 \right),\\ 0 &= (\partial_\eta - 3{\cal H}{\mathcal{C}}_T^2)({\cal M}^2\,{}^{(1)}\!T^0\!_i) + {\bf D}_j({\cal M}^2\,{}^{(1)}\!T^j\!_i) - {\cal H}\gamma_{ij}\,{\cal M}^2\,{}^{(1)}\!T^j\!_0 - {{\textstyle{1\over2}}}{\bf D}_i f_{00} + {\cal H}f_{0 i} .\end{aligned}$$ We simplify these equations by first expressing ${}^{(1)}\!{T}^i\!_0$ in terms of ${}^{(1)}\!{T}^0\!_i$: $${\cal M}^2\,{}^{(1)}\!T^i\!_0 = -\gamma^{ij}\left({\cal M}^2\,{}^{(1)}\!T^0\!_j - f_{0j}\right),$$ and decomposing ${}^{(1)}\!T^i\!_j$ into its tracefree part and its trace using . We then introduce the intrinsic gauge invariants ${\hat {\mathbb T}}^j\!_i , {\mathbb T}_i $ and $ {\mathbb T}$ as defined by , expressing the trace ${}^{(1)}\!T^i\!_i$ in terms of ${\mathbb T}$. 
As a result of these changes equations  yield: $$\begin{aligned} 0 &= \partial_\eta({\cal M}^2\,{}^{(1)}\!T^0\!_0 - {{\textstyle{1\over2}}} f^i\!_i) - {\bf D}^i({\cal M}^2\,{}^{(1)}\!T^0\!_i - f_{0i}) - 3{\cal H}{\mathbb T},\label{cons_1}\\ 0 &= (\partial_\eta + {\cal H})({\cal M}^2\,{}^{(1)}\!T^0\!_i) - {{\textstyle{1\over2}}}{\bf D}_i f_{00} + {{\mathcal{C}}}_T^2 {\mathbb T}_i + {\bf D}_j{\hat {\mathbb T}}^j\!_i + {\bf D}_i {\mathbb T}.\label{cons_2}\end{aligned}$$ We now apply the Replacement Principle to these equations, which entails performing the following replacements: \[replace\] $$f_{00} \rightarrow {\bf f}_{00}[X], \quad f_{0i} \rightarrow {\bf f}_{0i}[X], \quad f_{ij} \rightarrow {\bf f}_{ij}[X], \quad {\cal M}^2\,{}^{(1)}\!T^0\!_0 \rightarrow {\mathbb T}^0\!_0[X], \quad {\cal M}^2\,{}^{(1)}\!T^0\!_i \rightarrow {\mathbb T}^0\!_i[X].$$ On substituting from  and  and noting that \[div\_1\] $${\bf D}^i\, {\bf f}_{0i}[X] = {\bf D}^2\, {\bf B}[X], \qquad {\bf D}^i\, {\mathbb T}^0\!_i[X] = {\bf D}^2\, {\mathbb V}[X],$$ equation  assumes the form . After performing the replacements  in  we apply the operator ${\bf D}^i$ in order to extract the scalar mode. We then substitute from  and , noting  and the fact that[^41] $${\bf D}^i\, {\mathbb T}_i = {\bf D}^2\, {\mathbb D}, \qquad {\bf D}^i {\bf D}_j\, {\hat {\mathbb T}}^j\!_i = {\bf D}^2\, {\bar \Xi},$$ where ${\bar \Xi}$ is defined by . The result is that  assumes the form ${\bf D}^2 {\mathbb C} = 0$. Since we are assuming, as in UW, that the inverse operator of ${\bf D}^2 $ exists, we obtain ${\mathbb C} = 0$, which is precisely the desired equation . 
Derivation of the $1+3$ perturbation equations {#app:1+3} ============================================== Derivation of the evolution equations {#app:1+3deriv} ------------------------------------- ### The evolution equation for $D$ {#the-evolution-equation-for-d .unnumbered} We begin with the conservation equations for the stress-energy tensor , linearized by dropping products of first order quantities[^42]: $$\begin{aligned} \rho' &= -3aH(\rho + p) - {\tilde {\bf D}}^a q_a, \label{rho'}\\ h_a\!^b q_b' &= - 4aH q_a - {\tilde {\bf D}}_a p - (\rho + p)a{\dot u}_a - {\tilde {\bf D}}_b \pi^b\!_a. \label{q'}\end{aligned}$$ In these equations the differential operators $'$ and ${\tilde {\bf D}}_a$ are defined by  and . We require the zero order version of  which we write in the form[^43] $${\cal M}^2\, {}^{(0)}\!\rho' = -3{\cal H},$$ \[rho\_prime\] which leads to the evolution equation for $ {\cal M}^2$: $$({\cal M}^2)' = 3(1 + {\cal C}_T^2)\, {\cal H}\, {\cal M}^2.$$ We apply the operator $ {\cal M}^2{\tilde {\bf D}}^2$ to  and the operator $ {\cal M}^2{\tilde {\bf D}}^a$ to  and then linearize, obtaining $$\begin{aligned} D' - 3{\mathcal{H}}{\cal C}_T^2 D + Z &= - 3{\mathcal{H}}\!\left({\tilde {\bf D}}^a(a{\dot u}_a) + P\right) - {\tilde {\bf D}}^2 {\tilde Q},\label{int1}\\ {\tilde {\bf D}}^a(a{\dot u}_a) + P &= -( {\tilde \Upsilon} + {\tilde \Pi}).\label{int2}\end{aligned}$$ On substituting  in  we obtain the evolution equation  for $D$. In deriving  and  we use the following linearized commutativity properties: \[bna\^2\_prime\] $${\tilde {\bf D}}^2 (A') = ({\tilde {\bf D}}^2 A)' - {}^{(0)}\!A'\,\bigl({\tilde {\bf D}}^a(a{\dot u}_a)\bigr),$$ where $A$ is any scalar field, and $${\tilde {\bf D}}\!_a A_b' = ({\tilde {\bf D}}\!_a A_b)',$$ where $A_a$ is any covariant vector field. 
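The evolution law for ${\cal M}^2$ quoted above follows from the background conservation equation alone. A symbolic sketch, under our reading ${\cal M}^2 = 1/(\rho + p)$ and assuming for concreteness a linear equation of state $p = w\rho$ (so that ${\cal C}_T^2 = p'/\rho' = w$):

```python
import sympy as sp

# Sketch: check (M^2)' = 3(1 + C_T^2) H M^2 with M^2 = 1/(rho + p),
# using rho' = -3H(rho + p) and the linear equation of state p = w*rho,
# for which C_T^2 = w.  Here H denotes the conformal Hubble scalar.
eta, w = sp.symbols('eta w')
rho = sp.Function('rho', positive=True)(eta)
p = w*rho

H = -sp.diff(rho, eta)/(3*(rho + p))     # from the conservation law
M2 = 1/(rho + p)
CT2 = w

residual = sp.diff(M2, eta) - 3*(1 + CT2)*H*M2
print(sp.simplify(residual))             # expect 0
```

Since $\rho + p$ decays as $(\rho+p)' = -3(1+{\cal C}_T^2){\cal H}(\rho+p)$, its inverse ${\cal M}^2$ grows at exactly the quoted rate.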
In differentiating products of perturbed quantities such as $\rho H$, $Hq_a$ and $(\rho + p){\dot u}_a$ we use the following expansion to linear order: $$AB = {}^{(0)}\!A\, B + {}^{(0)}\!B\, A - {}^{(0)}\!A\, {}^{(0)}\!B, \label{AB_lin}$$ where $A$ and $B$ are geometric quantities with background values ${}^{(0)}\!A $ and ${}^{(0)}\!B$, one of which may be zero. ### The evolution equation for $Z$ {#the-evolution-equation-for-z .unnumbered} We begin with the linearized Raychaudhuri equation written in the form $$3(aH' + a^2H^2) - {\tilde {\bf D}}^a(a{\dot u}_a) + {\textstyle{1\over2}}\, a^2 (\rho + 3p) = 0. \label{Raychaud}$$ We use  with $A=B=H$ to write $a^2H^2 = 2{\cal H}(aH) - {\cal H}^2$, where ${\cal H} :=a{}^{(0)}\!H, $ and use  to eliminate ${\dot u}_a$. We then apply the operator ${\tilde {\bf D}}^2$ to . After using  and the definitions of $D,Z$ and $P$ we obtain[^44] $$Z' + {\cal H}Z + {\textstyle{1\over2}}{\cal A}\, D = - ({\tilde {\bf D}}^2 + 3K)P - ({\tilde {\bf D}}^2 + 3K - {\textstyle{3\over2}}{\cal A}) ({\tilde \Upsilon} + {\tilde \Pi}). \label{int3}$$ In deriving this equation we have also used  and . We finally use  to express $P$ in  in terms of $D$ and ${\tilde \Gamma}$, which gives the evolution equation for $Z$. Relation between the $1+3$ and the metric-based approaches ---------------------------------------------------------- ### Fundamental 4-velocity and energy flow vector {#app:1+3metric .unnumbered} We begin with the decomposition of the stress-energy tensor with respect to a unit timelike vector field $u^a$, which is given by . The Stewart-Walker lemma implies that the linear perturbation ${}^{(1)}\!{q}_a$ is a gauge invariant. Since ${}^{(0)}\!u^a = a^{-1} \delta^a\!_0$ and ${}^{(0)}\!u_a = - a\, \delta^0\!_a$, it follows that ${}^{(1)}\!{q}_0 = 0$, and hence that $${}^{(1)}\!T^0\!_0 = - {}^{(1)}\!\rho, \qquad {\cal M}^2\, {}^{(1)}\!T^0\!_i = v_i + {\bar{\mathbb Q}}_i, \label{TvQ}$$ where $$a v_i:={}^{(1)}\!u_i, \qquad a{\bar{\mathbb Q}}_i := {\cal M}^2\,{}^{(1)}\!q_i.$$ It follows from  and  that $${\mathbb T}^0\!_i[X] = {\bf v}_i[X] + {\bar{\mathbb Q}}_i, \label{TvQ_gi}$$ where $${\bf v}_i[X] = v_i + {\bf D}_i X^0. \label{u_gi}$$ 
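The linear-order product expansion used above is easy to check mechanically. The following sketch (plain Python, purely illustrative and not part of the UW formalism) represents perturbed quantities as truncated power series in the perturbation parameter $\epsilon$ and verifies that the rule reproduces the exact product up to a single quadratic remainder:

```python
def series_mul(a, b, order=2):
    """Multiply two power series in epsilon, truncated beyond the given order."""
    out = [0.0] * (order + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= order:
                out[i + j] += ai * bj
    return out

# A = {}^{(0)}A + eps {}^{(1)}A, similarly for B (numerical values are arbitrary)
A = [1.7, 0.3]
B = [2.5, -0.8]

exact = series_mul(A, B)
# linearized product: {}^{(0)}A B + {}^{(0)}B A - {}^{(0)}A {}^{(0)}B
lin = [A[0] * B[0], A[0] * B[1] + B[0] * A[1]]

assert exact[0] == lin[0] and exact[1] == lin[1]  # agreement to first order
assert exact[2] == A[1] * B[1]                    # remainder is O(epsilon^2)
```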
We decompose ${\bar{\mathbb Q}}_i, {\bf v}_i$ and $v_i$ according to $${\bar{\mathbb Q}}_i = {\bf D}_i {\bar{\mathbb Q}} + {\tilde {\mathbb Q}}_i, \qquad {\bf v}_i[X] = {\bf D}_i {\bf v}[X] + {\tilde {\bf v}}_i, \qquad v_i= {\bf D}_i v +{\tilde v}_i, \label{vQ_decomp}$$ with $${\bf D}^i {\tilde {\mathbb Q}}_i = 0, \qquad {\bf D}^i {\tilde {\bf v}}_i = 0, \qquad {\bf D}^i {\tilde v}_i = 0.$$ It now follows from ,  and  that $${\mathbb V}[X] = {\bf v}[X] +{\bar{\mathbb Q}}, \qquad {\mathbb V}_i ={\tilde {\bf v}}_i +\tilde{{\mathbb Q}}_i . \label{pf_V}$$ Thus if the preferred timelike vector field $u^a$ is an eigenvector of the stress-energy tensor, [*i.e.*]{} if the energy transfer vector $q^a$ is zero, then the stress-energy gauge invariants ${\mathbb V}[X]$ and ${\mathbb V}_i$ equal the gauge invariants $ {\bf v}[X]$ and $\tilde{\bf v}_i$ associated with $u^a$. In addition it follows from  and  that $${\bf v}[X] = v + X^0, \qquad {\tilde {\bf v}}_i = {\tilde v}_i. \label{gi_v}$$ ### Spatial gradient and Laplacian of a scalar {#spatial-gradient-and-laplacian-of-a-scalar .unnumbered} We have seen that the $1+3$ approach to cosmological perturbations is based on the spatial gradient and Laplacian of the density $\rho$ and the Hubble scalar $H$. We now define these quantities for a scalar field of given dimension, using a background normalization factor $\lambda$ of dimension *length*. Let $f$ be a scalar such that $\lambda^n f$ is dimensionless, and whose unperturbed value is a function only of $\eta$. We define the dimensionless spatial gradient and spatial Laplacian of $f$ according to $$F_a := {\tilde {\bf D}}_a(\lambda^n f), \qquad F := {\tilde {\bf D}}^2(\lambda^n f), \label{grad,lapl}$$ using the notation . Our goal is to relate the linear perturbation of $F_a$ and $F$ to the linear perturbation of $f$. Regarding all perturbed quantities as functions of the perturbation parameter $\epsilon$, we write $$F_a(\epsilon) := h_a\!^b(\epsilon)\, {}^{\epsilon}{\bna}\!_b(\lambda^n f(\epsilon) ), \qquad F(\epsilon) := a^2 g^{ab}(\epsilon)\, h_a\!^c(\epsilon)\, {}^{\epsilon}{\bna}\!_c F_b(\epsilon), \label{epsilon_dep}$$ with $f(\epsilon) = {}^{(0)}\!f + \epsilon\,{}^{(1)}\!f + \dots\,$, [*etc*]{}. 
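The scalar-plus-divergence-free splitting used above can be illustrated numerically on a flat periodic domain ($K = 0$), where the covariant spatial derivative reduces to the ordinary gradient and the scalar part is obtained by inverting a Laplacian in Fourier space. This is an illustrative sketch only, not part of the UW formalism:

```python
import numpy as np

def split_vector(vx, vy, L=2 * np.pi):
    """Split a periodic 2-D vector field into gradient and divergence-free parts.

    Solves D^2 phi = D^i v_i spectrally, so that v_i = D_i phi + tilde v_i
    with D^i tilde v_i = 0. Assumes a flat periodic box of side L.
    """
    n = vx.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0  # avoid 0/0; the zero mode has no gradient part anyway
    div_hat = 1j * kx * np.fft.fft2(vx) + 1j * ky * np.fft.fft2(vy)
    phi_hat = -div_hat / k2                  # the Laplacian is -k^2 in Fourier space
    gx = np.real(np.fft.ifft2(1j * kx * phi_hat))
    gy = np.real(np.fft.ifft2(1j * ky * phi_hat))
    return (gx, gy), (vx - gx, vy - gy)

# a pure gradient field v = grad(sin x) has a vanishing divergence-free part
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
X, _ = np.meshgrid(x, x, indexing="ij")
(gx, gy), (tx, ty) = split_vector(np.cos(X), np.zeros_like(X))
```

For a generic field the routine returns both pieces; the sanity check at the end recovers the gradient part exactly (to spectral accuracy) for a field that is a pure gradient.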
A straightforward calculation yields[^45] $$\begin{aligned} {}^{(0)}\!F_a &= 0, & {}^{(1)}\!F_0 &= 0, & {}^{(1)}\!F_i &= \lambda^n({\bf D}_i\, {}^{(1)}\!f + {}^{(0)}\!f'\, v_i),\\ {}^{(0)}\!F &= 0, & {}^{(1)}\!F &={\bf D}^i\, {}^{(1)}\!F_i. && \label{step1}\end{aligned}$$ We note that the background values ${}^{(0)}\!F_a$ and ${}^{(0)}\!F$ are zero due to our assumption that ${}^{(0)}\!f = {}^{(0)}\!f(\eta)$. On account of the Stewart-Walker lemma the linear perturbations ${}^{(1)}\!F_i $ and ${}^{(1)}\!F$ are gauge-invariant. We can write them in a manifestly gauge-invariant form by noting that $$\lambda^n({\bf D}_i\,{}^{(1)}\!f + {}^{(0)}\!f'\, v_i) = {\bf D}_i{\bf f}[X] + \lambda^n\,{}^{(0)}\!f'\,{\bf v}_i[X], \label{step2}$$ where ${\bf f}[X] $ is the gauge invariant associated with $f$ by $X$-compensation and ${\bf v}_i[X]$ is given by . It follows from ,  and  that $${}^{(1)}\!F = {\bf D}^2( {\bf f}[X] + \lambda^n\,{}^{(0)}\!f'\,{\bf v}[X]). \label{laplacian1}$$ For future use we choose $X=X_{\mathrm v}$ in  and use the fact that ${\bf v}[X_{\mathrm v}] = - {\bar{\mathbb Q}}$, as follows from . Equation  assumes the form $${}^{(1)}\!F = {\bf D}^2( {\bf f}[X_{\mathrm v}] - \lambda^n\,{}^{(0)}\!f'\,{\bar{\mathbb Q}}). \label{laplacian2}$$ ### Relation between the variables {#relation-between-the-variables .unnumbered} We need an expression for the gauge-invariant linear perturbation ${\bf H}[X]$ of the Hubble scalar $H$ of the preferred congruence, which is defined by $${\bf H}[X] = a({}^{(1)}\!H - {}^{(0)}\!H'\, X^0),$$ in accordance with the general definition . It follows from the expression (B.41a) for $a^{(1)}H$ in UW, in conjunction with  and , that \[H\_X\] [**H**]{}\[X\] = 13 [**D**]{}\^2([**v**]{}\[X\] - [**B**]{}\[X\]) - (\_+ ). We have also used the fact that $$a\,{}^{(0)}\!H' = {\cal H}' - {\cal H}^2 = - ({\textstyle{1\over2}}\, {\cal A}_G - K), \label{Hprime}$$ the second equality following from . 
First we note that $${\bf v}[X] - {\bf B}[X] = {\mathbb V}[X] - {\bf B}[X] - {\bar{\mathbb Q}} = {\mathbb V} - {\bar{\mathbb Q}}, \label{H1}$$ as follows from  and the $X$-independent gauge invariant $[{\mathbb V},{\bf B}]$ in . Second we can use the transition rules  and  to show that the gauge invariant \[PsiPhiV\] \_+ + (-’ + \^2)[V]{}\[X\], is $X$-independent. Evaluating this expression for $X^0 = X^0_{\mathrm v}$ and $X^0 = X^0_{\mathrm p}$ yields \[H2\] \_\_[v]{} + \_[v]{} = -K[V]{}, where we have used the linearized Einstein equation , the background Einstein equation ${\mathcal{A}}_T = {\mathcal{A}}_G$ and the definition  of ${\mathcal{A}}_G.$ Finally we choose $X^0 = X^0_{\mathrm v}$ in  and substitute from  and  to obtain the desired result that $$3{\bf H}_{\mathrm v} = ( {\bf D}^2 + 3K){\mathbb V} - {\bf D}^2 {\bar{\mathbb Q}} = {\mathbb Z} - {\bf D}^2 {\bar{\mathbb Q}}, \label{H_V}$$ the second equality following from . We can now use  to relate the perturbations of the variables $D$ and $Z$ in the $1+3$ approach to the corresponding variables ${\mathbb D}$ and ${\mathbb Z}$ in the metric-based approach. First choose $f=\rho$, $\lambda = {\cal M}$ and $n=2$ in , and use , , and  to obtain $${}^{(1)}\!D = {\bf D}^2({\mathbb D} +3{\cal H}{\bar{\mathbb Q}}). \label{D1_int}$$ Second, choose $f= H$, $\lambda = a$ and $n=1$ in . Equations  and  then lead to[^46] $${}^{(1)}\!Z = {\bf D}^2\bigl({\mathbb Z} - ( {\bf D}^2 + 3K - {\textstyle{3\over2}}{\cal A}_G){\bar{\mathbb Q}}\bigr). \label{Z1_int}$$ We also need the relation $${}^{(1)}\!{\tilde Q} = {\bf D}^2 {\bar{\mathbb Q}}, \label{Q1_int}$$ which can be derived from the definition  of ${\tilde Q}$ and the definition  of ${\bar{\mathbb Q}}$. The desired equations  and  now follow immediately from  and  in conjunction with . Further, equations  can be derived from the definitions  and  of ${\tilde \Gamma}$ and ${\tilde \Pi}$. Finally, equation  can be derived in a similar manner using  and the footnote following . References {#references .unnumbered} ========== Bardeen, J. M. (1980) Gauge-invariant cosmological perturbations, [*Phys. Rev. 
D*]{} [**22**]{}, 1882-1905.\ Bardeen, J. M. (1988) Cosmological perturbations, from quantum fluctuations to large scale structure, in [*Cosmology and Particle Physics*]{}, edited by Li-Zhi Fang and A. Zee, pages 1-64 (Gordon and Breach, New York).\ Bardeen, J.M., Steinhardt, P.J. and Turner, M.S. (1983) Spontaneous creation of almost scale-free density perturbations in an inflationary universe, [*Phys. Rev. D*]{} [**28**]{}, 679-693.\ Brandenberger, R. and Khan, R. (1984) Cosmological perturbations in inflationary-universe models, [*Phys. Rev. D*]{} [**29**]{}, 2172-2190.\ Bruni, M., Dunsby, P.K.S. and Ellis, G.F.R. (1992a) Cosmological perturbations and the meaning of gauge-invariant variables, [*Astrophysical J.*]{} [**395**]{}, 34-53.\ Bruni, M., Dunsby, P.K.S. and Ellis, G.F.R. (1992b) Gauge-invariant perturbations in a scalar field dominated universe, [*Class. Quantum Grav.*]{} [**9**]{}, 921-945.\ Bruni, M., Matarrese, S., Mollerach, S., and Sonego, S. (1997) Perturbations of spacetime: gauge transformations and gauge-invariance at second order and beyond, [*Class. Quantum Grav.*]{} [**14**]{}, 2585-2606.\ Dunsby, P.K.S., Bruni, M., and Ellis, G.F.R. (1992) Covariant perturbations in a multifluid cosmological medium, [*Astrophysical J.*]{} [**395**]{}, 54-74.\ Durrer, R. (2008) [*The Cosmic Microwave Background*]{}, Cambridge University Press.\ Ellis, G.F.R. and Bruni, M. (1989), Covariant and gauge-invariant approach to cosmological density fluctuations, [*Phys. Rev. D*]{} [**40**]{}, 1804-1818.\ Ellis, G.F.R., Bruni, M. and Hwang, J (1990) Density-Gradient-vorticity relation in perfect-fluid Robertson-Walker perturbations, [*Phys. Rev. D*]{} [**42**]{}, 1035-1046.\ Ellis, G.F.R., Hwang, J. and Bruni, M. (1989) Covariant and gauge-independent perfect fluid Robertson-Walker perturbations, [*Phys. Rev. D*]{} [**40**]{}, 1819-1826.\ Hawking, S.W. (1966) Perturbations of an expanding universe, [*Astrophysical J.*]{} [**145**]{}, 544-54.\ Hwang, J. 
(1991) Perturbations of the Robertson-Walker space: multicomponent sources and generalized gravity, [*Astrophysical J.*]{} [**375**]{}, 443-462.\ Hwang, J. and Noh, H. (1999) Relativistic Hydrodynamic Cosmological Perturbations, [*General Relativity and Gravitation*]{} [**31**]{},1131-1146.\ Kodama, H. and Sasaki, M. (1984) Cosmological Perturbation Theory, [*Prog. Theoret. Phys. Suppl.* ]{} [**78**]{},1-166.\ Langlois, D. and Vernizzi, F. (2005) Conserved nonlinear quantities in cosmology, [*Phys. Rev. D*]{} [**72**]{}, 103501 (1-9).\ Malik, K. A. and Wands, D. (2009) Cosmological perturbations, [*Physics Reports*]{} [**475**]{}, 1-51.\ Mukhanov, V. (2005) [*Physical Foundations of Cosmology*]{}, Cambridge University Press.\ Mukhanov, V. F., Feldman, H. A. and Brandenberger, R. H. (1992) Theory of cosmological perturbations, [*Physics Reports*]{} [**215**]{}, 203-333.\ Nakamura, K. (2003) Gauge Invariant Variables in Two-Parameter Nonlinear Perturbations, [*Prog. Theor. Phys.*]{} [**110**]{}, 723-755.\ Nakamura, K. (2005) Second Order Gauge Invariant Perturbation Theory, [*Prog. Theor. Phys.*]{} [**113**]{}, 481-511.\ Nakamura, K. (2007) Second Order Gauge Invariant Cosmological Perturbation Theory, [*Prog. Theor. Phys.*]{} [**117**]{}, 17-74.\ Tsagas C. G., Challinor A. and Maartens R. (2008) Relativistic cosmology and large-scale structure, [*Physics Reports*]{} [**465**]{}, 61-147.\ Uggla, C. and Wainwright, J. (2011) Cosmological Perturbation Theory Revisited, [*Class. Quantum Grav.*]{} [**28**]{}, 175017(26pp).\ Wainwright, J. and Ellis, G.F.R. (1997) [*Dynamical Systems in Cosmology*]{}, Cambridge University Press.\ Wands, D. Malik, K. A. Lyth, D. H. and Liddle, A. R. (2000), New approach to the evolution of cosmological perturbations on large scales, [*Phys. Rev. D*]{} [**62**]{}, 043527 (1-8).\ Weinberg, S. (2008) [*Cosmology*]{}, Oxford University Press.\ Woszczyna, A. and Kulak, A. 
(1989) Cosmological perturbations - extension of Olson’s gauge-invariant method, [*Class. Quantum Grav.*]{} [**6**]{}, 1665-1671.\ [^1]: Electronic address: [claes.uggla@kau.se]{} [^2]: Electronic address: [jwainwri@uwaterloo.ca]{} [^3]: We follow the nomenclature of Wainwright and Ellis (1997): a Friedmann-Lemaitre (FL) cosmology is a Robertson-Walker (RW) geometry that satisfies Einstein’s field equations. [^4]: By this we mean the standard approach to cosmological perturbations in which one formulates the governing equations in terms of gauge-invariant variables associated with the perturbed metric tensor and the perturbed stress-energy tensor, using local coordinates. [^5]: The $1+3$ gauge-invariant approach uses variables that are [*apriori*]{} gauge-invariant at first order due to the Stewart-Walker lemma and are defined using the 1+3 covariant description of GR, which is based on a preferred timelike congruence. This approach is growing in popularity. For a recent treatment in depth we refer to Tsagas, Challinor and Maartens (2008). [^6]: In the $1+3$ approach the metric tensor is not used as a dynamical variable and local coordinates are not introduced, in contrast to the metric-based approach. [^7]: See UW, footnote 9, for a discussion and references about allocation of dimensions. [^8]: This choice is motivated in section \[sec:stress-energy\]. [^9]: See UW equations (10) and (11). [^10]: Here and elsewhere we denote the derivative of a function $f(\eta)$ that depends only on $\eta$ by $f'(\eta)$. [^11]: Intrinsic gauge invariants were defined in UW, section 2.1, as gauge invariants constructed solely from a single tensor, in contrast to hybrid gauge invariants that are constructed from several tensors. [^12]: Replace $A$ by $T$ in equations (39) and (40) in UW. 
[^13]: The $a$-normalized gauge invariants ${\bf T}^a\!_b$ in UW are related to the corresponding ${\cal M}$-normalized gauge invariants $\mathbb{T}^a\!_b$ via ${\bf T}^a\!_b = {\cal A}_T\mathbb{T}^a\!_b$, where ${\cal A}_T = a^2({}^{(0)}\!\rho + {}^{(0)}\!p) = (a/{\cal M})^2$. [^14]: Replace $A$ by $T$ in equations (38a) and (38b) in UW and multiply by $1/{\cal A}_T = \left({\cal M}/a\right)^2$. These equations also arise from  with $A$ replaced by $T$ and $\lambda = {\cal M}$. [^15]: In deriving the expression for ${\mathbb V}[X]$, we assume, as in UW, that the inverse operator of ${\bf D}^2$ exists. In terms of the $(1+3)$-decomposition of the stress-energy tensor, we can write ${\mathbb V}[X] = v+{\bar{\mathbb Q}} + X^0$, as follows from and . [^16]: These gauge choices and others are discussed, for example, by Kodama and Sasaki (1984), Hwang (1991), Hwang and Noh (1999) and Malik and Wands (2009). In contrast to our approach which emphasizes relations between gauge invariants, they define gauge invariants in terms of gauge-variant quantities. [^17]: Note that the formal similarity between  and  enables one to obtain  directly from  without any calculation. [^18]: As a consequence the synchronous gauge contains residual freedom. [^19]: See UW, equation (52). We give the matter terms on the right hand side in two forms: using $a$-normalization as in UW, and ${\cal M}$-normalization as introduced in the present paper. [^20]: This equation corresponds to equation (5.20) in Bardeen (1980). [^21]: Since $w=constant$ we have ${\mathcal{C}}^2 = w$. [^22]: In these equations ${\mathcal{C}}^2_A$ denotes the value of ${\mathcal{C}}^2_T$ for the component labeled $A$. We have dropped the subscript $T$ for convenience. [^23]: One can check the consistency of  by showing that ${ \sum_{A=1}^n {}}{\mathcal B}_A\eqref{conserv_A} =\eqref{cons_X}$. 
Note that ${\mathbb D}_A = {\mathbb D}_A[X] - 3{\mathcal{H}}{\mathbb V}_A[X].$ [^24]: The only calculation involves showing that ${\mathcal{C}}_A^2\mathbb{D}_A - {\mathcal{C}}_B^2\mathbb{D}_B = {\mathbb K}_{AB} + ({\mathcal{C}}^2_A - {\mathcal{C}}^2_B){\mathbb D}$, which follows by writing ${\mathbb D}_A = {\mathbb D} + {\sum_{C=1}^n { {\mathcal B}_C({\mathbb D}_A - {\mathbb D}_C) }}$, and a similar expression for ${\mathbb D}_B $. [^25]: Note that the term $({\bf D}^2 + 3K){\bar \Pi}$ on the right side of  can be replaced by ${\bar \Xi}$ on account of . [^26]: Substitute ${\mathbb D}_A[X] = {\mathbb D}[X] + {\sum_{C=1}^n { {\mathcal B}_C{\mathbb D}_{AC} }} $ in  and use ${ \sum_{A=1}^n {}}{\cal B}_A {{\mathcal{C}}}^2_A = {\mathcal{C}}^2$. [^27]: For the reader’s convenience we give the equation numbers in the previously mentioned references that correspond to our equations : Kodama and Sasaki (1984), (5.59)-(5.60), Hwang (1991), (37)-(38), Bruni [*et al*]{} (1992b), (91)-(92), Dunsby [*et al*]{} (1992), (86)-(87), and Durrer (2008), (2.136)-(2.137). [^28]: See their equations (8) and (9). Their evolution equation (18) corresponds to our equation , although their equation is not in a manifestly gauge-invariant form, and uses clock time rather than conformal time. See also Malik and Wands (2009), equations (7.61), (7.62) and (8.35), which are somewhat closer in form to our equations. [^29]: It is not immediately obvious that the expressions given in these papers (equations (2.43) and (2.45) in Bardeen [*et al*]{} (1983) and equations (2.11) and (2.12) in Brandenberger and Khan (1984)) agree with our expressions. [^30]: Throughout the remaining discussion about conserved quantities we use the background Einstein equations and hence ${\mathcal{A}}_G={\mathcal{A}}_T={\mathcal{A}}$ and ${\mathcal{C}}_G^2={\mathcal{C}}_T^2={\mathcal{C}}^2$. 
[^31]: BDE give a comprehensive account of the linearization of the full system of equations in the $1+3$ formalism (Ricci and Bianchi identities, and stress-energy conservation equations ). We are concerned only with a limited subset of these equations. [^32]: Equations equivalent to  have been derived by Woszczyna and Kulak (1989) in the case of a barotropic perfect fluid, using the method of Appendix \[app:1+3\] but with different normalization factors (see their equations (11) and (18)). Their variables $\Delta\epsilon$ and $\Delta\theta$ are related to $D$ and $Z$ according to $D=a^2{\cal M}^2 \Delta\epsilon, \quad Z=a^3 \Delta\theta$. [^33]: See BDE equations (27), (28) and (40). Note that ${\tilde \Gamma} = \frac{w}{1+w}{\cal E}$. [^34]: BDE do not incorporate the cosmological constant into the stress-energy tensor as we do. In making a comparison we have to write $\rho=\rho_m +\Lambda$, $p=p_m-\Lambda$, with $w= p_m/\rho_m$ not necessarily constant. [^35]: Note that $\rho+p=\rho_m+p_m=(1+w)\rho_m$, and that $ {\tilde {\bf D}}^2 \rho_m= {\tilde {\bf D}}^2 \rho$. [^36]: Our variables $(\tilde{Q}, \tilde{\Pi}, \tilde{\Upsilon})$ are related to those used in the above reference according to $\Psi_{BDE} = \tilde {Q}, \quad a\Pi_{BDE} = {\tilde \Pi}, \quad aF_{BDE} = {\tilde \Upsilon}$. [^37]: This feature of linear perturbations is due to the fact that the components of the perturbed Riemann tensor and of the perturbed stress-energy tensor that are used in deriving the governing equations are invariant under [*spatial*]{} gauge transformations. Indeed, according to Bardeen (1988), since the background 3-space is homogeneous and isotropic the perturbations in all physical quantities must be invariant under spatial gauge transformations. Whether this property holds for nonlinear perturbations requires further investigation. [^38]: If one does not fix $X^i$ one has to use the general form  of ${\bf f}_{ab}[X]$. 
However, during the calculations the terms involving $X^i$ cancel, leading to the same final results. [^39]: This calculation is much easier if one uses our factorization property  for the operator ${\mathcal{L}}$. [^40]: In the usual derivation one first obtains a system of partial differential equations for the spatial gradients of the energy density and the Hubble scalar, and then one takes the spatial divergence to obtain partial differential equations for scalars, a more lengthy process. See, for example, BDE. [^41]: The third equality follows from the identity (B.39e) in UW. [^42]: See, for example, Wainwright and Ellis (1997), equations (1.48) and (1.49), after multiplying by $a$ to change the dot derivative to prime. [^43]: For a background scalar ${}^{(0)}\!A' $ is the ordinary derivative with respect to conformal time $\eta$. [^44]: In doing calculations such as these one should keep in mind that ${}^{(3)}{\bna}\!_a(a')\neq 0$, where $'$ is defined by , even though we have chosen $a$ to be the background scale factor ([*i.e.*]{} ${}^{(3)}{\bna}\!_a(a) = 0$). [^45]: We note that $^{\epsilon}{\bna}\!_a A(\epsilon) = {}^{0}{\bar {\bna}\!_a}A(\epsilon)$ for a scalar $A$, and that ${}^0\bar{\bna}\!_i A = {\bf D}_i A $ and ${}^0\bar{\bna}\!_0 A =\partial_\eta A.$ [^46]: Note the factor 3 in the definition of $Z$ in  compared to , so that $Z=3F$ and ${}^{(1)}\!Z=3{}^{(1)}\!F$.
--- abstract: 'The embedded hidden Markov model (EHMM) sampling method is a Markov chain Monte Carlo (MCMC) technique for state inference in non-linear non-Gaussian state-space models which was proposed in @Neal2003 [@Neal2004] and extended in @ShestopaloffNeal2016. An extension to Bayesian parameter inference was presented in @ShestopaloffNeal2013. An alternative class of MCMC schemes addressing similar inference problems is provided by particle MCMC (PMCMC) methods [@AndrieuDoucetHolenstein2009; @AndrieuDoucetHolenstein2010]. All these methods rely on the introduction of artificial extended target distributions for multiple state sequences which, by construction, are such that one randomly indexed sequence is distributed according to the posterior of interest. By adapting the algorithms developed in the framework of PMCMC methods to the EHMM framework, we obtain novel particle filter (PF)-type algorithms for state inference and novel MCMC schemes for parameter and state inference. In addition, we show that most of these algorithms can be viewed as particular cases of a general unifying framework. We compare the empirical performance of the various algorithms on low- to high-dimensional state-space models. We demonstrate that a properly tuned conditional PF with ‘local’ MCMC moves proposed in @ShestopaloffNeal2016 can outperform the standard conditional PF significantly when applied to high-dimensional state-space models, while the novel PF-type algorithm could prove to be an interesting alternative to standard PFs for likelihood estimation in some lower-dimensional scenarios.' author: - | Axel Finke\ Department of Statistical Science, University College London, UK.\ Arnaud Doucet\ Department of Statistics, Oxford University, UK.\ Adam M. Johansen\ Department of Statistics, University of Warwick, UK. bibliography: - 'ehmm.bib' title: | On embedded hidden Markov models and\ particle Markov chain Monte Carlo methods --- Introduction ============ Throughout this work, for concreteness, we will describe both EHMM and PMCMC methods in the context of performing inference in non-linear state-space models. 
However, we stress that those methods can be used to perform inference in other contexts. Non-linear non-Gaussian state-space models constitute a popular class of time series models which can be described in the time-homogeneous case as follows — throughout this paper we consider the time-homogeneous case, noting that the generalisation to time-inhomogeneous models is straightforward but notationally cumbersome. Let $\{x_t\}_{t \geq 1}$ be an $\mathcal{X}$-valued latent Markov process satisfying $$x_1 \sim \mu_{\theta}(\,\cdot\,) \quad \text{and} \quad x_{t}\vert (x_{t-1} = x) \sim f_{\theta}(\,\cdot\,| x), \text{ for $t \geq 2$}. \label{eq:modtransition}$$ and let $\{y_t\}_{t \geq 1}$ be a sequence of $\mathcal{Y}$-valued observations which are conditionally independent given $\{x_t\}_{t \geq 1}$ and which satisfy $$y_t \vert (x_1,\dotsc, x_t = x, x_{t+1}, \dotsc ) \sim g_{\theta}(\,\cdot\,| x), \text{ for $t \geq 1$}. \label{eq:modobs}$$ Here $\theta\in\varTheta$ denotes the vector of parameters of the model. Let $z_{i:j}$ denote the components $(z_i,z_{i+1},\dotsc,z_j) $ of a generic sequence $\{z_t\}_{t \geq 1} $. Assume that we have access to a realization of the observations $Y_{1:T}=y_{1:T}$. 
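As a concrete illustration of this generative structure, the following sketch simulates a simple linear-Gaussian instance of the model; the specific choices of $\mu_\theta$, $f_\theta$ and $g_\theta$ are ours, purely for illustration, and are not prescribed by the paper:

```python
import numpy as np

def simulate_ssm(theta, T, rng):
    """Simulate (x_{1:T}, y_{1:T}) from a linear-Gaussian state-space model.

    Illustrative choices: x_1 ~ N(0, 1); x_t | x_{t-1} ~ N(phi x_{t-1}, sigma_x^2);
    y_t | x_t ~ N(x_t, sigma_y^2), with theta = (phi, sigma_x, sigma_y).
    """
    phi, sigma_x, sigma_y = theta
    x = np.empty(T)
    x[0] = rng.normal()                                 # x_1 ~ mu_theta
    for t in range(1, T):
        x[t] = phi * x[t - 1] + sigma_x * rng.normal()  # transition f_theta
    y = x + sigma_y * rng.normal(size=T)                # observations g_theta
    return x, y

rng = np.random.default_rng(0)
x, y = simulate_ssm((0.9, 1.0, 0.5), T=100, rng=rng)
```

The conditional independence of the observations is reflected in the fact that each $y_t$ is drawn using only $x_t$ and fresh noise.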
If $\theta$ is known, inference about the latent states $x_{1:T}$ relies upon $$p_{\theta}(x_{1:T} \vert y_{1:T}) = \frac{p_{\theta}(x_{1:T}, y_{1:T})}{p_{\theta}(y_{1:T})},$$ where $$p_{\theta} (x_{1:T}, y_{1:T}) = \mu_{\theta}(x_1) \prod_{t=2}^{T} f_{\theta}(x_{t}\vert x_{t-1}) \prod_{t=1}^{T} g_{\theta}(y_t \vert x_t) .$$ When $\theta$ is unknown, to conduct Bayesian inference a prior density $p(\theta)$ is assigned to the parameters and inference proceeds via the joint posterior density $$p(x_{1:T},\theta \vert y_{1:T}) = p(\theta \vert y_{1:T}) p_{\theta}(x_{1:T}\vert y_{1:T}),$$ where the marginal posterior distribution of the parameter satisfies $$p(\theta \vert y_{1:T}) \propto p(\theta) p_{\theta}(y_{1:T}),$$ the likelihood $ p_{\theta}(y_{1:T})$ being given by $$p_{\theta}(y_{1:T}) =\int p_{\theta} (x_{1:T}, y_{1:T}) \mathrm{d}x_{1:T}.$$ Many algorithms have been proposed over the past twenty-five years to perform inference for this class of models; see @Kantas2015 for a recent survey. We focus here on the EHMM algorithm introduced in @Neal2003 [@Neal2004] and on PMCMC methods introduced in [@AndrieuDoucetHolenstein2009; @AndrieuDoucetHolenstein2010]. Both classes of methods are fairly generic and do not require the state-space model under consideration to possess additional structural properties beyond and . The EHMM method has been recently extended in @ShestopaloffNeal2013 [@ShestopaloffNeal2016] while extensions of PMCMC methods have also been proposed in, among other works, @Whiteley2010 and @LindstenJordanSchon2014. In particular, @Whiteley2010 combined the conditional PF algorithm of [@AndrieuDoucetHolenstein2009; @AndrieuDoucetHolenstein2010] with a backward sampling step. We will denote the resulting algorithm as the conditional PF with backward sampling. 
Both EHMM and PMCMC methods rely upon sampling a population of $N$ particles for the state $x_{t}$ and introducing an extended target distribution over the resulting $N^{T}$ potential sequences $x_{1:T}$ such that one of the sequences selected uniformly at random is at equilibrium by construction. It was observed in @LindstenSchon2013 [p. 116] that the conditional PF with backward sampling is reminiscent of the EHMM method proposed in @Neal2003 [@Neal2004] and some connections were made between some simple EHMM methods and PMCMC methods in @Finke2015 [pp. 82–87] who also showed that both methods can be viewed as special cases of a much more general construction. However, to the best of our knowledge, the connections between the two classes of methods have never been investigated thoroughly. Indeed, such an analysis was deemed of interest in @ShestopaloffNeal2014, where we note that EHMM methods are sometimes alternatively referred to as *ensemble* MCMC methods: > “It would … be interesting to compare the performance of the ensemble method with the \[PF\]-based methods of [@AndrieuDoucetHolenstein2010] and also to see whether techniques used to improve \[particle MCMC\] methods can be used to improve ensemble methods and vice versa.” In this work, we characterize this relationship and show that it is possible to exploit the similarities between these methods to derive new inference algorithms. The relationship between the various classes of algorithms discussed in this work is shown in Figure \[fig:relationship\_between\_algorithms\]. The remainder of the paper is organized as follows. Section \[sec:pmcmc\] reviews some PMCMC schemes, including the particle marginal Metropolis–Hastings (PMMH) algorithm and particle Gibbs samplers. We recall how the validity of these algorithms can be established by showing that they are standard MCMC algorithms sampling from an extended target distribution. In particular, the PMMH algorithm can be thought of as a standard Metropolis–Hastings (MH) algorithm sampling from this extended target using a PF proposal for the states. 
Likewise, the theoretical validity of the conditional PF with backward sampling can be established by showing that it corresponds to a (“partially collapsed” – see @VanDyk2008) Gibbs sampler [@Whiteley2010]. Section \[sec:embeddedHMM\] is devoted to the ‘original’ EHMM method from @Neal2003 [@Neal2004]. At the core of this methodology is an extended target distribution which shares common features with the PMCMC target. We show that the EHMM method can be reinterpreted as a collapsed Gibbs sampling procedure for this target. This provides an alternative proof of validity of this algorithm. More interestingly, it is possible to come up with an original scheme to sample from this extended target distribution reminiscent of PMMH. However, whereas the PMMH algorithm relies on PF estimates of the likelihood $p_{\theta}(y_{1:T})$, this version of PMMH relies on an estimate of $p_{\theta}(y_{1:T})$ computed using a finite-state hidden Markov model, the cardinality of the state-space being $N$. The computational cost of both of these original EHMM methods is $O(N^2T)$ in contrast to the $O(NT)$-cost of PMCMC methods. The high computational cost of the original EHMM method has partially motivated the development of a novel class of alternative EHMM methods which bring the computational complexity down to $O(NT)$. As described in Section \[sec:embeddedHMMnewversion\], this is done by introducing a set of auxiliary variables playing the same rôle as the ancestor indices generated in the resampling step of a standard PF. This leads to the extended target distribution introduced in @ShestopaloffNeal2016. We show that this target coincides in a special case with the extended target of PMCMC methods when one uses the auxiliary particle filter [@PittShephard1999] and the resulting algorithm coincides with the conditional PF with backward sampling in this scenario. We show once more that the validity of this novel method can be established by using a collapsed Gibbs sampler. In Section \[sec:novel\_methodology\], we derive several novel, practical extensions to the alternative EHMM method. 
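The $O(N^2T)$ cost of the finite-state likelihood evaluation mentioned above is that of the forward algorithm, since each time step requires a sum over all $N \times N$ transitions. A generic forward-algorithm sketch (a standard construction, not the specific embedded HMM state-space discretisation of the paper):

```python
import numpy as np

def hmm_log_likelihood(init, trans, obs_lik):
    """Forward algorithm for an N-state HMM; cost O(N^2 T).

    init: (N,) initial distribution; trans[i, j] = P(state j at t | state i at t-1);
    obs_lik: (T, N) observation likelihoods g(y_t | state). Returns log p(y_{1:T}).
    """
    alpha = init * obs_lik[0]
    log_lik = 0.0
    for t in range(1, len(obs_lik)):
        c = alpha.sum()
        log_lik += np.log(c)
        alpha = (alpha / c) @ trans * obs_lik[t]  # O(N^2) matrix-vector product
    return log_lik + np.log(alpha.sum())

# sanity check: with uninformative observations (all likelihoods equal to one)
# the marginal likelihood is exactly one, i.e. the log-likelihood is zero
init = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
ll = hmm_log_likelihood(init, trans, np.ones((10, 2)))
```

The running normalisation `c` keeps the recursion numerically stable for long series.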
First, we show that the alternative EHMM framework can also be used to derive an algorithm which, once again, is very similar to the PMMH algorithm except that $p_{\theta}(y_{1:T})$ is estimated unbiasedly using a novel PF-type algorithm relying on local MCMC moves. Second, we derive additional bootstrap and general PF-type variants of the alternative EHMM method. In Section \[sec:general\_pmcmc\], we describe a general, unifying framework which admits all variants of standard PMCMC methods and all variants of alternative EHMM methods discussed in this work as special cases. This also allows us to generalize the ancestor sampling scheme from @LindstenJordanSchon2014. In Section \[sec:simulations\], we empirically compare the performance of all the algorithms mentioned above. Our results indicate that, as suggested in @ShestopaloffNeal2016, a properly tuned version of the conditional EHMM method (and hence particle Gibbs sampler) using MCMC moves proposed in @ShestopaloffNeal2016 can outperform existing methods in high dimensions while the (‘non-conditional’) EHMM methods using MCMC moves are a potentially interesting alternative to standard PFs for likelihood and state estimation for lower-dimensional models. ![Relationship between the various classes of algorithms discussed in this work. A general construction admitting all of these as special cases can be found in @Finke2015 [Section 1.4]. Novel methodology introduced in this work is highlighted in bold.[]{data-label="fig:relationship_between_algorithms"}](relationship_between_algorithms.pdf "fig:")\ Particle Markov chain Monte Carlo methods {#sec:pmcmc} ========================================= This section reviews PMCMC methods. For transparency, we first restrict ourselves in this section to the scenario in which the underlying PF used is the bootstrap PF, and then discuss the PMMH algorithm before finally considering the case of general PFs. Extended target distribution ---------------------------- Let $N$ be an integer such that $N \geq 2$. 
PMCMC methods rely on the following extended target density on $\varTheta \times \mathcal{X}^{NT}\times \{1,\dotsc,N\}^{N{(T-1)}+1}$ $$\tilde{\pi}{\bigl(\theta,b_{1:T},\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr)} \coloneqq \frac{1}{N^T} \times \underbrace{\pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)}}_{\mathclap{\text{\footnotesize{target}}}} \times \underbrace{\phi_\theta{\bigl(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}|x_{1:T}^{b_{1:T}},b_{1:T}\bigr)} }_{\mathclap{\text{\footnotesize{law of conditional PF}}}}, \label{eq:PMCMCtarget}$$ where $\smash{\pi(\theta,x_{1:T}) \coloneqq p(x_{1:T},\theta| y_{1:T})}$ represents the posterior distribution of interest. In addition, the *particles* $\mathbf{x}_t \coloneqq \{ x_t^1,\dotsc,x_t^N \} \in\mathcal{X}^N$, *ancestor indices* $\mathbf{a}_t \coloneqq \{a_t^1,\dotsc,a_t^N\} \in \{1,\dotsc,N\}^N$ and *particle indices* $b_{1:T} \coloneqq \{ b_1,\dotsc,b_T\}$ are related as $$\mathbf{x}_t^{-b_t}=\mathbf{x}_t\backslash x_t^{b_t}, \quad \mathbf{x}_{1:T}^{-b_{1:T}}=\bigl\{ \mathbf{x}_1^{-b_1},\dotsc,\mathbf{x}_T^{-b_T}\bigr\}, \quad \mathbf{a}_{t-1}^{-b_t}=\mathbf{a}_{t-1}\backslash a_{t-1}^{b_t}, \quad \mathbf{a}_{1:T-1}^{-b_{2:T}}=\bigl\{ \mathbf{a}_1^{-b_{2}},\dotsc,\mathbf{a}_{T-1}^{-b_T}\bigr\}.$$ In particular, given $b_T$, the particle indices $b_{1:T-1}$ are deterministically related to the ancestor indices by the recursive relationship $$b_t = a_t^{b_{t+1}}, \quad \text{for $t = T-1, \dotsc, 1$.}$$ Finally, for any $\smash{(x_{1:T}^{b_{1:T}},b_{1:T}) \in\mathcal{X}^T \times \bigl\{1,\dotsc,N\bigr\}^T}$, $\phi_\theta$ denotes a conditional distribution induced by an algorithm referred to as a conditional PF: $$\phi_\theta{\bigl(\mathbf{x}_{1:T}^{-b_{1:T}}, \mathbf{a}_{1:T-1}^{-b_{2:T}}\big| x_{1:T}^{b_{1:T}},b_{1:T}\bigr)} \coloneqq \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_1}}}^N \mu_\theta{\bigl( x_1^i\bigr)} \prod_{t=2}^T \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq 
\mathrlap{b_t}}}^N w_{\theta,t-1}^{a_{t-1}^i}\,f_\theta{\bigl( x_t^i\big|x_{t-1}^{a_{t-1}^i}\bigr)}, \label{eq:CPF}$$ where $$w_{\theta,t}^i \coloneqq \frac{g_\theta(y_t |x_t^i)}{\sum_{j=1}^N g_\theta(y_t | x_t^j)} \label{eq:multinomialresampling}$$ represents the normalised weight associated with the $i$th particle at time $t$. The key feature of this high-dimensional target is that by construction it ensures that $\smash{( \theta,x_{1:T}^{b_{1:T}})}$ is distributed according to the posterior of interest. PMCMC methods are MCMC algorithms which sample from this extended target, hence from the posterior of interest. Particle marginal Metropolis–Hastings {#subsec:pmmh} ------------------------------------- The **particle marginal Metropolis–Hastings (PMMH)** algorithm is an MCMC algorithm targeting $\tilde{\pi}( \theta,b_{1:T},\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}}) $ defined through \eqref{eq:PMCMCtarget}, and using a proposal of the form $$q{(\theta,{\theta'})} \times \underbrace{\Psi_{{\theta'}}( \mathbf{x}_{1:T},\mathbf{a}_{1:T-1})}_{\mathclap{\text{\footnotesize{law of \gls{PF}}}}} \times \underbrace{w_{{\theta'},T}^{b_T}}_{\mathclap{\text{\parbox{1.3cm}{\centering\footnotesize{path selection}}}}}, \label{eq:proposalparticlefilter}$$ where $b_{1:T}$ is again obtained via the reparametrisation $\smash{b_t = a_t^{b_{t+1}}}$ for $t = T-1, \dotsc,1$ and $\Psi_\theta(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1})$ is the law induced by a bootstrap PF $$\Psi_\theta( \mathbf{x}_{1:T},\mathbf{a}_{1:T-1}) \coloneqq \prod_{i=1}^N \mu_\theta( x_1^i) \prod_{t=2}^T \prod_{i=1}^N w_{\theta,t-1}^{a_{t-1}^i} f_\theta\bigl(x_t^i \big| x_{t-1}^{a_{t-1}^i}\bigr).
\label{eq:distributionPF}$$ The resulting acceptance probability is of the form $$1 \wedge \frac{\hat{p}_{{\theta'}}(y_{1:T})p({\theta'})}{\hat{p}_\theta(y_{1:T})p(\theta)} \frac{q({\theta'},\theta)}{q(\theta,{\theta'})}, \label{eq:PMMHratio}$$ where $$\hat{p}_\theta(y_{1:T}) \coloneqq \prod_{t=1}^T \biggl[\frac{1}{N} \sum_{i=1}^N g_\theta(y_t| x_t^i) \biggr]\label{eq:likelihoodestimatorPF}$$ is well known to be an unbiased estimate of $p_\theta(y_{1:T})$; see [@DelMoral2004]. We stress that the unbiased estimates appearing in the numerator and denominator of \eqref{eq:PMMHratio} each depend upon the particles (and ancestor indices) generated in distinct runs of the PF, but we suppress this dependence to keep the notation as simple as possible. The validity of the expression in \eqref{eq:PMMHratio} follows directly by noting that: $$\begin{aligned} \frac{\tilde{\pi}( \theta,b_{1:T},\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}}) }{\Psi_\theta( \mathbf{x}_{1:T},\mathbf{a}_{1:T-1}) w_{\theta,T}^{b_T}} & = \frac{1}{N^T} \frac{\pi( \theta,x_{1:T}^{b_{1:T}})}{\mu_\theta(x_1^{b_1}) \bigl[\prod_{t=2}^T w_{\theta,t-1}^{b_{t-1}} f_\theta(x_t^{b_t} | x_{t-1}^{b_{t-1}}) \bigr] w_{\theta,T}^{b_T}}\\ & = \frac{p( \theta| y_{1:T}) }{N^T p_\theta(y_{1:T})}\frac {\mu_\theta( x_1^{b_1}) g_\theta( y_1| x_1^{b_1}) \prod_{t=2}^T f_\theta( x_t^{b_t}| x_{t-1}^{b_{t-1}}) g_\theta( y_t| x_t^{b_t}) }{\mu_\theta( x_1^{b_1}) \bigl[\prod_{t=2}^T w_{\theta,t-1}^{b_{t-1}} f_\theta(x_t^{b_t} | x_{t-1}^{b_{t-1}}) \bigr] w_{\theta,T}^{b_T}}\\ & = p(\theta | y_{1:T}) \frac{\hat{p}_\theta( y_{1:T}) }{p_\theta(y_{1:T})}\\ & \propto\hat{p}_\theta(y_{1:T}) p(\theta), \label{eq:PMCMCratiotargetproposal}\end{aligned}$$ where we have again used that $b_t = a_t^{b_{t+1}}$, for $t = T-1,\dotsc,1$ and that $p(\theta|y_{1:T})/p_\theta(y_{1:T}) = p(\theta)/p(y_{1:T})$; see also [@AndrieuDoucetHolenstein2010 Theorem 2]. Particle Gibbs samplers {#subsec:cpf} ----------------------- To sample from $\pi(\theta,x_{1:T})$, one can use the **particle Gibbs (PG)** sampler.
The PG sampler mimics the block Gibbs sampler iterating draws from $\pi(\theta | x_{1:T})$ and $\pi(x_{1:T}| \theta) $. As sampling from $\pi(x_{1:T} | \theta) $ is typically impossible, we can use a so-called conditional PF kernel to emulate sampling from it. Given a current value of $x_{1:T}$, we perform the following steps (see @AndrieuDoucetHolenstein2009, @AndrieuDoucetHolenstein2010 [Section 4.5]): 1. Sample $b_{1:T}$ uniformly at random and set $x_{1:T}^{b_{1:T}}\leftarrow x_{1:T}$. 2. Run the conditional PF, i.e. sample from $\phi_\theta(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}} | x_{1:T}^{b_{1:T}},b_{1:T})$. 3. \[alg:simple\_pg\_last\_step\] Sample $b_T$ according to $\Pr(b_T=m) = w_{\theta,T}^{m}$ and set $b_t=a_t^{b_{t+1}}$ for $t = T-1, \dotsc, 1$. It was noticed in @Whiteley2010 that it is possible to improve Step \[alg:simple\_pg\_last\_step\]: for $t=T-1,\dotsc,1$, instead of deterministically setting $b_t=a_t^{b_{t+1}}$, one can use a backward sampling step which samples $$\Pr{\bigl(b_t=m\bigr)} \propto w_{\theta,t}^{m}f_\theta{\bigl(x_{t+1}^{b_{t+1}}\bigr| x_t^{m}\bigr)}. \label{eq:backwardsampling}$$ To establish the validity of this procedure (i.e. of the conditional PF with backward sampling), it was shown that this procedure is a (partially) collapsed Gibbs sampler of invariant distribution $\smash{\tilde{\pi}(b_{1:T},\mathbf{x}_{1:T},\mathbf{a}_{1:T-1} | \theta)}$, sampling recursively from $\smash{\tilde{\pi}(b_t | \theta,\mathbf{x}_{1:t},\mathbf{a}_{1:t-1},x_{t+1:T}^{b_{t+1:T}},b_{t+1:T})}$, for $t=T,T-1,\dotsc,1$.
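As an illustration, Steps 1–3 together with the backward-sampling replacement of Step 3 can be sketched as follows for the bootstrap case with a scalar state. This is a minimal sketch, not the authors' implementation; the interface (`mu_rvs`, `f_rvs`, `f_pdf`, `g_pdf`) is a hypothetical one introduced for this example.

```python
import numpy as np

def conditional_pf_bs(y, x_ref, mu_rvs, f_rvs, f_pdf, g_pdf, N, rng):
    """Conditional bootstrap PF with backward sampling for a scalar state.

    Hypothetical interface: mu_rvs(n, rng) draws n samples from mu_theta,
    f_rvs(xp, rng) draws from f_theta(.|xp) componentwise, f_pdf(x, xp)
    evaluates f_theta(x|xp) and g_pdf(t, x) evaluates g_theta(y_t|x).
    Returns a new trajectory, leaving pi(x_{1:T}|theta) invariant.
    """
    T = len(y)
    x = np.empty((T, N))
    a = np.empty((T - 1, N), dtype=int)
    b = rng.integers(N, size=T)            # Step 1: uniform reference slots
    x[0] = mu_rvs(N, rng)
    x[0, b[0]] = x_ref[0]                  # plant the reference path
    for t in range(1, T):                  # Step 2: conditional PF
        w = g_pdf(t - 1, x[t - 1])
        a[t - 1] = rng.choice(N, size=N, p=w / w.sum())  # multinomial resampling
        x[t] = f_rvs(x[t - 1, a[t - 1]], rng)
        a[t - 1, b[t]] = b[t - 1]          # reference keeps its own ancestor
        x[t, b[t]] = x_ref[t]
    out = np.empty(T)                      # Step 3, with backward sampling:
    w = g_pdf(T - 1, x[T - 1])             # first b_T ~ w_T
    k = rng.choice(N, p=w / w.sum())
    out[T - 1] = x[T - 1, k]
    for t in range(T - 2, -1, -1):         # then Pr(b_t=m) ∝ w_t^m f(x_{t+1}^{b_{t+1}}|x_t^m)
        w = g_pdf(t, x[t]) * f_pdf(out[t + 1], x[t])
        k = rng.choice(N, p=w / w.sum())
        out[t] = x[t, k]
    return out
```

Alternating this kernel with draws from $\pi(\theta|x_{1:T})$ yields the PG sampler described above.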
Indeed, we have $$\begin{aligned} \MoveEqLeft \tilde{\pi}{\bigl(b_t\bigr| \theta,\mathbf{x}_{1:t},\mathbf{a}_{1:t-1},x_{t+1:T}^{b_{t+1:T}},b_{t+1:T}\bigr)}\\ & \propto \sum_{b_{1:t-1}}\sum_{\mathbf{a}_{t:T-1}} \idotsint \frac{\pi{\bigl( \theta,x_{1:T}^{b_{1:T}}\bigr)} }{N^T} \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_1}}}^N \mu_\theta{\bigl( x_1^i\bigr)} \smashoperator{\prod_{n=2}^T} \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_n}}}^N w_{\theta,n-1}^{a_{n-1}^i}f_\theta{\bigl(x_n ^i\bigr| x_{n-1}^{a_{n-1}^i}\bigr)}\,\mathrm{d} \mathbf{x}_{t+1:T}^{-b_{t+1:T}}\\ & \propto \smashoperator{\sum_{b_{1:t-1}}} \pi{\bigl( \theta,x_{1:T}^{b_{1:T}}\bigr)} \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_1}}}^N \mu_\theta{\bigl( x_1^i\bigr)} \prod_{n=2}^t \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_n}}}^N w_{\theta,n-1}^{a_{n-1}^i}f_\theta{\bigl(x_n^i\bigr| x_{n-1}^{a_{n-1}^i}\bigr)} \\ & = \smashoperator{\sum_{b_{1:t-1}}} \pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)} \frac{\prod_{i=1}^N \mu_\theta{\bigl( x_1^i\bigr)} \prod_{n=2}^t \prod_{i=1}^N w_{\theta,n-1}^{a_{n-1}^i}f_\theta{\bigl(x_n^i\bigr| x_{n-1}^{a_{n-1}^i}\bigr)} }{\mu_\theta{\bigl(x_1^{b_1}\bigr)} \prod_{n=2}^t w_{\theta,n-1}^{b_{n-1}}f_\theta{\bigl(x_n^{b_n}\bigr| x_{n-1}^{b_{n-1}}\bigr)} }, \quad \text{as $\smash{a_{n-1}^{b_n} = b_{n-1}}$,}\label{eq:CPFBS}\\ & \propto \smashoperator{\sum_{b_{1:t-1}}} f_\theta{\bigl(x_{t+1}^{b_{t+1} }\bigr| x_t^{b_t}\bigr)} w_{\theta,t}^{b_t}\\ & \propto f_\theta{\bigl(x_{t+1}^{b_{t+1}}\bigr| x_t^{b_t}\bigr)} w_{\theta,t}^{b_t},\end{aligned}$$ where we have used that the numerator of the ratio appearing in \eqref{eq:CPFBS} is independent of $b_{1:t-1}$. Extension to the fully-adapted auxiliary particle filter {#Section:perfectadaptationPF} -------------------------------------------------------- It is straightforward to employ a more general class of PFs in a PMCMC context.
One such PF is the **fully-adapted auxiliary particle filter (FA-APF)** [@PittShephard1999], whose incorporation within PMCMC was explored in @Pitt2012. It is described in this subsection. When it is possible to sample from $p_\theta(x_1|y_1) \propto \mu_\theta(x_1) g_\theta(y_1| x_1) $ and $p_\theta(x_t| x_{t-1},y_t) \propto f_\theta(x_t| x_{t-1}) g_\theta(y_t|x_t)$ and to compute $p_\theta(y_1) = \int \mu_\theta(x_1) g_\theta(y_1|x_1) \mathrm{d}x_1$ and $p_\theta(y_t|x_{t-1}) = \int f_\theta(x_t|x_{t-1}) g_\theta(y_t| x_t) \mathrm{d} x_t$, it is possible to define the target distribution $\smash{\tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}})}$ using an alternative conditional PF – the conditional FA-APF – in \eqref{eq:PMCMCtarget} (more precisely, it is in these circumstances that one can implement the associated FA-APF): $$\phi_\theta{\bigl(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a} _{1:T-1}^{-b_{2:T}}\bigr| x_{1:T}^{b_{1:T}},b_{1:T}\bigr)} = \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_1}}}^N p_\theta{\bigl(x_1^i\bigr| y_1\bigr)} \prod_{t=2}^T \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_t}}}^N w_{\theta,t-1}^{a_{t-1}^i}p_\theta{\bigl(x_t^i\bigr| x_{t-1}^{a_{t-1}^i},y_t\bigr)}, \label{eq:CPFperfectadaptation}$$ where $$w_{\theta,t}^{i} \coloneqq \frac{p_\theta(y_{t+1} | x_{t}^{i})}{\sum_{j=1}^Np_\theta(y_{t+1}| x_{t}^j)}. \label{eq:perfectadaptationweight}$$ In this case, we can target the extended distribution $\smash{\tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}})}$ defined through \eqref{eq:CPFperfectadaptation}, using a PMMH algorithm with proposal $$q{\bigl( \theta,{\theta'}\bigr)} \times \underbrace{\Psi_{{\theta'}}{\bigl(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}\bigr)}}_{\mathclap{\text{\footnotesize{law of \gls{FAAPF}}}}} \times \underbrace{\frac{1}{N}}_{\mathclap{\text{\parbox{1.3cm}{\centering\footnotesize{path selection}}}}},$$ i.e.
we pick $b_T$ uniformly at random, then set $\smash{b_t=a_t^{b_{t+1}}}$ for $t=T-1,\dotsc,1$ and $\Psi_\theta{\bigl(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}\bigr)}$ is the distribution associated with the FA-APF instead of the bootstrap PF $$\Psi_\theta{\bigl(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}\bigr)} = \prod_{i=1}^N p_\theta{\bigl(x_1^i\bigr| y_1\bigr)} \prod_{t=2}^T \prod_{i=1}^N w_{\theta,t-1}^{a_{t-1}^i}p_\theta{\bigl(x_t^i\bigr| x_{t-1}^{a_{t-1}^i},y_t\bigr)} . \label{eq:distributionperfectadaptation}$$ It is easy to check that the resulting acceptance probability is also of the form given in \eqref{eq:PMMHratio} but with $$\hat{p}_\theta(y_{1:T}) = p_\theta(y_1) \prod_{t=2}^T \biggl[\frac{1}{N} \sum_{i=1}^N p_\theta(y_t|x_{t-1}^i)\biggr]. \label{eq:likelihoodestimatorperfectadaptationPF}$$ The conditional FA-APF with backward sampling proceeds by first running the conditional FA-APF defined in \eqref{eq:CPFperfectadaptation}, then sampling $b_T$ uniformly at random and finally sampling $b_{T-1},\dotsc,b_1$ backwards using $$\tilde{\pi}\bigl(b_t \big| \theta,\mathbf{x}_{1:t},\mathbf{a}_{1:t-1}, x_{t+1:T}^{b_{t+1:T}},b_{t+1:T}\bigr) \propto f_\theta\bigl(x_{t+1}^{b_{t+1}} \big| x_t^{b_t}\bigr), \label{eq:backwardperfectadaptation}$$ where the expression in \eqref{eq:backwardperfectadaptation} is obtained using calculations similar to those in \eqref{eq:CPFBS}. Extension to general auxiliary particle filters {#subsec:apf} ----------------------------------------------- The previous section demonstrated that the FA-APF leads straightforwardly to valid PMCMC algorithms and will allow natural connections to be made to certain EHMM methods. Here, we show that, as was established in @Pitt2012 [Appendix 8.2], *any* general **auxiliary particle filter (APF)** can be employed in this context and will lead to natural extensions of these methods. To facilitate later developments, an explicit representation of the associated extended target distribution and related quantities is useful.
Viewing the APF as a sequential importance resampling algorithm for an appropriate sequence of target distributions as described in @JohansenDoucet2008, it is immediate that the density associated with such an algorithm is simply: $$\begin{aligned} \Psi_\theta^{\mathbf{q}_\theta}(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}) = \prod_{i=1}^N q_{\theta,1}(x_1^i) \prod_{t=2}^T \prod_{i=1}^N w_{\theta,t-1}^{a_{t-1}^i} q_{\theta,t}\bigl(x_t^i \big|x_{t-1}^{a_{t-1}^i}\bigr),\end{aligned}$$ where $\mathbf{q}_\theta = \{q_{\theta, t}\}_{t=1}^T$ and $q_{\theta,t}$ denotes the proposal distribution employed at time $t$ (with dependence of this distribution upon the observation sequence suppressed from the notation) and $\smash{w_{\theta,t}^i = v_{\theta,t}^i / \sum_{j=1}^N v_{\theta,t}^j}$ with: $$\begin{aligned} v_{\theta,t}^i = \begin{cases} \dfrac{\mu_\theta{\bigl(x_1^i\bigr)} g_\theta{\bigl(y_1|x_1^i\bigr)} \tilde{p}_\theta{\bigl(y_2|x_1^i\bigr)}}{q_{\theta,1}{\bigl(x_1^i\bigr)}}, & \text{if $t = 1$,}\\ \dfrac{f_\theta{\bigl(x_t^i|x_{t-1}^{a_{t-1}^i}\bigr)} g_\theta{\bigl(y_t|x_t^i\bigr)} \tilde{p}_\theta{\bigl(y_{t+1}|x_t^i\bigr)}}{q_{\theta,t}{\bigl(x_t^i | x_{t-1}^{a_{t-1}^i}\bigr)} \tilde{p}_\theta{\bigl(y_t|x_{t-1}^{a_{t-1}^i}\bigr)}}, & \text{if $1 < t < T$,}\\ \dfrac{f_\theta{\bigl(x_T^i|x_{T-1}^{a_{T-1}^i}\bigr)} g_\theta{\bigl(y_T|x_T^i\bigr)}}{q_{\theta,T}{\bigl(x_T^i|x_{T-1}^{a_{T-1}^i}\bigr)} \tilde{p}_\theta{\bigl(y_T|x_{T-1}^{a_{T-1}^i}\bigr)}}, & \text{if $t = T$,} \end{cases}\label{eq:apfweights}\end{aligned}$$ and $\tilde{p}_\theta(y_{t+1}|x_t^i)$ denoting the approximation of the predictive likelihood employed within the weighting of the APF. Note that $\tilde{p}_\theta(y_{t+1}|x_t)$ can be any positive function of $x_t$ and the simpler sequential importance resampling is recovered by setting $\tilde{p}_\theta(y_{t+1}|x_t) \equiv 1$, with the bootstrap PF emerging as a particular case thereof when $q_{\theta, t}(x_t|x_{t-1}) = f_\theta(x_t|x_{t-1})$.
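The case distinction above translates directly into code. The following log-space sketch is vectorised over particles; the callables (`log_mu`, `log_f`, `log_g`, `log_q`, `log_ptilde`) are hypothetical stand-ins for $\mu_\theta$, $f_\theta$, $g_\theta$, $q_{\theta,t}$ and $\tilde{p}_\theta$, and time is indexed from $0$ rather than $1$.

```python
import numpy as np

def apf_log_weight(t, T, x, x_prev, log_mu, log_f, log_g, log_q, log_ptilde):
    """Unnormalised log-weights log v_{theta,t}^i for the APF (0-based time).

    x holds the current particles; x_prev holds their resampled parents
    (already gathered according to the ancestor indices).  log_ptilde(s, x)
    approximates log p_theta(y_s | x_{s-1}).
    """
    if t == 0:                                     # first time step
        lw = log_mu(x) + log_g(0, x) - log_q(0, x, None)
        if T > 1:
            lw = lw + log_ptilde(1, x)             # look-ahead term
        return lw
    lw = (log_f(x, x_prev) + log_g(t, x)
          - log_q(t, x, x_prev) - log_ptilde(t, x_prev))
    if t < T - 1:
        lw = lw + log_ptilde(t + 1, x)             # no look-ahead at the final step
    return lw
```

Setting `log_ptilde` to zero and taking `log_q` equal to `log_f` (resp. `log_mu` at the first step) recovers the bootstrap PF weights $g_\theta(y_t|x_t^i)$, as noted above.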
Associated with the APF is a conditional APF of the form: $$\phi^{\mathbf{q}_\theta}_\theta{\bigl(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr| x_{1:T}^{b_{1:T}},b_{1:T}\bigr)} = \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_1}}}^N q_{\theta,1}{\bigl(x_1^i\bigr)} \prod_{t=2}^T \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_t}}}^N w_{\theta,t-1}^{a_{t-1}^i}\,q_{\theta,t}{\bigl(x_t^i\bigr|x_{t-1}^{a_{t-1}^i}\bigr)}. \label{eq:CAPF}$$ A PMMH algorithm is arrived at by employing the extended target distribution, $$\tilde{\pi}^{\mathbf{q}_\theta}{\bigl(\theta,b_{1:T},\mathbf{x}_{1:T},\mathbf{a} _{1:T-1}^{-b_{2:T}}\bigr)} = \frac{1}{N^T} \times \underbrace{\pi{\bigl( \theta,x_{1:T}^{b_{1:T}}\bigr)}}_{\mathclap{\text{\footnotesize{target}}}} \times \underbrace{\phi^{\mathbf{q}_\theta}_\theta{\bigl(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr| x_{1:T}^{b_{1:T}},b_{1:T}\bigr)} }_{\mathclap{\text{\footnotesize{law of conditional \gls{APF}}}}}, \label{eq:APMCMCtarget}$$ and proposal distribution, $$q{(\theta,{\theta'})} \times \underbrace{\Psi_{{\theta'}}^{\mathbf{q}_{\theta'}}{(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1})}}_{\mathclap{\text{\footnotesize{law of \gls{APF}}}}} \times \underbrace{w_{{\theta'},T}^{b_T}}_{\mathclap{\text{\parbox{1.3cm}{\centering\footnotesize{path selection}}}}}.
\label{eq:proposalauxiliaryparticlefilter}$$ One can straightforwardly verify that this leads to a PMMH acceptance probability of the form stated in \eqref{eq:PMMHratio} but using the natural unbiased estimator of the normalising constant associated with the APF, $$\hat{p}_\theta(y_{1:T}) = \prod_{t=1}^T \biggl[\frac{1}{N} \sum_{i=1}^N v_{\theta,t}^i\biggr].$$ We conclude this section by noting that although the constructions developed above were presented for simplicity with multinomial resampling employed during every iteration of the algorithm, it is straightforward to incorporate more sophisticated, adaptive resampling schemes within this framework. Original embedded hidden Markov models {#sec:embeddedHMM} ====================================== Extended target distribution ---------------------------- The **embedded hidden Markov model (EHMM)** method of [@Neal2003; @Neal2004] is based on the introduction of a target distribution on $\varTheta \times \mathcal{X}^{NT}\times \{ 1,\dotsc,N \}^T$ of the form $$\begin{aligned} \tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T}) = \frac{1}{N^T} \times \underbrace{\pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)}}_{\mathclap{\text{\footnotesize{target}}}} \times \underbrace{\prod_{t=1}^T \;\; \Bigl\{ \smashoperator{\prod_{i=b_t -1}^1} \widetilde{R}_{\theta,t}{\bigl(x_t^i\bigr| x_t ^{i+1}\bigr)} \cdot \smashoperator{\prod_{i=b_t+1}^N} R_{\theta,t}{\bigl(x_t^i\bigr| x_t^{i-1}\bigr)} \Bigr\}}_{\mathclap{\text{\footnotesize{law of conditional random grid generation}}}}, \label{eq:embeddedHMM2003target}\end{aligned}$$ where $R_{\theta,t}$ is a $\rho_{\theta,t}$-invariant Markov transition kernel, i.e. $\smash{\int\rho_{\theta,t}(x) R_{\theta,t}(x^\prime| x) \mathrm{d} x = \rho_{\theta,t}(x^\prime)}$, and $\smash{\widetilde{R}_{\theta,t}}$ is its reversal, i.e. $\smash{\widetilde{R}_{\theta,t}(x^\prime|x) = \rho_{\theta,t}(x^\prime) R_{\theta,t}(x|x^\prime) / \rho_{\theta,t}(x)}$ (for $\rho_{\theta,t}$-almost every $x$ and $x^\prime$).
Similarly to the PMCMC extended target distribution, the key feature of $\tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T})$ is that, by construction, it ensures that the associated marginal distribution of $\smash{(\theta,x_{1:T}^{b_{1:T}})}$ is the posterior of interest. Metropolis–Hastings algorithm {#subsec:ehmm-mh} ----------------------------- As detailed in the next section, the algorithm proposed in @Neal2003 can be reinterpreted as a Gibbs sampler targeting $\tilde{\pi}(b_{1:T},\mathbf{x}_{1:T}|\theta)$. We present here an alternative, original algorithm to sample from $\tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T})$. It relies on a proposal of the form $$q{\bigl( \theta,{\theta'}\bigr)} \times \underbrace{\Psi_{{\theta'}}{\bigl(\mathbf{x}_{1:T}\bigr)} }_{\mathclap{\parbox{2.3cm}{\text{\parbox{2.3cm}{\centering\footnotesize{law of random grid generation}}}}}} \times \underbrace{q_{{\theta'}}{\bigl(b_{1:T}\bigr| \mathbf{x}_{1:T}\bigr)}}_{\mathclap{\text{\parbox{2cm}{\centering\footnotesize{path selection}}}}}, \label{eq:proposalembeddedHMM2003}$$ where $$\Psi_\theta{\bigl( \mathbf{x}_{1:T}\bigr)} \coloneqq \prod_{t=1}^T \Bigl\{ \rho_{\theta,t}{\bigl(x_t^1\bigr)} \smashoperator{\prod_{i=2}^N} R_{\theta,t}{\bigl(x_t^i\bigr| x_t^{i-1}\bigr)} \Bigr\} \label{eq:distributionrandomHMMfilter}$$ is sometimes referred to as the *ensemble base measure* [@Neal2011] and $$q_\theta(b_{1:T} | \mathbf{x}_{1:T}) \coloneqq \frac{\tilde{p}_\theta{\bigl( x_{1:T}^{b_{1:T}},y_{1:T}\bigr)}}{\sum_{b_{1:T}^\prime}\tilde{p}_\theta{\bigl( x_{1:T}^{b_{1:T}^\prime},y_{1:T}\bigr)}} = \frac{1}{N^T}\frac{\tilde{p}_\theta{\bigl(x_{1:T}^{b_{1:T}},y_{1:T}\bigr)}}{\tilde{p}_\theta(y_{1:T})}.
\label{eq:pathselectionEHMM}$$ In this expression, we have (noting that $\tilde{p}_\theta(x_{1:T},y_{1:T})$ is no longer a probability density with respect to Lebesgue measure) $$\tilde{p}_\theta( x_{1:T},y_{1:T}) \coloneqq \frac{\mu_\theta(x_1) g_\theta(y_1|x_1)}{\rho_{\theta,1}(x_1)} \prod_{t=2}^T \frac{f_\theta(x_t|x_{t-1}) g_\theta(y_t|x_t)}{\rho_{\theta,t}(x_t)} \label{eq:modifiedjointposterior}$$ and $$\tilde{p}_\theta(y_{1:T}) \coloneqq \frac{1}{N^T}\sum _{b_{1:T}^\prime}\tilde{p}_\theta{\bigl( x_{1:T}^{b_{1:T}^\prime},y_{1:T}\bigr)} . \label{eq:likelihoodestimatorembeddedHMM}$$ To sample from $\Psi_\theta( \mathbf{x}_{1:T})$, we sample $\smash{x_t^{1}\sim\rho_{\theta,t}}(x_t^{1})$ and $\smash{\mathbf{x}_t^{-1}}\sim\smash{\prod_{i=2}^N R_{\theta,t}(x_t^i| x_t^{i-1})}$ for $t=1,\dotsc,T$. Hence, at time $t$ all of the particles are marginally distributed according to $\rho_{\theta,t}$. When $\smash{R_{\theta,t}(x^\prime| x) = \rho_{\theta,t}(x^\prime)}$, this corresponds to the algorithm proposed in @LinChen2005. Sampling from the high-dimensional discrete distribution $q_\theta(b_{1:T}| \mathbf{x}_{1:T})$ can be performed in $O(N^2T)$ operations with the finite state-space filter using the $N$ states $(x_t^i)$ at time $t$, transition probabilities proportional to $f_\theta(x_t^j| x_{t-1}^i)$ and conditional probabilities of the observations proportional to $g_\theta(y_t| x_t^i) / \rho_{\theta,t}(x_t^i)$. We also obtain as a by-product $\tilde{p}_\theta(y_{1:T})$, which is an unbiased estimate of $p_\theta(y_{1:T})$. The resulting algorithm targeting the extended distribution given in \eqref{eq:embeddedHMM2003target} with the proposal given in \eqref{eq:proposalembeddedHMM2003} admits an acceptance probability of the form $$1 \wedge \frac{\tilde{p}_{{\theta'}}(y_{1:T})p({\theta'})}{\tilde{p}_\theta(y_{1:T}) p(\theta)} \frac{q({\theta'},\theta)}{q(\theta,{\theta'})}, \label{eq:embeddedHMMratio}$$ i.e.
it looks very much like the PMMH algorithm, except that instead of having likelihood terms estimated by a particle filter, these likelihood terms are estimated using a finite state-space filter. To establish the correctness of the acceptance probability given in \eqref{eq:embeddedHMMratio}, we note that $$\begin{aligned} \frac{\tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T})}{\Psi_\theta(\mathbf{x}_{1:T}) q_\theta(b_{1:T}| \mathbf{x}_{1:T}) } & = \frac{N^{-T}\pi(\theta,x_{1:T}^{b_{1:T}}) \prod_{t=1}^T \bigl\{ \prod_{i=b_t -1}^1 \widetilde{R}_{\theta,t}(x_t^i| x_t^{i+1}) \cdot \prod_{i=b_t+1}^N R_{\theta,t}(x_t^i| x_t^{i-1}) \bigr\} }{ \prod_{t=1}^T \bigl\{ \rho_{\theta,t}( x_t^{1}) \cdot \prod_{i=2}^N R_{\theta,t}(x_t^i| x_t^{i-1}) \bigr\} N^{-T}\frac{\tilde{p}_\theta( x_{1:T}^{b_{1:T}},y_{1:T}) }{\tilde{p}_\theta( y_{1:T}) }}\\ & = p(\theta| y_{1:T})\, \frac{p_\theta( x_{1:T}^{b_{1:T}},y_{1:T}) / p_\theta(y_{1:T}) }{\prod_{t=1}^T \rho_{\theta,t}(x_t^{b_t})} \biggl[ \frac{p_\theta(x_{1:T}^{b_{1:T}},y_{1:T})}{\tilde{p}_\theta(y_{1:T}) \prod_{t=1}^T \rho_{\theta,t}( x_t^{b_t})}\biggr]^{-1}\\ & = p(\theta| y_{1:T}) \frac{\tilde{p}_\theta(y_{1:T})}{p_\theta(y_{1:T})} \propto \tilde{p}_\theta(y_{1:T}) p(\theta), \label{eq:embeddedHMMratiotargetproposal}\end{aligned}$$ where we have used that $$\frac{\tilde{p}_\theta(x_{1:T},y_{1:T})}{\tilde{p}_\theta{\bigl( y_{1:T}\bigr)}} =\frac{p_\theta(x_{1:T},y_{1:T})}{\tilde{p}_\theta(y_{1:T}) \prod_{t=1}^T \rho_{\theta,t}{\bigl(x_t\bigr)}}.$$ In addition, we have used the following identity which we will also exploit in the next section: if $R$ is a $\rho$-invariant Markov kernel and $\widetilde{R}$ the associated reversal, then for any $b,c \in \{1,\dotsc,N\}$, $$\begin{aligned} \smashoperator{\prod_{i=b-1}^1} \widetilde{R}{\bigl(x^i\bigr| x^{i+1}\bigr)} \cdot \rho{\bigl(x^{b}\bigr)} \cdot \smashoperator{\prod_{i=b+1}^N} R{\bigl(x^i\bigr| x^{i-1}\bigr)} = \smashoperator{\prod_{i=c-1}^1} \widetilde{R}{\bigl(x^i\bigr| x^{i+1}\bigr)} \cdot \rho{\bigl(x^c\bigr)} \cdot
\smashoperator{\prod_{i=c+1}^N} R{\bigl(x^i\bigr| x^{i-1}\bigr)}. \label{eq:useful_reversal_kernel_identity}\end{aligned}$$ Interpretation as a collapsed Gibbs sampler {#subsec:ehmm-gibbs} ------------------------------------------- Consider the following Gibbs-sampler-type algorithm to sample from $\pi(x_{1:T}|\theta)$: 1. \[enum:ehmm\_gibbs:1\] Sample $b_{1:T}$ uniformly at random on $\smash{\{1,\dotsc,N\}^T}$ and set $\smash{x_{1:T}^{b_{1:T}}\leftarrow x_{1:T}}$; 2. \[enum:ehmm\_gibbs:2\] Sample from $\smash{\tilde{\pi}(\mathbf{x}_{1:T}^{-b_{1:T}}| \theta,b_{1:T},x_{1:T}^{b_{1:T}})}$; 3. \[enum:ehmm\_gibbs:3\] Sample $\smash{b_T\sim\tilde{\pi}(b_T|\theta,\mathbf{x}_{1:T})}$ then $\smash{b_{T-1}\sim\tilde{\pi}(b_{T-1}| \theta,\mathbf{x}_{1:T-1},x_T^{b_T},b_T)}$ and so on. It is obvious that Steps \[enum:ehmm\_gibbs:1\] and \[enum:ehmm\_gibbs:2\] coincide with the first steps of the algorithm described in @Neal2003. For Step \[enum:ehmm\_gibbs:3\], we note that $$\begin{aligned} \MoveEqLeft \tilde{\pi}{\bigl(b_t\bigr| \theta,\mathbf{x}_{1:t},x_{t+1:T}^{b_{t+1:T}},b_{t+1:T}\bigr)} \\ & \propto \smashoperator{\sum_{b_{1:t-1}}} \idotsint \pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)} \prod_{n=1}^T \;\; \Bigl\{ \smashoperator{\prod_{i=b_n -1}^1} \widetilde{R}_{\theta,n}{\bigl(x_n^i\bigr| x_n^{i+1}\bigr)} \cdot \smashoperator{\prod_{i=b_n+1}^N} R_{\theta,n}{\bigl(x_n^i\bigr| x_n^{i-1}\bigr)} \Bigr\}\, \mathrm{d}\mathbf{x}_{t+1}^{-b_{t+1}}\cdots \mathrm{d} \mathbf{x}_T^{-b_T}\\ & =\smashoperator{\sum_{b_{1:t-1}}} \pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)} \smashoperator{\prod_{n=1}^t} \;\; \Bigl\{\smashoperator{\prod_{i=b_n -1}^1} \widetilde{R}_{\theta,n}{\bigl(x_n^i \bigr| x_n^{i+1}\bigr)} \cdot \smashoperator{\prod_{i=b_n+1}^N} R_{\theta,n}{\bigl(x_n^i\bigr| x_n^{i-1}\bigr)} \Bigr\}\\ & = \smashoperator{\sum_{b_{1:t-1}}} \pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)} \prod_{n=1}^t \frac{ \prod_{i=b_n -1}^1 \widetilde{R}_{\theta,n}{\bigl(x_n^i\bigr| x_n^{i+1}\bigr)} \cdot
\rho_{\theta,n}{\bigl( x_n^{b_n}\bigr)} \cdot \prod_{i=b_n+1}^N R_{\theta,n}{\bigl(x_n^i\bigr| x_n^{i-1}\bigr)} }{\rho_{\theta,n}{\bigl( x_n^{b_n}\bigr)} }\\ & \propto\smashoperator{\sum_{b_{1:t-1}}}\,\frac{\pi{\bigl( \theta,x_{1:T}^{b_{1:T}}\bigr)} }{ \prod_{n=1}^t \rho_{\theta,n}{\bigl( x_n^{b_n}\bigr)} },\end{aligned}$$ where, by \eqref{eq:useful_reversal_kernel_identity}, the numerator in the penultimate line is independent of $b_n$. Since $$\begin{aligned} \frac{\pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)}}{\prod_{n=1}^t \rho_{\theta,n}{\bigl(x_n^{b_n}\bigr)}} & \propto \frac{p_{\theta }{\bigl( x_{1:T}^{b_{1:T}},y_{1:T}\bigr)} }{\prod_{n=1}^t \rho_{\theta,n}{\bigl(x_n^{b_n}\bigr)}}\\ & \propto \underbrace{\prod_{n=1}^t \frac{f_\theta\bigl(x_n^{b_n}\bigr| x_{n-1}^{b_{n-1}}\bigr)g_\theta{\bigl(y_n\bigr| x_n^{b_n}\bigr)}}{\rho_{\theta,n}{\bigl(x_n^{b_n}\bigr)} }}_{\mathclap{\text{\footnotesize{modified posterior $\tilde{p}_\theta(x_{1:t}^{b_{1:t}}| y_{1:t})$}}}} \cdot \smashoperator{\prod_{n=t+1}^T} f_\theta{\bigl(x_n^{b_n}\bigr| x_{n-1}^{b_{n-1}}\bigr)} g_\theta{\bigl(y_n\bigr| x_n^{b_n}\bigr)},\end{aligned}$$ we can compute the marginal $\smash{\tilde{p}_\theta(x_t^{b_t}| y_{1:t}) \coloneqq \sum_{b_{1:t-1}} \tilde{p}_\theta(x_{1:t}^{b_{1:t}}| y_{1:t})}$ using the same (finite state-space) filter discussed in the previous section and so $$\tilde{\pi}{\bigl(b_t\bigr| \theta,\mathbf{x}_{1:t},x_{t+1:T}^{b_{t+1:T}},b_{t+1:T}\bigr)} \propto\tilde{p}_\theta{\bigl(x_t^{b_t}\bigr| y_{1:t}\bigr)} f_\theta{\bigl(x_{t+1}^{b_{t+1}}\bigr| x_t^{b_t}\bigr)}$$ coinciding with the expression obtained in @Neal2003. This is an alternative proof of the validity of the algorithm. The present derivation is more complex than that in @Neal2003, which relies on a simple detailed balance argument. One potential benefit of our approach is that it can be extended systematically to any extended target admitting a similar structure; see for example @LindstenSchon2013 [p. 116] for extensions to the non-Markovian case.
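The finite state-space filter invoked above reduces to a standard forward recursion over the $N$ grid points at each time step, costing $O(N^2)$ per step and hence $O(N^2T)$ overall. Below is a minimal sketch computing $\log\tilde{p}_\theta(y_{1:T})$; the arrays `w1`, `trans` and `obs` are hypothetical containers for the terms $\mu_\theta g_\theta/\rho_{\theta,1}$, $f_\theta$ and $g_\theta/\rho_{\theta,t}$ evaluated on the grid.

```python
import numpy as np

def ehmm_log_ptilde(w1, trans, obs):
    """Forward recursion for log tilde{p}_theta(y_{1:T}), i.e. the log of
    N^{-T} times the sum over all N^T index paths b_{1:T} of tilde{p}(x^b, y).

    w1[i]        : mu(x_1^i) g(y_1|x_1^i) / rho_1(x_1^i)
    trans[t][i,j]: f(x_{t+2}^j | x_{t+1}^i)   (one N x N matrix per transition)
    obs[t][j]    : g(y_{t+2}|x_{t+2}^j) / rho_{t+2}(x_{t+2}^j)
    """
    N = len(w1)
    alpha = np.asarray(w1, dtype=float) / N   # one 1/N factor per time step
    for F, o in zip(trans, obs):
        alpha = (alpha @ F) * o / N           # O(N^2) per step
    return np.log(alpha.sum())
```

Run with per-step normalisation and followed by backward sampling of $b_T,\dotsc,b_1$, the same recursion yields a draw from $q_\theta(b_{1:T}|\mathbf{x}_{1:T})$ in $O(N^2T)$ operations.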
Finally, we note that this algorithm may be viewed as a special case of the framework proposed in @Tjelmeland2004 and simplifies to Barker’s kernel [@Barker1965] if $N=2$ and $T=1$. Alternative embedded hidden Markov models {#sec:embeddedHMMnewversion} ========================================= In its original version, the EHMM method has a computational cost per iteration of order $O(N^2T)$ compared to $O(NT)$ for PMCMC methods, and it samples particles independently across time, which can be inefficient if the latent states are strongly correlated. The new version of EHMM methods, which was proposed in @ShestopaloffNeal2016, resolves both of these limitations. It can be viewed as a PMCMC-type algorithm making use of a new type of PF that we term the **MCMC FA-APF**, given its connection to the FA-APF which we detail below. Extended target distribution ---------------------------- This version of the EHMM method, henceforth referred to as the *alternative* EHMM method, relies on the extended target distribution $$\tilde{\pi}{\bigl( \theta,b_{1:T},\mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr)} = \frac{1}{N^T} \times \underbrace{\pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)}}_{\mathclap{\text{\footnotesize{target}}}} \times \underbrace{\phi_\theta\bigl(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr|x_{1:T}^{b_{1:T}},b_{1:T}\bigr)}_{\mathclap{\text{\footnotesize{law of conditional \gls{MCMCFAAPF}}}}},$$ where we will refer to the algorithm inducing the following distribution as the conditional MCMC FA-APF for reasons which are made clear below: $$\begin{aligned} \MoveEqLeft \phi_\theta{\bigl( \mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr| x_{1:T}^{b_{1:T}},b_{1:T}\bigr)}\\* & = \smashoperator{\prod_{\smash{i=b_1-1}}^1}\widetilde{R}_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i+1}\bigr)} \cdot \smashoperator{\prod_{\smash{i=b_1+1}}^N} R_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i-1}\bigr)} \label{eq:conditionalPFnovelEHMMtarget}\\ & \quad \times \smashoperator{\prod_{t=2}^{\smash{T}}} \;\; \Bigl\{\smashoperator{\prod_{i=b_t
-1}^{\smash{1}}} \widetilde{R}_{\theta,t}{\bigl(x_t^i,a_{t-1}^i\bigr| x_t^{i+1},a_{t-1}^{i+1};\mathbf{x}_{t-1}\bigr)} \smashoperator{\prod_{i=b_t+1}^{\smash{N}}} R_{\theta,t}{\bigl(x_t^{i},a_{t-1}^{i}\bigr| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-1}\bigr)} \Bigr\},\end{aligned}$$ with $\smash{b_t=a_t^{b_{t+1}}}$ as for PMCMC methods. Here $R_{\theta,1}$ is invariant with respect to $\rho_{\theta,1}(x_1) = p_\theta(x_1 | y_1)$ whereas, for $t=2,\dotsc,T$, $R_{\theta,t}(\,\cdot\,|\,\cdot\,; \mathbf{x}_{t-1})$ is invariant w.r.t. $$\begin{aligned} \rho_{\theta,t}{\bigl(x_t, a_{t-1}\bigr| \mathbf{x}_{t-1}\bigr)} & =\frac{g_\theta{\bigl(y_t\bigr| x_t\bigr)} f_\theta{\bigl(x_t\bigr| x_{t-1}^{a_{t-1}}\bigr)}} {\sum_{i=1}^Np_\theta{\bigl(y_t\bigr| x_{t-1}^i\bigr)}} =\frac{p_\theta{\bigl(y_t\bigr| x_{t-1}^{a_{t-1}}\bigr)}}{\sum_{i=1}^Np_\theta{\bigl(y_t\bigr| x_{t-1}^i\bigr)}}p_\theta{\bigl(x_t\bigr| y_t,x_{t-1}^{a_{t-1}}\bigr)} ,\end{aligned}$$ while $\widetilde{R}_{\theta,t}$ denotes the reversal of the kernel $R_{\theta,t}$ with respect to its invariant distribution. Note that if $R_{\theta,1}(x_1^\prime|x_1) =\rho_{\theta,1}( x_1^\prime)$ and $R_{\theta,t}(x_t^\prime, a_{t-1}^\prime|x_t, a_{t-1};\mathbf{x}_{t-1}) = \rho_{\theta,t}(x_t^\prime, a_{t-1}^\prime| \mathbf{x}_{t-1})$, the extended target $\tilde{\pi}( \theta,b_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}}, \mathbf{x}_{1:T})$ coincides exactly with the extended target associated with the FA-APF described in Section \[Section:perfectadaptationPF\]. As explored in the following two sections, this allows us to understand this approach as the incorporation of a slightly more general class of PFs within a PMCMC framework and ultimately suggests further generalisations of these algorithms.
Metropolis–Hastings algorithm {#subsec:mcmc-fa-apf} ----------------------------- We now consider the following Metropolis–Hastings algorithm to sample from $\smash{\tilde{\pi}(\theta,b_{1:T}, \mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}^{-b_{2:T}})}$. It relies on a proposal of the form $$q{\bigl( \theta,{\theta'}\bigr)} \times \underbrace{\Psi_{{\theta'}}{\bigl(\mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}\bigr)} }_{\mathclap{\text{\parbox{2cm}{\centering\footnotesize{law of \gls{MCMCFAAPF}}}}}} \times \underbrace{\frac{1}{N}}_{\mathclap{\text{\parbox{1.3cm}{\centering\footnotesize{path selection}}}}}, \label{eq:proposalnovelEHMM}$$ i.e. to sample $b_{1:T}$, we pick $b_T$ uniformly at random, then set $\smash{b_t=a_t^{b_{t+1}}}$ for $t=T-1,\dotsc,1$. Moreover, $$\begin{aligned} \Psi_\theta{\bigl( \mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}\bigr)} & = \rho_{\theta,1}{\bigl( x_1^1\bigr)} \smashoperator{\prod_{\smash{i=2}}^N} R_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i-1}\bigr)}\\ & \quad \times \smashoperator{\prod_{t=2}^{\smash{T}}} \; \Bigl\{ \rho_{\theta,t}{\bigl( x_t^1, a_{t-1}^1\bigr| \mathbf{x}_{t-1}\bigr)} \smashoperator{\prod_{i=2}^{\smash{N}}} R_{\theta,t}{\bigl(x_t^{i},a_{t-1}^{i}\bigr| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-1}\bigr)} \Bigr\} \label{eq:novelPFforEHMM}\end{aligned}$$ is the law of a novel PF-type algorithm, which we refer to as the MCMC FA-APF; again, the reason for this terminology should become clear below. The MCMC FA-APF proceeds as follows. 1. At time $1$, sample $x_1^1\sim\rho_{\theta,1}(x_1^1)$ and then $\mathbf{x}_1^{-1}\sim \prod_{i=2}^N R_{\theta,1}(x_1^i| x_1^{i-1})$. 2. At time $t=2,\dotsc,T$, sample 1. $(x_t^1, a_{t-1}^1) \sim\rho_{\theta,t}(x_t^1, a_{t-1}^1| \mathbf{x}_{t-1})$, 2. $(\mathbf{x}_t^{-1},\mathbf{a}_{t-1}^{-1}) \sim \prod_{i=2}^N R_{\theta,t}(x_t^{i},a_{t-1}^{i}| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-1})$.
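In practice the kernels $R_{\theta,t}$ are often Metropolis–Hastings kernels. The following is a minimal sketch, not the construction of @ShestopaloffNeal2016 itself, of one such kernel leaving $\rho_{\theta,t}(x_t,a_{t-1}|\mathbf{x}_{t-1}) \propto g_\theta(y_t|x_t)f_\theta(x_t|x_{t-1}^{a_{t-1}})$ invariant for a scalar state: it proposes a fresh ancestor uniformly together with a Gaussian random-walk move on $x_t$. The interface (`log_g`, `log_f`, `step`) is an assumption of this sketch.

```python
import numpy as np

def mcmc_fa_apf_move(x, a, x_prev, log_g, log_f, step, rng):
    """One MH kernel R_{theta,t}(. | x, a; x_prev) invariant w.r.t.
    rho_{theta,t}(x_t, a_{t-1} | x_prev) ∝ g(y_t|x_t) f(x_t|x_prev[a]).
    The proposal (uniform ancestor + random walk on x_t) is symmetric,
    so the acceptance ratio reduces to the ratio of unnormalised targets."""
    N = len(x_prev)
    a_prop = rng.integers(N)                 # uniform ancestor proposal
    x_prop = x + step * rng.normal()         # random-walk state proposal
    log_ratio = (log_g(x_prop) + log_f(x_prop, x_prev[a_prop])
                 - log_g(x) - log_f(x, x_prev[a]))
    if np.log(rng.uniform()) < log_ratio:
        return x_prop, a_prop                # accept
    return x, a                              # reject: the local move stays put
```

Taking `step` small yields local moves; the only quantities needed are the unnormalised densities $g_\theta$ and $f_\theta$, mirroring the requirements of the conditional MCMC FA-APF.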
If $R_{\theta,1}(x_1^\prime\bigr| x_1) = \rho_{\theta,1}(x_1^\prime)$ and $R_{\theta,t}(x_t^\prime, a_{t-1}^\prime\bigr|x_t, a_{t-1};\mathbf{x}_{t-1}) = \rho_{\theta,t}(x_t^\prime, a_{t-1}^\prime\bigr| \mathbf{x}_{t-1})$, this corresponds to the standard FA-APF. The resulting algorithm targeting the extended distribution defined through \eqref{eq:conditionalPFnovelEHMMtarget} and using the proposal defined in \eqref{eq:proposalnovelEHMM} admits an acceptance probability of the form $$1 \wedge \frac{\hat{p}_{{\theta'}}(y_{1:T}) p({\theta'})}{\hat{p}_\theta(y_{1:T}) p(\theta)}\frac{q({\theta'},\theta)}{q(\theta,{\theta'})}, \label{eq:alternative_ehmm_acceptance_probability}$$ i.e. it looks very much like the PMMH algorithm, except that here $\hat{p}_\theta(y_{1:T})$ is given by the expression in \eqref{eq:likelihoodestimatorperfectadaptationPF} with particles generated via \eqref{eq:novelPFforEHMM}. Note that this estimate is unbiased. The validity of the acceptance probability in \eqref{eq:alternative_ehmm_acceptance_probability} can be established by calculating $$\begin{aligned} \MoveEqLeft \frac{\tilde{\pi}{\bigl( \theta,b_{1:T}, \mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr)}}{\Psi_\theta{\bigl( \mathbf{a}_{1:T-1},\mathbf{x}_{1:T}\bigr)} \frac{1}{N}}\\ & = N\pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)} \frac{\prod_{i=b_1-1}^1 \widetilde{R}_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i+1}\bigr)} \cdot \prod_{i=b_1+1}^N R_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i-1}\bigr)}}{\rho_{\theta,1}{\bigl( x_1^1\bigr)} \prod_{i=2}^N R_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i-1}\bigr)}}\\ & \quad \times \prod_{t=2}^T \frac{\prod_{i=b_t -1}^1 \widetilde{R}_{\theta,t}{\bigl(x_t^i,a_{t-1}^i\bigr|x_t^{i+1},a_{t-1}^{i+1};\mathbf{x}_{t-1}\bigr)} \cdot \prod_{i=b_t+1}^N R_{\theta,t}{\bigl(x_t^{i},a_{t-1}^{i}\bigr| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-1}\bigr)}}{\rho_{\theta,t}{\bigl(x_t^{1}, a_{t-1}^{1}\bigr| \mathbf{x}_{t-1}\bigr)} \prod_{i=2}^N R_{\theta,t}{\bigl(x_t^{i},a_{t-1}^{i}\bigr| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-1}\bigr)}}\\ & = \frac{N^{T-1}\pi{\bigl( \theta,x_{1:T}^{b_{1:T}}\bigr)} }{\rho_{\theta,1}{\bigl( x_1^{b_1}\bigr)} \prod_{t=2}^T \rho_{\theta,t}{\bigl(x_t^{b_t},
a_{t-1}^{b_t}\bigr|\mathbf{x}_{t-1}\bigr)} }\\ & = \frac{N^{T-1}\pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)}}{\frac{p_\theta(x_1^{b_1}, y_1)}{p_\theta(y_1)} \prod_{t=2}^T \frac{f_\theta(x_t^{b_t}| x_{t-1}^{b_{t-1}}) g_\theta(y_t| x_t^{b_t})}{\sum_{i=1}^N p_\theta(y_t| x_{t-1}^i) } } = p(\theta | y_{1:T}) \frac{\hat{p}_\theta(y_{1:T})}{p_\theta(y_{1:T})}.\end{aligned}$$ We have again used identity and additionally that $\smash{b_t=a_t^{b_{t+1}}}$, for $t = T-1,\dotsc, 1$. Gibbs sampler {#subsec:gibbs_mcmc-fa-apf} ------------- The method of [@ShestopaloffNeal2016] can be reinterpreted as a collapsed Gibbs sampler to sample from the extended target distribution $\tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}^{-b_{2:T}})$. Given a current value of $x_{1:T}$, the algorithm proceeds as follows. 1. Sample $b_{1:T}$ uniformly at random and set $x_{1:T}^{b_{1:T}}\leftarrow x_{1:T}$. 2. Run the conditional , i.e. sample from $\phi_\theta(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}| x_{1:T}^{b_{1:T}},b_{1:T})$. 3. Sample $b_T$ according to $\Pr(b_T=m) = 1/N$, for each $m \in \{1,\dotsc,N\}$, and then, for $t=T-1,\dotsc,1$, sample $b_t$ according to a distribution proportional to $f_\theta(x_{t+1}^{b_{t+1}}| x_t^{b_t})$. The validity of the algorithm is established using a detailed balance argument in @ShestopaloffNeal2016. Alternatively, we can show using simple calculations similar to the ones presented earlier that $$\tilde{\pi}\bigl( b_t\big| \theta,\mathbf{x}_{1:t}, x_{t+1:T}^{b_{t+1:T}}, b_{t+1:T}\bigr) \propto f_\theta\bigl(x_{t+1}^{b_{t+1}} \big| x_t^{b_t}\bigr) .$$ In the standard conditional , the particles are conditionally independent given the previously sampled values. The conditional allows for conditional dependence between all the particles (and ancestor indices) generated in one time step. Indeed, we can choose the kernels $R_{\theta, t}^{\mathbf{q}_\theta}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2})$ such that they induce only small, local moves.
This can improve the performance of samplers in high dimensions: as with standard schemes, less ambitious local moves are much more likely to be accepted. Of course, as with any local proposal, one cannot expect such a strategy to work well with strongly multi-modal target distributions without further refinements. Novel practical extensions {#sec:novel_methodology} ========================== Motivated by the connections identified above, we now develop extensions of the more general algorithms just described, in particular constructions based around general . Specifically, we relax the requirement in the algorithm from Section \[subsec:mcmc-fa-apf\] that it is possible to sample from the proposal distribution of the (which is possible in only a small number of tractable models) and to compute its associated importance weight. MCMC APF -------- Generalising the in the same manner as the generalises the leads us to propose a (general) . Set $$\begin{aligned} \rho_{\theta,t}^{\mathbf{q}_\theta}(x_t,a_{t-1}|\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}) & = \begin{cases} q_{\theta,1}(x_1), & \text{if $t = 1$,}\\ \dfrac{v_{\theta,t-1}^{a_{t-1}}}{\sum_{i=1}^N v_{\theta,t-1}^i} q_{\theta,t}(x_t|x_{t-1}^{a_{t-1}}), & \text{if $t> 1$,} \end{cases}\end{aligned}$$ where $v_{\theta,t-1}^i$ are as defined in , and are responsible, in particular, for the dependence upon $a_{t-1}$ and $x_{t-2}$, and we allow $\smash{R_t^{\mathbf{q}_\theta}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2})}$ and $\smash{\widetilde{R}_{t}^{\mathbf{q}_\theta}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2})}$ to respectively denote a $\rho_{\theta,t}^{\mathbf{q}_\theta}(\,\cdot\,|\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2})$-invariant Markov kernel and the associated reversal kernel.
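For $t > 1$, a draw from the mixture density $\rho_{\theta,t}^{\mathbf{q}_\theta}$ above decomposes into choosing an ancestor index with probability proportional to its first-stage weight and then proposing from $q_{\theta,t}$. A minimal sketch for a scalar state (function names are illustrative assumptions):

```python
import numpy as np

def sample_rho_mixture(v_prev, x_prev, q_sample, rng):
    """Draw (x_t, a_{t-1}) from the mixture density
    rho(x_t, a_{t-1} | ...) = v_{t-1}^a / sum_i v_{t-1}^i * q(x_t | x_{t-1}^a)."""
    w = v_prev / v_prev.sum()        # normalised first-stage weights
    a = rng.choice(len(w), p=w)      # ancestor index a_{t-1}
    x = q_sample(x_prev[a], rng)     # proposal draw from q(. | x_{t-1}^a)
    return x, a
```

A $\rho_{\theta,t}^{\mathbf{q}_\theta}$-invariant kernel $R_t^{\mathbf{q}_\theta}$ could then, for instance, be built as a Metropolis--Hastings kernel using such draws, or smaller local perturbations of $(x_t, a_{t-1})$, as proposals.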
Although this expression superficially resembles the mixture proposal of the *marginalised* [@Klass2005], by explicitly including the ancestry variables it avoids incurring the $O(N^2)$ cost and allows an approximation of smoothing distributions. We then define the law of the via: $$\begin{aligned} \Psi^{\mathbf{q}_\theta}_\theta{\bigl(\mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}\bigr)} & \coloneqq \rho^{\mathbf{q}_\theta}_{\theta,1}{\bigl( x_1^1\bigr)} \smashoperator{\prod_{\smash{i=2}}^N} R^{\mathbf{q}_\theta}_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i-1}\bigr)}\\ & \quad \times \smashoperator{\prod_{t=2}^{\smash{T}}}\;\Bigl\{\rho^{\mathbf{q}_\theta}_{\theta,t}{\bigl(x_t^1, a_{t-1}^1\bigr| \mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}\bigr)} \bigr. \smashoperator{\prod_{i=2}^{\smash{N}}} R^{\mathbf{q}_\theta}_{\theta,t}{\bigl(x_t^{i},a_{t-1}^{i}\bigr| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}\bigr)} \Bigr\}.\end{aligned}$$ The corresponding extended target distribution is simply: $$\tilde{\pi}^{\mathbf{q}_\theta}{\bigl( \theta,b_{1:T},\mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr)} = \frac{1}{N^T} \times \underbrace{\pi\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr) }_{\text{\footnotesize{target}}} \times \underbrace{\phi^{\mathbf{q}_\theta}_{\theta}\bigl( \mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\big\vert x_{1:T}^{b_{1:T}},b_{1:T}\bigr)}_{\text{\footnotesize{law of conditional \gls{MCMCAPF}}}}, \label{eq:novelEHMMtarget}$$ where, as might be expected: $$\begin{aligned} \MoveEqLeft \phi^{\mathbf{q}_\theta}_\theta{\bigl(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr| x_{1:T}^{b_{1:T}},b_{1:T}\bigr)}\\* & = \smashoperator{\prod_{\smash{i=b_1-1}}^1} \widetilde{R}^{\mathbf{q}_\theta}_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i+1}\bigr)} \cdot \smashoperator{\prod_{\smash{i=b_1+1}}^N} R^{\mathbf{q}_\theta}_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i-1}\bigr)}\\ & \quad \times \smashoperator{\prod_{t=2}^{\smash{T}}} \;\; 
\Bigl\{\smashoperator{\prod_{i=b_t-1}^{\smash{1}}} \widetilde{R}^{\mathbf{q}_\theta}_{\theta,t}{\bigl(x_t^i,a_{t-1}^i\bigr| x_t^{i+1},a_{t-1}^{i+1};\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}\bigr)} \bigr. \smashoperator{\prod_{i=b_t+1}^{\smash{N}}} R^{\mathbf{q}_\theta}_{\theta,t}{\bigl(x_t^{i},a_{t-1}^{i}\bigr| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}\bigr)} \Bigr\}. \end{aligned}$$ Note that the can be viewed as a special case of the in much the same way that the from Section \[Section:perfectadaptationPF\] can be viewed as a special case of the (general) from Section \[subsec:apf\]. Metropolis–Hastings algorithms ------------------------------ We arrive at a -type algorithm based around the by considering proposal distributions of the form: $$q{\bigl(\theta,{\theta'}\bigr)} \times \underbrace{\Psi^{\mathbf{q}_{\theta'}}_{{\theta'}}{\bigl(\mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}\bigr)}}_{\mathclap{\text{\parbox{2.3cm}{\centering\footnotesize{law of \gls{MCMCAPF}}}}}} \times \underbrace{w_{{\theta'},T}^{b_T}}_{\mathclap{\text{\parbox{1.3cm}{\centering\footnotesize{path selection}}}}}, $$ where, as in Section \[subsec:apf\], $\smash{w_{\theta,T}^i = v_{\theta,T}^i / \sum_{j=1}^N v_{\theta,T}^j}$ and $\smash{\hat{p}_\theta(y_{1:T}) = \prod_{t=1}^T N^{-1} \sum_{i=1}^N v_{\theta,t}^i}$ is again an unbiased estimate of the marginal likelihood. Note that the -type variant of the often cannot be used in realistic scenarios because it requires sampling from $p_\theta(x_t|x_{t-1}, y_t)$ and evaluating $x_{t-1} \mapsto p_\theta(y_t|x_{t-1})$ in order to implement the in . To circumvent this problem, we can define a special case of the algorithm which requires neither sampling from $p_\theta(x_t|x_{t-1}, y_t)$ nor evaluating $x_{t-1} \mapsto p_\theta(y_t|x_{t-1})$. This algorithm, obtained by setting $\tilde{p}_\theta(y|x) \equiv 1$, will be called as it represents an analogue of the (bootstrap) .
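As an aside, the unbiased estimate $\hat{p}_\theta(y_{1:T}) = \prod_{t=1}^T N^{-1} \sum_{i=1}^N v_{\theta,t}^i$ above is, in practice, best accumulated on the log scale; a minimal sketch (the function name is an illustrative assumption):

```python
import numpy as np

def log_marginal_likelihood_estimate(unnormalised_weights):
    """Given a (T, N) array of unnormalised first-stage weights v_t^i,
    return log p_hat(y_{1:T}) = sum_t log( N^{-1} sum_i v_t^i )."""
    v = np.asarray(unnormalised_weights)
    return np.sum(np.log(v.mean(axis=1)))  # mean over particles, sum over time
```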
At time $1$, the uses the kernels $\overline{R}_{\theta,1}$ which are invariant w.r.t. $\bar{\rho}_{\theta,1}(x_1) \coloneqq \mu_\theta(x_1)$. At time $t$, $t > 1$, the uses the kernels $\overline{R}_{\theta,t}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-1})$ which are invariant w.r.t.  $$\begin{aligned} \bar{\rho}_{\theta,t}(x_t,a_{t-1}|\mathbf{x}_{t-1}) \coloneqq \frac{g_\theta(y_{t-1}|x_{t-1}^{a_{t-1}})}{\sum_{i=1}^N g_\theta(y_{t-1}|x_{t-1}^i)} f_\theta(x_t|x_{t-1}^{a_{t-1}}). \label{eq:mcmc_fa-apf_transition_density}\end{aligned}$$ The -type variant of the may be useful if the -type variant of the cannot be implemented. Gibbs samplers -------------- Given the extended target construction of the algorithm, it is straightforward to implement algorithms (or similarly with – see Section \[subsec:generalised\_PGS\]) which target it. However, Gibbs samplers based around the (conditional) do not appear useful as they might be expected to perform less well than the Gibbs sampler based around the and are no easier to implement: in contrast to the -type algorithms, the Gibbs sampler based around the (conditional) does *not* generally require sampling from $p_\theta(x_t|x_{t-1}, y_t)$ and it only requires evaluation of the unnormalised density $g_\theta(y_t|x_t)f_\theta(x_t|x_{t-1})$ in the transition density of the in . General particle Markov chain Monte Carlo methods {#sec:general_pmcmc} ================================================= In this section, we describe a slight generalisation of methods which admits both the standard methods from Section \[sec:pmcmc\] as well as the alternative methods from Section \[sec:embeddedHMMnewversion\] as special cases. In addition, we derive both the and recursions for this algorithm. We note that this section is necessarily slightly more abstract than the previous sections. As the details developed below are not required for understanding the remainder of this work, this section may be skipped on a first reading.
Extended target distribution ---------------------------- We define $\mathbf{z}_1 \coloneqq \mathbf{x}_1$ and $\mathbf{z}_t \coloneqq (\mathbf{x}_t, \mathbf{a}_{t-1})$. For notational brevity, also define $\smash{\mathbf{z}_1^{-i} \coloneqq \mathbf{z}_1 \setminus x_1^i}$, $\smash{\mathbf{z}_t^{-i} \coloneqq \mathbf{z}_t \setminus (x_t^i, a_{t-1}^i)}$ as well as $\smash{\mathbf{z}_{1:t}^{-b_{1:t}} \coloneqq (\mathbf{z}_{1}^{-b_1}, \dotsc, \mathbf{z}_{t}^{-b_t})}$. We note that further auxiliary variables could be included in $\mathbf{z}_t$ without changing anything in the construction developed below. The law of a general is given by $$\begin{aligned} \Psi_\theta(\mathbf{z}_{1:T}) \coloneqq \psi_{\theta,1}(\mathbf{z}_1) \prod_{t=2}^T \psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}).\end{aligned}$$ With this notation, general methods target the following extended distribution: $$\begin{aligned} \tilde{\pi}(\theta, \mathbf{z}_{1:T}, b_T) \coloneqq \frac{1}{N^T} \times \underbrace{\pi(\theta, x_{1:T}^{b_{1:T}})}_{\text{\footnotesize{target}}} \times \underbrace{\phi_\theta(\mathbf{z}_{1:T}^{-b_{1:T}} | x_{1:T}^{b_{1:T}}, b_{1:T})}_{\text{\parbox{2.3cm}{\centering\footnotesize{law of conditional general \gls{PF}}}}}, \label{eq:general_pmcmb_target_distribution}\end{aligned}$$ where the law of the conditional general is given by $$\begin{aligned} \phi_\theta(\mathbf{z}_{1:T}^{-b_{1:T}} | x_{1:T}^{b_{1:T}}, b_{1:T}) \coloneqq \psi_{\theta,1}^{-b_1}(\mathbf{z}_1^{-b_1}) \prod_{t=2}^T \psi_{\theta,t}^{-b_t}(\mathbf{z}_t^{-b_t}|\mathbf{z}_{1:t-1}, x_t^{b_t}),\end{aligned}$$ with $$\begin{aligned} \psi_{\theta,t}^{-i}(\mathbf{z}_t^{-i}|\mathbf{z}_{1:t-1}, x_t^i) & \coloneqq \frac{\psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1})}{\psi_{\theta,t}^i(x_t^i, a_{t-1}^i|\mathbf{z}_{1:t-1})}.\end{aligned}$$ Here, $\psi_{\theta,t}^i(\,\cdot\,|\mathbf{z}_{1:t-1})$ denotes the marginal distribution of the $i$th components of $\mathbf{x}_t$ and $\mathbf{a}_{t-1}$ under the distribution
$\psi_{\theta,t}(\,\cdot\,|\mathbf{z}_{1:t-1})$. Finally, for any $t \in \{1,\dotsc,T\}$, we define the following unnormalised weight $$\begin{aligned} \tilde{v}_{\theta, t}^{b_t} \coloneqq \frac{1}{N^t} \frac{\gamma_{\theta,t}(x_{1:t}^{b_{1:t}})}{\psi_{\theta,1}^{b_1}(x_1^{b_1}) \prod_{n=2}^t \psi_{\theta,n}^{b_n}(x_n^{b_n}, a_{n-1}^{b_n}|\mathbf{z}_{1:n-1})},\end{aligned}$$ where $b_{1:t-1}$ on the r.h.s. are to be interpreted as functions of $b_t$ and the ancestry variables via the usual recursion $\smash{b_t = a_t^{b_{t+1}}}$. Here, $\gamma_{\theta,t}(x_{1:t})$ is the unnormalised density targeted at the $t$th step of the general – for all the algorithms discussed in this work, we will state these densities explicitly in Appendix \[subsec:special\_cases\]; in particular, $$\gamma_{\theta,T}(x_{1:T}) = p_\theta(x_{1:T}, y_{1:T}).$$ We make the following minimal assumption to ensure the validity of the (general) algorithms. \[as:absolute\_continuity\] For any $t \in \{1,\dotsc,T\}$, any $i \in \{1,\dotsc,N\}$ and any $\mathbf{z}_{1:t-1}$, the support of $(x_t, a_{t-1}) \mapsto \psi_{\theta,t}^i(x_t, a_{t-1}|\mathbf{z}_{1:t-1})$ includes the support of $(x_t, a_{t-1}) \mapsto \gamma_{\theta,t}(x_{1:t-1}^{b_{1:t-1}}, x_{t})$. We also make the following assumption which requires that all marginals of the conditional distributions $\psi_{\theta,t}(\,\cdot\,|\mathbf{z}_{1:t-1})$ are identical. \[as:exchangeability\] For any $(i,j) \in \{1,\dotsc, N\}^2$ and any $t \in \{1,\dotsc,T\}$, $\psi_{\theta,t}^i = \psi_{\theta,t}^j$. \[rem:identical\_marginals\_assumption\] Assumption \[as:exchangeability\] can be easily dropped in favour of selecting a suitable (non-uniform) distribution for the particle indices $b_{1:T}$ in .
Indeed, more elaborate constructions could be used to justify resampling schemes which, unlike multinomial resampling, are not exchangeable in the sense of @AndrieuDoucetHolenstein2010 [Assumption 2] (unless one permutes the particle indices uniformly at random at the end of each step as mentioned in @AndrieuDoucetHolenstein2010). Similarly, such more general constructions would allow us to view the use of more sophisticated , such as the *discrete particle filter* of @Fearnhead1998, with schemes as special cases of this framework as shown in @Finke2015 [Section 2.3.4]. In Examples \[ex:antithetic\] and \[ex:sqmc\], we show how with *antithetic variables* [@BizjajevaOlsson2008] and (randomised) *methods* [@GerberChopin2015] can be considered as special cases of the framework described in this section even though these methods cannot easily be viewed as conventional because the particles are not sampled conditionally independently at each step. \[ex:antithetic\] The with antithetic variables from @BizjajevaOlsson2008 aim to improve the performance of by introducing negative correlation into the particle population. To that end, the $N$ particles are divided into $M$ groups of $K$ particles; the particles in each group then share the same ancestor index and given the ancestor particle, they are sampled in such a way that they are negatively correlated. Assume that there exist $K, M \in \mathbb{N}$ such that $N = K M$ and for $\tilde{x}_t \coloneqq (\tilde{x}_t^1, \dotsc, \tilde{x}_t^K) \in \mathcal{X}^K$ let $\tilde{q}_{\theta,t}(\tilde{x}_t| x_{t-1})$ denote some joint proposal kernel for $K$ particles such that if $(\tilde{x}_t^1, \dotsc, \tilde{x}_t^K) \sim \tilde{q}_{\theta,t}(\,\cdot\,| x_{t-1})$ then $\tilde{x}_t^1, \dotsc, \tilde{x}_t^K$ are (pairwise) negatively correlated while, marginally, $\smash{\tilde{x}_t^k \sim q_{\theta,t}(\,\cdot\,| x_{t-1})}$ for all $k \in \{1,\dotsc,K\}$.
Given $\mathbf{z}_{1:t-1}$, the with antithetic variables generates $\mathbf{z}_t = (\mathbf{a}_{t-1}, \mathbf{x}_t)$ as follows (we use the convention that any action prescribed for some $m$ is to be performed for all $m \in \{1,\dotsc,M\}$). 1. \[ex:antithetic:step:1\] Set $\smash{a_{t-1}^{(m-1)K+1} = i}$ w.p. proportional to $v_{\theta, t-1}^i$. 2. Set $\smash{a_{t-1}^{(m-1)K+k} \coloneqq a_{t-1}^{(m-1)K+1}}$ for all $k \in \{2,\dotsc,K\}$. 3. \[ex:antithetic:step:2\] Sample $\smash{\bigl(x_t^{(m-1)K+k}\bigr)_{k \in \{1,\dotsc,K\}} \sim \tilde{q}_{\theta,t}\bigl(\,\cdot\,\big| x_{t-1}^{a_{t-1}^{(m-1)K+1}}\bigr)}$. 4. \[ex:antithetic:step:3\] Permute the particle indices on $\mathbf{z}_t^1, \dotsc, \mathbf{z}_t^N$ uniformly at random. \[ex:sqmc\] Let $\mathcal{X} = \mathbb{R}^d$. Randomised algorithms are general which stratify sampling of the ancestor indices and particles $\mathbf{z}_t = (\mathbf{a}_{t-1}, \mathbf{x}_t)$ by computing them as a deterministic transformation of a set of randomised quasi Monte Carlo points $\mathbf{u}_t \coloneqq (u_t^1, \dotsc, u_t^N) \in [0,1)^{(d+1)N}$. By construction, \[ex:sqmc:property:1\] the set $\mathbf{u}_t = (u_t^1, \dotsc, u_t^N)$ has a low discrepancy, \[ex:sqmc:property:2\] for each $i \in \{1,\dotsc,N\}$, $u_t^i$ is (marginally) uniformly distributed on the $(d+1)$-dimensional hypercube. Write $u_t^i = (\tilde{u}_t^i, \tilde{v}_t^i)$ with $\tilde{u}_t^i \in [0,1)$ and $\tilde{v}_t^i \in [0,1)^{d}$. Given $\mathbf{z}_{1:t-1}$, the algorithm [@GerberChopin2015 Algorithm 3] transforms $\mathbf{u}_t \to \mathbf{z}_t = (\mathbf{a}_{t-1}, \mathbf{x}_t)$ as follows (using the convention that any action mentioned for some $i$ is to be performed for all $i \in \{1,\dotsc,N\}$). 1. 
\[ex:sqmc:step:1\] Find a suitable permutation $\sigma_{t-1}\colon \{1,\dotsc, N\} \to \{1,\dotsc,N\}$ such that $x_{t-1}^{\sigma_{t-1}(1)} \leq \dotsc \leq x_{t-1}^{\sigma_{t-1}(N)}$, if $d=1$; if $d > 1$, the permutation $\sigma_{t-1}$ is obtained by mapping the particles to the hypercube $[0,1)^{d}$ and projecting them onto $[0,1)$ using the pseudo-inverse of the Hilbert space-filling curve. These projections are then ordered as for $d=1$ (see @GerberChopin2015 for details). 2. \[ex:sqmc:step:2\] Set $\smash{a^i \coloneqq F^{-1}(\tilde{u}_t^i)}$, where $F^{-1}$ denotes the generalised inverse of the $F\colon \{1,\dotsc,N\} \to [0,1]$, defined by $\smash{F(i) \coloneqq \sum_{j=1}^i v_{\theta,t-1}^{\sigma_{t-1}(j)} / \sum_{j=1}^N v_{\theta,t-1}^j}$. 3. \[ex:sqmc:step:3\] Set $\smash{a_{t-1}^i \coloneqq \sigma_{t-1}(a^i)}$ and $\smash{x_t^i \coloneqq \varGamma_{\theta,t}(x_{t-1}^{a_{t-1}^i}, \tilde{v}_t^i)}$. Here, if $d=1$, the function $\varGamma_{\theta,t}(x_{t-1}, \,\cdot\,)$ is the (generalised) inverse of the associated with $q_{\theta,t}(\,\cdot\,|x_{t-1})$; if $d > 1$, this can be generalised via the Rosenblatt transform. 4. \[ex:sqmc:step:4\] Permute the particle indices on $\mathbf{z}_t^1, \dotsc, \mathbf{z}_t^N$ uniformly at random. While the joint kernel $\psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1})$ is potentially intractable in both examples, the random permutation of the particle indices (i.e. Step \[ex:antithetic:step:3\] in Example \[ex:antithetic\] and also Step \[ex:sqmc:step:4\] in Example \[ex:sqmc\]) ensures that Assumption \[as:exchangeability\] is satisfied. Indeed, it can be easily verified that in both examples, for any $(i,j) \in \{1,\dotsc,N\}^2$, $$\begin{aligned} \psi_{\theta,t}^i(x_t,a_{t-1}|\mathbf{z}_{1:t-1}) = \rho_{\theta,t}^{\mathbf{q}_\theta}(x_t, a_{t-1}|\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}) = \psi_{\theta,t}^j(x_t, a_{t-1}|\mathbf{z}_{1:t-1}). 
\end{aligned}$$ As pointed out in Remark \[rem:identical\_marginals\_assumption\], Assumption \[as:exchangeability\] is not actually necessary and can be easily dropped in favour of a slightly more general construction of the extended target distribution which is implicitly employed by @BizjajevaOlsson2008 [@GerberChopin2015] (who therefore do not require the random permutation of the particle indices). General particle marginal Metropolis–Hastings --------------------------------------------- In this section, we use the general framework to derive a general algorithm. All algorithms and versions of the alternative methods can then be seen as special cases of this general scheme as shown in Appendix \[subsec:special\_cases\]. As with the standard , we may use an algorithm to target the extended distribution $\tilde{\pi}(\theta, \mathbf{z}_{1:T}, b_T)$ using a proposal of the form $$q(\theta,{\theta'}) \times \underbrace{\Psi_{{\theta'}}(\mathbf{z}_{1:T})}_{\text{\footnotesize{\parbox{1.5cm}{\centering law of general \gls{PF}}}}} \times \underbrace{q_{\theta'}(b_T|\mathbf{z}_{1:T})}_{\text{\footnotesize{path selection}}}, \label{eq:proposalGeneralisedPF}$$ where we have defined the selection probability $$\begin{aligned} q_\theta(b_T|\mathbf{z}_{1:T}) \coloneqq \frac{\tilde{v}_{\theta, T}^{b_T}}{\sum_{i=1}^N \tilde{v}_{\theta, T}^i}.\end{aligned}$$ Define the usual unbiased estimate of the marginal likelihood $$\hat{p}_\theta(y_{1:T}) \coloneqq \sum_{i=1}^N \tilde{v}_{\theta, T}^i.$$ Then we obtain the following general algorithm (Algorithm \[alg:general\_pmmh\]) the validity of which can be established by checking that indeed, $$\begin{aligned} \frac{\tilde{\pi}(\theta, \mathbf{z}_{1:T}, b_T)}{\Psi_\theta(\mathbf{z}_{1:T}) q_\theta(b_T|\mathbf{z}_{1:T})} = p(\theta|y_{1:T}) \frac{\hat{p}_\theta(y_{1:T})}{p_\theta(y_{1:T})}.\end{aligned}$$ General particle Gibbs samplers {#subsec:generalised_PGS} ------------------------------- In this section, we use the general 
framework to derive a general sampler. We also derive [@Whiteley2010] and [@LindstenJordanSchon2014] recursions and prove that they leave the target distribution of interest invariant. As before, all samplers and Gibbs versions of the alternative method can then be seen as special cases of this general scheme as shown in Appendix \[subsec:special\_cases\]. Set $$\gamma_\theta(x_{t+1:T}|x_{1:t}) \coloneqq \frac{\gamma_{\theta,T}(x_{1:T})}{\gamma_{\theta,t}(x_{1:t})}.$$ We are then ready to state both (general) samplers. For the remainder of this section, we let $\tilde{x}_{1:t}^i$ denote the $i$th particle lineage at time $t$, i.e. $\smash{\tilde{x}_{1:t}^i = x_{1:t}^{i_{1:t}}}$, where $i_t = i$ and $\smash{i_n = a_n^{i_{n+1}}}$, for $n = t-1, \dotsc, 1$. As in previous sections, the recursion in Algorithm \[alg:general\_pg\_bs\] may be justified via appropriate partially-collapsed Gibbs sampler arguments by noting that $$\begin{aligned} \tilde{\pi}(b_t|\theta, \mathbf{z}_{1:t}, x_{t+1:T}^{b_{t+1:T}}) & \propto \tilde{v}_{{\theta}, t}^{b_t}\gamma_{\theta}(x_{t+1:T}^{b_{t+1:T}}|\tilde{x}_{1:t}^{b_t}).\end{aligned}$$ The steps in Algorithm \[alg:general\_pg\_as\] follow similarly since $a_t^{b_{t+1}} = b_t$, by construction. Alternatively – without invoking partially-collapsed Gibbs sampler arguments – the validity of can be established by even further extending the space to include the new particle indices generated via . As shown in @Finke2015 [Chapter 3.4.3], this construction also proves a particular duality of and . Empirical study {#sec:simulations} =============== In this section, we empirically compare the performance of some of the algorithms described in this work on a $d$-dimensional linear-Gaussian state-space model.
Model ----- The model considered throughout this section is given by $$\begin{aligned} \mu_\theta(x_1) & = \mathrm{Normal}(x_1; m_0, C_0),\\ f_\theta(x_t|x_{t-1}) & = \mathrm{Normal}(x_t; A x_{t-1}, \sigma^2 \mathbf{I}_d), \quad \text{for $t > 1$,}\\ g_\theta(y_t|x_t) & = \mathrm{Normal}(y_t; x_t, \tau^2 \mathbf{I}_d), \quad \text{for $t \geq 1$,}\end{aligned}$$ where $x_t, y_t \in \mathbb{R}^d$, $\sigma, \tau > 0$, $\mathbf{I}_d$ denotes the $(d,d)$-dimensional identity matrix and $A$ is the $(d,d)$-dimensional symmetric banded matrix with upper and lower bandwidth $1$, with entries $a_0 \in \mathbb{R}$ on the main diagonal, and with entries $a_1 \in \mathbb{R}$ on the remaining bands, i.e.  $$A = \begin{bmatrix} a_0 & a_1 & 0 & \dotsc & 0\\ a_1 & a_0 & a_1 & \ddots & \vdots\\ 0 & a_1 & \ddots & \ddots & 0\\ \vdots & \ddots & \ddots & a_0 & a_1\\ 0 & \dotsc & 0 & a_1 & a_0 \end{bmatrix}.$$ For simplicity, we assume that the initial mean $m_0 \coloneqq \mathbf{0}_d \in \mathbb{R}^d$ (where $\mathbf{0}_d$ denotes a vector of zeros of length $d$) and the initial $(d,d)$-dimensional covariance matrix $C_0 = \mathbf{I}_d$ are known. Thus, the task is to approximate the posterior distribution of the remaining parameters $\theta \coloneqq (a_0, a_1, \sigma, \tau)$. The true values of these parameters, i.e. the values used for simulating the data are $(0.5, 0.2, 1, 1)$. As prior distributions, we take uniform distributions on $(-1,1)$ for $a_0$ and $a_1$ and inverse-gamma distributions on $\sigma$ and $\tau$ each with shape parameter $1$ and scale parameter $0.5$. All parameters are assumed to be independent a priori. In all algorithms, we propose new values ${\theta'}$ for $\theta$ via a simple Gaussian random-walk kernel, i.e. we use $q(\theta, {\theta'}) \coloneqq \mathrm{Normal}({\theta'};\theta, (100 d_\theta d T)^{-1} \mathbf{I}_{d_\theta})$, where $d_\theta$ is the dimension of the parameter vector $\theta$, i.e. $d_\theta = 4$. 
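As an illustration, the model can be simulated as follows; the function name and the use of a dense matrix for $A$ (rather than a banded representation) are our choices.

```python
import numpy as np

def simulate_lgssm(T, d, a0=0.5, a1=0.2, sigma=1.0, tau=1.0, seed=0):
    """Simulate the d-dimensional linear-Gaussian model above:
    x_1 ~ N(0, I_d), x_t = A x_{t-1} + sigma * eps_t, y_t = x_t + tau * eta_t,
    with A tridiagonal: a0 on the main diagonal, a1 on the off-diagonals."""
    rng = np.random.default_rng(seed)
    A = a0 * np.eye(d) + a1 * (np.eye(d, k=1) + np.eye(d, k=-1))
    x = np.empty((T, d))
    x[0] = rng.standard_normal(d)  # m_0 = 0_d, C_0 = I_d
    for t in range(1, T):
        x[t] = A @ x[t - 1] + sigma * rng.standard_normal(d)
    y = x + tau * rng.standard_normal((T, d))
    return x, y, A
```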
Algorithms {#subsec:algorithms} ---------- In this subsection, we detail the specific algorithms whose empirical performance we compare in our simulation study. Standard . : We implement the (bootstrap) and the using multinomial resampling at every step, though we note that more sophisticated resampling schemes, e.g. adaptive systematic resampling, could easily be employed. As described above, we can implement both algorithms (i.e. the ) and Gibbs samplers based around these standard . For the latter, we make use of in the conditional . Original . : We implement the algorithms with $\rho_{\theta,t}(x) = \mathrm{Normal}(x; \mu, \varSigma)$, where $\mu$ and $\varSigma$ represent the mean and covariance matrix associated with the stationary distribution of the latent Markov chain $(X_t)_{t \in \mathbb{N}}$. We compare two different options for constructing the kernels $R_{\theta,t}$ which leave this distribution invariant. 1. \[as:original\_EHMM\_kernel\_a\] The kernel $R_{\theta,t}$ generates samples from its invariant distribution, i.e. $\smash{R_{\theta,t}(x_t'|x_t) = \rho_{\theta,t}(x_t')}$. 2. \[as:original\_EHMM\_kernel\_b\] The kernel $R_{\theta,t}$ is a standard kernel which proposes a value $x_t^\star$ using the Gaussian random-walk proposal $\smash{\mathrm{Normal}(x_t^\star; x_t, d^{-1}\mathbf{I}_d)}$. Alternative . : We compare four different versions of the and methods outlined above. Again, we implement both algorithms and Gibbs samplers (with ) based around these methods. Below, we describe the specific versions which we are comparing.
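For reference, the stationary covariance $\varSigma$ used by $\rho_{\theta,t}$ in the original above (the stationary mean is $\mathbf{0}_d$ here) solves the discrete Lyapunov equation $\varSigma = A \varSigma A^\top + \sigma^2 \mathbf{I}_d$. A simple fixed-point sketch (assuming the spectral radius of $A$ is below one so that the chain is stationary; the function name is illustrative):

```python
import numpy as np

def stationary_covariance(A, sigma, iters=1000):
    """Fixed-point iteration for the stationary covariance of
    x_t = A x_{t-1} + sigma * eps_t, i.e. the solution S of the
    discrete Lyapunov equation  S = A S A' + sigma^2 I."""
    d = A.shape[0]
    S = np.eye(d)
    for _ in range(iters):
        S = A @ S @ A.T + sigma ** 2 * np.eye(d)
    return S
```

In practice one could instead call a dedicated Lyapunov solver; the iteration above merely makes the definition concrete.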
The kernels $\overline{R}_{\theta, t}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-1})$ employed in the and the kernels $R_{\theta, t}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-1})$ employed in the are all taken to be kernels which, given $(x_t, a_{t-1})$, propose a new value $(x_t^\star, a_{t-1}^\star)$ using a proposal of the following form $$\frac{v_{\theta,t-1}^{a_{t-1}^\star}}{\sum_{i=1}^N v_{\theta,t-1}^{i}} s_{\theta,t}(x_t^\star|x_t;\mathbf{x}_{t-1}, a_{t-1}^\star).$$ We compare two different approaches for generating a new value for the particle, $x_t^\star$. 1. \[as:particle\_proposal\_a\] The first proposal uses a simple Gaussian random-walk kernel, i.e. $$s_{\theta,t}(x_t^\star|x_t;\mathbf{x}_{t-1}, a_{t-1}^\star) = \mathrm{Normal}(x_t^\star; x_t, d^{-1}\mathbf{I}_d), \label{eq:rw_proposal}$$ where the scaling of the covariance matrix is motivated by existing results on optimal scaling for such random-walk proposal kernels [@GelmanRobertsGilks1996; @RobertsGelmanGilks1997]. 2. \[as:particle\_proposal\_b\] The second proposal uses the *autoregressive* proposal employed by [@ShestopaloffNeal2016], i.e.  $$s_{\theta,t}(x_t^\star|x_t;\mathbf{x}_{t-1}, a_{t-1}^\star) = \mathrm{Normal}\bigl(x_t^\star;\mu + \sqrt{1-\varepsilon^2} (x_t - \mu), \varepsilon^2\varSigma\bigr), \label{eq:ar_proposal}$$ where $\mu$ and $\varSigma$ denote the mean and covariance matrix of $f_\theta(x_t|x_{t-1}) = \mathrm{Normal}(x_t; \mu, \varSigma)$, i.e. $\varSigma=\sigma^2\mathbf{I}$ and $\mu=A x_{t-1}$. To scale the covariance matrix of this proposal with the dimension $d$, we set $\varepsilon \coloneqq \sqrt{d^{-1}}$. Idealised. : We also implement the algorithms which the above-mentioned algorithms seek to mimic. The idealised Gibbs sampler, is a (Metropolis-within-)Gibbs algorithm which updates the latent states $x_{1:T}$ as one block by sampling them from their full conditional posterior distribution. 
The idealised marginal algorithm analytically evaluates the marginal likelihood $p_\theta(y_{1:T})$ via the Kalman filter. Results for general PMMH algorithms ----------------------------------- In this subsection, we empirically compare the performance of various type samplers. First, we fix $\theta$ in order to assess the variability of the estimates of the marginal likelihood, $\hat{p}_\theta(y_{1:T})$, which is a key ingredient in (general) algorithms. Then, we perform inference about $\theta$. Recall that in order to implement the version of the , we need to sample at least one particle from $p_\theta(x_t|x_{t-1}, y_t)$ at each time $t$ and we need to be able to evaluate the function $x_{t-1} \mapsto p_\theta(y_t|x_{t-1})$. In other words, whenever we can implement this algorithm we can also implement a standard algorithm based around the . Figure \[fig:likelihood\_estimates\] shows the relative estimates of the marginal likelihood obtained from the various algorithms described in this work for various model dimensions. Unsurprisingly, the , resp. , provides lower variance estimates than its corresponding , resp.  counterparts. However, more interestingly, the can provide lower variance estimates than the standard and could prove useful in more realistic scenarios where it is computationally very expensive to run the . As expected, the original method described in Section \[sec:embeddedHMM\] breaks down very quickly as the dimension $d$ increases. [0.188]{} [0.188]{} [0.188]{} [0.188]{} The right panel of Figure \[fig:pmmh\_acf\_parameters\] shows kernel-density plots of the estimates of parameter $a_0$ obtained from various -type algorithms. Clearly, the -type algorithms based around the (bootstrap) or the were unable to obtain sensible parameter estimates within the number of iterations that we fixed. The left panel of Figure \[fig:pmmh\_acf\_parameters\] shows the corresponding empirical autocorrelation. 
The results are consistent with the efficiency of the likelihood estimates illustrated in Figure \[fig:likelihood\_estimates\]. That is, at least in this setting, the standard version of the alternative method does not outperform standard algorithms. The estimates of the other parameters behaved similarly and the results for $(a_1, \sigma, \tau)$ are therefore omitted. Results for general particle Gibbs samplers ------------------------------------------- In this subsection, we compare empirically the performance of various type samplers (all using ). Gibbs samplers based on the original method failed to yield meaningful estimates for the model dimensions considered in this subsection and at a similar computational cost as the other algorithms. We therefore do not show results for the original method in the figures below. Recall that in order to implement the conditional , we do not need to sample from $p_\theta(x_t|x_{t-1}, y_t)$ nor evaluate the function $x_{t-1} \mapsto p_\theta(y_t|x_{t-1})$. In other words, we can implement the conditional in many situations in which implementing a standard conditional is impossible. Figure \[fig:gibbs\_acf\_state\_components\_dimX\_100\] shows the autocorrelation of estimates of the first component of $x_1$ obtained from various samplers for model dimension $d=100$. For the moment, we have kept $\theta$ fixed to the true values. It appears that in high dimensions, the conditional with moves are able to outperform standard conditional . Note that although, unsurprisingly, the best performance is obtained with the , the simpler is able to substantially outperform the approach based upon a standard . This is supported by Figure \[fig:gibbs\_ess\_dimX\_100\] which shows that the conditional with moves lead to a higher estimated in this setting. The acceptance rates associated with the kernels are shown in Figure \[fig:gibbs\_acceptance\_rates\_dimX\_100\]. 
We conclude this section by showing (in Figure \[fig:gibbs\_acf\_parameters\]) simulation results for the estimates of the parameter $a_0$ obtained from the various samplers. The kernel which updates $\theta$ was employed $100$ times per iteration, i.e. $100$ times between each conditional update of the latent states as the former is relatively computationally cheap compared to the latter. Note that as indicated by the kernel-density estimates in the right panel of Figure \[fig:gibbs\_acf\_parameters\], the Gibbs sampler based around the did not manage to sufficiently explore the support of the posterior distribution within the number of iterations that we fixed. This lack of convergence also caused the comparatively low empirical autocorrelation of the chains based around the (bootstrap) in the left panel of Figure \[fig:gibbs\_acf\_parameters\]: as the chain did not sufficiently traverse the support of the target distribution – due to poor mixing of the state-updates as illustrated in Figure \[fig:gibbs\_acf\_state\_components\_dimX\_100\] – the empirical autocorrelation shown in Figure \[fig:gibbs\_acf\_parameters\] is a poor estimate of the (theoretical) autocorrelation of the chain. More specifically, the former greatly underestimates the latter. The estimates of the other parameters behaved similarly and the results for $(a_1, \sigma, \tau)$ are therefore omitted. Discussion ========== In this work, we have discussed the connections between the and methodologies and have obtained novel Bayesian inference algorithms for state and parameter estimation in state-space models. We have compared the empirical performance of the various and algorithms on a simple high-dimensional state-space model. We have found that a properly tuned conditional which employs local moves proposed in @ShestopaloffNeal2016 can dramatically outperform the standard conditional in high dimensions. 
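For reference, the empirical autocorrelation used as a diagnostic throughout this section is the standard normalised estimator; the sketch below illustrates it on a synthetic, slowly mixing AR(1) chain whose theoretical lag-$k$ autocorrelation is $0.9^k$.

```python
import numpy as np

def empirical_acf(x, max_lag):
    """Empirical autocorrelation of a scalar chain, normalised by the lag-0
    sample variance (the usual biased estimator)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / (len(x) * c0)
                     for k in range(max_lag + 1)])

# slowly mixing chain: AR(1) with per-lag autocorrelation 0.9
rng = np.random.default_rng(0)
T, rho = 200_000, 0.9
chain = np.empty(T)
chain[0] = 0.0
for t in range(1, T):
    chain[t] = rho * chain[t - 1] + rng.normal()
acf = empirical_acf(chain, 5)
```

For a long, well-mixed chain the estimator recovers the theoretical decay $0.9^k$; for a short chain stuck in one region of the support it can badly underestimate the true autocorrelation, which is the effect described above.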
Additionally, by formally establishing that and the (alternative) methods can be viewed as special cases of a general framework, we have derived both PMMH and particle Gibbs algorithms for this general framework. This provides a promising strategy for extending the range of applicability of algorithms as well as providing a novel class of which might be useful. There are numerous other potential extensions of these ideas. For instance, many existing extensions of standard methods could also be considered for the alternative methods, e.g. incorporating gradient information into the parameter proposals $q(\theta, {\theta'})$ or exploiting correlated pseudo-marginal ideas [@Deligiannidis2015]. Clearly, further generalisations of the target distribution and associated algorithms introduced here are possible. Many other processes for simulating from an extended target admitting a single random trajectory with the correct marginal distribution are possible, e.g. along the lines of @LindstenJohansenNaesseth2014. Acknowledgements {#acknowledgements .unnumbered} ================ Arnaud Doucet’s research is partially supported by the EPSRC, grants EP/K000276/1, EP/K009850/1 and by the Air Force Office of Scientific Research/Asian Office of Aerospace Research and Development, grant AFOSRA/AOARD-144042. Axel Finke was partially supported by the EPSRC under grants EP/I017984/1 and EP/K020153/1. Special cases of the general PMCMC algorithm {#subsec:special_cases} ============================================ In this appendix, we show that all and alternative methods described in this work can be recovered as special cases of the general framework from Section \[sec:general\_pmcmc\]. 
For completeness, we explicitly derive all algorithms as special cases of the general framework even though methods based around the (bootstrap) and were already shown to be special cases of methods based around the general and even though, alternative methods based around the and were already shown to be special cases of alternative methods based around the . (Bootstrap) . : In this case, $\psi_{\theta,1}(\mathbf{z}_1) = \prod_{i=1}^N \mu_\theta(x_1^i) = \prod_{i=1}^N \bar{\rho}_{\theta, 1}(x_1^i)$, and, for $t>1$, $$\psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}) = \prod_{i=1}^N \frac{g_\theta(y_{t-1}|x_{t-1}^{a_{t-1}^i})}{\sum_{j=1}^N g_\theta(y_{t-1}|x_{t-1}^j)} f_\theta(x_t^i|x_{t-1}^{a_{t-1}^i}) = \prod_{i=1}^N \bar{\rho}_{\theta, t}(x_t^i, a_{t-1}^i|\mathbf{x}_{t-1}),$$ while $\gamma_{\theta,t}(x_{1:t}) \coloneqq p_\theta(x_{1:t}, y_{1:t})$, for any $t \leq T$. This implies that $\tilde{v}_{\theta,t}^{i} = \frac{1}{N} g_\theta(y_t|x_t^{i}) \prod_{n=1}^{t-1} \frac{1}{N} \sum_{j=1}^N g_\theta(y_n|x_n^j)$, so that we obtain $q_\theta(i|\mathbf{z}_{1:T}) = g_\theta(y_T|x_T^i) / \sum_{j=1}^N g_\theta(y_T|x_T^j)$ and $\hat{p}_\theta(y_{1:T}) = \prod_{t=1}^{T} \frac{1}{N} \sum_{i=1}^N g_\theta(y_t|x_t^i)$, as stated in Section \[sec:pmcmc\]. . : In this case, $\psi_{\theta,1}(\mathbf{z}_1) = \prod_{i=1}^N p_\theta(x_1^i|y_1) = \prod_{i=1}^N \rho_{\theta,1}(x_1^i)$, and, for $t>1$, $$\psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}) = \prod_{i=1}^N \frac{g_\theta(y_{t}|x_{t-1}^{a_{t-1}^i})}{\sum_{j=1}^N g_\theta(y_{t}|x_{t-1}^j)} p_\theta(x_t^i|x_{t-1}^{a_{t-1}^i}, y_t) = \prod_{i=1}^N \rho_{\theta,t}(x_t^i, a_{t-1}^i|\mathbf{x}_{t-1}),$$ while $\gamma_{\theta,t}(x_{1:t}) \coloneqq p_\theta(x_{1:t}, y_{1:t}) p_\theta(y_{t+1}|x_t)$, for $t < T$, and $\gamma_{\theta,T}(x_{1:T}) \coloneqq p_\theta(x_{1:T}, y_{1:T})$. 
This implies that $\tilde{v}_{\theta,t}^{i} = \frac{1}{N} p_\theta(y_1) \prod_{n=2}^t \frac{1}{N} \sum_{j=1}^N p_\theta(y_n|x_{n-1}^j)$, so that we obtain the selection probability $q_\theta(i|\mathbf{z}_{1:T}) = 1/N$ and the marginal-likelihood estimate $\hat{p}_\theta(y_{1:T}) = p_\theta(y_1) \prod_{t=2}^T \frac{1}{N} \sum_{i=1}^N p_\theta(y_t|x_{t-1}^i)$, as stated in Section \[sec:pmcmc\]. General . : In this case, $\psi_{\theta,1}(\mathbf{z}_1) = \prod_{i=1}^N q_{\theta,1}(x_1^i) = \prod_{i=1}^N \rho_{\theta,1}^{\mathbf{q}_\theta}(x_1^i)$, and, for $t>1$, $$\psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}) = \prod_{i=1}^N \frac{v_{\theta, t-1}^{a_{t-1}^i}}{\sum_{j=1}^N v_{\theta, t-1}^j} q_{\theta,t}(x_t^i|x_{t-1}^{a_{t-1}^i}) = \prod_{i=1}^N \rho_{\theta,t}^{\mathbf{q}_\theta}(x_t^i, a_{t-1}^i|\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}),$$ while $\gamma_{\theta,t}(x_{1:t}) \coloneqq p_\theta(x_{1:t}, y_{1:t}) \tilde{p}_\theta(y_{t+1}|x_t)$, for $t < T$, and $\gamma_{\theta,T}(x_{1:T}) \coloneqq p_\theta(x_{1:T}, y_{1:T})$. This implies that $\tilde{v}_{\theta,t}^{i} = \frac{1}{N} v_{\theta,t}^i \prod_{n=1}^{t-1} \frac{1}{N} \sum_{j=1}^N v_{\theta,n}^j$, so that we obtain the selection probability $q_\theta(i|\mathbf{z}_{1:T}) = v_{\theta,T}^i / \sum_{j=1}^N v_{\theta,T}^j$ and the marginal-likelihood estimate $\hat{p}_\theta(y_{1:T}) = \prod_{t=1}^T \frac{1}{N} \sum_{i=1}^N v_{\theta,t}^i$, as stated in Section \[sec:pmcmc\]. . : In this case, $\psi_{\theta,1}(\mathbf{z}_1) = \bar{\rho}_{\theta,1}(x_1^1) \prod_{i=2}^N \overline{R}_{\theta,1}(x_1^i|x_1^{i-1})$, and, for $t>1$, $$\psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}) = \bar{\rho}_{\theta,t}(x_t^1, a_{t-1}^1|\mathbf{x}_{t-1}) \prod_{i=2}^N \overline{R}_{\theta,t}(x_t^i, a_{t-1}^i|x_t^{i-1}, a_{t-1}^{i-1}; \mathbf{x}_{t-1}),$$ while $\gamma_{\theta,t}(x_{1:t})$, $q_\theta(b_T|\mathbf{z}_{1:T})$ and $\hat{p}_\theta(y_{1:T})$ are the same as for methods using the bootstrap . . 
: In this case, $\psi_{\theta,1}(\mathbf{z}_1) = \rho_{\theta,1}(x_1^1) \prod_{i=2}^N R_{\theta,1}(x_1^i|x_1^{i-1})$, and, for $t>1$, $$\psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}) = \rho_{\theta,t}(x_t^1, a_{t-1}^1|\mathbf{x}_{t-1}) \prod_{i=2}^N R_{\theta,t}(x_t^i, a_{t-1}^i|x_t^{i-1}, a_{t-1}^{i-1}; \mathbf{x}_{t-1}),$$ while $\gamma_{\theta,t}(x_{1:t})$, $q_\theta(b_T|\mathbf{z}_{1:T})$ and $\hat{p}_\theta(y_{1:T})$ are the same as for methods using the . . : In this case, $\psi_{\theta,1}(\mathbf{z}_1) = \rho_{\theta,1}^{\mathbf{q}_\theta}(x_1^1) \prod_{i=2}^N R_{\theta,1}^{\mathbf{q}_\theta}(x_1^i|x_1^{i-1})$, and, for $t>1$, $$\psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}) = \rho_{\theta,t}^{\mathbf{q}_\theta}(x_t^1, a_{t-1}^1|\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}) \prod_{i=2}^N R_{\theta,t}^{\mathbf{q}_\theta}(x_t^i, a_{t-1}^i|x_t^{i-1}, a_{t-1}^{i-1}; \mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}),$$ while $\gamma_{\theta,t}(x_{1:t})$, $q_\theta(b_T|\mathbf{z}_{1:T})$ and $\hat{p}_\theta(y_{1:T})$ are the same as for methods using the general APF.
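The fully-adapted identities above can be verified numerically in the scalar linear-Gaussian case, where both $p_\theta(y_t|x_{t-1})$ and $p_\theta(x_t|x_{t-1}, y_t)$ are Gaussian with closed-form parameters. The model and its parameter values below are illustrative; a Kalman filter supplies the exact reference value.

```python
import numpy as np

def kalman_loglik(y, a, s2, t2, p0):
    """Exact log p(y_{1:T}) for x_t = a x_{t-1} + N(0, s2), y_t = x_t + N(0, t2)."""
    m, p, ll = 0.0, p0, 0.0
    for yt in y:
        sy = p + t2
        ll += -0.5 * (np.log(2 * np.pi * sy) + (yt - m) ** 2 / sy)
        k = p / sy
        m, p = m + k * (yt - m), (1 - k) * p
        m, p = a * m, a ** 2 * p + s2
    return ll

def log_norm(x, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def fapf_loglik(y, a, s2, t2, p0, n, rng):
    """Fully-adapted estimate: p(y_1) * prod_{t>1} N^{-1} sum_j p(y_t | x_{t-1}^j)."""
    ll = log_norm(y[0], 0.0, p0 + t2)                     # exact first factor p(y_1)
    v1 = 1.0 / (1.0 / p0 + 1.0 / t2)
    x = rng.normal(v1 * y[0] / t2, np.sqrt(v1), size=n)   # x_1 ~ p(x_1 | y_1)
    v = s2 * t2 / (s2 + t2)                               # variance of p(x_t | x_{t-1}, y_t)
    for yt in y[1:]:
        logw = log_norm(yt, a * x, s2 + t2)               # predictive p(y_t | x_{t-1})
        c = logw.max()
        w = np.exp(logw - c)
        ll += c + np.log(w.mean())
        anc = rng.choice(n, size=n, p=w / w.sum())        # resample prop. to predictive
        x = v * (a * x[anc] / s2 + yt / t2) + np.sqrt(v) * rng.normal(size=n)
    return ll

rng = np.random.default_rng(0)
a, s2, t2 = 0.9, 1.0, 1.0
p0 = s2 / (1 - a ** 2)
xs = rng.normal(0.0, np.sqrt(p0))
y = []
for _ in range(30):
    y.append(xs + np.sqrt(t2) * rng.normal())
    xs = a * xs + np.sqrt(s2) * rng.normal()
y = np.array(y)

ll_exact = kalman_loglik(y, a, s2, t2, p0)
ll_fapf = fapf_loglik(y, a, s2, t2, p0, n=1000, rng=rng)
```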
--- abstract: 'A double well loaded with bosonic atoms represents an ideal candidate to simulate some of the most interesting aspects in the phenomenology of thermalisation and equilibration. Here we report an exhaustive analysis of the dynamics and steady state properties of such a system locally in contact with different temperature reservoirs. We show that thermalisation only occurs ‘accidentally’. We further examine the nonclassical features and energy fluxes implied by the dynamics of the double-well system, thus exploring its finite-time thermodynamics in relation to the settlement of nonclassical correlations between the wells.' author: - Steve Campbell - Gabriele De Chiara - Mauro Paternostro title: 'Equilibration and nonclassicality of a double-well potential' --- The high degree of control available when dealing with ultracold atomic samples makes them ideal candidates for realising prototypical quantum technology devices [@Sanpera2012; @review_optical_lattices]. The practical applications that can be addressed using platforms based on the physics of ultracold atomic ensembles range from metrology and sensing to the achievement of quantum memories [@brennen], from ultra-stable atomic clocks [@andre] to the simulation of difficult condensed-matter physics problems [@georgescu]. Recently, this range has been extended to quantum thermometry [@mehboudi], while theoretical and experimental interest is emerging in the design and implementation of thermodynamic processes and (elementary) engines based on such systems [@brantut]. 
The tuneable interactions among the elementary constituents of a cold-atom system, and the availability of effective ways of arranging non-equilibrium states of atomic systems confined in external optical potentials, provide an almost ideal scenario for the study and harnessing of thermodynamically relevant questions and tasks; indeed, thermal and number fluctuations have recently been studied for ultracold atoms in two-mode traps [@bruno]. For such endeavours to succeed, it is absolutely crucial to identify a suitable configuration to act as the basic building block for a thermodynamic device, and characterise its working principles in terms of fundamental quantities (such as heat and work), which will pave the way to the actual construction of the machine itself. In this paper, we move exactly along these lines: Inspired by the experimental setup of Ref. [@brantut], where a cold atomic system is placed in contact with two different thermal reservoirs, we consider a slight modification in which the gate potential separating the two reservoirs is replaced by a double well potential loaded with a Bose-Einstein condensate (BEC), itself a system of vast experimental implementability [@smerzi; @albiez]. We set up this configuration and study explicitly its non-equilibrium dynamics. By assuming each well is initially thermalised to its own local reservoir, we will show that, in the tunnelling dominated regime, a temperature imbalance between the wells leads to the emergence of non-classicality, and study how this is linked to the equilibration dynamics of the atomic system. Remarkably, we show that the genuinely quantum nature of the state of the double well does not appear to affect the rate of equilibration of the open system at hand. 
By working in the weak coupling regime between each well and its reservoir, which allows us to identify clearly the contributions of each well to the total heat flux into/out of the local environments, we highlight a rather rich and complex dynamics of the heat exchange across the wells. Further, we examine its relation with the emergence of nonclassical correlations within the state of the atomic ensemble within a vast range of operating conditions. Description of the model ======================== We are interested in studying the out of equilibrium dynamics and steady-state properties of a system of cold atoms loaded in a double-well potential and subject to the effects of two reservoirs at different energy. Our Hamiltonian is the two-site Bose-Hubbard model [@smerzi] given by $\hat{\cal H}=\hat{\cal H}_f+\hat{\cal H}_{si}+\hat{\cal H}_{t}$ with \[we assume units such that $\hbar=1$ throughout\] $$\label{hamiltonian} \begin{aligned} &\hat{\cal H}_f=\omega_1 \left(\hat a^\dagger_1 \hat a_1+\frac{1}{2} \right) + \omega_2 \left(\hat a^\dagger_2 \hat a_2+\frac{1}{2} \right),\\ &\quad\hat{\cal H}_{si}= \frac{U}{2} \left(\hat a^{\dagger2}_1\hat a^2_1+\hat a^{\dagger2}_2\hat a^2_2 \right),\\ &\quad\hat{\cal H}_t= - {\cal J} \left(\hat a^\dagger_1 \hat a_2 + \hat a_1 \hat a^\dagger_2\right). \end{aligned}$$ Here $\hat{\cal H}_f$ describes the free evolution of the atomic systems in the two wells, each occurring at the rate set by the single-atom energy $\omega_j$, and with $\hat a_j,~\hat a_j^\dagger$ the associated annihilation and creation operators for each well. The Hamiltonian term $\hat{\cal H}_{si}$ accounts for the self-interaction (at rate $U$) between atoms occupying the same well, while $\hat{\cal H}_{t}$ stands for the tunnelling term, which occurs at rate ${\cal J}$. We will focus mostly on the tunnelling-dominated regime associated with $U=0$. 
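As a concrete handle on this Hamiltonian, one can construct it in a (deliberately small) truncated two-mode Fock basis and check that each term is Hermitian and commutes with the total particle number; the truncation dimension and the parameter values below are arbitrary, chosen only for illustration.

```python
import numpy as np

d = 6                                          # Fock truncation per well (illustrative)
am = np.diag(np.sqrt(np.arange(1, d)), k=1)    # annihilation: a|n> = sqrt(n)|n-1>
ad = am.T.copy()                               # creation operator
I = np.eye(d)
a1, a2 = np.kron(am, I), np.kron(I, am)        # well-1 and well-2 modes
ad1, ad2 = np.kron(ad, I), np.kron(I, ad)

w1, w2, U, J = 1.0, 1.2, 0.5, 2.0              # illustrative rates
Id = np.eye(d * d)
Hf = w1 * (ad1 @ a1 + 0.5 * Id) + w2 * (ad2 @ a2 + 0.5 * Id)
Hsi = 0.5 * U * (ad1 @ ad1 @ a1 @ a1 + ad2 @ ad2 @ a2 @ a2)
Ht = -J * (ad1 @ a2 + a1 @ ad2)
H = Hf + Hsi + Ht

Ntot = ad1 @ a1 + ad2 @ a2                     # total particle number
```

Since $[\hat{\cal H}, \hat N]=0$, the dynamics decomposes into sectors of fixed total atom number, a property the truncated matrices reproduce exactly.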
However, the interaction-dominated regime corresponding to ${\cal J}=0$, and the intermediate regime will also be addressed. The focus of our investigation will be the phenomenology of thermalisation of the system, both at the single- and two-well level. We remark that the model, Eq. , can be realised in a variety of settings including superconducting Josephson junctions [@schon], trapped ions [@ions], bimodal optical cavities [@larson] and optomechanical setups [@markus]. While important insight will be gathered by addressing the unitary evolution induced by considering $\hat{\cal H}$, the overarching goal of this work is the study of the open-system evolution created by the contact of the two wells with their respective reservoirs. We are interested in addressing the dynamics induced by the master equation [@breuer] $$\label{master} \dot{\varrho}_t=-i\left[\hat{\cal H},\varrho_t \right] + \sum_{j=1}^2 \mathcal{L}_j(\varrho_t),$$ where we have introduced the overall-system density matrix at a generic time $t$, $\varrho_t$, and the Lindblad super-operators $$\begin{aligned} \mathcal{L}_j (\varrho_t)=& \gamma_j ({\overline{n}}_j+1)\left( \hat a_j \varrho_t \hat a_j^\dagger -\frac{1}{2} \{\hat a_j^\dagger \hat a_j, \varrho_t\} \right) +\\ & \gamma_j {\overline{n}}_j\left( \hat a_j^\dagger \varrho_t \hat a_j -\frac{1}{2} \{\hat a_j \hat a_j^\dagger, \varrho_t\} \right), \end{aligned}$$ which describe the incoherent particle-exchange process (occurring at rate $\gamma_j$) between a well and the respective reservoir (assumed to have a thermal occupation number ${\overline{n}}_j$). Eq.  is the key equation in our analysis to follow. We note that, under certain working conditions, this description of the dynamics is not always valid. In particular, when the scattering length of the BEC is large, non-Markovian dynamics can play an important role [@haikka]. 
We therefore assume that the scattering length is sufficiently small to ensure the validity of the Markovian approximation [@haikka]. Exact solutions of the tunnelling-dominated regime ================================================== In order to gather insight into the basic coherent processes of the system in the case of tunnelling-dominated regimes, we set $U=0$ in $\hat{\cal H}$ and address the unitary evolution first. We define the canonical quadrature operators $\{\hat x_1,\hat p_1,\hat x_2,\hat p_2\}$ as [@carmicheal; @walls] $$\hat x_j=\frac{1}{\sqrt{2}}\left( \hat a_j + \hat a^\dag_j \right),~~~\text{and}~~~\hat p_j=\frac{i}{\sqrt{2}}\left( \hat a^\dag_j - \hat a_j \right),$$ and recast the Hamiltonian into the form $$\label{withquad} \hat {\cal H}=\frac{\omega_1}{2}\left( \hat x_1^2 + \hat p_1^2 + \openone \right) + \frac{\omega_2}{2}\left( \hat x_2^2 + \hat p_2^2 + \openone \right) - {\cal J} \left( \hat x_1 \hat x_2 + \hat p_1 \hat p_2 \right),$$ with $\openone$ the identity operator. By neglecting trivial constant terms, Eq.  can thus be interpreted as a quadratic form identified by the adjacency matrix $${\cal A}= \left( \begin{array}{cccc} \omega_1 & 0 & -{\cal J} & 0 \\ 0 & \omega_1 & 0 & -{\cal J} \\ -{\cal J} & 0 & \omega_2 & 0 \\ 0 & -{\cal J} & 0 & \omega_2 \\ \end{array} \right),$$ which has been written in the ordered operator basis $\{\hat x_1,\hat p_1,\hat x_2,\hat p_2\}$. In what follows, we rescale all the relevant frequencies with respect to $\omega_1$. In these units, we have $\omega_2\rightarrow\omega_2/\omega_1=1+\Delta$ with $\Delta$ a dimensionless bias between the two wells, and ${\cal J}\to J={\cal J}/\omega_1$. 
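A quick numerical check of this quadratic form (with illustrative values of $\Delta$ and $J$): the eigenvalues of ${\cal A}$ come in two degenerate pairs, matching the normal-mode frequencies $\Omega_{1,2}$ obtained from the diagonalisation performed next, and, because the evolution acts through orthogonal (rotation) matrices, an equal-temperature thermal covariance matrix $\propto\openone$ is left invariant.

```python
import numpy as np

Delta, J = 0.7, 2.0                        # illustrative values, in units of omega_1
A = np.array([[1.0, 0.0, -J, 0.0],
              [0.0, 1.0, 0.0, -J],
              [-J, 0.0, 1.0 + Delta, 0.0],
              [0.0, -J, 0.0, 1.0 + Delta]])

Gamma = np.sqrt(Delta ** 2 + 4 * J ** 2)
O1, O2 = 1 + (Delta - Gamma) / 2, 1 + (Delta + Gamma) / 2   # normal-mode frequencies
evals = np.sort(np.linalg.eigvalsh(A))

def R(phi):                                # single-mode phase-space rotation
    return np.array([[np.cos(phi), np.sin(phi)], [-np.sin(phi), np.cos(phi)]])

def T(theta):                              # two-mode mixing (beam-splitter) matrix
    c, s = np.cos(theta), np.sin(theta)
    return np.block([[c * np.eye(2), s * np.eye(2)],
                     [-s * np.eye(2), c * np.eye(2)]])

theta = -0.5 * np.arctan(2 * J / Delta)
t = 1.3
M = T(theta) @ np.block([[R(O1 * t), np.zeros((2, 2))],
                         [np.zeros((2, 2)), R(O2 * t)]]) @ T(theta).T

nbar = 1.0
sigma0 = (2 * nbar + 1) * np.eye(4)        # both wells at the same temperature
sigma_t = M @ sigma0 @ M.T                 # unchanged: M is orthogonal
```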
The rescaled Hamiltonian $\hat{\cal H}/\omega_1$ can be diagonalised by means of a simple two-mode mixing transformation $\hat U_T(\theta)=\exp[-i\tfrac{\theta}{2}(\hat a^\dag_1\hat a_2+\hat a_1 \hat a^\dag_2)]$ with $\theta=-\frac{1}{2}\arctan\left({2J}/{\Delta}\right)$, which leaves us with the new quadratic Hamiltonian $${{\hat{\cal H}}_q}/\omega_1=\Omega_1 ( \hat{x}_1^2 + \hat{p}_1^2 ) + \Omega_2 ( \hat{x}_2^2 + \hat{p}_2^2 ),$$ describing two freely evolving harmonic oscillators at the respective frequencies $\Omega_1=1+({\Delta - \Gamma})/{2},~~\Omega_2=1+({\Delta + \Gamma})/{2}$, with $\Gamma=\sqrt{\Delta^2+4 J^2}$. For a Gaussian initial state of the system [@walls], given the quadratic nature of the Hamiltonian, rather than tracking the evolution of the density matrix of the system, we can restrict our attention to the evolved form of the covariance matrix $\sigma$ of entries $\sigma_{ij}=\langle\{\hat P_i,\hat P_j\}\rangle-\langle \hat P_i\rangle\langle \hat P_j\rangle$, where $\hat P_i$’s are the elements of the vector of quadrature operators $\hat P^\top=(\hat x_1~\hat p_1~\hat x_2~\hat p_2)$ and the expectation value of such vector (calculated over the state of the system), which bear full information on the state of the system. Both are readily gathered as $$\label{evolv} \sigma_u(t)=M \sigma_u(0) M^\top,\qquad\langle{\hat P}\rangle_t=M \langle{\hat P}\rangle_0,$$ with $M=T(\theta) R_1(\Omega_1 t)R_2 (\Omega_2 t) T(\theta)^\top$, $\sigma_u(0)$ \[$\langle{\hat P}\rangle_0$\] the covariance matrix \[vector of phase-space displacements\] of the initial state of the system and $\sigma_u(t)$ \[$\langle{\hat P}\rangle_t$\] its time-evolved version. In Eq.  $R_j(\Omega_j t)~(j=1,2)$ and $T(\theta)$ are the symplectic transformations corresponding to the free evolution $e^{-i\Omega_j\hat a^\dag_j\hat a_j t}$ and two-mode mixing $\hat U_T(\theta)$. 
Explicitly $$\begin{aligned} & R_j(\Omega_jt)= \begin{pmatrix} \cos(\Omega_j t)&\sin(\Omega_j t)\\ -\sin(\Omega_j t)&\cos(\Omega_j t) \end{pmatrix},\\ & T(\theta)=\left( \begin{array}{cccc} \cos\theta & 0 & \sin\theta & 0 \\ 0 & \cos\theta & 0 & \sin\theta \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & -\sin\theta & 0 & \cos\theta \\ \end{array} \right). \end{aligned}$$ We now concentrate on the situation where the particles in each well are initially at thermal equilibrium with their local reservoirs. The initial covariance matrix will thus be that of a two-mode thermal state $$\label{initial} \sigma_u(0) = \left( \begin{array}{cccc} 2{\overline{n}}_1+1 & 0 & 0 & 0 \\ 0 & 2{\overline{n}}_1+1 & 0 & 0 \\ 0 & 0 & 2{\overline{n}}_2+1 & 0 \\ 0 & 0 & 0 & 2{\overline{n}}_2+1 \\ \end{array} \right),$$ with ${\overline{n}}_j=\langle\hat a^\dag_j\hat a_j\rangle$ the mean number of particles in the $j^{\rm th}$ well. For such an initial state, the phase-space displacements are all null and full information on the evolved state is provided by the covariance matrix $$\label{unitarysol} \sigma_u(t) = \left( \begin{array}{cccc} n_1 & 0 & c_1 & c_2 \\ 0 & n_1 & -c_2 & c_1 \\ c_1 & -c_2 & n_2 & 0 \\ c_2 & c_1 & 0 & n_2 \\ \end{array} \right),$$ with elements $$\begin{aligned} &c_1=\frac{4J\Delta\left( {\overline{n}}_1-{\overline{n}}_2 \right)\sin\left(\Gamma t/2\right)}{\Gamma^2},~c_2=\frac{2{J} \left( {\overline{n}}_1 - {\overline{n}}_2 \right) \sin\left(\Gamma t\right)}{\Gamma},\\ &n_1=\frac{{4 J^2 ({\overline{n}}_1-{\overline{n}}_2) \cos \left(\Gamma t \right)+4J^2 ({\overline{n}}_1+{\overline{n}}_2+1)+\Delta ^2 (2 {\overline{n}}_1+1)}}{\Gamma^2},\\ &n_2=\frac{{4 J^2 ({\overline{n}}_2-{\overline{n}}_1) \cos \left(\Gamma t \right)+4J^2 ({\overline{n}}_1+{\overline{n}}_2+1)+\Delta ^2 (2 {\overline{n}}_2+1)}}{\Gamma^2}. \end{aligned}$$ If both wells are at the same initial temperature, i.e. ${\overline{n}}_1={\overline{n}}_2$, then $c_1=c_2=0$ and $n_1=n_2=2{\overline{n}}_1+1$, i.e. 
the system does not evolve in time and the two wells remain at their thermal equilibrium, notwithstanding the tunnelling. This is a clear interference effect. Moreover, for identical single-atom energy in each well, i.e. $\Delta=0$, $c_1$ is null, showing that the position \[momentum\] $\hat x_1$ \[$\hat p_1$\] gets correlated with $\hat p_2$ \[$\hat x_2$\]. In general, such correlations do not necessarily imply the setting of entanglement between the wells [@Ferraro]. Indeed, the tunnelling term of the Hamiltonian, $\hat {\cal H}_{t}$ in Eq. , can generate entanglement only when the state of at least one of the wells is sufficiently non-classical. In the context of our investigation here, this basically implies the preparation of squeezed states of the wells [@asboth]. This can be understood by noticing the formal analogy between $\hat {\cal H}_{t}$ and the generator of a two-mode mixing transformation and considering, for the sake of argument, the resonant case $\Delta=0$. Under such conditions, moving to the interaction picture with respect to $\hat{\cal H}_{f}$, the time evolution operator would correspond to $\hat U_T(Jt)$, which gives rise to no entanglement between the two wells when they are prepared in thermal states (even at different effective temperatures), as demonstrated in Ref. [@asboth]. However, this does not imply that the dynamics of the two-well system is trivial. In fact, in general, quantum correlations (of a form weaker than entanglement) are set by $\hat U_T(Jt)$ when acting on thermal states with ${\overline{n}}_1\neq{\overline{n}}_2$. We will address the emergence of discord-like quantum correlations [@zurek; @vedral; @discordreview] and their relation to the inter-well exchange process in a later section. We now move to solving the full dissipative dynamics governed by Eq.  for $U=0$. The problem can be efficiently solved by using a suitable Gaussian ansatz: We first translate Eq.  
into the phase space by deriving a differential equation for the symmetrically ordered characteristic function $\chi(\beta_1,\beta_2,t) = \text{Tr}[\hat{D}_1(\beta_1)\otimes \hat{D}_2(\beta_2)\varrho_t]$ [@carmicheal; @walls]. Here $\hat D_j(\beta_j)=\exp[{\beta_j \hat a^\dagger_j - \beta_j^* \hat a_j}]$ is the Weyl displacement operator with amplitude $\beta_j\in{\mathbb C}$ for system $j=1,2$. Using the phase-space relations [@walls] $$\begin{split} &\hat a^\dagger \hat D_j(\beta_j)\leftrightarrow\left( \frac{\partial}{\partial \beta} +\frac{\beta^*}{2} \right)\hat D_j(\beta_j),\\ &\hat D(\beta_j) \hat a^\dagger\leftrightarrow\left( \frac{\partial}{\partial \beta} - \frac{\beta^*}{2} \right)\hat D_j(\beta_j),\\ &\hat a \hat D_j(\beta_j)\leftrightarrow\left(\frac{\beta}{2} - \frac{\partial}{\partial \beta^*}\right)\hat D_j(\beta_j),\\ &\hat D_j(\beta_j) \hat a\leftrightarrow\left( - \frac{\beta}{2} - \frac{\partial}{\partial \beta^*} \right)\hat D_j(\beta_j), \end{split}$$ after a lengthy but otherwise straightforward calculation, we find that Eq.  takes the form of the Fokker-Planck equation $$\label{fulleq} \begin{aligned} \partial_t\chi(\beta_1,\beta_2)&=\left\{ iJ\left( -\beta_1\frac{\partial}{\partial\beta_2} -\beta_2\frac{\partial}{\partial\beta_1} + \beta_1^*\frac{\partial}{\partial\beta_2^*}+ \beta_2^*\frac{\partial}{\partial\beta_1^*} \right) \right.\\ &\left.- \sum_{j=1}^2\left[ \omega_j\left( \beta_j\frac{\partial}{\partial\beta_j}- \beta_j^*\frac{\partial}{\partial\beta_j^*} \right) + \frac{\gamma_j}{2}\left( \beta_j\frac{\partial}{\partial\beta_j} + \beta_j^*\frac{\partial}{\partial\beta_j^*}\right) + \gamma_j\left({\overline{n}}_j+\frac12\right)\vert\beta_j\vert^2 \right] \right\} \chi(\beta_1,\beta_2). 
\end{aligned}$$ By letting $\beta_j=x_j+ip_j$ \[so that $\chi(\beta_1,\beta_2,t)\to\chi(x_1,p_1,x_2,p_2,t)$\] and expressing the characteristic function in terms of the entries of the vector of quadrature variables, we can write $\chi(x_1,p_1,x_2,p_2,t)=\exp[{iP^\top X - \tfrac{1}{2} P^\top \tilde\sigma P}]$, where we have introduced the generic vector of ${\mathbb C}$-numbers $X^\top=(y_1~z_1~y_2~z_2)$ and the matrix $\tilde\sigma$ whose elements we aim to find, which we do by solving Eq. . In [**Methods**]{} we provide the set of differential equations for the elements of $X$ and $\tilde\sigma$ obtained when evaluating both sides of Eq.  and equating them term by term. The explicit solution of the problem at hand leads to a time-evolved covariance matrix of the general block form $$\label{explicit} \sigma(t) = \begin{pmatrix} m_1\openone&{\bm c}\\ {\bm c}^\top&m_2\openone \end{pmatrix},$$ where ${\bm c}$ is a $2\times2$ matrix of correlations among the quadrature operators of the system. The diagonal structure of the blocks pertaining to the individual wells shows that, locally, the system thermalises at temperatures determined by the explicit form of $m_{1,2}$. However, as ${\bm c}$ is, in general, not null, global thermalisation is not achieved: the overall system never thermalises, notwithstanding an explicitly dissipative evolution. This is clearly seen by looking at the general form of the steady state. Although the analytic forms of the non-zero elements are readily achievable for any value of the parameters involved in the problem, they are, in general, too cumbersome to be reported here. 
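The steady state can, however, be cross-checked numerically: for this linear (Gaussian) dynamics the covariance matrix obeys $\dot\sigma = A\sigma + \sigma A^\top + D$, so the steady state solves a continuous-time Lyapunov equation. A sketch for the resonant, equal-damping case, under the convention $\dot{\hat P} = \Omega_s {\cal A}\hat P - (\gamma/2)\hat P$ with $\Omega_s$ the symplectic form (parameter values illustrative), compared against the $\Delta=0$ limit of the closed-form expression quoted next:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

J, gamma, n1, n2 = 2.0, 1.0, 1.0, 2.0          # Delta = 0, units of omega_1
B = np.array([[1.0, 0.0, -J, 0.0],
              [0.0, 1.0, 0.0, -J],
              [-J, 0.0, 1.0, 0.0],
              [0.0, -J, 0.0, 1.0]])            # H = (1/2) r^T B r, basis (x1, p1, x2, p2)
Om = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))   # symplectic form
A = Om @ B - 0.5 * gamma * np.eye(4)           # drift matrix
D = gamma * np.diag([2 * n1 + 1, 2 * n1 + 1, 2 * n2 + 1, 2 * n2 + 1])   # diffusion

sigma_num = solve_continuous_lyapunov(A, -D)   # solves A s + s A^T + D = 0

# Delta = 0 limit of the closed-form steady-state covariance matrix
S, dn = n1 + n2 + 1, n1 - n2
den = 4 * J ** 2 + gamma ** 2
m1 = (4 * J ** 2 * S + gamma ** 2 * (2 * n1 + 1)) / den
m2 = (4 * J ** 2 * S + gamma ** 2 * (2 * n2 + 1)) / den
c = 2 * J * gamma * dn / den
sigma_cf = np.array([[m1, 0, 0, c],
                     [0, m1, -c, 0],
                     [0, -c, m2, 0],
                     [c, 0, 0, m2]])
```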
However, assuming $\gamma_1=\gamma_2=\gamma$, the steady-state of the system is determined by the covariance matrix $$\label{steadystate} \sigma_{ss}=\zeta \left( \begin{array}{cccc} \tfrac{4J^2({\overline{n}}_1+{\overline{n}}_2+1)}{\gamma^2+\Delta^2}+(2{\overline{n}}_1+1) & 0 & -\tfrac{2J\Delta({\overline{n}}_1-{\overline{n}}_2)}{\gamma^2+\Delta^2} & \tfrac{2J\gamma\left( {\overline{n}}_1-{\overline{n}}_2 \right)}{\gamma^2+\Delta^2} \\ 0 & \tfrac{4J^2({\overline{n}}_1+{\overline{n}}_2+1)}{\gamma^2+\Delta^2}+(2{\overline{n}}_1+1) & -\tfrac{2J\gamma\left( {\overline{n}}_1-{\overline{n}}_2 \right)}{\gamma^2+\Delta^2}& -\tfrac{2J\Delta({\overline{n}}_1-{\overline{n}}_2)}{\gamma^2+\Delta^2} \\ -\tfrac{2J\Delta({\overline{n}}_1-{\overline{n}}_2)}{\gamma^2+\Delta^2} & -\tfrac{2J\gamma\left( {\overline{n}}_1-{\overline{n}}_2 \right)}{\gamma^2+\Delta^2} & \tfrac{4J^2({\overline{n}}_1+{\overline{n}}_2+1)}{\gamma^2+\Delta^2}+(2{\overline{n}}_2+1) & 0 \\ \tfrac{2J\gamma\left( {\overline{n}}_1-{\overline{n}}_2 \right)}{\gamma^2+\Delta^2} & -\tfrac{2J\Delta({\overline{n}}_1-{\overline{n}}_2)}{\gamma^2+\Delta^2} & 0 & \tfrac{4J^2({\overline{n}}_1+{\overline{n}}_2+1)}{\gamma^2+\Delta^2}+(2{\overline{n}}_2+1) \\ \end{array} \right),$$ with $\zeta= \frac{\gamma^2+\Delta^2}{4 J^2+\gamma^2+\Delta^2}$. Clearly, only for ${\overline{n}}_1={\overline{n}}_2$ the structure of the global covariance matrix takes a thermal-like form. However, this does not preclude the possibility to achieve accidental thermalisation, i.e. situations such that the state of the system either becomes globally/locally thermal, or closely approximates an equilibrium configuration. This will be the focus of the following analysis.\ [**(a)**]{} 0.5\ ![[**(a)**]{} Maximum fidelity between the instantaneous state of the system and a globally thermal state, plotted against the dimensionless evolution time and the (dimensionless) bias between the energies of the wells $\Delta$. 
[**(b)**]{} Corresponding estimate of the mean energy $\mu$ of the target globally thermal state. In both panels, we have taken $J=2$, ${\overline{n}}_1={\overline{n}}_2/2=1$.[]{data-label="fidelityunitaryglobal"}](fig2a.pdf "fig:"){width="0.5\columnwidth"} ![](fig2b.pdf "fig:"){width="0.43\columnwidth"} Assessment of dynamical thermalisation ====================================== We shall start with the study of the unitary case. Dynamical thermalisation in closed-system dynamics is a topic of vast interest, which has recently attracted considerable attention at both the theoretical and experimental level [@Trotzky]. Our approach is based on the assessment of the distance between the time-dependent state of the system and a generic (either global or local) thermal state. Quantitatively, as a measure of the distance between two states $\rho_{1,2}$, we use the Uhlmann fidelity [@uhlmann] $$F(\rho_1,\rho_2)=\left({\rm Tr}\sqrt{\sqrt{\rho_1}\rho_2\sqrt{\rho_1}}\right)^2.$$ For Gaussian states, it can be conveniently evaluated using the covariance matrices $\sigma_{1,2}$ associated with the states under scrutiny. The explicit formula, which has been recently reported in Ref. 
[@marian], reads $$F(\sigma_1,\sigma_2)=4\frac{(\sqrt{x}+\sqrt{x-1})^2}{\sqrt{{\rm det}(\sigma_1+\sigma_2)}},$$ where $\Omega=i(\sigma_y\oplus\sigma_y)$ is the two-mode symplectic matrix ($\sigma_y$ being the $y$-Pauli matrix) and $x=2\sqrt{{\cal I}_1}+2\sqrt{{\cal I}_2}+1/2$ with $${\cal I}_1=\frac{{\rm det}(\Omega\sigma_1\Omega\sigma_2-\openone_4)}{16{\rm det}(\sigma_1+\sigma_2)},~~{\cal I}_2=\frac{{\rm det}(\sigma_1+i\Omega){\rm det}(\sigma_2+i\Omega)}{16{\rm det}(\sigma_1+\sigma_2)}.$$ In our case we consider $\sigma_1=\sigma_u(t)$ \[cf. Eq. \] and $\sigma_2$ given by either $\sigma^G_2=(2\mu+1)\openone_4$, i.e. the covariance matrix of a globally thermal state with mean number of excitations $\mu$, or $\sigma^L_2=(2\mu_1+1)\openone_2\oplus(2\mu_2+1)\openone_2$, which is the one associated with the tensor product of locally thermal states (each with mean number of excitations $\mu_j$). For clarity, we have indicated with $\openone_n$ the identity matrix of dimension $n$. We present the case of global thermalisation first: after calculating the time behaviour of $F(\sigma_u(t),\sigma^G_2)$ for various choices of $\Delta$, we have numerically evaluated the value of $\mu$ that achieves the maximum of $F(\sigma_u(t),\sigma^G_2)$. In Fig. \[fidelityunitaryglobal\] we show both this value and the corresponding estimate for $\mu$. The state fidelity remains evidently quite large, being only partially depleted by an increasing value of $\Delta$ (the dependence on such parameter is quite non-trivial, given that for $\Delta=2.5$, for instance, values very close to those associated with $\Delta=0$ can be achieved, at suitable times in the evolution). However, while at small values of $\Delta$ the target state changes very little with time, this is not the case for increasing bias: the value of $\mu$ corresponding to a non-zero $\Delta$ oscillates with a non-negligible amplitude as this parameter grows. 
In any case, perfect thermalisation is never achieved, a result that is strengthened by the analysis that we will report in the next Section. The situation is somewhat different when locally thermal target states are considered \[cf. Fig. \[fidelityunitarylocal\]\]: besides the expected times at which a full period of the evolution is achieved, it is possible to identify instants of time at which the state of the double-well system is indeed very close to a locally thermal state ($F(\sigma_u(t),\sigma^L_2)\ge0.999$), which would suggest the occurrence of accidental dynamical thermalisation. ![[**(a)**]{} Maximum fidelity between the instantaneous state of the system and a locally thermal state, plotted against the dimensionless evolution time and the (dimensionless) bias between the energies of the wells $\Delta$. [**(b)**]{} & [**(c)**]{} Estimate of the corresponding mean number of excitations $\mu_{1,2}$ of the target locally thermal state. In both panels, we have taken $J=2$, ${\overline{n}}_1={\overline{n}}_2/2=1$.[]{data-label="fidelityunitarylocal"}](fig3a.pdf "fig:"){width="0.5\columnwidth"} ![](fig3b.pdf "fig:"){width="0.80\columnwidth"} ![Open-system dynamics. [**(a)**]{} Fidelity with a globally thermal state for $J=2$, ${\overline{n}}_1=1$, ${\overline{n}}_2=2$, $\gamma_1/\omega_1=\gamma_2/\omega_1=1$. We have taken $\Delta=0$ (blue line) and $\Delta=0.5$ (red line). 
[**(b)**]{} Fidelity with locally thermal states for $J=2$, ${\overline{n}}_1=1$, ${\overline{n}}_2=2$, and $\Delta=0$. We have taken $\gamma_1/\omega_1=\gamma_2/\omega_1=1$ (red line) and $\gamma_1/\omega_1=1$ with $\gamma_2/\omega_1=3$ (blue line).[]{data-label="fidelityopen"}](fig4a.pdf "fig:"){width="0.7\columnwidth"}\ [**(b)**]{}\ ![Open-system dynamics. [**(a)**]{} Fidelity with a globally thermal state for $J=2$, ${\overline{n}}_1=1$, ${\overline{n}}_2=2$, $\gamma_1/\omega_1=\gamma_2/\omega_1=1$. We have taken $\Delta=0$ (blue line) and $\Delta=0.5$ (red line). [**(b)**]{} Fidelity with locally thermal states for $J=2$, ${\overline{n}}_1=1$, ${\overline{n}}_2=2$, and $\Delta=0$. We have taken $\gamma_1/\omega_1=\gamma_2/\omega_1=1$ (red line) and $\gamma_1/\omega_1=1$ with $\gamma_2/\omega_1=3$ (blue line).[]{data-label="fidelityopen"}](fig4b.pdf "fig:"){width="0.7\columnwidth"} In the open-system dynamics case, a similar calculation allows us to evaluate the fidelity with both a globally thermal and locally thermal state as shown in Fig. \[fidelityopen\], which studies the effects of both the energy bias \[panel [**(a)**]{}\] and a difference in the damping rates of the two wells \[panel [**(b)**]{}\]. Quite evidently, both effects spoil the state fidelity, which however achieves values that are either precisely 1 or very close to it. Indeed, focusing on the unbiased case with $\Delta=0$ and $\gamma_1=\gamma_2$, we know that the solution is given in the form of Eq. (\[explicit\]). Moreover, the off-diagonal block matrix $\bf{c}$ turns out to be anti-diagonal with entries equal in modulus but opposite in sign. Therefore, in order to determine if the system has accidentally thermalised, we need only to determine if, at some value of $t$, these entries are identically zero. 
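These open-system results follow from numerically integrating the covariance-matrix equations of motion listed in the Methods. A minimal RK4 sketch (our own transcription; the first moments, which decay independently, are omitted, the variable names are ours, and we write $\omega_2$ in the equation for $\sigma_2^{xp}$, as dictated by the $1\leftrightarrow2$ symmetry of the model):

```python
import numpy as np

# Second-moment equations of motion transcribed from the Methods section.
# State ordering: [s1x, s2x, s1p, s2p, s1xp, s2xp, s12x, s12p, s12xp, s12px]
def deriv(s, J, w1, w2, g1, g2, n1, n2):
    s1x, s2x, s1p, s2p, s1xp, s2xp, s12x, s12p, s12xp, s12px = s
    gm = 0.5 * (g1 + g2)
    return np.array([
        g1 * (1 + 2 * n1) - 2 * J * s12xp - g1 * s1x + 2 * w1 * s1xp,
        g2 * (1 + 2 * n2) - 2 * J * s12px - g2 * s2x + 2 * w2 * s2xp,
        g1 * (1 + 2 * n1) + 2 * J * s12px - g1 * s1p - 2 * w1 * s1xp,
        g2 * (1 + 2 * n2) + 2 * J * s12xp - g2 * s2p - 2 * w2 * s2xp,
        J * (s12x - s12p) - g1 * s1xp - w1 * (s1x - s1p),
        J * (s12x - s12p) - g2 * s2xp - w2 * (s2x - s2p),  # w2 by 1<->2 symmetry
        -J * (s1xp + s2xp) - gm * s12x + w1 * s12px + w2 * s12xp,
         J * (s1xp + s2xp) - gm * s12p - w1 * s12xp - w2 * s12px,
         J * (s1x - s2p) - gm * s12xp + w1 * s12p - w2 * s12x,
         J * (s2x - s1p) - gm * s12px - w1 * s12x + w2 * s12p,
    ])

def integrate(s0, T, dt, **pars):
    """Plain fixed-step RK4 integration of the covariance-matrix entries."""
    s = np.array(s0, dtype=float)
    for _ in range(int(T / dt)):
        k1 = deriv(s, **pars)
        k2 = deriv(s + 0.5 * dt * k1, **pars)
        k3 = deriv(s + 0.5 * dt * k2, **pars)
        k4 = deriv(s + dt * k3, **pars)
        s = s + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s
```

A quick sanity check: for $J=0$ each well decouples and relaxes to its local thermal state, $\sigma_j^x=\sigma_j^p=2{\overline{n}}_j+1$, regardless of the (Gaussian) initial condition.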
After some manipulation, this condition reduces to the transcendental equation $$\label{transcendental} e^{\gamma_1 t}=\cos\left( 2{\cal J}t \right)-\frac{2{\cal J}}{\gamma_1}\sin\left( 2{\cal J}t \right).$$ Interestingly, this ‘accidental’ thermalisation is independent of the temperature of either well and depends only on the tunnelling strength and the damping rate. For the same parameters taken to obtain the red curve in Fig. \[fidelityopen\] [**(b)**]{}, we find Eq.  has two solutions: $t\sim1.03438\omega_1^{-1}$ and $t\sim1.33749\omega_1^{-1}$, clearly corresponding to the two instances of local thermalisation in Fig. \[fidelityopen\] [**(b)**]{}. Furthermore, we find the thermal occupation numbers of the wells at the first instance of thermalisation to be ${\overline{n}}_1=1.597$ and ${\overline{n}}_2=1.403$, and at the second ${\overline{n}}_1=1.422$ and ${\overline{n}}_2=1.578$, suggesting that the two instants of accidental thermalisation correspond to an approximate swap of the two local thermal states. Increasing $J$ leads to more instances of accidental thermalisation occurring before the system equilibrates to its steady state.\ Assessment of the non-classical nature of the state of the system {#quantumness} ================================================================== Values of fidelity so close to unity should not lead to misinterpretation of the actual nature of the state of the two-well system. In fact, any assessment of fidelity should be accompanied by the study of problem-specific figures of merit able to provide a more fine-grained characterisation of the state at hand. For a study of thermalisation, a significant class of such quantifiers is embodied by measures of quantum correlations. In this respect, it is important to assess the role, if any, that various forms of quantum correlations play in the dynamics highlighted above. It is quickly confirmed that, as anticipated, the system never becomes entangled.
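The two roots of the transcendental condition Eq. (\[transcendental\]) quoted above can be reproduced with an elementary bracketing search. A pure-Python sketch (the brackets, and the parameters ${\cal J}=2$ and $\gamma_1=\omega_1=1$ matching the red curve of Fig. \[fidelityopen\] [**(b)**]{}, are our reading of the text):

```python
import math

def f(t, J=2.0, gamma=1.0):
    # Difference between the two sides of the transcendental condition:
    # exp(gamma t) - cos(2Jt) + (2J/gamma) sin(2Jt); a root marks thermalisation.
    return math.exp(gamma * t) - math.cos(2 * J * t) + (2 * J / gamma) * math.sin(2 * J * t)

def bisect(fun, a, b, tol=1e-10):
    """Bracketing bisection: fun(a) and fun(b) must differ in sign."""
    fa = fun(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = fun(m)
        if (fa < 0) == (fm < 0):
            a, fa = m, fm
        else:
            b = m
    return 0.5 * (a + b)

# Brackets chosen by inspecting the sign of f on a coarse grid.
roots = [bisect(f, 1.00, 1.05), bisect(f, 1.30, 1.40)]
```

Both roots agree with the values $t\sim1.03438\,\omega_1^{-1}$ and $t\sim1.33749\,\omega_1^{-1}$ quoted in the text.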
While this is expected in light of the nature of the interaction and of the initial state being considered, nothing prevents the establishment of weaker forms of quantum correlations, such as quantum discord (QD) [@discordreview]. QD is the difference between two classically equivalent definitions of mutual information when applied to a quantum system [@zurek; @vedral]. A non-zero degree of QD implies that, in a bipartite system composed of parties A and B, information can be gathered on party A by interrogating party B. For Gaussian states, QD is captured by the Gaussian quantum discord [@adesso; @olivares; @paris], which entails that the interrogation of B involves only Gaussian measurements. For a generic covariance matrix $S=\left( \begin{array}{cc} A & C \\ C^\top & B \end{array} \right)$, QD is then defined, following Ref. [@adesso] (with conventions consistent with the definition of the vacuum state used throughout), as $$\mathcal{D}_G = h\left(\sqrt{I_1}\right) - h\left(d_-\right) - h\left(d_+\right) + h\left(\sqrt{E^{\text{min}}}\right), \label{gaussdisc}$$ with $$E^{\text{min}}=\begin{cases} \dfrac{1}{(I_1-1)^2}\left[{2I_3^2+(I_1-1)(I_4-I_2)+2\vert I_3\vert \sqrt{I_3^2+(I_1-1)(I_4-I_2)}}\right]\quad\text{for}~(I_4-I_1I_2)^2\leq I_3^2(I_2+I_4)(I_1+1), \\~~\\ \dfrac{1}{2I_1}\left[{I_1I_2-I_3^2+I_4-\sqrt{I_3^4+(I_4-I_1I_2)^2-2I_3^2(I_1I_2+I_4)}}\right]\quad\text{otherwise}, \end{cases}$$ where $$\begin{aligned} &h(x)=\left(\frac{x+1}{2}\right)\log\left(\frac{x+1}{2}\right)-\left(\frac{x-1}{2}\right)\log\left(\frac{x-1}{2}\right),\\ &d_\pm^2=\frac{1}{2}\left(\Lambda \pm \sqrt{\Lambda^2-4 I_4}\right),\\ &\Lambda=I_1+I_2+2I_3, \end{aligned}$$ and $I_1=\det A$, $I_2=\det B$, $I_3=\det C$, and $I_4=\det S$. In Fig. \[fig5\] [**(a)**]{} we study QD against the energy bias for the case of the unitary solution Eq. . Intuitively, we would expect that for $\Delta=0$, owing to the full symmetry enforced in the system, QD will be maximised. This is indeed the case, as can be seen in Fig.
\[fig5\] [**(a)**]{}. However, an interesting feature appears as we increase the bias. At $\Delta/J\sim2.5$, ${\overline{n}}_1=1$, and ${\overline{n}}_2=5$, QD exhibits a plateau, which implies the existence of an ‘optimal’ value of the bias, dependent on the temperature difference, that helps amplify the non-classicality of the system. A further increase of $\Delta$ pushes the wells too far off resonance, and the coherence decays. In Fig. \[fig5\] [**(b)**]{} we examine this phenomenon more closely. For a fixed temperature difference and a small bias, $\Delta/J=1$ (red curve), the oscillatory behaviour changes and the first zero of QD is lifted. At the optimal value of $\Delta$ (solid black) the plateau is clearly evident. When we increase the bias further, we see the decay of the non-classicality, as well as a change in the periodicity of the system. [**(a)**]{}\ ![[**(a)**]{} Discord versus bias and evolution time for the case of closed-system dynamics. We have taken ${\overline{n}}_1=1$, ${\overline{n}}_2=5$, $J=2$. [**(b)**]{} Behaviour of quantum discord in the open-system scenario for ${\overline{n}}_1=1$, ${\overline{n}}_2=5$, $J=2$ and the bias choices $\Delta=1$ (red curve), 5 (black curve), and 10 (blue curve).[]{data-label="fig5"}](fig5a.jpg "fig:"){width="0.7\columnwidth"}\ [**(b)**]{}\ ![](fig5b.pdf "fig:"){width="0.7\columnwidth"} [**(a)**]{}\ ![[**(a)**]{} Dynamical discord for the unitary (gray) and dissipative (red) cases. For both, $J=2$, ${\overline{n}}_1=1$, and ${\overline{n}}_2=$ 2 (dotted), 5 (dashed), and 10 (solid). For all the dissipative cases $\gamma_1/\omega_1=\gamma_2/\omega_1=1$. [**(b)**]{} As for panel [**(a)**]{} but with the optimal bias (for ${\overline{n}}_1=1$, ${\overline{n}}_2=5$) between the wells, $\Delta=5$. [**(c)**]{} Dissipative dynamical discord for $J=2$, $\Delta=4$, $\gamma_1/\omega_1=\gamma_2/\omega_1=1$, ${\overline{n}}_1=1$, and ${\overline{n}}_2=2$, with self-interaction term $U=0$, $U=1$ and $U=3$ going from top to bottom.[]{data-label="fig6"}](fig6a.pdf "fig:"){width="0.6\columnwidth"}\ [**(b)**]{}\ ![](fig6b.pdf "fig:"){width="0.6\columnwidth"}\ [**(c)**]{}\ ![](fig6c.pdf "fig:"){width="0.6\columnwidth"} Turning our attention to the dissipative case, in Fig.
\[fig6\] we compare the unitary dynamics with the dissipative case for unbiased wells \[panel [**(a)**]{}\] and biased ones \[panel [**(b)**]{}\] at various differences in temperature. For unbiased wells, we see that dissipation quickly suppresses the oscillations and the system reaches a steady state with non-vanishing QD. As we increase the temperature difference between the wells, QD increases in the unitary case, and the steady-state QD is larger for larger temperature differences. When we bias the wells, taking $\Delta/J=2.5$ for all temperature differences, the dissipative dynamics clearly show the enhanced non-classicality. While the time to reach equilibrium appears unaffected, the steady state is significantly more non-classical than in the unbiased situation. This may imply that, in this situation, the non-classicality plays no role in reaching equilibrium. In Fig. \[fig6\] [**(c)**]{} we examine the effect that self-interaction has on the dynamics of nonclassicality. In order to do so, we compute the Gaussian discord of the hypothetical Gaussian state having, as covariance matrix, the one obtained by calculating the entries $\sigma_{ij}$ over the non-Gaussian state resulting from chosen non-zero values of $U$. Evidently, the larger the self-interaction, the more self-ordered each well becomes, diminishing the effect of the tunnelling and reducing the amount of nonclassicality present. We also see that the system tends to equilibrate faster. The nonclassicality of the steady state depends delicately on the temperature difference, as well as on the tunnelling rate and the bias. In Fig. \[fig7\] we examine this behaviour more closely, fixing ${\overline{n}}_1$ and $\gamma_1/\omega_1=\gamma_2/\omega_1=1$ with $J=2$ and $\Delta=5$. The only condition for which the system does not exhibit nonclassical correlations is the trivial one ${\overline{n}}_2={\overline{n}}_1$.
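The discord figures rely on evaluating Eq. (\[gaussdisc\]); a minimal implementation follows (names ours; natural logarithms assumed, and tiny negative round-off under the square roots is clipped):

```python
import numpy as np

def h(x):
    # Entropy-like function of the definition above; h(1) = 0.
    if x <= 1.0:
        return 0.0
    return (x + 1) / 2 * np.log((x + 1) / 2) - (x - 1) / 2 * np.log((x - 1) / 2)

def gaussian_discord(S):
    """Gaussian quantum discord of a two-mode covariance matrix
    S = [[A, C], [C^T, B]] in the vacuum-equals-identity convention."""
    A, B, C = S[:2, :2], S[2:, 2:], S[:2, 2:]
    I1, I2, I3, I4 = map(np.linalg.det, (A, B, C, S))
    Lam = I1 + I2 + 2 * I3
    disc = np.sqrt(max(Lam**2 - 4 * I4, 0.0))  # clip round-off
    dp = np.sqrt((Lam + disc) / 2)
    dm = np.sqrt(max((Lam - disc) / 2, 0.0))
    if (I4 - I1 * I2) ** 2 <= I3**2 * (I2 + I4) * (I1 + 1):
        rad = np.sqrt(max(I3**2 + (I1 - 1) * (I4 - I2), 0.0))
        Emin = (2 * I3**2 + (I1 - 1) * (I4 - I2) + 2 * abs(I3) * rad) / (I1 - 1) ** 2
    else:
        rad = np.sqrt(max(I3**4 + (I4 - I1 * I2) ** 2 - 2 * I3**2 * (I1 * I2 + I4), 0.0))
        Emin = (I1 * I2 - I3**2 + I4 - rad) / (2 * I1)
    return h(np.sqrt(I1)) - h(dm) - h(dp) + h(np.sqrt(Emin))
```

As checks, $\mathcal{D}_G$ vanishes on a product of local thermal states, while for a pure two-mode squeezed vacuum (local variance $\cosh 2r$) the formula reduces to $h(\cosh 2r)$.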
As we increase the temperature imbalance, we see that QD increases to a maximum value before slowly decaying \[cf. Figs. \[fig7\] [**(a)**]{} and [**(e)**]{}\]. If we fix the temperature difference such that ${\overline{n}}_1=1$ and ${\overline{n}}_2=5$, we see in panels [**(b)**]{} and [**(c)**]{} that there are optimal values of the remaining parameters that give the largest value of discord. While the reservoirs have so far been kept at moderately low energies, in panel [**(f)**]{} we significantly increase both ${\overline{n}}_1$ and ${\overline{n}}_2$ and see that large values of QD can still be achieved. An unbiased configuration leads to values of discord of the order of $10^{-5}$; increasing the bias raises such values by up to one order of magnitude.\ [**(a)**]{}\ ![Steady-state discord between the two wells for $\gamma_1/\omega_1=\gamma_2/\omega_1=1$ and ${\overline{n}}_1=1$. [**(a)**]{} Plotted against ${\overline{n}}_2$ for $J=2$, $\Delta=5$. [**(b)**]{} Against $\Delta$ for $J=2$ and ${\overline{n}}_2=5$. [**(c)**]{} Against $J$ for ${\overline{n}}_2=5$ and $\Delta=5$. [**(d)**]{} Maximum discord attainable for a given value of ${\overline{n}}_2$, found by optimising with respect to both $\Delta$ and $J$ when ${\overline{n}}_1=1$ and $\gamma_1/\omega_1=\gamma_2/\omega_1=1$. [**(e)**]{} Steady-state discord studied against both ${\overline{n}}_1$ and ${\overline{n}}_2$ when $J=2$, $\Delta=5$, and $\gamma_1/\omega_1=\gamma_2/\omega_1=1$. The black line shows that $\mathcal{D}_G$ is identically null only when ${\overline{n}}_1={\overline{n}}_2$. [**(f)**]{} Steady-state discord against ${\overline{n}}_2$ for ${\overline{n}}_1=100$ with $J=2$, $\gamma_1/\omega_1=\gamma_2/\omega_1=1$.[]{data-label="fig7"}](fig7a.pdf "fig:"){width="0.45\columnwidth"} ![](fig7b.pdf "fig:"){width="0.45\columnwidth"}\ [**(c)**]{}\ ![](fig7c.pdf "fig:"){width="0.45\columnwidth"} ![](fig7d.pdf "fig:"){width="0.45\columnwidth"}\ [**(e)**]{}\ ![](fig7e.jpg "fig:"){width="0.45\columnwidth"} ![](fig7f.pdf "fig:"){width="0.45\columnwidth"} Dynamics of the energy flux between the wells ============================================= It is important to gather insight into the details of the exchange of energy between the wells, which underlies the process of quasi-thermalisation highlighted so far and takes place in two forms: an exchange of particles between the wells, and a similar process occurring at the interface between the double-well system and the reservoirs. The aim of this section is to identify the contribution coming from both such fluxes. We are thus interested in quantifying the flux into/from well $j=1,2$, which we label as $\dot{\mathcal{Q}}_j$, and the total flux $\dot{\mathcal{Q}}_{tot}$.
These are given by the quantities $$\label{fluxes} \dot{\mathcal{Q}}_{tot} = \text{Tr}[\hat{\cal H} \partial_t \varrho],\qquad\dot{\mathcal{Q}}_j = \text{Tr}[\hat{\cal H}_j \partial_t \varrho_j], ~~(j=1,2),$$ where $\hat{\cal H}_j=\omega_j(\hat{a}^\dag_j\hat a_j+1/2)$ is the free Hamiltonian of a single well and $\varrho_j$ is the reduced density matrix of well $j$. Conveniently, these quantities can be evaluated directly from the covariance matrix (and we will assume both wells to have the same damping rate, i.e. $\gamma_1/\omega_1=\gamma_2/\omega_1=\gamma$). We find $$\begin{aligned} &\dot{\mathcal{Q}}_1 = e^{-\gamma t} \left[ \frac{2J^2(1+\Delta)({\overline{n}}_2-{\overline{n}}_1)}{\sqrt{4 J^2+\Delta^2}} \sin\left(\sqrt{4J^2+\Delta^2} t\right) \right],\\ &\dot{\mathcal{Q}}_2 =-(1+\Delta)\dot{\mathcal{Q}}_1,\qquad\dot{\mathcal{Q}}_{tot} = 0. \end{aligned}$$ From here it is easy to confirm that ${\dot{\mathcal{Q}}_1}/{\omega_1} = - {\dot{\mathcal{Q}}_2}/{\omega_2}$, and clearly taking $\gamma=0$ or $\Delta=0$ recovers the unitary and unbiased limits, respectively. However, this behaviour holds only in the special case of wells initially thermalised with their baths; for a different initial state it no longer does. Indeed, what is special about our initial state is that it conserves the total energy of the system. This is readily seen given that $\dot{\mathcal{Q}}_{tot}=0$, and it is easy to confirm that $$\mathcal{Q}_{tot}=1+\frac{\Delta}{2}+{\overline{n}}_1+(1+\Delta){\overline{n}}_2$$ for all $t$ and $J$. Of course, the energy of the individual wells changes dynamically (until settling into the same steady state). We can gain further insight into the reason for this by examining more closely the quantity we are calculating, i.e.
$$\begin{aligned} \dot{\mathcal{Q}}_{tot}&=\text{Tr}[\hat{\mathcal{H}} \partial_t\varrho]\\ &=\text{Tr}\left[\hat{\mathcal{H}} \left(-i[\hat{\mathcal{H}},\varrho] + \sum_{i=1}^2 \mathcal{L}_i(\varrho) \right) \right]\\ &=-i\text{Tr}\left[ \hat{\mathcal{H}}\left(\hat{\mathcal{H}}\varrho-\varrho\hat{\mathcal{H}}\right) \right] + \text{Tr}\left[ \hat{\mathcal{H}} \sum_{i=1}^2 \mathcal{L}_i(\varrho) \right]\\ &=\text{Tr}\left[ \hat{\mathcal{H}} \sum_{i=1}^2 \mathcal{L}_i(\varrho) \right]. \end{aligned}$$ The tunnelling term in Eq.  commutes with $\mathcal{L}_i$, and when $U=0$ the only contribution to the total flux comes from the free evolution of each well. Therefore we are interested in calculating $$\label{expression} \dot{\mathcal{Q}}_{tot}=\text{Tr}\left[ \left( \hat{\cal H}_1+\hat{\cal H}_2 \right) \mathcal{L}_1(\varrho) \right]+\text{Tr}\left[ \left( \hat{\cal H}_1+\hat{\cal H}_2 \right) \mathcal{L}_2(\varrho) \right].$$ In a tedious but otherwise straightforward calculation, we can explicitly evaluate this expression assuming the special initial condition $\varrho=\frac{e^{-\beta_1 \hat{\cal H}_1}}{\mathcal{Z}_1}\otimes\frac{e^{-\beta_2 \hat{\cal H}_2}}{\mathcal{Z}_2}$, with ${\cal Z}_j=\text{Tr}[e^{-\beta_j \hat{\cal H}_j}]$. We find that both terms entering Eq. (\[expression\]) are identically zero, thus showing that, in the $U=0$ case, the net heat flux is null due to two special circumstances: on the one hand, our chosen initial state; on the other, the fact that the tunnelling term commutes with the superoperators. [**(a)**]{}\ ![Dynamics of the energy fluxes between the wells. In all panels $J=2,~\gamma=1,~{\overline{n}}_1=1,~{\overline{n}}_2=2,~\Delta=4.$ Red line: hotter well (well 2); gray line: cooler well (well 1); blue line: total flux. With [**(a)**]{} $U=0$, [**(b)**]{} $U=1$, [**(c)**]{} $U=3$.[]{data-label="fig8"}](fig8a.pdf "fig:"){width="0.6\columnwidth"}\ [**(b)**]{}\ ![](fig8b.pdf "fig:"){width="0.6\columnwidth"}\ [**(c)**]{}\ ![](fig8c.pdf "fig:"){width="0.6\columnwidth"} In Fig. \[fig8\] [**(a)**]{} we show the dynamics of the various energy fluxes given in Eq. . We notice how the flux into the cooler well is proportional to the flux out of the hotter well, which results in a null net flux. Needless to say, the single-well fluxes only account for the net intake/outtake of particles for one of the wells and do not provide information on the actual balance between the contribution due to the coupling to the reservoir and that due to the coherent inter-well interaction. We can study the intermediate dynamical regime, where the self-interaction is non-zero and comparable with the tunnelling, by numerically solving Eq.  and examining the behaviour of the heat fluxes, of which we illustrate some examples in Fig. \[fig8\] [**(b)**]{} and [**(c)**]{} \[we refer to the caption for an account of the parameters used in the simulations\]. The total flux is now non-zero, and the energy is not conserved. However, the total average occupation number is conserved, i.e. $\partial_t{\langle \hat{a}_1^{\dagger}\hat{a}_1 \rangle} = -\partial_t{\langle \hat{a}_2^{\dagger}\hat{a}_2 \rangle}$, which follows directly from the previous arguments.\ Discussion ========== The analysis above shows that neither global nor local thermalisation with the reservoirs is achieved.
The fidelity between the density matrices of the time-evolved state and the target thermal one (whether global or local) quantifies the closeness of the populations of the energy levels of the former to the statistics of the latter. However, the interaction between the wells establishes strong quantum coherence between the particles of the system, which in turn results in the generation of a substantive degree of quantum correlations, albeit of a form weaker than entanglement, and this prevents the resulting state from acquiring a thermal character. The analysis reported here also has the merit of providing rather deep insight into the phenomenology of quantum correlations between the wells. We have qualitatively and quantitatively examined the dynamics and steady state of a BEC loaded into a double-well potential. While the wells remain separable at all times, thus sharing no entanglement, by exploring the behaviour of the quantum discord we find the system to be always non-classical, except under trivial, uninteresting conditions. Furthermore, the degree of nonclassicality of the system depends on the energy bias between the two wells. For identical wells, a significant amount of QD is possible, provided that a large temperature imbalance is established. Such nonclassicality can be greatly enhanced by taking a suitable value of the tunnelling rate, chosen as a function of the given bias. The transfer of heat in the system is equally complex.\ Methods ======= [**Differential Equations.**]{} Here we provide the complete set of differential equations that describe the dissipative dynamics considered throughout.
$$\label{diffs} \begin{matrix} \partial_t y_1 = -J z_2 -\frac{\gamma_1}{2} y_1 + \omega_1 z_1,\quad\partial_t y_2 = -J z_1 -\frac{\gamma_2}{2} y_2 + \omega_2 z_2,&\partial_t z_1 = J y_2 - \frac{\gamma_1}{2} z_1 - \omega_1 y_1,\quad\partial_t z_2 = J y_1 - \frac{\gamma_2}{2} z_2 - \omega_2 y_2,\\ \partial_t \sigma_1^x = \gamma_1 +2{\overline{n}}_1 \gamma_1 - 2 J \sigma_{12}^{xp} -\gamma_1 \sigma_1^x + 2\omega_1 \sigma_1^{xp}, &\partial_t \sigma_2^x = \gamma_2 +2{\overline{n}}_2 \gamma_2 - 2 J \sigma_{12}^{px} -\gamma_2 \sigma_2^x + 2\omega_2 \sigma_2^{xp}, \\ \partial_t \sigma_1^p= \gamma_1 +2{\overline{n}}_1 \gamma_1 + 2 J \sigma_{12}^{px} -\gamma_1 \sigma_1^p - 2\omega_1 \sigma_1^{xp}, &\partial_t \sigma_2^p= \gamma_2 +2{\overline{n}}_2 \gamma_2 + 2 J \sigma_{12}^{xp} -\gamma_2 \sigma_2^p - 2\omega_2 \sigma_2^{xp}, \\ \partial_t\sigma_1^{xp}=J\left( \sigma_{12}^x-\sigma_{12}^p \right) - \gamma_1 \sigma_1^{xp} - \omega_1 \left( \sigma_1^x - \sigma_1^p \right), &\partial_t\sigma_2^{xp}=J\left( \sigma_{12}^x-\sigma_{12}^p \right) - \gamma_2 \sigma_2^{xp} - \omega_2 \left( \sigma_2^x - \sigma_2^p \right), \\ \partial_t\sigma_{12}^x=-J\left( \sigma_1^{xp} + \sigma_2^{xp} \right) - \frac{\gamma_1+\gamma_2}{2} \sigma_{12}^x + \omega_1 \sigma_{12}^{px} + \omega_2\sigma_{12}^{xp}, &\partial_t\sigma_{12}^p=J\left( \sigma_1^{xp} + \sigma_2^{xp} \right) - \frac{\gamma_1+\gamma_2}{2} \sigma_{12}^p - \omega_1 \sigma_{12}^{xp} - \omega_2\sigma_{12}^{px}, \\ \partial_t\sigma_{12}^{xp}=J\left( \sigma_1^{x} - \sigma_2^{p} \right) - \frac{\gamma_1+\gamma_2}{2} \sigma_{12}^{xp} + \omega_1\sigma_{12}^p - \omega_2 \sigma_{12}^x, &\partial_t\sigma_{12}^{px}=J\left( \sigma_2^{x} - \sigma_1^{p} \right) - \frac{\gamma_1+\gamma_2}{2} \sigma_{12}^{px} - \omega_1\sigma_{12}^x + \omega_2 \sigma_{12}^p.
\end{matrix}$$ [**Discussion of the self-interaction dominated limit.**]{} While the main analysis treated the tunnelling-dominated regime, the opposite extreme is obtained by setting $J=0$ and exploring the situation where self-interaction dominates. In this instance, the two wells are completely decoupled from one another. We can directly solve Eq.  by projecting onto the number states ${\left\vert n\right\rangle}$. Since these states are eigenstates of the Hamiltonian for $J=0$, the steady state will be entirely independent of $U$. In fact, regardless of the initial state, we find the steady state for each well to be $\rho_j=\frac{1}{\mathcal{Z}_j}e^{-\beta_j \omega_j \hat{n}_j}$ with $e^{-\beta_j \omega_j}=\frac{{\overline{n}}_j}{{\overline{n}}_j+1}$, which is the Boltzmann distribution for a harmonic oscillator with thermal occupation ${\overline{n}}_j$. Clearly then, if our initial states are already thermalised with their local reservoirs, we see no dynamics. For any other initial state, the two wells thermalise independently to their respective reservoir temperatures.\ SC is grateful to Simon Pigeon, Lorenzo Fusco, and Alessandro Ferraro for helpful and enlightening discussions. The authors acknowledge support from the UK EPSRC (EP/L005026/1, EP/M003019/1), the John Templeton Foundation (grant ID 43467), and the EU Collaborative Project TherMiQ (Grant Agreement 618074). [99]{} Lewenstein, M., Sanpera, A. & Ahufinger, V. [*Ultra-cold Atoms in Optical Lattices: Simulating quantum many-body systems*]{} (Oxford University Press, Oxford, U.K., 2012). See for example: Bloch, I., Dalibard, J. & Zwerger, W. [Many-body physics with ultracold gases]{}, [[*Rev. Mod. Phys.*]{} **80**, 885 (2008)](http://dx.doi.org/10.1103/RevModPhys.80.885). Brennen, G., Giacobino, E. & Simon, Ch. [Focus on Quantum Memories]{}, [[*New J. Phys.*]{} [**17**]{}, 050201 (2015)](http://iopscience.iop.org/1367-2630/17/5/050201/article). André, A., Sørensen, A. S. & Lukin, M. D.
[Stability of atomic clocks based on entangled atoms]{} [[*Phys. Rev. Lett.*]{} [**92**]{}, 230801 (2004)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.92.230801). Georgescu, I. M., Ashhab, S. & Nori, F. [Quantum simulation]{}, [[*Rev. Mod. Phys.*]{} [**86**]{}, 47 (2014)](http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.86.47). Mehboudi, M., Moreno-Cardoner, M., De Chiara, G. & Sanpera, A. [Thermometry Precision in Strongly Correlated Ultracold Lattice Gases]{}, [[*New J. Phys.*]{} [**17**]{}, 055020 (2015)](http://iopscience.iop.org/1367-2630/17/5/055020); McDonald, M., McGuyer, B. H., Iwata, G. Z. & Zelevinsky, T. [Thermometry via Light Shifts in Optical Lattices]{} [[*Phys. Rev. Lett.*]{} [**114**]{}, 023001 (2015)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.023001); Hangleiter, D., Mitchison, M. T., Johnson, T. H., Bruderer, M., Plenio, M. B., & Jaksch, D. [Nondestructive selective probing of phononic excitations in a cold Bose gas using impurities]{}, [[*Phys. Rev. A*]{} [**91**]{}, 013611 (2015)](http://dx.doi.org/10.1103/PhysRevA.91.013611). Stadler, D., Krinner, S., Meineke, J., Brantut, J.-P. & Esslinger, T. [Observing the drop of resistance in the flow of a superfluid Fermi gas]{} [[*Nature*]{} [**491**]{}, 736–739 (2012)](https://dx.doi.org/10.1038/nature11613); Brantut, J.-P., Grenier, C., Meineke, J., Stadler, D., Krinner, S., Kollath, C., Esslinger, T. & Georges, A. [A thermoelectric heat engine with ultracold atoms]{} [[*Science*]{} [**342**]{}, 713 (2013)](http://www.sciencemag.org/content/342/6159/713.abstract). Sinatra, A. & Castin, Y. Phase dynamics of Bose-Einstein condensates: Losses versus revivals, [[*Eur. Phys. J. D*]{} [**4**]{}, 247-260 (1998)](http://dx.doi.org/10.1007/s100530050206); Sinatra, A., Castin, Y. & Li, Y. Particle number fluctuations in a cloven trapped Bose gas at finite temperature, [[*Phys. Rev.
A*]{} [**81**]{}, 053623 (2010)](http://dx.doi.org/10.1103/PhysRevA.81.053623); Juliá-Díaz, B., Gottlieb, A. D., Martorell, J. & Polls, A. Quantum and thermal fluctuations in bosonic Josephson junctions, [[*Phys. Rev. A*]{} [**88**]{}, 033601 (2013)](http://dx.doi.org/10.1103/PhysRevA.88.033601). Smerzi, A., Fantoni, S., Giovanazzi, S. & Shenoy, S. R. [Quantum Coherent Atomic Tunneling between Two Trapped Bose-Einstein Condensates]{} [[*Phys. Rev. Lett.*]{} [**79**]{}, 4950 (1997)](http://dx.doi.org/10.1103/PhysRevLett.79.4950). Albiez, M., Gati, R., Fölling, J., Hunsmann, S., Cristiani, M. & Oberthaler, M.K. [Direct observation of tunneling and nonlinear self-trapping in a single bosonic Josephson junction]{} [[*Phys. Rev. Lett.*]{} [**95**]{}, 010402 (2005)](http://dx.doi.org/10.1103/PhysRevLett.95.010402). Makhlin, Y., Schön, G. & Shnirman A. [Quantum-state engineering with Josephson-junction devices]{}, [[*Rev. Mod. Phys.*]{} [**73**]{}, 357 (2001)](http://dx.doi.org/10.1103/RevModPhys.73.357). Porras, D. & Cirac, J. I. [Bose-Einstein Condensation and Strong-Correlation Behavior of Phonons in Ion Traps]{}, [[*Phys. Rev. Lett.*]{} [**93**]{}, 263602 (2004)](http://dx.doi.org/10.1103/PhysRevLett.93.263602). Larson, J. [Anomalous decoherence and absence of thermalization in a photonic many-body system]{}, [[*Phys. Rev. A*]{} [**83**]{}, 052103 (2011)](http://dx.doi.org/10.1103/PhysRevA.83.052103). Aspelmeyer, M., Kippenberg, T. J. & Marquardt, F. [Cavity Optomechanics]{}, [[*Rev. Mod. Phys.*]{} [**86**]{}, 1391 (2014)](http://dx.doi.org/10.1103/RevModPhys.86.1391). Breuer, H. P. & Petruccione, F. [*The Theory of Open Quantum Systems*]{} (Oxford University Press, Oxford, 2002). Haikka, P., McEndoo, S., De Chiara, G., Palma, G. M., & Maniscalco, S. [*Quantifying, characterizing and controlling information flow in ultracold atomic gases*]{}, [[*Phys. Rev. A*]{} [**84**]{}, 031602 (2011)](http://dx.doi.org/10.1103/PhysRevA.84.031602). Carmichael, H. J. 
[*Statistical Methods in Quantum Optics 1*]{} (Springer-Verlag Berlin Heidelberg, 1999). Walls, D. F. & Milburn, G. J. [*Quantum optics*]{} (Springer-Verlag Berlin Heidelberg, 2008). Ferraro, A., Olivares, S. & Paris, M. G. A. [[*Lecture Notes*]{} (Bibliopolis, Naples, 2005)](http://arxiv.org/abs/quant-ph/0503237). Asbóth, J. K., Calsamiglia, J. & Ritsch, H. [Computable measure of nonclassicality for light]{}, [[*Phys. Rev. Lett.*]{} [**94**]{}, 173602 (2005)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.94.173602). Ollivier, H. & Zurek, W. H. [Quantum discord: a measure of the quantumness of correlations]{}, [[*Phys. Rev. Lett.*]{} [**88**]{}, 017901 (2001)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.88.017901). Henderson, L. & Vedral, V. [Classical, quantum and total correlations]{}, [[*J. Phys. A*]{} [**34**]{}, 6899 (2001)](http://iopscience.iop.org/0305-4470/34/35/315). Modi, K., Brodutch, A., Cable, H., Paterek, T. & Vedral, V. [The classical-quantum boundary for correlations: Discord and related measures]{}, [[*Rev. Mod. Phys.*]{} [**84**]{}, 1655 (2012)](http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.84.1655). Trotzky, S., Chen, Y.-A., Flesch, A., McCulloch, I. P., Schollwöck, U., Eisert, J. & Bloch, I. [Probing the relaxation towards equilibrium in an isolated strongly correlated one-dimensional Bose gas]{}, [[*Nature Phys.*]{} [**8**]{}, 325 (2012)](http://www.nature.com/nphys/journal/v8/n4/abs/nphys2232.html); Gogolin, C., Müller, M. P. & Eisert, J. [Absence of thermalization in nonintegrable systems]{}, [[*Phys. Rev. Lett.*]{} [**106**]{}, 040401 (2011)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.106.040401); Polkovnikov, A., Sengupta, K., Silva, A. & Vengalattore, M. [Nonequilibrium dynamics of closed interacting quantum systems]{}, [[*Rev. Mod. 
Phys.*]{} [**83**]{}, 863 (2011)](http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.863); Polkovnikov, A., [Microscopic diagonal entropy and its connection to basic thermodynamic relations]{} [[*Annals Phys.*]{} [**326**]{}, 486 (2011)](http://arxiv.org/abs/0806.2862); Masanes, L., Roncaglia, A. J. & Acin, A. [Complexity of energy eigenstates as a mechanism for equilibration]{}, [[*Phys. Rev. E*]{} [**87**]{}, 032137 (2013)](http://journals.aps.org/pre/abstract/10.1103/PhysRevE.87.032137). Uhlmann, A. [The transition probability in the state space of a A$^*$-algebra]{}, [[*Rep. Math. Phys.*]{} [**9**]{}, 273 (1976)](http://www.physik.uni-leipzig.de/~uhlmann/PDF/Uh76a.pdf). Marian, P. & Marian, T. A. [Uhlmann fidelity between two-mode Gaussian states]{}, [[*Phys. Rev. A*]{} [**86**]{}, 022340 (2012)](http://journals.aps.org/pra/abstract/10.1103/PhysRevA.86.022340). Adesso, G. & Datta, A. [Quantum versus classical correlations in Gaussian states]{}, [[*Phys. Rev. Lett.*]{} [**105**]{}, 030501 (2010)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.105.030501). Olivares, S. [Quantum optics in the phase space]{}, [[*Eur. Phys. J. Special Topics*]{} [**203**]{}, 3-24 (2012)](http://link.springer.com/article/10.1140/epjst/e2012-01532-4). Giorda, P. & Paris, M. G. A. [Gaussian quantum discord]{}, [[*Phys. Rev. Lett.*]{} [**105**]{}, 020503 (2010)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.105.020503).
--- abstract: | For a finite $p$-group $G$ and a bounded below $G$-spectrum $X$ of finite type mod $p$, the $G$-equivariant Segal conjecture for $X$ asserts that the canonical map $X^G \to X^{hG}$, from $G$-fixed points to $G$-homotopy fixed points, is a $p$-adic equivalence. Let $C_{p^n}$ be the cyclic group of order $p^n$. We show that if the $C_p$-equivariant Segal conjecture holds for a $C_{p^n}$-spectrum $X$, as well as for each of its geometric fixed point spectra $\Phi^{C_{p^e}}(X)$ for $0 < e < n$, then the $C_{p^n}$-equivariant Segal conjecture holds for $X$. Similar results also hold for weaker forms of the Segal conjecture, asking only that the canonical map induces an equivalence in sufficiently high degrees, on homotopy groups with suitable finite coefficients. address: - 'Department of Mathematical Sciences, Aarhus University, [Å]{}rhus, Denmark' - 'Department of Mathematics, Wayne State University, Detroit, USA' - 'Department of Mathematics, University of Oslo, Norway' - 'Department of Mathematics, University of Oslo, Norway' author: - 'Marcel B[ö]{}kstedt' - 'Robert R. Bruner' - | \ Sverre Lun[ø]{}e–Nielsen - John Rognes date: October 27th 2010 title: On cyclic fixed points of spectra --- Introduction {#sec-1} ============ Let $p$ be any prime number. Graeme Segal’s Burnside ring conjecture [@Ad82] for a finite $p$-group $G$ asserts that when $X = S_G$ is the genuinely $G$-equivariant sphere spectrum, then the canonical map $X^G \to X^{hG} = F(EG_+, X)^G$ is a $p$-adic equivalence. For cyclic groups $C = C_p$ of prime order the conjecture was proved by Lin [@LDMA80] and Gunawardena [@Gu80], [@AGM85]. Thereafter Ravenel [@Ra81], [@Ra84] gave an inductive proof of the Segal conjecture for finite cyclic $p$-groups $G = C_{p^n}$ of order $p^n$, starting from Lin and Gunawardena’s theorems. 
Ravenel’s result was superseded by Carlsson’s proof [@Ca84] of the Segal conjecture for all finite $p$-groups, but as we shall show here, Ravenel’s methods are also of interest in a more general context, where $X$ is a quite general $G$-spectrum. As was elucidated by Miller and Wilkerson [@MW83], Ravenel’s methods give two proofs of the Segal conjecture for cyclic groups—one computational using the modified Adams spectral sequence, and one non-computational, using explicit geometric constructions. The object of this paper is to generalize the non-computational, geometric proof of the Segal conjecture to deduce when $X^G \to X^{hG}$ is “close to” a $p$-adic equivalence for $G = C_{p^n}$, assuming that $X^C \to X^{hC}$ and similar maps are “close to” such an equivalence for $C = C_p$. Our main technical results are Theorem \[thm-1.6\] and Corollary \[cor-1.7\]. Their statements involve $(W, k)$-coconnected maps and geometric fixed points, which are discussed in Definitions \[def:coconn\] and \[dfn-1.3\], respectively. In the special cases $X = B^{\wedge p^n}$ or $X = THH(B)$, where $B^{\wedge p^n}$ is a specific $C_{p^n}$-equivariant model for the $p^n$-th smash power of a spectrum $B$, and $THH(B)$ is the topological Hochschild homology of a symmetric ring spectrum $B$, the geometric fixed points are well understood, as explained in Theorems \[thm-1.9\] and \[thm-1.10\]. In the special cases $W = S^{-1}/p^\infty$ or $W = F(V, S)$, where $V$ is a finite $p$-torsion spectrum, the $(W, k)$-coconnected maps are well understood in terms of $p$-completion or homotopy with $V$-coefficients, as explained in Examples \[ex-1.11\] and \[ex-1.12\]. In the doubly special case when $X = THH(B)$ and $W = S^{-1}/p^\infty$, our results recover the main theorem of Tsalidis [@Ts98]. Statement of results ==================== We first formalize the notion of being “close to” a $p$-adic equivalence. 
\[def:coconn\] Let $S^{-1}\!/p^\infty$ be the Moore spectrum with homology ${\mathbb{Z}}/p^\infty$ concentrated in degree $-1$, so that the function spectrum $F(S^{-1}\!/p^\infty, Y) = Y\sphat_p$ is the $p$-adic completion of an arbitrary spectrum $Y$. Let $W$ be an object in the localizing ideal [@HPS97]\*[Def. 1.4.3(d)]{} of spectra generated by $S^{-1}\!/p^\infty$, i.e., the smallest thick subcategory of spectra that contains $S^{-1}\!/p^\infty$ and is closed under arbitrary wedge sums, as well as under smash products with arbitrary spectra. This assumption on $W$ implies that $F(W, Y)$ is contractible whenever $Y\sphat_p$ is contractible. Let $k$ be an integer, or $-\infty$. We say that a spectrum $Y$ is *$(W, k)$-coconnected* if $\pi_* F(W, Y) = 0$ for all $* \ge k$. We say that a map of spectra $f {\colon\thinspace}Y_1 \to Y_2$ is *$(W, k)$-coconnected* if $\operatorname{hofib}(f)$ is $(W, k)$-coconnected, or equivalently, if $\pi_* F(W, Y_1) \to \pi_* F(W, Y_2)$ is injective for $* = k$ and an isomorphism for all $* > k$. The most obvious choice for $W$ is $W = S^{-1}\!/p^\infty$, in which case $F(W, Y) = Y\sphat_p$, so a map $f {\colon\thinspace}Y_1 \to Y_2$ is $(W, k)$-coconnected if and only if the $p$-completed map $f\sphat_p {\colon\thinspace}(Y_1)\sphat_p \to (Y_2)\sphat_p$ induces an injection on $\pi_*$ for $* = k$ and an isomorphism for $* > k$. When $k=-\infty$, this is the same as being a $p$-adic equivalence. For another class of examples we may take $W = F(V, S)$, where $V$ is a finite CW spectrum whose integral homology is $p$-torsion, in which case $F(W, Y) \simeq V \wedge Y$ by Spanier–Whitehead duality. In this case $f {\colon\thinspace}Y_1 \to Y_2$ is $(W, k)$-coconnected if and only if the map $1 \wedge f {\colon\thinspace}V \wedge Y_1 \to V \wedge Y_2$ induces an injection $V_*(Y_1) = \pi_*(V \wedge Y_1) \to \pi_*(V \wedge Y_2) = V_*(Y_2)$ for $* = k$ and an isomorphism for $* > k$. 
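To make the second class of examples concrete, here is a standard computation (our addition, not from the paper) for the smallest choice $V = S/p$, the mod $p$ Moore spectrum. Smashing the cofiber sequence $S \overset{p}{\to} S \to S/p$ with $Y$ and passing to homotopy groups gives the short exact sequence $$0 \to \pi_n(Y)/p \to \pi_n(V \wedge Y) \to {}_p\pi_{n-1}(Y) \to 0 \,,$$ where ${}_p\pi_{n-1}(Y) = \{x \in \pi_{n-1}(Y) : px = 0\}$ is the $p$-torsion subgroup. So for $W = F(V, S)$ with $V = S/p$, the groups $V_*(Y) = \pi_*(V \wedge Y)$ are the homotopy groups of $Y$ with ${\mathbb{Z}}/p$-coefficients, and $(W, k)$-coconnectedness of a map is detected on mod $p$ homotopy in degrees $\ge k$.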
Hereafter we assume that a pair $(W, k)$ has been chosen as in the definition above. Next, we recall some comparison maps between fixed points, geometric fixed points and homotopy fixed points. \[dfn-1.3\] Let $C = C_p \subset C_{p^n} = G$ and $\bar G = G/C \cong C_{p^{n-1}}$. Let $\lambda = {\mathbb{C}}(1)$ be the basic faithful $G$-representation of complex rank one, and $S^\lambda$ its one-point compactification. Let $\infty\lambda$ be the direct sum of a countable number of copies of $\lambda$. Its unit sphere $S(\infty\lambda) = EG$ is a free contractible $G$-CW space, and its one-point compactification $S^{\infty\lambda} = \widetilde{EG}$ sits in a $G$-homotopy cofiber sequence $EG_+ \to S^0 \to \widetilde{EG}$, where the first map collapses $EG$ to the non-basepoint. Note the $G$-homeomorphism $\widetilde{EG} \wedge S^{j\lambda} \cong \widetilde{EG}$ for each $j\ge0$. Let $X$ be any genuine $G$-spectrum [@LMS86], and consider the vertical map $$\xymatrix{ EG_+ \wedge X \ar[r] \ar[d]^{\simeq_G} & X \ar[r] \ar[d] & {}\widetilde{EG} \wedge X \ar[d] \\ EG_+ \wedge F(EG_+, X) \ar[r] & F(EG_+, X) \ar[r] & {}\widetilde{EG} \wedge F(EG_+, X) }$$ of horizontal $G$-homotopy cofiber sequences. Passing to $G$-fixed point spectra we obtain a vertical map $$\xymatrix{ X_{hG} \ar[rr]^-N \ar[d]_{=} && X^G \ar[rr]^-R \ar[d]^{\Gamma_n} && \Phi^C(X)^{\bar G} \ar[d]^{\hat\Gamma_n} \\ X_{hG} \ar[rr]^-{N^h} && X^{hG} \ar[rr]^-{R^h} && X^{tG} }$$ of horizontal homotopy cofiber sequences, called the norm–restriction sequences [@GM95]\*[Diag. (C), (D)]{}. 
Here $$\begin{aligned} X_{hG} &= EG_+ \wedge_G X &&\qquad\text{(homotopy orbits)} \\ X^{hG} &= F(EG_+, X)^G &&\qquad\text{(homotopy fixed points)} \\ X^{tG} &= [\widetilde{EG} \wedge F(EG_+,X)]^G &&\qquad\text{(Tate construction)} \\ \intertext{and there is a $\bar G$-equivariant equivalence} \Phi^C(X) &\simeq [\widetilde{EG} \wedge X]^C &&\qquad\text{(geometric fixed points)}\end{aligned}$$ inducing the upper right hand equivalence $[\widetilde{EG} \wedge X]^G \simeq \Phi^C(X)^{\bar G}$. For more details, see e.g. [@HM97]\*[Prop. 2.1]{}. The right hand square above is homotopy cartesian, so $\Gamma_n$ is $(W, k)$-coconnected if and only if $\hat\Gamma_n$ is $(W, k)$-coconnected. This observation can be combined with the conclusions of all of the theorems below. We write $H_*(X) = H_*(X; {\mathbb{F}}_p)$ for the mod $p$ homology of any spectrum. \[thm-1.6\] Let $X$ be a $G$-spectrum. Assume that $\pi_*(X)$ is bounded below and $H_*(X)$ is of finite type. Suppose that $\Gamma_1 {\colon\thinspace}X^C \to X^{hC}$ and $\Gamma_{n-1} {\colon\thinspace}\Phi^C(X)^{\bar G} \to \Phi^C(X)^{h\bar G}$ are $(W, k)$-coconnected maps. Then $\Gamma_n {\colon\thinspace}X^G \to X^{hG}$ is $(W, k)$-coconnected. Informally, the theorem asserts that if $X^C \to X^{hC}$ is close to a $p$-adic equivalence, and we can inductively prove that $Y^{\bar G} \to Y^{h\bar G}$ is close to a $p$-adic equivalence for $Y = \Phi^C(X)$, then $X^G \to X^{hG}$ is close to a $p$-adic equivalence. \[cor-1.7\] Let $X$ be a $C_{p^n}$-spectrum. Suppose for each of the geometric fixed point spectra $$Y = X \,,\, \Phi^{C_p}(X) \,,\, \dots \,,\, \Phi^{C_{p^{n-1}}}(X)$$ that $Y$ is bounded below with $H_*(Y)$ of finite type, and that $\Gamma_1 {\colon\thinspace}Y^{C_p} \to Y^{hC_p}$ is $(W, k)$-coconnected. Then $\Gamma_n {\colon\thinspace}X^{C_{p^n}} \to X^{hC_{p^n}}$ is $(W, k)$-coconnected. The proofs of Theorem \[thm-1.6\] and Corollary \[cor-1.7\] are given near the end of Section \[sec-2\]. 
Let $B$ be any spectrum. When $B$ is realized as a symmetric spectrum (or an FSP), the $r$-fold smash power $B^{\wedge r}$ can be defined as a genuine $C_r$-spectrum by the construction $$B^{\wedge r} = sd_r THH(B)_0 = THH(B)_{r-1}$$ from [@HM97]\*[§2.4]{}. Its $V$-th space is defined by a homotopy colimit $$(B^{\wedge r})_V = \operatorname*{hocolim}_{(i_1, \dots, i_r) \in I^r} \operatorname{Map}(S^{i_1} \wedge \dots \wedge S^{i_r}, B_{i_1} \wedge \dots \wedge B_{i_r} \wedge S^V) \,,$$ and $C_r$ cyclically permutes the smash factors, in addition to its natural action on $S^V$. We are principally interested in the case $r = p^n$. In [@LR:A]\*[Thm. 5.13]{}, the third and fourth authors prove that $\Gamma_1 {\colon\thinspace}(B^{\wedge p})^{C_p} \to (B^{\wedge p})^{hC_p}$ is a $p$-adic equivalence whenever $\pi_*(B)$ is bounded below and $H_*(B)$ is of finite type. This provides the inductive beginning for the following application of Corollary \[cor-1.7\]. \[thm-1.9\] Let $B$ be a spectrum with $\pi_*(B)$ bounded below and $H_*(B)$ of finite type. Then $$\Gamma_n {\colon\thinspace}(B^{\wedge {p^n}})^{C_{p^n}} \to (B^{\wedge {p^n}})^{hC_{p^n}}$$ is a $p$-adic equivalence, for each $n\ge1$. When $B$ is a symmetric ring spectrum, its topological Hochschild homology $THH(B)$ is a genuine ${\mathbb{T}}$-spectrum [@HM97]\*[§2.4]{}, where ${\mathbb{T}}$ is the circle group. It is not true in general that $\Gamma_1 {\colon\thinspace}THH(B)^{C_p} \to THH(B)^{hC_p}$ is a $p$-adic equivalence, but when it is “approximately” true, then the following theorem is useful. \[thm-1.10\] Let $B$ be a connective symmetric ring spectrum with $H_*(B)$ of finite type, and suppose that $$\Gamma_1 {\colon\thinspace}THH(B)^{C_p} \to THH(B)^{hC_p}$$ is $(W, k)$-coconnected. Then $$\Gamma_n {\colon\thinspace}THH(B)^{C_{p^n}} \to THH(B)^{hC_{p^n}}$$ is $(W, k)$-coconnected, for each $n\ge2$. The proofs of Theorems \[thm-1.9\] and \[thm-1.10\] are given at the end of Section \[sec-2\]. 
In the case $B = S$ there is a $G$-equivariant equivalence $THH(S) \simeq S_G$, and $\Gamma_1$ is a $p$-adic equivalence by the classical Segal conjecture. Also in the cases $B = MU$ (the complex cobordism spectrum) and $B = BP$ (the Brown–Peterson spectrum) it turns out that $\Gamma_1$ for $THH(B)$ is a $p$-adic equivalence, as the third and fourth authors show in [@LR:B]\*[Thm. 1.1]{}. This provides examples with $k=-\infty$ for the following special case. \[ex-1.11\] Taking $W = S^{-1}\!/p^\infty$, the assumption in Theorem \[thm-1.10\] is that the $p$-completed map $\Gamma_1 {\colon\thinspace}(THH(B)^{C_p})\sphat_p \to (THH(B)^{hC_p})\sphat_p$ is $k$-coconnected, i.e., that it induces an injection on $\pi_k$ and an isomorphism on $\pi_*$ for $* > k$, and the conclusion is that the $p$-completed map $$\Gamma_n {\colon\thinspace}(THH(B)^{C_{p^n}})\sphat_p \to (THH(B)^{hC_{p^n}})\sphat_p$$ is also $k$-coconnected, for all $n\ge2$. This recovers a theorem of Tsalidis [@Ts98]\*[Thm. 2.4]{}. \[ex-1.12\] Taking $W = F(V, S)$ and $V = V(1) = S/(p, v_1)$, the Smith–Toda complex of chromatic type 2, the assumption in Theorem \[thm-1.10\] is that $$V(1)_*(\Gamma_1) {\colon\thinspace}V(1)_* THH(B)^{C_p} \to V(1)_* THH(B)^{hC_p}$$ is $k$-coconnected, and the conclusion is that $$V(1)_*(\Gamma_n) {\colon\thinspace}V(1)_* THH(B)^{C_{p^n}} \to V(1)_* THH(B)^{hC_{p^n}}$$ is also $k$-coconnected, for all $n\ge2$. This recovers the generalization of Tsalidis’ theorem used (implicitly) by Ausoni and Rognes [@AR02]\*[Thm. 5.7]{} in the special case when $B = \ell$, the Adams summand of connective $p$-local complex $K$-theory, and $k = 2p-2$. The generalized result is used again in [@AR:tck1]\*[Cor. 5.10]{}. Constructions and proofs {#sec-2} ======================== Let $\bar\lambda$ be the basic faithful $\bar G$-representation of complex rank one. Like in Definition \[dfn-1.3\], we let $E\bar G = S(\infty\bar\lambda)$ and $\widetilde{E\bar G} = S^{\infty\bar\lambda}$. 
There is a $\bar G$-equivalence $\operatorname*{hocolim}_j S^{j\bar\lambda} \overset{\simeq}{\longrightarrow}\widetilde{E\bar G}$. The pullback of $\bar\lambda$ along $G \to \bar G$ is the $p$-th tensor power $\lambda^p = {\mathbb{C}}(p)$ of $\lambda$, and there is a $G$-equivalence $$\operatorname*{hocolim}_j S^{j\lambda^p} \overset{\simeq}{\longrightarrow}\widetilde{E\bar G} \,,$$ where the right hand side is implicitly viewed as a $G$-space by pullback along $G \to \bar G$. Each $G$-map $z {\colon\thinspace}S^{j\lambda^p} \to S^{(j+1)\lambda^p}$ in the colimit system is induced by the zero-inclusion $\{0\} \subset \lambda^p$. \[lem-2.2\] Let $X$ be a genuine $G$-spectrum. There is a natural homotopy cofiber sequence $$\xymatrix{ \operatorname*{holim}_j \, (\Sigma^{-j\lambda^p} X)^G \ar[r] & (X^C)^{\bar G} \ar[rr]^-{\Gamma_{n-1}} && (X^C)^{h\bar G} \,, }$$ where the right hand map is $\Gamma_{n-1}$ for the $\bar G$-spectrum $X^C$. By mapping the $\bar G$-homotopy cofiber sequence $E\bar G_+ \to S^0 \to \widetilde{E\bar G}$ into $X^C$, we get the homotopy (co-)fiber sequence $$F(\widetilde{E\bar G}, X^C)^{\bar G} \to (X^C)^{\bar G} \overset{\Gamma_{n-1}}{\longrightarrow}F(E\bar G_+, X^C)^{\bar G} \,.$$ Here $$F(\widetilde{E\bar G}, X^C)^{\bar G} \simeq \operatorname*{holim}_j F(S^{j\bar\lambda}, X^C)^{\bar G} \simeq \operatorname*{holim}_j \, (\Sigma^{-j\lambda^p} X)^G \,.$$ This gives the asserted homotopy cofiber sequence. \[prop-2.3\] Let $X$ be a genuine $G$-spectrum. 
There is a vertical map of homotopy cofiber sequences $$\xymatrix{ \operatorname*{holim}_j \Phi^C(\Sigma^{-j\lambda^p} X)^{\bar G} \ar[r] \ar[d] & \Phi^C(X)^{\bar G} \ar[rr]^-{\Gamma_{n-1}} \ar[d]^{\hat\Gamma_n} && \Phi^C(X)^{h\bar G} \ar[d]^{(\hat\Gamma_1)^{h\bar G}} \\ \operatorname*{holim}_j \, (\Sigma^{-j\lambda^p} X)^{tG} \ar[r] & X^{tG} \ar[rr]^-{\Gamma_{n-1}} && (X^{tC})^{h\bar G} \rlap{\,.} }$$ The right hand horizontal maps are $\Gamma_{n-1}$ for the $\bar G$-spectra $\Phi^C(X) \simeq [\widetilde{EG} \wedge X]^C$ and $X^{tC} \simeq [\widetilde{EG} \wedge F(EG_+, X)]^C$, respectively. We replace $X$ in the lemma above by the $G$-spectra $\widetilde{EG} \wedge X$ and $\widetilde{EG} \wedge F(EG_+, X)$. This gives the two claimed homotopy cofiber sequences, in view of the $\bar G$-equivalences $$\Phi^C(\Sigma^{-j\lambda^p} X) \simeq [\widetilde{EG} \wedge F(S^{j\lambda^p}, X)]^C \simeq F(S^{j\bar\lambda}, [\widetilde{EG} \wedge X]^C)$$ and $$(\Sigma^{-j\lambda^p} X)^{tC} \simeq [\widetilde{EG} \wedge F(EG_+, F(S^{j\lambda^p}, X))]^C \simeq F(S^{j\bar\lambda}, [\widetilde{EG} \wedge F(EG_+, X)]^C) \,,$$ respectively. These all follow from the $G$-dualizability of $S^{j\lambda^p}$. \[lem-2.4\] If $\hat\Gamma_1 {\colon\thinspace}\Phi^C(X) \to X^{tC}$ is $(W, k)$-coconnected, then $(\hat\Gamma_1)^{h\bar G}$ is $(W, k)$-coconnected. This is a special case of a more general result. The homotopy fixed point spectral sequence $$E^2_{s,t} = H^{-s}(G; \pi_t(Y)) \Longrightarrow \pi_{s+t}(Y^{hG})$$ shows that $Y^{hG}$ is $k$-coconnected whenever $Y$ is a $k$-coconnected $G$-spectrum. Commutation of function spectra, homotopy fibers and homotopy fixed points shows that $Y_1^{hG} \to Y_2^{hG}$ is $(W, k)$-coconnected whenever $Y_1 \to Y_2$ is a $(W, k)$-coconnected $G$-map. The lemma follows by applying this for the $\bar G$-map $\hat\Gamma_1$. The *Greenlees filtration* [@Gr87]\*[p. 
437]{} of $\widetilde{EG}$ is an integer-indexed $G$-cellular filtration of spectra, whose $2i$-th term is $S^{i\lambda}$ for each integer $i$. The $(2i+1)$-th term is obtained from $S^{i\lambda}$ by attaching a single $G$-free $(2i+1)$-cell, and $S^{(i+1)\lambda}$ is in turn obtained from it by attaching a single $G$-free $(2i+2)$-cell. The Greenlees filtration induces an increasing filtration of $X^{tG} = [\widetilde{EG} \wedge F(EG_+, X)]^G$, and a tower of homotopy cofibers with $(2i+1)$-th term $$\label{eq-2.6} X^{tG}\langle i\rangle = [\widetilde{EG}/S^{i\lambda} \wedge F(EG_+, X)]^G \,,$$ which we call the Tate tower. The associated spectral sequence is the homological $G$-equivariant Tate spectral sequence $$\hat E^2_{s,t} = {\widehat{H}}^{-s}(G; H_t(X))$$ converging to the continuous homology groups $$H^c_*(X^{tG}) = \lim_i H_*(X^{tG}\langle i\rangle)$$ of $X^{tG}$, when $X$ is a bounded below spectrum with $H_*(X)$ of finite type. See [@LR:A]\*[Def. 2.3, Prop. 4.15]{}. Note that $i$ tends to $-\infty$ in this limit. We shall also refer to the continuous cohomology groups $$H_c^*(X^{tG}) = \operatorname*{colim}_i H^*(X^{tG}\langle i\rangle) \,,$$ and note that $H^c_*(X^{tG}) \cong H_c^*(X^{tG})^*$ (the $\operatorname{Hom}$ dual) when $H_*(X)$ is of finite type, because then each $H_*(X^{tG}\langle i\rangle)$ is also of finite type. Let the $G$-map $\xi {\colon\thinspace}S^\lambda \to S^{\lambda^p}$ be the suspension of the degree $p$ covering map $\pi {\colon\thinspace}S^1 = S(\lambda) \to S(\lambda^p) = S^1/C$ of unit spheres, as in the following vertical map of horizontal $G$-homotopy cofiber sequences: $$\xymatrix{ S(\lambda)_+ \ar[r] \ar[d]^{\pi_+} & S^0 \ar[r] \ar[d]^{=} & S^\lambda \ar[d]^\xi \\ S(\lambda^p)_+ \ar[r] & S^0 \ar[r]^-z & S^{\lambda^p} }$$ Then $\xi$ has degree $p$ on the top cell, so $\xi_* {\colon\thinspace}H_*(S^\lambda) \to H_*(S^{\lambda^p})$ is the zero homomorphism (since we work with reduced homology and mod $p$ coefficients). 
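For orientation, here is a minimal illustration of the Tate spectral sequence (our addition, not part of the argument): when $G = C_p$ acts trivially on $X = S$, the Tate cohomology of a cyclic group is $2$-periodic with ${\widehat{H}}^{n}(C_p; {\mathbb{F}}_p) \cong {\mathbb{F}}_p$ in every degree $n$, so $$\hat E^2_{s,t} = {\widehat{H}}^{-s}(C_p; H_t(S)) \cong \begin{cases} {\mathbb{F}}_p & \text{for $t = 0$ and all $s \in {\mathbb{Z}}$,} \\ 0 & \text{otherwise.} \end{cases}$$ The $E^2$-term is thus a single row of copies of ${\mathbb{F}}_p$, concentrated in vertical degree $t = 0$.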
\[prop-2.8\] Let $X$ be a $G$-spectrum with $H_*(X)$ bounded below. Then $$\lim_j H^c_*((\Sigma^{-j\lambda^p} X)^{tG}) = \lim_{i,j} H_*((\Sigma^{-j\lambda^p} X)^{tG}\langle i\rangle) = 0$$ and $$\operatorname*{colim}_j H_c^*((\Sigma^{-j\lambda^p} X)^{tG}) = \operatorname*{colim}_{i,j} H^*((\Sigma^{-j\lambda^p} X)^{tG}\langle i\rangle) = 0 \,.$$ In the notation of  we have a natural equivalence $$(\Sigma^{-j\lambda^p} X)^{tG}\langle i\rangle \overset{\simeq}{\longrightarrow}(\Sigma^{j(\lambda-\lambda^p)} X)^{tG}\langle i{-}j\rangle$$ for each $i$ and $j$, since $S^{j\lambda}$ is $G$-dualizable. Under this identification, the $z$-tower map $$z {\colon\thinspace}(\Sigma^{-(j+1)\lambda^p} X)^{tG}\langle i\rangle \to (\Sigma^{-j\lambda^p} X)^{tG}\langle i\rangle$$ induced by smashing with $z {\colon\thinspace}S^0 \to S^{\lambda^p}$ corresponds to the composite of the Tate tower map $$(\Sigma^{(j+1)(\lambda-\lambda^p)} X)^{tG}\langle i{-}j{-}1\rangle \to (\Sigma^{(j+1)(\lambda-\lambda^p)} X)^{tG}\langle i{-}j\rangle$$ induced by smashing with $S^0 \to S^\lambda$, and the map $$\xi {\colon\thinspace}(\Sigma^{(j+1)(\lambda-\lambda^p)} X)^{tG}\langle i{-}j\rangle \to (\Sigma^{j(\lambda-\lambda^p)} X)^{tG}\langle i{-}j\rangle$$ induced by smashing with $\xi {\colon\thinspace}S^\lambda \to S^{\lambda^p}$: $$\xymatrix{ [\widetilde{EG}/S^{i\lambda} \wedge F(EG_+, \Sigma^{-(j+1)\lambda^p} X)]^G \ar[r]^-{\simeq} \ar[dd]_{z} & [\widetilde{EG}/S^{(i-j-1)\lambda} \wedge F(EG_+, \Sigma^{(j+1)(\lambda-\lambda^p)} X)]^G \ar[d] \\ & [\widetilde{EG}/S^{(i-j)\lambda} \wedge F(EG_+, \Sigma^{(j+1)(\lambda-\lambda^p)} X)]^G \ar[d]^{\xi} \\ [\widetilde{EG}/S^{i\lambda} \wedge F(EG_+, \Sigma^{-j\lambda^p} X)]^G \ar[r]_-{\simeq} & [\widetilde{EG}/S^{(i-j)\lambda} \wedge F(EG_+, \Sigma^{j(\lambda-\lambda^p)} X)]^G }$$ Passing to the limit over $i$, the homomorphism $$z_* {\colon\thinspace}H^c_*((\Sigma^{-(j+1)\lambda^p} X)^{tG}) \to H^c_*((\Sigma^{-j\lambda^p} X)^{tG})$$ is identified with the 
homomorphism $$\label{eq-2.9} \xi_* {\colon\thinspace}H^c_*((\Sigma^{(j+1)(\lambda-\lambda^p)} X)^{tG}) \to H^c_*((\Sigma^{j(\lambda-\lambda^p)} X)^{tG}) \,,$$ so it suffices to show that the limit over $j$ of the latter homomorphisms is zero. Let $$\hat E^2_{s,t}(j) = {\widehat{H}}^{-s}(G; H_t(\Sigma^{j(\lambda-\lambda^p)} X)) \Longrightarrow H^c_{s+t}((\Sigma^{j(\lambda-\lambda^p)} X)^{tG})$$ be the homological Tate spectral sequence for the $j$-th term in the $\xi$-tower. The map $\xi_*$ above is compatible with the spectral sequence map $\hat E^2_{**}(j+1) \to \hat E^2_{**}(j)$ that is induced on Tate cohomology by the $G$-module homomorphism $$\xi_* {\colon\thinspace}H_*(\Sigma^{(j+1)(\lambda-\lambda^p)} X) \to H_*(\Sigma^{j(\lambda-\lambda^p)} X) \,.$$ This homomorphism is zero, since $\xi_* {\colon\thinspace}H_*(S^\lambda) \to H_*(S^{\lambda^p})$ is zero. Hence the map of spectral sequences is also zero. It follows that the homomorphism $\xi_*$ in  strictly reduces the Tate filtration ($=s$) of each nonzero continuous homology class. Equivalently, $\xi_*$ strictly increases the vertical degree ($=t$) of the spectral sequence representative of each nonzero class. By assumption, there is an integer $\ell$ such that $H_t(X) = 0$ for all $t < \ell$. Then $\hat E^2_{s,t}(j) = \hat E^\infty_{s,t}(j) = 0$ for $t < \ell$ and any $j$. If $x = (x_j)_j$ is an arbitrary element of $\lim_j H^c_*((\Sigma^{j(\lambda-\lambda^p)} X)^{tG})$, then $x_j = \xi_*^k(x_{j+k})$ for each $k\ge0$. If $x_j$ is represented in vertical degree $t$, then $x_{j+k}$ must be represented in vertical degree $\le (t-k)$. Choosing $k$ so large that $t-k < \ell$, it follows that $x_{j+k} = 0$, which implies $x_j = 0$. Repeating the argument for each $j$ we see that $x=0$, so $\lim_j H^c_*((\Sigma^{j(\lambda-\lambda^p)} X)^{tG})$ must be the trivial group. Let $M = \operatorname*{colim}_j H_c^*((\Sigma^{-j\lambda^p} X)^{tG})$. 
Then the $\operatorname{Hom}$ dual $M^*$ is the limit group we just showed is zero, and $M$ injects into its double $\operatorname{Hom}$ dual $M^{**}$, so $M = 0$ as well. \[prop-2.10\] Let $X$ be a $G$-spectrum with $\pi_*(X)$ bounded below and $H_*(X)$ of finite type. Then the $p$-adic completion $Y\sphat_p$ of $$Y = \operatorname*{holim}_j \, (\Sigma^{-j\lambda^p} X)^{tG}$$ is contractible. The spectrum $Y$ is the homotopy limit over $i$ and $j$ of the spectra $$(\Sigma^{-j\lambda^p} X)^{tG}\langle i\rangle = [\widetilde{EG}/S^{i\lambda} \wedge F(EG_+, \Sigma^{-j\lambda^p} X)]^G \,,$$ which can be rewritten as $$(\widetilde{EG}/S^{i\lambda} \wedge \Sigma^{-j\lambda^p} X)_{hG}$$ by the Adams equivalence [@LMS86]\*[II.8.4]{}, since $\widetilde{EG}/S^{i\lambda}$ is a free $G$-CW spectrum. Each of these is bounded below with mod $p$ homology of finite type. Hence there is an inverse limit Adams spectral sequence $$E_2^{**} = \operatorname{Ext}_{{\mathscr{A}}}^{**}(M, {\mathbb{F}}_p) \Longrightarrow \pi_*(Y\sphat_p)$$ converging to the $p$-adic homotopy of that homotopy limit (see [@CMP87]\*[Prop. 7.1]{} and [@LR:A]\*[Prop. 2.2]{}), where $$M = \operatorname*{colim}_j H_c^*((\Sigma^{-j\lambda^p} X)^{tG}) \,.$$ The latter ${\mathscr{A}}$-module was shown to be zero in Proposition \[prop-2.8\], hence the $E_2$-term is zero and $Y\sphat_p$ is contractible. Consider the diagram in Proposition \[prop-2.3\]. By assumption, the maps $$\Gamma_{n-1} {\colon\thinspace}\Phi^C(X)^{\bar G} \to \Phi^C(X)^{h\bar G} \qquad\text{and}\qquad \Gamma_1 {\colon\thinspace}X^C \to X^{hC}$$ are $(W, k)$-coconnected. Hence $\hat\Gamma_1 {\colon\thinspace}\Phi^C(X) \to X^{tC}$ is $(W, k)$-coconnected, so by Lemma \[lem-2.4\] also $$(\hat\Gamma_1)^{h\bar G} {\colon\thinspace}\Phi^C(X)^{h\bar G} \to (X^{tC})^{h\bar G}$$ is $(W, k)$-coconnected.
By Proposition \[prop-2.10\], the map $\Gamma_{n-1} {\colon\thinspace}X^{tG} \to (X^{tC})^{h\bar G}$ is a $p$-adic equivalence, hence $(W, -\infty)$-coconnected, by our standing assumption that $W$ is in the localizing ideal of spectra generated by $S^{-1}\!/p^\infty$. It follows easily that $\hat\Gamma_n {\colon\thinspace}\Phi^C(X)^{\bar G} \to X^{tG}$ is $(W, k)$-coconnected, which is equivalent to $\Gamma_n {\colon\thinspace}X^G \to X^{hG}$ being $(W, k)$-coconnected. This follows by induction on $n$, using Theorem \[thm-1.6\] and the observation that $$\Phi^{C_p}(\Phi^{C_{p^e}}(X)) \cong \Phi^{C_{p^{e+1}}}(X)$$ for all $0 \le e < n$. This will follow from Corollary \[cor-1.7\] in the case $X = B^{\wedge p^n}$, $W = S^{-1}\!/p^\infty$ and $k=-\infty$, once we show that for each $0 \le e < n$ there is a $C_{p^{n-e}}$-equivalence $$Y = \Phi^{C_{p^e}}(B^{\wedge p^n}) \simeq B^{\wedge p^{n-e}} \,,$$ the right hand side is bounded below with mod $p$ homology of finite type, and $\Gamma_1 {\colon\thinspace}Y^{C_p} \to Y^{hC_p}$ is a $p$-adic equivalence. The first claim follows from the proof in simplicial degree $0$ of [@HM97]\*[Prop. 2.5]{}. Writing $Y \simeq Z^{\wedge p}$, where $Z = B^{\wedge p^{n-e-1}}$ is bounded below with $H_*(Z)$ of finite type, the other claims also follow, since $\Gamma_1 {\colon\thinspace}(Z^{\wedge p})^{C_p} \to (Z^{\wedge p})^{hC_p}$ is a $p$-adic equivalence by [@LR:A]\*[Thm. 5.13]{}, generalizing [@BMMS86]\*[§II.5]{}. There is a $C_{p^{n-1}}$-equivalence $$r {\colon\thinspace}\Phi^{C_p} THH(B) \overset{\simeq}{\longrightarrow}THH(B)$$ (the cyclotomic structure map of $THH(B)$, see [@HM97]\*[§2.5]{}), whose $e$-fold iterate is a $C_{p^{n-e}}$-equivalence $\Phi^{C_{p^e}}(THH(B)) \simeq THH(B)$. It is clear from the simplicial definition that $THH(B)$ is connective and has mod $p$ homology of finite type, hence the theorem follows from Corollary \[cor-1.7\].
--- abstract: 'We determine the classical and the non-central Wallach sets $W_0$ and $W$ by classical probabilistic methods. We prove the Mayerhofer conjecture on $W$. We exploit the fact that $(x_0,\beta)\in W$ if and only if $x_0$ is the starting point and $2\beta$ is the drift of a squared Bessel matrix process $X_t$ on the cone $\overline{Sym^+({\mathbf{R}},p)}$. Our methods are based on the study of SDEs for the symmetric polynomials of $X_t$ and for the eigenvalues of $X_t$, i.e. the squared Bessel particle systems.' address: - | Piotr Graczyk\ LAREMA\ Université d’Angers\ 2 Bd Lavoisier\ 49045 Angers cedex 1, France - | Jacek Ma[ł]{}ecki,\ Faculty of Pure and Applied Mathematics\ Wroc[ł]{}aw University of Technology\ ul. Wybrze[ż]{}e Wyspia[ń]{}skiego 27\ 50-370 Wroc[ł]{}aw, Poland author: - 'Piotr Graczyk, Jacek Ma[ł]{}ecki' title: Wallach sets and squared Bessel particle systems --- [^1] Introduction and Preliminaries ============================== The aim of this paper is to prove the characterization of the non-central Wallach set $W$, conjectured in [@bib:mayerJMA] by Mayerhofer. More precisely, let us denote by $\mathcal{S}_p=Sym({\mathbf{R}},p)$ the space of symmetric $p\times p$ matrices and let $\mathcal{S}_p^+$ be the open cone of positive definite matrices. The (central) Wallach set $W_{0}$ is defined as the set of admissible $\beta\in{\mathbf{R}}$ such that there exists a random matrix $X$ with values in $\bar {\mathcal S}^+_p$ (equivalently a measure with support in $\bar {\mathcal S}^+_p$) such that its Laplace transform is of the form $${\mathbf{E}}e^{-{\bf Tr}(uX)}=(\det(I+2\Sigma u))^{-\beta},\ \ u\in {\mathcal S}^+_p,$$ where $\Sigma\in {\mathcal S}_p^+$. It is well-known (see [@bib:farautKOR], pp. 137, 349) that $$W_0=\frac12 B \cup \left[\frac{p-1}2,\infty\right) \/,$$ where $B=\{0,1,\cdots,p-2\}$. However, a similar question can be stated in a more general setting. Let $x_0\in \bar {\mathcal S}_p^+$ and $\beta \in {\mathbf{R}}$. 
We say that the pair $(x_0,\beta)$ belongs to the non-central Wallach set $W$ if there exists a random matrix $X$ with values in $\bar {\mathcal S}_p^+$ having the Laplace transform $$\label{eq:Lap1} {\mathbf{E}}e^{-{\bf Tr}(uX) }= (\det(I+2\Sigma u))^{-\beta} \exp[- {\bf Tr}(x_0(I+2\Sigma u)^{-1}u)], \ \ u\in {\mathcal S}^+_p,$$ for a matrix $\Sigma\in {\mathcal S}_p^+$. The interest in random matrices verifying (\[eq:Lap1\]) comes from the fact that if $$X = \xi_1\xi_1^T+\ldots+\xi_n\xi_n^T=q(\xi)\/,\quad \xi=(\xi_1,\ldots,\xi_n)\/,$$ where $\xi_{i}\sim N_p(m_i,\Sigma)$ are independent normal vectors in ${\mathbf{R}}^p$, then the Laplace transform of $X$ is given by (\[eq:Lap1\]) with $\beta =n/2$ and $x_0=q(m_1,\ldots,m_n)$ (see [@bib:mayer]). Consequently, random matrices $X$ verifying (\[eq:Lap1\]) are of great importance in statistics as estimators of the normal covariance matrix $\Sigma$. Obviously $(0,\beta)\in W$ if and only if $\beta \in W_0$. Note also that whenever $(x_0,\beta)\in W$ then $\beta\geq 0$; otherwise ${\mathbf{E}}e^{-{\bf Tr}(uX) }$ would be unbounded (take for example $u=nI$, $n\in\mathbb{N}$, $n\to \infty$). The characterization of the non-central Wallach set $W$ has recently been studied by Letac and Massam in [@bib:LetMassFalse]. However, [@bib:LetMassFalse] contains an error in the formulation of the result and in its proof, which was pointed out by Mayerhofer [@bib:mayerJMA]. Mayerhofer stated in [@bib:mayerJMA] the following conjecture. [**Mayerhofer Conjecture**]{}. [*The non-central Wallach set is characterized by* ]{} $$(x_0,\beta )\in W \ \ \Leftrightarrow \ \ (\beta\in \left[\frac{p-1}2,\infty\right), x_0\in \bar {\mathcal S}_p^+) \ {\it or}\ (2\beta\in B, rk(x_0)\le 2\beta).$$ Sufficiency of these conditions was shown by Bru in [@bib:b91], except for the case $2\beta=p-1$, which may be found in [@bib:LetMassFalse] and [@bib:gm11].
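The Laplace transform (\[eq:Lap1\]) can be checked numerically. The following sketch is only illustrative — the mean vectors, the choices $\Sigma=I$ and $u=0.1\,I$, and the sample size are ours, not from the paper — and compares a Monte Carlo estimate of ${\mathbf{E}}\,e^{-{\bf Tr}(uX)}$ for $X=q(\xi)$ with the closed form with $\beta=n/2$ and $x_0=q(m_1,\ldots,m_n)$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 2, 3                       # so beta = n/2 = 1.5
M = np.array([[1.0, 0.0, 0.5],    # illustrative means m_1,...,m_n as columns
              [0.0, 1.0, 0.5]])
x0 = M @ M.T                      # x_0 = q(m_1,...,m_n)
u = 0.1 * np.eye(p)               # Sigma = I throughout

# Monte Carlo estimate of E exp(-Tr(u X)) with X = sum_i xi_i xi_i^T
N = 20000
Xi = M[None, :, :] + rng.standard_normal((N, p, n))   # xi_i ~ N_p(m_i, I)
est = 0.0
for k in range(N):
    X = Xi[k] @ Xi[k].T
    est += np.exp(-np.trace(u @ X))
est /= N

# closed form: det(I + 2u)^(-n/2) * exp(-Tr(x0 (I + 2u)^(-1) u))
A = np.eye(p) + 2.0 * u
exact = np.linalg.det(A) ** (-n / 2) * np.exp(-np.trace(x0 @ np.linalg.inv(A) @ u))
```

With this sample size the Monte Carlo estimate agrees with the closed form to within a few standard errors, since the integrand $e^{-{\bf Tr}(uX)}$ is bounded by $1$.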
Concerning necessity, Mayerhofer proved in [@bib:mayerJMA] that if $(x_0,\beta)\in W$ and $2\beta\in B$, then $rk(x_0)\le 2\beta +1$. In this note we provide a simple proof of the Mayerhofer conjecture based on Itô stochastic calculus. A completely different approach, based on analytical methods, was proposed in the unpublished note [[@bib:LetMass]]{}. Denoting more precisely the set of $(x_0,\beta)$ with the property (\[eq:Lap1\]) by $W_\Sigma$, it is easy to show ([@bib:mayer], Prop.III.5.1) that $(x_0,\beta)\in W_\Sigma$ if and only if $(\Sigma^{-\frac12}x_0\Sigma^{-\frac12},\beta)\in W_I$. Thus the conditions of Mayerhofer’s conjecture are the same for any $\Sigma\in {\mathcal S}_p^+$ and, in the sequel, we will only consider the case $\Sigma=I$. Our main tools are the results of the article [@bib:gm2], which we adapt to the set-up of BESQ matrix SDEs and their eigenvalue processes, i.e. BESQ particle systems. These new results on BESQ particle systems are interesting in their own right and are a contribution to the study of these processes started in [@bib:katori2011]. Wallach sets and Stochastic Analysis ==================================== We begin this section by recalling (following the exposition given in [@bib:mayer], [@bib:mayerJMA]) the relations between Wallach sets and matrix squared Bessel processes. Consequently, we translate the Mayerhofer conjecture to the question of the existence of solutions in $\bar{\mathcal{S}}_p^{+}$ of the appropriate matrix SDE (depending on $\beta$) starting from $x_0$. Then, using the symmetric polynomial method and a comparison theorem, we show that such solutions exist only if $(x_0,\beta)$ fulfills the conditions stated by Mayerhofer. Wallach sets and stochastic processes -------------------------------------- Let $W_t$ be a Brownian matrix of dimension $p\times p$.
We call [*matrix BESQ process*]{} any solution of the following SDE $$\begin{aligned} \label{eq:Wishart:SDe} dX_t = \sqrt{|X_t|}dW_t+dW^T_t\sqrt{|X_t|}+\alpha Idt\/,\quad X_t\in {\mathcal S}_p\/,t\ge 0;\quad X_0=x_0. \end{aligned}$$ Recall that if $g:{\mathbf{R}}\to{\mathbf{R}}$ then $g(X)$ is defined spectrally, i.e. $g(U diag(\lambda_i) U^T)=U diag(g(\lambda_i)) U^T$, where $U\in SO(p)$. When $X_0=x_0\in \bar {\mathcal S}_p^+$, such processes were studied by Bru in [@bib:b91], who, among other results, showed that if (\[eq:Wishart:SDe\]) admits a solution $X_t \in \bar {\mathcal S}_p^+$ with $X_0=x_0\in \bar{\mathcal S}^+_p$, then the Laplace transform of $X_t$ is $$\label{Lap_Wish} {\mathbf{E}}^{x_0}[\exp(- {\bf Tr}(uX_t))]=(\det(I+2tu))^{-\alpha/2} \exp[- {\bf Tr}(x_0(I+2tu)^{-1}u)],\quad u\in {\mathcal S}^+_p\/.$$ In particular, taking $t=1$ shows that $(x_0, \frac{\alpha}{2})\in W$. Mayerhofer showed in [@bib:mayer] and [@bib:mayerJMA] that in fact these properties are equivalent: the stochastic differential equation (\[eq:Wishart:SDe\]) with $x_0\in \bar {\mathcal S}_p^+$ has a solution in $\bar {\mathcal S}_p^+$ if and only if $(x_0,\frac{\alpha}2)\in W$. Symmetric polynomials of solutions of matrix SDEs {#sec:polyn} ------------------------------------------------- If $X$ is a symmetric $p\times p$ matrix, we define the polynomials $e_n(X)$ as the elementary symmetric polynomials $$e_n(X) = \sum_{i_1<\ldots<i_n}\lambda_{i_1}(X)\lambda_{i_2}(X)\ldots \lambda_{i_n}(X)\/,\ \ \ \quad n=1,\ldots,p;$$ in the eigenvalues $\lambda_1(X) \le \ldots\le\lambda_p(X)$ of $X$. Moreover, we use the convention that $e_0(X)\equiv 1$. Up to the sign change, the polynomials $e_n$ are the coefficients of the characteristic polynomial of $X$, i.e. $$\det(X-uI)=(-1)^p u^p + (-1)^{p-1} e_1(X)u^{p-1}+\ldots -e_{p-1}(X)u+e_p(X)$$ and are polynomial functions of the entries of the matrix $X$. In particular, $e_p(X)=\det X$.
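For illustration, a crude Euler scheme for the matrix SDE (\[eq:Wishart:SDe\]) can be sketched as follows; the step size, horizon, and function names are our own choices, and no claim about convergence or positivity of the scheme is made. The spectral definition of $\sqrt{|X|}$ and the identity $e_p(X)=\det X$ are used directly:

```python
import numpy as np

def spectral_sqrt_abs(X):
    # g(X) defined spectrally: g(U diag(l) U^T) = U diag(g(l)) U^T with g = sqrt(|.|)
    lam, U = np.linalg.eigh(X)
    return (U * np.sqrt(np.abs(lam))) @ U.T

def euler_besq_matrix(x0, alpha, dt, n_steps, seed=0):
    # crude Euler step for dX = sqrt(|X|) dW + dW^T sqrt(|X|) + alpha I dt
    rng = np.random.default_rng(seed)
    X = np.array(x0, dtype=float)
    p = X.shape[0]
    for _ in range(n_steps):
        dW = rng.standard_normal((p, p)) * np.sqrt(dt)
        S = spectral_sqrt_abs(X)
        X = X + S @ dW + dW.T @ S + alpha * np.eye(p) * dt
    return X

def elementary_symmetric(X):
    # e_1,...,e_p in the eigenvalues of X, read off from the characteristic polynomial
    lam = np.linalg.eigvalsh(X)
    coeffs = np.poly(lam)                 # monic polynomial with roots lam
    return [(-1) ** n * coeffs[n] for n in range(1, len(lam) + 1)]

X = euler_besq_matrix(np.zeros((3, 3)), alpha=4.0, dt=1e-3, n_steps=500)
e = elementary_symmetric(X)
```

Each Euler increment $S\,dW + dW^T S + \alpha I\,dt$ is symmetric by construction, so the scheme stays in ${\mathcal S}_p$, and $e_p(X)=\det X$ holds by definition of the elementary symmetric polynomials.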
In [@bib:gm2], the symmetric polynomials related to a general class of non-colliding particle systems were studied in detail. Using the results therein we get the following characterization of the symmetric polynomials related to matrix squared Bessel processes. \[prop:Poly\] Let $X_t$ be a solution of the matrix SDE (\[eq:Wishart:SDe\]) with $X_t\in {\mathcal S}^+_p$, $t\ge 0$. Then the symmetric polynomials $e_n(t):=e_n(X_t)$, $n=1,\ldots, p$ are semimartingales satisfying the following system of SDEs $$\begin{aligned} de_n &=& M_n(e_1,\ldots,e_p)dV_n +(p-n+1)(\alpha-n+1)e_{n-1}dt\/,\quad n=1,\ldots,p-1\/, \label{eq:polynom_first:SDEs} \\ de_p &=& 2\sqrt{e_{p-1}e_p}dV_p +(\alpha-p+1)e_{p-1}dt, \label{eq:polynom_last:SDEs} \end{aligned}$$ where $V_i$, $i=1,\ldots, p$ are one-dimensional Brownian motions and the functions $M_n$ are continuous on ${\mathbf{R}}^p$. Note that the explicit forms of the martingale parts $M_n(e_1,\ldots,e_p)dV_n$ as well as their brackets $d\left<e_n,e_{m}\right>$ are known for every $n,m=1,\ldots,p$ (see Proposition 3.2 in [@bib:gm2]). However, they will not be used in the sequel apart from the case $n=p$ stated explicitly in Proposition \[prop:Poly\] above. The symmetric polynomials $(e_1,\ldots, e_p)$ are given by an analytic function (polynomials in the entries) of the matrix $X$. Thus the Itô formula, applied to the SDE for the matrix process $X_t$, gives a system of SDEs for $(e_1,\ldots, e_p)$. We determine these SDEs as in Propositions 3.1 and 3.2 in [@bib:gm2], using Theorem 3 from [@bib:gm11], in the case when the eigenvalues of $x_0$ are all distinct. Evidently, by the Itô formula, this form of the system of SDEs describing $(e_1,\ldots,e_p)$ does not depend on the starting point $x_0$, i.e. it does not change if we remove the condition that the eigenvalues of the initial point are all distinct.
We write $e_{n}^{\overline i}$ for the incomplete polynomial of order $n$, not containing the variable $\lambda_i$; the notation $e_{n}^{\overline i,\overline j}$ is analogous. Using formulas from Proposition 3.2 in [@bib:gm2] we find that $$M_n=2\left(\sum_{i=1}^p|\lambda_i|(e_{n-1}^{\overline i})^2\right)^{1/2}$$ and, in particular, when $X_t\in {\mathcal S}^+_p$, $t\ge 0$ (i.e. the eigenvalues are non-negative), we obtain $M_p= 2\sqrt{e_{p-1}e_p}$. Moreover, we have the following expressions for the drift parts of $de_n$: $$\sum_{i=1}^p \alpha e_{n-1}^{\overline i}-\sum_{i<j}(|\lambda_i|+|\lambda_j|)e_{n-2}^{\overline{i},\overline j}=(p-n+1)(\alpha-n+1)e_{n-1},$$ where we removed the absolute values since we assumed that all the eigenvalues are non-negative. This ends the proof. Whether the matrix BESQ process $X_t$ leaves $\bar{\mathcal{S}}_p^+$ is controlled by $e_p$, the determinant of $X_t$: if $e_p(t)$ is negative then $X_t$ cannot be in $\bar{\mathcal{S}}_p^+$. The explicit formulas for the SDEs describing the symmetric polynomials $e_1,\ldots,e_p$ can be used to show that $e_p$ becomes negative when $e_p(0)=0$ and $\alpha=2\beta$ is small enough. This is presented in the following proposition. \[prop:polyn\] Let $\alpha\geq 0$ and $x_0\in \bar{\mathcal{S}}_p^+$. - Suppose $0<\alpha<p-1$, $\alpha\not\in B$ and $rk(x_0)<p$. Then $(x_0,\frac{\alpha}{2})\not\in W$. In particular, the classical Wallach set $W_0=\frac12 B \cup [\frac{p-1}2,\infty)$. - If $\alpha\in B$, $rk(x_0)<p$ and $(x_0,\frac{\alpha}{2})\in W$, then $rk(x_0)\le\alpha$. To deal with (i), suppose that $(x_0,\frac{\alpha}{2})\in W$, so there exists a solution $X_t$ of (\[eq:Wishart:SDe\]) such that $X_t\in \bar {\mathcal S}^+_p$ for every $t\ge 0$. The condition $rk(x_0)<p$ is equivalent to $\lambda_1(0)=0$ as well as to $e_p(0)=0$.
Formula (\[eq:polynom_last:SDEs\]) shows that $e_p(X_t)$ is a BESQ$^{\alpha-p+1}(0)$ in ${\mathbf{R}}$ (the superscript of a BESQ denotes its dimension), starting from $0$, with a time change by $A_t=\int_0^t e_{p-1}(s) ds$. As shown in [@bib:gjy], the squared Bessel process with negative dimension starting from $0$ is just the negative of a BESQ$^{|\alpha-p+1|}(0)$ and consequently it becomes strictly negative just after the start. Thus, if the time change $A_t>0$ with positive probability, then ${\textbf{P}}(e_p(X_t)<0)>0$ for some $t>0$, contradicting $X_t\in\bar{\mathcal S}^+_p$. Thus $A_t\equiv 0$ and consequently the process $e_{p-1}(t)$ is always zero. Looking at the SDE for $e_{p-1}$ we deduce from $e_{p-1}(t)\equiv 0$ that the drift term must vanish, which means that $e_{p-2}(t)\equiv 0$. Note that we use here the fact that $\alpha\not\in B$ implies that the factors $\alpha-n+1$ in the drift term are non-zero. Consequently, by induction, we arrive at $e_1\equiv 0$, which is impossible because the drift term of the process $e_1$ is $p\alpha\, dt\not=0$. The classical Wallach set corresponds to $x_0=0$. This proves (i). The proof of (ii) is the same; however, the condition $\alpha\in B$ implies that the factors $\alpha-n+1$ in the drift term are non-zero only until $n=\alpha+1$. By induction, we get $e_{\alpha+1}(0)=0$, which is equivalent to $rk(x_0) \le \alpha$. Observe that the method of polynomials does not apply to the case $rk(x_0)=p$. We will show below that in this case the eigenvalue process $\lambda_1(t)$ of any matrix solution of (\[eq:Wishart:SDe\]) becomes negative. Squared Bessel particle systems ------------------------------- We call [*squared Bessel particle system*]{} the eigenvalue process $\lambda_1(t)\le \cdots \le \lambda_p(t)$ of a matrix solution of the SDE (\[eq:Wishart:SDe\]). The following corollary of the results of [@bib:gm2] is needed in the proof of the characterization of $W$. \[bad\_p\] For $\alpha \in B$ and any $x_0 \in {\mathcal S}^+_p$ (i.e.
$rk(x_0)=p$) the eigenvalue process $\lambda_1(t)\le \cdots \le \lambda_p(t)$ is a strong and pathwise unique solution of the following SDE system $$\begin{aligned} \label{eq:eigenvalues:SDe} d \lambda_i = 2\sqrt{|\lambda_i|}dB_i + \left(\alpha+\sum_{k\neq i}\frac{|\lambda_i|+|\lambda_k|}{\lambda_i-\lambda_k}\right)dt\/,\quad i=1,\ldots,p\end{aligned}$$ Moreover, the eigenvalues $\lambda_i(t)$ never collide for $t>0$. We use a natural bijection between the polynomials $e=(e_1,\ldots,e_p)$ and the eigenvalues $(\lambda_1,\ldots,\lambda_p)$ belonging to the closed Weyl chamber $\bar C_+= \{(x_1,\ldots,x_p)\in{\mathbf{R}}^p: x_1\le x_2\le\ldots\le x_p\}$, see [@bib:gm2]. The first part of the proof of Prop. 4.3 in [@bib:gm2], together with the proof of Th. 4.4 in [@bib:gm2], implies that any solution of the system (\[eq:polynom_first:SDEs\]) and (\[eq:polynom_last:SDEs\]) becomes non-colliding for every $t>0$, i.e. the $\lambda_i(e(t))$ are all different. Then we apply Remark 5.2 of [@bib:gm2], which allows one to construct a non-colliding solution to the system (\[eq:eigenvalues:SDe\]). By Th. 5.3 in [@bib:gm2] we get pathwise uniqueness for the solutions of (\[eq:eigenvalues:SDe\]). On the other hand, it was shown in Theorem 3 in [@bib:gm11] that whenever the eigenvalues are initially different then (\[eq:eigenvalues:SDe\]) holds. To deal with starting points with colliding eigenvalues, we first write the SDEs for the symmetric polynomials (which can be done in every case). As in [@bib:gm2], using an Itô formula argument (the form of the SDEs does not depend on the starting point), we conclude that the eigenvalues are solutions to (\[eq:eigenvalues:SDe\]). \[prop:hit:zero\] Suppose $\alpha<p-1$ and let $(\lambda_1,\ldots,\lambda_p)$ be a non-colliding solution of the system (\[eq:eigenvalues:SDe\]) with $\lambda_1(0)\ge 0$. Then ${\textbf{P}}(\lambda_1(t)<0)>0$, for every $t>0$.
Let $\Lambda = (\lambda_1,\ldots,\lambda_p)$ be a non-colliding solution to (\[eq:eigenvalues:SDe\]) and let $\tilde{\lambda}_1$ be a solution to the SDE given by $$\begin{aligned} d\tilde{\lambda}_1 = 2\sqrt{|\tilde{\lambda}_1|}dB_1+(\alpha-(p-1))dt\/,\end{aligned}$$ starting from $\tilde{\lambda}_1(0)=\lambda_1(0)\geq 0$. Note that $B_1$ is the same Brownian motion that appears in the SDE for $\lambda_1$. Now, we apply the techniques of local times proposed by Le Gall in [@bib:LeGall1983], described also in [@bib:ry99]. More precisely, by Lemmas 3.3 and 3.4 in [@bib:ry99], the local time $L^0(\tilde{\lambda}_1-\lambda_1)$ is zero. Using Tanaka’s formula (see the proof of Thm 3.7 in [@bib:ry99]) we obtain $$\begin{aligned} {\mathbf{E}}(\lambda_1(t)-\tilde{\lambda}_1(t))^{+} = {\mathbf{E}}\int_0^t \mathbf{1}_{\{\lambda_1(s)>\tilde{\lambda}_1(s)\}}\left(p-1+\sum_{k=2}^p\frac{|\lambda_1(s)|+|\lambda_k(s)|}{\lambda_1(s)-\lambda_k(s)}\right)ds\leq 0\/.\end{aligned}$$ The last inequality follows from the estimate ${|x|+|y|}\geq x-y$, valid for every $y<x$. It implies that $ {\textbf{P}}(\lambda_1(t)\leq \tilde{\lambda}_1(t) \textrm{ for every $t\geq 0$})=1\/. $ Since the process $\tilde{\lambda}_1$ is a squared Bessel motion with dimension $\alpha-(p-1)< 0$, it crosses zero a.s. and then remains in the negative half-line, see [@bib:gjy]. This ends the proof. \[rank\_p\] Suppose that $0<\alpha<p-1$ and $rk(x_0)=p$. Then $(x_0,\alpha/2)\not\in W$. Consider a solution of (\[eq:Wishart:SDe\]). The corresponding eigenvalue process $\Lambda=(\lambda_1,\ldots,\lambda_p)$ is a strong and pathwise unique solution of (\[eq:eigenvalues:SDe\]). If $\alpha\in {\mathbf{R}}^+\setminus B$, this is justified by Cor. 6.6 of [@bib:gm2] (note a misprint in the formulation of Cor. 6.6 in [@bib:gm2]: it should be $\alpha\in {\mathbf{R}}^+\setminus B$). In the case $\alpha\in B$, $rk(x_0)=p$, it follows from Proposition \[bad\_p\]. Finally, by Proposition \[prop:hit:zero\], the first eigenvalue $\lambda_1(t)$ becomes strictly negative, so $(x_0,\alpha/2)\not\in W$.
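A naive Euler discretization of the particle system (\[eq:eigenvalues:SDe\]) can be sketched as follows; the starting configuration, step size, and seed are our own illustrative choices, and the scheme is only stable while the particles stay well separated:

```python
import numpy as np

def besq_particles(lam0, alpha, dt, n_steps, seed=0):
    # Euler scheme for
    #   d lam_i = 2 sqrt(|lam_i|) dB_i
    #             + (alpha + sum_{k != i} (|lam_i| + |lam_k|) / (lam_i - lam_k)) dt
    rng = np.random.default_rng(seed)
    lam = np.array(lam0, dtype=float)
    p = lam.size
    for _ in range(n_steps):
        drift = np.full(p, float(alpha))
        for i in range(p):
            for k in range(p):
                if k != i:
                    drift[i] += (abs(lam[i]) + abs(lam[k])) / (lam[i] - lam[k])
        lam = (lam
               + 2.0 * np.sqrt(np.abs(lam)) * rng.standard_normal(p) * np.sqrt(dt)
               + drift * dt)
    return np.sort(lam)

# start from well-separated positive particles to keep the crude scheme stable
lam = besq_particles([1.0, 4.0, 9.0], alpha=0.5, dt=1e-5, n_steps=1000)
```

The interaction term is the repulsion responsible for the non-colliding property; an Euler scheme like this one breaks down near collisions, which is why the starting points are kept far apart in the sketch.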
By Propositions \[prop:polyn\] and \[rank\_p\] we conclude that the Mayerhofer Conjecture is true. As a consequence, we obtain the following corollary. Let $x_0\in\bar {\mathcal S}_p^+$. The SDE (\[eq:Wishart:SDe\]) has a solution $X_t\in \bar {\mathcal S}_p^+$ if and only if $$(\alpha\in [{p-1},\infty), x_0\in \bar {\mathcal S}_p^+) \ {\rm or}\ (\alpha \in B, rk(x_0)\le \alpha).$$ [**Remark**]{}. In Proposition \[prop:polyn\](i) we gave a stochastic proof of the characterization of the classical “central” Wallach set. Observe that another stochastic proof of the characterization of the central Wallach set may be given by comparison methods. Indeed, it follows from Proposition \[prop:hit:zero\] that, if $\alpha<p-1$, $\alpha\not\in B$ and $x_0=0$, then the first eigenvalue $\lambda_1(t)$ becomes strictly negative, so $\alpha/2\not\in W_0$. [00]{} M. F. Bru, *Wishart processes*. J. Theor. Prob. 4 (1991), 725–751. J. Faraut, A. Korányi, *Analysis on symmetric cones*, Oxford Mathematical Monographs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1994. A. Göing-Jaeschke, M. Yor, *A survey and some generalizations of Bessel processes.* Bernoulli 9 (2003), no. 2, 313-349. P. Graczyk, J. Małecki, *Multidimensional Yamada-Watanabe theorem and its applications to particle systems*, Journal of Mathematical Physics 54 (2013), 021503. P. Graczyk, J. Małecki, *Strong solutions of non-colliding particle systems*, Electron. J. Probab. 19 (2014), no. 119, 1-21. M. Katori and H. Tanemura, *Noncolliding Squared Bessel processes*, J. Stat. Phys. 142 (2011), 592-615. J.F. Le Gall, *Applications du temps local aux équations différentielles stochastiques unidimensionnelles*. Séminaire de Probabilités de Strasbourg, 17 (1983), 15-31. G. Letac, H. Massam, *The noncentral Wishart as an exponential family, and its moments.* J. Multivariate Anal. 99 (2008), no. 7, 1393-1417. G. Letac, H. Massam, *Existence and non-existence of the non-central Wishart distributions*, arXiv:1108.2849. E.
Mayerhofer, *Stochastic Analysis Methods in Wishart Theory II*, in: Modern Methods of Multivariate Statistics, P. Graczyk, A. Hassairi Eds., Travaux en Cours 82, Hermann, Paris, 2014. E. Mayerhofer, *On the existence of non-central Wishart distributions,* Journal of Multivariate Analysis 114(2013), p. 448-456. D. Revuz and M. Yor. *Continuous Martingales and Brownian Motion*. Springer, New York, 1999. [^1]: Jacek Małecki was supported by the National Science Centre (Poland) grant no. 2013/11/D/ST1/02622.
--- abstract: 'U(1) gauge fields are decomposed into a monopole and photon part across the phase transition from the confinement to the Coulomb phase. We analyze the leading Lyapunov exponents of such gauge field configurations on the lattice which are initialized by quantum Monte Carlo simulations. We observe that the monopole field carries the same Lyapunov exponent as the original U(1) field. Evidence is found that monopole fields stay chaotic in the continuum whereas the photon fields are regular.' address: 'Atominstitut, Technische Universität Wien, A-1040 Vienna, Austria' author: - 'Harald Markum, Rainer Pullirsch, Wolfgang Sakuler' title: 'Lyapunov exponents in Minkowskian U(1) gauge theory [^1]' --- Monopole and photon part of U(1) ================================ We begin with a $4d$ U(1) gauge theory described by the Euclidean action $ S \lbrace U_l \rbrace = \beta \: \sum_p (1 - \cos \theta_p )$, where $U_l = U_{x,\mu} = \exp (i\theta_{x,\mu}) $ and $ \theta_p = \theta_{x,\mu} + \theta_{x+\hat{\mu},\nu} - \theta_{x+\hat{\nu},\mu} - \theta_{x,\nu}\ . $ We are interested in the relationship between monopoles and classical chaos across the phase transition at $\beta_c \approx 1.01$. Following Ref. [@StWe92], we have factorized our gauge configurations into monopole and photon fields. The U(1) plaquette angles $\theta_{x,\mu\nu}$ are decomposed into the “physical” electromagnetic flux through the plaquette $\bar \theta_{x,\mu\nu}$ and a number $m_{x,\mu\nu}$ of Dirac strings through the plaquette $$\label{Dirac_string_def} \theta_{x,\mu\nu} = \bar \theta_{x,\mu\nu} + 2\pi\,m_{x,\mu\nu}\ ,$$ where $\bar \theta_{x,\mu\nu}\in (-\pi,+\pi]$ and a plaquette with $m_{x,\mu\nu} \ne 0$ is called a Dirac plaquette. Classical chaotic dynamics from quantum Monte Carlo initial states ================================================================== Chaotic dynamics in general is characterized by the spectrum of Lyapunov exponents.
These exponents, if they are positive, reflect an exponential divergence of initially adjacent configurations. In the case of symmetries inherent in the Hamiltonian of the system there are corresponding zero values of these exponents. Finally, negative exponents belong to irrelevant directions in the phase space: perturbation components in these directions die out exponentially. Pure gauge fields on the lattice show a characteristic Lyapunov spectrum consisting of one third of each kind of exponents [@BOOK]. Assuming this general structure of the Lyapunov spectrum, we presently investigate only its magnitude, namely the maximal value of the Lyapunov exponent, $L_{{\rm max}}$. The general definition of the Lyapunov exponent is based on a distance measure $d(t)$ in phase space, $$L := \lim_{t\rightarrow\infty} \lim_{d(0)\rightarrow 0} \frac{1}{t} \ln \frac{d(t)}{d(0)}.$$ In the case of conservative dynamics the sum of all Lyapunov exponents is zero according to Liouville’s theorem, $\sum L_i = 0$. We utilize the gauge invariant distance measure consisting of the local differences of energy densities between two $3d$ field configurations on the lattice: $$d : = \frac{1}{N_P} \sum_P\nolimits \, \left| {\rm tr} U_P - {\rm tr} U'_P \right|.$$ Here the symbol $\sum_P$ stands for the sum over all $N_P$ plaquettes, so this distance is bounded in the interval $(0,2N)$ for the group SU(N). $U_P$ and $U'_P$ are the familiar plaquette variables, constructed from the basic link variables $U_{x,i}$, $$U_{x,i} = \exp \left( aA_{x,i}^cT^c \right)\: ,$$ located on lattice links pointing from the position $x=(x_1,x_2,x_3)$ to $x+ae_i$. The generators of the group are $T^c = -ig\tau^c/2$ with $\tau^c$ being the Pauli matrices in the case of SU(2), and $A_{x,i}^c$ is the vector potential. The elementary plaquette variable is constructed for a plaquette with a corner at $x$ and lying in the $ij$-plane as $U_{x,ij} = U_{x,i} U_{x+i,j} U^{\dag}_{x+j,i} U^{\dag}_{x,j}$.
It is related to the magnetic field strength $B_{x,k}^c$: $$U_{x,ij} = \exp \left( \varepsilon_{ijk} a B_{x,k}^c T^c \right).$$ The electric field strength $E_{x,i}^c$ is related to the canonically conjugate momentum $P_{x,i} = \dot{U}_{x,i}$ via $$E^c_{x,i} = \frac{2a}{g^3} {\rm tr} \left( T^c \dot{U}_{x,i} U^{\dag}_{x,i} \right).$$ The Hamiltonian of the lattice gauge field system can be cast into the form $$H = \sum \left[ \frac{1}{2} \langle P, P \rangle \, + \, 1 - \frac{1}{4} \langle U, V \rangle \right].$$ Here the scalar product stands for $\langle A, B \rangle = \frac{1}{2} {\rm tr} (A B^{\dag} )$. The staple variable $V$ is a sum of triple products of elementary link variables closing a plaquette with the chosen link $U$. This way the Hamiltonian is formally written as a sum over link contributions, and $V$ plays the role of the classical force acting on the link variable $U$. We prepare the initial field configurations from a standard four dimensional Euclidean Monte Carlo program on a $12^3\times 4$ lattice, varying the inverse gauge coupling $\beta$ [@SU2]. We relate such four dimensional Euclidean lattice field configurations to Minkowskian momenta and fields for the three dimensional Hamiltonian simulation by identifying a fixed time slice of the four dimensional lattice. Chaos, confinement and continuum ================================= We start the presentation of our results with a characteristic example of the time evolution of the distance between initially adjacent configurations. An initial state prepared by a standard four dimensional Monte Carlo simulation is evolved according to the classical Hamiltonian dynamics in real time. Afterwards this initial state is rotated locally by group elements which are chosen randomly near the unity. The time evolution of this slightly rotated configuration is then pursued, and finally the distance between these two evolutions is calculated at the corresponding times.
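The procedure just described — evolving two initially adjacent states and monitoring the growth of their distance — can be illustrated on a one-dimensional toy system in place of the lattice Hamiltonian; the logistic map below is our own stand-in, chosen only to show how the leading Lyapunov exponent is read off from the linear rise of $\ln d(t)$:

```python
import math

def logistic(x, r=4.0):
    # toy chaotic map standing in for the classical Hamiltonian evolution
    return r * x * (1.0 - x)

# two initially adjacent "configurations"
x, xp = 0.2, 0.2 + 1e-12
log_d = []
for _ in range(30):
    log_d.append(math.log(abs(xp - x)))
    x, xp = logistic(x), logistic(xp)

# the slope of ln d(t) in the exponential-rise window estimates L_max
slope = (log_d[-1] - log_d[0]) / (len(log_d) - 1)
```

For the logistic map at $r=4$ the Lyapunov exponent is known to be $\ln 2$; on the lattice the same kind of slope is extracted from the gauge invariant distance $d(t)$ before the saturation sets in.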
A typical exponential rise of this distance followed by a saturation can be inspected in Fig. \[Fig2\] for an example of U(1) gauge theory in the confinement phase and in the Coulomb phase. While the saturation is an artifact of the compact distance measure of the lattice, the exponential rise (the linear rise of the logarithm) can be used for the determination of the leading Lyapunov exponent. The left plot exhibits that in the confinement phase the original field and its monopole part have similar Lyapunov exponents whereas the photon part has a smaller $L_{max}$. The right plot in the Coulomb phase suggests that all slopes, and consequently the Lyapunov exponents of all fields, decrease substantially. The main result of the present study is the dependence of the leading Lyapunov exponent $L_{{\rm max}}$ on the inverse coupling strength $\beta$, displayed in Fig. \[Fig3\]. As expected, the strong coupling phase is more chaotic. The transition reflects the critical coupling to the Coulomb phase. The plot shows that the monopole fields carry Lyapunov exponents of nearly the same size as the full U(1) fields. The photon fields yield a non-vanishing value in the confinement phase, ascending toward $\beta=0$ for randomized fields, which indicates that the decomposition works well only around the phase transition. An interesting result concerning the continuum limit can be viewed in Fig. \[Fig4\], which shows the energy dependence of the Lyapunov exponents for the U(1) theory and its components. One observes an approximately linear relation for the monopole part, while a quadratic relation is suggested for the photon part in the weak coupling regime. From scaling arguments one expects a functional relationship between the Lyapunov exponent and the energy [@BOOK; @SCALING] $$L(a) \propto a^{k-1} E^{k}(a) , \label{scaling}$$ with the exponent $k$ being crucial for the continuum limit of the classical field theory.
A value of $k < 1$ leads to a divergent Lyapunov exponent, while $k > 1$ yields a vanishing $L$ in the continuum. The case $k = 1$ is special, leading to a finite non-zero Lyapunov exponent. Our analysis of the scaling relation (\[scaling\]) gives evidence that the classical compact U(1) lattice gauge theory, and especially the photon field, have $k \approx 2$ and, with $L(a) \to 0$, a regular continuum theory. The monopole field signals $k \approx 1$ and stays chaotic approaching the continuum. [99]{} J.D. Stack and R.J. Wensley, Nucl. Phys. B371 (1992) 597; T. Suzuki, S. Kitahara, T. Okude, F. Shoji, K. Moroda, and O. Miyamura, Nucl. Phys. B (Proc. Suppl.) 47 (1996) 374. T.S. Biró, S.G. Matinyan, and B. Müller: Chaos and Gauge Field Theory, World Scientific, Singapore, 1995. T.S. Biró, M. Feurstein, and H. Markum, APH Heavy Ion Physics 7 (1998) 235; T.S. Biró, N. Hörmann, H. Markum, and R. Pullirsch, Nucl. Phys. B (Proc. Suppl.) 86 (2000) 403; H. Markum, R. Pullirsch, and W. Sakuler, hep-lat/0201001; hep-lat/0205003. L. Casetti, R. Gatto, and M. Pettini, J. Phys. A32 (1999) 3055; H.B. Nielsen, H.H. Rugh, and S.E. Rugh, ICHEP96 1603, hep-th/9611128; B. Müller, chao-dyn/9607001; H.B. Nielsen, H.H. Rugh, and S.E. Rugh, chao-dyn/9605013. [^1]: Support by FWF P14435-TPH and computer code for Lyapunov exponents by T.S. Biró are gratefully acknowledged.
--- abstract: 'Frequentist and Bayesian methods differ in many aspects, but share some basic optimal properties. In real-life classification and regression problems, situations exist in which a model based on one of the methods is preferable based on some subjective criterion. Nonparametric classification and regression techniques, such as decision trees and neural networks, have frequentist (classification and regression trees (CART) and artificial neural networks) as well as Bayesian (Bayesian CART and Bayesian neural networks) approaches to learning from data. In this work, we present two hybrid models combining the Bayesian and frequentist versions of CART and neural networks, which we call the Bayesian neural tree (BNT) models. Both models exploit the architecture of decision trees and have fewer parameters to tune than advanced neural networks. Such models can simultaneously perform feature selection and prediction, are highly flexible, and generalize well in settings with a limited number of training observations. We study the consistency of the proposed models, and derive the optimal value of an important model parameter. We also provide illustrative examples using a wide variety of real-life regression data sets.' author: - 'Tanujit Chakraborty[^1]' - Gauri Kamat - Ashis Kumar Chakraborty bibliography: - 'bibliography.bib' title: '**Bayesian Neural Tree Models for Nonparametric Regression**' --- [*Keywords:* ]{}Decision tree; Bayesian neural network; hybrid; nonparametric; Bayesian neural tree. Introduction ============ Methodologies in nonparametric regression employ either a frequentist or a Bayesian approach to learning from data. The choice between the two paradigms is often philosophical, and based on subjective judgements. Two models, namely decision trees and neural networks, have primarily been used in the frequentist setting, but have robust Bayesian counterparts.
Classification and regression trees (CART) were introduced by Breiman et al. [@breimanetal1984] for flexibly modeling the conditional distribution of an outcome variable given the predictors. For a data set, a tree is grown by sequentially splitting its internal nodes, and then pruning the grown tree back to avoid overfitting [@loh2011]. The splitting rule for each node is based on the minimization of the mean squared error (MSE) in regression, and the Gini index in classification. The Bayesian approach to finding a ‘good’ tree model entails specification of a prior distribution and stochastic search [@chipman1998; @chipman2002bayesian]. The fundamental idea behind Bayesian CART (BCART) is to have the prior induce a posterior distribution that can guide a (posterior) stochastic search towards a promising tree model [@chipman2002bayesian]. On the other hand, an artificial neural network (ANN) is an interconnected gathering of artificial neurons organized in layers [@horniketal1989]. A standard ANN model has three layers of nodes, namely input, hidden, and output layers, where nodes are neurons that use a nonlinear activation function (except for the input nodes). A backpropagation gradient descent algorithm is used to compare the network outputs with the actual outputs [@rumelhardtetal1988]. If an error exists, it is backpropagated through the network and the weights in the network architecture are adjusted accordingly [@lecun2015deep]. An ANN, however, is often prone to overfitting when the data comprise a limited number of observations. A Bayesian treatment of an ANN offers a practical solution to this problem by naturally allowing for regularization [@mackay1992; @neal2012]. A Bayesian neural network (BNN) can also deal with the issue of model complexity, e.g., by selecting the number of hidden neurons in the model.
In particular, a BNN treats the network weights as random and obtains a posterior distribution over them [@barberetal1998; @kendall2017uncertainties]. Although CART, BCART, ANN, and BNN individually perform well, they exhibit certain drawbacks. Tree-based models may overfit the training data, or get stuck in local minima of the decision boundaries. Additionally, the training of neural networks suffers considerably in a limited-data setup. Thus, a hybrid (or ensemble) formulation of trees and neural networks can be used to leverage their individual strengths and overcome their individual limitations. Several such hybrid models blending CART and ANNs have been discussed in the literature [@utgoff1989; @sethi1990; @sirat1990neural; @kijsirikuletal2001; @michelonietal2012; @vanli2019nonlinear; @chakraborty2019radial; @chakraborty2019; @chakraborty2019novel], and have been useful for improving the prediction accuracy of the individual models. These hybrid models, however, only consider frequentist implementations of their components. Some other works have explored hybrid frequentist-Bayesian models in the context of parametric inference, hypothesis testing, and other inferential problems [@yuan2009bayesian; @bayarrietal2004; @bickel2015blending]. However, we are not aware of any hybrid algorithms blending frequentist and Bayesian methods for nonparametric regression. Motivated by this, we propose two novel models, called the Bayesian neural tree (BNT) models, for feature-selection-cum-prediction purposes. The first model, which we call the BNT-1 model, implements a tree model in the frequentist setting and a neural network model in the Bayesian setting. The second model, which we call the BNT-2 model, implements a tree model in the Bayesian setting and a neural network model in the frequentist setting.
Both models utilize the built-in feature selection mechanisms of CART and BCART, along with the accuracy and flexibility of ANNs and BNNs, particularly in limited-data-size settings. They harness the architecture of their component models, have fewer tuning parameters, and are easily interpretable. We restrict our attention to regression tasks, although the proposed models can also be used for classification. Further, we prove the statistical consistency of the models, which gives a theoretical guarantee of their robustness. Finally, we explore the performance of the BNT models using several standard data sets. The remainder of this article is organized as follows. Section \[sec2\] discusses the proposed BNT models. Section \[sec3\] explores the statistical properties of the BNT models. The empirical performance of the models using real-life data sets is addressed in Section \[sec4\]. Section \[sec5\] concludes the paper with a discussion on the future scope of this work. Formulation of the BNT models \[sec2\] ====================================== We begin by establishing notation. We assume that models are trained on $n$ observations, and that there are $d$ predictor variables. For data point $i$, where $1 \leq i \leq n$, let $Y_i$ denote the response variable, $\overline Y_i$ denote its mean value, and $\hat Y_i$ denote the final prediction obtained from a model. Let $X_i = (X_{i1},\dots,X_{id})^{'}$ denote the input vector for the $i^{th}$ data point, where $1 \leq i \leq n$. We denote the training data as $L_n = \{Y_i,X_i\}_{i=1}^{n}$. In what follows, we omit the subscript $i$ for simplicity of notation. Overview of constituent models ------------------------------ ### CART and BCART A CART model consecutively divides the predictor space into multiple regions. The partitioning begins at the root node, followed by splits at each internal node. 
A splitting rule (i.e., a chosen predictor and a split threshold) for a node is determined based on the minimization of the MSE in regression settings. For each node, a stopping criterion called ‘minsplit’ is defined in terms of the minimum number of observations required in the node for further splitting. A node with fewer than ‘minsplit’ samples is labeled as a terminal node. At a terminal node, the predictor space is not split any further. Every data point falls into a region defined at one of the terminal nodes, and predictions are made using the parameter local to that region. A fully grown tree is often pruned back via cross-validation or cost-complexity pruning to avoid overfitting. To illustrate the Bayesian version of CART, we assume that a tree $T$ has $b$ terminal nodes. Let the set of terminal node parameters be $\Lambda = \{\lambda_1,\dots,\lambda_b\}$. A prior is then placed on $(\Lambda,T)$ as $$\begin{aligned} P(\Lambda,T) = P(\Lambda|T) \text{ } P(T),\end{aligned}$$ where $P(T)$ is specified as a tree generating stochastic process comprising two functions, namely $P_{split}(m,T)$, the probability that a terminal node $m$ in a tree $T$ is split, and $P_{rule}(\gamma|m,T)$, the probability that a splitting rule $\gamma$ is assigned if $m$ is split [@chipman1998]. A general form of $P_{split}(m,T)$ is [@chipman1998] $$\begin{aligned} P_{split}(m,T) = \alpha (1+D_m)^{-\beta}, \label{eqsplit}\end{aligned}$$ where $D_m$ denotes the number of splits before the $m^{th}$ node, and $0<\alpha<1$ and $\beta\geq0$. Larger values of $\beta$ make the splitting of deeper nodes less probable, since the RHS in \[eqsplit\] is a decreasing function of the depth $D_m$ of a node. The prior $P_{rule}(\gamma|m,T)$ is specified so that at an internal node, each available predictor is equally likely to be chosen for a split, and for a chosen predictor, each of its observed values is equally likely to be chosen as a splitting threshold. 
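The tree-generating prior above is easy to simulate. The following minimal sketch (our own illustration, not the authors' code; the function names are ours) grows a binary tree by splitting each node at depth $D$ with probability $\alpha(1+D)^{-\beta}$, so deeper nodes split with rapidly shrinking probability:

```python
# Sketch of the Chipman et al. tree-generating prior: a node at depth D
# splits with probability alpha * (1 + D) ** (-beta). Illustrative only.
import random

def p_split(depth, alpha=0.95, beta=1.5):
    """Prior probability that a node at the given depth is split."""
    return alpha * (1.0 + depth) ** (-beta)

def sample_tree_depths(alpha=0.95, beta=1.5, rng=None, depth=0, max_depth=20):
    """Recursively grow a binary tree from the prior; return the depths of
    its terminal nodes (a list with one entry per terminal node)."""
    rng = rng or random.Random(0)
    if depth < max_depth and rng.random() < p_split(depth, alpha, beta):
        left = sample_tree_depths(alpha, beta, rng, depth + 1, max_depth)
        right = sample_tree_depths(alpha, beta, rng, depth + 1, max_depth)
        return left + right
    return [depth]
```

With the default $\alpha = 0.95$, $\beta = 1.5$, the root splits with probability $0.95$ while a depth-5 node splits with probability below $0.07$, so sampled trees stay shallow, matching the prior's preference for small trees.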
$P(\Lambda|T)$ is generally specified so that the marginalization $$\begin{aligned} P(Y|T,X) = \int P(Y|X, \Lambda, T) \text{ } P(\Lambda|T) \text{ } d\Lambda\end{aligned}$$ is feasible [@chipman1998]. For a continuous $Y$, we model the values in the $m^{th}$ terminal node as a Gaussian with mean $\mu_m$ and variance $\sigma_m^2$, where $1 \leq m \leq b$. Thus, we have $\Lambda = \big\{\mu_m, \sigma^2_m\big\}_{m=1}^{b}$, with $\mu_m$ and $\sigma^2_m$ having conjugate Gaussian and Inverse-Gamma priors respectively, as in [@chipman1998; @chipman2002bayesian]. The posterior over the possible tree models $P(T|Y,X)$ is stochastically explored via a Metropolis-Hastings search algorithm. A ‘good’ tree is usually found as a tradeoff between a small number of terminal nodes $b$ and a high value of the marginal probability $P(Y|T,X)$. ### ANN and BNN {#bnnexplain} An ANN is a nonparametric model consisting of an input layer, a certain number of hidden layers, and an output layer. All inputs to the network pass through the hidden layers, after which they are mapped to the final output. Each interconnection of neurons in an ANN is associated with a weight. In frequentist settings, such weights are obtained by minimizing an error function via gradient descent. We consider an ANN with parameter vector $\theta$, which contains the network weights and a general offset (or bias) parameter. In the Bayesian setting, a zero-mean multivariate Gaussian prior is placed on $\theta$ [@mackay1992v2] as $$\begin{aligned} P(\theta) = \mathlarger{\frac{1}{\big(\frac{2\pi}{\sigma_p}\big)^{\frac{\mathlarger l}{2}}} \text{ exp} \big(-\frac{\sigma_p}{2} ||\theta||^2\big)},\end{aligned}$$ where $l$ is the length of $\theta$ and $\sigma_p$ is the prior precision. 
The likelihood is modeled as a Gaussian given by $$\begin{aligned} P(L_n|\theta) = \mathlarger{\frac{1}{\big(\frac{2\pi}{\sigma_l}\big)^{\frac{\mathlarger n}{2}}} \text{ exp} \big(-\frac{\sigma_l}{2} \displaystyle \sum_{i=1}^{n} \big( \hat Y_i-Y_i\big)^2\big)}.\end{aligned}$$ Predictions are obtained from the posterior predictive distribution $$\begin{aligned} P(Y|X, L_n) = \int_{\theta} P(Y|X, \theta) \text{ } P(\theta|L_n) \text{ } d\theta. \label{eqpost}\end{aligned}$$ The integral in \[eqpost\] is approximated by $P(Y|X, \tilde\theta)$, where $ \tilde \theta$ is obtained by locally minimizing $$\begin{aligned} E = \mathlarger{\frac{\sigma_l}{2} \displaystyle \sum_{i=1}^{n} \big( \hat Y_i-Y_i\big)^2 + \frac{\sigma_p}{2} ||\theta||^2}. \label{eqparts}\end{aligned}$$ The first term in the RHS of \[eqparts\] corresponds to the error function that is minimized in frequentist settings. The second term corresponds to a regularization term that penalizes larger values in $\theta$, and hence restrains overfitting. A BNN can also have a variable architecture, i.e., the number of hidden nodes can be assigned a Geometric prior, which enables one to place a lower probability on larger networks (see [@insua1998feedforward]). Proposed models --------------- We now describe the working principles of the proposed BNT models. Informally, each BNT model consists of a Bayesian (frequentist) implementation of a tree-based component for feature selection purposes, and a frequentist (Bayesian) implementation of a neural network component for prediction purposes (see Figure \[figmodRWNGT\]). Such hybridizations blending trees and neural networks in fully frequentist settings were first proposed and theoretically justified in [@chakraborty2019radial; @chakraborty2019; @chakraborty2019novel]. In this work, we extend those approaches, but consider frequentist as well as Bayesian versions of the component models. In theory, both BNT models are asymptotically consistent, as we prove in Section \[sec3\]. 
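The two-stage design just outlined can be sketched in a few lines. In the following toy implementation (ours, for illustration only), a depth-1 regression stump stands in for the tree stage, and an $L_2$-regularized linear model, which shares the penalty structure of the objective $E$ above, stands in for the regularized network stage; the point is how the selected feature and the stage-one prediction are passed on as inputs to the second stage:

```python
# Hypothetical sketch of the tree-then-network pipeline; not the authors'
# implementation. Stage one: best single-feature split by MSE (a stand-in
# for CART). Stage two: ridge regression on the selected feature plus the
# stage-one prediction (a stand-in for the regularized neural network).
import numpy as np

def fit_stump(X, y):
    """Return (feature index, threshold, left mean, right mean) minimizing SSE."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:          # exclude max so both sides are nonempty
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, t, left.mean(), right.mean())
    return best[1:]

def stump_predict(stump, X):
    j, t, lo, hi = stump
    return np.where(X[:, j] <= t, lo, hi)

def bnt_sketch(X, y, lam=0.1):
    """Two-stage pipeline: feature selection, then augmented ridge regression."""
    stump = fit_stump(X, y)
    j = stump[0]                                   # the 'selected' feature
    Z = np.column_stack([np.ones(len(y)), X[:, j], stump_predict(stump, X)])
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
    return j, w, Z @ w
```

Feeding the stage-one prediction into the second stage as an extra input is exactly the augmentation step used by both BNT variants; only the statistical treatment (frequentist versus Bayesian) of each stage differs.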
### BNT-1 model The BNT-1 model comprises two stages. In the first stage, a classical CART model is fit to the data, taking all $d$ predictors. The CART model implicitly selects a feature at each internal split (based on maximum reduction in the MSE). Thus, the features used to construct the CART model can be considered as ‘important’ features in the data. We record these features, as well as the predictions obtained from the CART model. In the second stage, we construct a BNN with one hidden layer, where the input variables are the selected features from CART plus the prediction results from stage one. We use a Gaussian prior for the network weights and also model the data likelihood to be Gaussian. The prior for the number of hidden neurons ($k$) is taken to be a Geometric distribution with probability of success $p$. As illustrated in Section \[bnnexplain\], the BNN is naturally regularized through its implementation, hence making overfitting less likely. The final set of predictions is obtained after fitting the BNN model to the data. Thus, the proposed BNT-1 model utilizes the intrinsic feature selection ability of CART in the first stage, and trains a BNN model in the second stage using the selected features and predicted values from CART. This improves the accuracy of the individual models, as using the CART output as a feature in the BNN adds non-redundant information. We present a formal workflow of the BNT-1 model below. Fit a CART model to $L_n$ with a specified ‘minsplit’ value. - Record $S \subseteq \{X_1,\dots,X_d\}$, the set of selected features from CART. - Record $\hat Y_{cart}$, the predictions from CART. - Construct $S^{'} = \{S,\hat Y_{cart}\}$, the complete set of features for the BNN model. Fit a BNN model with $k$ hidden neurons, where $k\sim \text{Geometric }(p)$, and with input feature set $S^{'}$. - Record $\hat Y$, the final set of predictions from the BNN. ![An overview of Bayesian neural tree models. 
A CART (BCART) model is at the top and its corresponding BNN (ANN) model at the bottom. ${\it OP}$ denotes the tree (CART/BCART) output.[]{data-label="figmodRWNGT"}](1.eps "fig:") ![An overview of Bayesian neural tree models. A CART (BCART) model is at the top and its corresponding BNN (ANN) model at the bottom. ${\it OP}$ denotes the tree (CART/BCART) output.[]{data-label="figmodRWNGT"}](2.eps "fig:") ### BNT-2 model The BNT-2 model also follows a two-step pipeline. A BCART model is fit to the data in the first stage, with the best fitting tree found via posterior stochastic search. For feature selection in the context of BCART, Bleich et al. [@bleich2014variable] illustrate three different schemes based on [*variable inclusion proportions*]{}, or the proportion of times a predictor variable is used for a split within each posterior sample. The three schemes differ in thresholding the inclusion proportions, and are called ‘local’, ‘global max’ and ‘global SE’ procedures. Any of the procedures can be utilized for feature selection based on the data and prediction problem at hand. In this work, we use the local thresholding procedure. We thus record the important features and predictions from BCART, and use these as inputs to a one-hidden-layer ANN in stage two. One hidden layer in the ANN suffices, due to the incorporation of the selected features and predicted outputs from BCART. Using a single hidden layer also reduces the overall complexity of the model and the risk of overfitting in small and medium-sized data sets [@devroye2013probabilistic]. The optimal choice for the number of hidden neurons ($k$) for the ANN is derived in Proposition \[optimalnumberann\] in Section \[secoptimalval\], and is given as $\sqrt{\frac{n}{d_{m}log(n)}}$, where $d_m$ is the dimension of the input feature space of the ANN, and $n$ is the training sample size. The final set of predictions is obtained after fitting the ANN model to the data. The formal algorithm is as follows. 
Fit a BCART model to the data via a posterior stochastic search over the possible tree models. - Record $S \subseteq \{X_1,\dots,X_d\}$, the set of selected features obtained using a thresholding procedure. - Record $\hat Y_{bcart}$, the prediction from BCART. - Construct $S^{'} = \{S,\hat Y_{bcart}\}$, the complete set of features for the ANN model. Denote the dimension of $S^{'}$ as $d_m$. Fit a one-hidden-layer ANN model with input feature set $S^{'}$, and with number of hidden neurons $k = \sqrt{\frac{n}{d_{m}log(n)}}$. - Record $\hat Y$, the final set of predictions from the ANN. Statistical Properties of the BNT models \[sec3\] ================================================= From the results on the consistency of multivariate histogram-based regression estimates on data-dependent partitions [@nobel1996histogram; @lugosi1996consistency], and that of regression estimates realized by an ANN [@lugosi1995nonparametric; @devroye2013probabilistic], we know that under certain conditions, both nonparametric models converge to the true underlying regression function. In Bayesian settings, posterior concentration of the BCART model [@rockova2017posterior], and posterior consistency of the BNN model [@lee2000consistency; @lee2004bayesian] have been previously explored. We use these results to prove the theoretical consistency of the BNT models under certain conditions. We also find the optimal value of the number of hidden nodes in the BNT-2 model in Subsection \[secoptimalval\]. Consistency of the BNT-1 Model ------------------------------ Let $\mathbb{X}=(X_{1},X_{2},...,X_{d})$ be the space of all possible values of $d$ features, and let $\mathbb{Y} = (Y_1,\dots,Y_n)^{'}$ be the response vector, where each $Y_i$ takes values in $[-K,K]$, and $K \in \mathbb{R}$. A regression tree (RT) $f : \mathbb{R}^{d} \rightarrow \mathbb{R}$ is defined by assigning a number to each cell of a tree-structured partition. 
We seek to estimate a regression function $r(x)= E(Y | X = x) \in [-K,K]$ based on $n$ training samples $L_{n}=\{(X_{1},Y_{1}), (X_{2},Y_{2}),...,(X_{n},Y_{n})\}$. The regression function $r(x)$ minimizes the predictive risk $J(f)=E\big|f(X) - Y\big|^2$ over all functions $f: \mathbb{R}^d \rightarrow \mathbb{R}$. Since the distribution of $(\mathbb{X},\mathbb{Y})$ is not known a priori, in practice we instead find an estimate $\hat{f}$ that minimizes the empirical risk $$J_n(f) = \frac{1}{n}\sum_{i=1}^{n}\Big(f(X_i)-Y_{i}\Big)^{2}$$ over a suitable class of regression estimates. We let $\Omega=\{\omega_{1},\omega_{2},...,\omega_{k}\}$ be a partition of the feature space $\mathbb{X}$ and let $\widetilde{\Omega}$ denote one such partition. Define $(L_{n})_{\omega_{i}}=\{(X_{i},Y_{i})\in L_{n}: X_{i}\in \omega_{i}, Y_{i}\in [-K,K]\}$ to be the subset of $L_{n}$ induced by $\omega_{i}$ and let $(L_{n})_{\widetilde{\Omega}}$ denote the partition of $L_{n}$ induced by $\widetilde{\Omega}$. Now define $\widehat{L}_{n}$ to be the space of all learning samples and $\mathbb{D}$ be the space of all partitioning regression functions. Then a binary partitioning rule $f:\widehat{L}_{n} \rightarrow \mathbb{D}$ is such that $f \in (\psi \circ \phi)(L_{n})$, where $\phi$ maps $L_{n}$ to some induced partition $(L_{n})_{\widetilde{\Omega}}$ and $\psi$ is an assigning rule which maps $(L_{n})_{\widetilde{\Omega}}$ to a partitioning regression function $\hat{f}$ on the partition $\widetilde{\Omega}$. Consistent estimates of $r(\cdot)$ can be achieved using an empirically optimal regression tree if the size of the tree grows with $n$ at a controlled rate. Suppose $(\mathbb{X},\mathbb{Y})$ is a random vector in $\mathbb{R}^{d} \times [-K,K]$ and $L_{n}$ is the training set of $n$ outcomes. 
Suppose further that for every $n$ and every $\omega_{i}\in \widetilde{\Omega}_{n}$, the induced subset $(L_{n})_{\omega_{i}}$ contains at least $k_{n}$ of the vectors $X_{1},X_{2},...,X_{n}$, and let $\hat{f}$ minimize the empirical risk $J_n(f)$ over all RTs $f \in (\psi \circ \phi)(k_{n})$ with $k_n$ nodes. If $k_n \rightarrow \infty$ and $k_n = o \Big(\frac{n}{log(n)}\Big)$, then $P \big|\hat{f} - r \big|^2 \rightarrow 0$ with probability 1. \[theorem100\] For the proof, one may refer to [@chakraborty2019novel Theorem 1]. The BNT-1 model uses the feature selection mechanism of the RT, and the RT output also plays an important role in designing the ensemble model: we build a one-hidden-layer BNN using the RT-selected features, together with the RT output as an additional feature in the input space of the BNN. We denote the dimension of the input feature space of the BNN model in the ensemble as $d_m \; (\leq d)$. We further assume that these covariates are fixed and have been rescaled to $[0,1]^{d_m} = \mathbb{C}^{d_m}$. Now, let the random variables $\mathbb{Z}$ and $\mathbb{Y}$ take their values from $\mathbb{C}^{d_{m}}$ and $[-K,K]$ respectively. Denote the measure of $\mathbb{Z}$ over $\mathbb{C}^{d_{m}}$ by $\mu$, and let $m:\mathbb{C}^{d_{m}}\rightarrow [-K,K]$ be a measurable function such that $m(Z)$ approximates $Y$. Given the training sequence $\xi_{n}=\{ (Z_{1},Y_{1}), (Z_{2},Y_{2}),...,(Z_{n},Y_{n}) \}$ of $n$ iid copies of ($\mathbb{Z},\mathbb{Y}$), the parameters of the neural network regression function estimators are chosen to minimize the empirical $L_{2}$ risk $\frac{1}{n} \sum_{j=1}^{n}|f(Z_{j})-Y_{j}|^{2}$. We use the logistic squasher as the sigmoid activation function in the BNN, and treat the number of hidden nodes ($k$) as a parameter in the proposed Bayesian ensemble formulation. In standard Bayesian nonparametrics, the number of hidden nodes grows with the sample size, so an arbitrarily large number of hidden nodes can be used asymptotically. 
Instead, we use the formulation of [@insua1998feedforward], treat the number of hidden nodes in the ensemble model as a parameter, and show that the joint posterior becomes consistent under certain regularity conditions. Following [@insua1998feedforward], we consider a Geometric prior for $k$. This gives better uncertainty quantification by allowing the number of hidden nodes to remain unconstrained. A major advantage of the Bayesian setting over the frequentist approach is that it allows one to use background knowledge to select a prior probability distribution for the model parameters. Moreover, predictions of future observations are made by integrating the model’s predictions with respect to the posterior parameter distribution, which is obtained by updating the prior with the data. We address this by properly defining a class of prior distributions for the neural network parameters that has sensible limits as the size of the network goes to infinity, and by implementing a Markov chain Monte Carlo algorithm on the network structure [@mackay1992v2]. We define $$Y_i = \beta_0 + \sum_{j=1}^{k}\beta_j\sigma(a_j^{T}z_i)+\epsilon_i, \label{1}$$ where $k$ is the number of hidden nodes, $\beta_j$’s are the weights of these hidden nodes, $a_j$’s are vectors of location and scale parameters, and $\epsilon_i \stackrel{iid}{\sim} \mathcal{N}(0,\sigma^{2})$. Writing (\[1\]) out componentwise yields the following equation: $$Y_i = \beta_0 + \sum_{j=1}^{k}\beta_j\sigma\bigg(a_{j0}+\sum_{h=1}^{d_m}a_{jh}z_{h}\bigg)+\epsilon_i, \label{2}$$ where $d_m$ is the number of input features. We consider the asymptotic properties of the neural network in the Bayesian setting. We show consistency of the posterior for neural networks in the Bayesian setting, which, along with Theorem \[theorem100\], ensures the consistency of the proposed BNT-1 model. Let $\lambda_i = P(k=i)$ be the prior probability that the number of hidden nodes is $i$, where of course $\sum_{i} \lambda_i = 1$. 
Let $\Pi_i$ be the prior for the parameters of the regression equation, given that $k = i$. We can then write the joint prior for all the parameters as $\sum \lambda_i\Pi_i$. Here we take $\Pi_i \stackrel{ind}{\sim} \mathcal{N}(0,\tau^{2})$ and a Geometric prior for $k$. In the sequel, we also assume that $$Y | Z = z \sim \mathcal{N}\Bigg( \beta_0 + \sum_{j=1}^{k}\frac{\beta_j}{1+\exp\Big({-a_{j0}-\sum_{h=1}^{d_m}a_{jh}z_{h}}\Big)}, 1 \Bigg).$$ Let $f_0(z,y)$ be the true density. We can define a family of Hellinger neighborhoods as $$H_\epsilon = \{f\; ; \; D_H(f,f_0)\leq \epsilon\},$$ with $D_H(f,f_0)$ as defined below: $$D_H(f,f_0) = \mathlarger{\sqrt{\int \int \Bigg(\sqrt{f(z,y)} - \sqrt{f_0(z,y)}\Bigg)^2dz dy}}.$$ Let $\mathscr{F}_{n}$ be the set of all neural networks with parameters $|a_{jh}|\leq C_n$ and $|\beta_j|\leq C_n$, where $j=1,\dots,k$ and $h=1,\dots,d_m$, and $C_n$ grows with $n$ such that $C_n \leq \text{exp } (n^{(b-a)})$ for constants $a,b$ with $0<a<b<1$, when $k\leq n^a$. The Kullback-Leibler divergence (which is not a distance metric) is defined as $$D_K(f_0,f) = E_{f_0}\Bigg[\text{log } \frac{f_0(z,y)}{f(z,y)}\Bigg].$$ For any $\gamma > 0$, we define the Kullback-Leibler neighborhood by $$K_\gamma = \{ f \; : D_K(f_0,f) \leq \gamma \}.$$ We denote the prior for $f$ by $\Pi_n(\cdot)$ and the posterior by $P\big(\cdot \; | (Z_1,Y_1), ...,(Z_n,Y_n)\big).$ We now present results on the asymptotic properties, over Hellinger neighborhoods, of the posterior distribution for the neural network component of the ensemble BNT-1 model. 
Assume that $\mathbb{Z}$ is uniformly distributed in $[0,1]^{d_m}$, $\Pi_i \stackrel{ind}{\sim} \mathcal{N}(0,\tau^{2})$, $k \sim \mbox{Geometric}$, and the following conditions hold:\ (A1) For all $i$, we have $\lambda_i > 0$;\ (A2) $B_n \uparrow \infty$, and for all $r>0$, there exist $q>1$ and $N$ such that $\sum_{i=B_n+1}^{\infty} \lambda_i < exp\big( -n^q r \big)$ for $n \geq N$;\ (A3) There exist $r_i > 0, N_i$ such that $\Pi_n(\mathscr{F}_{n}^{c}) < exp(-nr_i)$ for all $n \geq N_i$;\ (A4) For all $\gamma, v >0$, there exist $I$ and $M_i$ such that for any $i \geq I$, $\Pi_i(K_\gamma) \geq exp(-nv)$ for all $n \geq M_i$.\ Then for all $\epsilon > 0$, the posterior is asymptotically consistent for $f_0$ over Hellinger neighborhoods, i.e.,\ $P\big(H_\epsilon \; | (Z_1,Y_1), ...,(Z_n,Y_n)\big) \rightarrow 1$ in probability. (A1) Since we take a Geometric prior for $k$, it is obvious that $\lambda_i > 0$ for all $i$.\ (A2) $$\begin{aligned} & \sum_{i=B_n+1}^{\infty} \lambda_i = P(k > B_n) = \sum_{i=B_n+1}^{\infty} p(1-p)^i = (1-p)^{B_n+1} \nonumber \\ & \quad \quad \quad \quad = exp\big[(B_n+1)log(1-p)\big] \nonumber \\ & \quad \quad \quad \quad = exp\bigg[-n^q\big(-log(1-p)\big)\bigg] \quad \mbox{(Using $B_n = O(n^q)$ for $q>1$)} \nonumber \\ & \quad \quad \quad \quad \leq exp\big(-n^q.r\big) \quad \mbox{for $r > 0$ and sufficiently large $n$} \label{2star}\end{aligned}$$ (A3) We consider a Geometric prior with parameter $p$. Also let $B_n = O(n^q)$ for some $q>1$. 
For any $i$, we write $i < n^a$ for $a > 0$ and sufficiently large $n$. Let $\theta$ be the vector of all parameters (other than $k$). Then: $$\begin{aligned} & \Pi_{i}\big(\mathscr{F}_n^c\big) = \int_{\mathscr{F}_n^c}\Pi_i(\theta)d\theta \nonumber \\ & \quad \quad \quad \quad \leq \sum_{i=1}^{d_n} 2 \int_{C_n}^{\infty} \phi \bigg( \frac{\theta_i}{\tau_i} \bigg) d\theta_{i} \nonumber \\ & \quad \quad \quad \quad = d_n \bigg[ 2\tau \int_{\frac{C_n}{\tau_i}}^{\infty} \phi(\tau_i)d\tau_i \bigg] \nonumber \\ & \quad \quad \quad \quad \leq d_n \bigg[ \frac{2 \tau \phi \big(\frac{C_n}{\tau_i}\big)}{\frac{C_n}{\tau_i}} \bigg] \quad \mbox{by Mill's ratio} \nonumber \\ & \quad \quad \quad \quad = d_n \bigg[ \frac{2\tau_i^2}{C_n}.\frac{1}{\sqrt{2\pi}}.exp\bigg( - \frac{C_n^2}{2\tau_i^2} \bigg) \bigg] \nonumber \\ & \quad \quad \quad \quad \leq d_n \bigg(\tau_i^2\sqrt{\frac{2}{\pi}}\bigg). exp\bigg( -n^{b-a} - \frac{1}{2\tau_i^2}e^{2n^{b-a}} \bigg) \quad \big[ \mbox{Taking} \quad C_n=e^{n^{b-a}}, 0<a<b<1 \big] \nonumber \\ & \quad \quad \quad \quad = exp\bigg[ -n^{b-a} + \log\bigg( d_n\tau_i^2 \sqrt{\frac{2}{\pi}} \bigg) \bigg].exp \bigg( - \frac{1}{2\tau_i^2}e^{2n^{b-a}} \bigg) \nonumber \\ & \quad \quad \quad \quad \leq exp \bigg( - \frac{1}{2\tau_i^2}e^{2n^{b-a}} \bigg) \quad \big[ \mbox{Using} \quad d_n = (p+2)n^a+1 \leq (p+3)n^a \quad \mbox{for large $n$} \big] \nonumber \\ & \quad \quad \quad \quad \leq e^{-nr_i}, \quad \mbox{where} \quad e^{2n^{b-a}} > n \quad \mbox{for large $n$ and taking} \quad r_i=\frac{1}{2\tau_i^2}. 
\label{star}\end{aligned}$$ We can write $$\mathscr{G}_n^c = \bigcup_{i=0}^{\infty} \mathscr{F}_i^c,$$ where $\mathscr{F}_i$ is the set of all neural networks with $i$ nodes and with all parameters less than $C_n$ in absolute value, $C_n \leq exp(n^b), 0<b<1.$ $$\Pi(\mathscr{G}_n^c) = \sum_{i=0}^{\infty} \lambda_i \Pi_i(\mathscr{F}_i^c) \leq \sum_{i=0}^{B_n}\lambda_i \Pi_i(\mathscr{F}_i^c) + \sum_{i=B_n+1}^{\infty}\lambda_i = I_1 + I_2.$$ To handle $I_1$ and $I_2$, we use (\[star\]) and (\[2star\]): $$\begin{aligned} & I_1 \leq \sum_{i=0}^{B_n}\lambda_i exp(-nr_i) \nonumber \\ & \quad \leq exp(-nr^{*})\sum_{i=0}^{B_n}\lambda_i \quad \big( \mbox{By letting} \quad r^{*} = min\{r_0,r_1,...,r_{B_n}\} \big) \nonumber \\ & \quad \leq exp(-nr^{*}).\end{aligned}$$ Also, $I_2 \leq exp(-n^qr^{*}) \quad \mbox{for sufficiently large n.}$ For large $n, q>1, \quad \mbox{and} \quad r=r^{*}/2$, we have $$\Pi(\mathscr{G}_n^c) < exp(-nr).$$ (A4) $$\begin{aligned} & \Pi_{i}(M_\delta) = \prod_{i=1}^{\hat{d_n}} \int_{\theta_i-\delta}^{\theta_i+\delta} \frac{1}{\sqrt{2\pi \tau^2}}.exp\bigg( - \frac{u^2}{2\tau^2} \bigg)du \nonumber \\ & \quad \quad \quad \geq \prod_{i=1}^{\hat{d_n}} 2\delta \inf_{u \in [\theta_i-1, \theta_i+1]} \frac{1}{\sqrt{2\pi \tau^2}}.exp\bigg( - \frac{u^2}{2\tau^2} \bigg) \nonumber \\ & \quad \quad \quad = \prod_{i=1}^{\hat{d_n}} \delta \sqrt{\frac{2}{\pi \tau^2}}.exp\bigg( - \frac{\xi_i}{2\tau^2} \bigg) \quad \big[\xi_i = max\{ (\theta_i-1)^2, (\theta_i+1)^2 \} \big] \nonumber \\ & \quad \quad \quad \geq \bigg( \delta \sqrt{\frac{2}{\pi \tau^2}} \bigg)^{\hat{d_n}}.exp\bigg( - \frac{\hat{\xi}\hat{d_n}}{2\tau^2} \bigg) \quad \big[\hat{\xi} = \max_i\{ \xi_1, \xi_2, ... , \xi_{\hat{d_n}} \} \big] \nonumber \\ & \quad \quad \quad > e^{-nv} \quad \big[ \mbox{Using} \quad \hat{d_n} \leq (p+3)n^a \quad \mbox{and for large $n$ and for any $v$} \big]. 
\label{3star}\end{aligned}$$ Fix $\delta > 0$, and let $l$ be the number of hidden nodes required by the theorem when $g_0$ is continuous and square integrable. Using (\[3star\]) we write $$\Pi(M_\delta) = \sum_{i=0}^{\infty} \lambda_i \Pi_i(M_\delta) \geq \lambda_l \Pi_l(M_\delta) \geq \lambda_l exp(-nv^{*}).$$ For sufficiently large $n$ and for any $v^{*}$, $l$ is a constant; thus $\lambda_l$ does not depend on $n$ and is positive for the Geometric prior. Thus, $\Pi(M_\delta) \geq \exp(-nv)$ for any sufficiently large $n$. We can now use conditions (A1)-(A4) to show that $P\big(H_\epsilon \; | (Z_1,Y_1), ...,(Z_n,Y_n)\big) \rightarrow 1$ in probability. Equivalently, $P\big(H_\epsilon^{c} \; | (Z_1,Y_1), ...,(Z_n,Y_n)\big) \rightarrow 0$ in probability. Now, $$\begin{aligned} & P\big(H_\epsilon^{c} \; | (Z_1,Y_1), ...,(Z_n,Y_n)\big) = \frac{\int_{H_\epsilon^{c}}\prod_{i=1}^{n}f(z_i,y_i)d\Pi_n(f)}{\int\prod_{i=1}^{n}f(z_i,y_i)d\Pi_n(f)} \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad = \frac{\int_{H_\epsilon^{c}}R_n(f)d\Pi_n(f)}{\int R_n(f)d\Pi_n(f)}, \; \mbox{where} \; R_n(f) = \frac{\prod_{i=1}^{n}f(z_i,y_i)}{\prod_{i=1}^{n}f_0(z_i,y_i)} \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad = \frac{D_1}{D_2}.\end{aligned}$$ Using [@wong1995probability] and (A1)-(A4), we can bound the supremum of the likelihood ratios $R_n(f)$. Thus, we have $ D_1 < e^{\frac{-nt}{2}} + e^{-2nc_2\epsilon^2}, \; t,c_2 > 0. 
$\ Also, using [@lee2000consistency Lemma 5] along with (A1)-(A4), we have $ D_2 > e^{-n\delta} $ for large $n$, except on a set whose probability approaches $0$.\ Finally, we have $$\begin{aligned} & P\big(H_\epsilon^{c} \; | (Z_1,Y_1), ...,(Z_n,Y_n)\big) < \frac{e^{\frac{-nt}{2}} + e^{-2nc_2\epsilon^2}}{e^{-n\delta}} \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \leq e^{-n\epsilon^{'}} + e^{-n\epsilon^2\epsilon^{'}}, \; \mbox{where} \; \epsilon^{'} > 0 \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \rightarrow 0 \; \mbox{for sufficiently large $n$}.\end{aligned}$$ Consistency and optimal value of a parameter for the BNT-2 model {#secoptimalval} ---------------------------------------------------------------- We consider the nonparametric regression model $$Y_i=f_0(X_i) + \epsilon_i, \; \; \epsilon_i \stackrel{iid}{\sim} \mathcal{N}(0,1),$$ where the output variable $\mathbb{Y}=(Y_1,Y_2,...,Y_n)^{'}$ depends on a set of $d$ potential covariates $\mathbb{X}= (X_{i1},X_{i2},...,X_{id})^{'}$, $1 \leq i \leq n$. We further assume that these covariates are fixed and have been rescaled such that every $X_{ij} \in [0,1]^{d}=\mathbb{C}^{d}$, $1 \leq i \leq n$ and $1 \leq j \leq d$. The true unknown response surface $f_0(X_i)$ is assumed to be smooth. In a recent work [@rockova2017posterior], it was shown that the BCART model achieves near-minimax-rate optimal performance when approximating a single smooth function. Thus, optimal behavior of a BCART model is guaranteed, and even under a suitably complex prior on the number of terminal nodes, a BCART model is reluctant to overfit. In the BNT-2 model, we build a BCART model in the first stage, and perform variable (feature) selection as in [@bleich2014variable], which ensures that we can obtain a consistent BCART model under the assumptions of Theorem 4.1 of [@rockova2017posterior]. 
The selected important features along with the BCART outputs are trained using an ANN model with one hidden layer. We denote the dimension of the input feature space of this ANN model as $d_m \; (\leq d)$. The rescaled feature space is denoted by $\mathbb{C}^{d_m} = [0,1]^{d_m}$. Using one hidden layer in the ANN makes the BNT-2 model less complex and speeds up its implementation. Moreover, there is no theoretical gain in considering more than one hidden layer in an ANN [@devroye2013probabilistic]. Below, we establish sufficient conditions for consistency of the BNT-2 model, along with the optimal value of the number of hidden nodes $k$. Let the rescaled set of features of the ANN be $\mathbb{Z}$. $\mathbb{Z}$ and $\mathbb{Y}$ take values from $\mathbb{C}^{d_{m}}$ and $[-K,K]$, respectively. We denote the measure of $\mathbb{Z}$ over $\mathbb{C}^{d_{m}}$ by $\mu$, and let $m:\mathbb{C}^{d_{m}}\rightarrow [-K,K]$ be a measurable function that approximates $\mathbb{Y}$. Given the training sequence $(\mathbb{Z}, \mathbb{Y})$ of $n$ i.i.d. copies, the neural network hyperparameters are chosen by empirical risk minimization. We consider the class of neural networks having a logistic sigmoidal activation function in the hidden layer and $k$ hidden neurons, with bounded output weights $$\mathscr{F}_{n,k}=\Bigg\{ \sum_{i=1}^{k}c_{i}\sigma(a_{i}^{T}z+b_{i})+c_{0} : k \in \mathbb{N}, a_{i} \in \mathbb{R}^{d_{m}}, b_{i},c_{i} \in \mathbb{R}, \sum_{i=0}^{k}|c_{i}|\leq \beta_{n} \Bigg\},$$ and obtain $m_{n} \in \mathscr{F}_{n,k}$ satisfying $$\frac{1}{n} \sum_{i=1}^{n}|m_{n}(Z_{i})-Y_{i}|^{2} \leq \frac{1}{n} \sum_{i=1}^{n}|f(Z_{i})-Y_{i}|^{2}, \; \mbox{if} \; f \in \mathscr{F}_{n,k},$$ where $m_{n}$ is a function that minimizes the empirical $L_{2}$ risk in $\mathscr{F}_{n,k}$. The theorem below, due to [@lugosi1995nonparametric Theorem 3], states the sufficient conditions for the consistency of the neural network. 
Consider an ANN with a logistic sigmoidal activation function having one hidden layer with $k \; (>1)$ hidden nodes. If $k$ and $\beta_{n}$ are chosen to satisfy $$k \rightarrow \infty, \; \beta_{n}\rightarrow \infty, \; \frac{k\beta_{n}^{4}log(k\beta_{n}^{2})}{n} \rightarrow 0$$ as $n \rightarrow \infty$, then the estimate is consistent for all distributions of $(\mathbb{Z}, \mathbb{Y})$ with $\mathbb{E}|\mathbb{Y}|^2 < \infty$.\[theo3\] For the proof, one may refer to [@gyorfi2006distribution Chapter 16]. Now, we obtain an upper bound on $k$ using the rate of convergence of a neural network with bounded output weights. In what follows, we assume that $m$ is Lipschitz $(\delta, C)$-smooth according to the following definition: A function $m:C^{d_m} \rightarrow [-K,K]$ is called Lipschitz $(\delta, C)$-smooth if it satisfies $$|m(z^{'})-m(z)| \leq C\|z^{'}-z\|^\delta$$ for all $\delta \in [0,1]$, $z^{'}, z \in \mathbb{C}^{d_m}$, and $C \in \mathbb{R}_{+}$. \[optimalnumberann\] Assume that $\mathbb{Z}$ is uniformly distributed in $\mathbb{C}^{d_m}$, $\mathbb{Y}$ is bounded a.s., and $m$ is Lipschitz $(\delta, C)$-smooth. Under the assumptions of Theorem \[theo3\] with fixed $d_{m}$, and $m, f \in \mathscr{F}_{n,k}$, with $f$ satisfying $\int_{C^{d_m}}f^{2}(z)\mu(dz)<\infty$, we have $k = O \bigg (\sqrt{\frac{n}{d_{m}log(n)}} \bigg )$. To prove Proposition \[optimalnumberann\], we use results from the statistical learning theory of neural networks [@gyorfi2006distribution Chapter 12]. We use the complexity regularization principle to choose the parameter $k$ in a data-dependent manner [@kohler2005adaptive; @hamers2003bound; @kohler2006nonparametric]. 
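As a quick numerical illustration of this prescription, the hidden-layer width implied by $k = \sqrt{\frac{n}{d_m \log(n)}}$ can be computed directly (the helper below and its name are ours, added only to make the growth of $k$ with $n$ concrete):

```python
# Illustrative helper for the BNT-2 hidden-layer width k = sqrt(n / (d_m log n)),
# rounded and floored at one neuron. Not part of any library.
import math

def hidden_neurons(n, d_m):
    """Number of hidden nodes for n training points and d_m input features."""
    return max(1, round(math.sqrt(n / (d_m * math.log(n)))))
```

For instance, with $d_m = 5$ input features the prescription gives $k = 5$ at $n = 1000$ and $k = 15$ at $n = 10000$, so the network grows with the sample size while the $\log(n)$ factor keeps the growth well below $\sqrt{n/d_m}$.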
The consistency result of Theorem \[theo3\] states that $$\mathbb{E}\int_{C^{d_m}}(m_{n}(Z)-m(Z))^{2}\mu(dz) \rightarrow 0 \quad \mbox{as}\quad n \rightarrow \infty.$$ Using [@gyorfi2006distribution Lemma 10.1], we can write $$\begin{aligned} \mathbb{E}\Bigg[\int_{C^{d_m}} \big| m_n(Z)-m(Z)\big|^2 \mu(\mbox{dz})\Bigg] \leq 2\mathbb{E}\Bigg[ \sup_{f \in \mathscr F_{n,k}}\bigg|\frac{1}{n}\sum_{i=1}^n\big|Y_i-f(Z_i)\big|^2 - \mathbb{E}\big|Y-f(Z)\big|^2\bigg| \Bigg] \nonumber \\ + \mathbb{E} \Bigg[\inf_{f \in \mathscr F_{n,k}} \int_{C^{d_m}}\big|f(Z)-m(Z)\big|^2\mu(\mbox{dz}) \Bigg],\label{imp}\end{aligned}$$ where $\mu$ denotes the distribution of $\mathbb{Z}$. For the consistency of the neural network model, both the *estimation error* (the first term on the RHS of (\[imp\])) and the *approximation error* (the second term on the RHS of (\[imp\])) must tend to $0$. To find the bound on $k$, we apply non-asymptotic uniform deviation inequalities and covering numbers corresponding to $\mathscr F_{n,k}$. Assuming that $Y$ is bounded a.s., as in Proposition \[optimalnumberann\], we write (\[imp\]) as $$\begin{aligned} \mathbb{E} \int_{C^{d_m}} \big| m_n(Z)-m(Z)\big|^2 \mu(\mbox{dz}) \leq 2 \min_{k \geq 1}\bigg\{pen_n(k)+\inf_{f \in \mathscr F_{n, k}}\int_{C^{d_m}}\big|f(z)-m(z)\big|^2\mu(\mbox{dz})\bigg\} +O\Big(\frac{1}{n}\Big).\label{roc}\end{aligned}$$ Let $w_1^n=(w_1, w_2, \hdots, w_n)$ be a vector of $n$ fixed points in $ \mathbb{R}^{d_m}$ and let $\mathscr H$ be a set of functions from $ \mathbb{R}^{d_m}$ to $[-K,K]$. For every $\varepsilon >0$, we let $\mathscr N(\varepsilon, \mathscr H,w_1^n)$ be the $L_1$ $\varepsilon$-covering number of $\mathscr H$ with respect to $w_1, w_2, \hdots, w_n$. 
$\mathscr N(\varepsilon, \mathscr H,w_1^n)$ is defined as the smallest integer $N$ such that there exist functions $h_1, \hdots, h_N: \mathbb{R}^{d_m}\to [-K,K]$ with the property that for every $h\in \mathscr H$, there is a $j \in \{1, \hdots, N\}$ such that $$\frac{1}{n}\sum_{i=1}^n\big|h(w_i)-h_j(w_i)\big|<\varepsilon.$$ Note that if $W_1^n = (W_1, W_2,\hdots, W_n)$ is a sequence of i.i.d. random variables, then $\mathscr N(\varepsilon, \mathscr H,W_1^n)$ is also a random variable. Now let $W=(Z,Y)$, $W_1=(Z_1,Y_1), \hdots, W_n=(Z_n,Y_n)$, and $C^{d_m} = [0,1]^{d_m}$; we define $$\begin{aligned} \mathscr H_n=\bigg\{h(z,y)& :=\big|y-f(z)\big|^2: (z,y)\in C^{d_m}\times [-K,K]\mbox{ and }f\in \mathscr F_n\bigg\}.\end{aligned}$$ The functions in $\mathscr H_n$ satisfy $0\leq h(z,y)\leq 2\beta_n^2 + 2K^2 \leq 4\beta_n^2$ (for $n$ large enough that $\beta_n \geq K$). Using Pollard’s inequality [@gyorfi2006distribution], we have, for arbitrary $\varepsilon >0$, $$\begin{aligned} & P\bigg\{ \sup_{f \in \mathscr F_{n,k}}\Big|\frac{1}{n}\sum_{i=1}^n\big|Y_i-f(Z_i)\big|^2 - E\big|Y-f(Z)\big|^2\Big|>\varepsilon\bigg\}\nonumber\\ & = P\bigg\{ \sup_{h \in \mathscr H_n}\Big|\frac{1}{n}\sum_{i=1}^nh(W_i)- E(h(W))\Big|>\varepsilon\bigg\}\nonumber\\ & \leq 8 \mathbb{E} \bigg[\mathscr N\big(\frac{\varepsilon}{8}, \mathscr H_n,W_1^n\big)\bigg]\exp\bigg({-\frac{n\varepsilon^2}{128(4\beta_n^2)^2}}\bigg).\label{zero}\end{aligned}$$ Next, we bound the covering number $ \mathscr N\big(\frac{\varepsilon}{8}, \mathscr H_n, W_1^n\big)$. Consider two functions $h_{i}(z,y)= |y-f_{i}(z)|^2$ in $\mathscr H_n$, with $f_{i} \in \mathscr F_n$, $i = 1,2$. 
We get $$\begin{aligned} &\frac{1}{n}\sum_{i=1}^n\big|h_{1}(W_i)-h_{2}(W_i)\big|\\ & =\frac{1}{n}\sum_{i=1}^n\Big|\big|Y_i-f_{1}(Z_i)\big|^2-\big|Y_i-f_{2}(Z_i)\big|^2\Big|\\ & =\frac{1}{n}\sum_{i=1}^n\big|f_{1}(Z_i)-f_{2}(Z_i)\big|\times \big|f_1(Z_i)-Y_i+f_2(Z_i)-Y_i\big|\\ & \leq \frac{4\beta_n}{n}\sum_{i=1}^n\big|f_{1}(Z_i)-f_{2}(Z_i)\big|.\end{aligned}$$ Thus, an $\frac{\mathlarger \varepsilon}{32\beta_n}$-cover $\{f_1,f_2,...,f_l\}$ of $\mathscr F_n$ on $Z_1^n$ induces an $\frac{\mathlarger \varepsilon}{8}$-cover $\{h_1,h_2,...,h_l\}$ of $\mathscr H_n$ on $W_1^n$. $$\mbox{Hence,} \; \; \label{one} \mathscr N\Big(\frac{\mathlarger \varepsilon}{8}, \mathscr H_n, W_1^n\Big)\leq \mathscr N\Big(\frac{\mathlarger \varepsilon}{32\beta_n}, \mathscr F_n, Z_1^n\Big).$$ The covering number $\mathscr N(\frac{\varepsilon}{32\beta_n}, \mathscr F_n, Z_1^n)$ can be upper bounded independently of $Z_1^n$ by extending the arguments of Theorem 16.1 of [@gyorfi2006distribution]. We now define the following classes of functions: $$G_1=\{\sigma(a^{\top} z + b): a\in \mathbb{R}^{d_m}, b\in \mathbb{R}\},$$ $$G_2=\{c\sigma(a^{\top} z + b): a\in \mathbb{R}^{d_m}, b\in \mathbb{R}, c \in [-\beta_n, \beta_n]\}.$$ For any $ \varepsilon > 0 $, $$\begin{aligned} \mathscr N(\varepsilon, G_1,Z_1^n) & \leq 3 \bigg(\frac{2e}{\varepsilon} \log\frac{3e}{\varepsilon} \bigg)^{d_{m}+2} \\ & \leq 3\Big(\frac{3e}{\varepsilon}\Big)^{2d_{m}+4}.\end{aligned}$$ Also, we get $$\begin{aligned} \mathscr N(\varepsilon, G_2,Z_1^n) & \leq \frac{4\beta_n}{\varepsilon}\mathscr N\Big(\frac{\varepsilon}{2\beta_n}, G_1,Z_1^n\Big) \\ & \leq \Big(\frac{12 e \beta_n}{\varepsilon}\Big)^{2d_{m}+5}.\end{aligned}$$ We obtain the bound on the covering number of $\mathscr F_n$: $$\begin{aligned} \mathscr N(\varepsilon, \mathscr F_{n},Z_1^n) & \leq \frac{2\beta_n}{\varepsilon}\mathscr N\Big(\frac{\varepsilon}{k+1}, G_2,Z_1^n\Big)^{k} \nonumber\\ & \leq \Big( \frac{12 e 
\beta_n(k+1)}{\varepsilon}\Big)^{(2d_{m}+5)k+1}.\label{eight}\end{aligned}$$ According to (\[eight\]), for any $Z_1^n \in \mathbb{R}^{d_m}$ we have $$\begin{aligned} \mathscr N\Big(\frac{1}{n}, \mathscr F_{n,k},Z_1^n\Big) \leq \Big(12en\beta_n(k+1)\Big)^{(2d_m + 5)k+1}. \label{final}\end{aligned}$$ Following the complexity regularization principle, we take $$\mathscr N\bigg(\frac{1}{n}, \mathscr F_{n,k}\bigg) := \sup_{Z_1^n}\mathscr N\bigg(\frac{1}{n}, \mathscr F_{n,k},Z_1^n\bigg)$$ as the upper bound on the covering number of $\mathscr F_{n,k}$, and define, for $w_{k} \geq 0$, $$pen_n(k) = \frac{\mbox{constant} \times K^2 \times \log\mathscr N\big(\frac{1}{n}, \mathscr F_{n,k}\big)+w_{k}}{n}$$ as a penalty term penalizing the complexity of $\mathscr F_{n,k}$ [@kohler2005adaptive]. Thus (\[final\]) implies that $pen_n(k)$ is of the following form, with $w_{k} = 1$ and $\beta_n < \mbox{constant} < \infty$: $$pen_n(k) = \frac{\mbox{constant} \times K^2 \times \big((2d_{m}+6)k\log(12en\beta_n)+1\big)}{n}=O\bigg(\frac{kd_{m}\log(n)}{n}\bigg).$$ The approximation error $\inf_{f \in \mathscr F_{n,k}}\int_{C^{d_m}}\big|f(z)-m(z)\big|^2\mu(\mbox{dz})$ depends on the smoothness of the regression function. By Theorem 3.4 of [@mhaskar1993approximation], for any feedforward neural network with one hidden layer satisfying the assumptions of Proposition \[optimalnumberann\], we have $$\big|f(z)-m(z)\big| \leq \bigg(\frac{1}{\sqrt{k}}\bigg)^{\frac{\delta}{d_m}}$$ for all $z \in [0,1]^{d_m}$. 
Thus we have $$\inf_{f \in \mathscr F_{n, k}}\int_{C^{d_m}}\big|f(z)-m(z)\big|^2\mu(\mbox{dz}) = O\Big( \frac{1}{k} \Big).$$ Using (\[roc\]), we obtain $$\begin{aligned} \mathbb{E} \int_{C^{d_m}} \big| m_n(Z)-m(Z)\big|^2 \mu(\mbox{dz}) \leq O\Bigg( \frac{kd_m\log(n)}{n} \Bigg) + O\Bigg( \frac{1}{k} \Bigg)\end{aligned}$$ for sufficiently large $n$.\ Balancing the penalty term $O\big(\frac{kd_m\log(n)}{n}\big)$ against the approximation error $O\big(\frac{1}{k}\big)$ yields the optimal choice $k \sim \sqrt{\frac{n}{d_m\log(n)}}$, from which the assertion follows. For practical purposes, we choose the number of hidden neurons in the BNT-2 model to be $k=\sqrt{\frac{n}{d_{m}\log(n)}}$. Experimental evaluation \[sec4\] ================================ We now present applications of the two BNT models to real-life data sets, and evaluate them against their component regression models, namely a simple CART model, a simple BCART model, a one-hidden-layer ANN, and a one-hidden-layer BNN. Data ---- We use regression data sets available in the UCI machine learning repository. These data sets have a limited number of observations and high-dimensional feature spaces. As part of the data cleaning process, we systematically eliminate all nonnumerical features and all observations with missing values. Table \[tab:data\] summarizes the characteristics of the data sets. Performance metrics {#secperfmet} ------------------- To evaluate the BNT models, we use two absolute performance measures, viz. the mean absolute error (MAE) and the root mean squared error (RMSE), one relative measure, viz. the mean absolute percentage error (MAPE), and two goodness-of-fit measures, viz. the coefficient of determination ($R^2$) and the adjusted $R^2$. The metrics are defined as follows: 1. MAE = $\mathlarger{\frac{1}{n}\sum_{i=1}^{n} |Y_i- \hat Y_i|}$, 2. MAPE = $\mathlarger{\frac{1}{n}\sum_{i=1}^{n} \Bigg|\frac{Y_i-\hat Y_i}{Y_i}\Bigg|}$, 3. RMSE = $\mathlarger{{\sqrt{\frac{1}{n}\sum_{i=1}^{n}(Y_i-\hat Y_i)^2}}}$, 4. 
$R^2$ = $\mathlarger{{1-\frac{\sum_{i=1}^{n}(Y_i-\hat Y_i)^2}{\sum_{i=1}^{n}(Y_i- \overline{Y})^2 }}}$, 5. Adjusted $R^2$ = $\mathlarger{{1-\frac{(1-R^2)(n-1)}{(n-d-1)}}}$. **Dataset** **Number of observations (n)** **Number of features (d)** ------------- -------------------------------- ---------------------------- : Summary of the datasets used to evaluate the BNT models.\[tab:data\] We note that lower values of MAE, MAPE, and RMSE, and higher values of $R^2$ and adjusted $R^2$, indicate better model performance. Implementation and results -------------------------- We shuffle the observations in each data set and split them into training and test sets in a 70:30 ratio. We carry out 10 random train-test splits and report average results across all 10 iterations. All models are fit on the training data and evaluated on the test data. Experiments are carried out using $\textsf{R}$ (version 3.6.1). We fit a CART model using the $\textsf{rpart}$ package, with the stopping parameter ‘minsplit’ set to 10% of the training sample size. To fit a simple BNN, we use the $\textsf{brnn}$ package with the number of hidden layers set to one and the number of hidden neurons set to the default value (i.e., 2). The $\textsf{brnn}$ package implements a BNN with a Gaussian prior and likelihood, as discussed in Section \[bnnexplain\]. To fit a simple, one-hidden-layer ANN, we make use of the $\textsf{neuralnet}$ package, and set the number of hidden neurons to the default value (2). A Bayesian CART model is fit using the $\textsf{bartMachine}$ package [@bleich2016bartmachine], with the number of trees set to one. For feature selection under BCART, we use local thresholding of the variable inclusion proportions, although empirical exploration shows that the results are not very sensitive to the choice of thresholding method. 
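The five evaluation metrics of Section \[secperfmet\] can be computed directly; the plain NumPy sketch below is our illustration (the paper's R experiments perform the same arithmetic), with `d` denoting the number of features as in the adjusted $R^2$ definition.

```python
import numpy as np

def regression_metrics(y, y_hat, d):
    """Compute MAE, MAPE, RMSE, R^2, and adjusted R^2 for predictions y_hat
    against observed responses y, with d the number of features."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = len(y)
    resid = y - y_hat
    mae = np.mean(np.abs(resid))
    mape = np.mean(np.abs(resid / y))          # assumes no zero responses
    rmse = np.sqrt(np.mean(resid ** 2))
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - d - 1)
    return {"MAE": mae, "MAPE": mape, "RMSE": rmse, "R2": r2, "adj_R2": adj_r2}
```

Lower MAE, MAPE, and RMSE, and higher $R^2$ and adjusted $R^2$, indicate a better fit, so the dictionary above is all that is needed to populate one row of the results tables.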
As seen in Tables \[tabresults1\] and \[tabresults2\], the component models of the BNTs exhibit consistent results, and the neural networks perform better than the tree-based models on a majority of the data sets. We now turn to the implementation of the two BNT models. To implement BNT-1, we first record the selected features and predictions from the CART model, which form the set of features for the subsequent BNN model. Again, the CART model is trained with the stopping parameter ‘minsplit’ set to 10% of the training sample size. A one-hidden-layer BNN is then fit with the number of hidden neurons $k$ drawn from Geometric distributions with success probabilities $p \in \{0.3,0.6,0.9\}$. To implement BNT-2, we record the important features and predictions from the BCART model, and use these as inputs to the one-hidden-layer ANN model. The number of neurons in the ANN is taken to be $\sqrt{\frac{n}{d_{m}\log(n)}}$, the optimal number derived in Section \[secoptimalval\]. Additionally, all data sets are min-max scaled to the $[0,1]$ range before training the neural network models. From Tables \[tabresults1\] and \[tabresults2\], we observe that across all data sets, the proposed BNT models greatly improve on the performance of their component models. We note that the BNT-2 model outperforms all others on most data sets. Moreover, we can expect the BNT predictions to be at least as good as the individual model predictions, since cases where further optimization would likely have led to overfitting are filtered out directly. 
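The hand-off between the two stages is easy to state in code. The sketch below is ours, not the paper's implementation: `selected` and `tree_pred` stand for the feature indices and fitted values produced by the (B)CART stage, and the function builds the min-max-scaled input matrix for the stage-two network, so that $d_m \leq d + 1$.

```python
import numpy as np

def bnt_stage_two_inputs(X, selected, tree_pred):
    """Build the input matrix for the stage-two network of a BNT model:
    the features retained by the (B)CART stage plus its predictions,
    min-max scaled to [0, 1] as done before training the networks."""
    Z = np.column_stack([np.asarray(X, float)[:, selected], tree_pred])
    lo, hi = Z.min(axis=0), Z.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return (Z - lo) / span
```

The same helper serves both hybrids; only the tree stage (frequentist CART for BNT-1, BCART for BNT-2) and the network stage (BNN versus ANN) differ.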
[|c|c|L|c|c|c|c|c|]{} & [**Model**]{} & &\ & &Number of features used &MAE &MAPE &RMSE &$R^2$ &adjusted $R^2$\ & CART &3 &2.640 &0.120 &3.419 &0.834 &0.830\ & BCART &3 &2.796 &0.117 &3.693 &0.806 &0.803\ & ANN &7 &2.241 &0.0967 &3.164 &0.858 &0.850\ & BNN &7 &2.253 &0.097 &3.123 &0.861 &0.854\ & BNT-1 ($p$=0.3) &4 &2.111 &0.091 &3.016 &0.871 &0.867\ & BNT-1 ($p$=0.6) &4 &2.110 &0.092 &3.013 &0.871 &0.870\ & BNT-1 ($p$=0.9) &4 &2.119 &0.092 &3.018 &0.873 &0.870\ & BNT-2 &4 &2.081 & 0.090 &3.0333 &0.869 &0.868\ & CART &3 &3.161 &0.163 &5.068 &0.696 &0.690\ & BCART &4 &3.683 &0.194 &5.057 &0.697 &0.689\ & ANN &13 &2.736 &0.132 &4.782 &0.729 &0.706\ & BNN &13 &2.742 &0.132 &4.793 &0.704 &0.702\ & BNT-1 ($p$=0.3) &4 &2.643 &0.129 &4.731 &0.735 &0.730\ & BNT-1 ($p$=0.6) &4 &2.641 &0.128 &4.730 &0.735 &0.730\ & BNT-1 ($p$=0.9) &4 &2.641 &0.128 &4.730 &0.735 &0.730\ & BNT-2 &5 &2.751 &0.134 &4.597 &0.750 &0.748\ & CART &2 &4.157 &0.009 &5.389 &0.901 &0.901\ & BCART &2 &5.502 &0.008 &4.561 &0.929 &0.929\ & ANN &4 &3.558 &0.008 &4.501 &0.937 &0.937\ & BNN &4 &3.563 &0.007 &4.510 &0.940 &0.940\ & BNT-1 ($p$=0.3) &3 &3.444 &0.008 &4.460 &0.932 &0.932\ & BNT-1 ($p$=0.6) &3 &3.443 &0.008 &4.463 &0.932 &0.932\ & BNT-1 ($p$=0.9) &3 &3.442 &0.008 &4.461 &0.932 &0.932\ & BNT-2 &3 &3.408 &0.007 &4.410 &0.934 &0.934\ [|c|c|L|c|c|c|c|c|]{} & [**Model**]{} & &\ & &Number of features used &MAE &MAPE &RMSE &$R^2$ &adjusted $R^2$\ & CART &12 &0.166 &0.435 &0.230 &0.399 &0.335\ & BCART &15 &0.186 &0.580 &0.231 &0.394 &0.250\ & ANN &101 &0.164 &0.442 &0.222 &0.443 &0.468\ & BNN &101 &0.167 &0.567 &0.290 &0.580 &0.580\ & BNT-1 ($p$=0.3) &13 &0.158 &0.395 &0.218 &0.463 &0.406\ & BNT-1 ($p$=0.6) &13 &0.154 &0.395 &0.218 &0.463 &0.406\ & BNT-1 ($p$=0.9) &13 &0.158 &0.395 &0.218 &0.463 &0.406\ & BNT-2 &16 &0.143 &0.367 &0.193 &0.578 &0.574\ & CART &5 &7.462 &0.286 &9.414 &0.694 &0.689\ & BCART &3 &7.909 &0.304 &10.064 &0.651 &0.649\ & ANN &8 &6.987 &0.235 &9.194 &0.709 &0.701\ & BNN &8 &6.043 
&0.268 &7.676 &0.746 &0.842\ & BNT-1 ($p$=0.3) &6 &5.493 &0.194 &6.961 &0.833 &0.830\ & BNT-1 ($p$=0.6) &6 &5.492 &0.194 &6.950 &0.840 &0.830\ & BNT-1 ($p$=0.9) &6 &5.493 &0.194 &6.961 &0.833 &0.830\ & BNT-2 &4 &5.473 &0.178 &6.636 &0.879 &0.878\ Concluding remarks\[sec5\] ========================== In this work, we present two hybrid models that combine frequentist and Bayesian implementations of decision trees and neural networks. The BNT models are novel, first-of-their-kind proposals for nonparametric regression. We find that the models perform competitively on small to medium-sized datasets compared to other state-of-the-art nonparametric models. Moreover, the BNT models have a major advantage over purely frequentist hybridizations: a Bayesian approach to constructing a CART or an ANN model helps curb overfitting. A BCART model allows placing priors that control the depth of the resultant trees, and BNNs with Gaussian priors are inherently regularized. This obviates the need to manually tune multiple parameters via cross-validation. Thus, the proposed BNT models overcome not only the deficiencies of their component models, but also the drawbacks of using fully frequentist or fully Bayesian models. We also show that the BNT models are consistent, which ensures their theoretical validity. An immediate extension of this work will be to construct BNT models for classification problems. Another avenue for future work is to extend the proposed approaches to survival regression frameworks. Data and code {#data-and-code .unnumbered} ============= For the sake of reproducibility, code for implementing the BNT models is made available at <https://github.com/gaurikamat/Bayesian_Neural_Tree>. The data for the experiments are obtained from <https://archive.ics.uci.edu/ml/datasets.html>. [^1]: Corresponding author. Email: tanujitr@isical.ac.in
--- abstract: 'We study a genuine Brownian motor by hard disk molecular dynamics and calculate analytically its properties, including its drift speed and thermal conductivity, from microscopic theory.' author: - 'C. Van den Broeck' - 'R. Kawai' - 'P. Meurs' title: Exact microscopic analysis of a thermal Brownian motor --- It is (believed to be) impossible to systematically rectify thermal fluctuations in a system at equilibrium. Such a perpetuum mobile of the second kind, also referred to as a Maxwell demon [@md], would violate the second law of thermodynamics and would, from the point of view of statistical mechanics, be in contradiction with the property of detailed balance. Yet, it may require quite subtle arguments to explain in detail on specific models why rectification fails. Apart from the academic and pedagogical interest of the question, the study of small scale systems is motivated by rapidly increasing capabilities in nanotechnology and by the huge interest in small scale biological systems. Furthermore, when operating under nonequilibrium conditions, as is the case in living organisms, the rectification of thermal fluctuations becomes possible. This mechanism, also referred to as a Brownian motor [@rr], could furnish the engine that drives and controls the activity on a small scale. In this letter we propose a breakthrough in the theoretical and numerical study of a small scale thermal engine. Our starting point is the observation that one of the basic and most popular models, namely the Smoluchowski-Feynman ratchet [@rr; @smoluchowski; @feynman], is needlessly complicated, and can be replaced by a simplified construction involving exclusively hard core interactions. Its properties, including speed, diffusion coefficient and heat conductivity, can be measured very accurately by hard disk molecular dynamics and can be calculated exactly from microscopic theory. ![ (a) Schematic representation of the Smoluchowski-Feynman ratchet. 
(b) Similar construction without a pawl and spring. (c) Two dimensional analogue referred to as the $AB$ motor. The motor is constrained to move along the horizontal $x$-direction (without rotation or vertical displacement). The host gas consists of hard disks whose centers collide elastically with the engine parts. The shape of the arrow is determined by the apex angle $2\theta_0$ and the vertical cross section $S$. Periodic boundary conditions are used in the computer simulations. (d) A symmetric construction referred to as the AA motor.[]{data-label="fig:models"}](models.eps){width="3in"} In Fig. \[fig:models\]a, we have schematically depicted the construction originally introduced by Smoluchowski [@smoluchowski] in his discussion of Maxwell demons and re-introduced with two compartments at different temperatures by Feynman [@feynman]. One compartment contains a ratchet with a pawl and a spring, mimicking the rectifier device that is used in clockworks of all kinds. The macroscopic mode of operation of such an object generates the impression that only clockwise rotations can take place, suggesting that this construction can be used as a rectifier of the impulses generated by the impacts of the particles in the other compartment on the blades with which the ratchet is rigidly linked. As Feynman has argued, such a rectification is only possible when the temperature in both compartments is different. We now introduce a model which is at the same time a simplification and generalization of this construction. First, we can dispose of the pawl and spring in the ratchet and consider any rigid but asymmetric object. An example with a cone-shaped object in one compartment and a flat “blade” or “sail” located in the other one is illustrated in Fig. \[fig:models\]b. Second, we replace the single rotational degree of freedom with a single translational degree of freedom (Figs. \[fig:models\]c and \[fig:models\]d). Third, we restrict ourselves to two-dimensional systems. 
Finally, the substrate particles in the various compartments are modeled by hard disks which undergo perfectly elastic collisions with each other while their center collides elastically with the edges of the motor [@remark]. ![Probability density $P(x,t)$ for the position $x$ of motor $AB$ at times $t=1000$ (shaded) and $t=4000$ (open). Inset left: mean square displacement versus time. Inset right: velocity correlation function.[]{data-label="fig:diffusion"}](diffusion.eps){width="3in"} ![Upper panel: Position of the motor as a function of time. The thin solid curve shows a typical trajectory. All other curves represent the average $\langle x(t) \rangle$. The thick solid line is the equilibrium case ($T_1=T_2=1$) for motor $AB$. The dotted and dashed curves correspond to the nonequilibrium situation for respectively motor $AA$ and $AB$ ($T_1=1.9$, $T_2=0.1$). The situation with switched temperatures for $AB$ is the dashed curve with a negative velocity. Lower panel: Average velocity of motor $AB$ as a function of its mass $M$. Inset: average speed (avg. over $2000$ runs) of motors $AA$ and $AB$ as a function of the initial temperature difference $T_1-T_2$ ($T_1+T_2=2$ fixed). The theoretical results (\[T.13b\]) and (\[T.13a\]) predict lower speeds than the simulations. However, when the magnitude is scaled, the theoretical curves (dotted lines) fit well with the simulations.[]{data-label="fig:velocity"}](drift.eps){width="3in"} ![](velocity.eps){width="3in"} We first report on the results obtained from molecular dynamics for two different realizations of our motor. The first one, referred to as arrow/bar or $AB$ is inspired by the above discussion. It consists of one triangular-shaped arrow in the first compartment and a flat bar in the other (Fig. \[fig:models\]c). The other motor, called arrow/arrow or $AA$, consists of an identical triangular-shaped arrow in both compartments (Fig. \[fig:models\]d). Both units of the motor are constrained to move as a rigid whole along the horizontal $x$-direction as a result of their collision with the hard disks in the two compartments. The initial state of the hard disk gases corresponds to uniform (number) densities $\rho_1$ and $\rho_2$ and Maxwellian speeds at temperatures $T_1$ and $T_2$, in the compartments $1$ and $2$ respectively, [@comment2] ($k_B=1$ by choice of units). The boundary conditions are periodic both left and right, and top and bottom. Unless mentioned otherwise, the following parameter values are used: Each compartment is a $1200$ by $300$ rectangle and contains $800$ hard disks (mass $m=1$, diameter $1$), i.e., particle densities $\rho_1=\rho_2=0.00222$. Initial temperatures were set to $T_1=1.9$ and $T_2=0.1$. The motor has a mass $M=20$, apex angle $2\theta_0=\pi/18$, and vertical cross section $S=1$. The averages are taken over $1000$ runs. When the temperatures are the same in both compartments, $T_1=T_2$, no rectification takes place. In fact, Fig. 
\[fig:diffusion\] shows that the motor undergoes plain Brownian motion, with zero average speed, exponentially decaying velocity correlations and linearly increasing mean square displacement. The corresponding friction and diffusion coefficients obey the Einstein relation. On the other hand, as soon as the temperatures are no longer equal, the motor spontaneously develops an average systematic drift along the $x$-axis. The amplitude and direction of the speed depend in an intricate way on the parameters of the problem. In particular, the average speed increases with the temperature difference and the degree of asymmetry (decreasing $\theta_0$) and decreases with increasing mass of the motor, roughly as $1/M$ (see the lower panel in Fig. \[fig:velocity\]). Note furthermore that the observed average speed can be very large, i.e. comparable to the thermal speed $\sqrt{k_B T/M}$ of the motor. The $AA$ motor has a peculiar behavior, resulting from the fact that both units are identical. Whereas equilibrium is usually a point of flux reversal, the velocity now displays a parabolic curve as a function of $T_1-T_2$ with a minimum equal to zero at the equilibrium state $T_1=T_2$. It is clear from its symmetric construction that, at least when $\rho_1=\rho_2$, an interchange of $T_1$ with $T_2$ cannot modify the speed, so that the latter has to be an even function of $T_1-T_2$, cf. Fig. \[fig:velocity\]. We finally note that the observed systematic speed does not persist forever. Indeed, the motion of the motor along the $x$-direction is a single degree of freedom that allows for (microscopic) energy transfer, and hence thermal contact, between the compartments, a fact that was overlooked by Feynman in his analysis and first pointed out in [@parrondo; @sekimoto]. As a result one observes that the temperatures in both compartments converge exponentially to a common final temperature with a concomitant reduction and eventual disappearance of the systematic motion as shown in Fig. 
\[fig:temp\]. While this feature has already been documented in detail in other constructions [@vandenbroeck], we have focused here on conditions in which this conductivity is small and the compartments sufficiently large, so that one reaches a quasi-steady state with a well-defined and measurable average drift velocity. ![Exponential decay of the temperatures to a final common value ($T_{\text{final}}=(T_1+T_2)/2=1$) in motor $AB$ and concomitant disappearance of the average drift speed. To enhance the conductivity, a small mass $M=1$, a large vertical cross section $S=10$, and a large apex angle $\theta_0=\pi/9$ are used.[]{data-label="fig:temp"}](heatcond.eps){width="3in"} To obtain analytic results from microscopic theory, which are asymptotically exact, we focus on the situation in which the compartments are infinitely large while the densities of the hard disk gases are extremely low (more precisely, the so-called high Knudsen number regime requires that the mean free path is much larger than the linear dimensions of the motor units). In this limit, each compartment, characterized by its particle density $\rho_i$ and temperature $T_i$, acts as an ideal thermal reservoir. We will furthermore assume that all the constituting units of the motor are closed and convex. Under these circumstances, the motor never undergoes recollisions and the assumption of molecular chaos becomes exact [@lk]. The probability density $P(V,t)$ for the velocity $\vec{V}=(V,0)$ of the motor thus obeys the following Boltzmann-Master equation: $$\begin{aligned} &&\partial_t P(V,t) = \nonumber\\ &&\int dr \left[ W (V-r,r) P(V-r,t)-W(V,r) P(V,t)\right] \label{T.0}\end{aligned}$$ Here $W(V,r)$ is the transition probability per unit time for the motor to change its velocity from $V$ to $V+r$ due to the collisions with the gas particles in the various compartments. 
The explicit expression for $W(V,r)$ follows from elementary arguments familiar from the kinetic theory of gases, taking into account the statistics of the perfectly elastic collisions of the motor, constrained to move along the $x$-direction, with the impinging particles: $$W(V,r)=\sum_i\int_0^{2\pi}d\theta \int_{-\infty}^{\infty}dv_x\int_{-\infty}^{\infty}dv_y \rho_i\phi_i(v_x, v_y) L_i F_i(\theta) (\vec{V} -\vec{v}) \cdot \vec{e} (\theta) H[(\vec{V}-\vec{v}) \cdot\vec{e}(\theta)] \, \delta \left[r+\frac{m}{M} B(\theta)(V-v_x+v_y \cot\theta)\right] \label{W}$$ Here, the sum over $i$ runs over all the different compartments, $L_i$ is the total circumference of the $i$th unit of the motor, $B(\theta)=2M\sin^2{\theta} /(M+m\sin^2{\theta})$, $H[x]$ is the Heaviside function, and $\vec{e}(\theta)=(\sin\theta,-\cos\theta)$ is the unit vector normal to a surface at angle $\theta$, $\theta \in [0,2\pi]$, the angles being measured counter-clockwise from the horizontal axis. $\phi_i (v_x, v_y)=m \exp{\left[-m(v_x^2 + v_y^2)/2k_BT_i\right]}/2\pi k_B T_i$ is the Maxwellian velocity distribution in compartment $i$. The shape of any closed convex unit of the motor is defined by the (normalized) probability density $F(\theta)$ such that $F(\theta) d\theta$ is the fraction of its outer surface that has an orientation between $\theta$ and $\theta + d\theta$. Note that $\langle\sin{\theta}\rangle=\langle\cos{\theta}\rangle=0$, where the average is with respect to $F(\theta)$, a property resulting from the requirement that the object is closed. The Boltzmann-Master equations (\[T.0\]) and (\[W\]) can now be solved by a perturbation expansion in $\sqrt{{m}/{M}}$, following a procedure similar to the one used in one-dimensional problems such as the Rayleigh piston [@vankampen] or the adiabatic piston [@gruber]. The details are somewhat involved and will be given elsewhere [@meurs]. 
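The collision statistics entering $W(V,r)$ rest on a single elementary ingredient: the velocity jump of the motor in one elastic collision, read off from the argument of the $\delta$-function in Eq. (\[W\]). The sketch below is our own illustration of that rule (names are ours):

```python
import numpy as np

def velocity_jump(V, vx, vy, theta, m, M):
    """Change r in the motor velocity after an elastic collision with a disk of
    velocity (vx, vy) on a surface element of orientation theta:
        r = -(m/M) B(theta) (V - vx + vy cot(theta)),
    with B(theta) = 2 M sin^2(theta) / (M + m sin^2(theta)), as in Eq. (W)."""
    B = 2.0 * M * np.sin(theta)**2 / (M + m * np.sin(theta)**2)
    return -(m / M) * B * (V - vx + vy / np.tan(theta))
```

As a sanity check, for a head-on hit on a vertical surface element ($\theta=\pi/2$, $v_y=0$) the rule reduces to the familiar one-dimensional elastic-collision result $r = 2m(v_x - V)/(M+m)$, and a grazing particle with $V = v_x - v_y\cot\theta$ produces no jump.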
To lowest order in the perturbation, the Master equation (\[T.0\]) reduces to a Fokker-Planck equation equivalent to the following linear Langevin equation: $$M\dot{V} = -\sum\limits_i \gamma_i V + \sum\limits_i\sqrt{2\gamma_ik_B T_i} \;\eta_i \label{T.6}$$ with $\eta_i$ independent Gaussian white noises of unit strength and $$\gamma_i = 4\rho_i L_i \sqrt{\frac{k_B T_i m}{2\pi}} \int_0^{2\pi} d\theta F_i(\theta) \sin^2{\theta} \label{T.7}$$ the friction coefficient experienced by the motor due to its presence in compartment $i$. We conclude that at this order of the perturbation the contributions from the separate compartments add up and are each - taken separately - of the linear equilibrium form. In particular the motor has no (steady state) drift velocity, $\langle V \rangle = 0$. It does however conduct heat. In the case of two compartments $1$ and $2$, the heat flow per unit time between them is, as anticipated in Ref. , given by a Fourier law: $\dot{Q}_{1\rightarrow 2} = \kappa (T_1 - T_2)$ with conductivity $\kappa={k_B} {\gamma_1\gamma_2}/[M({\gamma_1 + \gamma_2})]$. One also concludes from (\[T.6\]) and (\[T.7\]) that the (steady state) velocity distribution of the motor is Maxwellian, but at the effective temperature $$\begin{aligned} T_{\text{eff}} = {\sum\limits_i \gamma_i T_i}\Big/{\sum\limits_i \gamma_i}\end{aligned}$$ At the next order of perturbation in $\sqrt{m/M}$, the corresponding Langevin equation becomes nonlinear in $V$ while at the same time the Gaussian nature of the white noise is lost, a feature well-known from the Van Kampen $1/\Omega$ expansion [@vankampen]. 
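The lowest-order quantities are straightforward to evaluate numerically. The sketch below is ours: it computes the friction coefficient of Eq. (\[T.7\]) by quadrature for an arbitrary shape function $F(\theta)$, together with the effective temperature and the conductivity $\kappa$; the uniform shape $F(\theta)=1/2\pi$ is used only as an analytically checkable example.

```python
import numpy as np

def friction(rho, T, m, L, F, kB=1.0):
    """gamma = 4 rho L sqrt(kB T m / (2 pi)) * int_0^{2pi} F(theta) sin^2(theta) dtheta,
    with the integral evaluated on a uniform midpoint grid (Eq. T.7)."""
    N = 200000
    theta = np.linspace(0.0, 2.0*np.pi, N, endpoint=False) + np.pi/N
    integral = (F(theta) * np.sin(theta)**2).mean() * 2.0*np.pi
    return 4.0 * rho * L * np.sqrt(kB * T * m / (2.0*np.pi)) * integral

def effective_temperature(gammas, temps):
    """T_eff = sum_i gamma_i T_i / sum_i gamma_i."""
    g, t = np.asarray(gammas), np.asarray(temps)
    return float((g * t).sum() / g.sum())

def conductivity(g1, g2, M, kB=1.0):
    """kappa = kB gamma_1 gamma_2 / (M (gamma_1 + gamma_2)), the Fourier-law coefficient."""
    return kB * g1 * g2 / (M * (g1 + g2))
```

For a uniform $F$ the integral equals $1/2$, so $\gamma = 2\rho L\sqrt{k_B T m/2\pi}$, and $T_{\text{eff}}$ always lies between the reservoir temperatures, consistent with the Maxwellian steady state found above.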
The most relevant observation is the appearance, at the steady state, of a non-zero drift velocity: $$\begin{aligned} \langle V\rangle &=& \sqrt{ \frac{\pi k_B T_{\text{eff}}}{8M}} \sqrt{\frac{m}{M}} \nonumber \\ &\times& \frac{\sum\limits_i \rho_i L_i \frac{T_i - T_{\text{eff}}}{T_{\text{eff}}} \int_0^{2\pi}d\theta F_i (\theta) \sin^3 \theta}{\sum\limits_i \rho_i L_i \sqrt{\frac{T_i}{T_{\text{eff}}}} \int_0^{2\pi} d\theta F_i(\theta) \sin^2(\theta)}\label{T.10}\end{aligned}$$ This speed is of the order of the thermal speed of the motor, times the expansion parameter $\sqrt{m/M}$, and further multiplied by a factor that depends on the details of the construction. Note that the Brownian motor ceases to function in the absence of a temperature difference ($T_i \equiv T_{\text{eff}},\forall i$) and in the macroscopic limit $M\rightarrow \infty$ ($\langle V \rangle \sim 1/M$). Note also that the speed is scale-independent, i.e., independent of the actual size of the motor units: $\langle V \rangle$ is invariant under the rescaling $L_i$ to $CL_i$. To isolate more clearly the effect of the asymmetry of the motor on its speed, we focus on the case where the units have the same shape in all compartments, i.e. $F_i(\theta) = F(\theta)$. In this case $T_{\text{eff}}$ is independent of $ F(\theta)$ and the drift velocity is proportional to $\langle \sin^3\theta\rangle/\langle \sin^2\theta\rangle$, with the average defined with respect to $F(\theta)$. The latter ratio is in absolute value always smaller than $1$, a value that can be reached for “strongly” asymmetric objects as will be shown below on a particular example. We now turn to a comparison between theory and simulations. 
From the general result (\[T.10\]), one obtains the following expressions for the speed of the two motors that were studied: $$\begin{aligned} \langle V \rangle_{AA} &=& \rho_1 \rho_2 (1-\sin \theta_0) \nonumber\\ &\times& \sqrt{\frac{m}{M}} \sqrt{\frac{\pi k_B}{8M}} \frac{(T_1 - T_2) (\sqrt{T_1}-\sqrt{T_2})}{[\rho_1 \sqrt {T_1} + \rho_2\sqrt{T_2}]^2} \label{T.13b}\end{aligned}$$ $$\begin{aligned} \langle V \rangle_{AB} &=& \rho_1 \rho_2 (1-\sin^2 \theta_0) \sqrt{ \frac{m}{M}} \sqrt{\frac{\pi k_B}{2M}} \nonumber\\ &\times& \frac{(T_1 - T_2) \sqrt{T_1}}{[2\rho_1 \sqrt {T_1} + \rho_2\sqrt{T_2} (1 + \sin \theta_0)]^2} \label{T.13a}\end{aligned}$$ In agreement with previous arguments, the AA motor, cf. (\[T.13b\]), always moves in the same direction, namely the direction of the arrow. Furthermore, it is an example where one can increase the asymmetry to generate a maximal drift speed: the limit $|\langle \sin^3\theta\rangle| = \langle \sin^2\theta\rangle$ is reached here when $\theta_0 \rightarrow 0$, which corresponds to an infinitely elongated and sharp arrow in both compartments. Due to strong finite size effects (e.g. sound waves among others), the agreement between the theoretical results (\[T.13b\]) and (\[T.13a\]) and the computer simulations is only qualitative: the theory predicts speeds which are typically $20-40\%$ lower. However, Eqs.(\[T.13b\]) and (\[T.13a\]) can be fitted to the simulation results by appropriately rescaling the magnitude of the velocity (see Fig. \[fig:velocity\]), indicating that their dependence on the parameters $M$, $T_1$ and $T_2$ is in good agreement with the simulations. In conclusion, we have provided a detailed analytic and numerical study of a simplified version of the Smoluchowski-Feynman ratchet, including an exorcism, based on microscopic theory, of its operation as a Maxwell demon. This work was supported in part by the National Science Foundation under Grant No. DMS-0079478.

[99]{}

H. S. Leff and A. F. Rex, [*Maxwell’s Demon*]{} (Adam Hilger, Bristol, 1990).

P. Reimann, Phys. Rep. [**361**]{}, 57 (2002).

R. P. Feynman, R. B. Leighton, and M. Sands, [*The Feynman Lectures on Physics I*]{} (Addison-Wesley, Reading, MA, 1963), Chapter 46.

M. v. Smoluchowski, Physik. Zeitschr. [**13**]{}, 1069 (1912).

This trick avoids collisions of the disk’s surfaces with the sharp corners of the motor; for more details, see: C. Van den Broeck, R. Kawai and P. Meurs, Proceedings of SPIE Volume 5114, Noise in Complex Systems and Stochastic Dynamics, p. 1 (2003).

Note that our system as a whole is finite and isolated, so that strictly speaking the equilibrium state should refer to a micro-canonical ensemble.

J. M. R. Parrondo and P. Español, Am. J. Phys. [**64**]{}, 1125 (1996).

K. Sekimoto, J. Phys. Soc. Jap. [**66**]{}, 1234 (1997).

C. Van den Broeck, E. Kestemont, and M. Malek Mansour, Europhys. Lett. [**56**]{}, 771 (2001).

J. R. Dorfman, H. Van Beijeren, and C. F. McClure, Archives of Mechanics [**28**]{}, 333 (1976).

N. G. van Kampen, [*Stochastic Processes in Physics and Chemistry*]{} (North-Holland, Amsterdam, 1981).

Ch. Gruber and J. Piasecki, Physica [**A268**]{}, 412 (1999); E. Kestemont, C. Van den Broeck, and M. Malek Mansour, Europhys. Lett. [**49**]{}, 143 (2000).

P. Meurs, C. Van den Broeck, and A. Garcia, preprint.
--- author: - 'Cheng Sok Kin, Ian Man Ut, Lo Hang, U Ieng Hou, Ng Ka Weng, Un Soi Ha, Lei Ka Hin' - 'Cheng Kun Heng, Tam Seak Tim, Chan Iong Kuai, Lee Wei Shan[^1]' date: | Escola Choi Nong Chi Tai\ Macao Special Administrative Region, People’s Republic of China. title: 'Predicting Earth’s Carrying capacity of Human Population as the Predator and the Natural Resources as the Prey in the Modified Lotka-Volterra Equations with Time-dependent Parameters' ---

[**Keywords:**]{} Carrying capacity, Human Population, Modified Lotka-Volterra Equations, Tamed Quasi-hyperbolic Function.

Introduction
============

Since the origin of humankind, the human population has grown to the vast number of people in the world today. The emergence of globalization and the development of society have set off magnificent economic progress, generating material wealth and luxury in human life. Meanwhile, this progress and its associated advantages have imposed an enormous cost on the environment in which we live.\
Carrying capacity is the maximum population of a species that the earth can sustain$^{\cite{Def 1}}$, and it depends on the conditions and resources available in the specific area, as well as on the consumption habits of the species considered$^{\cite{Def 2}}$. Carrying capacity is a measure of sustainability within these changing conditions. It is a conventional belief that, because both the resources available in an area and their consumption change over time, the carrying capacity is always changing. In addition, Cohen$^{\cite{Cohen BESA}}$, as well as Pulliam et al.$^{\cite{Pulliam et. al.}}$, introduced the concept of the carrying capacity of human populations in the 1960s. It was noted that the consumption habits of humans are much more variable than those of other animal species, making it considerably more difficult to predict the carrying capacity for human beings.
This realization gave rise to the IPAT equation, which pointed out that the carrying capacity for humans is a function not only of population size but also of differing levels of consumption, which in turn are affected by the technologies involved in production and utilization. There have been a large number of published estimates of the human carrying capacity of the planet$^{\cite{Cohen Science}-\cite{Hopfenberg}}$; they range from a low of half a billion people to a staggering 800 billion. Many of these estimates are more ideologically based than determined by scientific principles. These exercises demonstrate the complexity of developing useful estimates of the human carrying capacity of the earth, and the limitations of the methodology that has been successful with non-human species. Cohen$^{\cite{Cohen Science}}$ also claimed that the carrying capacity of this planet is determined by different factors, namely natural constraints and socio-economic elements. Moreover, Mohammad$^{\cite{Mohammad}}$ pointed out that carrying capacity is a mathematical concept that assumes a limit on the size of the population, and that the carrying capacity of a resource system relies mostly on the size of the needs of that population; to maintain sustainability, those needs cannot exceed the limit of the carrying capacity. Furthermore, critical factors for manipulating needs are population numbers, density, affluence, technology, the depletion rate of renewable and nonrenewable resources, and finally the build-up of hazardous wastes in the environment. What is more, Hopfenberg et al.$^{\cite{Hopfenberg}}$ considered that the carrying capacity is controlled by food availability and argued that the data from Cohen may not be accurate.
Furthermore, the Lotka-Volterra equations$^{\cite{Evans and Findley}}$ have also been used to describe the relationships between preys and predators, making it possible to model the relationship between humans and natural resources. In addition, Taagepera claimed that the “tamed quasi-hyperbolic function”, or simply the T-Function$^{\cite{T-Function}}$, may successfully describe the human population over time in the range between the 5th century and the 20th century, also predicting that the human population will reach saturation around the year 2080.\
Nevertheless, none of the research discussed above offers a calculation of the human carrying capacity that we find satisfactory. In addition, the factors considered are insufficient, since plenty of neglected elements can affect the value of the carrying capacity.\
In this study, we aimed to use modified Lotka-Volterra equations under the assumption that two of the original four parameters in the traditional equations are time dependent. In the first place, we assumed that the human population plays the role of the prey, while all lethal factors that jeopardize the existence of the human race play the role of the predator. Second, we exchanged the roles of the prey and the predator, treating the prey as the natural resources and the predator as the human population. In both cases, we calculate the corresponding time-varying parameters and explain their respective meanings. Contrary to our intuition, the carrying capacity is a constant function (with the value $10.2$ billion) rather than a function that changes with time.

Theorems and Models
===================

We first review the traditional Lotka-Volterra equations with all the parameters being constants. Second, because the T-Function fits the human population versus time well, we conjecture that one of the solutions to the equations should satisfy the T-Function.
But if we still kept the original parameters constant, the T-Function would not be a solution. Therefore, after reviewing the traditional Lotka-Volterra equations, we ease the restriction of time-independent parameters and allow two of them to be time-dependent, while not making the equations too difficult to solve.

Review on Lotka-Volterra equations. {#L-V Section}
------------------------------------

The “traditional” Lotka-Volterra equations, also known as the prey and predator equations, are successfully used to describe the interactions between predators and preys in the natural system. In the first place, we follow closely the derivations in Ref$\cite{Evans and Findley}$ and Ref$\cite{L-V wiki}$ and review some important properties of these equations. The model demonstrates the trend of the quantities of both the preys and the predators, so that they can be compared easily. Further, one can also learn how the population of each side evolves in the presence of the opposite group. The first equation describes the preys, denoted by $x_{1}$, while the second equation describes the predators, denoted by $x_{2}$, through the following differential equations: $$\begin{aligned} \label{traditional L-V} \frac{dx_{1}}{dt}&=\alpha x_{1} - \beta x_{1}x_{2};\label{diff for prey}\\ \frac{dx_{2}}{dt}&=-\gamma x_{2} + \delta x_{1}x_{2},\label{diff for predator}\end{aligned}$$ where $\alpha$, $\beta$, $\gamma$, and $\delta$ are all positive constants.
From Eq.($\ref{traditional L-V}$), we get $$\frac{dx_{1}}{dx_{2}}=\frac{\alpha x_{1}-\beta x_{1}x_{2}}{-\gamma x_{2}+\delta x_{1}x_{2}} =-\frac{x_{1}}{x_{2}}\cdot\frac{\alpha-\beta x_{2}}{\gamma - \delta x_{1}}.$$ Standard calculations on the separation of variables lead to $$\label{diff_form} \frac{\gamma-\delta x_{1}}{x_{1}}dx_{1}+\frac{\alpha-\beta x_{2}}{x_{2}}dx_{2}=0.$$ Integrating the equation, we get $$\label{traditional Lambda} \gamma\ln(x_{1})-\delta x_{1} + \alpha\ln(x_{2})-\beta x_{2} = -\Lambda = \mathbf{constant}.$$ Define the carrying capacity $K$ as $$\label{carrying capacity} K=e^{-\Lambda}=x_{2}^{\alpha}e^{-\beta x_{2}}x_{1}^{\gamma}e^{-\delta x_{1}},$$ with the maximum value $K^{*}$ at the stationary point $\Bigl(\gamma/\delta,\alpha/\beta\Bigr)$ equal to $$\label{maximum carrying capacity} K^{*}=\Bigl(\frac{\alpha}{\beta e}\Bigr)^{\alpha}\Bigl(\frac{\gamma}{\delta e}\Bigr)^{\gamma}.$$ Further, we may introduce new coordinates $z_{1}$ and $z_{2}$ such that $$\begin{aligned} \label{coord trans} z_{1}&=\Bigl(\frac{\delta x_{1} + \beta x_{2}}{\Lambda}\Bigr)^{1/2};\label{z1t}\\ z_{2}&=\Bigl(\frac{-\gamma\ln(x_{1}) - \alpha\ln(x_{2})}{\Lambda}\Bigr)^{1/2}.\label{z2t}\end{aligned}$$ It is easy to show that $$\label{sumSquare1} z_{1}^{2} + z_{2}^{2}=1,$$ which allows us to set $$\begin{aligned} \label{z1sinz2cos} z_{1}&=\sin{\phi(t)};\label{z1tsin}\\ z_{2}&=\cos{\phi(t)}.\label{z2tcos}\end{aligned}$$

Modified Lotka-Volterra equations with the time-dependent parameters.
---------------------------------------------------------------------

### Time-dependent parameters {#for parameters are all time dependent}

In the case that $\alpha$, $\beta$, $\gamma$, and $\delta$ are all constant, the T-Function$^{\cite{T-Function}}$ may not be a solution for either $x_{1}$ or $x_{2}$ in Eq.($\ref{traditional L-V}$). This conundrum may be overcome by assuming that some of the four parameters are time dependent. We now discuss the behaviors of equations that involve the time-dependent parameters.
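As a numerical check of the constant-parameter results above, the sketch below (with hypothetical parameter values and initial conditions) integrates the traditional Lotka-Volterra system with a fourth-order Runge-Kutta scheme and verifies that the carrying capacity $K$ of Eq.($\ref{carrying capacity}$) stays constant along the trajectory and never exceeds the maximum $K^{*}$ of Eq.($\ref{maximum carrying capacity}$).

```python
import numpy as np

# hypothetical constant parameters for the traditional Lotka-Volterra system
ALPHA, BETA, GAMMA, DELTA = 1.0, 0.5, 0.8, 0.4

def K(x1, x2):
    """Carrying capacity K = x2^alpha e^{-beta x2} x1^gamma e^{-delta x1}."""
    return x2**ALPHA * np.exp(-BETA * x2) * x1**GAMMA * np.exp(-DELTA * x1)

def rhs(x):
    """Right-hand side of the traditional Lotka-Volterra equations."""
    x1, x2 = x
    return np.array([ALPHA * x1 - BETA * x1 * x2,
                     -GAMMA * x2 + DELTA * x1 * x2])

def rk4_step(x, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(x)
    k2 = rhs(x + 0.5 * h * k1)
    k3 = rhs(x + 0.5 * h * k2)
    k4 = rhs(x + h * k3)
    return x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([1.0, 1.0])   # hypothetical initial condition
K0 = K(*x)
for _ in range(20_000):    # integrate up to t = 20
    x = rk4_step(x, 1e-3)

# K* is attained at the stationary point (gamma/delta, alpha/beta)
K_star = (ALPHA / (BETA * np.e))**ALPHA * (GAMMA / (DELTA * np.e))**GAMMA
```

After twenty time units, $K$ along the orbit agrees with its initial value to integrator precision, and it is bounded by $K^{*}$ as expected.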
First we modify Eq.($\ref{traditional L-V}$) as follows: $$\begin{aligned} \label{L-V for all time dependent} \frac{dx_{1}}{dt}&=\alpha(t)x_{1}(t) - \beta(t) x_{1}(t)x_{2}(t);\label{diff for prey all time dependent}\\ \frac{dx_{2}}{dt}&=-\gamma(t) x_{2}(t) + \delta(t) x_{1}(t)x_{2}(t),\label{diff for predator all time dependent}\end{aligned}$$ where $x_{1}$ refers to the preys and $x_{2}$ refers to the predators. Eq.($\ref{diff_form}$) also holds even if the parameters $\alpha, \beta, \gamma, \delta$ are time dependent. But this time, $$\int\frac{\gamma}{x_{1}}dx_{1}+\int\frac{\alpha}{x_{2}}dx_{2}-\delta x_{1}-\beta x_{2}=-\Lambda=\mathbf{constant}.$$ The carrying capacity is also defined as $K=e^{-\Lambda}$, but this time it is equal to $$\label{carrying capacity time dependent parameters} K=e^{-\Lambda}=e^{\int\frac{\gamma}{x_{1}}dx_{1}}e^{\int\frac{\alpha}{x_{2}}d{x_{2}}}e^{-\delta x_{1}}e^{-\beta x_{2}}.$$ Similarly to Eq.($\ref{coord trans}$), we may also introduce new coordinates $z_{1}(t)$ and $z_{2}(t)$ such that $$\begin{aligned} \label{coord trans time} z_{1}(t)&=\Bigl(\frac{\delta x_{1} + \beta x_{2}}{\Lambda}\Bigr)^{1/2};\label{z1t time}\\ z_{2}(t)&=\frac{i}{\sqrt{\Lambda}}\Bigl(\int\frac{\gamma}{x_{1}}dx_{1}+\int\frac{\alpha}{x_{2}}dx_{2}\Bigr)^{1/2}.\label{z2t time}\end{aligned}$$ It is easy to demonstrate, as in Eq.($\ref{sumSquare1}$), that $$\label{sumSquare1 time} z_{1}(t)^{2} + z_{2}(t)^{2}=1,$$ which means that even if the parameters are time dependent, we are still allowed to set, as in Eq.($\ref{z1sinz2cos}$), $$\begin{aligned} \label{z1sinz2cos time} z_{1}(t)&=\sin{\phi(t)};\label{z1tsin time}\\ z_{2}(t)&=\cos{\phi(t)}.\label{z2tcos time}\end{aligned}$$ Keeping in mind that all parameters and variables are time dependent, we drop $t$ in the following derivations for brevity.
Further, because $$\left\{ \begin{array}{ll} \int\frac{\gamma}{x_{1}}d{x_{1}}=\int\frac{\gamma}{x_{1}}\frac{dx_{1}}{dt}dt=\int\bigl[\frac{\gamma}{x_{1}}\bigl(\alpha x_{1}-\beta x_{1}x_{2}\bigr)\bigr]dt,\hspace{2pt}\mathrm{and}\\ \int\frac{\alpha}{x_{2}}d{x_{2}}=\int\frac{\alpha}{x_{2}}\frac{dx_{2}}{dt}dt=\int\bigl[\frac{\alpha}{x_{2}}\bigl(-\gamma x_{2}+\delta x_{1}x_{2}\bigr)\bigr]dt,\hspace{2pt}\mathrm{as\hspace{2pt}well\hspace{2pt}as} \end{array} \right.$$ $$\left\{ \begin{array}{ll} \frac{d}{dt}\bigl(\int\frac{\gamma}{x_{1}}\frac{dx_{1}}{dt}dt\bigr)=\frac{\gamma}{x_{1}}\frac{dx_{1}}{dt}=\frac{\gamma}{x_{1}}\bigl(\alpha x_{1}-\beta x_{1}x_{2}\bigr)\\ \frac{d}{dt}\bigl(\int\frac{\alpha}{x_{2}}\frac{dx_{2}}{dt}dt\bigr)=\frac{\alpha}{x_{2}}\frac{dx_{2}}{dt}=\frac{\alpha}{x_{2}}\bigl(-\gamma x_{2}+\delta x_{1}x_{2}\bigr), \end{array} \right.$$ we get $$\dot{z_{2}}=\frac{i}{\sqrt{\Lambda}}\frac{\frac{\gamma}{x_{1}}\frac{dx_{1}}{dt}+\frac{\alpha}{x_{2}}\frac{dx_{2}}{dt}}{2\bigl(\int\frac{\gamma}{x_{1}}dx_{1}+\int\frac{\alpha}{x_{2}}dx_{2}\bigr)^{1/2}}.$$ Therefore, $$\label{gammaz122z2z2dot} \begin{aligned} \gamma z_{1}^{2}-2z_{2}\dot{z_{2}} &=\frac{\gamma}{\Lambda}(\delta x_{1}+\beta x_{2}) -2\frac{i}{\sqrt{\Lambda}}\biggl(\int\frac{\gamma}{x_{1}}dx_{1}+\int\frac{\alpha}{x_{2}}dx_{2}\biggr)^{1/2}\frac{i}{\sqrt{\Lambda}}\frac{\frac{\gamma}{x_{1}}\frac{dx_{1}}{dt}+\frac{\alpha}{x_{2}}\frac{dx_{2}}{dt}}{2\bigl(\int\frac{\gamma}{x_{1}}dx_{1}+\int\frac{\alpha}{x_{2}}dx_{2}\bigr)^{1/2}}\\ &=\frac{\gamma}{\Lambda}(\delta x_{1}+\beta x_{2})+\frac{1}{\Lambda}\bigl[\frac{\gamma}{x_{1}}\bigl(\alpha x_{1}-\beta x_{1}x_{2}\bigr)+\frac{\alpha}{x_{2}}\bigl(-\gamma x_{2}+\delta x_{1}x_{2}\bigr)\bigr]\\ &=\frac{1}{\Lambda}\bigl(\gamma\delta x_{1}+\gamma\beta x_{2}+\gamma\alpha-\gamma\beta x_{2}-\alpha\gamma+\alpha\delta x_{1}\bigr)\\ &=\frac{1}{\Lambda}\delta x_{1}(\alpha+\gamma). 
\end{aligned}$$ Similarly, we may also obtain $$\label{alphaz122z2z2dot} \alpha z_{1}^{2}+2z_{2}\dot{z_{2}}=\frac{1}{\Lambda}\beta x_{2}(\alpha+\gamma).$$ Imitating Evans and Findley’s idea$^{\cite{Evans and Findley}}$, we may also define $$\label{omega time} \omega(t)=\Lambda\cdot\frac{\sin^{2}\phi(t)}{\alpha(t)+\gamma(t)}.$$ Therefore, $$\begin{aligned} \dot{\omega}(t) &=\frac{\Lambda}{(\alpha+\gamma)^{2}}\bigl[2\sin\phi\cos\phi\dot{\phi}(\alpha+\gamma)-(\dot{\alpha}+\dot{\gamma})\sin^{2}\phi\bigr]\\ &=\frac{\Lambda}{\alpha+\gamma}\sin(2\phi)\dot{\phi}-\frac{\Lambda}{(\alpha+\gamma)^{2}}(\dot{\alpha}+\dot{\gamma})\sin^{2}\phi. \end{aligned}$$ Therefore, $$\begin{aligned} \gamma\omega+\dot{\omega} &=\gamma\Lambda\frac{\sin^{2}\phi}{\alpha+\gamma}+\frac{\Lambda}{\alpha+\gamma}\sin(2\phi)\dot{\phi}-\frac{\Lambda}{(\alpha+\gamma)^{2}}(\dot{\alpha}+\dot{\gamma})\sin^{2}\phi\\ &=\frac{\Lambda}{\alpha+\gamma}(\gamma\sin^{2}\phi+\dot{\phi}\sin(2\phi))-\frac{\Lambda}{(\alpha+\gamma)^{2}}(\dot{\alpha}+\dot{\gamma})\sin^{2}\phi. \end{aligned}$$ But $ \gamma z_{1}^{2}-2z_{2}\dot{z_{2}}=\gamma\sin^{2}\phi+\dot{\phi}\sin(2\phi)$; after combining Eq.($\ref{gammaz122z2z2dot}$) and Eq.($\ref{omega time}$), we obtain $$\gamma\omega+\dot{\omega}=\delta x_{1}-\frac{\dot{\alpha}+\dot{\gamma}}{\alpha+\gamma}\omega.$$ Similar derivations lead to $$\alpha\omega-\dot{\omega}=\beta x_{2}+\frac{\dot{\alpha}+\dot{\gamma}}{\alpha+\gamma}\omega.$$ Thus, $$\begin{aligned} x_{1}&=\frac{1}{\delta}\Bigl(\gamma\omega+\dot{\omega}+\frac{\dot{\alpha}+\dot{\gamma}}{\alpha+\gamma}\omega\Bigr)=\frac{\omega}{\delta}\Bigl(\Gamma+\frac{\dot{\omega}}{\omega}\Bigr);\\ x_{2}&=\frac{1}{\beta}\Bigl(\alpha\omega-\dot{\omega}-\frac{\dot{\alpha}+\dot{\gamma}}{\alpha+\gamma}\omega\Bigr)=\frac{\omega}{\beta}\Bigl(A-\frac{\dot{\omega}}{\omega}\Bigr),\end{aligned}$$ where $$\begin{aligned} \Gamma&=\gamma+\frac{\dot{\alpha}+\dot{\gamma}}{\alpha+\gamma};\\ A&=\alpha-\frac{\dot{\alpha}+\dot{\gamma}}{\alpha+\gamma},\end{aligned}$$ from which we may deduce $$\delta x_{1}+\beta x_{2}=\omega(\alpha+\gamma).$$ If the four time-dependent parameters are not varying dramatically with time, we may still make use of Eq.($\ref{carrying capacity}$) as the carrying capacity and Eq.($\ref{maximum carrying capacity}$) as the maximum carrying capacity. It should not be surprising that under this assumption, some of the parameters may not be positive.
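The inversion just obtained, i.e. $\delta x_{1}=\gamma\omega+\dot{\omega}+\frac{\dot{\alpha}+\dot{\gamma}}{\alpha+\gamma}\omega$ and $\beta x_{2}=\alpha\omega-\dot{\omega}-\frac{\dot{\alpha}+\dot{\gamma}}{\alpha+\gamma}\omega$, can be checked symbolically; the SymPy sketch below confirms that these expressions reproduce $\delta x_{1}+\beta x_{2}=\omega(\alpha+\gamma)$ identically for arbitrary time-dependent parameters.

```python
import sympy as sp

t = sp.symbols('t')
alpha, beta, gamma, delta, omega = (
    sp.Function(n)(t) for n in ('alpha', 'beta', 'gamma', 'delta', 'omega'))

# the common correction term (dot(alpha) + dot(gamma))/(alpha + gamma)
corr = (alpha.diff(t) + gamma.diff(t)) / (alpha + gamma)

# x1 and x2 as recovered from  gamma*omega + omega' = delta*x1 - corr*omega
# and  alpha*omega - omega' = beta*x2 + corr*omega
x1 = (gamma * omega + omega.diff(t) + corr * omega) / delta
x2 = (alpha * omega - omega.diff(t) - corr * omega) / beta

# the combination delta*x1 + beta*x2 collapses to omega*(alpha + gamma)
identity = sp.simplify(delta * x1 + beta * x2 - omega * (alpha + gamma))
```

The derivative and correction terms cancel pairwise in the sum, which is why the identity holds regardless of how the parameters vary in time.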
On the other hand, it is merely presumptuous to consider that the human race can only play the role of the predator. Quite the opposite: before modern times, humans had served for a very long time as the prey of some gigantic predators, such as lions, tigers, leopards, and crocodilians. Bears, Komodo dragons, and hyenas may also eat humans whenever they have the chance$^{\cite{man-eater}}$. Even at the present time, diseases, wars, and pollution may be regarded as predators that do harm to human beings.\
Keeping this in mind, we believe that the best way to make use of the Lotka-Volterra equations is to divide the situation into two scenarios. In the first one, we treat the human population as the prey and all the lethal factors that may jeopardize the existence of humans as the predator. On the contrary, in the second scenario, we adopt the humans as the predators and the natural resources as the preys.

### Scenario 1: Human population vs. lethal factors as the prey and the predator {#Case 1}

In this case, we treat $x_{1}$ as the human population and $x_{2}$ as the lethal factor at time $t$, respectively, whose interactive relationship is illustrated in Figure \[fig:relationshipLethalFactorsHumans\]. ![The relationship between lethal factors (predators) and humans (preys).[]{data-label="fig:relationshipLethalFactorsHumans"}](relationshipLethalFactorsHumans.eps){width="80.00000%"} Instead of assuming that all parameters are constants as in Sec.(\[L-V Section\]), we ease the restrictions by allowing $\gamma$ and $\delta$ to be time dependent with the relation $\delta(t) = \zeta\gamma(t)$, where $\zeta$ is a constant, while still keeping $\alpha$ and $\beta$ constant.
By doing so, we acquire the modified Lotka-Volterra equations as follows: $$\begin{aligned} \label{L-V for Case 1} \frac{dx_{1}}{dt}&=\alpha x_{1}(t) - \beta x_{1}(t)x_{2}(t);\label{diff for prey Case 1}\\ \frac{dx_{2}}{dt}&=-\gamma(t) x_{2}(t) + \delta(t) x_{1}(t)x_{2}(t) =-\gamma(t) x_{2}(t) + \zeta\gamma(t) x_{1}(t)x_{2}(t).\label{diff for predator Case 1}\end{aligned}$$ Making use of the fact that the T-Function fits the recorded human population quite well$^{\cite{T-Function}}$, it is natural to think that the prey $x_{1}$ satisfies $$x_{1}(t)=\frac{A}{\lbrack\ln{(B+E)}\rbrack^{M}},\label{T-Function x1}$$ where $E$ is a function of $t$ such that $E(t)=e^{\frac{D-t}{\tau}}$. The parameters in Eq.(\[T-Function x1\]) were already given in Ref[@T-Function] as $A=3.83$ billion, $B=1.28$, $D=1980$, $M=0.70$, and finally $\tau=22.9$ years. We now study the behavior of $\gamma(t)$. First, $$\frac{dx_{1}}{dt}=x_{1}\cdot\frac{ME}{\tau (B+E)\ln{(B+E)}}=x_{1}\cdot(\alpha-\beta x_{2}).$$ Therefore, $$\label{x2 in Case 1} x_{2}(t)=\frac{\alpha}{\beta}-\frac{ME}{\beta\tau(B+E)\ln{(B+E)}}=\eta-\frac{ME}{\beta\tau(B+E)\ln{(B+E)}},$$ assuming $\frac{\alpha}{\beta}=\eta.$ Differentiating the above equation with respect to $t$ gives us $$\frac{dx_{2}}{dt}=\frac{ME\lbrack B\ln(B+E)-E\rbrack}{\beta\tau^{2}(B+E)^{2}\lbrack\ln(B+E)\rbrack^{2}}.$$ Combining the above equation with Eq.(\[diff for predator Case 1\]), we may obtain $$\label{gamma(t)} \gamma(t)=\frac{ME\lbrack \ln(B+E)\rbrack^{M-1}\lbrack B\ln(B+E)-E\rbrack}{\tau (B+E)\lbrace \zeta A\lbrack\eta\tau\beta(B+E)\ln{(B+E)}-ME\rbrack + ME\lbrack\ln{(B+E)}\rbrack^{M} -\eta\tau\beta(B+E)\lbrack\ln{(B+E)}\rbrack^{M+1}\rbrace}.$$

### Scenario 2: Natural resources vs. human population as the prey and the predator {#Case 2}

Next, we consider the opposite situation, in which $x_{2}$ is assumed to be the human population (the predator), also described by the T-Function, while $x_{1}$ is the natural resources (the prey).
Under this scenario, we assume that $\beta(t)=\zeta\alpha(t)$ while $\gamma$ and $\delta$ are now constants, in contrast to the assumption of constant $\alpha$, $\beta$ and time-varying $\gamma$, $\delta$ in Sec.(\[Case 1\]). Thus, we acquire another form of the modified Lotka-Volterra equations: $$\begin{aligned} \label{L-V for Case 2} \frac{dx_{1}}{dt}&=\alpha(t) x_{1}(t) - \beta(t) x_{1}(t)x_{2}(t)=\alpha(t) x_{1}(t) - \zeta\alpha(t) x_{1}(t)x_{2}(t);\label{diff for prey Case 2}\\ \frac{dx_{2}}{dt}&=-\gamma x_{2}(t) + \delta x_{1}(t)x_{2}(t).\label{diff for predator Case 2}\end{aligned}$$ Setting $$x_{2}(t)=\frac{A}{\lbrack\ln{(B+E)}\rbrack^{M}},\label{T-Function x2}$$ with similar derivations as in Sec.(\[Case 1\]), we may obtain the following formulas. First, $$\frac{dx_{2}}{dt}=x_{2}\cdot\frac{ME}{\tau (B+E)\ln{(B+E)}}=x_{2}\cdot(-\gamma+\delta x_{1}).$$ Therefore, $$\label{x1 in Case 2} x_{1}(t)=\frac{\gamma}{\delta}+\frac{ME}{\delta\tau(B+E)\ln{(B+E)}}=\eta+\frac{ME}{\delta\tau(B+E)\ln{(B+E)}},$$ assuming $\frac{\gamma}{\delta}=\eta.$ Likewise, differentiating the above equation with respect to $t$ gives us $$\frac{dx_{1}}{dt}=\frac{ME\lbrack E-B\ln(B+E)\rbrack}{\delta\tau^{2}(B+E)^{2}\lbrack\ln(B+E)\rbrack^{2}}.$$ Also, combining with Eq.(\[diff for prey Case 2\]) gives us $$\label{alpha(t)} \alpha(t)=\frac{ME\lbrack \ln(B+E)\rbrack^{M-1}\lbrack E-B\ln(B+E)\rbrack}{\tau (B+E)\lbrace \zeta A\lbrack\eta\tau\delta(B+E)\ln{(B+E)}+ME\rbrack - ME\lbrack\ln{(B+E)}\rbrack^{M} -\eta\tau\delta(B+E)\lbrack\ln{(B+E)}\rbrack^{M+1}\rbrace}.$$

Results and Discussions
=======================

In this section, we first discuss the factors that may influence the carrying capacity of Earth. Then we summarize the parameters used in Scenario 1 and Scenario 2. Finally, we show the plots of the T-Function, together with the fitted observed data, the carrying capacity, and some of the time-varying functions.
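The T-Function of Eq.(\[T-Function x1\]) with the parameter values quoted above is straightforward to evaluate; a minimal sketch is given below. Note that $E \to 0$ as $t \to \infty$, so the population saturates at $A/[\ln B]^{M}$, which for these values is about $10.2$ billion.

```python
import math

# parameters of the T-Function as quoted in the text (A in billions, tau in years)
A, B, D, M_EXP, TAU = 3.83, 1.28, 1980.0, 0.70, 22.9

def population(t):
    """Human population x(t) in billions from the T-Function."""
    E = math.exp((D - t) / TAU)
    return A / math.log(B + E) ** M_EXP

# saturation value: the limit t -> infinity, where E -> 0
saturation = A / math.log(B) ** M_EXP
```

For these parameter values `population(t)` is monotonically increasing, gives roughly 4.4 billion at the reference year 1980, and approaches the saturation value of about 10.2 billion, which matches the constant carrying capacity discussed in this paper.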
The major factors that restrict the carrying capacity of Earth
--------------------------------------------------------------

With the behavior of humankind, we are facing the problem that the carrying capacity will reach its peak soon. There are many issues with nature nowadays, such as water shortage, the decline of land fertility, the reduction of forest area, and the diminishing of cultivated land. We understand that humankind is concerned only with the development of technology and ignores the side effects of what we have done previously. The applications of water resources are manifold: they are indispensable to the survival of the human species as well as to the development of agriculture. Nevertheless, freshwater resources on Earth are limited. Though Earth is regarded as the “blue planet” because it is largely covered with water, more than 95% of that water cannot be utilized, and it is troublesome to obtain freshwater from the sea. In relative terms, humans can barely survive on the less than 5% of the water resources available now. Per capita water resources will decline with the population explosion, and we must face the obstacle of the lack of water resources in the future once freshwater consumption reaches a certain proportion.\
What is more, oxygen is another mandatory factor for mankind to survive on Earth. The respiration of both humans and trees affects the balance of carbon dioxide and oxygen. Recently, the number of trees has shown a declining trend because of the development of mankind. Wood resources are essential to the construction of architecture, so humanity has casually cut more trees; in addition, trees are cut whenever humankind wants to expand its living quarters. As deforestation happens, the absorption of carbon dioxide decreases as well. The balance of carbon dioxide and oxygen in the air will be devastated if the population keeps increasing while the number of trees keeps decreasing.
The loss of forest area is severe: six football fields of forest vanish within a minute. Consequently, both water and oxygen resources have severe impacts on Earth’s carrying capacity.\
Apart from the necessity of freshwater, arable land is an essential part of the development of agriculture. Food is fundamental to the survival of humankind. Humans must sustain their life; therefore, they have carried out reclamation without restraint in order to survive. With the cultivated land used so intensively, the supply of fertility cannot keep up. Under natural conditions, arable land must lie fallow after being farmed for a while. Humanity skipped this process in order to gain more food resources, the two primary purposes being survival and profiting from food resources. The cultivated fields have been overworked, yielding fertility all the time; consequently, the fertility of the soil declines. Moreover, using chemical fertilizers against natural law is a way to boost profits, but the soil is polluted at the same time. Humankind gained much food by using chemical fertilizers, and the pollution of the land has become severe. In addition to fertility decline, acidification of the soil is dangerous: it devastates the land structure and brings a great deal of harm to the living circumstances of mankind. In the process of production, fertilizers produce large amounts of organic acids that acidify the land. Furthermore, whether the cultivated land can still be used is relevant to human survival. If the population grows, the per capita footprint gradually decreases, and more and more construction for residence is needed. Besides, the profits of real estate are much greater than those of agriculture, so humankind would rather use cultivated land as a place of construction.
There are fewer and fewer acres of cultivated land on the earth because of the activity of mankind, for we have devastated the soil and generated an irreversible impact. We have used cultivated land selfishly, and this pushes the earth’s carrying capacity toward its end.\
The ocean, although it is salt water, is vital to our human life. On our “blue home,” the area of the sea is larger than that of the land, and the number of species living in the sea is larger than the number living on the ground. According to statistics, the earth had an extinction crisis 250 million years ago, in which most creatures living on Earth went extinct almost overnight. Meanwhile, scientists consider that ocean acidification, caused by rising carbon dioxide in the air, may similarly be to blame for an extinction of mankind in the future. It is worth noting that since the industrial revolution the pH value of seawater has been dropping because of carbon dioxide.\
Furthermore, carbon dioxide increases because humans burn fossil fuels in large quantities. It was once believed that the ocean would absorb all the carbon dioxide after emission: the ocean has a self-sustaining mechanism of acid-base balance, in which carbonic acid in terrestrial and seabed rocks reacts with the carbon dioxide dissolved from the atmosphere into seawater, thereby maintaining the acid-base balance. About 40% of the emitted carbon dioxide has been absorbed by the ocean, and it takes hundreds of thousands of years to purify the carbon dioxide that seawater has absorbed. The severe acidification of the ocean has caused much death among sea life, such as fish, coral, and shellfish; moreover, these are another main food source for humans. Humankind would suffer from a shortage of food if ocean acidification continued to intensify.
Water resources, forest resources, and cultivated land are all important; we could consider their existence equivalent to the existence of human life, and, conversely, human beings will die if they vanish. The uneven distribution of the world’s population is another enormous issue, as shown in Figure \[fig:world Pop Map\]$^{\cite{kaggle}}$; this issue has existed from ancient times until now. If a region has an overpopulation problem, it leads to soaring property prices and a decline in the living standard, which in turn influences the population in that region. Therefore, the uneven distribution of the world’s population is also one of the main reasons the carrying capacity of the earth accelerates toward its limit, through the problems of arable land, acidification of the ocean, water shortage, and reduction of forest area. ![The world population map from 1968 to 2018.[]{data-label="fig:world Pop Map"}](worldPopMap.eps){width="70.00000%"}

Parameters and the calculated carrying capacity
-----------------------------------------------

Table \[tab:alphatozeta\] summarizes the parameters we used in Sec.(\[Case 1\]) and Sec.(\[Case 2\]). It should be clear that in Scenario 1, $\alpha$ and $\beta$ are constants while $\gamma$ and $\delta$ are time-dependent, whereas in Scenario 2, $\alpha$ and $\beta$ are time-dependent while $\gamma$ and $\delta$ are constants. $\eta$ and $\zeta$ are the same in both scenarios. Moreover, $\alpha$ and $\beta$ in Scenario 1 correspond to $\gamma$ and $\delta$ in Scenario 2, respectively.\
Figure \[fig:allTogetherAndTfunction\] shows the T-Function in Eq.(\[T-Function x1\]), together with recorded data from two references$^{\cite{T-Function},\cite{kaggle}}$, and the calculated carrying capacity in Eq.(\[carrying capacity\]).
The main idea in choosing the parameters was to match the value of the maximum carrying capacity with that of the saturated T-Function.\
As we observe, the carrying capacity is time independent, with a constant value roughly equal to 10.2 billion. This seems to make sense because the carrying capacity is a fixed number once the surrounding conditions in the environment (comparable to the parameters in Table \[tab:alphatozeta\] in our study) are all set. In other words, the carrying capacity of the human population was determined at the very beginning of the birth of Earth. ![T-Function over a shorter duration of time, compared with fitted data from two sources. $K(t)$ refers to the carrying capacity, which is a constant function in time with value $10.2$ billion. (Inset) Plot of the whole-range T-Function from Year 0 to Year 2200.[]{data-label="fig:allTogetherAndTfunction"}](allTogetherAndTFunction.eps){width="80.00000%"}

  Parameters   Scenario 1 in Sec.($\ref{Case 1}$)          Scenario 2 in Sec.($\ref{Case 2}$)
  ------------ ------------------------------------------- -------------------------------------------
  $\alpha$     23.0457                                     Eq.($\ref{alpha(t)}$)
  $\beta$      $\frac{\alpha}{e^{2}}\approx3.1189$         $\beta(t)=\zeta\alpha(t)$
  $\gamma$     Eq.($\ref{gamma(t)}$)                       23.0457
  $\delta$     $\delta(t)=\zeta\gamma(t)$                  $\frac{\gamma}{e^{2}}\approx3.1189$
  $\eta$       $\frac{\alpha}{\beta}=e^{2}\approx7.3891$   $\frac{\gamma}{\delta}=e^{2}\approx7.3891$
  $\zeta$      25                                          25

The lethal factors and $\gamma(t)$ as the time-varying functions
----------------------------------------------------------------

In Figure \[fig:lethal factors\], the predator is the value of the lethal factors that have negative effects on the human population. From Year 0 to approximately Year 2000, we notice that the predator had been significantly decreasing, down to a value lower than 7.384. Nevertheless, around Year 2000 the predator began increasing sharply, reaching 7.389 at about Year 2100 and remaining stable at that value even up to Year 2500.
The trend of this lethal function over time may be explained as follows: before Year 2000, thanks to the development of technology and the improvement of living circumstances, we humans had been reducing the “lethal factors”. After Year 2000, however, even though our technology and society keep improving, the lethal factors have been increasing rather than decreasing, possibly due to the misuse or overuse of technology and natural resources, which raises the amount of undesirable and harmful substances on Earth. ![Plotting of $x_{2}(t)$.[]{data-label="fig:lethal factors"}](x2.eps){width="80.00000%"} As shown in Figure \[fig:gamma\], $\gamma(t)$ takes quite small values, roughly ranging from $-1.0\times10^{-16}$ to $0.6\times10^{-16}$. Overall, the function decreased from Year 0 to Year 1900 down to the value $-1.0\times10^{-16}$, changed steeply (though only by a small amount) around that year, climbed up to $0.6\times10^{-16}$ around Year 2000, and then returned to roughly zero and remained there. ![Plotting of Eq.(\[gamma(t)\]) for $\gamma(t)$.[]{data-label="fig:gamma"}](gamma.eps){width="80.00000%"} The natural resources and $\alpha(t)$ as time-varying functions ------------------------------------------------------------------ From Figure \[fig:natural resources\], we see that the natural resources gradually increased from Year 0 to Year 1500, which could mean that we were excavating and making use of more and more natural resources during that period. This was followed by an abrupt increase between Year 1500 and Year 1950. Unfortunately, a dramatic drop followed from Year 1950 to Year 2100, which we may explain by the fact that we have almost run out of natural resources in modern times. The natural resources then remain stable thereafter.
![Plotting of $x_{1}(t)$.[]{data-label="fig:natural resources"}](x1.eps){width="80.00000%"} It is evident that the behavior found here is the same as the one in the previous subsection: abrupt changes happen around Year 2000. From our perspective, this may be because, over the course of time, the natural resources have been exploited, which also diminished the lethal factors, as shown in Figure \[fig:lethal factors\]. Because we assumed the population to be the predator, the result for the human population in this subsection should be the same as in the previous subsection. From the inset in Figure \[fig:allTogetherAndTfunction\], we know that the human population increases to approximately 11 billion between Year 2000 and Year 2500. Since our model gives the same result for the human population, the calculation of the natural resources as a function of time appears quite reasonable. As shown in Figure \[fig:alpha\], $\alpha(t)$ kept increasing from approximately $0.04\times10^{-16}$ to $1.4\times10^{-16}$ between Year 0 and Year 1900. It then declined dramatically from $1.4\times10^{-16}$ to $-0.56\times10^{-16}$ between Year 1900 and Year 2000. The behavior of $\alpha(t)$ is thus similar to that of $\gamma(t)$ (Figure \[fig:gamma\]) and $x_{1}(t)$ (Figure \[fig:natural resources\]); all of them change significantly around Year 2000. ![Plotting of Eq.(\[alpha(t)\]) for $\alpha(t)$.[]{data-label="fig:alpha"}](alpha.eps){width="80.00000%"} It is also apparent that Figure \[fig:lethal factors\] and Figure \[fig:natural resources\] are symmetric about the axis $y=7.389$, whereas Figure \[fig:gamma\] and Figure \[fig:alpha\] are symmetric about the axis $y=0.0$.
Also, comparing Figure \[fig:lethal factors\] through Figure \[fig:alpha\], it is evident that the dramatic changes occurred in almost the same period, around Year 1900 to Year 2000. It is interesting to ask what might have happened during that period to cause such dramatic differences. At first thought, it is an astonishing coincidence with several religious prophecies that major events would happen near Year 2000. For instance$^{\cite{near Y2K}}$, millennialists believe that in that period of time, following the final judgment and the future eternal state of the “World to Come”, the earth would welcome a Golden Age or Paradise. Similarly, Zoroastrianism states that each successive 1000-year period ends in total annihilation and cataclysm, until the end of the final millennial age, believed to be Year 2000, when a triumphant king of peace wins the fight against the evil spirit. Although most of the authors do not hold religious beliefs, we still believe that Year 2000 is a turning point for the human race. As a matter of fact, human activities cause most of global warming, and extreme climates are becoming ever more probable, all of which may be blamed on the over-exploitation of natural resources by humans. Technological development may only deteriorate the situation rather than provide solutions to the problem. Factors on limiting the carrying capacity ----------------------------------------- Nature should be the biggest limitation on the world’s carrying capacity, because human development and distribution are inevitably constrained by nature itself. Thus, the natural factors are chosen as the crucial factors in our discussion. With this in mind, land resources and energy are the first elements to be considered.\ However, careful consideration leads us to choose energy as the major factor limiting the carrying capacity.
We set land resources aside because the land problem can be effectively solved by advanced urban planning, as the Singaporean government has demonstrated. The population of Singapore is 6,028,912$^{\cite{SingaporeCountryMeter}}$ at the present time, and the area of Singapore is approximately 721.571 square kilometers$^{\cite{SingaporeWiki}}$. A simple calculation gives a population density of about 8355.258 people per square kilometer. The world population is approximately 7,714,576,923$^{\cite{WorldPopulationProjections}}$ at the moment. If we applied Singapore’s urban planning to accommodate the world population, we would need only about 923,320 square kilometers for all the people in the world, which is about 9.1% of the area of Europe.\ On the other hand, energy is an essential requirement now and in the future, and it may run out soon given the current high demand. Moreover, renewable and new energy resources are still at the very beginning of their development and application, which means we still have a long way to go before they can replace finite sources such as coal or oil. [\*[4]{}[c]{}]{} Year & Non-renewable (GWh) & Renewable (GWh) & Electricity Proportion\ 1990 & 9516471 & 2364747 & 13%\ 1995 & 10581821 & 2727262 & 14%\ 2000 & 12550522 & 2949714 & 15%\ 2005 & 14935155 & 3411933 & 16%\ 2010 & 17200288 & 4336559 & 17%\ 2015 & 18646323 & 5689433 & 18%\ 2016 & 18925573 & 6119497 & 19%\ Table \[tab:energy\] illustrates that non-renewable sources such as coal hold a bigger share of electricity generation than renewable sources such as hydro. Furthermore, the Electricity Proportion column shows the percentage of electricity in total energy consumption, indicating that renewable sources remain uncommon and play only a small role in total consumption.
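The back-of-the-envelope population-density numbers above can be verified in a few lines (the Europe area of roughly 10.18 million square kilometers is our own approximation, not taken from the cited sources):

```python
# Check of the Singapore-density thought experiment.
singapore_pop = 6_028_912
singapore_area_km2 = 721.571
density = singapore_pop / singapore_area_km2            # ~ 8355.258 people per km^2

world_pop = 7_714_576_923
area_needed_km2 = world_pop / density                   # ~ 923,320 km^2 at that density

europe_area_km2 = 10_180_000                            # assumed approximate area of Europe
fraction_of_europe = area_needed_km2 / europe_area_km2  # ~ 0.091, i.e. about 9.1%
```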
Also, the efficiency of renewable energy is far lower than that of non-renewable energy, so the latter may still play an important role in the development of the world. Possible ways for raising the carrying capacity of Earth -------------------------------------------------------- We consider developing underground cities, including efficient transportation of natural resources, as a possible way of raising the carrying capacity of the Earth. Based on our knowledge, the carrying capacity is mainly calculated from the available land area and the total population, and is affected by natural resources, namely freshwater and agriculture. Hence, we propose that if the total population stays fixed while the available land area on which buildings can be constructed is enlarged, the carrying capacity will likely increase. As mentioned, we can achieve this ideal situation by developing underground cities.\ Since only 29.1 percent$^{\cite{infoplease}}$ of our planet’s surface is land, there is not enough space for 7.7 billion people to live. Developing underground cities can therefore expand our land area, as humans are allowed to build beneath the extraordinarily limited land. By surveying whether the underground contains any mineral deposits or any dangers, namely magma, we can safely excavate a large area and develop a city there.\ What is more, living space is also a necessary part of land use, so urbanizing rural areas can also achieve the ideal situation we proposed. More importantly, urbanization refers to a population shift from rural to urban areas, accompanied by a significant amount of high-rise construction; living space for humans is thus extended upward. In this way, the problem of running out of living space will also be solved. This is not a rare phenomenon around the world.
As claimed by some studies$^{\cite{emporis}}$, South Korea has the largest number of high-rise buildings, approximately 16,739 in total; the country also possesses 229 skyscrapers. Take the tallest building in the world as another example: Burj Khalifa, located in Dubai, has 163 floors$^{\cite{skyscrapercenter}}$. As high-rises and skyscrapers are built in urban areas, the available living area increases with construction, and the carrying capacity keeps rising as well.\ On the other hand, because of what we have done to our environment, the natural resources on Earth will become scarce in the near future, which would severely affect the carrying capacity of the Earth. Hence, we need to discover more resources if we want to raise the carrying capacity. Scientists have been exploring space to find the resources we lack; they have already found plenty of the metals we are short of on the Moon and other celestial bodies. Moreover, they have discovered that some asteroids are carbon-rich, metallic, or composed of mineral-rich silicates. Since mineral reserves are diminishing, fewer minerals will remain available for human life; scientists therefore need to develop the space mining industry to discover more sources of minerals, given sufficient development of technology.\ Furthermore, as the amount of low-salinity water diminishes, less water will be available to support human life in the future. Hence, more wars over water resources are arising among nations; take Kyrgyzstan and Uzbekistan, for instance. With this in mind, if more countries become short of potable water, the carrying capacity of the Earth will clearly be reduced because of the lack of resources. Also, the major support of human life is food.
In modern times, the relationship between population and food is not balanced. From 2014 to 2016, the number of people suffering from starvation was approximately 7.95 million. Meanwhile, the grain output was 25.638 million tons in 2014 and 25.28 million tons in 2015, the highest outputs of recent years; yet many people still suffered from starvation despite such output. This is evidence that the current grain output is not enough to meet human demand. The main factor is soil degradation, caused by over-cultivation, over-exploitation, and chemical fertilizers, which decrease the fertility of the soil. To solve this problem fundamentally, people should practice reasonable organic farming; the traditional silkworm-breeding method, for example, combines agriculture, fishery, and the clothing industry. Moreover, we should promote reducing the use of chemical fertilizers or replacing them with organic fertilizers.\ From a more philosophical point of view, one may wonder why we should raise the carrying capacity at all, since a greater carrying capacity does not necessarily signify better living conditions on the Earth. There are many other critical issues to deal with in the world, such as poverty, starvation, and low educational standards, concentrated mainly in Africa and in third-world countries; why raise the carrying capacity of the Earth instead of solving these problems first? In support of this, there is said to be a kind of rodent that commits collective suicide to free up space for the rest of its family. If the Earth faces the issue of overpopulation, we may repeat the same mistake; we ought to learn from past failures rather than repeat them.
Consequently, we cannot help asking: is raising the carrying capacity of the Earth really the better way to allow more humans to live? Population hypothesis --------------------- In this subsection, we provide a hypothesis about the human population after it reaches its peak. Some authors$^{\cite{mice}}$ pointed out that crowding too many mice into one narrow living space brings problems to the whole population. This suggests that the factors affecting mouse mortality are emigration, resource shortage, inclement weather, disease, and predation, as listed in Table \[tab:mice\]. [c X]{} Factors & Explanations\ Emigration & In strange and less favorable habitats, the emigrants become more exposed to other mortality factors.\ Resource shortage & Shortages of shelter, other environmental resources, and associates lead to debilitation and dissatisfaction with the habitat that culminates in death.\ Inclement Weather & Any conditions of wind, rain, humidity or temperature which exceed the usual limits of tolerance increase the risk of death through debilitation.\ Disease & Abnormally high densities enhance the likelihood of the spread of disease to epidemic proportions.\ Predation & Through evolution, a species becomes associated with predators capable of killing some of its members.\ Based on this, we introduce our hypothesis of the population. First and foremost, we notice that the factors affecting the mouse population are similar to those affecting the human population, raising the question of under what circumstances humans would face the same destiny as the mice: humans would become rivals to each other, scrambling for natural resources and living materials, leading to a devastatingly dangerous situation for the whole human race.
We predicted that the human population would decrease because of the factors in Table \[tab:mice\]. Conclusions =========== With the rapid development of society, we have created more and more problems for our living environment, namely overpopulation, urbanization, global warming, as well as land desertification and other factors that may overload our Earth. Because of these problems, more and more studies focus on discussing which major factors affect the carrying capacity of the Earth. Yet most studies do not provide an exact carrying capacity together with the direct factors, because they lack a specific mathematical model that can determine the actual carrying capacity and its factors.\ With an understanding of the theory of the T-function and the modified Lotka-Volterra equations, we successfully constructed a model to calculate the carrying capacity of the Earth for humans. In our paper, we modified the Lotka-Volterra equations under the assumption that two of the four parameters in the traditional equations are time dependent. In the first place, we assumed that the human population (borrowed from the T-Function) plays the role of the prey, while all lethal factors that jeopardize the existence of the human race act as the predator. Although we could still calculate a time-dependent lethal function, the idea of treating the predator as all lethal factors combined was too general, making it impractical to recognize which lethal factors may be included. Hence, in the second part of the modified Lotka-Volterra equations, we exchanged the roles of the prey and the predator: this time, we treated the natural resources as the prey and the human population (still borrowed from the T-Function) as the predator.
In this case, we successfully calculated the natural resources as a function of time, and we also determined that the carrying capacity of the Earth is 10.2 billion people at the present time. Regarding the first part of the Lotka-Volterra equations, we considered the interactions of humans and the limiting factors, like prey and predator in a natural system, to simulate population growth affected by the other factors, in order to obtain a better estimate and find the right parameters to calculate the $K$ value, in comparison with the T-Function. The parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ describe the interactions of the two species, where $\alpha$ and $\beta$ are related to the prey (humans) while $\gamma$ and $\delta$ are related to the predator (harmful factors). Once the suitable value of each parameter is found, we gain a better understanding of the relationship between population growth and the influences of these factors.\ In addition, we used several methods to evaluate and analyze the factors that would cause the carrying capacity to decline. In the same way, we built models with these theories to ascertain the contemporary carrying capacity of the Earth under current conditions. Likewise, we determined what humans could do to increase the Earth’s carrying capacity for human life. We discussed the different factors that would limit the Earth’s carrying capacity for humans under current conditions; after our analysis, we found that only energy resources would have a decisive negative impact on the carrying capacity. We also discussed the factors that would raise the carrying capacity, and found that urbanization, by increasing usable land and living area, could increase the carrying capacity.\ Last but not least, we identified the solutions that we could take for raising the carrying capacity of our Earth under future conditions.
Statistics show that the balance between the number of people suffering from starvation and the grain output is deteriorating, and this might be an obstacle in the future. Therefore, given the development of organic agriculture, people ought to practice reasonable cultivation and use organic fertilizers comprehensively. In the same way, scientists should start to discover and explore new resources to be utilized before the resources on Earth run out, for example by finding water on other planets and developing the space mining industry. After taking these measures, the Earth’s carrying capacity could someday be raised, given significant development of organic cultivation together with technology for space exploration. Acknowledgement =============== We thank Escola Choi Nong Chi Tai in Macao PRC for their kindness in supporting this research project. [99]{} <https://www.eionet.europa.eu/gemet/en/concept/1198> <http://www.sustainablescale.org/ConceptualFramework/UnderstandingScale/MeasuringScale/CarryingCapacity.aspx> Cohen, Joel. “How Many People Can the Earth Support?” The New York Review, Oct. 8, 1998, pp. 29-31.\ <http://lab.rockefeller.edu/cohenje/assets/file/257bCohenHowManyPeopleCanEarthSupportNYRB1998.pdf> Pulliam, H.R., and N.M. Haddad. “Human population growth and the carrying capacity concept.” Bulletin of the Ecological Society of America, 1994, 75: 141-157. J.E. Cohen, “Population Growth and Earth’s Human Carrying Capacity”, Science, New Series, Volume 269, Issue 5222 (Jul. 21, 1995), 341-346. Mohammad Ali, “Sustainability Assessment: Context of Resource and Environmental Policy”, Department of Environmental Science and Management, North South University, Dhaka, Bangladesh, 2013, 13-29. Russell Hopfenberg, “Human carrying capacity is determined by food availability”, Population and Environment, Volume 25, Issue 2, pp. 109–117, November 2003. C.M. Evans and G.L.
Findley, “A new transformation for the Lotka-Volterra problem”, Journal of Mathematical Chemistry 25 (1999) 105-110. Rein Taagepera,“A world population growth model: Interaction with Earth’s carrying capacity and technology in limited space”, Technological Forecasting & Social Change 82 (2014) 34-41. Lotka-Volterra equations in wikipedia.\ <https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equations> <https://en.wikipedia.org/wiki/Man-eater> <https://www.kaggle.com/theworldbank/global-population-estimates> <https://en.wikipedia.org/wiki/Millennialism> <https://countrymeters.info/ct/Singapore> <https://zh.wikipedia.org/wiki/%E6%96%B0%E5%8A%A0%E5%9D%A1> <http://www.worldometers.info/world-population/world-population-projections/> <https://www.iea.org/statistics/?country=WORLD&year=2016&category=Electricity&indicator=ElecGenByFuel&mode=chart&dataTable=ELECTRICITYANDHEAT> <https://www.iea.org/statistics/?country=WORLD&year=2016&category=Energy%20consumption&indicator=TFCbySource&mode=chart&dataTable=BALANCES> <https://www.infoplease.com/world/general-world-statistics/profile-world-2016> <https://www.emporis.com/city/100429/seoul-south-korea> <http://www.skyscrapercenter.com/building/burj-khalifa/3> John B Calhoun, “Death Squared: The Explosive Growth and Demise of a Mouse Population.” Section on Behavioral Systems, Laboratory of Brain Evolution & Behavior, National Institute of Mental Health, 9000 Rockville Pike, Bethesda, Maryland 20014, USA.\ [ https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1644264/pdf/procrsmed00338-0007.pdf]( https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1644264/pdf/procrsmed00338-0007.pdf) [^1]: email: [weishan\_lee@yahoo.com](mailto:WSLEEemails)
--- abstract: 'We consider the inverse problem of determining spatially heterogeneous *absorption* and *diffusion* coefficients $\mu(x),D(x)$, from *a single measurement* of the *absorbed energy* $\operatorname{\mathcal{E}}(x)=\mu(x) {u}(x)$, where ${u}$ satisfies the elliptic partial differential equation $$-\nabla \cdot (D(x) \nabla {u}(x)) + \mu(x) {u}(x) = 0 \quad \text{in $\Omega \subset \mathbb{R}^N$}.$$ This problem, which is central in *quantitative photoacoustic tomography*, is in general ill-posed since it admits an infinite number of solution pairs. Using similar ideas as in [@NaeSch14], we show that when the coefficients $\mu,D$ are known to be piecewise constant functions, a unique solution can be obtained. For the numerical determination of $\mu,D$, we suggest a variational method based on an *Ambrosio-Tortorelli approximation* of a *Mumford-Shah-like functional*, which we implement numerically and test on simulated two-dimensional data.' author: - 'E. Beretta ([elena.beretta@polimi.it]{})' - 'M. Muszkieta ([monika.muszkieta@pwr.edu.pl]{})' - 'W. Naetar ([wolf.naetar@univie.ac.at]{})' - 'O. Scherzer ([otmar.scherzer@univie.ac.at]{})' date: 'October 5, 2015' title: A variational method for quantitative photoacoustic tomography with piecewise constant coefficients --- #### Keywords. {#keywords. .unnumbered} Quantitative photoacoustic tomography, mathematical imaging, inverse problems, Mumford–Shah functional #### AMS subject classifications. {#ams-subject-classifications. .unnumbered} 35R25, 35R30, 65J22, 65K10, 92C55 Quantitative photoacoustic tomography {#sec:introduction} ===================================== Introduction ------------ *Photoacoustic tomography* (PAT) is a medical imaging technique that combines electromagnetic excitation (in the visible spectrum) with ultrasound measurements. In a photoacoustic experiment, a translucent sample is illuminated by a laser pulse with wavelength $\lambda$ (which we assume to be fixed in this article).
The absorbed optical energy leads to thermal expansion, generating an ultrasound pressure wave $p(x,t)$ that can be measured outside the sample. Since the pulses used in PAT are very short, the complete energy is deposited almost instantaneously compared to travel times of acoustic waves, and the pressure wave $p$ can be assumed to have been generated by an *initial pressure* ${\mathcal{H}}(x)$, that is, it satisfies the wave equation $$\begin{aligned} \partial_{tt}p(x,t) - c^2(x) \Delta p(x,t) &= 0 \\ \partial_t p(x,0) &= 0 \\ p(x,0) &= {\mathcal{H}}(x) \end{aligned}$$ and $p|_{\operatorname{\mathcal{M}}}$, where $\operatorname{\mathcal{M}}$ denotes a *measurement surface*, can be obtained from ultrasound measurements [@CoxLauArrBea12; @CoxLauBea09; @KucKun08; @WanWu07]. By solving an *inverse problem for the wave equation* (see, e.g., [@KucKun08] for a review of inversion techniques), these ultrasound measurements can be used to estimate the *initial pressure* ${\mathcal{H}}(x)$. Since $$\label{eq:def_E} {\mathcal{H}}(x) = \Gamma(x) \operatorname{\mathcal{E}}(x) = \Gamma(x) \mu(x) {u}(x),$$ that is, ${\mathcal{H}}(x)$ is proportional to the *absorbed energy* $\operatorname{\mathcal{E}}(x)$, which is in turn proportional to the *optical absorption coefficient* $\mu(x)$ at the applied wavelength $\lambda$ and the local fluence $u(x)$ (the time-integrated laser power received at $x$), the initial pressure visualizes contrast in $\mu$. The constant of proportionality $\Gamma(x)$ is called *Grüneisen coefficient* (or *PA efficiency* since it describes the efficiency of conversion from absorbed energy to acoustic signal) [@CoxLauArrBea12; @CoxLauBea09]. In *quantitative photoacoustic tomography* (qPAT), the goal is to apply PAT to determine (inhomogeneous) optical material properties of the sample (which are of diagnostic interest).
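As a numerical illustration of the acoustic forward model above (a minimal one-dimensional sketch of our own, with constant sound speed and an ad-hoc Gaussian initial pressure, not the measurement geometry used in practice), the initial-value problem can be integrated with a standard leapfrog scheme; by d'Alembert's formula the initial pressure splits into two half-amplitude pulses traveling toward the boundary, where a detector would record them:

```python
import numpy as np

# 1D toy version of the acoustic forward problem: p_tt = c^2 p_xx with
# p(x, 0) = H(x) and p_t(x, 0) = 0, integrated by a leapfrog scheme.
n, c = 401, 1.0
h = 1.0 / (n - 1)
dt = 0.5 * h / c                          # CFL-stable time step
x = np.linspace(0.0, 1.0, n)
H = np.exp(-200.0 * (x - 0.5) ** 2)       # smooth initial pressure blob

def laplacian(p):
    out = np.zeros_like(p)
    out[1:-1] = (p[:-2] - 2.0 * p[1:-1] + p[2:]) / h**2
    return out

p_prev = H.copy()
p = H + 0.5 * (c * dt) ** 2 * laplacian(H)  # first step uses p_t(x, 0) = 0
for _ in range(300):
    p, p_prev = 2.0 * p - p_prev + (c * dt) ** 2 * laplacian(p), p
# After t ~ 0.376 the blob has split into two pulses of amplitude ~ 0.5
# centered near x ~ 0.124 and x ~ 0.876.
```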
To do so, an additional non-linear *inverse problem for light transport* has to be solved, since the fluence $u(x)$ is inhomogeneous and itself dependent on the optical properties of the sample [@CoxLauArrBea12; @CoxLauBea09]. While it is also possible to attack the acoustic and optical inverse problems simultaneously (see, e.g., [@BerBonHabPri14; @HalNeuRab15]), here, we assume that the acoustic part of the problem has been solved successfully, i.e., that (possibly noisy) data ${\mathcal{H}}^\delta \approx {\mathcal{H}}$ (with $\delta$ signifying the noise level) are available. In this article, we utilize the *diffusion approximation* (which is valid in highly scattering media [@CoxLauArrBea12; @WanWu07]) of the *radiative transfer equation* to model the fluence distribution. It is, however, also possible to use a radiative transfer model for qPAT (see [@DeCTraSej15; @PulCoxArrKaiTar15; @SarTarCoxArr13; @TarCoxKaiArr12; @YaoSunJia09]), albeit at the cost of increased computational and analytical complexity. The diffusion model (in a Lipschitz domain $\Omega \subset \mathbb{R}^N$) is given by $$\label{eq:diff_eq_dirichlet} \begin{aligned} -\nabla \cdot (D(x) \nabla {u}(x)) + \mu(x) {u}(x) &= 0 \quad \text{in $\Omega$}\\ {u}(x)|_{\partial \Omega} &= g(x). \end{aligned}$$ The parameter $D$ is called *diffusion coefficient*. The Dirichlet boundary data $g$ (which we assume to be continuous and known in this article) describes the illumination pattern. Note that this is a time-independent model, again due to the fact that energy is deposited almost instantaneously compared to time scales of the acoustic system. The difficulty of the inverse problem varies depending on which parameters of the model are assumed to be known. We present three inverse problems often considered in the literature. The hardest one is: $$\label{prob:problem3} \text{ Determine $(\mu,D,\Gamma)$ from measurements of ${\mathcal{H}}^\delta$. 
} \tag{P3}$$ Bal and Ren showed in [@BalRen11a] that for arbitrary coefficients $\mu,D,\Gamma \in W^{1,\infty}(\Omega)$, this problem is unsolvable, even if multiple measurements of ${\mathcal{H}}$ (with different known boundary illumination patterns $g$) are available. In [@NaeSch14], the authors showed that with a restriction to *piecewise constant parameters*, unique reconstruction of all three unknown parameters $\mu,D,\Gamma$ from multiple measurements is possible (under a condition on the directions of $\nabla {u_k}$, where $u_k$ is the fluence of the $k$th illumination pattern). Furthermore, an analytical reconstruction procedure was suggested and implemented numerically, which, unfortunately, is relatively sensitive to noise. Alberti and Ammari [@AlbAmm15] also established a unique reconstruction result, based on *morphological component analysis* (a sparsity approach), in a slightly more general setting (which assumes different degrees of smoothness of the coefficients and the fluence). They also provide numerical reconstructions, which were, however, not tested for noise sensitivity in the case of . To simplify the problem, it is often assumed that the Grüneisen coefficient $\Gamma$ is known or constant, which implies that the absorbed energy can be estimated with $\operatorname{\mathcal{E}}^\delta = \frac{{\mathcal{H}}^\delta}{\Gamma}\approx \operatorname{\mathcal{E}}$ (with $\delta$ again denoting the noise level). It remains to solve $$\label{prob:problem2} \text{ Determine $(\mu,D)$ from measurements of $\operatorname{\mathcal{E}}^\delta$. } \tag{P2}$$ If only a single measurement of $\operatorname{\mathcal{E}}^\delta$ is given, this inverse problem is also ill-posed (since it has infinitely many solution pairs, see [@CoxArrBea09; @NaeSch14; @ShaCoxZem11]).
However, in [@AlbAmm15], the authors were able to recover $\mu$ (independently of the light transfer model used) from a single measurement of $\operatorname{\mathcal{E}}$ (again using a *sparsity method*, assuming different degrees of smoothness of the coefficients and the fluence). In [@BalRen11a; @BalUhl10] it was shown that this problem is uniquely solvable if two measurements of $\operatorname{\mathcal{E}}$ (corresponding to well-chosen boundary illuminations $g_1,g_2$) are available. Numerically, this multi-illumination case was treated in [@BalRen11a; @GaoOshZha12; @RenGaoZhao13; @ShaCoxZem11; @TarCoxKaiArr12; @Zem10], using a multitude of different techniques, see also the review paper by Cox et al. [@CoxLauArrBea12]. The simplest case works under the assumption that the diffusion coefficient $D$ is also known: $$\label{prob:problem1} \text{ Determine $\mu$ from measurements of $\operatorname{\mathcal{E}}^\delta$. } \tag{P1}$$ This inverse problem has a unique solution even for a single measurement, which can be seen by substituting $\mu u = \operatorname{\mathcal{E}}^\delta$ in , providing the possibility to solve for $u$ [@BanBagVasRoy08]. For other (numerical) approaches, cf. [@CoxLauArrBea12]. To the authors' knowledge, this simplified problem is the only case for which practical viability (with experimental data) has been established both for phantoms [@YuaJia06] and biological samples [@SunSobJia09; @SunSobJia13]. Contributions of this article ----------------------------- In this article, we consider the problem using a *single measurement* for our reconstructions, a problem that is in general ill-posed. Similar to [@NaeSch14] (which treats with multiple measurements), a restriction to piecewise constant $\mu,D$ also proves to be useful for this problem.
In fact, as shown in Section \[sec:recovery\_pw\_const\], when the parameters $\mu,D$ are piecewise constant functions (and noise-free data are given), the inverse problem can be solved uniquely, without any further assumptions. In Section \[sec:mumford\_shah\_parameter\_detection\], we present a variational model for the reconstruction of piecewise constant $\mu,D$ from noisy data $\operatorname{\mathcal{E}}^\delta$ based on the *Ambrosio-Tortorelli approximation* of a *Mumford-Shah-like functional*. Compared to the two-step reconstruction process presented in [@NaeSch14] (introduced for , but applicable to ), which first detects the regions where the parameters are constant and then reconstructs the parameter values from jumps of the data and its derivatives, this variational approach is much more robust with respect to noise. This is mainly because the numerical approach presented in [@NaeSch14] requires almost perfect jump detection in the second derivatives of $\operatorname{\mathcal{E}}^\delta$ to obtain reasonable estimates of $\mu,D$ (since the jumps have to form a full partition of the domain $\Omega$), which is highly challenging in the presence of significant amounts of noise. This is not the case for the variational method presented here. Finally, a description of our implementation and numerical results can be found in Section \[sec:numerical\_results\]. Recovery of piecewise constant coefficients {#sec:recovery_pw_const} =========================================== In this section we show that piecewise constant parameters $\mu,D$ can be recovered uniquely from a single measurement of the absorbed energy $\operatorname{\mathcal{E}}(\mu,D)$.
In the following, let $(\Omega_m)_{m=1}^M$ be a partition of $\Omega \subset \mathbb{R}^N$ into open sets and $\mu,D$ piecewise constant on $(\Omega_m)_{m=1}^M$, that is, for $\Omega_m$ open and $\mu_m,D_m \in \mathbb{R}^+$ $$\label{eq:pw_const} \overline\Omega = \bigcup_{m=1}^M \overline\Omega_m, \enskip \mu = \sum_{m=1}^M \mu_m 1_{\Omega_m}, \enskip D = \sum_{m=1}^M D_m 1_{\Omega_m}.$$ Furthermore, for $k \in \mathbb{N}$, denote by $$J_k(f)=\Omega \setminus \bigcup \{ B \subset \Omega \mid B \text{ is open and } f \in C^k(B) \}$$ the discontinuities of a function $f \in L^\infty(\Omega)$ and its derivatives up to $k$-th order. First, we show that coefficient discontinuities can be recovered from the data $\operatorname{\mathcal{E}}$. \[prop:pw\_const\_prop\_1\] Let $\mu,D$ be of the form and $\operatorname{\mathcal{E}}=\operatorname{\mathcal{E}}(\mu,D)$ satisfy , weakly. Then we have $$\label{eq:prop_jumps_D2} J_0(\mu) \cup J_0(D) = J_2(\operatorname{\mathcal{E}}).$$ First, take an arbitrary open ball $B$ with $B \cap (J_0(\mu) \cup J_0(D)) = \emptyset$ and notice that, by interior elliptic regularity, ${u} \in C^\infty(B)$ and hence also $\operatorname{\mathcal{E}}\in C^\infty(B)$. It follows that $$J_0(\mu) \cup J_0(D) \supset J_2(\operatorname{\mathcal{E}}).$$ By the De Giorgi-Nash-Moser theorem [@GilTru01 Theorem 8.22], we have ${u} \in C^0(\overline\Omega)$, so $$\label{eq:mu_data_jump} J_0(\mu) = J_0(\operatorname{\mathcal{E}}).$$ Finally, as $u \in C^\infty(\Omega_m), \, m=1,\ldots,M$, holds strongly in all $\Omega_m$, and we get $$\label{eq:diff_eq_scalar} \begin{aligned} -D_m \Delta {u} + \mu_m {u} &= 0 \quad \text{in $\Omega_m$} \\ -D_m \Delta \operatorname{\mathcal{E}}+ \mu_m \operatorname{\mathcal{E}}&= 0 \quad \text{in $\Omega_m$} \end{aligned}$$ since $\mu_m,D_m$ are scalars. 
Hence, $\Delta \operatorname{\mathcal{E}}= \frac{\mu}{D} \operatorname{\mathcal{E}}= \frac{\mu^2}{D} {u}$ in $\bigcup_{m=1}^M \Omega_m$, which shows that $\Delta \operatorname{\mathcal{E}}$ cannot be continuous where $\frac{\mu^2}{D}$ jumps (since ${u} \in C^0(\overline\Omega)$), hence $$J_0\left(\frac{\mu^2}{D}\right) \subset J_2(\operatorname{\mathcal{E}}).$$ The Proposition follows from $J_0(\mu) \cup J_0\left(\frac{\mu^2}{D}\right) = J_0(\mu) \cup J_0(D)$. \[rem:edges\_first\_order\] The proof of Proposition \[prop:pw\_const\_prop\_1\] also shows that jumps in $\mu$ and $D$ have different effects on the data $\operatorname{\mathcal{E}}$. By , the jumps of $\mu$ can be recovered from those in $\operatorname{\mathcal{E}}$. Jumps of $D$ (not coinciding with jumps of $\mu$) on the other hand, are smoothed in the data and have to be obtained from the derivatives of $\operatorname{\mathcal{E}}$. In particular, the discontinuities of $\Delta \operatorname{\mathcal{E}}$ suffice to find the jumps in $D$. Furthermore, note that from discontinuities of $\nabla \operatorname{\mathcal{E}}$ (or, equivalently, $|\nabla \operatorname{\mathcal{E}}|^2$), one can, in general, only recover the part of the jumps of $D$ where $\nabla {u}(x) \cdot \nu(x) \neq 0$ (where $\nu$ denotes a normal vector to the hyper-surface of discontinuity in $D$), as $\nabla {u}$ may be continuous across parts of the jump set of $D$ where $\nabla {u}(x)$ is tangential. To see this, consider for instance the case of a partition consisting of a homogeneous region $\Omega_1$ with (non-touching) smooth inclusions $\Omega_2,\ldots,\Omega_M$. 
In this case, we have ${u} \in C^\infty(\overline\Omega_m)$ for all $m=1,\ldots,M$ (due to a result of Li and Nirenberg, see [@LiNir03]) and the interface conditions $$\label{eq:interface_cond} D_m \nabla {u}|_{\Omega_m} \cdot \nu = D_1 \nabla {u}|_{\Omega_1} \cdot \nu, \quad \nabla {u}|_{\Omega_m} \cdot \tau = \nabla {u}|_{\Omega_1} \cdot \tau \quad$$ hold point-wise on $\partial\Omega_m$ for $m=2,\ldots,M$, where $\nu,\tau$ denote vectors normal and tangential to $\partial\Omega_m$ (see, e.g., [@NaeSch14] for a derivation). Now, let $$\Lambda:=\{x \in \bigcup_{m=2}^M \partial\Omega_m \mid \nabla {u}(x) \cdot \nu(x) \neq 0\}.$$ We have $J_0(D) \cap \Lambda \subset J_1({u})$ since, by the interface conditions , $\nabla {u}$ cannot be continuous across $J_0(D) \cap \Lambda$ (it changes length). On the other hand, if we take an open ball $B \subset \Omega_k \cup \Omega_1$ (for some $2 \leq k \leq M$) with $B \cap J_0(D) \cap \Lambda = \emptyset$, we have, again by , $$\label{eq:nabla_phi_cont} \nabla {u}|_{\Omega_k}=\nabla {u}|_{\Omega_1} \quad \text{ on $\partial\Omega_k \cap B$}$$ since either $D_k=D_1$ or $\nabla {u}|_{\Omega_k \cap B} \cdot \nu = \nabla {u}|_{\Omega_1 \cap B} \cdot \nu = 0$. As we have ${u} \in C^\infty(\overline\Omega_k)$, ${u} \in C^\infty(\overline\Omega_1)$, implies ${u} \in C^1(B)$ and therefore $J_1({u}) \subset \overline{J_0(D) \cap \Lambda}$, hence $$\overline{J_1({u})} = \overline{J_0(D) \cap \Lambda}.$$ This shows that, in general, discontinuities in the second derivatives of the data $\operatorname{\mathcal{E}}$ have to be identified in order to get the whole jump set of $D$. Once the partition $(\Omega_m)_{m=1}^M$ is known, the coefficients $\mu,D$ can be recovered from the jumps in $\operatorname{\mathcal{E}}, \frac{\Delta \operatorname{\mathcal{E}}}{\operatorname{\mathcal{E}}}$ and the boundary values of ${u}$. \[prop:pw\_const\_prop\_2\] Let $\mu,D$ be of the form with $(\Omega_m)_{m=1}^M$ known.
Furthermore, let $\operatorname{\mathcal{E}}$ satisfy , for a known boundary illumination $g$. Then, $\mu$ and $D$ can be determined uniquely from $\operatorname{\mathcal{E}}$. Let $x \in \partial\Omega_m \cap \partial\Omega_n$ for some $m,n \in \{1,\ldots,M\}$. Since ${u} \in C^0(\overline\Omega)$, $$\label{eq:mu_jump} \frac{\lim_{y \to x} \operatorname{\mathcal{E}}(y)|_{\Omega_m}}{\lim_{z \to x} \operatorname{\mathcal{E}}(z)|_{\Omega_n}}= \frac{\mu_m {u}(x)}{\mu_n {u}(x)} = \frac{\mu_m}{\mu_n}.$$ Furthermore, take $x \in \partial\Omega \cap \partial\Omega_k$ (for some $k \in \{1,\ldots,M\}$). We have $$\lim_{y \to x} \frac{\operatorname{\mathcal{E}}(y)|_{\Omega_k}}{g(x)}=\mu_k \lim_{y \to x} \frac{{u}(y)|_{\Omega_k}}{g(x)} = \mu_k.$$ Starting in $\Omega_k$ and using on all interfaces, we recover all $\mu_m, \,m=1,\ldots,M$. Finally, to obtain $D_m, \, m=1,\ldots,M$, we use , that is, $$\label{eq:mu_D_ratio} \frac{\mu_m}{D_m}=\frac{\Delta \operatorname{\mathcal{E}}(y)}{\operatorname{\mathcal{E}}(y)} \quad \text{for all $y \in \Omega_m$}.$$ If $g$ is not known, Proposition \[prop:pw\_const\_prop\_2\] can be used to determine the parameters $\mu,D$ up to a constant. A Mumford-Shah-like functional for qPAT {#sec:mumford_shah_parameter_detection} ======================================= In Section \[sec:recovery\_pw\_const\], we showed that piecewise constant absorption and diffusion coefficients $\mu,D$ can be recovered from noise-free data $\operatorname{\mathcal{E}}$ by an analytical procedure which first determines the coefficient jumps, i.e., the partition $(\Omega_m)_{m=1}^M$, and then the numerical values $(\mu_m,D_m)_{m=1}^M$. In [@NaeSch14], such a two-step approach was implemented numerically (for the problem ). In the presence of significant amounts of noise in the data, however, such an approach is infeasible, since it requires the detection of jumps of derivatives up to second order of the data $\operatorname{\mathcal{E}}^\delta$.
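The interior identity $\frac{\mu_m}{D_m}=\frac{\Delta \operatorname{\mathcal{E}}}{\operatorname{\mathcal{E}}}$ used in the last step of the proof is easy to verify numerically. The following is a minimal 1D sketch of our own (all values are ad hoc, not from the paper): on a single homogeneous region, the fluence $u=\cosh(kx)$ with $k=\sqrt{\mu/D}$ solves $-Du''+\mu u=0$, and the ratio of the discrete Laplacian of the data to the data recovers $\mu/D$.

```python
import numpy as np

# Numerical check of the interior identity mu/D = (Laplacian E)/E on a single
# homogeneous region (a 1D sketch of our own; all values are ad hoc). The
# fluence u = cosh(k x) with k = sqrt(mu/D) solves -D u'' + mu u = 0.
mu, D = 2.0, 0.5
k = np.sqrt(mu / D)
x = np.linspace(0.1, 0.9, 401)
h = x[1] - x[0]

u = np.cosh(k * x)      # fluence in the region
E = mu * u              # absorbed energy E = mu * u

# Second-order finite-difference Laplacian of the data E
lap_E = (E[2:] - 2.0 * E[1:-1] + E[:-2]) / h**2

# Recover mu/D pointwise from the data alone
recovered = float(np.mean(lap_E / E[1:-1]))
print(recovered)        # close to mu/D = 4.0
```

Note that the recovery uses only the data $\operatorname{\mathcal{E}}$, not the fluence $u$ itself, which is exactly what makes the formula usable in the reconstruction.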
In particular, jumps may remain partially undetected (e.g., if the edge detection has to be restricted to first derivatives due to noise, see Remark \[rem:edges\_first\_order\] and the numerical examples in Section \[sec:numerical\_results\]), leading to an incomplete estimated partition and therefore highly erroneous parameter estimates. To overcome this problem, we propose a variational approach favouring piecewise constant solutions that estimates the numerical values of piecewise constant $\mu,D$ and their jumps at the same time. There are several methods for piecewise constant regularization of inverse problems. For instance, a popular class of methods is based on the *level set method* [@OshSet88] or variations of it, e.g., [@BurOsh05; @ChaVes01; @DeCLeiTai09; @TaiCha04; @TaiLi06]. This approach has been suggested for qPAT (using the radiative transfer model) in [@DeCTraSej15]; however, it was only tested using multiple measurements. In this article, we use an *Ambrosio-Tortorelli* approximation of a *Mumford-Shah-like* functional (which was first suggested for *electrical impedance tomography* in [@RonSan01]). The main advantage of this approximation is that we can utilize incomplete jump information (obtained, for instance, from jumps of the data or other means) to initialize the minimization procedure (see Section \[sec:mumford\_shah\_minimization\]). Numerically, this leads to faster convergence (and in some cases improved minimizers). Additionally, the number of segments does not have to be known in advance (in contrast to *multiple level set methods*) and the minimization with respect to the jump indicator functions is a simple elliptic problem.
We want to minimize the Mumford-Shah-like functional $$\label{eq:qpat_ms_funct} \begin{aligned} \operatorname{\mathcal{F}}(\mu,D,K_\mu,K_D) &= \frac{1}{2} {\left\| \operatorname{\mathcal{E}}(\mu,D)-\operatorname{\mathcal{E}}^\delta \right\|}_{L^2(\Omega)}^2 \\ &+ \frac{\alpha_\mu}{2} \int_{\Omega \setminus K_\mu} |\nabla \mu|^2 \,dx + \frac{\alpha_D}{2} \int_{\Omega \setminus K_D} |\nabla D|^2 \,dx \\ &+ \beta_\mu H^{N-1}(K_\mu) + \beta_D H^{N-1}(K_D), \end{aligned}$$ where $H^{N-1}$ is the $(N-1)$-dimensional Hausdorff-measure and $\operatorname{\mathcal{E}}(\mu,D)$ the operator that maps the (unknown) parameters $\mu,D$ to the measurements $\operatorname{\mathcal{E}}$ satisfying ,. The minimum is taken over all $\mu,D$ in suitable (that is, point-wise bounded from below and above) subsets of $W^{1,2}(\Omega \setminus K_\mu), W^{1,2}(\Omega \setminus K_D)$ and all closed sets $K_\mu,K_D \subset \Omega$ (the jump sets of the coefficients $\mu,D$). Functionals of this type, first introduced by Mumford and Shah for image denoising and segmentation in [@MumSha89], have been applied to a wide range of inverse problems, see, e.g., [@EseShe02; @Kla11; @KreRie14; @RamRin07; @RamRin10; @RonSan01]. The basic idea behind the functional is as follows.
The *discrepancy term* $$\frac{1}{2} {\left\| \operatorname{\mathcal{E}}(\mu,D)-\operatorname{\mathcal{E}}^\delta \right\|}_{L^2(\Omega)}^2$$ forces minimizing coefficients $\hat\mu,\hat D$ to be close (in $L^2$-sense) to a solution of the inverse problem $\operatorname{\mathcal{E}}(\mu,D)=\operatorname{\mathcal{E}}^\delta$ (of which there are infinitely many), while the *regularization terms* $$\frac{\alpha_\mu}{2} \int_{\Omega \setminus K_\mu} |\nabla \mu|^2 \,dx + \frac{\alpha_D}{2} \int_{\Omega \setminus K_D} |\nabla D|^2 \,dx + \beta_\mu H^{N-1}(K_\mu) + \beta_D H^{N-1}(K_D)$$ force both the variation of $\hat\mu,\hat D$ (outside their jump sets $\hat K_\mu,\hat K_D$) and the hyper-surface area of $\hat K_\mu,\hat K_D$ to be small. The regularization parameters $\alpha_\mu,\alpha_D$ control the amount of continuous variation the parameters may have, whereas $\beta_\mu,\beta_D$ control the complexity of the coefficients' jump sets $K_\mu,K_D$. Note that, while minimizers of this functional can in general have some continuous variation, they have to be close to piecewise constant for large values of $\alpha_\mu,\alpha_D$. In fact, the Ambrosio-Tortorelli approximation of the functional , which will be introduced in Section \[sec:mumford\_shah\_approximation\], can also be used for piecewise constant regularization (in the limit $\alpha_\mu,\alpha_D \to \infty$, see Remark \[rem:piecewise\_constant\_limit\]). Existence of minimizers {#sec:mumford_shah_existence} ----------------------- Existence of minimizers of functionals like (which are, in general, not unique) was first established in [@DeGCarLea89] for image segmentation and in [@RonSan01] for regularization of non-linear operator equations.
To ensure that $\operatorname{\mathcal{E}}(\mu,D)$ is well-defined and that $\operatorname{\mathcal{F}}$ has a minimizer, point-wise bounds have to be enforced (see [@JiaMaaPag14]), so we have to restrict the coefficients to $X_a^b(\Omega) = \{ f \in L^\infty(\Omega)\colon a \leq f \leq b \text{ a.e. in }\Omega\}$ for some $0 < a,b < \infty$ a priori. We also require the following lemma; the proof follows the ideas in [@EggSchl10]. \[lem:continuity\_L2\_E\] The non-linear measurement operator $$\operatorname{\mathcal{E}}\colon X_a^b(\Omega)^2 \subset L^2(\Omega)^2 \to L^2(\Omega), \ \operatorname{\mathcal{E}}(\mu,D) = \mu \,{u}(\mu,D),$$ where ${u}(\mu,D)$ solves , is continuous. Let $(\mu_n,D_n) \to (\mu,D)$ in $X_a^b(\Omega)^2 \subset L^2(\Omega)^2$. Furthermore, let ${u}_n := {u}(\mu_n,D_n)$ and ${u} := {u}(\mu,D)$ be solutions of corresponding to the given coefficients. By the usual energy estimates for (cf. [@Eva98 Chapter 6, Theorem 2]) we have $${\left\| {u}_n \right\|}_{W^{1,2}(\Omega)} \leq C(a,b,\Omega) {\left\| g \right\|}_{L^2(\partial\Omega)}.$$ Hence, there exists a (re-labeled) subsequence $({u}_k)_{k \in \mathbb{N}}$ with ${u}_k \rightharpoonup \overline{u}$ weakly in $W^{1,2}(\Omega)$ for some $\overline{u} \in W^{1,2}(\Omega)$. From the weak form of we get for all $v \in W^{1,\infty}(\Omega)$ and $k \in \mathbb{N}$ $$\begin{aligned} &{\left\langle D \nabla( {u}_k-{u}), \nabla v \right\rangle}_{L^2(\Omega)^2} + {\left\langle \mu({u}_k-{u}), v \right\rangle}_{L^2(\Omega)} \\ &\hspace{0.2\linewidth} = {\left\langle (D-D_k) \nabla {u}_k, \nabla v \right\rangle}_{L^2(\Omega)^2} + {\left\langle (\mu-\mu_k) {u}_k, v \right\rangle}_{L^2(\Omega)} \\ &\hspace{0.2\linewidth} \leq C({\left\| D_k-D \right\|}_{L^2(\Omega)} + {\left\| \mu_k-\mu \right\|}_{L^2(\Omega)}) {\left\| g \right\|}_{L^2(\partial\Omega)} {\left\| v \right\|}_{W^{1,\infty}(\Omega)}.
\end{aligned}$$ Taking $k \to \infty$ and using the fact that the left side acts as a bounded linear functional on ${u}_k \in W^{1,2}(\Omega)$, we obtain for all $v \in W^{1,\infty}(\Omega)$ $${\left\langle D \nabla(\overline{u}-{u}), \nabla v \right\rangle}_{L^2(\Omega)^2} + {\left\langle \mu(\overline{u}-{u}), v \right\rangle}_{L^2(\Omega)} = 0.$$ By density of $W^{1,\infty}(\Omega) \subset W^{1,2}(\Omega)$, this implies $\overline{u}={u}$. Since the same argument also holds for every subsequence of the original sequence $({u}_n)_{n \in \mathbb{N}}$, we get ${u}_n \rightharpoonup {u}$ weakly in $W^{1,2}(\Omega)$ and thus ${u}_n \to {u}$ strongly in $L^2(\Omega)$ . Finally, since ${u} \in L^\infty(\Omega)$ by the maximum principle, continuity of $\operatorname{\mathcal{E}}$ follows from $${\left\| \mu_n {u}_n - \mu {u} \right\|}_{L^2(\Omega)} \leq {\left\| \mu_n-\mu \right\|}_{L^2(\Omega)} {\left\| {u} \right\|}_{L^\infty(\Omega)} + {\left\| {u}_n-{u} \right\|}_{L^2(\Omega)} {\left\| \mu_n \right\|}_{L^\infty(\Omega)}.$$ \[rem:continuity\_Lp\_E\] Since ${\left\| f \right\|}_{L^2}^2 \leq {\left\| f \right\|}_{L^1(\Omega)} {\left\| f \right\|}_{L^\infty(\Omega)}$ for all $f \in X_a^b(\Omega)$, the operator $\operatorname{\mathcal{E}}\colon X_a^b(\Omega)^2 \subset L^1(\Omega)^2 \to L^2(\Omega)$ is also continuous. Following [@Amb90; @RonSan01], we show the existence of minimizers of a weak form of $\operatorname{\mathcal{F}}$ defined for coefficients $\mu,D$ in the space $\operatorname{\mathcal{SBV}}(\Omega)$, the *special functions of bounded variation*, which may have jump discontinuities. For a quick introduction, see Appendix \[sec:sbv\_introduction\]. 
We obtain the functional $\overline{\operatorname{\mathcal{F}}}\colon X_a^b(\Omega)^2 \cap \operatorname{\mathcal{SBV}}(\Omega)^2 \to \mathbb{R}$, $$\label{eq:qpat_ms_funct_weak} \begin{aligned} \overline{\operatorname{\mathcal{F}}}(\mu,D) &= \frac{1}{2} {\left\| \operatorname{\mathcal{E}}(\mu,D)-\operatorname{\mathcal{E}}^\delta \right\|}_{L^2(\Omega)}^2 + \frac{\alpha_\mu}{2} \int_{\Omega} |\nabla \mu|^2 \,dx \\ &+ \frac{\alpha_D}{2} \int_{\Omega} |\nabla D|^2 \,dx + \beta_\mu \,H^{N-1}[S(\mu)] + \beta_D \,H^{N-1}[S(D)], \end{aligned}$$ where $S(f)$ denotes the *approximate discontinuity set* of a Lebesgue-measurable function $f$ and $\nabla \mu, \nabla D$ the density of the absolutely continuous part (with respect to the Lebesgue measure) of the respective distributional gradients (see Appendix \[sec:sbv\_introduction\]). In this setting, the existence of minimizers can be established using the direct method and the $\operatorname{\mathcal{SBV}}$ compactness theorem. \[prop:existence\_minimizers\_ms\] The weak Mumford-Shah-like functional $\overline{\operatorname{\mathcal{F}}}$ has at least one minimizer $(\hat \mu, \hat D)$ in $X_a^b(\Omega)^2 \cap \operatorname{\mathcal{SBV}}(\Omega)^2$. Let $(\mu_n,D_n) \in X_a^b(\Omega)^2 \cap \operatorname{\mathcal{SBV}}(\Omega)^2$ be a minimizing sequence of the functional . Clearly, we have for all $n \in \mathbb{N}$ $$\begin{aligned} &{\left\| \mu_n \right\|}_{L^\infty(\Omega)} + \int_\Omega |\nabla \mu_n|^2 \,dx + H^{N-1}[S(\mu_n)] \leq C < \infty \\ &{\left\| D_n \right\|}_{L^\infty(\Omega)} + \int_\Omega |\nabla D_n|^2 \,dx + H^{N-1}[S(D_n)] \leq C < \infty.
\end{aligned}$$ By the $\operatorname{\mathcal{SBV}}$-compactness theorem (see Section \[sec:sbv\_introduction\]), and using Lemma \[lem:continuity\_L2\_E\], first applied to $\mu_n$, then to the obtained subsequence of $D_n$, there exist $(\hat \mu,\hat D) \in \operatorname{\mathcal{SBV}}(\Omega)^2$ and a subsequence $(\mu_k,D_k)_{k \in \mathbb{N}}$ (re-labeled) with $$\begin{aligned} (\mu_k,D_k) &\to (\hat\mu,\hat D) \text{ strongly in } L^1_{\text{loc}}(\Omega)^2 \\ (\nabla \mu_k,\nabla D_k) &\rightharpoonup (\nabla \hat\mu,\nabla \hat D) \text{ weakly in } L^2(\Omega;\mathbb{R}^N)^2 \\ H^{N-1}[S(\hat\mu)] &\leq \liminf_{k \to \infty} H^{N-1}[S(\mu_k)] \\ H^{N-1}[S(\hat D)] &\leq \liminf_{k \to \infty} H^{N-1}[S(D_k)]. \end{aligned}$$ We also have $(\hat\mu,\hat D) \in X_a^b(\Omega)^2$ since $X_a^b(\Omega)^2$ is closed in $L^1_{\text{loc}}(\Omega)^2$. Furthermore, it is a minimizer of $\overline\operatorname{\mathcal{F}}$ since the $L^2(\Omega)$-norm is weakly lower semi-continuous and $\operatorname{\mathcal{E}}\colon X_a^b(\Omega)^2 \subset L^1(\Omega)^2 \to L^2(\Omega)$ is continuous. If the minimizer $(\hat\mu,\hat D)$ has *essentially closed* jump sets $S(\hat\mu),S(\hat D)$ (that is, taking the closure of the jump sets does not increase their $H^{N-1}$-measure) the quadruple $\left( \hat\mu,\hat D,\overline{S(\hat\mu)},\overline{S(\hat D)} \right)$ also minimizes the strong Mumford-Shah functional among all $\mu \in X_a^b(\Omega) \cap W^{1,2}(\Omega \setminus K_\mu)$, $D \in X_a^b(\Omega) \cap W^{1,2}(\Omega \setminus K_D)$ and $K_\mu, K_D$ closed. While essential closedness of jump sets of Mumford-Shah-minimizers can be established under certain conditions on the discrepancy term [@DeGCarLea89; @JiaMaaPag14; @RonSan01], this is beyond the scope of this article. Approximation {#sec:mumford_shah_approximation} ------------- Using the approach of Ambrosio and Tortorelli (cf.
[@AmbTor92]), one can obtain functionals $$\operatorname{\mathcal{F}}_\epsilon\colon X_a^b(\Omega)^2 \cap W^{1,2}(\Omega)^2 \times X_0^1(\Omega)^2 \cap W^{1,2}(\Omega)^2 \to \mathbb{R}$$ that approximate the functional $\overline\operatorname{\mathcal{F}}$ from and are easier to minimize: $$\label{eq:qpat_ms_at_funct} \begin{aligned} \operatorname{\mathcal{F}}_\epsilon(\mu,D&,v_\mu,v_D) = \frac{1}{2} {\left\| \operatorname{\mathcal{E}}(\mu,D)-\operatorname{\mathcal{E}}^\delta \right\|}_{L^2(\Omega)}^2 \\ &+ \frac{\alpha_\mu}{2} \int_\Omega (v_\mu^2+\zeta_\mu) |\nabla \mu|^2 \,dx + \frac{\alpha_D}{2} \int_{\Omega} (v_D^2+\zeta_D) |\nabla D|^2 \,dx \\ &+ \beta_\mu \int_\Omega \left(\epsilon |\nabla v_\mu|^2 + \frac{(v_\mu-1)^2}{4\epsilon} \right) dx \\ &+ \beta_D \int_\Omega \left(\epsilon |\nabla v_D|^2 + \frac{(v_D-1)^2}{4\epsilon} \right) dx. \end{aligned}$$ It is well-known that, for all $\alpha,\beta > 0$ and $\zeta=o(\epsilon)$, $$\alpha \int_{\Omega} (v^2+\zeta) |\nabla f|^2 + \beta \left(\epsilon |\nabla v|^2 + \frac{(v-1)^2}{4\epsilon} \right) dx \underset{\epsilon \to 0}{\longrightarrow} \alpha \int_{\Omega} |\nabla f|^2 \,dx + \beta \,H^{N-1}[S(f)]$$ in the sense of $\Gamma$-convergence in $L^1(\Omega)$ (formally, the latter has to be extended to an additional variable, see [@AmbTor92; @RonSan01]). Since the data term is continuous in $L^1(\Omega)$ and $\Gamma$-convergence is stable under continuous perturbations, we thus get $\operatorname{\operatorname{\Gamma-lim}}_{\epsilon \to 0} \operatorname{\mathcal{F}}_\epsilon = \overline\operatorname{\mathcal{F}}$ and therefore $L^1(\Omega)$-convergence of (a subsequence of) $\operatorname{\mathcal{F}}_\epsilon$-minimizers (which can be shown to lie in a compact set [@AmbTor92]) to $\overline\operatorname{\mathcal{F}}$-minimizers. The existence of a minimizer of can be established using the direct method. 
\[prop:existence\_minimizers\_at\] The functional $\operatorname{\mathcal{F}}_\epsilon$ has at least one minimizer $(\hat \mu, \hat D, \hat{v}_\mu, \hat{v}_D)$ in $X_a^b(\Omega)^2 \cap W^{1,2}(\Omega)^2 \times X_0^1(\Omega)^2 \cap W^{1,2}(\Omega)^2$. $\operatorname{\mathcal{F}}_\epsilon$ is coercive with respect to the semi-norm $$|\mu|_{W^{1,2}(\Omega)} + |D|_{W^{1,2}(\Omega)} + {\left\| v_\mu \right\|}_{W^{1,2}(\Omega)} + {\left\| v_D \right\|}_{W^{1,2}(\Omega)},$$ which, combined with the restriction $(\mu,D) \in X_a^b(\Omega)^2$, shows that a minimizing sequence is bounded in $W^{1,2}(\Omega)^4$, so it contains a subsequence that converges weakly to $(\hat \mu, \hat D, \hat{v}_\mu, \hat{v}_D)$ in $W^{1,2}(\Omega)^4$ (and thus strongly in $L^2(\Omega)^4$). Moreover, note that the spaces $X_a^b(\Omega)$ and $X_0^1(\Omega)$ are closed in $L^2(\Omega)$. The Proposition follows since the discrepancy term is continuous in $L^2(\Omega)^2$ and the regularization terms are weakly lower semi-continuous in $W^{1,2}(\Omega)^4$ (cf. [@Dac08 Theorem 1.13] for the $\alpha$-terms). \[rem:piecewise\_constant\_limit\] If we vary $\alpha_\mu,\alpha_D$ with $\epsilon$ and take $$\alpha_\mu,\alpha_D \underset{\epsilon \to 0}{\longrightarrow} \infty$$ the minimizers of $\operatorname{\mathcal{F}}_\epsilon$ converge to piecewise constant (in $\operatorname{\mathcal{SBV}}$-sense, that is, with gradient vanishing almost everywhere) functions $(\hat \mu,\hat D)$ that minimize $$\widetilde{\operatorname{\mathcal{F}}}(\mu,D) = \frac{1}{2} {\left\| \operatorname{\mathcal{E}}(\mu,D)-\operatorname{\mathcal{E}}^\delta \right\|}_{L^2(\Omega)}^2 + \beta_\mu \,H^{N-1}[S(\mu)] + \beta_D \,H^{N-1}[S(D)]$$ among all piecewise constant $\mu,D$. A proof (for the single variable case) can be found in [@AmbTor92]. This explains, in light of Section \[sec:recovery\_pw\_const\], why this type of regularization is useful for qPAT.
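The mechanism behind the Ambrosio-Tortorelli terms can be made concrete with a 1D discretization (a sketch entirely of our own; the grid, the parameter values, and the helper `at_energy` are ad hoc, and the data term is omitted): for a step function $\mu$, an edge indicator $v$ that dips to zero at the jump has a far lower energy than $v \equiv 1$, because the $(v^2+\zeta)|\nabla\mu|^2$ term is switched off where the gradient blows up, at the modest price of the $\epsilon|\nabla v|^2 + (v-1)^2/(4\epsilon)$ term.

```python
import numpy as np

# 1D discrete evaluation of the Ambrosio-Tortorelli regularization terms
# (a sketch of our own; grid, parameters and the helper at_energy are ad hoc,
# and the data term is omitted).
def at_energy(f, v, alpha, beta, eps, zeta, h):
    """alpha*int (v^2+zeta)|f'|^2 + beta*int (eps|v'|^2 + (v-1)^2/(4 eps))."""
    df = np.diff(f) / h
    dv = np.diff(v) / h
    vm = 0.5 * (v[1:] + v[:-1])                 # v at cell midpoints
    grad_term = alpha * np.sum((vm**2 + zeta) * df**2) * h
    edge_term = beta * np.sum(eps * dv**2) * h \
        + beta * np.sum((v - 1.0)**2 / (4.0 * eps)) * h
    return grad_term + edge_term

n, h = 201, 1.0 / 200
x = np.linspace(0.0, 1.0, n)
eps = 0.02
f = np.where(x < 0.5, 1.0, 2.0)                 # one jump at x = 0.5

v_flat = np.ones(n)                             # indicator ignoring the jump
v_dip = 1.0 - np.exp(-np.abs(x - 0.5) / (2 * eps))  # dips to 0 at the jump

e_flat = at_energy(f, v_flat, 1.0, 1.0, eps, 1e-6, h)
e_dip = at_energy(f, v_dip, 1.0, 1.0, eps, 1e-6, h)
print(e_flat, e_dip)    # the dipping indicator has much lower energy
```

As $\epsilon \to 0$, the cost of such a dip concentrates to roughly $\beta$ per jump point, which is how the $H^{N-1}$-term of the Mumford-Shah functional emerges.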
Minimization {#sec:mumford_shah_minimization} ------------ In this subsection, we proceed formally, i.e., without proving convergence, existence of minimizers of sub-problems and that the necessary derivatives and adjoints exist. For the minimization of , we suggest the following alternating directions approach: \[alg:qpat\_ms\]  \ 1. Choose parameters $a,b,\epsilon,\alpha_\mu,\alpha_D,\beta_\mu,\beta_D > 0$. 2. Find an initial estimate $\hat K$ of the edge set $J_0(\mu) \cup J_0(D)$ using edge detection. 3. Initialize with $\mu^0 = D^0 \equiv 1$ and $v^0_\mu = v^0_D = 1-1_{\hat K}$ . 4. Iterate until convergence: - $(\mu^{n+\frac 1 2},D^{n+\frac 1 2})= \operatorname{arg\,min}_{\mu,D} \ \operatorname{\mathcal{F}}_\epsilon(\mu,D,v^n_\mu,v^n_D)$ - $(v^{n+1}_\mu,v^{n+1}_D) = \operatorname{arg\,min}_{v_\mu,v_D} \ \operatorname{\mathcal{F}}_\epsilon(\mu^{n+\frac 1 2},D^{n+\frac 1 2},v_\mu,v_D)$ We now explain the steps in more detail. ### Initialization using edge detection {#initialization-using-edge-detection .unnumbered} In our numerical simulations, it became clear that using Proposition \[prop:pw\_const\_prop\_1\] to obtain a reasonable estimate $\hat K$ of $J_0(\mu) \cup J_0(D)$ leads to faster convergence and better minimizers than simply taking $\hat K = \emptyset$. Given the noisy measurements $\operatorname{\mathcal{E}}^\delta$, we have to estimate the discontinuities (or edges) of $\operatorname{\mathcal{E}}^\delta$, $|\nabla \operatorname{\mathcal{E}}^\delta|^2$ and $|\Delta \operatorname{\mathcal{E}}^\delta|$. Since the jumps in these functions are of *multiplicative* nature (proportional to the local value of ${u}$ or $\nabla {u} \cdot \nu$), it is advantageous to apply a logarithmic transformation prior to edge detection (to obtain constant contrast), see [@NaeSch14]. Due to noise, the data has to be smoothed prior to taking derivatives (turning discontinuities into areas with large gradients). 
We smooth by convolution with a Gaussian, since this approach has the advantage that differentiation and smoothing can be done in one step (by differentiating the low-pass filter instead of the function). We have $$\begin{aligned} \operatorname{G}_\sigma(x) &= (2\pi)^{-\frac{N}{2}} \sigma^{-N} \exp\left(\frac{-|x|^2}{2\sigma^2} \right) \\ (\nabla \operatorname{G}_\sigma)(x) &= (2\pi)^{-\frac{N}{2}} \sigma^{-(N+2)} (-x) \exp\left(\frac{-|x|^2}{2\sigma^2} \right) \\ (\Delta \operatorname{G}_\sigma)(x) &= (2\pi)^{-\frac{N}{2}} \sigma^{-(N+2)} \left( \frac{|x|^2}{\sigma^2}-N \right) \exp\left(\frac{-|x|^2}{2\sigma^2} \right). \end{aligned}$$ Similar to [@NaeSch14], we proceed as follows: 1. Detect edges $\hat K_0$ of $f_0:=\log \left( \operatorname{\mathcal{E}}^\delta \ast \ 1_{B}\,G_{\sigma_0} \right)|_{\Omega \ominus B}$ 2. Detect edges $\hat K_1$ of $f_1:=\log \left( \left| \left( \operatorname{\mathcal{E}}^\delta \ast \ 1_{B} \, \nabla G_{\sigma_1} \right) \right|^2 \vee \gamma \right)|_{(\Omega \setminus \hat K_0) \ominus B}$ 3. Detect edges $\hat K_2$ of $f_2:=\log \left( \left| \left( \operatorname{\mathcal{E}}^\delta \ast \ 1_{B} \, \Delta G_{\sigma_2} \right) \right| \right)|_{(\Omega \setminus (\hat K_0 \cup \hat K_1)) \ominus B}$ 4. Take $\hat K = \hat K_0 \cup \hat K_1 \cup \hat K_2$ Here, $1_{B}$ acts as a cut-off function for the filters (e.g., using $B=B^p_\rho$, the $p$-norm ball of radius $\rho$). The operation $A \ominus B = \{ z \in \Omega \mid (B + z) \subset A \}$ denotes set erosion, which is performed to avoid multiple detection of edges (by ensuring that the cut-off filter does not intersect with already detected edges or the outside of the domain). Note, however, that for large $B$ this may lead to parts of edges (close to the domain boundary or already detected edges) not being detected. The scalar $\gamma$ is a minimal value enforced for $|\nabla \operatorname{\mathcal{E}}^\delta|^2$ (to avoid creating singularities at zeros).
The edge detection itself is performed by applying thresholds $\xi_0$,$\xi_1$,$\xi_2$ to the functions $|\nabla f_0|, |\nabla f_1|, |\nabla f_2|$ (taking the super-level sets as edge sets). Note that in contrast to [@NaeSch14], no complete segmentation is necessary and cruder edge estimates suffice, that is, the detected edges do not have to be reduced to thin curves. ### Iteration {#iteration .unnumbered} In step (i), to find, for fixed $v_\mu,v_D \in X_0^1(\Omega) \cap W^{1,2}(\Omega)$, $$\underset{\mu,D \in X_a^b(\Omega) \cap W^{1,2}(\Omega)}{\operatorname{arg\,min}} \ \operatorname{\mathcal{F}}_\epsilon(\mu,D)$$ we use Gauss-Newton-minimization. That is, in every iteration of an inner loop, we linearly approximate $\operatorname{\mathcal{E}}(\mu_k+s_\mu,D_k+s_D) \approx \operatorname{\mathcal{E}}(\mu_k,D_k) + \operatorname{\mathcal{E}}'(\mu_k,D_k)(s_\mu,s_D)$ and take update steps $$\label{eq:qpat_ms_gauss_newton} \begin{aligned} s &= \underset{s_\mu,s_D \in W^{1,2}(\Omega)}{\operatorname{arg\,min}} \ \frac{1}{2} {\left\| \operatorname{\mathcal{E}}(\mu_k,D_k)+\operatorname{\mathcal{E}}'(\mu_k,D_k)(s_\mu,s_D)-\operatorname{\mathcal{E}}^\delta \right\|}_{L^2(\Omega)}^2 \\ &+ \frac{\alpha_\mu}{2} \int_{\Omega} (v_\mu^2+\zeta_\mu) |\nabla (\mu_k+s_\mu)|^2 \,dx + \frac{\alpha_D}{2} \int_{\Omega} (v_D^2+\zeta_D) |\nabla (D_k+s_D)|^2 \,dx \\ \end{aligned}$$ which we then project into $X_a^b(\Omega)^2$, so we update with $$(\mu_{k+1},D_{k+1}) = \left( \left( (\mu_k,D_k) + s \right) \vee a \right) \wedge b.$$ A straightforward calculation shows that a minimizer $s=(s_\mu,s_D)$ of satisfies the weak form of $$\label{eq:qpat_ms_gauss_newton_el} \begin{aligned} \operatorname{\mathcal{E}}'&(\mu_k,D_k)^*\operatorname{\mathcal{E}}'(\mu_k,D_k)(s_\mu,s_D) + \left(\alpha_\mu L_{(v_\mu^2 + \zeta_\mu)} \,s_\mu,\ \alpha_D L_{(v_D^2 + \zeta_D)} \,s_D \right) \\ &= -\operatorname{\mathcal{E}}'(\mu_k,D_k)^*[ \operatorname{\mathcal{E}}(\mu_k,D_k)-\operatorname{\mathcal{E}}^\delta] - \left(\alpha_\mu L_{(v_\mu^2 +
\zeta_\mu)} \,\mu_k,\ \alpha_D L_{(v_D^2 + \zeta_D)} \,D_k \right) \\ &=: (R_{\mu_k},R_{D_k}) \end{aligned}$$ with homogeneous Neumann boundary conditions for $s_\mu,s_D$ and $$L_\sigma\colon W^{1,2}(\Omega) \to W^{-1,2}(\Omega), \ x \mapsto -\nabla \cdot ( \sigma \nabla x ).$$ The linear operator $\operatorname{\mathcal{E}}'(\mu,D)$ and its (formal) adjoint $\operatorname{\mathcal{E}}'(\mu,D)^*$ are given by $$\label{eq:qpat_derivatives_adjoints} \begin{aligned} \operatorname{\mathcal{E}}'(\mu,D)(s_\mu,s_D) &= s_\mu {u} + \mu L_{\mu,D}^{-1}(-s_\mu {u}) + \mu L_{\mu,D}^{-1}[\nabla \cdot ( s_D \nabla {u} )] \\ \operatorname{\mathcal{E}}'(\mu,D)^*\,t &= \left( t {u} - {u} L_{\mu,D}^{-1}(\mu t), -\nabla {u} \cdot \nabla L_{\mu,D}^{-1}(\mu t) \right) \end{aligned}$$ where ${u}={u}(\mu,D)$ and $$L_{\mu,D}\colon W_0^{1,2}(\Omega) \to W^{-1,2}(\Omega), \ x \mapsto -\nabla \cdot ( D \nabla x ) + \mu x.$$ By introducing auxiliary variables $y_1,y_2,y_3$ (and taking $\mu=\mu_k, D=D_k$), the equations , can be re-written as the system of (weak) PDE $$\label{eq:qpat_ms_system} \begin{aligned} (s_\mu {u} + \mu y_1 + \mu y_2) {u} - {u} y_3 + \alpha_\mu L_{(v_\mu^2 + \zeta_\mu)} \,s_\mu = R_\mu, \quad \frac{\partial s_\mu|_{\partial\Omega}}{\partial \nu}&=0 \\ -\nabla {u} \cdot \nabla y_3 + \alpha_D L_{(v_D^2 + \zeta_D)} \,s_D = R_D, \quad \frac{\partial s_D|_{\partial\Omega}}{\partial \nu}&=0 \\ L_{\mu,D} \, y_3 = \mu (s_\mu {u} + \mu y_1 + \mu y_2), \quad y_3|_{\partial\Omega}&=0 \\ L_{\mu,D} \, y_2 = \nabla \cdot ( s_D \nabla {u} ), \quad y_2|_{\partial\Omega}&=0 \\ L_{\mu,D} \, y_1 = -s_\mu {u}, \quad y_1|_{\partial\Omega}&=0 \end{aligned}$$ which is amenable to discretization as a sparse matrix using, e.g., the *finite element method (FEM)*. 
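The projection of the Gauss-Newton update onto the box constraints can be illustrated with a scalar toy problem (entirely our own; the quadratic objective and the bounds are hypothetical stand-ins for the qPAT system): whenever the unconstrained step would leave the admissible interval $[a,b]$, the iterate is clamped back, so all iterates remain feasible.

```python
import numpy as np

# Scalar toy illustration of the projected update (entirely our own; the
# quadratic objective and bounds a, b are hypothetical stand-ins for the
# Gauss-Newton system above). Steps leaving [a, b] are clamped back.
a, b = 0.1, 1.0
target = 2.0                    # unconstrained minimizer of (x - target)^2

x = 0.5
for _ in range(20):
    s = -(x - target)           # exact Newton step for the quadratic
    x = float(np.clip(x + s, a, b))   # projection into [a, b]
print(x)                        # the iterate sticks at the upper bound b
```

Clamping after the step (rather than constraining the step computation itself) keeps the linear systems unconstrained, at the cost of the iterates possibly sticking to the bounds.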
In step (ii) of the outer loop we have to find, for fixed $\mu,D \in X_a^b(\Omega) \cap W^{1,2}(\Omega)$, $$\underset{v_\mu,v_D \in X_0^1(\Omega) \cap W^{1,2}(\Omega)}{\operatorname{arg\,min}} \ \operatorname{\mathcal{F}}_\epsilon(v_\mu,v_D).$$ The minimizer $(v_\mu,v_D)$ satisfies the linear equations $$\label{eq:qpat_ms_edges_el} \begin{aligned} \left(-2 \beta_\mu \epsilon \Delta + (\alpha_\mu |\nabla \mu|^2 + \frac{\beta_\mu}{2 \epsilon}) \operatorname{Id}\right) \, v_\mu &= \frac{\beta_\mu}{2 \epsilon}, \quad \frac{\partial v_\mu|_{\partial\Omega}}{\partial \nu}=0 \\ \left(-2 \beta_D \epsilon \Delta + (\alpha_D |\nabla D|^2 + \frac{\beta_D}{2 \epsilon}) \operatorname{Id}\right) \, v_D &= \frac{\beta_D}{2 \epsilon}, \quad \frac{\partial v_D|_{\partial\Omega}}{\partial \nu}=0 \\ \end{aligned}$$ which can be solved separately and implemented numerically using FEM.

Implementation and numerical results {#sec:numerical_results}
====================================

In this section, the proposed algorithm is tested on two sets of simulated two-dimensional data (with different parameter ranges and levels of detail). We also vary the noise on the data, since the reconstruction quality strongly depends on the noise level. The data (see Figures \[fig:setup\_circles\] and \[fig:setup\_rectangles\]) was generated in the diffusion model using self-written (linear-basis) finite element code in *MATLAB*. For both examples, we took $\Omega=[0,5]^2$ and used a uniform boundary condition $g \equiv 1$. The simulated data were generated on a $(400\times400)$-grid and then down-sampled (by averaging) to $(200\times200)$ to avoid an inverse crime. After that, Gaussian noise with different intensities (standard deviations of $0.1\%$ and $10\%$ of the average signal value $\frac{1}{|\Omega|} \int_\Omega \operatorname{\mathcal{E}}(x)\,dx$) was added to the data.
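The decoupled equations for the edge indicators have a simple structure: a screened Laplacian whose reaction term grows where the parameter gradient is large, forcing $v$ toward $0$ near edges and toward $1$ elsewhere. The following 1-D finite-difference sketch illustrates this; the grid, the parameter values, and the test profile $\mu$ are made up for illustration and are not the paper's settings.

```python
import numpy as np

# 1-D sketch of the edge-indicator equation
#   (-2*beta*eps*Lap + (alpha*|mu'|^2 + beta/(2*eps))*Id) v = beta/(2*eps)
# with homogeneous Neumann boundary conditions; mu has one jump at x = 0.5.
n = 201
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
mu = np.where(x < 0.5, 1.0, 2.0)
grad_mu_sq = np.gradient(mu, h) ** 2          # |mu'|^2, large near the jump

alpha, beta, eps = 1.0, 1e-2, 0.02            # illustrative values

# finite-difference Laplacian with homogeneous Neumann ends
lap = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / h**2
lap[0, 0] = lap[-1, -1] = -1.0 / h**2

A = -2.0 * beta * eps * lap + np.diag(alpha * grad_mu_sq + beta / (2.0 * eps))
v = np.linalg.solve(A, np.full(n, beta / (2.0 * eps)))
```

Away from the jump, the reaction term alone yields $v \approx 1$; at the jump, the large $|\nabla \mu|^2$ entry pushes $v$ toward $0$ over a width of order $\epsilon$.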
The edge detector described in Section \[sec:mumford\_shah\_minimization\] was implemented by finite difference approximations of $|\nabla f_0|, |\nabla f_1|, |\nabla f_2|$ using central differences inside and one-sided differences near the boundary of the domain. The functions $f_0,f_1,f_2$ were calculated by convolution filtering with the *MATLAB* function *imfilter*. For all examples, we used a square cutoff function, i.e., $B=B^\infty_\rho$ with $\rho=2 \lceil{\sigma_k}\rceil$ (where $k=0,1,2$). The edge detector is used to detect jumps in the derivatives of the data $\operatorname{\mathcal{E}}^\delta$ up to second order (to obtain an initial estimate of the parameter jump set $J_0(\mu) \cup J_0(D)$). Since this process is highly sensitive to noise, we varied the edge detection procedure subject to the amount of noise in the data. In the noise-free examples, we estimated the jumps of all three functions $f_0,f_1,f_2$, that is, jumps of derivatives of $\operatorname{\mathcal{E}}^\delta$ up to second order. We restricted the jump estimation to $f_0,f_1$ in the low-noise examples (i.e., jumps of derivatives up to first order) and to $f_0$ in the high-noise examples (only jumps in the data $\operatorname{\mathcal{E}}^\delta$ itself).

To obtain the parameters $\mu,D$ given an initial estimate of their combined edge set $J_0(\mu) \cup J_0(D)$, we used the Ambrosio-Tortorelli approximation of the Mumford-Shah functional introduced in Section \[sec:mumford\_shah\_parameter\_detection\]. To minimize the functional, we used Algorithm \[alg:qpat\_ms\], iterating, for all examples, the outer alternating-directions loop until the functional value changes by less than $0.01\%$ and the inner Gauss-Newton loop until it changes by less than $1\%$. We implemented the discrete systems directly using (self-written) linear-basis finite element code.
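The finite-difference edge detection step can be sketched as follows. NumPy's `np.gradient` uses exactly the scheme described above (central differences in the interior, one-sided differences at the boundary); the test image and threshold here are illustrative, not the paper's data.

```python
import numpy as np

# Synthetic image f0 with a single vertical jump; the edge estimate is
# the super-level set of the gradient magnitude, as in the text.
n = 64
f0 = np.zeros((n, n))
f0[:, n // 2:] = 1.0                          # jump between columns 31 and 32

# np.gradient: central differences inside, one-sided at the boundary
gy, gx = np.gradient(f0)
grad_norm = np.hypot(gx, gy)                  # |grad f0|

xi0 = 0.1                                     # illustrative threshold
edge_set = grad_norm > xi0                    # super-level set = edge estimate
```

In the actual procedure, $f_0,f_1,f_2$ would first be obtained by convolution filtering of $\operatorname{\mathcal{E}}^\delta$ (the `imfilter` step), and a threshold $\xi_k$ applied to each gradient magnitude.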
The implementation is not fully conforming, that is, where necessary we used conversions between piecewise linear and piecewise constant functions for simplicity. The discrete systems were solved using the standard *MATLAB* sparse equation solver *mldivide*. For all examples, we chose $a=0.01$, $b=3$ and $\epsilon=0.01$. The other reconstruction parameters (which were selected by hand) are listed in Tables \[tab:results\_parameters\_iteration\] and \[tab:results\_parameters\_edge\_detection\].

|                            | $\boldsymbol{\alpha_\mu}$ | $\boldsymbol{\alpha_D}$ | $\boldsymbol{\beta_\mu}$ | $\boldsymbol{\beta_D}$ | $\boldsymbol{\zeta_\mu}$ | $\boldsymbol{\zeta_D}$ |
|----------------------------|---------------------------|-------------------------|--------------------------|------------------------|--------------------------|------------------------|
| **Ex. A** ($0 \%$ noise)   | $10^{-2}$                 | $10^{-4}$               | $10^{-6}$                | $10^{-8}$              | $10^{-6}$                | $10^{-4}$              |
| **Ex. A** ($0.1 \%$ noise) | $10^{-2}$                 | $10^{-5}$               | $10^{-6}$                | $10^{-8}$              | $10^{-6}$                | $10^{-3}$              |
| **Ex. A** ($10 \%$ noise)  | $1$                       | $5 \cdot 10^{-3}$       | $10^{-5}$                | $5 \cdot 10^{-6}$      | $10^{-5}$                | $10^{-3}$              |
| **Ex. B** ($0 \%$ noise)   | $10^{-2}$                 | $10^{-4}$               | $10^{-6}$                | $10^{-8}$              | $10^{-6}$                | $10^{-5}$              |
| **Ex. B** ($0.1 \%$ noise) | $10^{-2}$                 | $10^{-5}$               | $10^{-6}$                | $10^{-8}$              | $10^{-6}$                | $10^{-3}$              |
| **Ex. B** ($10 \%$ noise)  | $1$                       | $10^{-3}$               | $10^{-5}$                | $10^{-7}$              | $10^{-5}$                | $10^{-3}$              |

: Parameters used for Figures \[fig:results\_circles\] and \[fig:results\_rectangles\] (for the functional $\operatorname{\mathcal{F}}_\epsilon$). []{data-label="tab:results_parameters_iteration"}

|                            | $\boldsymbol{\sigma_0}$ | $\boldsymbol{\sigma_1}$ | $\boldsymbol{\sigma_2}$ | $\boldsymbol{\xi_0}$ | $\boldsymbol{\xi_1}$ | $\boldsymbol{\xi_2}$ | $\boldsymbol{\gamma}$ |
|----------------------------|-------------------------|-------------------------|-------------------------|----------------------|----------------------|----------------------|-----------------------|
| **Ex. A** ($0 \%$ noise)   | $0.5$                   | $0.5$                   | $0.5$                   | $0.1$                | $0.1$                | $0.1$                | $10^{-4}$             |
| **Ex. A** ($0.1 \%$ noise) | $0.5$                   | $2$                     | –                       | $0.05$               | $0.06$               | –                    | $10^{-5}$             |
| **Ex. A** ($10 \%$ noise)  | $1.5$                   | –                       | –                       | $0.05$               | –                    | –                    | –                     |
| **Ex. B** ($0 \%$ noise)   | $0.5$                   | $0.5$                   | $0.5$                   | $0.1$                | $0.1$                | $0.1$                | $10^{-6}$             |
| **Ex. B** ($0.1 \%$ noise) | $0.5$                   | $2.4$                   | –                       | $0.05$               | $0.06$               | –                    | $10^{-7}$             |
| **Ex. B** ($10 \%$ noise)  | $1.5$                   | –                       | –                       | $0.03$               | –                    | –                    | –                     |

: Parameters used for Figures \[fig:results\_circles\] and \[fig:results\_rectangles\] (for edge detection; empty cells indicate detectors not used at that noise level). []{data-label="tab:results_parameters_edge_detection"}

Reconstruction results and error profiles at different noise levels can be seen in Figures \[fig:results\_circles\] and \[fig:results\_rectangles\]. In both examples, the noise-free reconstructions are very accurate and contain mostly smoothing error. In the low-noise reconstructions, some of the parameter variation is underestimated because more regularization is necessary. In the high-noise examples, most detail in $D$ is lost, since a lot of regularization is required to obtain reasonable results. The fine detail in $\mu$ can, however, still be recovered very accurately in both examples.

![ Reconstructions A. Columns: $0\%$, $0.1\%$ and $10\%$ noise. Rows: Estimated edge set, reconstructed parameters $\mu_{\text{est}}$, $D_{\text{est}}$ and error profiles for $\mu_{\text{est}},D_{\text{est}}$. The error profiles show values of the true and reconstructed parameters along the image diagonals. The color axis in the images of the reconstructed parameters was fixed to the same range as the true parameters shown in Figure \[fig:setup\_circles\]. []{data-label="fig:results_circles"}](circles_nonoise_edges.pdf "fig:"){width="29.00000%"} ![](circles_lownoise_edges.pdf "fig:"){width="29.00000%"} ![](circles_highnoise_edges.pdf "fig:"){width="29.00000%"}\
![](circles_nonoise_mu_est.pdf "fig:"){width="29.00000%"} ![](circles_lownoise_mu_est.pdf "fig:"){width="29.00000%"} ![](circles_highnoise_mu_est.pdf "fig:"){width="29.00000%"}\
![](circles_nonoise_D_est.pdf "fig:"){width="29.00000%"} ![](circles_lownoise_D_est.pdf "fig:"){width="29.00000%"} ![](circles_highnoise_D_est.pdf "fig:"){width="29.00000%"}\
![](circles_nonoise_mu_profile.pdf "fig:"){width="29.00000%"} ![](circles_lownoise_mu_profile.pdf "fig:"){width="29.00000%"} ![](circles_highnoise_mu_profile.pdf "fig:"){width="29.00000%"}\
![](circles_nonoise_D_profile.pdf "fig:"){width="29.00000%"} ![](circles_lownoise_D_profile.pdf "fig:"){width="29.00000%"} ![](circles_highnoise_D_profile.pdf "fig:"){width="29.00000%"}\

![ Reconstructions B. Columns: $0\%$, $0.1\%$ and $10\%$ noise. Rows: Estimated edge set, reconstructed parameters $\mu_{\text{est}}$, $D_{\text{est}}$ and error profiles for $\mu_{\text{est}},D_{\text{est}}$. The error profiles show values of the true and reconstructed parameters along the image diagonals. The color axis in the images of the reconstructed parameters was fixed to the same range as the true parameters shown in Figure \[fig:setup\_rectangles\]. []{data-label="fig:results_rectangles"}](rectangles_nonoise_edges.pdf "fig:"){width="29.00000%"} ![](rectangles_lownoise_edges.pdf "fig:"){width="29.00000%"} ![](rectangles_highnoise_edges.pdf "fig:"){width="29.00000%"}\
![](rectangles_nonoise_mu_est.pdf "fig:"){width="29.00000%"} ![](rectangles_lownoise_mu_est.pdf "fig:"){width="29.00000%"} ![](rectangles_highnoise_mu_est.pdf "fig:"){width="29.00000%"}\
![](rectangles_nonoise_D_est.pdf "fig:"){width="29.00000%"} ![](rectangles_lownoise_D_est.pdf "fig:"){width="29.00000%"} ![](rectangles_highnoise_D_est.pdf "fig:"){width="29.00000%"}\
![](rectangles_nonoise_mu_profile.pdf "fig:"){width="29.00000%"} ![](rectangles_lownoise_mu_profile.pdf "fig:"){width="29.00000%"} ![](rectangles_highnoise_mu_profile.pdf "fig:"){width="29.00000%"}\
![](rectangles_nonoise_D_profile.pdf "fig:"){width="29.00000%"} ![](rectangles_lownoise_D_profile.pdf "fig:"){width="29.00000%"} ![](rectangles_highnoise_D_profile.pdf "fig:"){width="29.00000%"}\

In some of the examples, minimizing over log-parameters, i.e., using the mapping $\overline{\operatorname{\mathcal{E}}}\colon (\log \mu, \log D) \mapsto \operatorname{\mathcal{E}}(\mu,D)$ instead of $\operatorname{\mathcal{E}}$, gave slightly better results. Note that this approach leads to a different linearization and therefore also to a different system to be solved in every step. However, to keep the presentation simple, we used the functional as presented in Section \[sec:mumford\_shah\_parameter\_detection\] for all numerical experiments. We also tried to incorporate the Grüneisen coefficient $\Gamma$ as an additional unknown into the reconstruction process. Unfortunately, this proved to be highly unstable, even if the initial edge set was detected perfectly.

Acknowledgements {#sec:acknowledgements}
================

This work has been supported by the Austrian Science Fund (FWF) within the project FSP P26687 “Interdisciplinary Coupled Physics Imaging” and by the University of Vienna within the project IK I059-N.

Special functions of bounded variation and the SBV-compactness theorem {#sec:sbv_introduction}
======================================================================

This section briefly introduces the notion of $\operatorname{\mathcal{SBV}}$-functions and their compactness theorem. For a more comprehensive presentation with proofs, see, e.g., [@AttButMic06].
For a function $f \in L^1(\Omega)$ with distributional gradient $Df$, we define its *total variation* by $$|Df|_\Omega := \sup \{ \langle Df, \varphi \rangle \mid \varphi \in C^1_c(\Omega;\mathbb{R}^N), {\left\| \varphi \right\|}_{L^\infty(\Omega;\mathbb{R}^N)} \leq 1 \}.$$ The space $\operatorname{\mathcal{BV}}(\Omega)$, consisting of all $L^1(\Omega)$-functions with finite total variation (i.e., of bounded variation), is a Banach space with the norm $${\left\| f \right\|}_{\operatorname{\mathcal{BV}}(\Omega)} := {\left\| f \right\|}_{L^1(\Omega)} + |Df|_\Omega.$$ Note that by the Riesz-Markov representation theorem, functions $f \in L^1(\Omega)$ are of bounded variation if and only if $Df$ is a *finite vector Radon measure*.

The measure $Df$ can be decomposed into three parts: $$Df = D^a f + D^j f + D^c f.$$ $D^a f$ is the part of $Df$ that is *absolutely continuous* with respect to the Lebesgue measure ${\mathcal{L}}^N$, i.e., $$D^a f = \nabla f \,{\mathcal{L}}^N$$ for some integrable density function $\nabla f$. The *jump part* $D^j f$ is concentrated on the jump (or approximate discontinuity) set $S(f)$ defined by $$S(f):=\{ x \in \Omega \mid f^-(x) < f^+(x) \},$$ where, denoting with $B_\rho(x)$ the ball centered at $x$ with radius $\rho$, $$\begin{aligned} f^+(x) &:= \inf \left\{ t \in \mathbb{R} \mid \lim_{\rho \to 0} \frac{{\mathcal{L}}^N(\{y \in B_\rho(x) \mid f(y) > t \})}{{\mathcal{L}}^N(B_\rho(x))}=0 \right\} \\ f^-(x) &:= \sup \left\{ t \in \mathbb{R} \mid \lim_{\rho \to 0} \frac{{\mathcal{L}}^N(\{y \in B_\rho(x) \mid f(y) < t \})}{{\mathcal{L}}^N(B_\rho(x))}=0 \right\}. \end{aligned}$$ Furthermore, for $H^{N-1}$-almost all $x \in S(f)$, there exists a *unit normal vector* $\nu(x)$ and we have $$D^j f = (f^+ - f^-)\, \nu \, H^{N-1}|_{S(f)}.$$ The remaining *Cantor part* $D^c f$ is concentrated on a subset of $\Omega \setminus S(f)$ with Hausdorff dimension between $N-1$ and $N$.
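The decomposition $Df = \nabla f \,{\mathcal{L}}^N + (f^+ - f^-)\, \nu \, H^{N-1}|_{S(f)}$ can be checked numerically in one dimension: for a piecewise-smooth function, the discrete total variation splits into the integral of $|\nabla f|$ plus the sum of the jump heights. A small sanity check (the test function is made up for illustration):

```python
import numpy as np

# f(x) = x on [0, 0.5) and x + 1 on [0.5, 1]: the absolutely continuous
# part contributes int_0^1 |f'| dx = 1 and the single jump contributes
# |f^+ - f^-| = 1, so the total variation is |Df| = 2.
n = 1000
x = np.linspace(0.0, 1.0, n + 1)
f = x + (x >= 0.5).astype(float)

tv = np.abs(np.diff(f)).sum()                 # discrete total variation
ac_part = 1.0                                 # int |f'| dx with f' = 1
jump_part = 1.0                               # height of the jump at x = 0.5
```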
A function $f \in \operatorname{\mathcal{BV}}(\Omega)$ is a *special function of bounded variation* (or $f \in \operatorname{\mathcal{SBV}}(\Omega)$) if $D^c f=0$, that is, $$Df = \nabla f \,{\mathcal{L}}^N + (f^+ - f^-)\, \nu \, H^{N-1}|_{S(f)}.$$ Furthermore, we have the following *compactness theorem* due to Ambrosio [@Amb89a]: Let $f_n$ be a sequence in $\operatorname{\mathcal{SBV}}(\Omega)$ with $${\left\| f_n \right\|}_{L^\infty(\Omega)} + \int_\Omega |\nabla f_n|^p \,dx + H^{N-1}[S(f_n)] \leq M < \infty$$ for all $n \in \mathbb{N}$, some $p > 1$ and a constant $M$. Then there exist a subsequence $\left( f_{n_k} \right)_{k \in \mathbb{N}}$ and $f \in \operatorname{\mathcal{SBV}}(\Omega)$ with $$\begin{aligned} f_{n_k} &\to f \text{ strongly in } L^1_{\text{loc}}(\Omega) \\ \nabla f_{n_k} &\rightharpoonup \nabla f \text{ weakly in } L^p(\Omega;\mathbb{R}^N) \\ H^{N-1}[S(f)] &\leq \liminf_{k \to \infty} H^{N-1}[S(f_{n_k})]. \end{aligned}$$

G.S. Alberti and H. Ammari. Disjoint sparsity for signal separation and applications to hybrid inverse problems in medical imaging. preprint, ENS, 2015. http://arxiv.org/abs/1502.04540.
L. Ambrosio. A compactness theorem for a new class of functions of bounded variation. , 3:857–881, 1989.
L. Ambrosio. Existence theory for a new class of variational problems. , 111:291–322, 1990.
L. Ambrosio and V. M. Tortorelli. On the approximation of free discontinuity problems. , 6:105–123, 1992.
H. Attouch, G. Buttazzo, and G. Michaille. SIAM, Society for Industrial and Applied Mathematics, 2006.
G. Bal and K. Ren. Multi-source quantitative photoacoustic tomography in a diffusive regime. , 27(7):075003, 2011.
G. Bal and G. Uhlmann. Inverse diffusion theory of photoacoustics. , 26:085010, 2010.
B. Banerjee, S. Bagchi, R.M. Vasu, and D. Roy. Quantitative photoacoustic tomography from boundary pressure measurements: noniterative recovery of optical absorption coefficient from the reconstructed absorbed energy map.
, 25(9):2347–2356, 2008.
M. Bergounioux, X. Bonnefond, T. Haberkorn, and Y. Privat. An optimal control problem in photoacoustic tomography. , 24(12):2525, 2014.
M. Burger and S. Osher. A survey on level set methods for inverse problems and optimal design. , 16(02):263–301, 2005.
T. Chan and L. Vese. Active contours without edges. , 10(2):266–277, 2001.
B. T. Cox, S. R. Arridge, and P. C. Beard. Estimating chromophore distributions from multiwavelength photoacoustic images. , 26(2):443–455, 2009.
B. T. Cox, J. G. Laufer, S. R. Arridge, and P. C. Beard. Quantitative spectroscopic photoacoustic imaging: a review. , 17(6):061202, 2012.
B. T. Cox, J. G. Laufer, and P. C. Beard. The challenges for quantitative photoacoustic imaging. , 7177:717713, 2009.
B. Dacorogna. , volume 78 of [*Applied Mathematical Sciences*]{}. Springer, New York, 2nd edition, 2008.
A. De Cezaro, A. Leitao, and X. Tai. On multiple level-set regularization methods for inverse problems. , 25(3):035004, 2009.
A. De Cezaro, F. Travessini De Cezaro, and J. Sejje Suarez. Regularization approaches for quantitative photoacoustic tomography using the radiative transfer equation. , 429(1):415–438, 2015.
E. De Giorgi, M. Carriero, and A. Leaci. Existence theorem for a minimum problem with free discontinuity set. , 108:195–218, 1989.
H. Egger and M. Schlottbom. Analysis and regularization of problems in diffuse optical tomography. , 42(5):1934–1948, 2010.
S. Esedoglu and J. Shen. Digital inpainting based on the [M]{}umford–[S]{}hah–[E]{}uler image model. , 13:353–370, 2002.
L. C. Evans. , volume 19 of [*Graduate Studies in Mathematics*]{}. American Mathematical Society, Providence, RI, 1998.
H. Gao, S. Osher, and H. Zhao. Quantitative photoacoustic tomography. In [*Mathematical Modeling in Biomedical Imaging II*]{}, pages 131–158. Springer Berlin, Heidelberg, 2012.
D. Gilbarg and N. Trudinger. . Classics in Mathematics. Springer Verlag, Berlin, 2001. Reprint of the 1998 edition.
M. Haltmeier, L.
Neumann, and S. Rabanser. Single-stage reconstruction algorithm for quantitative photoacoustic tomography. , 31(6):065005, 2015.
M. Jiang, P. Maass, and T. Page. Regularization properties of the [M]{}umford–[S]{}hah functional for imaging applications. , 30(3):035007, 2014.
E. Klann. A [M]{}umford–[S]{}hah-like method for limited data tomography with an application to electron tomography. , 4(4):1029–1048, 2011.
T. Kreutzmann and A. Rieder. Geometric reconstruction in bioluminescence tomography. , 8(1):173–197, 2014.
P. Kuchment and L. Kunyansky. Mathematics of thermoacoustic tomography. , 19:191–224, 2008.
Y.Y. Li and L. Nirenberg. Estimates for elliptic systems from composite material. , 56(7):892–925, 2003.
D. Mumford and J. Shah. Optimal approximations by piecewise smooth functions and associated variational problems. , 42(5):577–685, 1989.
W. Naetar and O. Scherzer. Quantitative photoacoustic tomography with piecewise constant material parameters. , 7(3):1755–1774, 2014.
S. Osher and J. A. Sethian. Fronts propagating with curvature-dependent speed: [A]{}lgorithms based on [H]{}amilton-[J]{}acobi formulations. , 79(1):12–49, 1988.
A. Pulkkinen, B. Cox, S. Arridge, J.P. Kaipio, and T. Tarvainen. Quantitative photoacoustic tomography using illuminations from a single direction. , 20(3):036015, 2015.
R. Ramlau and W. Ring. A [M]{}umford–[S]{}hah level-set approach for the inversion and segmentation of x-ray tomography data. , 221:539–557, 2007.
R. Ramlau and W. Ring. Regularization of ill-posed [M]{}umford–[S]{}hah models with perimeter penalization. , 26:115001, 2010.
K. Ren, H. Gao, and H. Zhao. A hybrid reconstruction method for quantitative PAT. , 6(1):32–55, 2013.
L. Rondi and F. Santosa. Enhanced electrical impedance tomography via the [M]{}umford–[S]{}hah functional.
, 6:517–538, 2001. T. Saratoon, T. Tarvainen, B. T. Cox, and S. R. Arridge. A gradient-based method for quantitative photoacoustic tomography using the radiative transfer equation. , 29(7):075006, 2013. P. Shao, B. Cox, and R.J. Zemp. Estimating optical absorption, scattering, and [G]{}rueneisen distributions with multiple-illumination photoacoustic tomography. , 50(19):3145–3154, 2011. Y. Sun, E. Sobel, and H. Jiang. Quantitative three-dimensional photoacoustic tomography of the finger joints: an in vivo study. , 14(6):064002, 2009. Y. Sun, E. Sobel, and H. Jiang. Noninvasive imaging of hemoglobin concentration and oxygen saturation for detection of osteoarthritis in the finger joints using multispectral three-dimensional quantitative photoacoustic tomography. , 15(5):055302, 2013. X. Tai and X.F. Chan. A survey on multiple level set methods with applications for identifying piecewise constant functions. , 1(1):25–47, 2004. X. Tai and H. Li. A piecewise constant level set method for elliptic inverse problems. , 57:686–696, 2006. T. Tarvainen, B. T. Cox, J. P. Kaipio, and S. R. Arridge. Reconstructing absorption and scattering distributions in quantitative photoacoustic tomography. , 28(8):084009, 2012. L. V. Wang and H. Wu, editors. . Wiley-Interscience, New York, 2007. L. Yao, Y. Sun, and H. Jiang. Quantitative photoacoustic tomography based on the radiative transfer equation. , 34(12):1765–1767, 2009. Z. Yuan and H. Jiang. Quantitative photoacoustic tomography: Recovery of optical absorption coefficient maps of heterogeneous media. , 88(23):231101, 2006. R. J. Zemp. Quantitative photoacoustic tomography with multiple optical sources. , 49(18):3566–3572, 2010.
--- abstract: 'Two subsystems of the Asian Monsoon, the Indian Summer Monsoon and the Western North Pacific Monsoon, have been analysed using their daily indices ISMI and WNPMI. It is shown that on the intraseasonal time scales the ISMI and WNPMI are dominated by the Hamiltonian distributed chaos with the stretched exponential spectrum $E(f) \propto \exp-(f/f_0)^{\beta}$ and the analytical values of the parameter $\beta =3/4$ and $\beta =1/2$, respectively. The relevant daily indices Niño 3 and Niño 4 (with $\beta =1/2$) of the El Niño-Southern Oscillation (ENSO), as well as the Australian Monsoon index (AUSM, with $\beta = 1/2$), are also discussed in this context.' author: - 'A. Bershadskii' title: 'Hamiltonian distributed chaos in the Asian-Australian Monsoons and in the ENSO' --- The Asian Monsoon ================= Two dominant regions of energy supply (by convective heat) can be identified for the Asian monsoon: the Philippine Sea and the Bay of Bengal (see Ref. [@wwl] for a comprehensive review). The weak correlation between these rather different energy sources has resulted in two monsoon indices with rather different properties: the Western North Pacific Monsoon Index (WNPMI) and the Indian Summer Monsoon Index (ISMI or IMI) [@wwl],[@wf]. Together these indices describe one of the most powerful oscillating patterns of the earth's climate. The definitions of the ISMI and WNPMI can be understood from figure 1 [@wwl],[@mon]. The regions whose zonal winds are used for computation of the monsoon circulation indices are denoted in Fig. 1 by the boxes (see also figure 2 [@global]). According to Refs.
[@wwl],[@wf] the difference of the 850-hPa zonal winds is used for the definition of the $$\scriptstyle ISMI= U850(40E-80E,5N-15N)-U850(70E-90E,20N-30N)$$ and the difference of the 850-hPa westerlies is used for the definition of the $$\scriptstyle WNPMI = U850(100E-130E,5N-15N)-U850(110E-140E,20N-30N)$$ The ISMI represents the rainfall anomalies over a region including India, the Bay of Bengal and the eastern Arabian Sea. The WNPMI also represents the low-level vorticity generated by the response of the Rossby waves to the convective heat source located at the Philippine Sea (cf. Ref. [@b]). The correlation between these indices is very weak, which corresponds well to the difference in their origins. The dissimilarity between the geographic settings of the ISM and WNPM with respect to the ocean-continent distribution results in considerable differences in their variability on all time scales. Despite this, the convective activity related to the Asian Monsoon results in a global pattern extending into both hemispheres on intraseasonal as well as interannual time scales [@lin],[@ding]. Even Arctic ice patterns can be driven by the Asian Summer Monsoon via the North Atlantic [@gw]. Hamiltonian distributed chaos ============================= Chaotic dynamical systems typically exhibit exponential power spectra (see, for instance, [@fm]-[@sig]): $$E(f) \propto \exp-(f/f_c) \eqno{(1)}$$ A more complex situation takes place for analytical Hamiltonian dynamical systems. Namely, a weighted superposition of the exponentials produces stretched exponential high-frequency spectral tails $$E(f ) \propto \int_0^{\infty} P(f_c) \exp -(f/f_c)~ df_c \propto \exp-(f/f_0)^{\beta} \eqno{(2)}$$ with $\beta =3/4$ or $\beta =1/2$ [@b1]. The value $\beta =1/2$ can also appear due to the adiabatic invariance of the action [@suz] at spontaneous breaking of the time translational symmetry in Hamiltonian dynamical systems [@b2]. The spectrum Eq. (2) represents the Hamiltonian distributed chaos.
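A minimal numerical illustration of Eq. (2) (our own sketch, not part of the original analysis): for an exponential weight $P(f_c) \propto \exp(-a f_c)$ the saddle point of the integral gives $E(f) \propto \exp(-2\sqrt{af})$, i.e. a stretched exponential with $\beta = 1/2$. The quadrature below checks this on a logarithmic scale.

```python
import math

def spectrum(f, a=1.0, u_max=200.0, n=20000):
    """E(f) = integral of P(f_c) exp(-f/f_c) over f_c, with P(f_c) = exp(-a f_c),
    evaluated by Simpson's rule (u plays the role of f_c)."""
    h = u_max / n
    total = 0.0
    for i in range(n + 1):
        u = max(i * h, 1e-12)
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * math.exp(-a * u - f / u)
    return total * h / 3.0

# ln E should be linear in sqrt(f): E(f) ~ exp(-2 sqrt(a f)),
# a stretched exponential with beta = 1/2
r = math.log(spectrum(25.0) / spectrum(100.0))
print(r)  # close to 2*(sqrt(100) - sqrt(25)) = 10, up to a small prefactor
```

The computed log-spectrum indeed falls off linearly in $\sqrt{f}$, up to a slowly varying algebraic prefactor from the saddle-point evaluation.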
Figure 3 shows, as an example, the power spectrum of the $x(t)$ variable of the Nose-Hoover oscillator (the Sprott A system [@spot]). This oscillator can be considered as a harmonic oscillator in contact with a thermal bath and can be described by the system of equations $$\left\{ \begin{array}{l} \dot{x} = y \\[0.1cm] \dot{y} = -x + yz \\[0.1cm] \dot{z} = 1 - y^2 \end{array} \right. \eqno{(3)}$$ The thermostat is represented by the term $yz$. The dot over a variable denotes a time derivative. The system is a Hamiltonian one [@spot]. The data for the computation of the spectrum shown in Fig. 3 were taken from the site [@gen]. The computation was performed using the maximum entropy method, which provides an optimal resolution for chaotic time series [@oh]. The straight line in Fig. 3 is drawn to indicate correspondence (in the appropriately chosen scales) to Eq. (2) with $\beta = 3/4$.\ Another example of the spectrum Eq. (2) (now with $\beta =1/2$) is shown in figure 4. This figure shows a temperature power spectrum. The temperature was measured at the center of an upright cylindrical cell with strong thermal convection at a very large Rayleigh number $Ra=3\cdot 10^{14}$ [@as] (see Refs. [@tg],[@gl] for low-order Hamiltonian models of the Rayleigh-Benard convection). Chaotic ISM and WNPM indices ============================= Most of the theoretical models constructed in geophysical fluid dynamics are Hamiltonian [@b],[@she]-[@gl2], and it has already been established that the Hamiltonian distributed chaos determines the AAO, AO, NAO and PNA daily indices on the intraseasonal scales [@b2],[@b3]. Therefore, one can expect the same situation for the Asian Monsoon daily indices as well. In the time series the intraseasonal time scales can be separated by removing the components of interannual variability and the annual cycle.
Subtraction of a 120-day moving average is considered an appropriate tool for removing the low-frequency variations in such cases (see, for instance, Ref. [@ven2] and references therein). We have used a 121-day centered moving average for this purpose: $$y(i)=\frac{\sum_{j=i-k}^{i+k} x(j)}{(2k+1)} \eqno{(4)}$$ Figure 5 shows the daily ISM index for the period 1948-2015 (the data were taken from Ref. [@mon]). Figure 6 shows the power spectrum of the intraseasonal (i.e. after the subtraction) daily ISM index for the period 1948-2015 (the data for the computations were taken from the site [@mon]). The straight line is drawn to indicate correspondence to Eq. (2) with $\beta =3/4$. The characteristic time scale obtained from the best fit is $T_0=1/f_0 \simeq 32$d. Figure 7 shows the power spectrum of the intraseasonal daily WNPM index for the period 1948-2015 (the data for the computations were taken from the site [@mon]). The straight line is drawn to indicate correspondence to Eq. (2), now with $\beta =1/2$. The characteristic time scale obtained from the best fit is $T_0=1/f_0 \simeq 171$d.\ One can see that both analytical types of the Hamiltonian distributed chaos are present in the Asian Monsoon. The difference between its two components, mentioned in the first Section, is reflected in the difference between the two values of the $\beta$ parameter (cf. Figs. 6 and 7). Australian Monsoon ================== The Australian Monsoon is a natural equatorial counterpart of the Asian (mainly WNPM) Monsoon. Indeed, the flow across the Equator of dry air from the continent where winter (in the corresponding hemisphere) takes place, toward the hemisphere with summer conditions, delivers moisture that was picked up on the way over the warm oceans. This moisture feeds the monsoon rains in the hemisphere with the summer conditions.
The reversal of the wind direction every half year results in the alternation of the monsoon rainfalls between the Northern and the Southern hemispheres.\ The rainfall and wind records at the most northerly of the Australian capital cities, Darwin ($12^o$S, $130^o$E, see figure 8), were used to describe the Australian Monsoon in the majority of early studies (see, for instance, Refs. [@kim],[@sup] and references therein). However, as one can see from figure 9, a broad-scale wind circulation index properly corresponding to the Australian Monsoon should be based at a different location. The authors of the recent paper Ref. [@ka] (see also the earlier paper Ref. [@wang2]) suggested such an index, AUSMI, based on 850 hPa zonal wind anomalies averaged over the region $5^o$S-$15^o$S, $110^o$E-$130^o$E (see Fig. 8). It should be noted that the Australian monsoon rainfall is located approximately in the area ($7.5^o$S-$17.5^o$S, $120^o$E-$150^o$E), i.e. an area that comfortably includes the Darwin location (see also below). Despite this, the AUSMI captures well not only the interannual and intraseasonal variability of the Australian monsoonal rainfall but also the relationship of the Australian monsoon with the ENSO [@ka].\ Figure 10 shows the power spectrum of the intraseasonal daily AUSM index for the period 1948-2015 (the data for the computations were taken from the site [@mon]). The straight line is drawn to indicate correspondence to Eq. (2) with $\beta =1/2$. The characteristic time scale obtained from the best fit is $T_0=1/f_0 \simeq 143$d. One can see that the AUSMI power spectrum is of the WNPMI type ($\beta=1/2$) rather than of the ISMI type ($\beta = 3/4$). This could be expected, because the geographic setting of the Australian monsoon with respect to the ocean-continent distribution is more similar to that of the WNPM than to that of the ISM.
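The quoted $\beta$ and $T_0$ values come from straight-line fits of the logarithm of the spectrum in the stretched coordinates $\ln E$ versus $f^{\beta}$. A minimal self-contained sketch of such a fit on synthetic data (the frequency grid and the noise-free model spectrum are our own illustrative choices, not the paper's data):

```python
import math

def fit_t0(freqs, spec, beta):
    """Least-squares straight-line fit of ln E against f**beta;
    for E(f) = exp(-(f/f0)**beta) the slope equals -f0**(-beta)."""
    xs = [f ** beta for f in freqs]
    ys = [math.log(e) for e in spec]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    f0 = (-slope) ** (-1.0 / beta)
    return 1.0 / f0  # characteristic time scale T0 = 1/f0

# synthetic spectrum with T0 = 1/f0 = 143 "days" and beta = 1/2, as for the AUSMI
f0 = 1.0 / 143.0
freqs = [k / 1000.0 for k in range(5, 300, 5)]
spec = [math.exp(-(f / f0) ** 0.5) for f in freqs]
print(round(fit_t0(freqs, spec, 0.5)))  # recovers 143
```

With real index data one would first compute the spectrum (e.g. by the maximum entropy method used in the paper) and restrict the fit to the high-frequency tail.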
Relationship with ENSO ====================== Naturally, the relationship between the Asian-Australian Monsoons and the El Niño (La Niña)-Southern Oscillation (ENSO) phenomenon is of great interest (see, for instance, Refs. [@wwl],[@ka]-[@gre] and references therein). The relationships between the ENSO and the two subsystems of the Asian Monsoon differ, in accordance with the difference between these subsystems. While the relationship between the ENSO and the Indian Summer Monsoon is rather uncertain and unstable, the relationship between the ENSO and the WNPM is much stronger and more stable. The relationship between the Australian Monsoon and the ENSO is even stronger due to their geographic settings (cf. Figs. 8 and 11, and see below).\ The Niño 3 (5N-5S, 150W-90W region) and the Niño 4 (5N-5S, 160E-150W region) indices (Fig. 11), based on sea surface temperature (SST) anomalies averaged across a given region, can be considered the relevant ones for comparison with the Asian-Australian Monsoon indices. It should be noted that the Niño 4 region is characterized by less variance than the other ENSO regions.\ Figure 12 shows the power spectrum of the intraseasonal daily Niño 3 index for the period 1981-2018 (the data for the computations were taken from the site [@nl]). The straight line is drawn to indicate correspondence to Eq. (2) with $\beta =1/2$. The characteristic time scale obtained from the best fit is $T_0=1/f_0 \simeq 280$d. Figure 13 shows the power spectrum of the intraseasonal daily Niño 4 index for the period 1981-2018 (the data for the computations were taken from the site [@nl]). The straight line is drawn to indicate correspondence to Eq. (2) with $\beta =1/2$. The characteristic time scale obtained from the best fit is $T_0=1/f_0 \simeq 196$d.\ Comparing Figs. 6 and 7 with Figs.
12 and 13, one can conclude that the Hamiltonian distributed chaos observed in the ENSO is more similar to the chaos observed in the WNPM than to that observed in the ISM. This conclusion is consistent with the other observations mentioned above. Moreover, comparing Fig. 10 with Figs. 12 and 13, one can conclude that the Australian Monsoon belongs to an extended ENSO system (at least on the intraseasonal time scales). Acknowledgement =============== I thank H.J. Fernando and B. Galperin for inspiring comments and J.C. Sprott for sharing his data. I acknowledge use of the data provided by the KNMI Climate Explorer, by the Australian Bureau of Meteorology, and by the NOAA: the Climate Prediction Center and the Asia-Pacific Data Research Center at the University of Hawaii. [99]{} B. Wang, R. Wu and K.-M. Lau, J. Climate, [**14**]{}, 4073 (2001). B. Wang and Z. Fan, Bull. Amer. Meteor. Soc., [**80**]{}, 629 (1999). http://apdrc.soest.hawaii.edu/projects/monsoon/ http://www.cpc.ncep.noaa.gov/products/Global\_Monsoons A. Bershadskii, Phil. Trans. R. Soc. A, [**371**]{}, 20120168 (2013). H. Lin, J. Atmos. Sci., [**66**]{}, 2697 (2009). Q. Ding et al., J. Climate, [**24**]{}, 1878 (2011). G. Grunseich and B. Wang, J. Climate, [**29**]{}, 9097 (2016). U. Frisch and R. Morf, Phys. Rev. A, [**23**]{}, 2673 (1981). J.D. Farmer, Physica D, [**4**]{}, 366 (1982). N. Ohtomo, K. Tokiwano, Y. Tanaka et al., J. Phys. Soc. Jpn., [**64**]{}, 1104 (1995). D.E. Sigeti, Phys. Rev. E, [**52**]{}, 2443 (1995). A. Bershadskii, arXiv:1803.10139 (2018). R.Z. Sagdeev, D.A. Usikov, G.M. Zaslavsky, Nonlinear Physics: from the Pendulum to Turbulence and Chaos (Harwood, New York, 1988). A. Bershadskii, arXiv:1801.07655 (2018). J.C. Sprott, Chaos and Time-Series Analysis (Oxford University Press, 2003). http://sprott.physics.wisc.edu/cdg.htm S. Ashkenazi and V. Steinberg, Phys. Rev. Lett., [**83**]{}, 3641 (1999). C. Tong and A. Gluhovsky, Phys. Rev. E, [**65**]{}, 046306 (2002). A.
Gluhovsky, Nonlinear Processes in Geophysics, [**13**]{}, 125 (2006). T.G. Shepherd, Advances in Geophysics, [**32**]{}, 287 (1990). T.G. Shepherd, Encyclopedia of Atmospheric Sciences, J. R. Holton et al., Eds., 929 (Academic Press, 2003). V. Pelino et al., Commun. Nonlinear Sci. Numer. Simulat., [**17**]{}, 2122 (2012). A. Gluhovsky and K. Grady, Chaos, [**26**]{}, 023119 (2016). A. Bershadskii, arXiv:1804.08536 (2018). M.J. Ventrice et al., Monthly Weather Review, [**141**]{}, 4197 (2013). K.Y. Kim et al., J. Geophys. Res., [**111**]{}, D20105 (2006). R. Suppiah, International Journal of Climatology, [**24**]{}, 269 (2004). http://www.clivar.org/asian-australian-monsoon Y. Kajikawa, B. Wang and J. Yang, International Journal of Climatology, [**30**]{}, 1114 (2010). B. Wang, I.S. Kang and J.Y. Lee, Journal of Climate, [**17**]{}, 803 (2004). http://www.cpc.ncep.noaa.gov/products/analysis\_monitoring/ensostuff/nino\_regions.shtml V. Krishnamurthy and B. N. Goswami, J. Climate, [**13**]{}, 579 (2000). B. Wang et al., Geophys. Res. Lett., [**32**]{}, L15711 (2005). C. Chou, J.Y. Tu and J.Y. Yu, J. Climate, [**16**]{}, 2275 (2003). W. Jiang et al., J. Climate, [**30**]{}, 109 (2017). J. Crétat et al., Climate Dynamics, [**49**]{}, 1429 (2017). https://climexp.knmi.nl/start.cgi
--- author: - 'Alexander Kurz[^1]' - 'Tao Liu[^2]' - 'Peter Marquard[^3]' - 'Alexander V. Smirnov[^4]' - | \ Vladimir A. Smirnov[^5] - 'Matthias Steinhauser[^6]' title: 'Higher order hadronic and leptonic contributions to the muon $g-2$' --- Introduction ============ The anomalous magnetic moment of the muon, $a_\mu$, is among the most precisely measured quantities in particle physics. It is measured to a precision of 0.54 parts per million, which matches the precision of the Standard Model theory prediction [@Bennett:2006fi; @Roberts:2010cj]. However, for many years one has observed a discrepancy of about three to four standard deviations which persists despite all improvements. This concerns both the experimental data and the theoretical calculations entering the prediction. Currently a new experiment is being built at Fermilab with the aim to increase the accuracy of the measured value by about a factor of four [@Carey:2009zzb; @Herzog_fccp15]. In the upcoming years improvements on the theory side can also be expected. On the one hand this is connected to improved measurements of $R(s)$ at low energies (see, e.g., Refs. [@Aubert:2009ad; @Ambrosino:2010bv; @Eidelman_fccp15]). On the other hand it can be expected that within the next few years results from lattice simulations will become available both for the hadronic vacuum polarization and the hadronic light-by-light contributions (see, e.g., Refs. [@DellaMorte:2011aa; @Burger:2013jya; @Blum:2014oka; @Blum:2015gfa; @Lehner_fccp15]). The by far dominant numerical contribution to $a_\mu$ originates from QED corrections, which are known to five-loop order [@Aoyama:2012wk]. Note, however, that the four- and five-loop corrections have only been computed by a single group.[^7] For this reason we have recently started to systematically check the four-loop results of [@Aoyama:2012wk]. In Ref.
[@Lee:2013sx] analytic results for the gauge-invariant subsets with two or three closed electron loops have been obtained, neglecting power corrections of the form $m_e/m_\mu$. All contributions involving a $\tau$ lepton have been computed in Ref. [@Kurz:2013exa]. After including three (analytic) expansion terms in $m_\mu^2/m_\tau^2$, a better precision has been obtained than in the numerical approach of Ref. [@Aoyama:2012wk]. The numerically most important QED contributions at the four-loop level arise from light-by-light-type diagrams (i.e. diagrams where the external photon does not couple to the external muon line) containing a closed electron loop. This well-defined subset has been considered in Ref. [@Kurz:2015bia], where an asymptotic expansion for $m_e \ll m_\mu$ has been performed to compute four expansion terms. We adopt the notation of Ref. [@Aoyama:2012wk] and parametrize the anomalous magnetic moment in the form $$\begin{aligned} a_\mu &=& \sum_{n=1}^\infty a_\mu^{(2n)} \left( \frac{\alpha}{\pi} \right)^n \,, \label{eq::amu}\end{aligned}$$ where the four-loop contribution can be written as $$\begin{aligned} a_\mu^{(8)} &=& A_1^{(8)} + A_2^{(8)}(m_\mu / m_e) + A_2^{(8)}(m_\mu / m_\tau) \nonumber\\&&\mbox{} + A_3^{(8)}(m_\mu / m_e, m_\mu / m_\tau) \,. \label{eq::Amu}\end{aligned}$$ $A_1^{(8)}$ contains only contributions from photons and muons, $A_2^{(8)}(m_\mu / m_e)$ and $A_2^{(8)}(m_\mu / m_\tau)$ involve closed electron or tau loops, and each Feynman diagram which contributes to $A_3^{(8)}(m_\mu / m_e, m_\mu / m_\tau)$ contains all three lepton flavours simultaneously. In Sections \[sec::electron\] and \[sec::tau\] we describe the calculation of the light-by-light-type QED contribution to $A_2^{(8)}(m_\mu / m_e)$ (see also [@Kurz:2015bia]) and the computation of $A_2^{(8)}(m_\mu / m_\tau)$ (see also [@Kurz:2013exa]), respectively.
Afterwards we summarize in Section \[sec::hadr\] the computation of the next-to-next-to-leading order (NNLO) hadronic vacuum polarization contribution published in Ref. [@Kurz:2014wya]. A brief summary and an outlook are given in Section \[sec::summary\]. \[sec::electron\]Four-loop electron contribution ================================================ The numerically most important contribution to $a_\mu^{(8)}$ originates from diagrams involving a closed electron loop (denoted by $A_2^{(8)}(m_\mu / m_e)$ in Eq. (\[eq::Amu\])). This contribution contains a gauge-invariant subset where the external photon does not couple to the external muon line but to a closed fermion loop, the so-called leptonic light-by-light-type diagrams. Due to Furry’s theorem such diagrams do not contribute at two-loop order but only start at three loops, where four photons can be attached to the closed fermion loop. Here we discuss the four-loop result, which can be sub-divided into three gauge-invariant and finite contributions which we denote by IV(a), IV(b) and IV(c). Sample Feynman diagrams are shown in Fig. \[fig::FDs\]. Case IV(a) can be further subdivided according to the flavour of the leptons in the closed fermion loops. The contribution with two electron loops is denoted by IV(a0), the one with one muon and one electron loop and the coupling of the external photon to the electron loop by IV(a1), and the remaining one with one muon and one electron loop by IV(a2). We do not consider the case with two muon loops since this contribution is part of $A_1^{(8)}$. ![image](gm2_4lbl_a.eps) ![image](gm2_4lbl_b.eps) ![image](gm2_4lbl_c.eps)\ IV(a) IV(b) IV(c) The light-by-light-type diagrams are numerically dominant and provide about 95% of the four-loop electron-loop contribution. The main reason for this is the presence of $\log(m_e/m_\mu)$ terms, which survive even in the limit $m_e\to0$. In fact, IV(a0) even contains quadratic logarithms, which makes this part the most important one.
Our calculation is based on an asymptotic expansion [@Beneke:1997zp; @Smirnov:2013] for $m_e\ll m_\mu$, which is implemented with the help of [asy]{} [@Pak:2010pt; @Jantzen:2012mw] and in-house [Mathematica]{} programs. Similar to the hard-mass procedure applied in Section \[sec::tau\], we obtain a factorization of the original two-scale integrals into products of one-scale integrals. The latter are either vacuum or on-shell integrals, or integrals containing eikonal propagators of the form $1/(p\cdot q)$ (see Ref. [@Kurz:2015bia] for more details). For each integral class we perform a reduction to master integrals and obtain analytic results expressed as linear combinations of about 150 so-called master integrals. About 50% of them are known analytically or to high numerical precision. The remaining ones are computed with the help of the package [FIESTA]{} [@Smirnov:2013eza], which is the source of the numerical uncertainty in our final result. We would like to stress that in our approach a systematic improvement is possible should a higher accuracy be required.

  $A_2^{(8)}\left(\frac{m_\mu}{m_e}\right)$   this work           Refs. [@Kinoshita:2004wi; @Aoyama:2012wk]
  ------------------------------------------- ------------------- --------------------------------------------------
  IV(a0)                                      $116.76 \pm 0.02$   $116.759183 \pm 0.000292$
  IV(a1)                                      $2.69 \pm 0.14$     $2.697443 \pm 0.000142$
  IV(a2)                                      $4.33 \pm 0.17$     $4.328885 \pm 0.000293$
  IV(a)                                       $123.78\pm 0.22$    $123.78551\hphantom{2} \pm 0.00044\hphantom{2}$
  IV(b)                                       $-0.38 \pm 0.08$    $-0.4170\hphantom{22} \pm 0.0037\hphantom{22}$
  IV(c)                                       $2.94 \pm 0.30$     $2.9072\hphantom{22} \pm 0.0044\hphantom{22}$

  : \[tab::a8\]Summary of the final results for the individual four-loop light-by-light-type contributions and their comparison with the results presented in Refs. [@Kinoshita:2004wi; @Aoyama:2012wk].

For all five cases we compute terms up to order $(m_e/m_\mu)^3$ (i.e. four expansion terms) and check that the cubic corrections only provide a negligible contribution. Our final results can be found in Tab.
\[tab::a8\], where we compare with the findings of Refs. [@Kinoshita:2004wi; @Aoyama:2012wk]. Note that results for IV(a0) have also been obtained in Refs. [@Calmet:1975tw; @Chlouber:1977dr], though with significantly larger uncertainties. In all cases good agreement with [@Kinoshita:2004wi; @Aoyama:2012wk] is found. Although our numerical uncertainty, which amounts to approximately $0.4 \times (\alpha/\pi)^4 \approx 1.2 \times 10^{-11}$, is larger, the final result is nevertheless sufficiently accurate, as can be seen by comparison with the difference between the experimental result and the theory prediction, which is given by $$\begin{aligned} a_\mu({\rm exp}) - a_\mu({\rm SM}) &\approx& 249(87) \times 10^{-11} \,. \label{eq::amu\_diff}\end{aligned}$$ This result is taken from Ref. [@Aoyama:2012wk]. Note that the uncertainty in Eq. (\[eq::amu\_diff\]) receives approximately equal contributions from experiment and theory. Even after a projected reduction of the uncertainty by a factor of four both in $a_\mu({\rm exp})$ and $a_\mu({\rm SM})$, our numerical precision is a factor of ten below the uncertainty of the difference. \[sec::tau\]Four-loop tau lepton contribution ============================================= In this section we discuss the gauge-invariant and finite subset of Feynman diagrams involving a closed heavy tau lepton loop. In the limit of an infinitely heavy $m_\tau$ this contribution has to vanish. Thus $A_2^{(8)}(m_\mu / m_\tau)$ has a parametric dependence on $m_\mu^2/m_\tau^2$, which is of order $10^{-3}$. Note that $\alpha/\pi \approx 2\cdot 10^{-3}$, and thus one can expect the four-loop tau lepton contribution to be of the same order as the universal five-loop result [@Aoyama:2012wk]. We compute this contribution by applying an asymptotic expansion in the limit $m_\tau^2 \gg m_\mu^2$. This is realized with the help of the program [exp]{} [@Harlander:1997zb; @Seidensticker:1999bb], which is written in [C++]{}.
As a result the two-scale four-loop integrals factorize into one-scale vacuum ($m_\tau$) and on-shell ($m_\mu$) integrals. Both integral classes are well studied in the literature (for references see [@Kurz:2013exa]). This concerns both the reduction to master integrals and the analytic evaluation of the latter. In the first line of Fig. \[fig::ae\] a sample Feynman diagram is shown, where the thick solid lines represent the tau leptons. Rows two and three of Fig. \[fig::ae\] show the result of the asymptotic expansion, where the graphs to the left of the symbol $\otimes$ have to be expanded in all small quantities, i.e., the external momenta and the muon mass. Thus, the only mass scale of the remaining vacuum integral is the tau lepton mass. The result of the Taylor expansion is inserted into the effective vertex (thick blob) present in the diagram to the right of $\otimes$. Afterwards the remaining loop integrations, which are of on-shell type, are performed. As a final result we obtain an expansion in $m_\mu^2/m_\tau^2$ with analytic coefficients containing $\log(m_\mu^2/m_\tau^2)$ terms. Note that with the help of this method a better accuracy has been obtained than with the numerical approach of Ref. [@Aoyama:2012wk]. Inserting numerical values for the lepton masses leads to $$\begin{aligned} A^{(8)}_{2,\mu}(m_\mu/m_\tau) &=& 0.0421670 + 0.0003257 \nonumber\\&&\mbox{} + 0.0000015 + \ldots \,, \label{eq::A8}\end{aligned}$$ where the ellipsis indicates terms of order $(m_\mu^2/m_\tau^2)^4$, which are expected to contribute at the level of $10^{-8}$ to $A^{(8)}_{2,\mu}(m_\mu/m_\tau)$. $a_\mu$ receives contributions from $\tau$ lepton loops starting at two-loop order. Their numerical impact is given by $$\begin{aligned} 10^{11} \times a_\mu\Big|_{\tau \rm loops} &=& 42.13 + 0.45 + 0.12 \,, \label{eq::amu\_tau}\end{aligned}$$ where the numbers on the right-hand side correspond to two, three and four loops.
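As a quick arithmetic cross-check (our own, with the value of the fine-structure constant as an input assumption), the coefficient in Eq. (\[eq::A8\]) reproduces the four-loop entry of Eq. (\[eq::amu\_tau\]):

```python
import math

alpha = 1 / 137.035999  # fine-structure constant (input value is our assumption)
A8_tau = 0.0421670 + 0.0003257 + 0.0000015  # expansion terms of Eq. (A8)

# four-loop tau lepton contribution to a_mu, quoted in units of 10^-11
a_mu_4loop = A8_tau * (alpha / math.pi) ** 4 * 1e11
print(round(a_mu_4loop, 2))  # 0.12, the last term of Eq. (amu_tau)
```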
It is interesting to note that the three-loop term is less than a factor of four larger than the four-loop counterpart. Furthermore, it is worth comparing the numbers in Eq. (\[eq::amu\_tau\]) to the universal contributions contained in $A_{1}$, which read [@Aoyama:2012wk] $$\begin{aligned} 10^{11} \times a_\mu\Big|_{\rm univ.} \!\!\!&=&\!\!\! 116\,140\,973.21 - 177\,230.51 \nonumber\\&&\mbox{} + 1\,480.42 - 5.56 + 0.06 \,,\end{aligned}$$ where the individual terms on the right-hand side represent the results from one to five loops. Note that the four-loop tau lepton term is twice as large as the five-loop photonic contribution. \[sec::hadr\]NNLO hadronic contribution ======================================= The LO hadronic contribution to the anomalous magnetic moment of the muon is obtained from diagram (a) in Fig. \[fig::FD\_nnlo\]. One parametrizes the hadronic contribution (represented by the blob) by the polarization function $\Pi(q^2)$, which appears as a factor in the integrand of the one-loop diagram. In the next step one exploits the analyticity of $\Pi(q^2)$ and uses a dispersion integral to introduce its imaginary part, $$\begin{aligned} R(s) &=& \frac{ \sigma(e^+e^-\to\mbox{hadrons}) }{ \sigma_{pt} } \,,\end{aligned}$$ with $\sigma_{pt} = 4\pi\alpha^2/(3s)$. Note that $\sigma(e^+e^-\to\mbox{hadrons})$ does not include initial-state radiation or vacuum polarization corrections. At this point the loop integration and the dispersion integral are interchanged and one obtains $$\begin{aligned} a_\mu^{(1)} &=& \frac{1}{3} \left(\frac{\alpha}{\pi}\right)^2 \int_{m_\pi^2}^\infty {\rm d} s \frac{R(s)}{s} K^{(1)}(s) \,. \label{eq::aLO}\end{aligned}$$ A convenient integral representation for the kernel function $K^{(1)}(s)$, which is the result of the loop integration, is given by $$\begin{aligned} K^{(1)}(s) &=& \int_0^1 {\rm d} x \frac{x^2(1-x)}{x^2 + (1-x) \frac{s}{m_\mu^2}} \,.
\label{eq::K1}\end{aligned}$$ At one-loop order it is possible to obtain analytic results (see Refs. [@BroRaf68; @Eidelman:1995ny]). Nevertheless, it is useful to consider $K^{(1)}(s)$ in the limit $m_\mu^2 \ll s$, which is justified since the lower integration limit in Eq. (\[eq::aLO\]) is $m_\pi^2$, which is larger than $m_\mu^2$. The expansion of $K^{(1)}(s)$ is easily obtained by remembering that it originates from a vertex diagram similar to Fig. \[fig::FD\_nnlo\](a), where the hadronic blob (including the external photon lines) is replaced by a massive photon with mass $\sqrt{s}$. The expansion for $m_\mu^2 \ll s$ is easily implemented with the help of the program [exp]{} [@Harlander:1997zb; @Seidensticker:1999bb], which implements the rules of asymptotic expansions involving a large internal mass (see, e.g., Ref. [@Smirnov:2013]). As a result the original two-scale integral is represented as a sum of one-scale integrals which are simple to compute. Using this approach several expansion terms in $m_\mu^2/s$ can be computed. One observes that an excellent approximation for $a_\mu^{(1)}$ is obtained by including terms up to order $(m_\mu^2/s)^5$. The approach described in detail for the one-loop diagram can also be applied at two and three loops, where exact calculations of the kernel functions are either very difficult or even impossible. In Ref. [@Kurz:2014wya] four expansion terms have been computed, which provides an approximation at the per mille level. A slight complication arises for the contributions involving more than one hadronic insertion, see Figs. \[fig::FD\_nnlo\](d,h,i,j,l). In case they are present in the same photon line, formulas similar to Eq. (\[eq::K1\]) can be derived, with two- and three-dimensional integrations. Diagrams of type $(3c)$ in Fig. \[fig::FD\_nnlo\] are more involved.
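The large-$s$ behaviour of the kernel is easy to verify numerically: the leading term of the expansion of Eq. (\[eq::K1\]) is $K^{(1)}(s) \approx m_\mu^2/(3s)$, so writing $K^{(1)}(s) = (m_\mu^2/3s)\,\hat K(s)$ one has $\hat K(s) \to 1$ for $s \gg m_\mu^2$. A minimal quadrature sketch (grid size and sample points are our own choices):

```python
def K1(t, n=20000):
    """Simpson-rule evaluation of Eq. (K1) with t = s/m_mu^2."""
    h = 1.0 / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * x * x * (1 - x) / (x * x + (1 - x) * t)
    return total * h / 3.0

def K1hat(t):
    # normalized kernel: K1(s) = (m_mu^2 / (3 s)) * K1hat(s)
    return 3.0 * t * K1(t)

for t in (7.0, 100.0, 1e6):
    print(t, K1hat(t))
# K1hat grows monotonically towards 1 as s/m_mu^2 -> infinity
```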
Here we apply a multiple asymptotic expansion in the limits $s\gg s^\prime \gg m_\mu^2$, $s \approx s^\prime\gg m_\mu^2$ and $s^\prime\gg s\gg m_\mu^2$ ($s$ and $s^\prime$ are the integration variables) and construct an interpolating function by combining the results from the individual limits. ----------------------- ----------------------- ------------------------- ----------------------- ![image](amuh1.eps) ![image](amuh2a.eps) ![image](amuh2b.eps) ![image](amuh2c.eps) \(a) LO \(b) $2a$ \(c) $2b$ \(d) $2c$ ![image](amuh3a.eps) ![image](amuh3b.eps) ![image](amuh3b3.eps) ![image](amuh3c1.eps) \(e) $3a$ \(f) $3b$ \(g) $3b$ \(h) $3c$ ![image](amuh3c2.eps) ![image](amuh3c3.eps) ![image](amuh3blbl.eps) ![image](amuh3d.eps) \(i) $3c$ \(j) $3c$ \(k) $3b$,lbl \(l) $3d$ ----------------------- ----------------------- ------------------------- ----------------------- The LO result for the hadronic vacuum polarization contribution to $a_\mu$ can be found in Refs. [@Davier:2010nc; @Hagiwara:2011af; @Jegerlehner:2011ti; @Benayoun:2012wc; @Jegerlehner:2015stw] and NLO analyses have been performed in Refs. [@Krause:1996rf; @Greynat:2012ww; @Hagiwara:2003da; @Hagiwara:2011af]. Our NLO results for the three contributions read $$\begin{aligned} a_\mu^{(2a)} &=& -20.90 \times 10^{-10}\,, \nonumber\\ a_\mu^{(2b)} &=& 10.68 \times 10^{-10}\,, \nonumber\\ a_\mu^{(2c)} &=& 0.35 \times 10^{-10}\,,\end{aligned}$$ which leads to $$\begin{aligned} a_\mu^{\rm had,NLO} &=& (-9.87 \pm 0.09) \times 10^{-10}\,,\end{aligned}$$ in good agreement with Refs. [@Hagiwara:2003da; @Hagiwara:2011af]. Note that in our analysis no correlated uncertainties are taken into account. Such a rough treatment would not be appropriate at LO but is certainly acceptable at NNLO.
For the individual NNLO contributions we obtain the results $$\begin{aligned} a_\mu^{(3a)} &=& 0.80 \times 10^{-10} \,,\nonumber\\ a_\mu^{(3b)} &=& -0.41 \times 10^{-10} \,,\nonumber\\ a_\mu^{(3b,\rm lbl)} &=& 0.91 \times 10^{-10} \,,\nonumber\\ a_\mu^{(3c)} &=& -0.06 \times 10^{-10} \,,\nonumber\\ a_\mu^{(3d)} &=& 0.0005 \times 10^{-10} \,,\end{aligned}$$ which leads to $$\begin{aligned} a_\mu^{\rm had,NNLO} &=& 1.24 \pm 0.01 \times 10^{-10}\,. \label{eq::amuNNLO}\end{aligned}$$ It is interesting to note that similar patterns are observed at two and three loops: multiple hadronic insertions are small, and the contributions of type (b) involving closed electron two-point functions reduce the contributions of type (a) by about 50%. However, at three-loop order there is a new type of diagram where the external photon couples to a closed electron loop ($a_\mu^{(3b,\rm lbl)}$) which provides the largest individual contribution. This is in analogy to the three-loop QED corrections, where the light-by-light type diagrams dominate the remaining contributions. In fact, due to $a_\mu^{(3b,\rm lbl)}$ the NNLO hadronic vacuum polarization contribution has a non-negligible impact. It has the same order of magnitude as the current uncertainty of the leading order hadronic contribution and should thus be included in future analyses. An important contribution to $a_\mu$ is provided by the so-called hadronic light-by-light diagrams where the external photon is connected to the hadronic blob. The NLO part of this contribution is of the same perturbative order as the corrections in Eq. (\[eq::amuNNLO\]). A first-principles calculation of this part is currently not available; however, in Ref. [@Colangelo:2014qya] it has been estimated to be $a_\mu^{\rm lbl-had,NLO}=0.3 \pm 0.2 \times 10^{-10}$.
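The quoted totals follow directly from the individual contributions; a trivial cross-check of the arithmetic (in units of $10^{-10}$) is:

```python
# NLO and NNLO hadronic vacuum polarization contributions to a_mu,
# in units of 10^{-10}, as quoted in the text.
nlo = {"2a": -20.90, "2b": 10.68, "2c": 0.35}
nnlo = {"3a": 0.80, "3b": -0.41, "3b,lbl": 0.91, "3c": -0.06, "3d": 0.0005}

a_nlo = sum(nlo.values())    # -9.87
a_nnlo = sum(nnlo.values())  #  1.2405, quoted as 1.24 +- 0.01

print(f"a_mu^had,NLO  = {a_nlo:.2f} x 10^-10")
print(f"a_mu^had,NNLO = {a_nnlo:.2f} x 10^-10")
```

The sums reproduce the values $-9.87\times 10^{-10}$ and $1.24\times 10^{-10}$ given above; the quoted uncertainties come from the analysis itself, not from this bookkeeping.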
We want to mention that there is a further hadronic contribution where four internal photons couple to the hadronic blob and the external photon couples to the muon line (“internal hadronic light-by-light”). This contribution, which is formally of the same perturbative order as $a_\mu^{\rm had,NNLO}$, is currently unknown. \[sec::summary\]Summary and conclusions ======================================= For more than a decade the measured and predicted results for the anomalous magnetic moment of the muon have shown a discrepancy of three to four standard deviations. This circumstance has triggered many publications which try to interpret the deviation with the help of beyond-SM theories. However, before drawing definite conclusions it is necessary to cross-check the experimental result by performing an independent high-precision determination of $a_\mu$. Furthermore, all ingredients of the theory prediction should be computed by at least two groups independently. In this contribution we describe the calculation of two classes of four-loop QED contributions to $a_\mu$, which to date have been computed by only one group: the contribution involving tau leptons and the one involving light-by-light-type closed electron loops. Good agreement with the results in the literature is found. To complete the cross-check of the four-loop result, the non-light-by-light electron contribution, the diagrams involving simultaneously electrons and taus, and the pure-muon contribution have to be computed. From the technical point of view the missing diagram classes have the same complexity as those described in Sections \[sec::electron\] and \[sec::tau\]. As a further topic we have discussed in Section \[sec::hadr\] the calculation of the NNLO hadronic vacuum polarization contribution. Acknowledgments {#acknowledgments .unnumbered} =============== M.S. would like to thank the organizers of “Flavour changing and conserving processes 2015” for the pleasant atmosphere during the conference.
We thank the High Performance Computing Center Stuttgart (HLRS) and the Supercomputing Center of Lomonosov Moscow State University [@LMSU] for providing computing time used for the numerical computations with [FIESTA]{}. P.M. was supported in part by the EU Network HIGGSTOOLS PITN-GA-2012-316704. This work was supported by the DFG through the SFB/TR 9 “Computational Particle Physics”. The work of V.S. was supported by the Alexander von Humboldt Foundation (Humboldt Forschungspreis). G. W. Bennett [*et al.*]{} \[Muon g-2 Collaboration\], Phys. Rev. D [**73**]{} (2006) 072003 \[hep-ex/0602035\]. B. L. Roberts, Chin. Phys. C [**34**]{} (2010) 741 \[arXiv:1001.2898 \[hep-ex\]\]. R. M. Carey [*et al.*]{}, FERMILAB-PROPOSAL-0989. D. Hertzog, these proceedings. B. Aubert [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. Lett.  [**103**]{} (2009) 231801 \[arXiv:0908.3589 \[hep-ex\]\]. F. Ambrosino [*et al.*]{} \[KLOE Collaboration\], Phys. Lett. B [**700**]{} (2011) 102 \[arXiv:1006.5313 \[hep-ex\]\]. S. Eidelman, these proceedings. M. Della Morte, B. Jager, A. Juttner and H. Wittig, JHEP [**1203**]{} (2012) 055 \[arXiv:1112.2894 \[hep-lat\]\]. F. Burger [*et al.*]{} \[ETM Collaboration\], JHEP [**1402**]{} (2014) 099 \[arXiv:1308.4327 \[hep-lat\]\]. T. Blum, S. Chowdhury, M. Hayakawa and T. Izubuchi, Phys. Rev. Lett.  [**114**]{} (2015) 1, 012001 \[arXiv:1407.2923 \[hep-lat\]\]. T. Blum, N. Christ, M. Hayakawa, T. Izubuchi, L. Jin and C. Lehner, arXiv:1510.07100 \[hep-lat\]. C. Lehner, these proceedings. T. Aoyama, M. Hayakawa, T. Kinoshita and M. Nio, Phys. Rev. Lett.  [**109**]{} (2012) 111808 \[arXiv:1205.5370 \[hep-ph\]\]. S. Laporta, Phys. Lett. B [**312**]{} (1993) 495 \[hep-ph/9306324\]. P. A. Baikov and D. J. Broadhurst, \[hep-ph/9504398\]. J. -P. Aguilar, D. Greynat and E. De Rafael, Phys. Rev. D [**77**]{} (2008) 093010 \[arXiv:0802.2618 \[hep-ph\]\]. R. Lee, P. Marquard, A. V. Smirnov, V. A. Smirnov and M. 
Steinhauser, JHEP [**1303**]{} (2013) 162 \[arXiv:1301.6481 \[hep-ph\]\]. P. A. Baikov, K. G. Chetyrkin, J. H. Kuhn, C. Sturm, Nucl. Phys. B [**867**]{} (2013) 182 \[arXiv:1207.2199 \[hep-ph\]\]. P. A. Baikov, A. Maier and P. Marquard, Nucl. Phys. B [**877**]{} (2013) 647 \[arXiv:1307.6105 \[hep-ph\]\]. A. Kurz, T. Liu, P. Marquard and M. Steinhauser, Nucl. Phys. B [**879**]{} (2014) 1 \[arXiv:1311.2471 \[hep-ph\]\]. A. Kurz, T. Liu, P. Marquard, A. V. Smirnov, V. A. Smirnov and M. Steinhauser, arXiv:1508.00901 \[hep-ph\]. A. Kurz, T. Liu, P. Marquard and M. Steinhauser, Phys. Lett. B [**734**]{} (2014) 144 \[arXiv:1403.6400 \[hep-ph\]\]. M. Beneke and V. A. Smirnov, Nucl. Phys. B [**522**]{} (1998) 321 \[hep-ph/9711391\]. V. A. Smirnov, [*Analytic tools for Feynman integrals*]{}, Springer Tracts Mod. Phys.  [**250**]{} (2012) 1. A. Pak and A. Smirnov, Eur. Phys. J. C [**71**]{} (2011) 1626 \[arXiv:1011.4863 \[hep-ph\]\]. B. Jantzen, A. V. Smirnov and V. A. Smirnov, Eur. Phys. J. C [**72**]{} (2012) 2139 \[arXiv:1206.0546 \[hep-ph\]\]. A. V. Smirnov, Comput. Phys. Commun.  [**185**]{} (2014) 2090 \[arXiv:1312.3186 \[hep-ph\]\]. T. Kinoshita and M. Nio, Phys. Rev. D [**70**]{} (2004) 113001 \[hep-ph/0402206\]. J. Calmet and A. Peterman, Phys. Lett. B [**56**]{} (1975) 383. C. Chlouber and M. A. Samuel, Phys. Rev. D [**16**]{} (1977) 3596. R. Harlander, T. Seidensticker and M. Steinhauser, Phys. Lett. B [**426**]{} (1998) 125, arXiv:hep-ph/9712228. T. Seidensticker, arXiv:hep-ph/9905298. S. J. Brodsky and E. De Rafael, Phys. Rev.  [**168**]{} (1968) 1620. S. Eidelman and F. Jegerlehner, Z. Phys. C [**67**]{} (1995) 585 \[hep-ph/9502298\]. M. Davier, A. Hoecker, B. Malaescu and Z. Zhang, Eur. Phys. J. C [**71**]{} (2011) 1515 \[Erratum-ibid. C [**72**]{} (2012) 1874\] \[arXiv:1010.4180 \[hep-ph\]\]. K. Hagiwara, R. Liao, A. D. Martin, D. Nomura and T. Teubner, J. Phys. G [**38**]{} (2011) 085003 \[arXiv:1105.3149 \[hep-ph\]\]. F. Jegerlehner and R. Szafron, Eur. Phys. 
J. C [**71**]{} (2011) 1632 \[arXiv:1101.2872 \[hep-ph\]\]. M. Benayoun, P. David, L. DelBuono and F. Jegerlehner, Eur. Phys. J. C [**73**]{} (2013) 2453 \[arXiv:1210.7184 \[hep-ph\]\]. F. Jegerlehner, these proceedings, arXiv:1511.04473 \[hep-ph\]. B. Krause, Phys. Lett. B [**390**]{} (1997) 392 \[hep-ph/9607259\]. D. Greynat and E. de Rafael, JHEP [**1207**]{} (2012) 020 \[arXiv:1204.3029 \[hep-ph\]\]. K. Hagiwara, A. D. Martin, D. Nomura and T. Teubner, Phys. Rev. D [**69**]{} (2004) 093003 \[hep-ph/0312250\]. G. Colangelo, M. Hoferichter, A. Nyffeler, M. Passera and P. Stoffer, Phys. Lett. B [**735**]{} (2014) 90 \[arXiv:1403.7512 \[hep-ph\]\]. V. Sadovnichy, A. Tikhonravov, V. Voevodin, and V. Opanasenko, “‘Lomonosov’: Supercomputing at Moscow State University.” In Contemporary High Performance Computing: From Petascale toward Exascale (Chapman & Hall/CRC Computational Science), pp. 283–307, Boca Raton, USA, CRC Press, 2013. [^7]: Partial four-loop corrections have been obtained in Refs. [@Laporta:1993ds; @Baikov:1995ui; @Aguilar:2008qj; @Lee:2013sx; @Baikov:2012rr; @Baikov:2013ula].
--- abstract: | [**Abstract**]{} The stability conditions for coordinate gauge independent perturbations of brane-worlds are analyzed. It is shown that these conditions lead to the Einstein-Hilbert dynamics and to a confined gauge potential, independently of models and metric ansatzes. The size of the extra dimensions is estimated without assuming a fixed topology. The quantum modes corresponding to high frequency gravitational waves are defined through a canonical structure. author: - | M. D. Maia\ Universidade de Brasília, Instituto de Física\ Brasília. D.F. 7919-970\ [maia@fis.unb.br]{}\ and\ Edmundo M. Monte\ Universidade Federal da Paraíba\ Departamento de Física\ João Pessoa, Pb. 58100-000\ [edmundo@fis.ufpb.br]{} title: 'Geometric Stability of Brane-worlds' --- pacs[ 11.10.Kk, 04.50.+h, 04.60.+n]{} Introduction ============= The “brane-worlds” program proposes a unification of all interactions at the TeV scale, under the supposition that there are large (as compared to Planck’s length) extra dimensions. The hierarchy problem is solved by the assumption that the four-dimensional space-time geometry undergoes quantum fluctuations along the extra dimensions, but the gauge interactions of the standard model remain confined within the space-time. This is implemented by the introduction of a fundamental Planck scale in the higher dimensional bulk, while the effective four-dimensional Planck mass is adjusted in accordance with the geometry of the higher dimensional space. The geometrical scenario compatible with these assumptions is that four-dimensional space-times, the brane-worlds, are dynamically embedded in a higher dimensional solution of Einstein’s equations [@Arkani:1; @Randall]. In contrast with Kaluza-Klein and superstring theories, the brane-world unification may in principle be experimentally verified by high-energy collisions at hundreds of TeV.
Another implication that may soon be checked is the modification of Newton’s law of gravitation at submillimeter scales [@Newman]. There is not yet a theory of brane-worlds in the sense that its fundamental principles are well established. Some earlier publications, not always acknowledged, have proposed some of the basic concepts and formal tools. For example, the confinement of gauge interactions to a space-time behaving as a potential well in a higher dimensional space was suggested in the early eighties [@Akama:1; @RS; @Visser; @Antoniadis]. Applications of space-time embeddings to the quantization of the geometry, to the generation of internal symmetries, and as an alternative to compactification have been around for a considerable time [@Friedman; @RT; @Deser; @Maia:1; @Maia:2; @Davidson; @Pavsic:1; @Gibbons; @Tapia:1; @Pavsic:2; @Tapia:2]. Recent reviews and additional references can be found in [@Zurab:1; @Lorenzana; @Carter]. Presently there are two conceptually distinct approaches to brane-worlds. One of them is based on a higher dimensional space with product topology $M_{4}\times B_{N}$, $N\ge 2$, where $B_{N}$ is a finite-volume internal space, similar to Kaluza-Klein theory [@Arkani:1]. The other is defined on a 5-dimensional anti-de Sitter geometry, with brane-like boundary conditions [@Randall]. Current problems include the stability of the brane-world structure, the lack of a proper explanation of the confinement of the gauge fields, and the lack of a consistent definition of the quantum states [@Giddings; @Akama:2]. The purpose of this paper is to show that the stability of the embedding is a central property of brane-worlds compatible with the aforementioned assumptions, independently of models and metric ansatzes. More specifically, we show in sections II and III that the stability conditions for the classical perturbations of a brane-world, regardless of the size, number and signature of the extra dimensions, provide the dynamics of the brane-world evolution.
The cases with just one extra dimension, studied in section IV, are found to have limited properties, including the need for confinement mechanisms which are independent of, and may interfere with, the stability conditions. The quantum fluctuations of the brane-worlds, described in section V, induce topological changes which are incompatible with a fixed topology. We show that such topological changes result from the multiparameter quantum fluctuations. In section VI it is shown that the differentiable structure of the brane-world also implies the existence of a confined gauge field structure. Finally, the concluding section also discusses the number and size of the extra coordinates. Brane-worlds perturbations =========================== In very general terms, a brane-world may be described as a locally embedded space-time, capable of quantum fluctuations along the extra dimensions, but retaining the gauge interactions confined within it. These conditions impose dynamical conditions on the embedding, producing what could be called a [*dynamical embedding*]{} of space-times, as opposed to the more common analytic embedding. It is well known that any space-time $\bar{V}_{n}$ may be locally embedded into a manifold $V_{D}$ of sufficiently large dimension $D$. The number of extra dimensions $N=D-n$ depends on the geometries of $\bar{V}_{n}$ and of $V_{D}$, and, of course, on the type of embedding map, which may be local, non-local, isometric, or conformal, among other possibilities. The simplest examples are those of space-times isometrically embedded in a flat space $M_{D}$, where the embedding is given by analytic functions [@Cartan; @Janet]. The analyticity assumption greatly simplifies the embedding and implies that $D$ is at most $10$. In brane-worlds it is not obvious that the analyticity condition holds under the assumed conditions, and in particular under the quantum fluctuations.
However, it seems reasonable to assume that the embedding remains differentiable, determined by the classic Gauss-Codazzi-Ricci equations. In this case, the limiting dimension for flat embeddings rises to $D=14$, with a wide range of compatible signatures [@Greene]. We will see that the differentiable hypothesis is consistent with the Einstein-Hilbert dynamical principle, which lies at the basis of the mentioned dynamical embedding. For generality, we consider an n-dimensional submanifold with arbitrary metric signature embedded in a D-dimensional space, which is a solution of higher dimensional Einstein’s equations[^1]. The embedding of the background $\bar{V}_{n}$, with metric $\bar{g}_{ij}$, is given by the local map $\bar{\cal X}: \bar{V}_{n}\rightarrow V_{D}$ such that[^2] $$\bar{\cal X}^{\mu}_{,i}\bar{\cal X}^{\nu}_{,j}{\cal G}_{\mu\nu} =\bar{g}_{ij},\; \bar{\cal X}^{\mu}_{,i}\eta^{\nu}_{A}{\cal G}_{\mu\nu}=0,\; {\eta}^{\mu}_{A}{\eta}^{\nu}_{B}{\cal G}_{\mu\nu}=g_{AB} \label{eq:multi2}$$ where we have denoted by ${\cal G}_{\mu\nu}$ the metric of $V_{D}$ in arbitrary coordinates and $g_{AB}$ denotes the components of the metric of the complementary space $B_{N}$ in the basis $\{\eta_{A}\}$. One way to generate a brane-world is to deform the background space-time $\bar{V}_{n}$ in such a way that it remains compatible with the gauge confinement and the quantum fluctuations. The perturbation of an embedded geometry with respect to a small parameter $s$ along an arbitrary transverse direction $\zeta$ in $V_{D}$ was defined long ago [@Campbell; @Nash; @Geroch; @Stewart; @Gowdy], starting with the perturbation of the embedding map $$\begin{aligned} {\cal Z}^{\mu}(x^{i},s) &= & \bar{{\cal X}}^{\mu}+s\pounds_{\zeta} \bar{{\cal X}}^{\mu} = \bar{\cal X}^{\mu} + s [\zeta,\bar{\cal X}]^{\mu} \label{eq:Defor1}\end{aligned}$$ The presence of a component of $\zeta$ tangent to $V_{n}$ is a cause for concern because it induces coordinate gauges.
That is, a perturbation could be altered by a mere coordinate transformation. In general relativity, this problem is aggravated by the diffeomorphism invariance of the theory. There, the traditional solutions consist in imposing specific conditions on the metric, tetrads or even on the sources [@Chandra; @Bardeen]. Another solution consists in choosing a hypersurface orthogonal perturbation (or, equivalently, using the ADM language, in eliminating the shift function), with the obvious limitations resulting from the use of special coordinates [@Hojman]. In the case of brane-worlds, the extra dimensions do not share the same diffeomorphism invariance of $V_{n}$. Consequently, the limitations imposed by the choice of a hypersurface orthogonal vector do not apply. An additional simplification is obtained by taking the norm of this vector to be $\pm 1$. This is equivalent to normalizing the lapse function to one in the ADM case, and using the extra coordinates $s^{A}$ to play the role of the “lapses”. With these precautions, the perturbation of the embedding map along an orthogonal direction $\bar{\eta}_{A}$, for some fixed value of $A$, gives $$\begin{aligned} {\cal Z}^{\mu}_{,i}(x,s^{A}) =\bar{{\cal X}}^{\mu}_{,i}(x) +s^{A}\bar{\eta}^{\mu}_{A,i}(x). \label{eq:Zi}\end{aligned}$$ On the other hand, since the $\bar{\eta}_{A}(x^{i})$ are independent vectors, depending only on $x^{i}$, they remain unperturbed: $$\begin{aligned} \eta^{\mu}_{A}(x^{i}) = \bar{\eta}_{A}^{\mu} +s^{B}[\bar{\eta}_{B},\bar{\eta}_{A}]^{\mu}=\bar{\eta}^{\mu}_{A} \label{eq:eta}\end{aligned}$$ The metric of the perturbed manifold may be written in generic coordinates as $$g_{ij} =\bar{g}_{ij} + f_{ij}(x^{i},s^{A}) \label{eq:metric}$$ Taking $s^{A}$ small as compared to one, the Taylor expansion of $f_{ij}$ in terms of $s^{A}$ gives the four-dimensional gravitational field in the vicinity of the brane-world.
For a specified direction $\eta_{A}$, the linear perturbation assumes the form $$g_{ij} =\bar{g}_{ij} +s^{A}\gamma_{ijA}(x^{i})$$ Applying the de Donder gauge condition to the linearized Einstein’s equations, we obtain the homogeneous gravitational wave equations with respect to the extra dimension $\eta_{A}$. For a vacuum space-time the resulting wave equation reads $$\Box^{ij}{}_{k\ell}\Psi_{ij}^{A}(x,s) =0 \label{eq:deRahm}$$ where $\Psi_{ij}^{A} =\gamma_{ijA}- \frac{1}{2}\gamma_{A} \bar{g}_{ij}$, $\gamma_{A}= \bar{g}^{mn}\gamma_{mnA}$, and where we have denoted the generalized d’Alembertian (de Rham) wave operator by $$\Box^{ij}{}_{k\ell}=\bar{g}^{ij}\nabla_{k}\nabla_{\ell} +2\bar{R}^{i\;\; \;j}_{\, k\ell} + 2\bar{R}^{i\;}_{ (k}\delta^{\; j}_{\ell)}$$ We interpret the solutions of [(\[eq:deRahm\])]{} as describing gravitational waves over the curved background $\bar{V}_{n}$, in response to the quantum fluctuations of the brane-world. As such, these gravitational waves belong to the so-called high frequency limit, with a wavelength which is small as compared with the characteristic length of the background geometry [@Brill; @Isaacson; @Zakharov]. To define this property we may use the Gaussian reference frame based on the perturbed submanifold and the normal $\eta_{A}$.
The embedding equations for the perturbed geometry are $${\cal Z}^{\mu}_{,i}{\cal Z}^{\nu}_{,j}{\cal G}_{\mu\nu} =g_{ij},\; {\cal Z}^{\mu}_{,i}\eta^{\nu}_{A}{\cal G}_{\mu\nu}=g_{iA},\; {\eta}^{\mu}_{A}{\eta}^{\nu}_{B}{\cal G}_{\mu\nu}=g_{AB} \label{eq:multi}$$ where now we have the mixed metric components $g_{iA}=s^{M}A_{iMA}$, with $$A_{iAB} =\eta^{\mu}_{B;i}\eta^{\nu}_{A}{\cal G}_{\mu\nu}= \bar{\eta}^{\mu}_{B;i}\bar{\eta}^{\nu}_{A}{\cal G}_{\mu\nu} =\bar{A}_{iAB} \label{eq:AiAB}$$ The extrinsic curvatures are $$\kappa_{ijA}=-{\cal Z}^{\mu}_{,i}{\eta}^{\nu}_{A;j}{\cal G}_{\mu\nu} \label{eq:KijA}$$ Replacing [(\[eq:Zi\])]{} in [(\[eq:multi\])]{}, we obtain the perturbation of the metric $$\begin{aligned} g_{ij} = \bar{g}_{ij} -2s^{A}\bar{\kappa}_{ijA} &+& s^{A}s^{B}(\bar{g}^{mn}\bar{\kappa}_{imA}\bar{\kappa}_{jnB} \nonumber\\ &+& g^{MN} A_{iMA}A_{jNB}) \label{eq:pertu}\end{aligned}$$ and the perturbed extrinsic curvature in the same Gaussian frame $$\kappa_{ijA} =\bar{\kappa}_{ijA} - s^{B}(\bar{g}^{mn}\bar{\kappa}_{miA}\bar{\kappa}_{jnB} +g^{MN}{A}_{iMA}A_{jNB})$$ Equations [(\[eq:AiAB\])]{}, [(\[eq:KijA\])]{} and [(\[eq:pertu\])]{} give the evolution of the three basic geometrical attributes of the perturbed geometry. Comparing [(\[eq:KijA\])]{} with the derivative of [(\[eq:pertu\])]{} we obtain $$\frac{\partial g_{ij}}{\partial s^{A}}=-2\kappa_{ijA} \label{eq:YORKG}$$ which generalizes York’s relation used in the ADM formalism. Similar expressions were applied to geometric perturbations in [@Campbell; @Nash]. The curvature radii of the background $\bar{V}_{n}$ are defined as the solutions of the homogeneous equation $$(\bar{g}_{ij} -s^{A}\kappa_{ijA})dx^{i}= 0, \;\; A \mbox{ fixed} \label{eq:radius1}$$ From $\det (\bar{g}_{ij} -s^{A}\kappa_{ijA})=0$ we obtain $n\times N$ distinct solutions, $s=\rho^{A}_{i}$, one for each principal direction $dx^{i}$ and for each normal $\eta_{A}$.
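The generalized York relation (\[eq:YORKG\]) can be checked in an elementary example: a round sphere of radius $R$ embedded in flat $\mathbb{R}^{3}$ and displaced by $s$ along its outward unit normal, for which $g_{ij}(s)=(R+s)^{2}\hat{g}_{ij}$ and, with the sign convention of (\[eq:KijA\]), $\kappa_{ij}=-(R+s)\hat{g}_{ij}$, where $\hat{g}_{ij}$ is the unit-sphere metric. The sketch below verifies $\partial g_{ij}/\partial s=-2\kappa_{ij}$ by a finite difference; the specific numbers are illustrative only.

```python
import math

def sphere_metric(R, s, theta):
    """Induced metric components (g_theta_theta, g_phi_phi) of a sphere
    of radius R displaced by s along its outward unit normal."""
    r = R + s
    return r * r, r * r * math.sin(theta) ** 2

def sphere_extrinsic(R, s, theta):
    """Extrinsic curvature components with the convention
    kappa_ij = -Z_{,i} . eta_{;j}, i.e. kappa_ij = -(R+s) ghat_ij."""
    r = R + s
    return -r, -r * math.sin(theta) ** 2

R, s, theta, h = 1.0, 0.1, 0.7, 1e-6

# central finite-difference derivative of the metric along s
g_p = sphere_metric(R, s + h, theta)
g_m = sphere_metric(R, s - h, theta)
dg = [(p - m) / (2 * h) for p, m in zip(g_p, g_m)]

k = sphere_extrinsic(R, s, theta)
# York relation: dg_ij/ds = -2 kappa_ij
print(dg, [-2 * x for x in k])
```

Consistently with the curvature-radius condition, $\det(\bar{g}_{ij}-s\kappa_{ij})=R^{2}+sR=0$ (per principal direction) gives $s=-R$: the perturbed metric degenerates exactly when the displaced sphere reaches its curvature center.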
These are local invariant properties of $\bar{V}_{n}$ and as such they are independent of the chosen Gaussian system [@Eisenhart]. Since [(\[eq:pertu\])]{} can be written as $$g_{ij} =\bar{g}^{mn}(\bar{g}_{im}-s^{A}\bar{\kappa}_{imA})(\bar{g}_{jn}-s^{B}\bar{\kappa}_{jnB}) + g^{MN}A_{iMA}A_{jNB}$$ it follows that the components $$\tilde{g}_{ij} =\bar{g}^{mn}(\bar{g}_{im}-s^{A}\bar{\kappa}_{imA})(\bar{g}_{jn}-s^{B}\bar{\kappa}_{jnB}) \label{eq:tildeg}$$ become singular at the curvature centers. Therefore, the fluctuations of the brane-world which are compatible with the regularity of the embedding should not reach these points. Since the smaller solutions $\rho^{A}_{i}$ contribute more significantly to the overall curvature of $\bar{V}_{n}$, the characteristic length of the background geometry suitable for comparison with the wavelength is $$\frac{1}{\bar{\rho}}= \sqrt{\bar{g}^{ij}g_{AB}\frac{1}{\bar{\rho}^{i}_{A}}\frac{1}{\bar{\rho}^{j}_{B}}} \label{eq:RHO}$$ which represents a classical limitation on the perturbations.
The relevance of the above equations to the stability of brane-worlds has been pointed out in some particular situations, using conformal flatness properties [@Tanaka; @Maeda]. In this section we show that the implications of [(\[eq:GCR\])]{} for the dynamics of brane-worlds are quite general, independently of models or of any additional assumption on the geometries of $V_{D}$ and $V_{n}$. For this purpose, consider the expression $\xi^{\mu\nu}=g^{ij}{\cal Z}^{\mu}_{,i}{\cal Z}^{\nu}_{,j}$. From [(\[eq:multi\])]{} it follows that $$\xi^{\mu\nu}{\cal G}_{\mu\nu}= n\;\; \mbox{and}\;\; \xi^{\mu\nu}\eta^{\alpha}_{A}{\cal G}_{\mu\alpha} =0$$ so that $\xi^{\mu\nu}$ cannot be proportional to ${\cal G}^{\mu\nu}$. Writing $\xi^{\mu\nu}={{\cal G}}^{\mu\nu} +\zeta^{\mu\nu}$, we find that $\zeta^{\mu\nu}$ must satisfy $${\cal G}_{\mu\nu}\zeta^{\mu\nu}=-N\;\; \mbox{ and}\;\;\, \zeta_{\mu\nu}\eta^{\mu}_{A}\eta^{\nu}_{B}=-g_{AB}$$ The solution of these algebraic equations compatible with [(\[eq:multi\])]{} is $\zeta^{\mu\nu} =-g^{AB}\eta^{\mu}_{A}\eta^{\nu}_{B}$, so that $$g^{ij}{\cal Z}^{\mu}_{,i}{\cal Z}^{\nu}_{,j} ={\cal G}^{\mu\nu} -g^{AB}\eta^{\mu}_{A}\eta^{\nu}_{B} \label{eq:INV1}$$ Applying this result to the contractions of the first equation of [(\[eq:GCR\])]{}, we obtain the Ricci scalar for the perturbed geometry $$\begin{aligned} R &=& (\kappa^{2} -h^{2}) +{\cal R} -2g^{MN}{\cal R}_{\mu\nu} \eta^{\mu}_{M}\eta^{\nu}_{N} \label{eq:RICCI}\\ & - & g^{AB}g^{MN}{\cal R}_{\mu\nu\rho\sigma}\eta^{\mu}_{A}\eta^{\sigma}_{B}\eta^{\nu}_{M} \eta^{\rho}_{N} \nonumber \end{aligned}$$ where we have denoted $\kappa^{2}=\kappa_{ijA}\kappa^{ijA}$ and the mean curvatures $h_{A}=g^{ij}\kappa_{ijA}$ with norm $h^{2} =g^{AB}h_{A}h_{B}$.
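That $\zeta^{\mu\nu}=-g^{AB}\eta^{\mu}_{A}\eta^{\nu}_{B}$ indeed satisfies both algebraic conditions (and hence yields (\[eq:INV1\])) can be checked directly from the orthonormality relations in (\[eq:multi\]):
$${\cal G}_{\mu\nu}\zeta^{\mu\nu}
=-g^{AB}\,\eta^{\mu}_{A}\eta^{\nu}_{B}{\cal G}_{\mu\nu}
=-g^{AB}g_{AB}=-N,
\qquad
\zeta_{\mu\nu}\eta^{\mu}_{A}\eta^{\nu}_{B}
=-g^{MN}g_{MA}g_{NB}=-g_{AB}.$$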
Using the Gaussian frame we see that the last term vanishes and that $$g^{AB}{\cal R}_{\mu\nu}\eta^{\mu}_{A}\eta^{\nu}_{B} =-g^{AB}\frac{\partial h_{A}}{\partial s^{B}} +\kappa^{2}$$ Therefore, [(\[eq:RICCI\])]{} reduces to $$R ={\cal R} - (\kappa^{2} + h^{2}) -2 g^{AB}\frac{\partial h_{A}}{\partial s^{B}}$$ To obtain the canonical structure associated with the perturbations we may write the Einstein-Hilbert Lagrangian for $V_{D}$, after discarding the derivative terms ${\partial h_{A}}/{\partial s^{B}}$ as surface terms[^3]: $${\cal L}(g, g_{,i}, g_{,A})={\cal R}\sqrt{{\cal G}}= \left[ R + (\kappa^{2} + h^{2})\right] \sqrt{{\cal G}}\label{eq:EH}$$ The components of the momentum canonically conjugated to ${\cal G}_{\alpha\beta}$ relative to the normal direction $\eta_{A} $ are $$p^{\alpha\beta}_{( A)} =\frac{\partial {\cal L}}{ \partial \left( \frac{ \partial {\cal G}_{\alpha\beta} }{\partial s^{A}} \right) }$$ In particular, using [(\[eq:YORKG\])]{} we obtain the tangent components $$p^{ij}_{(A)}=-(\kappa^{ij}_{A} +h_{A}g^{ij})\sqrt{{\cal G}} \label{eq:PijA}$$ which describe the momentum of the metric geometry of $V_{n}$ when propagated along the extra dimensions. On the other hand, the perturbation does not prescribe the evolution of ${\cal G}_{iA}$ and ${\cal G}_{AB}$. The corresponding momenta are set as constraints: $$\begin{aligned} p^{iA}_{(B)} &= & -2\frac{\partial{\cal R}_{\alpha\beta}\eta^{\alpha}\eta^{\beta} }{\partial \frac{\partial{\cal G}_{iA}}{\partial s^{B} }} \sqrt{{\cal G}}=0 \label{eq:DEFCON1},\\ p^{AB}_{(C)} & = & -2 \frac{\partial {\cal R}_{\alpha\beta}\eta^{\alpha}_{A}\eta^{\beta}_{B}} {\partial \frac{\partial {\cal G}_{AB}}{\partial s^{C}}}\sqrt{{\cal G}}=0 \label{eq:DEFCON2}\end{aligned}$$ The constraint [(\[eq:DEFCON1\])]{} corresponds to our previous choice of zero shift function, while [(\[eq:DEFCON2\])]{} corresponds to the normalization of the lapse.
The Hamiltonian for each single direction $\eta^{A}$ is defined by the Legendre transformation $$\begin{aligned} {\cal H}_{A}(g,p) &=& p^{ij}_{(A)}g_{ij,A}-{\cal L} =\nonumber \\ &-& R\sqrt{{\cal G}} -\frac{1}{{\cal G}}\left(\frac{p_{A}^{2}}{n+1} - p_{ij (A)}p^{ij}_{(A)} \right) \label{eq:HG}\end{aligned}$$ leading to Hamilton’s equations, $$\begin{aligned} \frac{d g_{ij}}{d s^{A}} & =& \frac{\delta {\cal H}_{A}}{\delta p^{ij (A)}}= \frac{-2}{\sqrt{\cal G}} \left(\frac{g_{ij}p_{A}}{n+1}-p_{ij(A)}\right),\vspace{3mm}\label{eq:DOTG}\\ \frac{d p^{ij}_{(A)} }{ d s^{A} } & = & -\frac{\delta {\cal H}_{A}}{\delta g_{ij}}= G_{ij}\sqrt{g\varepsilon} + \frac{1}{\sqrt{g\varepsilon} } \left[ \frac{2p_{A}p_{ij(A)}}{n+1} \right. \nonumber\\ &+& \left. \frac{1}{2}\left( \frac{p_{A}^{2}}{n+1} +p_{ij(A)}p^{ij}_{(A)} \right) g_{ij}\right] \label{eq:DOTP}\end{aligned}$$ The first of these equations coincides with [(\[eq:YORKG\])]{} expressed in terms of $p_{ij(A)}$. The second equation gives the evolution of the extrinsic curvature of the perturbation in terms of the momentum. Consequently, the stability conditions for the perturbations, represented by [(\[eq:GCR\])]{}, are consistent with the Einstein-Hilbert dynamics for $V_{D}$. Hypersurface Brane-worlds ========================= When $D= n+1$, the brane-worlds are hypersurfaces of $V_{D}$. All expressions can be derived from the general case by setting $A,B\cdots =n+1$ and $g_{AB}=g_{n+1\,n+1}=\varepsilon =\pm1$. Since in this case the “twisting vector” $A_{iAB}$ vanishes, the integrability conditions no longer include Ricci’s equation. The remaining Gauss-Codazzi equations may be applied to derive the dynamics of the hypersurface for a general $V_{(n+1)}$.
Expression [(\[eq:INV1\])]{} in this case becomes $$g^{ij}{\cal Z}^{\mu}_{,i}{\cal Z}^{\nu}_{,j} ={\cal G}^{\mu\nu} -\frac{1}{\varepsilon}\eta^{\mu}\eta^{\nu} \label{eq:INV}$$ As before, replacing this in Gauss’ equation and removing the total derivative terms, the Lagrangian [(\[eq:EH\])]{} reduces to $${\cal L}=\left[ {R} + \frac{1}{\varepsilon}(\kappa^{2}+h^{2})\right] \sqrt{{\cal G}}\label{eq:L}$$ where now we have denoted $h=g^{ij}\kappa_{ij}$ and $\kappa^{2}=\kappa^{ij}\kappa_{ij}$. The momentum canonically conjugated to the metric ${\cal G}_{\alpha\beta}$, with respect to the perturbation parameter $s$, is (here the dot means differentiation with respect to $s$) $p^{\alpha\beta}={\partial{\cal L}}/{\partial (\dot{\cal G}_{\alpha\beta}) } $, with components $$\begin{aligned} p^{ij}& =& \frac{-1}{\varepsilon}( \kappa^{ij}+hg^{ij} )\sqrt{{\cal G}} \label{eq:piij}\\ p^{i\, n+1} & = & -2\frac{\partial{\cal R}_{\alpha\beta}\eta^{\alpha}\eta^{\beta} }{\partial \dot{{\cal G}}_{i\,n+1}}\sqrt{{\cal G}}=0 \label{eq:DEFCON1},\\ p^{n+1\,n+1} & = & -2\frac{\partial {\cal R}_{\alpha\beta}\eta^{\alpha}\eta^{\beta} } {\partial {\dot{\cal G}}_{n+1\,n+1}}\sqrt{{\cal G}}=0 \label{eq:DEFCON2}\end{aligned}$$ The Hamiltonian corresponding to the one parameter perturbation is $$\begin{aligned} {\cal H}=p^{\alpha\beta}\dot{\cal G}_{\alpha\beta}-{\cal L} &=&\nonumber\\ -R\sqrt{{\cal G}} &-&\frac{\varepsilon}{{\cal G}}\left( \frac{p^{2}}{n+1} -p_{ij}p^{ij} \right) \sqrt{{\cal G}} \label{eq:H}\end{aligned}$$ where $p={\cal G}_{\alpha\beta}p^{\alpha\beta}$. This describes the same perturbations of an arbitrary metric, with some limitations to be noted: since $A_{iAB}=0$, the perturbed metric in the Gaussian frame becomes simply $$g_{ij}=\tilde{g}_{ij}=\bar{g}^{mn}(\bar{g}_{im}-s\kappa_{im})(\bar{g}_{jn}-s\kappa_{jn})$$ which is singular at the curvature centers of $V_{n+1}$ defined by $\det(\bar{g}_{ij} -s\kappa_{ij})=0$. The solutions of this equation are the curvature radii $\rho_{i}$, one for each principal direction $dx^{i}$.
The characteristic length has more significant contributions from the smaller $\rho_{i}$, in accordance with [(\[eq:RHO\])]{}: $$\frac{1}{\bar{\rho}} = \sum\sqrt{\varepsilon \frac{\bar{g}_{ij}}{\bar\rho_{i}\bar\rho_{j} }}$$ This is again an invariant property of the embedded background geometry. As already noted, the properties of this characteristic length have implications for the perturbations of the brane-world geometry. In particular, a known result states that if $V_{n}$ has more than two finite curvature radii $\rho_{i}$, then the hypersurface becomes indeformable [@Eisenhart]. A particularly interesting example is given by a constant curvature space $V_{(n+1)}$. More specifically, consider the (non-compact) anti-de Sitter space $AdS_{(n+1)}$, with $\varepsilon =-1$: $${\cal R}_{\mu\nu\rho\sigma}=-\Lambda_{*}({\cal G}_{\mu\rho}{\cal G}_{\nu\sigma}-{\cal G}_{\mu\sigma}{\cal G}_{\nu\rho}) \label{eq:ADS}$$ We obtain ${\cal R}_{\mu\nu}= n \Lambda_{*}{\cal G}_{\mu\nu}$, so that the constraints [(\[eq:DEFCON1\])]{}, [(\[eq:DEFCON2\])]{} become identities and $\Lambda_{*}$ can be interpreted as a bulk cosmological constant. In this particular case, the integrability conditions are considerably simpler: $$\begin{aligned} R_{ijkl}& =& \frac{2}{\varepsilon} \kappa_{i[k}\kappa_{l]j} - \Lambda_{*}(g_{ik}g_{jl}-g_{il}g_{jk})\\ \kappa_{i[j;k]}& =& 0\end{aligned}$$ where we notice that the Riemann tensor of $V_{(n+1)}$ is entirely projected into the $n$-dimensional hypersurface. Therefore, it is more natural to derive Einstein’s equations for $V_{n}$. Using [(\[eq:INV\])]{}, we obtain $$G_{ij}= \frac{1}{\varepsilon} t_{ij}(\kappa) +\Lambda_{*}(\frac{n}{2}-1)(n-1)g_{ij} =8\pi G T_{ij}\label{eq:EC}$$ where we have denoted $$t_{ij}(\kappa) =\kappa^{m}_{i}\kappa_{mj} -h\kappa_{ij} -\frac{1}{2}(\kappa^{2}-h^{2})g_{ij}$$ and $T_{ij}$ denotes the energy-momentum tensor of the four-dimensional sources.
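As a consistency check of the constant-curvature case, contracting (\[eq:ADS\]) once shows that the bulk Ricci tensor is proportional to the bulk metric, which is what makes the momentum constraints identities; e.g., with the contraction convention ${\cal R}_{\nu\sigma}\equiv{\cal G}^{\mu\rho}{\cal R}_{\mu\nu\rho\sigma}$,
$${\cal R}_{\nu\sigma}
=-\Lambda_{*}\left({\cal G}^{\mu\rho}{\cal G}_{\mu\rho}\,{\cal G}_{\nu\sigma}
-{\cal G}^{\mu\rho}{\cal G}_{\mu\sigma}{\cal G}_{\nu\rho}\right)
=-\Lambda_{*}\big[(n+1)-1\big]{\cal G}_{\nu\sigma}
=-n\,\Lambda_{*}\,{\cal G}_{\nu\sigma},$$
the opposite index-ordering convention for the Ricci tensor producing the sign $+n\Lambda_{*}$ quoted in the text. Either way, ${\cal R}_{\mu\nu}\propto{\cal G}_{\mu\nu}$, so the constraint equations are automatically satisfied.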
Since the last equality in [(\[eq:EC\])]{} is not a differential equation for $g_{ij}$, the energy-momentum tensor becomes algebraically related to the extrinsic curvature $k_{ij}$, which represents a serious limitation for the brane-world. In this respect, it has been noted that when $\Lambda_{*} <0$, the null energy condition for $T_{ij}$ is not compatible with the embedding [@Manheim:1; @Barcelo]. We will see in section VI that the condition $A_{iAB}=0$ means that the gauge structure does not arise from the integrability conditions. Consequently it has to be imposed on the geometrical structure, requiring additional conditions to ensure the stability of the hypersurface brane-worlds. Quantum states ============== Klein’s compactification of the extra dimension was introduced to make Kaluza’s theory compatible with quantum mechanics. Specifically, the normal modes of the harmonic expansion with respect to the internal parameters (of Planck-length size) were set in correspondence with the quantum modes [@Klein]. This eventually led to a major problem, namely the inability to generate light chiral fermions in the electroweak sector of the theory. More specifically, the strong curvature of the internal space contributed to large fermion masses, which would necessarily be observable at the electroweak scale. On the other hand, in brane-worlds the extra dimensions are macroscopic, so that the harmonic expansion of physical fields over $V_{D}$ would not lead to the correct quantum phenomenology. In its place, we look at the linear gravitational wave equation [(\[eq:deRahm\])]{} in the de Donder gauge. However, two points must be observed: first, the quantum fluctuations of the geometry should be independent of the classical approximation order in $s^{A}$; second, the classical waves correspond to the quantum fluctuations of the geometry.
This suggests that the definition of the quantum states should precede the linear approximation of the metric. One way to define the quantum states relative to the extra dimensions is to use the canonical formalism associated with the perturbations. In fact, the Poisson bracket structure for each extra dimension $\eta_{A}$, defined by $$[{\cal F},{\cal H}_{A}]=\frac{\delta {\cal F}}{\delta g_{ij}}\frac{\delta {\cal H_{A}}}{\delta p^{ij(A)}}-\frac{\delta {\cal F}}{\delta p^{ij(A)}}\frac{\delta {\cal H_{A}}}{\delta g_{ij}}$$ can be derived consistently with [(\[eq:HG\])]{}, [(\[eq:DOTG\])]{} and [(\[eq:DOTP\])]{}, with the commutator between two independent perturbations given by $[{\cal H}_{A},{\cal H}_{B}]$. Therefore the quantum fluctuations may be defined for each independent extra dimension $\eta_{A}$, and the superposition principle applied afterwards. As one example, consider Schrödinger’s equation $$-i\hbar \frac{ d \Psi_{ij(A)} }{ d s^{A}} =\hat{\cal H}_{A}\Psi_{ij(A)} \label{eq:SCA}$$ where the operator $\hat{\cal H}_{A}$ is constructed from the perturbation Hamiltonian [(\[eq:HG\])]{}. The probability that the brane-world is in a state $\Psi_{ij(A)}$ is given by the Hilbert norm $$<\Psi_{ij(A)},\Psi_{ij(A)}>=\int \Psi_{ij(A)}^{\dagger}\Psi_{ij(A)}\,dv$$ where the integral extends over a volume in $V_{D}$ with a base on a compact region of the background and a finite extension of the extra coordinates $s^{A}$. Considering two independent directions $\eta_{A}$ and $\eta_{B}$, the transition probability between the corresponding states is given by the integral $<\Psi_{ij\,(A)},\Psi_{k\ell\,(B)}>$. Topological changes are expected to occur in any quantum theory of space-times [@Hawking; @Balachandra; @Dowker; @Sorkin]. Therefore, we cannot use a fixed product topology for $V_{D}$, as in [@Arkani:1]. With multiple evolution parameters we may have more complex topological variations than those expected in single-time theories.
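The evolution generated by the Schrödinger equation above is unitary, so the Hilbert norm is conserved along each $s^{A}$. A toy finite-dimensional sketch, in which a random Hermitian matrix merely stands in for $\hat{\cal H}_{A}$ (neither the matrix nor the state is the paper's actual operator):

```python
import numpy as np

# Random Hermitian placeholder for the perturbation Hamiltonian (hbar = 1).
rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H_A = (M + M.conj().T) / 2                 # Hermitian by construction

psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi /= np.linalg.norm(psi)                 # unit Hilbert norm

# One-step propagator exp(i H_A ds) built from the spectral decomposition;
# it is unitary, so repeated application conserves the norm.
ds = 0.1
evals, evecs = np.linalg.eigh(H_A)
U = (evecs * np.exp(1j * evals * ds)) @ evecs.conj().T

for _ in range(50):
    psi = U @ psi

print(np.linalg.norm(psi))                 # 1.0 up to round-off
```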
For example, if $\eta_{A}$ and $\eta_{B}$ are both space-like, then $<\Psi_{ij\, (A)},\Psi_{k\ell\, (B)}>$ corresponds to a space-like handle. On the other hand, if $\eta_{A}$ and $\eta_{B}$ are both time-like, then the classical limit of the transition probability corresponds to a classical loop involving two internal time parameters, suggesting a multidimensional time machine. Finally, if $\eta_{A}$ and $\eta_{B}$ have different signatures, then the transition probability $<\Psi_{ij\, (A)},\Psi_{k\ell\, (B)}>$ corresponds to a signature change. An example of the last case is given by the Kruskal brane-world regarded as a perturbation of the embedded Schwarzschild space-time, such that the latter becomes geodesically complete. These spaces are both embedded in six-dimensional flat spaces, with signatures $(5,1)$ and $(4,2)$ respectively [@Fronsdal:1]. The Schwarzschild space-time is a subset of the Kruskal space-time, but they do not belong to the same fixed embedding space. However, they may be considered as classical limits of distinct quantum states of the dynamically embedded Kruskal brane-world, with a signature transition at the horizon. As a final remark we add that, due to the extrinsic nature of the parameters $s^{A}$, equation [(\[eq:SCA\])]{} may be understood as a first quantization of the brane-world geometry. However, it does not exclude the quantization of the metric as an effective field theory in four dimensions, taking for example $V_{D}$ as the space of all deformed metrics [@Ashtekar; @DeWitt]. Confinement =========== The confinement hypothesis for gauge fields implies that, regardless of the electromagnetic, weak and strong interactions taking place inside the brane-world, its differentiable structure remains intact. Since those interactions are concomitant with the quantum fluctuations of the geometry, it follows that equations [(\[eq:GCR\])]{} must also be compatible with the confinement of the gauge fields.
The basic field variables in [(\[eq:GCR\])]{} are $g_{ij}$, $\kappa_{ijA}$ and $A_{iAB}$. The first two vary with the perturbation and they are related by [(\[eq:YORKG\])]{}. On the other hand, from [(\[eq:AiAB\])]{} it follows that the $A_{iAB}$ remain confined in the sense that they remain unchanged, independently of the quantum fluctuations of the brane-world. The relevant but so far little explored fact is that those components transform as the components of a gauge potential under the group of isometries of the complementary space $B_{N}$. This follows directly from the transformation of the mixed component of the metric tensor under a local infinitesimal coordinate transformation of $B_{N}$ $$s'^{A}= s^{A} +\xi^{A}\;\;\mbox{with}\;\; \xi^{i}=0,\;\; \mbox{and}\;\; \xi^{A} =\Theta^{A}_{M}(x^{i})s^{M}$$ where $\Theta^{A}_{B}$ are the infinitesimal parameters. Denoting generic coordinates in $V_{D}$ by $\{x^{\mu}\} =\{x^{i},s^{A}\}$, it follows that $$g'_{iA} =g_{iA} + g_{i\mu}\xi^{\mu}_{,A} +g_{A\mu}\xi^{\mu}_{,i} +\xi^{\nu}\frac{\partial g_{iA}}{\partial x^{\nu}} +O(\xi^{2})$$ The transformation of $A_{iAB}$ follows from $$A'_{iAB}=\frac{\partial g'_{iA}}{\partial s'^{B}}= \frac{\partial g'_{iA}}{\partial s^{B}} - \xi^{\mu}_{,B}\frac{\partial g'_{iA}}{\partial x^{\mu}}$$ Using $\xi^{A}_{,B}=\Theta^{A}_{B}(x^{i})$, $\xi^{A}_{,i} =\Theta^{A}_{B,i}s^{B}$ and $\xi^{A}_{,BC}=0$ we obtain $$A'_{iAB} = A_{iAB} -2g^{MN}A_{iM[A}\Theta_{B]N} +g_{MB} \Theta^{M}_{A,i}$$ which shows that indeed $A_{iAB}$ transforms as a gauge potential. Once the group of isometries of $B_{N}$ has been characterized, the above transformation may also be written in terms of its structure constants [@Maia:3; @Maia:4].
To conclude the characterization of $A_{iAB}$, we notice that the metric of $V_{D}$ written in the Gaussian frame has components $$\begin{aligned} g_{ij} &= &{\cal Z}^{\mu}_{,i}{\cal Z}^{\nu}_{,j}{\cal G}_{\mu\nu}= \bar{g}_{ij} -2s^{A}\bar{k}_{ijA}\nonumber\\ &+& s^{A}s^{B}\left ( \bar{g}^{mn}\bar{k}_{imA}\bar{k}_{jnB} +\bar{g}^{MN}A_{iMA}A_{jNB}\right )\nonumber\\ g_{iA} & = & {\cal Z}^{\mu}_{,i}\eta^{\nu}_{A}{\cal G}_{\mu\nu} = s^{M}A_{iMA}\nonumber \\ g_{AB} & =&\eta^{\mu}_{A}\eta^{\nu}_{B}{\cal G}_{\mu\nu} \nonumber \end{aligned}$$ or, in matrix notation, $${\cal G}_{\alpha\beta}= \left( \matrix{ \tilde{g}_{ij} + g^{MN}A_{iM}A_{jN} & A_{iA} \cr A_{jB} & g_{AB}} \right) \label{eq:KK}$$ where $\tilde{g}_{ij}$ is given by [(\[eq:tildeg\])]{} and $$A_{iA} = s^{M}A_{iMA} \label{eq:AiA}$$ The metric [(\[eq:KK\])]{} has the same appearance as the Kaluza-Klein metric ansatz, with the exception that $\tilde{g}_{ij}$ is not the background metric but rather an untwisted perturbation of it, given by [(\[eq:tildeg\])]{}. The Einstein-Hilbert Lagrangian derived directly from [(\[eq:KK\])]{} is $${\cal L } ={\cal R}\sqrt{\cal G} ={R}\sqrt{\tilde{g}\epsilon} +\frac{1}{4}tr {F}^{2}\sqrt{\tilde{g}\epsilon} \label{eq:EYM}$$ where we have denoted $\epsilon= \det{(g_{AB})}$, ${F}^{2}={F}_{ij}{F}^{ij}$ and ${F}_{ij}=[D_{i},D_{j}]$, with $D_{i}=\partial_{i} + A_{i}$. Considering the gauge group as the group of isometries of $B_{N}$, the gauge connection $A_{i}$ can be expressed in the Killing basis $\{K^{AB}\}$ of the corresponding Lie algebra as $${A}_{i}=A_{iAB}K^{AB}$$ From [(\[eq:EYM\])]{} we see that the bulk gravitational field described by ${\cal G}_{\alpha\beta}$ decomposes into a four-dimensional gravitational interaction represented by $\tilde{g}_{ij}$, plus the gauge interactions represented by $A_{iAB}$, much in the sense of Kaluza-Klein theory.
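The block structure of [(\[eq:KK\])]{} can be assembled directly. The sketch below uses placeholder entries for $\tilde{g}_{ij}$, $g_{AB}$ and $A_{iA}$ (none of the numbers is fixed by the text) and only checks the shape and symmetry of the resulting bulk metric:

```python
import numpy as np

# Assumed dimensions and illustrative values for a Kaluza-Klein-like block metric.
n, N = 4, 2
rng = np.random.default_rng(1)

g_tilde = np.eye(n) + 0.1 * np.diag(rng.random(n))   # untwisted perturbed metric
g_AB    = np.eye(N)                                  # internal metric g_{AB}
A       = 0.05 * rng.standard_normal((n, N))         # A_{iA} = s^M A_{iMA}

# G = [[ g_tilde + A g^{MN} A^T , A    ],
#      [ A^T                    , g_AB ]]
G = np.block([
    [g_tilde + A @ np.linalg.inv(g_AB) @ A.T, A],
    [A.T, g_AB],
])

print(G.shape)                   # (6, 6)
print(np.allclose(G, G.T))       # True: the bulk metric is symmetric
```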
Conclusion ========== We have taken an approach to brane-world theory that is independent of the two known models proposed in [@Arkani:1; @Randall]. Starting with the classic perturbation analysis of submanifolds, the geometric stability conditions [(\[eq:GCR\])]{} for the perturbation are assumed to hold throughout. The conclusion is that those equations in fact determine the brane-world dynamics in the general case. From the fact that the internal parameters do not share the same diffeomorphism invariance as the gauge fields, we have described a non-constrained canonical structure which reproduces the classical perturbation equations. The perturbation Hamiltonians, one for each extra dimension, are compatible with a canonical quantization relative to the internal parameters $s^{A}$. It was also noted that the quantum fluctuations in general are not compatible with the adoption of a fixed topology for $V_{D}$. We find it significant for brane-worlds that when $N\ge 2$ the twisting vector $A_{iAB}$ has the structure of a confined gauge field built into [(\[eq:GCR\])]{}. It is normally assumed that the brane-world is four-dimensional. However, this assumption must be consistent with the properties of the embedding and the gauge group $G$. Since the host space $V_{D}$ may be thought of as populated by objects ranging from point particles to $(n-1)$-dimensional objects, the end product of their dynamics is the $n$-dimensional brane-world regarded as a dynamically embedded space-time. As such, the differential equations [(\[eq:GCR\])]{} assume the role of structure-preserving equations for brane-worlds embedded in a $V_{D}$ with $D\le n(n+3)/2$. Setting $D=n+N$, it follows that $n^{2} +n -2N\ge 0$. For the standard model gauge group $SU(3)\times SU(2)\times U(1)$ acting on a seven-dimensional projective space, we find $n \ge 3.27$, meaning that the standard model just fits in four-dimensional stable brane-worlds.
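The bound $n^{2}+n-2N\ge 0$ gives a minimal brane dimension $n_{\min}(N)=(-1+\sqrt{1+8N})/2$. A quick check reproduces the two numbers quoted in the text, $N=7$ for the standard-model case and $N=10$ for the $SO(10)$ case:

```python
import math

def n_min(N):
    """Positive root of n**2 + n - 2*N = 0: the smallest admissible
    brane dimension for N extra dimensions."""
    return (-1 + math.sqrt(1 + 8 * N)) / 2

print(round(n_min(7), 2))   # 3.27 -- the standard-model case quoted in the text
print(n_min(10))            # 4.0  -- the SO(10) GUT case
```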
On the other hand, using the $SO(10)$ GUT we obtain exactly $n=4$, suggesting that four-dimensional brane-worlds have a natural existence in a particular fourteen-dimensional theory with signature $(11,3)$ and with $SO(10)$ as the gauge group. Our final remark concerns the size of the extra dimensions and the consequent modification of Newton’s law at small distances. From [(\[eq:tildeg\])]{} it follows that the perturbation becomes singular at the curvature centers of the background characterized by $\rho^{A}_{i}$, so that $s^{A}$ must remain in the half-open intervals $[0,\pm\rho^{A}_{i})$. This means that as long as the curvature radii $\rho^{A}_{i}$ remain finite, the volume of the complementary space available for the graviton probes is finite. In this case, noting that the right hand side of [(\[eq:EH\])]{} depends only on $x^{i}$, we may evaluate the action integral in that region $$\int{\cal L}\sqrt{\cal G}d^{n+N}v= \int\left(\int[R -(k^{2} +h^{2})]\sqrt{\tilde{g}}\sqrt{\epsilon}\,d^{n}v\right)d^{N}v$$ which leads to an argument similar to that in [@Arkani:1], but without assuming the product topology: $$\frac{1}{M_{\ast}^{2+N}}= \frac{1}{M_{pl}^{2}}(1 -K) {\cal V}$$ where we have denoted $K= \int(k^{2} +h^{2})\sqrt{\tilde{g}\epsilon}\,d^{n}v$, $ {\cal V}$ is the finite volume of the region in the complementary space and $M_{\ast}$ is the Planck mass in the bulk. The volume ${\cal V}$ depends on the curvature of $\bar{V}_{n}$ and is determined by the different values of $\rho^{A}_{i}$ as given by [(\[eq:RHO\])]{}. Therefore, defining an internal spherical space with radius $\bar{\rho}$, we have ${\cal V}\approx \bar{\rho}^{N}$, so that $$\bar{\rho}^{N} \approx \frac{M_{pl}^{2}}{M_{\ast}^{2+N}}\frac{1}{(1 -K)},\;\; K\neq 1$$ This gives the same estimates as in [@Arkani:1] when $K$ is small compared to one. The case $N=1$ has been ruled out in our analysis because of the limitations imposed on the brane-world fluctuations.
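For orientation, the estimate $\bar{\rho}^{N}\approx M_{pl}^{2}/M_{\ast}^{2+N}$ (taking $K\approx 0$) can be evaluated numerically. The inputs below are illustrative ADD-style assumptions, not values fixed by the text:

```python
# Order-of-magnitude estimate of rho_bar, assuming K ~ 0 and natural units.
M_pl   = 1.22e19     # GeV, four-dimensional Planck mass
M_star = 1.0e3       # GeV, assumed bulk scale of order 1 TeV
N      = 2           # assumed number of extra dimensions
HBARC  = 1.97e-14    # cm * GeV, converts 1/GeV to cm

rho_bar = (M_pl**2 / M_star**(2 + N)) ** (1.0 / N)   # in 1/GeV
rho_cm  = rho_bar * HBARC
print(rho_cm)        # ~0.24 cm: a millimeter-scale radius for these inputs
```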
The actual distances probed by gravitons depend on further analysis of the quantum states. and Vanda Silveira. [55]{} N. Arkani-Hamed et al., Phys. Lett. , 263 (1998), Phys. Rev. Lett. , 586, (2000) L. Randall & R. Sundrum, Phys. Rev. Lett. , 4690 (1998), Phys. Rev. Lett. , 3370 (1999) R. Newman, [*Current searches for non-Newtonian gravity at Sub-mm Distance Scales*]{}. Matters of Gravity 15, p18, (2000), (hep-th/0002021). K. Akama, [*Pregeometry*]{}, in Gauge Theory and Gravitation, Lecture Notes in Physics 176 (Springer Verlag (1983)), hep-th/0001113 V. A. Rubakov & M. E. Shaposhnikov, Phys. Lett. [, 136, (1983)]{} M. Visser, Phys. Lett. [ , 22 (1985)]{}, hep-th/9910093 I. Antoniadis, Phys. Lett. , 317 (1990) A. Friedman, Rev. Mod. Phys. , 201, (1965) (See also all papers of the special embedding seminar in the same issue). T. Regge & C. Teitelboim, [*Relativity a la String*]{} Proc. 1st Marcel Grossmann Meeting, Trieste [ (1975)]{}; S. Deser et al., Phys. Rev. , 3301, (1976) M. D. Maia & W. Mecklemburg, J. Math. Phys. , 3047 (1984) M. D. Maia, Phys. Rev. D , 262 and 268 (1985) A. Davidson, Mod. Phys. Lett. , 869 (1985) M. Pavsic, Phys. Lett. , 1, (1986) G. W. Gibbons & D. L. Wiltshire, Nucl. Phys. , 717, (1987) V. Tapia, Class. Quant. Grav. , L49, (1990) M. Pavsic, Class. Quant. Grav. , 221 (1995) M. Pavsic & V. Tapia, gr-qc/0010045 Z. Kakushadze & S.-H. Henry Tye, Nucl. Phys. , 180 (1999), A. Pérez-Lorenzana, hep-th/0008333 B. Carter, hep-th/0012036 S. Giddings, [*Branification: an alternative to compactification*]{}. Matters of Gravity 15, p15, (2000), (hep-th/0002021) K. Akama, [*Induced Field Theory on the Brane-World*]{} hep-th/0007001, hep-th/0008133 E. Cartan, Ann. Soc. Pol. Mat. , 1, (1927) M. Janet, Ann. Soc. Pol. Mat. , (1928) R. Greene, Memoirs Am. Math. Soc. [ , (1970)]{}. J. E. Campbell, [*A Course of Differential Geometry*]{}. Clarendon Press, Oxford (1926) J. Nash, Annals of Math. , 20 (1956) R. Geroch, J. Math. Phys. , 918 (1971) J.
M. Stewart & M. Walker, Proc. Roy. Soc. London , 49 (1971) R. H. Gowdy, J. Math. Phys. , 988 (1981) S. Chandrasekhar, [*The Mathematical Theory of Black Holes*]{}. Oxford U.P. (1980) J. M. Bardeen, Phys. Rev. , 1882, (1980) S. Hojman, K. Kuchař & C. Teitelboim, Ann. of Phys. [ , 88, [ (1976)]{}]{}. D. R. Brill & J. B. Hartle, Phys. Rev. , 271, (1964) R. A. Isaacson, Phys. Rev. , 1263 (1968) V. D. Zakharov, [*Gravitational Waves in Einstein’s Theory*]{}, Halsted Press/Wiley, New York (1973) L. P. Eisenhart, [*Riemannian Geometry*]{}, Princeton U. P., Princeton, N.J. (1966) T. Tanaka & J. Garriga, Phys. Rev. Lett. , 2778, (2000) T. Shiromizu, K. Maeda & M. Sasaki, Phys. Rev. D 62, 024012 (2000) P. Manheim, hep-th/0005226 C. Barcelo & M. Visser, hep-th/0004022 O. Klein, Nature, , 516, (1926) S. Hawking, Phys. Rev. , 904 (1988) A. P. Balachandran et al., Nucl. Phys. , 299, (1995) F. Dowker & R. D. Skirme, Class. Quant. Grav. , 1153, (1998) J. L. Friedman & R. D. Sorkin, Phys. Rev. Lett. , 1100 (1990) C. Fronsdal, Phys. Rev. , 778, (1959) A. Ashtekar, [*Quantum Mechanics of Geometry*]{}, Preprint, Penn. State University, CGPG-98-12-5, gr-qc/9901023 B. S. DeWitt & C. Molina-París, Mod. Phys. Lett. , 2475 (1998) M. D. Maia, Class. Quant. Grav. , 173 (1989) E. M. Monte & M. D. Maia, J. Math. Phys. , 1972 (1996) [^1]: The use of a generic signature may cause the emergence of ghosts. However, in view of prospective topological changes, it would be unwise to specify the signature at this stage. The expressions can be adjusted to specific signatures by simple arithmetic. [^2]: All Greek indices run from 1 to $D$. Lower-case Latin indices $i,j,k...$ run from 1 to $n$. An overbar denotes an object of the background space-time. The covariant derivative with respect to the metric of the higher dimensional manifold is denoted by a semicolon and $\xi^{\mu}_{;i}=\xi^{\mu}_{;\gamma}\bar{\cal X}^{\gamma}_{,i}$ denotes its projection over $V_{n}$.
The curvatures of the higher dimensional space are distinguished by a calligraphic ${\cal R}$. [^3]: To allow for different signatures, we have denoted ${\cal G}= \det({\cal G}_{\alpha\beta})$ modulo the signature of the extra dimensions.
--- abstract: | [**Abstract**]{} Molecular exciton effects in the neutral and maximally doped C$_{60}$ (C$_{70}$) are considered using a tight binding model with long-range Coulomb interactions and bond disorder. By comparing calculated and observed optical spectra, we conclude that the relevant Coulomb parameters for the doped cases are about half of those of the neutral systems. The broadening of the absorption peaks is well simulated by the bond disorder model. author: - | Kikuo Harigaya and Shuji Abe\ \ Fundamental Physics Section, Electrotechnical Laboratory, Umezono, Tsukuba, Ibaraki 305, Japan title: 'Optical response of C$_{60}$ and C$_{70}$ fullerenes: Exciton and lattice fluctuation effects' --- INTRODUCTION ============ Fullerenes have a hollow cage of carbons [@1]. The cage structure is formed by $\sigma$-bonds, while the $\pi$-electron orbitals extend to the outside of the cage. As the $\pi$-electrons are delocalized on the surface of fullerenes, the optical properties are similar to those of $\pi$-conjugated polymers [@2]. Recently, we have applied the intermediate exciton formalism previously used for polymers [@3], and have investigated the relevant Coulomb parameters in order to understand the dispersions and oscillator strengths of the absorption spectra of the neutral $\soc$ ($\rug$) [@4; @5] and maximally doped systems [@6; @7]. The purpose of this article is to review the main results published elsewhere [@8; @9]. In the formalism [@8], a tight binding model with a long-range Coulomb interaction and bond disorder has been used. The free electron part corresponds to the simple Hückel model with the mean hopping integral $t$; the presence of dimerization is considered for the neutral $\soc$ case, and a constant hopping is assumed for the other cases. The long-range Coulomb interaction is of the Ohno form: $W(r) = 1/\sqrt{(1/U)^2 + (r/(r_0 V))^2}$. Here, $U$ is the onsite Coulomb strength, $V$ is the long-range component, and $r_0$ is the average bond length.
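The Ohno form interpolates between the on-site value $U$ at $r=0$ and a $r_0 V/r$ tail at large distance. A minimal sketch, with $U$, $V$ in units of $t$ and $r$ in units of $r_0$ (a convention assumed here for illustration):

```python
import math

def ohno(r, U, V, r0=1.0):
    """Ohno interaction W(r) = 1 / sqrt((1/U)**2 + (r/(r0*V))**2)."""
    return 1.0 / math.sqrt((1.0 / U) ** 2 + (r / (r0 * V)) ** 2)

U, V = 4.0, 2.0            # the neutral-system parameters of the text, in units of t
print(ohno(0.0, U, V))     # 4.0 : on-site limit W(0) = U
print(ohno(100.0, U, V))   # ~0.02 : long-range tail W(r) -> r0*V/r
```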
The bond disorder model, with a Gaussian distribution of the hopping integral with standard deviation $t_{\rm s}$, simulates the lattice fluctuation effects [@10]. This gives rise to broadening of the absorption spectra. The model is solved by the Hartree-Fock approximation and the single excitation configuration interaction method. Optical spectra are obtained by using the dipole approximation. In the next two sections, we present the calculated spectra and compare them with experiments. NEUTRAL C$_{\bf 60}$ AND C$_{\bf 70}$ ===================================== We discuss the calculated optical absorption spectra of $\soc$ and $\rug$. $\soc$ has high $I_h$ symmetry, and $\rug$ has lower $D_{5h}$ symmetry. Figure 1(a) shows the spectrum of $\soc$, and Fig. 1(b) that of $\rug$. The parameters $U=4t$, $V=2t$, $t_{\rm s} = 0.09t$, and $t=1.8$eV are used. These parameters are determined so as to reproduce the experimental spectra as well as possible. Experimental data of molecules in solutions are taken from [@4; @5], and are shown by thin lines. There are three main features in the $\soc$ absorption, found around the energies 3.5eV, 4.7eV, and 5.6eV. In the case of $\rug$, we could say that the several small peaks in the energy interval from 1.7eV to 3.6eV originate from the 3.5eV feature of $\soc$ after splitting. The 4.7eV and 5.6eV features of $\soc$ merge into the large feature present in the energy region above 3.6eV. The optical gap decreases from 3.1eV ($\soc$) to 1.7eV ($\rug$). These changes would be due to the symmetry reduction from the $\soc$ soccer ball to the $\rug$ rugby ball. There is good agreement with experiments on the peak positions and relative oscillator strengths in the $\soc$ case; for $\rug$ the agreement is good overall. There is a systematic deviation of the experimental curves from the theoretical ones at energies higher than 5.0eV due to the overlap of $\sigma$-electron excitations.
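The broadening mechanism can be illustrated on a toy tight-binding ring rather than the actual fullerene cage: Gaussian bond disorder of strength $t_{\rm s}$ shifts each level by an amount of order $t_{\rm s}$, smearing the spectrum. This is only a schematic stand-in for the calculation of the text:

```python
import numpy as np

# Ring with Gaussian-distributed hoppings t_ij = t + N(0, t_s); the ring is a
# toy substitute for the fullerene cage, used only to exhibit O(t_s) level shifts.
rng = np.random.default_rng(42)
n_sites, t, t_s = 60, 1.0, 0.09

def spectrum(hoppings):
    H = np.zeros((n_sites, n_sites))
    for i, tij in enumerate(hoppings):
        j = (i + 1) % n_sites
        H[i, j] = H[j, i] = -tij
    return np.linalg.eigvalsh(H)          # sorted eigenvalues

clean    = spectrum(np.full(n_sites, t))
disorder = spectrum(t + t_s * rng.standard_normal(n_sites))

shift = np.max(np.abs(disorder - clean))
print(shift)                              # O(t_s): disorder smears the levels
```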
MAXIMALLY DOPED C$_{\bf 60}$ AND C$_{\bf 70}$ ============================================= Calculations are performed for $\soc^{6-}$ and $\rug^{6-}$ in order to look at molecular exciton effects in $A_6 \soc$ and $A_6 \rug$ solids ($A =$ alkali metals). Results are shown in Figs. 2(a) and (b) with the experimental absorption [@6] and the optical conductivity data obtained by the EELS studies [@7]. We find that the parameter set $U=2t$, $V=1t$, $t_{\rm s} = 0.20t$, and $t=2.0$eV is relevant to both systems. For $A_6 \soc$, two features around 1.2eV and 2.8eV are reasonably explained by the present calculation. The broadening is simulated well by the bond disorder. The disorder strength $t_{\rm s} = 0.20t$ is about twice as large as that of the neutral $\soc$. The broad feature at energies higher than 3.6eV in the experiments might correspond to the two broad peaks around 4.4eV and 6.0eV of the calculation. At these energies, the agreement is not so good. The same behavior has been seen in the neutral $\soc$ case. Excitations which include $\sigma$-orbitals would mix in this energy region. This effect could be taken into account by using models with both $\pi$- and $\sigma$-electrons. For $A_6 \rug$, shown in Fig. 2(b), we assign the features at 1.2eV, 2.6eV, and 4.6eV of the calculation to the peaks at 1.0eV, 2.7eV, and 4.2eV of the experiment, respectively. Thus, we can explain the presence of the three broad features observed experimentally. DISCUSSION ========== We have mainly looked at Frenkel exciton effects in the neutral and maximally doped $\soc$ (and $\rug$) molecules. The relevant Coulomb parameters for the optical spectra are $U \sim 2V \sim 4t$ for the neutral systems, and $U \sim 2V \sim 2t$ for the $A_6 \soc$ and $A_6 \rug$ solids.
In the Ohno potential, the dielectric screening of Coulomb interactions can be taken into account by assuming $U = U_0/\eps_{\rm s}$ and $V = V_0/\eps_{\rm l}$, where $\eps_{\rm s}$ and $\eps_{\rm l}$ are short- and long-range dielectric constants. The magnitude of the static dielectric constant increases by a factor of about two upon doping: $\eps_1(0) = 4.3$ in the neutral $\soc$ and $\eps_1(0) = 7.1$ in Rb$_6\soc$, and also $\eps_1(0) = 4.0$ in the neutral $\rug$ and $\eps_1(0) = 8.0$ in Rb$_6\rug$ [@7]. This enhancement of the dielectric constants accords reasonably with the decrease of $U$ and $V$ in the present calculations. [99]{} H.W. Kroto, J.E. Fischer and D.E. Cox (eds.), The Fullerenes, Pergamon Press, Oxford, 1993. A.J. Heeger, S. Kivelson, J.R. Schrieffer and W.P. Su, Rev. Mod. Phys. 60 (1988) 781. S. Abe, J. Yu and W.P. Su, Phys. Rev. B 45 (1992) 8264. S.L. Ren, Y. Wang, A.M. Rao, E. McRae, J.M. Holden, T. Hager, K.A. Wang, W.T. Lee, H.F. Ni, J. Selegue and P.C. Eklund, Appl. Phys. Lett. 59 (1991) 2678. J.P. Hare, H.W. Kroto and R. Taylor, Chem. Phys. Lett. 177 (1991) 394. T. Pichler, M. Matus, J. Kürti and H. Kuzmany, Solid State Commun. 81 (1992) 859. E. Sohmen and J. Fink, Phys. Rev. B 47 (1993) 14532. K. Harigaya and S. Abe, Phys. Rev. B 49 (1994) 16746. K. Harigaya, Jpn. J. Appl. Phys. 33 (1994) in press. K. Harigaya, Phys. Rev. B 48 (1993) 2765.
--- author: - Abdelmalek Abdesselam and Jaydeep Chipalkatti title: On the linear combinants of a binary pencil --- Introduction ============ This paper is a thematic sequel to [@AC] and [@JC]. The problem solved here was originally posed in [@JC] (of which a pr[é]{}cis is given below). All of the unexplained notation and terminology used in this paper may be found in [@AC]. The reader is referred to [@Clebsch; @Dolgachev; @GrYo; @Olver] for some foundational material in classical invariant theory and the symbolic method. The basics of the representation theory of $SL_2$ may be found in [@FH Lecture 11] and [@Sturmfels Chapter 4]. The base field ${\Bbbk}$ will be of characteristic zero. Let $S_d$ denote the $(d+1)$-dimensional irreducible representation of the group $SL_2=SL(2,{\Bbbk})$. We identify $S_d$ with the space of (homogeneous) binary $d$-ics in the variables ${\mathbf x}= \{x_1,x_2\}$. Given integers $m,n \ge 0$ and $0 \le q \le \min(m,n)$, there is an $SL_2$-equivariant split surjection (see [@AC §1.5]) $$\pi_q: S_m \otimes S_n {\longrightarrow}S_{m+n-2q}. \label{pi.q}$$ Given binary forms $F \in S_m$ and $G \in S_n$, the image $\pi_q(F \otimes G)$ is classically referred to as the $q$-th transvectant of $F$ and $G$, denoted by $(F,G)_q$. We have an explicit formula $$(F,G)_q = \frac{(m-q)! \, (n-q)!}{m! \, n!} \, \sum\limits_{i=0}^q \, (-1)^i \, \binom{q}{i} \, \frac{\partial^q F}{\partial x_1^{q-i} \, \partial x_2^i} \, \frac{\partial^q G}{\partial x_1^i \, \partial x_2^{q-i}} \, ;$$ however, it is seldom directly useful. For later use, let $$\imath_q: S_{m+n-2q} {\longrightarrow}S_m \otimes S_n \label{i.q}$$ denote the canonical inclusion, so that $\pi_q \circ \imath_q$ is the identity map on $S_{m+n-2q}$. Now let $A,B \in S_d$ denote two linearly independent forms. 
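The explicit transvectant formula above is straightforward to implement symbolically. A minimal sympy sketch, checked on hand-computable cases for binary quadratics:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def transvectant(F, G, m, n, q):
    """q-th transvectant (F, G)_q of a binary m-ic F and n-ic G,
    following the explicit formula in the text."""
    pref = (sp.factorial(m - q) * sp.factorial(n - q)
            / (sp.factorial(m) * sp.factorial(n)))
    return sp.expand(pref * sum(
        (-1)**i * sp.binomial(q, i)
        * sp.diff(F, x1, q - i, x2, i)
        * sp.diff(G, x1, i, x2, q - i)
        for i in range(q + 1)))

# Hand-checkable cases for binary quadratics (m = n = 2):
assert transvectant(x1**2, x2**2, 2, 2, 1) == x1 * x2   # Jacobian, up to scale
assert transvectant(x1**2, x2**2, 2, 2, 2) == 1

# Invariance under a unimodular change of pencil basis, for an odd transvectant:
A, B = x1**2 + x2**2, x1 * x2
assert sp.expand(transvectant(2*A + B, A + B, 2, 2, 1)
                 - transvectant(A, B, 2, 2, 1)) == 0
```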
There is an isomorphism of $SL_2$-representations $$\wedge^2 S_d = \bigoplus\limits_{r=1}^{\lfloor\frac{d+1}{2}\rfloor} \, S_{2d-4r+2}, \label{wedge2.Sd}$$ with projection morphisms $p_r: \wedge^2 S_d {\longrightarrow}S_{2d-4r+2}$. The image $p_r(A \wedge B)$ equals the transvectant $(A,B)_{2r-1}$. For any scalars $\alpha,\beta,\gamma,\delta$, we have an invariance property $$(\alpha \, A + \beta \, B,\gamma \, A + \delta \, B)_{2r-1} = (\alpha \, \delta - \beta \, \gamma) \, (A,B)_{2r-1}.$$ Hence, up to a scalar, the forms ${\mathcal C}_{2r-1} = (A,B)_{2r-1}$ depend only on the subspace $\Pi_{A,B} = \text{Span} \, \{A,B\}$. In classical terminology (see [@GrYo §250]), the $\{{\mathcal C}_{2r-1}\}$ are linear combinants of the pencil $\{A + \lambda \, B\}_{\lambda \in {\mathbf P}^1}$. Decomposition (\[wedge2.Sd\]) implies that the pencil is completely determined by the sequence of forms $${\mathcal C}_1,\; {\mathcal C}_3,\dots, {\mathcal C}_{2 \lfloor \frac{d+1}{2} \rfloor - 1},$$ but rather more can be said. An arbitrary form $F \in S_d$ belongs to $\Pi_{A,B}$ if and only if the Wronskian $${\mathbf W} = \left| \begin{array}{ccc} A_{x_1^2} & A_{x_1x_2} & A_{x_2^2} \\ B_{x_1^2} & B_{x_1x_2} & B_{x_2^2} \\ F_{x_1^2} & F_{x_1x_2} & F_{x_2^2} \end{array} \right| = 0.$$ After some manipulation, this condition can be rewritten as $$({\mathcal C}_1,F)_2 + \frac{d-2}{4d-6} \, F \, {\mathcal C}_3 = 0.$$ It follows that ${\mathcal C}_1,{\mathcal C}_3$ determine $\Pi_{A,B}$, and hence they indirectly determine all the subsequent combinants ${\mathcal C}_5,{\mathcal C}_7,{\mathcal C}_9$ etc. It is natural to enquire whether there exists a concrete formula for ${\mathcal C}_{2r-1}$ in terms of ${\mathcal C}_1,{\mathcal C}_3$. This problem was solved in [@JC §5] for ${\mathcal C}_5$ and ${\mathcal C}_7$ using some ad-hoc calculations; here we will give an inductive solution which applies to all $r \ge 3$. Assume $d=7$.
We have an identity $${\mathcal C}_1 \, {\mathcal C}_5 = - \frac{21}{2} \, ({\mathcal C}_1,{\mathcal C}_1)_4 +\frac{84}{11} \, ({\mathcal C}_1,{\mathcal C}_3)_2 + \frac{735}{484} \, {\mathcal C}_3^2, \label{syzygy.d7.r3}$$ which expresses ${\mathcal C}_5$ in terms of ${\mathcal C}_1,{\mathcal C}_3$. Similarly, the identity $$\begin{aligned} {\mathcal C}_1 \, {\mathcal C}_7 = & -28 \, ({\mathcal C}_1,{\mathcal C}_1)_6 - \frac{210}{11} \, ({\mathcal C}_1,{\mathcal C}_3)_4+ 8 \, ({\mathcal C}_1,{\mathcal C}_5)_2 \\ & + \, \frac{1960}{121} \, ({\mathcal C}_3,{\mathcal C}_3)_2+\frac{35}{11} \, {\mathcal C}_3 \, {\mathcal C}_5, \end{aligned} \label{syzygy.d7.r5}$$ indirectly expresses ${\mathcal C}_7$ in terms of ${\mathcal C}_1,{\mathcal C}_3$. We will show that such formulae always exist for all $d$ and $3 \le r \le \lfloor\frac{d+1}{2}\rfloor$. After completing our results, we discovered that a few such calculations had been done by Shenton [@Shenton p. 257ff]. Quadratic syzygies ================== Define a (quadratic) syzygy of weight $2 r$ to be an identity $$\sum \, \alpha_{i,j} \, ({\mathcal C}_{2i-1},{\mathcal C}_{2j-1})_{2(r-i-j+1)} =0, \qquad (\alpha_{i,j} \in {\mathbf Q}) \label{quad.syzygy}$$ assumed to hold for all $d$-ics $A,B$. The sum is taken over all pairs $(i,j)$ such that $$1 \le i \le j \le r, \quad i + j \le r+1. \label{range.ij}$$ For instance, (\[syzygy.d7.r3\]) and (\[syzygy.d7.r5\]) are syzygies of weight $6$ and $8$ respectively. Notice that the only term in (\[quad.syzygy\]) involving ${\mathcal C}_{2r-1}$ corresponds to $(i,j)=(1,r)$. Now our main result is the following: *For every $3 \le r \le \lfloor\frac{d+1}{2}\rfloor$, there exists a quadratic syzygy of weight $2 r$ such that $\alpha_{1,r} \neq 0$. \[main.theorem\]* We will, in fact, produce an explicit formula for the $\alpha_{i,j}$.
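Identity (\[syzygy.d7.r3\]) can be verified symbolically using the explicit transvectant formula from the introduction. The check below (not part of the paper's derivation) uses the septic pair $A = x_1^7$, $B = x_2^7$, for which a short computation gives $(A,B)_q = x_1^{7-q}x_2^{7-q}$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def transvectant(F, G, m, n, q):
    """q-th transvectant (F, G)_q, as defined by the explicit formula in the text."""
    pref = (sp.factorial(m - q) * sp.factorial(n - q)
            / (sp.factorial(m) * sp.factorial(n)))
    return sp.expand(pref * sum(
        (-1)**i * sp.binomial(q, i)
        * sp.diff(F, x1, q - i, x2, i)
        * sp.diff(G, x1, i, x2, q - i)
        for i in range(q + 1)))

# Combinants of the pencil spanned by A = x1^7, B = x2^7.
d = 7
A, B = x1**7, x2**7
C1 = transvectant(A, B, d, d, 1)   # degree 2d - 2  = 12
C3 = transvectant(A, B, d, d, 3)   # degree 2d - 6  = 8
C5 = transvectant(A, B, d, d, 5)   # degree 2d - 10 = 4

residual = sp.expand(
    C1 * C5
    + sp.Rational(21, 2) * transvectant(C1, C1, 12, 12, 4)
    - sp.Rational(84, 11) * transvectant(C1, C3, 12, 8, 2)
    - sp.Rational(735, 484) * C3**2)
print(residual)   # 0
```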
Given this, one can rewrite (\[quad.syzygy\]) as $${\mathcal C}_{2r-1} = - \frac{1}{{\mathcal C}_1} \, \sum\limits_{(i,j) \neq (1,r)} \; \frac{\alpha_{i,j}}{\alpha_{1,r}} \; ({\mathcal C}_{2i-1},{\mathcal C}_{2j-1})_{2(r-i-j+1)},$$ which recovers ${\mathcal C}_{2r-1}$ from ${\mathcal C}_1,\dots,{\mathcal C}_{2r-3}$. Notice that ${\mathcal C}_1$ is (up to a scalar) the Jacobian of $A,B$; in particular it is nonzero if $\{A,B\}$ are linearly independent. By a classical theorem of Gordan, the algebra of all combinants of a pencil is finitely generated. However, a specific set of generators is known in only a few cases (see [@Gordan; @Meulien; @Newstead; @Wall]). Our main theorem is not directly comparable to these results, since we allow not only polynomial, but also rational transvectant expressions in the combinants. In outline, the proof of Theorem \[main.theorem\] proceeds as follows. The following proposition (proved in [@JC §5]) reinterprets a syzygy as an $SL_2$-equivariant morphism. *\[prop.wedge4Sd\] The vector space of syzygies of weight $2r$ is isomorphic to $\text{Hom}_{SL_2}(S_{4(d-r)},\wedge^4 S_d)$.* In §\[zetadef\] we will construct a [*specific*]{} morphism $$\zeta: S_{4(d-r)} {\longrightarrow}\wedge^4 S_d,$$ and then calculate the corresponding syzygy coefficients. In fact this calculation will be done twice: first by classical symbolic methods, and secondly by recasting the coefficient as a 9-j symbol in the sense of the quantum theory of angular momentum. It would be of interest to know whether an analogue of Theorem \[main.theorem\] holds for other categories of representations. In §\[example.SL3\] we give such an example for the group $SL_3$. We informally sketch the idea behind Proposition \[prop.wedge4Sd\]. Consider the Pl[ü]{}cker imbedding $$G(2,S_d) \hookrightarrow {\mathbf P}(\wedge^2 S_d),$$ with image $X$ and ideal sheaf ${\mathcal I}_X$.
The short exact sequence of $SL_2$-representations $$0 {\rightarrow}H^0({\mathcal I}_X(2)) {\rightarrow}H^0({\mathcal O}_{\mathbf P}(2)) {\rightarrow}H^0({\mathcal O}_X(2)) {\rightarrow}0,$$ can be naturally identified with $$0 {\rightarrow}\wedge^4 S_d \stackrel{\imath}{{\rightarrow}} S_2(\wedge^2 S_d) \stackrel{q}{{\rightarrow}} {\mathbb S}_{(2,2)}(S_d) {\rightarrow}0.$$ Here ${\mathbb S}_{(2,2)}$ denotes the Schur functor associated to the partition $(2,2)$ (see [@FH Lecture 6]). The coefficients of each ${\mathcal C}_{2i-1}$ can be seen as homogeneous co[ö]{}rdinates on ${\mathbf P}(\wedge^2 S_d)$, hence an expression $${\mathcal E}= \sum \, \alpha_{i,j} \,({\mathcal C}_{2i-1},{\mathcal C}_{2j-1})_{2(r-i-j+1)}$$ corresponds to the function $$S_{4(d-r)} \stackrel{\phi_{\mathcal E}}{{\longrightarrow}} H^0({\mathcal O}_{\mathbf P}(2)), \quad F {\longrightarrow}(F,{\mathcal E})_{4(d-r)}.$$ Now ${\mathcal E}$ is a syzygy iff this function is identically zero on $X$, i.e., iff $q \circ \phi_{\mathcal E}=0$. This is equivalent to the condition that $\phi_{\mathcal E}$ factor through $\ker q$. Conversely, a nonzero map $S_{4(d-r)} \stackrel{\phi}{{\longrightarrow}} \wedge^4 S_d$ defines an irreducible subrepresentation of $H^0({\mathcal I}_X(2))$, which translates into a quadratic syzygy ${\mathcal E}_\phi$. This interpretation allows us to read off the individual coefficients in a syzygy. Let ${\mathcal E}=0$ denote a quadratic syzygy of weight $2r$, and fix a pair of integers $(i,j)$ satisfying $$1 \le i, j \le r, \quad i + j \le r+1.$$ (Notice that we have not imposed the condition $i \le j$.) Consider the sequence of morphisms $$\begin{aligned} S_{4(d-r)} & \stackrel{\phi_{\mathcal E}}{{\longrightarrow}} \wedge^4 S_d \stackrel{\imath}{{\longrightarrow}} S_2(\wedge^2 S_d) \stackrel{\beta_1}{{\longrightarrow}} \wedge^2 S_d \otimes \wedge^2 S_d \\ & \stackrel{\beta_2}{{\longrightarrow}} S_{2d-4i+2} \otimes S_{2d-4j+2} \stackrel{\beta_3}{{\longrightarrow}} S_{4(d-r)}.
\end{aligned} \label{beta.seq}$$ Here $\beta_1$ is the natural inclusion map $v \cdot w {\longrightarrow}\frac{1}{2}(v \otimes w + w \otimes v)$, $\beta_2$ is the tensor product of projections $p_{2i-1} \otimes p_{2j-1}$, and $\beta_3$ is the transvectant map $\pi_{2(r-i-j+1)}$. By Schur’s lemma, the composite endomorphism $$\beta_3 \circ \beta_2 \circ \beta_1 \circ \imath \circ \phi_{\mathcal E}: S_{4(d-r)} {\longrightarrow}S_{4(d-r)}$$ must be the multiplication by a constant, say $\theta_{i,j}$. Then, up to a global constant, $${\mathcal E}= \sum\limits \; \theta_{i,j} \, ({\mathcal C}_{2i-1},{\mathcal C}_{2j-1})_{2(r-i-j+1)}. \label{expr.E.theta}$$ In this section we will describe the $\beta_i$ using the classical symbolic calculus. Our notation follows [@AC] and [@GrYo]; in particular, ${\mathbf x}=(x_1,x_2),{\mathbf y}=(y_1,y_2)$ etc. denote binary variables, and $${({\mathbf x} \, {\mathbf y})} = x_1 \, y_2 - y_1 \, x_2, \quad {\Omega_{{\mathbf x} \, {\mathbf y}}} = \frac{\partial^2}{\partial x_1 \, \partial y_2}- \frac{\partial^2}{\partial y_1 \, \partial x_2}.$$ Define $${\mathsf h}(m,n;q) = \frac{(m+n-2q+1)!}{(m+n-q+1)! \, q \, !} \, .$$ The rationale for introducing this factor is explained in [@AC §1.6]. We will realise $S_2(\wedge^2 S_d)$ as the space of quadrihomogeneous forms $Q({\mathbf x},{\mathbf y},{\mathbf z},{\mathbf w})$ of order $d$ in each variable, satisfying the conditions $$Q({\mathbf x},{\mathbf y},{\mathbf z},{\mathbf w}) = - \, Q({\mathbf y},{\mathbf x},{\mathbf z},{\mathbf w}) = - \, Q({\mathbf x},{\mathbf y},{\mathbf w},{\mathbf z}) = Q({\mathbf z},{\mathbf w},{\mathbf x},{\mathbf y}).$$ Inside this space, the image of $\imath$ is identified with the set of alternating forms, i.e., those $Q$ for which $$Q({\mathbf x},{\mathbf y},{\mathbf z},{\mathbf w}) = \text{sign}(\sigma) \, Q({\mathbf x}^\sigma,{\mathbf y}^\sigma,{\mathbf z}^\sigma,{\mathbf w}^\sigma),$$ for every permutation $\sigma$ of the four letters. 
Now realise $S_{2d-4i+2} \otimes S_{2d-4j+2}$ as the space of bihomogeneous forms of respective orders $(2d-4i+2, 2d-4j+2)$ in ${\mathbf u},{\mathbf v}$. Then $\beta_2 \circ \beta_1$ maps $Q$ to $${\mathsf h}(d,d;2i-1) \, {\mathsf h}(d,d;2j-1) \, \left[ \, {\Omega_{{\mathbf x} \, {\mathbf y}}}^{2i-1} \, {\Omega_{{\mathbf z} \, {\mathbf w}}}^{2j-1} \ Q \, \right]$$ followed by the substitutions $\{ {\mathbf x},{\mathbf y}{\rightarrow}{\mathbf u}\}, \{ {\mathbf z},{\mathbf w}{\rightarrow}{\mathbf v}\}$. Notice that, given the two pairs of operations $${\Omega_{{\mathbf x} \, {\mathbf y}}}, \{{\mathbf x},{\mathbf y}{\rightarrow}{\mathbf u}\}, \qquad {\Omega_{{\mathbf z} \, {\mathbf w}}}, \{{\mathbf z},{\mathbf w}{\rightarrow}{\mathbf v}\},$$ any operation from the first pair commutes with any operation from the second. Finally, realise $S_{4(d-r)}$ as the space of forms of order $4(d-r)$ in ${\mathbf t}$; then $\beta_3$ maps $R({\mathbf u},{\mathbf v})$ to $${\mathsf h}(2d-4i+2,2d-4j+2;2r-2i-2j+2) \, [ \, {\Omega_{{\mathbf u} \, {\mathbf v}}}^{2r-2i-2j+2} \, R({\mathbf u},{\mathbf v}) \, ],$$ followed by the substitutions $\{{\mathbf u},{\mathbf v}{\rightarrow}{\mathbf t}\}$.
{#zetadef} Now define $\zeta: S_{4(d-r)} {\longrightarrow}S_2(\wedge^2 S_d)$ to be the morphism which sends $f_{\mathbf t}^{4(d-r)}$ to the form $$\begin{aligned} {\mathcal F}= \; & {({\mathbf x} \, {\mathbf y})} \, {({\mathbf z} \, {\mathbf w})}^{2r-1} \, f_{\mathbf x}^{d-1} \, f_{\mathbf y}^{d-1} \, f_{\mathbf z}^{d-2r+1} \, f_{\mathbf w}^{d-2r+1} \\ - \; & {({\mathbf x} \, {\mathbf z})} \, {({\mathbf y} \, {\mathbf w})}^{2r-1} \, f_{\mathbf x}^{d-1} \, f_{\mathbf z}^{d-1} \, f_{\mathbf y}^{d-2r+1} \, f_{\mathbf w}^{d-2r+1} \\ + \; & {({\mathbf x} \, {\mathbf w})} \, {({\mathbf y} \, {\mathbf z})}^{2r-1} \, f_{\mathbf x}^{d-1} \, f_{\mathbf w}^{d-1} \, f_{\mathbf y}^{d-2r+1} \, f_{\mathbf z}^{d-2r+1} \\ - \; & {({\mathbf y} \, {\mathbf w})} \, {({\mathbf x} \, {\mathbf z})}^{2r-1} \, f_{\mathbf y}^{d-1} \, f_{\mathbf w}^{d-1} \, f_{\mathbf x}^{d-2r+1} \, f_{\mathbf z}^{d-2r+1} \\ + \; & {({\mathbf z} \, {\mathbf w})} \, {({\mathbf x} \, {\mathbf y})}^{2r-1} \, f_{\mathbf z}^{d-1} \, f_{\mathbf w}^{d-1} \, f_{\mathbf x}^{d-2r+1} \, f_{\mathbf y}^{d-2r+1} \\ - \; & {({\mathbf z} \, {\mathbf y})} \, {({\mathbf x} \, {\mathbf w})}^{2r-1} \, f_{\mathbf z}^{d-1} \, f_{\mathbf y}^{d-1} \, f_{\mathbf x}^{d-2r+1} \, f_{\mathbf w}^{d-2r+1} \, . \end{aligned}$$ By construction, ${\mathcal F}$ is alternating in all four variables; hence $\zeta$ factors through $\wedge^4 S_d$. The rationale behind this choice of $\zeta$ will be explained in §\[pos.HS\]. The first calculation --------------------- Let us write (using the obvious notation) $${\mathcal F}= {\mathcal T}({\mathbf x}{\mathbf y},{\mathbf z}{\mathbf w}) - \, {\mathcal T}({\mathbf x}{\mathbf z},{\mathbf y}{\mathbf w}) \, + \dots - \, {\mathcal T}({\mathbf z}{\mathbf y}, {\mathbf x}{\mathbf w}).$$ We should like to gauge the effect of the morphism $\beta_3 \circ \beta_2 \circ \beta_1$ on each summand in ${\mathcal F}$.
The next two lemmata allow us to ‘cancel’ an ${\Omega_{{\mathbf x} \, {\mathbf y}}}$ against an $({\mathbf x}\, {\mathbf y})$. *Let ${\mathcal G}$ denote an arbitrary bihomogeneous form of orders $p,q$ in ${\mathbf x},{\mathbf y}$ respectively.* 1. For every $m\ge 1$, $${\Omega_{{\mathbf x} \, {\mathbf y}}} \, ({\mathbf x}\, {\mathbf y})^m \, {\mathcal G}= m \, (p+q+m+1) \, ({\mathbf x}\, {\mathbf y})^{m-1} \, {\mathcal G}+ ({\mathbf x}\, {\mathbf y})^m \, {\Omega_{{\mathbf x} \, {\mathbf y}}} \, {\mathcal G}.$$ 2. For every $\ell \ge 1$, $${\Omega_{{\mathbf x} \, {\mathbf y}}}^\ell \, ({\mathbf x}\, {\mathbf y}) \, {\mathcal G}= \ell \, (p+q-\ell+3) \, {\Omega_{{\mathbf x} \, {\mathbf y}}}^{\ell-1} \, {\mathcal G}+ ({\mathbf x}\, {\mathbf y}) \, {\Omega_{{\mathbf x} \, {\mathbf y}}}^\ell \, {\mathcal G}.$$ \[cancel.lemma\] [[Proof.]{}]{}By straightforward differentiation, $$\begin{aligned} {} & {\Omega_{{\mathbf x} \, {\mathbf y}}} \, ({\mathbf x}\, {\mathbf y}) \, {\mathcal G}= 2 \, {\mathcal G}+ (x_1 \, \frac{\partial {\mathcal G}}{\partial x_1} + x_2 \, \frac{\partial {\mathcal G}}{\partial x_2}) + (y_1 \, \frac{\partial {\mathcal G}}{\partial y_1} + y_2 \, \frac{\partial {\mathcal G}}{\partial y_2}) + \\ & (x_1 \, y_2 - x_2 \, y_1) \, (\frac{\partial^2 {\mathcal G}}{\partial x_1 \, \partial y_2} - \frac{\partial^2 {\mathcal G}}{\partial x_2 \, \partial y_1}) \\ = \; & (p+q+2) \, {\mathcal G}+ ({\mathbf x}\, {\mathbf y}) \, {\Omega_{{\mathbf x} \, {\mathbf y}}} \, {\mathcal G}. \end{aligned}$$ Now part (a) follows by an easy induction on $m$, and (b) by one on $\ell$.
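Both parts of Lemma \[cancel.lemma\] are easy to check on concrete forms. The following sympy sketch (our instantiation; the particular ${\mathcal G}$ is an arbitrary choice) verifies parts (a) and (b) for a bihomogeneous form of orders $p=2$, $q=1$:

```python
from sympy import symbols, diff, expand

x1, x2, y1, y2 = symbols('x1 x2 y1 y2')
xy = x1 * y2 - y1 * x2                       # the bracket (x y)

def Om(F):
    """The Cayley operator Omega_{x y}."""
    return diff(F, x1, y2) - diff(F, y1, x2)

# An arbitrary bihomogeneous G of orders p = 2 in x and q = 1 in y.
G, p, q = x1 * x2 * y2, 2, 1

# Part (a) with m = 3:
m = 3
lhs = Om(xy**m * G)
rhs = m * (p + q + m + 1) * xy**(m - 1) * G + xy**m * Om(G)
assert expand(lhs - rhs) == 0

# Part (b) with l = 2:
l = 2
lhs_b = Om(Om(xy * G))                        # Omega^2 applied to (x y) G
rhs_b = l * (p + q - l + 3) * Om(G) + xy * Om(Om(G))
assert expand(lhs_b - rhs_b) == 0
```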
*With ${\mathcal G}$ as above, and $\ell,m \ge 0$, $$[\, {\Omega_{{\mathbf x} \, {\mathbf y}}}^\ell \, ({\mathbf x}\, {\mathbf y})^m \, {\mathcal G}\, ]_{{\mathbf x},{\mathbf y}{\rightarrow}{\mathbf u}} = \begin{cases} \mu(p,q;\ell,m) \; [\, {\Omega_{{\mathbf x} \, {\mathbf y}}}^{\ell-m} \, {\mathcal G}\, ]_{{\mathbf x},{\mathbf y}{\rightarrow}{\mathbf u}} & \text{if $\ell \ge m$}, \\ 0 & \text{otherwise,} \end{cases}$$ where $$\mu(p,q;\ell,m) = \frac{\ell \, !}{(\ell-m)!} \frac{(p+q-\ell+2m+1)!}{(p+q-\ell+m+1)!} \, . \label{eqn.c3}$$ \[lemma.Ev2\]* [[Proof.]{}]{} Using part (a) of the previous lemma for the connecting step, one shows by induction on $\ell$ that $$\, {\Omega_{{\mathbf x} \, {\mathbf y}}}^\ell \, ({\mathbf x}\, {\mathbf y})^m \, {\mathcal G}\equiv \begin{cases} \mu(p,q;\ell,m) \; {\Omega_{{\mathbf x} \, {\mathbf y}}}^{\ell-m} \, {\mathcal G}& \text{if $0\le \ell-m \le\min(p,q)$}, \\ 0 & \text{otherwise,} \end{cases}$$ where $\equiv$ stands for congruence modulo $({\mathbf x}\, {\mathbf y})$. The result follows because terms involving $({\mathbf x}\, {\mathbf y})$ vanish after the substitution ${\mathbf x},{\mathbf y}{\rightarrow}{\mathbf u}$. As a consequence, the term ${\mathcal T}({\mathbf z}{\mathbf w},{\mathbf x}{\mathbf y})$ is annihilated by the operation ${\Omega_{{\mathbf x} \, {\mathbf y}}}^{2i-1}$ followed by $\{{\mathbf x},{\mathbf y}{\rightarrow}{\mathbf u}\}$, unless $i=r$ (and hence necessarily $j=1$).
In the latter case, $$\begin{aligned} {} & \beta_2 \circ \beta_1 \, ({\mathcal T}({\mathbf z}{\mathbf w},{\mathbf x}{\mathbf y})) = \\ & {\mathsf h}(d,d;2r-1)\, {\mathsf h}(d,d;1) \, [ \, {\Omega_{{\mathbf x} \, {\mathbf y}}}^{2r-1} \, {\Omega_{{\mathbf z} \, {\mathbf w}}} \circ {\mathcal T}({\mathbf z}{\mathbf w},{\mathbf x}{\mathbf y}) \, ]_{\{{\mathbf x},{\mathbf y}{\rightarrow}{\mathbf u}\}, \{{\mathbf z},{\mathbf w}{\rightarrow}{\mathbf v}\}}, \end{aligned}$$ evaluates to $$f_{\mathbf u}^{2d-4r+2} \, f_{\mathbf v}^{2d-2} \label{fuv}$$ because $$\begin{aligned} {} & {\mathsf h}(d,d;2r-1) \; {\mathsf h}(d,d;1) \; \mu(d-2r+1,d-2r+1;2r-1,2r-1) \, \times \\ & \mu(d-1,d-1;1,1) = 1. \end{aligned}$$ Then $\beta_3$ carries (\[fuv\]) into $f_{\mathbf t}^{4(d-r)}$. By the same argument, ${\mathcal T}({\mathbf x}{\mathbf y},{\mathbf z}{\mathbf w})$ goes to $f_{\mathbf t}^{4(d-r)}$ if $(i,j)=(1,r)$, and zero otherwise. This disposes of two of the summands in ${\mathcal F}$; the rest of them will need more work. As an interlude, we will consider a preparatory example which illustrates the operation of $\Omega_{{\mathbf x}{\mathbf y}}$ on a symbolic product involving ${\mathbf x},{\mathbf y}$ (cf. [@Glenn §3.2.5]). Let $$E = {({\mathbf x} \, {\mathbf z})}^7 \, {({\mathbf y} \, {\mathbf w})}^2 \, f_{\mathbf x}\, g_{\mathbf x}^4 \, f_{\mathbf y}^5.$$ First we follow the calculation of $\Omega_{{\mathbf x}{\mathbf y}} \, E$. The idea, in brief, is to pair an ${\mathbf x}$-factor with a ${\mathbf y}$-factor and contract them against each other. The following diagram shows all the types of ${\mathbf x}$ and ${\mathbf y}$ factors in $E$, and the possible pairings between them.
(Diagram omitted: each of the ${\mathbf x}$-type factors ${({\mathbf x} \, {\mathbf z})}, f_{\mathbf x}, g_{\mathbf x}$ of $E$ is joined by an arrow to each of the ${\mathbf y}$-type factors ${({\mathbf y} \, {\mathbf w})}, f_{\mathbf y}$. The six resulting pairings are labelled (1)–(6); in particular, (1) pairs ${({\mathbf x} \, {\mathbf z})}$ with ${({\mathbf y} \, {\mathbf w})}$, (2) pairs ${({\mathbf x} \, {\mathbf z})}$ with $f_{\mathbf y}$, (3) pairs $f_{\mathbf x}$ with ${({\mathbf y} \, {\mathbf w})}$, (4) pairs $f_{\mathbf x}$ with $f_{\mathbf y}$, and (5), (6) are the two pairings involving $g_{\mathbf x}$.) The equality $\Omega_{{\mathbf x}{\mathbf y}} \, [ {({\mathbf x} \, {\mathbf u})} {({\mathbf y} \, {\mathbf v})} ] = {({\mathbf u} \, {\mathbf v})}$ gives our basic rule: contracting ${({\mathbf x} \, {\mathbf u})}$ against ${({\mathbf y} \, {\mathbf v})}$ gives ${({\mathbf u} \, {\mathbf v})}$. For instance, contraction along the arrow (1) gives ${({\mathbf z} \, {\mathbf w})}$. Introducing a phantom letter $\tilde{\mathbf f} = (-f_2,f_1)$, we can write $f_{\mathbf x}= ({\mathbf x}\, \tilde{\mathbf f})$, and hence contraction along (3) gives $(\tilde{\mathbf f} \, {\mathbf w}) = - \, f_{\mathbf w}$. Contraction along (4) gives $(f \, f)=0$. Now $\Omega_{{\mathbf x}{\mathbf y}} \, E$ is a sum of terms (quantified over all choices of contractions), where in each term the contracted factors are replaced by their result. Thus, $\Omega_{{\mathbf x}{\mathbf y}} \, E = $ $$\underbrace{14 \, {({\mathbf z} \, {\mathbf w})} \, {({\mathbf x} \, {\mathbf z})}^6 \, {({\mathbf y} \, {\mathbf w})} \, f_{\mathbf x}\, g_{\mathbf x}^4 \, f_{\mathbf y}^5}_{\text{from arrow (1)}} + \underbrace{35 \, f_{\mathbf z}\, {({\mathbf x} \, {\mathbf z})}^6 \, {({\mathbf y} \, {\mathbf w})}^2 \, f_{\mathbf x}\, g_{\mathbf x}^4 \, f_{\mathbf y}^4}_{\text{from arrow (2)}} \, + \dots \text{etc.}$$ To calculate $\Omega_{{\mathbf x}{\mathbf y}}^2 \, E$ we must sum over all possible $2$-step sequences of contractions, taking account of available multiplicities.
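The basic contraction rule, and the phantom-letter computation for arrow (3), can be confirmed directly. A short sympy sketch (the variable names are ours):

```python
from sympy import symbols, diff, expand

x1, x2, y1, y2, u1, u2, v1, v2, w1, w2, f1, f2 = symbols(
    'x1 x2 y1 y2 u1 u2 v1 v2 w1 w2 f1 f2')

def bracket(a1, a2, b1, b2):
    """The symbolic bracket (a b) = a1*b2 - b1*a2."""
    return a1 * b2 - b1 * a2

def Om(F):
    """The Cayley operator Omega_{x y}."""
    return diff(F, x1, y2) - diff(F, y1, x2)

# Basic rule: contracting (x u) against (y v) yields (u v).
lhs = Om(bracket(x1, x2, u1, u2) * bracket(y1, y2, v1, v2))
assert expand(lhs - bracket(u1, u2, v1, v2)) == 0

# Arrow (3): with the phantom letter ftilde = (-f2, f1) one has
# f_x = (x ftilde), and contracting f_x against (y w) yields -f_w.
f_x = bracket(x1, x2, -f2, f1)        # equals f1*x1 + f2*x2
f_w = f1 * w1 + f2 * w2
assert expand(Om(f_x * bracket(y1, y2, w1, w2)) + f_w) == 0
```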
For instance, the sequences of arrows $$(1)(4), \quad (2)(2), \quad (3)(5)$$ are allowed, but (3)(3) is not since there is only one $f_{\mathbf x}$ available. This gives $\Omega_{{\mathbf x}{\mathbf y}}^2 \, E =$ $$\underbrace{840 \, f_{\mathbf z}^2 \, {({\mathbf x} \, {\mathbf z})}^5 \, {({\mathbf y} \, {\mathbf w})}^2 \, f_{\mathbf x}\, g_{\mathbf x}^4 \, f_{\mathbf y}^3}_{\text{from (2)(2)}} - \underbrace{120 \, (g \, f) \, g_{\mathbf w}\, {({\mathbf x} \, {\mathbf z})}^7 \, {({\mathbf y} \, {\mathbf w})} \, f_{\mathbf x}\, g_{\mathbf x}^2 \, f_{\mathbf y}^4}_{\text{from (5)(6)}} + \dots \text{etc.}$$ If we treat the seven ${({\mathbf x} \, {\mathbf z})}$ factors as notionally distinct, a sequence of two from them can be chosen in $7!/5!$ ways, and similarly for $f_{\mathbf y}^5$. This gives the first coefficient as $\frac{7!}{5!} \, \frac{5!}{3!} = 840$. Similarly, the second coefficient is $\frac{4!}{2!} \times 2 \times 5$. Notice that the sequence (6)(5) will give an [*additional*]{} term identical to the one coming from (5)(6). \[example.evalOmega\] {#section-6} We will now follow the evaluation of $\beta_3 \circ \beta_2 \circ \beta_1 \circ {\mathcal T}({\mathbf x}{\mathbf w},{\mathbf y}{\mathbf z})$. As a first step we have to remove $(2i-1)$ factors each of type ${\mathbf x},{\mathbf y}$ from ${\mathcal T}({\mathbf x}{\mathbf w},{\mathbf y}{\mathbf z})$. The available factors are respectively $${({\mathbf x} \, {\mathbf w})} \, f_{\mathbf x}^{d-1} \quad \text{and} \quad {({\mathbf y} \, {\mathbf z})}^{2r-1} \, f_{\mathbf y}^{d-2r+1}.$$ There are three choices: $$\begin{array}{crcl} \text{(I)} & {({\mathbf x} \, {\mathbf w})} \, f_{\mathbf x}^{2i-2} & \text{and} & {({\mathbf y} \, {\mathbf z})}^{2i-2} \, f_{\mathbf y}\\ \text{(II)} & {({\mathbf x} \, {\mathbf w})} \, f_{\mathbf x}^{2i-2} & \text{and} & {({\mathbf y} \, {\mathbf z})}^{2i-1}, \\ \text{(III)} & f_{\mathbf x}^{2i-1}& \text{and} & {({\mathbf y} \, {\mathbf z})}^{2i-1}. 
\end{array} \label{choices.xy}$$ The possibilities are limited by the following constraint: since $f_{\mathbf y}$ can only be paired with ${({\mathbf x} \, {\mathbf w})}$, no more than one copy of $f_{\mathbf y}$ can be chosen; and hence at least $2i-2$ copies of ${({\mathbf y} \, {\mathbf z})}$ must be chosen. After contraction and the substitution $\{{\mathbf x},{\mathbf y}{\rightarrow}{\mathbf u}\}$, choice (I) leads to the expression $$c_I \, {({\mathbf u} \, {\mathbf z})}^{2r-2i+1} \, f_{\mathbf u}^{2d-2r-2i+1} \, f_{\mathbf z}^{d-2r+2i-1} \, f_{\mathbf w}^d. \label{Omegaxy.I}$$ Here (and subsequently) $c_I,c_{I'}$ etc. stand for some rational constants which will be determined later. Now we must remove $(2j-1)$ factors each of type ${\mathbf z},{\mathbf w}$ from (\[Omegaxy.I\]). The choice is forced, namely $$\begin{array}{crcl} \text{(I')} & {({\mathbf u} \, {\mathbf z})}^{2j-1} & \text{and} & f_{\mathbf w}^{2j-1}. \end{array}$$ After contraction and $\{{\mathbf z},{\mathbf w}{\rightarrow}{\mathbf v}\}$, we get an expression $$- \, c_I \, c_{I'} \, {({\mathbf u} \, {\mathbf v})}^{2r-2i-2j+2} \, f_{\mathbf u}^{2d-2r-2i+2j} \, f_{\mathbf v}^{2d-2r+2i-2j}. \label{expr.uvI}$$ (The negative sign arises because contracting ${({\mathbf u} \, {\mathbf z})}$ against $f_{\mathbf w}$ gives $-f_{\mathbf u}$.) Now $\beta_3$ will convert (\[expr.uvI\]) into $$- \, c_I \, c_{I'} \, f_{\mathbf t}^{4(d-r)} \label{expr.choiceI}$$ as a consequence of Lemma \[lemma.Ev2\]. {#section-7} Choice (II) in (\[choices.xy\]) leads to the expression $$- \, c_{II} \, {({\mathbf z} \, {\mathbf w})} \, \underbrace{{({\mathbf u} \, {\mathbf z})}^{2r-2i} \, f_{\mathbf u}^{2d-2r-2i+2} \, f_{\mathbf w}^{d-1} \, f_{\mathbf z}^{d-2r+2i-1}}_{{\mathcal G}},$$ on which we have to operate by ${\Omega_{{\mathbf z} \, {\mathbf w}}}^{2j-1}$.
Using part (b) of Lemma \[cancel.lemma\], $${\Omega_{{\mathbf z} \, {\mathbf w}}}^{2j-1} \, ({\mathbf z}\, {\mathbf w}) \, {\mathcal G}= (2j-1)(2d-2j+2) \, {\Omega_{{\mathbf z} \, {\mathbf w}}}^{2j-2} \, {\mathcal G}+({\mathbf z}\, {\mathbf w}) \, {\Omega_{{\mathbf z} \, {\mathbf w}}}^{2j-1} \, {\mathcal G}.$$ After the substitution $\{{\mathbf z},{\mathbf w}{\rightarrow}{\mathbf v}\}$, the second term goes away. In evaluating ${\Omega_{{\mathbf z} \, {\mathbf w}}}^{2j-2} \, {\mathcal G}$, we have a forced choice $$\begin{array}{clcr} \text{(II')} & {({\mathbf u} \, {\mathbf z})}^{2j-2} & \text{and} & f_{\mathbf w}^{2j-2}, \end{array}$$ leading to $$- \, (2d-2j+2)(2j-1) \, c_{II} \, c_{II'} \, f_{\mathbf t}^{4(d-r)}. \label{expr.choiceII}$$ {#section-8} Choice (III) (which is only possible if $2i\le d$) leads to $$-c_{III} \, ({\mathbf u}{\mathbf w}) \, ({\mathbf u}{\mathbf z})^{2r-2i} \, f_{\mathbf u}^{2d-2r-2i+1} \, f_{\mathbf w}^{d-1} \, f_{\mathbf z}^{d-2r+2i}.$$ When applying ${\Omega_{{\mathbf z} \, {\mathbf w}}}^{2j-1}$, it further bifurcates into the two choices: $$\begin{array}{clcr} \text{(III')} & {({\mathbf u} \, {\mathbf z})}^{2j-2} \, f_{\mathbf z}& \text{and} & {({\mathbf u} \, {\mathbf w})} \, f_{\mathbf w}^{2j-2} \\ \text{(III'')} & {({\mathbf u} \, {\mathbf z})}^{2j-1} & \text{and} & f_{\mathbf w}^{2j-1}, \\ \end{array}$$ which are dealt with similarly. In fact (III'') can arise only if $$2j \le d, \quad \text{and} \quad r \ge i+j. \label{condition.CIIIp}$$ Altogether we arrive at the expression $$\begin{aligned} {} & \beta_3 \circ \beta_2 \circ \beta_1 \circ {\mathcal T}({\mathbf x}{\mathbf w},{\mathbf y}{\mathbf z}) = {\mathsf h}(d,d;2i-1) \, {\mathsf h}(d,d;2j-1) \, \times \\ & (-c_I \, c_{I'} - \, (2d-2j+2)(2j-1) \, c_{II} \, c_{II'} - c_{III} \, c_{III'} + c_{III} \, c_{III''}) \, f_{\mathbf t}^{4(d-r)}.
\end{aligned}$$ Using the recipe of Example \[example.evalOmega\], we get the constants $$\begin{aligned} c_I & = (2i-1) \, (d-2r+1) \, \frac{(d-1)!}{(d-2i+1)!} \frac{(2r-1)!}{(2r-2i+1)!}, \\ c_{I'} & = \frac{(2r-2i+1)! \, d!}{(2r-2i-2j+2)! \, (d-2j+1)!}, \\ c_{II} & = \frac{(2i-1) \, (d-1)! \, (2r-1)! }{(d-2i+1)! \, (2r-2i)!}, \\ c_{II'} & = \frac{(2r-2i)! \, (d-1)!}{(2r-2i-2j+2)! \, (d-2j+1)!}, \\ c_{III} & = \frac{(d-1)! \, (2r-1)!}{(d-2i)! \, (2r-2i)!}, \\ c_{III'} & = \frac{(2j-1) \, (d-2r+2i) \, (2r-2i)! \, (d-1)!}{(2r-2i-2j+2)! \, (d-2j+1)!}, \\ c_{III''} & = \frac{(2r-2i)! \, (d-1)!}{(2r-2i-2j+1)! \, (d-2j)!}. \end{aligned}$$ If $2i\le d$ fails, then $c_{III}$ is zero by definition. Likewise, if the conditions in (\[condition.CIIIp\]) are not satisfied, then $c_{III''}$ is understood to be zero. Recall that the prevailing hypotheses are $$3\le r\le \frac{d+1}{2}, \quad 1\le i,j\le r, \quad \text{and} \quad i+j\le r+1.$$ Therefore, any of the extra conditions $2i\le d$, $2j\le d$, and $i+j\le r$ can fail only if, respectively, $d-2i+1$, $d-2j+1$, or $r-i-j+1$ vanishes. Hence, the following expressions for $c_{III}$ and $c_{III''}$ hold unconditionally: $$\begin{aligned} c_{III} & = & \frac{(d-2i+1)\, (d-1)! \, (2r-1)!}{(d-2i+1)! \, (2r-2i)!}, \label{improvedIII}\\ c_{III''} & = & \frac{(d-2j+1)\,(2r-2i-2j+2)\,(2r-2i)! \, (d-1)!}{(2r-2i-2j+2)! \, (d-2j+1)!}. \label{improvedIIIpp}\end{aligned}$$ Due to the symmetry in the situation, the rest of the terms $$- \, {\mathcal T}({\mathbf x}{\mathbf z},{\mathbf y}{\mathbf w}), \quad - \, {\mathcal T}({\mathbf y}{\mathbf w},{\mathbf x}{\mathbf z}), \quad - \, {\mathcal T}({\mathbf z}{\mathbf y},{\mathbf x}{\mathbf w})$$ give identical evaluations. After some simplification, we arrive at the following formula: {#section-9} Define $\delta_{i,j}$ to be $1$ if $i=j$, and $0$ otherwise. Let $$\begin{aligned} {\mathcal N}_1 = & \, (2 d \, i+2d \, j-d \, r-2i^2-2j^2-2d+3i+3j-2) \, \times \\ & d! \, (d-1)! \, (2r-1)! \, (2d-4i+3)!
\, (2d-4j+3)!, \\ {\mathcal N}_2 = & \, (2i-1)! \, (2j-1)! \, (d-2i+1)! \, (d-2j+1)! \, \times \\ & (2d-2i+2)! \, (2d-2j+2)! \, (2r-2i-2j+2)!, \end{aligned}$$ then $$\theta_{i,j} = (\, \delta_{i,1} \, \delta_{j,r} + \delta_{i,r} \, \delta_{j,1} - 8 \; \frac{{\mathcal N}_1}{{\mathcal N}_2} \, ). \label{formula.alpha.ij}$$ Evidently $\theta_{i,j} = \theta_{j,i}$. Therefore, in expression (\[expr.E.theta\]) one can combine the terms $(i,j)$ and $(j,i)$. Let $\epsilon_{i,j} = 2$ if $i \neq j$, and $1$ if $i = j$. Now let $\alpha_{i,j} = \epsilon_{i,j} \, \theta_{i,j}$. We have finally arrived at the required syzygy $${\mathcal E}_{\zeta}: \sum\limits_{(i,j)} \, \alpha_{i,j} \, ({\mathcal C}_{2i-1},{\mathcal C}_{2j-1})_{2(r-i-j+1)} =0,$$ where the sum is quantified over all pairs $(i,j)$ such that $$1 \le i \le j \le r, \quad i + j \le r+1.$$ The reader may check that for $d=7,r=3$, the syzygy becomes $$10 \, ({\mathcal C}_1,{\mathcal C}_1)_4 - \frac{80}{11} \, ({\mathcal C}_1,{\mathcal C}_3)_2 - \frac{175}{121} \, {\mathcal C}_3^2 + \frac{20}{21} \, {\mathcal C}_1 \, {\mathcal C}_5 = 0,$$ which is the same as (\[syzygy.d7.r3\]). We have (successfully) tested formula (\[formula.alpha.ij\]) in [Maple]{} on several examples. Second calculation ------------------ In fact, formula (\[formula.alpha.ij\]) was first arrived at by a different path, namely by interpreting $\theta_{i,j}$ as (in essence) a 9-j symbol in the sense of the quantum theory of angular momentum (see [@AC §7]). We pick up the thread at the beginning of §\[zetadef\]. 
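Formula (\[formula.alpha.ij\]) can also be spot-checked with exact rational arithmetic. The following Python sketch (our transcription of the formula; variable names are ours) recomputes the coefficients of the $d=7$, $r=3$ syzygy displayed above:

```python
from fractions import Fraction
from math import factorial as fac

def theta(d, r, i, j):
    """theta_{i,j} of (formula.alpha.ij)."""
    n1 = (2*d*i + 2*d*j - d*r - 2*i*i - 2*j*j - 2*d + 3*i + 3*j - 2) * \
        fac(d) * fac(d - 1) * fac(2*r - 1) * \
        fac(2*d - 4*i + 3) * fac(2*d - 4*j + 3)
    n2 = fac(2*i - 1) * fac(2*j - 1) * fac(d - 2*i + 1) * fac(d - 2*j + 1) * \
        fac(2*d - 2*i + 2) * fac(2*d - 2*j + 2) * fac(2*r - 2*i - 2*j + 2)
    delta = int((i, j) == (1, r)) + int((i, j) == (r, 1))
    return delta - 8 * Fraction(n1, n2)

def alpha(d, r, i, j):
    """alpha_{i,j} = epsilon_{i,j} * theta_{i,j}."""
    return (2 if i != j else 1) * theta(d, r, i, j)

# The weight-6 syzygy for d = 7, r = 3:
coeffs = {(i, j): alpha(7, 3, i, j)
          for (i, j) in [(1, 1), (1, 2), (1, 3), (2, 2)]}
```

This reproduces the coefficients $10$, $-\frac{80}{11}$, $\frac{20}{21}$ and $-\frac{175}{121}$ above.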
The trajectory $f_{\mathbf t}^{4(d-r)} {\longrightarrow}{\mathcal T}({\mathbf x}{\mathbf w}, {\mathbf y}{\mathbf z})$ followed by $\beta_3 \circ \beta_2 \circ \beta_1$ is described by the sequence of morphisms $$\begin{aligned} {} & S_{4(d-r)} {\longrightarrow}S_{2d-2} \otimes S_{2d-4r+2} {\longrightarrow}(S_d \otimes S_d) \otimes (S_d \otimes S_d) {\longrightarrow}\\ & (S_d \otimes S_d) \otimes (S_d \otimes S_d) {\longrightarrow}S_{2d-4i+2} \otimes S_{2d-4j+2} {\longrightarrow}S_{4(d-r)}. \end{aligned}$$ Here the first two maps are natural injections, the last two are natural projections, and the one in the middle is the shuffling map $$(v_1 \otimes v_2) \otimes (v_3 \otimes v_4) {\longrightarrow}(v_1 \otimes v_4) \otimes (v_2 \otimes v_3).$$ By Schur’s lemma, the total composite must be a multiple of the identity map $\text{Id}_{S_{4(d-r)}}$. Up to an easily calculated factor (see [@AC §7.9]), this multiple is the 9-j symbol $$B = \left\{ \begin{array}{ccc} \frac{d}{2} & \frac{d}{2} & d-2i+1 \\ & & \\ \frac{d}{2} & \frac{d}{2} & d-2j+1 \\ & & \\ d-1 & d-2r+1 & 2d-2r \end{array} \right\} \ .$$ Now interchange rows $1,2$ of $B$, then interchange rows $1,3$ of the new array, and finally interchange columns $2,3$. This gives an equivalent array $$B' = \left\{ \begin{array}{ccc} d-1 & 2d-2r & d-2r+1\\ & & \\ \frac{d}{2} & d-2i+1 & \frac{d}{2}\\ & & \\ \frac{d}{2} & d-2j+1 & \frac{d}{2}\\ \end{array}\right\}. \label{fortriplesum}$$ Finally apply the Ališauskas-Jucys triple sum formula (see [@AC §7.10]) to $B'$. In the notation used there, the set $\Lambda$ of triples of indices which appear in the sum is contained in $$\left\{ \, (d-2r+1,2j-1,0), \; (d-2r+1,2j-2,0), \; (d-2r,2j-1,0) \, \right\},$$ which reduces the sum to at most three easily manageable terms. The triple $(d-2r+1,2j-1,0)$ appears in the sum unless $i=r=\frac{d+1}{2}$. The triple $(d-2r+1,2j-2,0)$ always appears. Finally $(d-2r,2j-1,0)$ appears unless $r=\frac{d+1}{2}$.
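The passage from $B$ to $B'$ uses only the standard permutation symmetries of the 9-j symbol; the overall row/column permutation is odd, but the attendant sign $(-1)^S$ equals $+1$ here since the sum $S$ of all nine entries is even. Assuming sympy's `wigner_9j` follows the standard conventions, this can be checked on a sample choice of parameters (we take $d=7$, $r=3$, $i=1$, $j=3$ purely for illustration):

```python
from sympy import Rational
from sympy.physics.wigner import wigner_9j

# Entries of B and B' for d = 7, r = 3, i = 1, j = 3.
d2 = Rational(7, 2)                       # d/2
# B: rows (d/2, d/2, d-2i+1), (d/2, d/2, d-2j+1), (d-1, d-2r+1, 2d-2r).
B = wigner_9j(d2, d2, 6, d2, d2, 2, 6, 2, 8)
# B': rows (d-1, 2d-2r, d-2r+1), (d/2, d-2i+1, d/2), (d/2, d-2j+1, d/2).
Bprime = wigner_9j(6, 8, 2, d2, 6, d2, d2, 2, d2)
assert B == Bprime
```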
One can remove the case discussion using the same trick which led to the unconditional formulae (\[improvedIII\]) and (\[improvedIIIpp\]). After a little simplification, once again we get formula (\[formula.alpha.ij\]). Positivity ========== {#pos.HS} The next proposition will conclude the proof of Theorem \[main.theorem\]. \[nonzero.alpha\] *The coefficient $\alpha_{1,r}$ is nonzero, and in fact strictly positive.* [[Proof.]{}]{} We will extensively use the material in [@AC §7]. If $u: {\mathcal E}_1 {\longrightarrow}{\mathcal E}_2$ denotes a linear map between Hilbert spaces, then $u^*: {\mathcal E}_2 {\longrightarrow}{\mathcal E}_1$ denotes its adjoint. Recall that the Hilbert-Schmidt norm of $u$ is defined to be $$|| u||_{\text{HS}} = \sqrt{\, \text{trace} \, (u^* \circ u)}.$$ For a composite ${\mathcal E}_1 \stackrel{u}{{\longrightarrow}} {\mathcal E}_2 \stackrel{v}{{\longrightarrow}} {\mathcal E}_3$, we have $(v \circ u)^* = u^* \circ v^*$. In the notation of [@AC §7], we write ${\mathcal H}_{\frac{m}{2}}$ for $S_m$, which carries a natural structure of a finite dimensional Hilbert space. We will view $\zeta$ as a map from ${\mathcal H}_{2(d-r)}$ to $({\mathcal H}_{\frac{d}{2}})^{\otimes 4}$ via the natural inclusion $\wedge^4 \, {\mathcal H}_{\frac{d}{2}} \hookrightarrow ({\mathcal H}_{\frac{d}{2}})^{\otimes 4}$. Similarly, we view $\beta_1$ as originating from $({\mathcal H}_{\frac{d}{2}})^{\otimes 4}$ via the natural surjection $$\begin{aligned} ({\mathcal H}_{\frac{d}{2}})^{\otimes 4} & {\longrightarrow}S_2(\wedge^2\, {\mathcal H}_{\frac{d}{2}})\\ z_1\otimes z_2 \otimes z_3\otimes z_4 & {\longrightarrow}(z_1\wedge z_2)\cdot(z_3\wedge z_4). \end{aligned}$$ Henceforth, throughout the proof, the symbol ${\, \circlearrowleft }$ will stand for some strictly positive constant which need not be specified. 
Recall that we have defined maps $\pi^{\text{PHY}},\imath^{\text{PHY}}$ such that $$\pi_{\frac{m}{2},\frac{n}{2},\frac{1}{2}(m+n-2q)}^{\text{PHY}}= {\, \circlearrowleft }\pi_q, \quad \imath_{\frac{m}{2},\frac{n}{2},\frac{1}{2}(m+n-2q)}^{\text{PHY}}= {\, \circlearrowleft }\imath_q,$$ in the notation of (\[pi.q\]) and (\[i.q\]); moreover $\pi^{\text{PHY}}= (\imath^{\text{PHY}})^*$. We will show that, $${\, \circlearrowleft }\alpha_{1,r} = ||\zeta||_{\text{HS}}^2. \label{alpha1r.zeta}$$ First, observe that $\alpha_{1,1} = 2 \, (r-2) (2r-1) \neq 0$, hence the map $\zeta$ is not identically zero (if the reader was not already so persuaded). If $\left( a_{s,t}\right)$ denotes the matrix representing $\zeta$ with respect to some orthonormal bases, then $$\text{trace} \, (\zeta^* \circ \zeta) = \sum\limits_{s,t} \; |a_{s,t}|^2 > 0,$$ hence it only remains to show (\[alpha1r.zeta\]) to complete the proof of the proposition. Now specialise to $i=1,j=r$, and let $\psi = \beta_3 \circ \beta_2 \circ \beta_1 \circ \zeta$. By definition, $\alpha_{1,r}=2 \, \theta_{1,r}$, where $\psi = \theta_{1,r} \, \text{Id}_{S_{4(d-r)}}$ and hence $$\theta_{1,r} = \frac{\text{trace}(\psi)}{4(d-r)+1}.$$ Notice that, up to a positive multiplicative constant, the map $f_{\mathbf t}^{4(d-r)} {\longrightarrow}{\mathcal T}({\mathbf x}{\mathbf y},{\mathbf z}{\mathbf w})$ is the sequence $${\mathcal H}_{2(d-r)} {\longrightarrow}{\mathcal H}_{d-1} \otimes {\mathcal H}_{d-2r+1} {\longrightarrow}({\mathcal H}_{\frac{d}{2}} \otimes {\mathcal H}_{\frac{d}{2}}) \otimes ({\mathcal H}_{\frac{d}{2}} \otimes {\mathcal H}_{\frac{d}{2}} ),$$ where the first map is $\imath_{j_{12}j_{34}J}^{{\text{PHY}}}$, and the second is $\imath_{j_1 j_2 j_{12}}^{{\text{PHY}}}\otimes \imath_{j_3 j_4 j_{34}}^{{\text{PHY}}}$, with $$\begin{aligned} {} & j_{12} = d-1, \quad j_{34} = d-2r+1, \quad J = 2(d-r), \quad \text{and} \\ {} & j_1 = j_2 = j_3 = j_4 = \frac{d}{2}. 
\end{aligned}$$ If we compose this with the alternation map $$\begin{aligned} {\mathcal A}: ({\mathcal H}_{\frac{d}{2}})^{\otimes 4} & {\longrightarrow}({\mathcal H}_{\frac{d}{2}})^{\otimes 4} \\ z_1 \otimes z_2 \otimes z_3 \otimes z_4 & {\longrightarrow}\frac{1}{4!}\sum\limits_{\sigma \in {\mathfrak S}_4} \; \text{sign}(\sigma) \, z_{\sigma(1)} \otimes z_{\sigma(2)} \otimes z_{\sigma(3)} \otimes z_{\sigma(4)}, \end{aligned}$$ the net effect (up to a constant) is $\zeta: f_{\mathbf t}^{4(d-r)} {\longrightarrow}{\mathcal F}$. In other words, $$\zeta = {\, \circlearrowleft }\, {\mathcal A} \circ \left( \imath^{\text{PHY}}_{j_1 j_2 j_{12}} \otimes \imath^{\text{PHY}}_{j_3 j_4 j_{34}} \right) \circ \imath^{\text{PHY}}_{j_{12} j_{34} J}.$$ Now observe that $\beta_3 \circ \beta_2 \circ \beta_1$ is (up to a constant) the sequence of maps: $$({\mathcal H}_{\frac{d}{2}} \otimes {\mathcal H}_{\frac{d}{2}}) \otimes ({\mathcal H}_{\frac{d}{2}} \otimes {\mathcal H}_{\frac{d}{2}}) {\longrightarrow}{\mathcal H}_{d-1} \otimes {\mathcal H}_{d-2r+1} {\longrightarrow}{\mathcal H}_{2(d-r)},$$ where the first map is $\pi_{j_1 j_2 j_{12}}^{{\text{PHY}}}\otimes \pi_{j_3 j_4 j_{34}}^{{\text{PHY}}}$, and the second is $\pi^{\text{PHY}}_{j_{12} j_{34} J}$. Since the maps $\imath^{\text{PHY}}$ and $\pi^{\text{PHY}}$ (with identical subscripts) are mutually adjoint, and ${\mathcal A}$ is a self-adjoint idempotent, $$\psi = {\, \circlearrowleft }\, \zeta^* \circ \zeta,$$ and the claim follows. Indeed, it was this argument which led us to the correct guess for ${\mathcal F}$. One strategy to ensure that $\alpha_{1,r}$ does not vanish is to make it appear as the Hilbert-Schmidt norm of a nonzero operator. This prompted us to take the adjoint of $\beta_3\circ\beta_2\circ\beta_1$, which determines the first term in ${\mathcal F}$ and hence all the rest. 
{#section-10} The result of Proposition \[nonzero.alpha\] amounts to the inequality $$4 \, (dr-2r^2+3r-1) \times\frac{(d-1)!\,(2d-4r+3)!}{(d-2r+1)!\,(2d-2r+2)!} <1 \label{elemineq}$$ in the range $r \ge 3, d \ge 2r-1$. We include an elementary proof of this inequality. Let $\Gamma(r,d)$ denote the left-hand side of (\[elemineq\]). First, $$\Gamma(r,2r-1) = \frac{2}{r} < 1.$$ Let us write $$\frac{\Gamma(r,d+1)}{\Gamma(r,d)} = \frac{N}{D},$$ where $$\begin{aligned} N & = d \, (d \, r + 4 \, r - 2 \, r^2-1) \, (2d-4r+5), \\ D & = (d \, r - 2 \, r^2 + 3 \, r -1) \, (2d-2r+3) \, (d-r+2). \end{aligned}$$ Now observe that $$D-N = (r-1) \, (r-2) \, (2r-1) \, (d-2r+3) > 0,$$ hence $\Gamma(r,d+1) < \Gamma(r,d)$. This completes the proof. Unfortunately this proof gives no insight into why the inequality should be true. It seems especially fortuitous that $D-N$ should admit such a tidy factorisation. For reasons already stated, we prefer the earlier argument. Note that the Hilbert-Schmidt idea also guided the construction of the closed form syzygy in [@AC §2.14]. It can be used to provide an alternate proof of Lemma 2.3 therein. In [@AC0] and [@SB Proposition 5] one may find similar instances, where the nonvanishing of an algebraic expression produced by a tensorial construction is the key ingredient in a geometric result. A ternary example {#example.SL3} ================= Our main theorem leads to the analogous problem for $SL_N$-representations. To wit, let $V$ denote an $N$-dimensional vector space and write ${\mathbb S}_\lambda$ for the Schur module ${\mathbb S}_\lambda \, V$ (see [@FH Lecture 6]). Assume that we are given a plethysm decomposition[^1] of Schur modules $$\wedge^2 \, {\mathbb S}_\lambda \simeq \bigoplus\limits_\nu \; ({\mathbb S}_\nu \otimes {\Bbbk}^{M_\nu}). \label{decom.SLN}$$ Let $\mathcal C = \{{\mathcal C}_\nu^{(i)}: \quad 1 \le i \le M_\nu \}$ denote the associated linear combinants of a pencil of tensors in ${\mathbb S}_\lambda$. 
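Returning to the inequality (\[elemineq\]): the boundary value $\Gamma(r,2r-1)=\frac{2}{r}$, the ratio $\Gamma(r,d+1)/\Gamma(r,d)=N/D$, and the factorisation of $D-N$ are all easy to test numerically. A Python sketch (our function names):

```python
from fractions import Fraction
from math import factorial as fac

def Gamma(r, d):
    """Left-hand side of (elemineq)."""
    return 4 * (d*r - 2*r*r + 3*r - 1) * Fraction(
        fac(d - 1) * fac(2*d - 4*r + 3),
        fac(d - 2*r + 1) * fac(2*d - 2*r + 2))

def N(r, d):
    return d * (d*r + 4*r - 2*r*r - 1) * (2*d - 4*r + 5)

def D(r, d):
    return (d*r - 2*r*r + 3*r - 1) * (2*d - 2*r + 3) * (d - r + 2)

# A small sweep over the range r >= 3, d >= 2r - 1:
for r in range(3, 10):
    assert Gamma(r, 2*r - 1) == Fraction(2, r)
    for d in range(2*r - 1, 2*r + 20):
        assert Gamma(r, d) < 1
        assert Gamma(r, d + 1) * D(r, d) == Gamma(r, d) * N(r, d)
        assert D(r, d) - N(r, d) == (r-1) * (r-2) * (2*r-1) * (d-2*r+3)
```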
It is a natural problem to find a subcollection of ${\mathcal C}$ which determines the rest of them. We will now exhibit such an example in the ternary case. The symbolic formalism used below is explained in [@AC §4]. {#section-11} Assume $N=3$ and $\lambda = (3,1)$. We have a decomposition $$\wedge^2 \, {\mathbb S}_{(3,1)} \simeq \underbrace{{\mathbb S}_{(5)} \oplus {\mathbb S}_{(5,3)} \oplus {\mathbb S}_{(4,1)} \oplus {\mathbb S}_{(3,2)} \oplus {\mathbb S}_{(1,1)}}_{{\mathbf E}},$$ with projection morphisms $f_\lambda: \wedge^2 \, {\mathbb S}_{(3,1)} {\rightarrow}{\mathbb S}_\lambda$. Let $$A = (a \, b \, {\mathbf u}) \, a_{\mathbf x}^2, \quad B = (c \, d \, {\mathbf u}) \, c_{\mathbf x}^2,$$ denote two ‘generic’ forms in ${\mathbb S}_{(3,1)}$, and write ${\mathcal C}_\lambda = f_\lambda(A \wedge B)$. Then we have symbolic formulae $$\begin{aligned} {\mathcal C}_{(5)} & = (a \, b \, d) \, a_{\mathbf x}^2 \, c_{\mathbf x}^3, \\ {\mathcal C}_{(5,3)} & = (a \, b \, {\mathbf u}) \, (a \, c \, {\mathbf u}) \, (a \, d \, {\mathbf u}) \, c_{\mathbf x}^2, \\ {\mathcal C}_{(4,1)} & = (a \,b \,d) (a \, c \, {\mathbf u}) \, a_{\mathbf x}\, c_{\mathbf x}^2 -5 \, (a \,b \, c) (a \, d \, {\mathbf u}) \, a_{\mathbf x}\, c_{\mathbf x}^2, \\ {\mathcal C}_{(3,2)} & = (a \,b \,d) \, (a \,c \, {\mathbf u})^2 \, c_{\mathbf x}+ (a \,b \,c) \, (a \,c \,{\mathbf u}) \, (a \, d \, {\mathbf u}) \, c_{\mathbf x}, \\ {\mathcal C}_{(1,1)} & = (a \,b \,c) \, (a \,c \,d) \, (a \,c \, {\mathbf u}). \end{aligned}$$ There is an exact sequence of $SL_3$-representations $$0 {\rightarrow}\underbrace{\wedge^4 \, {\mathbb S}_{(3,1)}}_{\mathcal Q}{\rightarrow}{\mathbb S}_{(2)}(\wedge^2 \, {\mathbb S}_{(3,1)}) {\rightarrow}{\mathbb S}_{(2,2)}({\mathbb S}_{(3,1)}) {\rightarrow}0,$$ and, as in the binary case, the irreducible subrepresentations of ${\mathcal Q}$ correspond to the quadratic syzygies between the ${\mathcal C}_\lambda$. 
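As a quick sanity check on the displayed decomposition (our own addition, not part of the original argument), one can compare dimensions on both sides using the Weyl dimension formula for $GL_3$:

```python
# Dimension check of the plethysm decomposition of wedge^2 S_(3,1)
# for SL_3, via the Weyl dimension formula for GL(3).
def dim_gl3(l1, l2=0, l3=0):
    # dimension of the Schur module S_(l1,l2,l3) applied to C^3
    return (l1 - l2 + 1) * (l2 - l3 + 1) * (l1 - l3 + 2) // 2

d = dim_gl3(3, 1)                 # dim S_(3,1) = 15
lhs = d * (d - 1) // 2            # dim wedge^2 S_(3,1) = 105
rhs = (dim_gl3(5) + dim_gl3(5, 3) + dim_gl3(4, 1)
       + dim_gl3(3, 2) + dim_gl3(1, 1))
assert lhs == rhs == 105
```

Both sides indeed have dimension $105$, consistent with the five listed summands.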
*Either of the combinants ${\mathcal C}_{(3,2)}$ and ${\mathcal C}_{(1,1)}$ can be recovered from the set $\{ {\mathcal C}_{(5)},{\mathcal C}_{(5,3)},{\mathcal C}_{(4,1)} \}$.* The result follows from an explicit calculation involving plethysms and projection maps. Taking our cue from the binary case, we look for subrepresentations corresponding to $(5,0) + (3,2) = (8,2)$. Decomposing[^2] ${\mathcal Q}$ and ${\mathbb S}_2({\mathbf E})$ into irreducible summands, we found that they respectively contain $2$ and $7$ copies of ${\mathbb S}_{(8,2)}$. The latter come from tensor products of the summands in ${\mathbf E}$ taken two at a time; e.g., the morphism $${\mathbb S}_{(5)} \otimes {\mathbb S}_{(5,3)} {\longrightarrow}{\mathbb S}_{(8,2)}$$ is given by the formula $$a_{\mathbf x}^5 \otimes (c \, d \, {\mathbf u})^3 \, c_{\mathbf x}^2 {\rightarrow}(a \, c \, d) \, (a \, d \, {\mathbf u})^2 \, a_{\mathbf x}^2 \, c_{\mathbf x}^4.$$ Let us write $\langle {\mathcal C}_{(5)}, {\mathcal C}_{(5,3)}\rangle$ for the image of ${\mathcal C}_{(5)} \otimes {\mathcal C}_{(5,3)}$ via this morphism. Once all the seven maps have been written down symbolically, it only remains to solve a system of linear equations to find the two-dimensional space of syzygies; this was done in Maple. One conveniently chosen syzygy is the following: $$\begin{aligned} {\mathcal C}_{(5)} \, {\mathcal C}_{(3,2)} = \, & \frac{1}{7680} \, \langle {\mathcal C}_{(5)}, {\mathcal C}_{(5)} \rangle + \frac{1}{92160} \, \langle {\mathcal C}_{(5)}, {\mathcal C}_{(5,3)} \rangle - \frac{1}{5760} \, \langle {\mathcal C}_{(5)}, {\mathcal C}_{(4,1)} \rangle \\ - & \frac{1}{204800} \, \langle {\mathcal C}_{(5,3)}, {\mathcal C}_{(5,3)} \rangle - \frac{1}{51200} \, \langle {\mathcal C}_{(5,3)}, {\mathcal C}_{(4,1)} \rangle. \end{aligned} \label{C5.C32}$$ This gives a formula for ${\mathcal C}_{(3,2)}$ in terms of ${\mathcal C}_{(5)}, {\mathcal C}_{(5,3)}, {\mathcal C}_{(4,1)}$. 
There are respectively $3$ and $9$ copies of ${\mathbb S}_{(6,1)}$ in ${\mathcal Q}$ and ${\mathbb S}_2({\mathbf E})$, and the corresponding syzygies are found similarly. The following syzygy $$\begin{aligned} {\mathcal C}_{(5)} \, {\mathcal C}_{(1,1)} = \, & \frac{1}{25920} \, \langle {\mathcal C}_{(5)}, {\mathcal C}_{(5,3)} \rangle - \frac{1}{1728} \, \langle {\mathcal C}_{(5)}, {\mathcal C}_{(4,1)} \rangle - \frac{1}{864} \, \langle {\mathcal C}_{(5)}, {\mathcal C}_{(3,2)} \rangle \\ - & \frac{11}{69120} \, \langle {\mathcal C}_{(5,3)}, {\mathcal C}_{(4,1)} \rangle - \, \frac{5}{13824} \, \langle {\mathcal C}_{(5,3)}, {\mathcal C}_{(3,2)} \rangle - \frac{1}{4320} \, \langle {\mathcal C}_{(4,1)}, {\mathcal C}_{(3,2)} \rangle, \end{aligned} \label{C5.C11}$$ shows that ${\mathcal C}_{(1,1)}$ can be recovered from the rest of the combinants. {#section-12} For the record, we state the symbolic expressions which were used to define the maps above. In formula (\[C5.C32\]), they are respectively $$\begin{aligned} {\mathbb S}_{(5)} \otimes {\mathbb S}_{(5)} & \rightsquigarrow (a \,c \, {\mathbf u})^2 \, a_{\mathbf x}^3 \, c_{\mathbf x}^3, \\ {\mathbb S}_{(5)} \otimes {\mathbb S}_{(5,3)} & \rightsquigarrow (a \,c \,d) \, (a \,d \, {\mathbf u})^2 \, a_{\mathbf x}^2 \, c_{\mathbf x}^4, \\ {\mathbb S}_{(5)} \otimes {\mathbb S}_{(4,1)} & \rightsquigarrow (a \,c \, {\mathbf u}) \, (a \,d \, {\mathbf u}) \, a_{\mathbf x}^3 \, c_{\mathbf x}^3, \\ {\mathbb S}_{(5,3)} \otimes {\mathbb S}_{(5,3)} & \rightsquigarrow (a \,b \,d)^2 \, (a \,b \, {\mathbf u}) \, (a \,d \, {\mathbf u}) \, a_{\mathbf x}\, c_{\mathbf x}^5, \\ {\mathbb S}_{(5,3)} \otimes {\mathbb S}_{(4,1)} & \rightsquigarrow (a \,b \,d) \, (a \,b \, {\mathbf u})^2 \, a_{\mathbf x}^2 \, c_{\mathbf x}^4, \end{aligned}$$ where the target of each map is ${\mathbb S}_{(8,2)}$. 
In (\[C5.C11\]), they are respectively $$\begin{aligned} {\mathbb S}_{(5)} \otimes {\mathbb S}_{(5,3)} & \rightsquigarrow (a \,c \,d)^2 \, (a \,d \, {\mathbf u}) \, a_{\mathbf x}^2 \, c_{\mathbf x}^3, \\ {\mathbb S}_{(5)} \otimes {\mathbb S}_{(4,1)} & \rightsquigarrow (a \,c \,d) \, (a \,c \, {\mathbf u}) \, a_{\mathbf x}^3 \, c_{\mathbf x}^2, \\ {\mathbb S}_{(5)} \otimes {\mathbb S}_{(3,2)} & \rightsquigarrow (a \,c \,d) (a \,d \, {\mathbf u}) \, a_{\mathbf x}^3 \, c_{\mathbf x}^2, \\ {\mathbb S}_{(5,3)} \otimes {\mathbb S}_{(4,1)} & \rightsquigarrow (a \,b \,c) \, (a \,b \,d) \, (a \,b \, {\mathbf u}) \, a_{\mathbf x}^2 \, c_{\mathbf x}^3, \\ {\mathbb S}_{(5,3)} \otimes {\mathbb S}_{(3,2)} & \rightsquigarrow (a \,b \,d)^2 \, (a \,b \, {\mathbf u}) \, a_{\mathbf x}^2 \, c_{\mathbf x}^3, \\ {\mathbb S}_{(4,1)} \otimes {\mathbb S}_{(3,2)} & \rightsquigarrow (a \,b \,d) \, (a \,d \, {\mathbf u}) \, a_{\mathbf x}^2 \, c_{\mathbf x}^3, \end{aligned}$$ with target ${\mathbb S}_{(6,1)}$. [Acknowledgements:]{} The second author was partly funded by a discovery grant from NSERC. We are thankful to John Stembridge (author of the ‘SF’ package for Maple). The Göttinger Digitalisierungszentrum ([**GDZ**]{}), the University of Michigan Historical Library ([**MiH**]{}) as well as Project Gutenberg ([**PG**]{}) have been useful in accessing some classical references. A. Abdesselam and J. Chipalkatti. The bipartite Brill-Gordan locus and angular momentum. , vol. 11, no. 3, pp. 341–370, 2006. A. Abdesselam and J. Chipalkatti. The higher transvectants are redundant. Preprint arXiv:0801.1533v1 \[math.AG\], 2008. J. Chipalkatti. On the invariant theory of the Bézoutiant. , vol. 47, no. 2, pp. 397–417, 2006. A. Clebsch. *Theorie der binären algebraischen Formen*. Teubner, Leipzig, 1872 ([**MiH**]{}). I. Dolgachev. *Lectures on Invariant Theory*. London Mathematical Society Lecture Notes No. 296, Cambridge University Press, 2003. W. Fulton and J. Harris. *Representation Theory: A First Course*. Graduate Texts in Mathematics. Springer–Verlag, 1991. O. Glenn. *A Treatise on the Theory of Invariants*. 
Ginn and Co., Boston, 1915 ([**PG**]{}). P. Gordan. Ueber Combinanten. *Mathematische Annalen*, vol. 5, pp. 95–122, 1872 ([**GDZ**]{}). J. H. Grace and A. Young. *The Algebra of Invariants*. Reprinted by Chelsea Publishing Co., New York, 1962 ([**MiH**]{}). I. G. Macdonald. *Symmetric Functions and Hall Polynomials*, 2nd Ed., Oxford Mathematical Monographs, Clarendon Press, 1995. M. Meulien. Sur les invariants des pinceaux de formes quintiques binaires. , vol. 54, pp. 21–51, 2004. P. E. Newstead. Covariants of pencils of binary cubics. , vol. 91, no. 3-4, pp. 181–183, 1981/82. P. Olver. *Classical Invariant Theory*. London Mathematical Society Student Texts, Cambridge University Press, 1999. W. Shenton. Linear combinants of systems of binary forms, with the syzygies of the second degree connecting them. , vol. 37, no. 3, pp. 247–271, 1915. N. I. Shepherd-Barron. The rationality of some moduli spaces of plane curves. , vol. 67, pp. 51–88, 1988. B. Sturmfels. *Algorithms in Invariant Theory*. Texts and Monographs in Symbolic Computation, Springer–Verlag, 1993. C. T. C. Wall. Pencils of binary quartics. , vol. 99, pp. 197–217, 1998. [^1]: To the best of our knowledge, no explicit formula for the multiplicities $M_\nu$ is known for an arbitrary $\lambda$. See [@MacDonald Ch. I.8] for some special cases. [^2]: The full decompositions are very lengthy, and it seems needless to list them here. All plethysm decompositions throughout this example were calculated using the ‘SF’ (Symmetric Functions) package for Maple written by John Stembridge.
--- abstract: 'We consider the interaction-driven Mott transition at zero temperature from the viewpoint of microscopic Fermi liquid theory. To this end, we derive an exact expression for the Landau parameter within the dynamical mean-field theory (DMFT). At the Mott transition both the symmetric and the anti-symmetric Landau parameters diverge. The vanishing compressibility at the Mott transition directly implies the divergence of the forward scattering amplitude in the charge sector, which connects the proximity of the Mott phase to a tendency towards phase separation. We verify the expected behavior of the Landau parameters in a DMFT application at finite temperature. Exact conservation laws and the Ward identity are crucial to capture vertex divergences related to the Mott transition. We furthermore generalize Leggett’s formula for the static susceptibility of the Fermi liquid, expressing the static response of individual electronic states through the dynamic response and a remainder. In the charge sector the remainder vanishes at the Mott transition; the static charge response of the Hubbard bands is thus given by the dynamic response.' author: - Friedrich Krien - 'Erik G. C. P. van Loon' - 'Mikhail I. Katsnelson' - 'Alexander I. Lichtenstein' - Massimo Capone bibliography: - 'main.bib' title: 'Two-particle Fermi liquid parameters at the Mott transition: Vertex divergences, Landau parameters, and incoherent response in dynamical mean-field theory' --- Introduction ============ The Landau theory of Fermi liquids provides a fundamental phenomenological description of metals in their normal state [@Landau56]. The theory accounts for (strong) interactions between the original fermions by introducing the concept of quasi-particles [@Woelfle18], effective low-energy fermionic excitations which are characterized by an effective mass resulting from the interactions, and by residual effective interactions among them. 
Fermi liquid theory is applicable as long as the interacting system is continuously connected to the non-interacting fermion gas, that is, no phase transition occurs. The theory makes general statements about the physical properties of Fermi liquids, which can be directly connected with several experimental properties. However, the values of the quasi-particle effective mass and the Landau parameters $\mathfrak{f}$ describing the residual interactions between quasi-particles must be either derived from a microscopic theory of a well-defined model, or extracted from experiments. In this work we focus on the former strategy, addressing the Landau theory for the Mott-Hubbard transition as described by the single-band Hubbard model. A semi-phenomenological way to obtain non-perturbative numerical results for the Landau parameter in variational Monte Carlo studies is to fit the energy of low-lying particle-hole excitations with the Fermi liquid energy functional [@Kwon94; @Lee18]. On the other hand, analytical expressions of the Landau parameter from first principles are frequently obtained by means of diagrammatic perturbation theory around the non-interacting limit, see, for example, [@Fuseya00; @Frigeri02; @Chubukov18]. However, perturbation theory cannot capture the breakdown of the Fermi liquid picture at an interaction-driven metal-to-insulator transition. A way to derive a microscopic Landau theory is to solve the Hubbard model using the variational Gutzwiller approximation [@Gutzwiller63], or the equivalent Kotliar-Ruckenstein slave-boson mean-field [@Kotliar86; @Li94]. These methods describe a strongly renormalized, almost localized Fermi liquid and its disappearance at the Mott transition [@Vollhardt84]. 
The behavior of the Landau parameters close to the metal-insulator transition is especially interesting: At the critical interaction the symmetric Landau parameter $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ diverges [@Vollhardt84; @Fresard12], in correspondence with the charge localization, whereas the anti-symmetric one $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$ remains finite. On the other hand, when a Landau parameter $\mathfrak{f}$ approaches the value $-1$, in general [@Kiselev17] a Pomeranchuk instability occurs, which can be favored decisively by non-local interactions [@Lhoutellier15]. The symmetric Landau parameter of a multi-orbital Hubbard model in the so-called Hund’s metal regime has recently been calculated using the slave-spin method [@deMedici05; @deMedici17], which predicts a phase separation upon doping below the critical interaction of the metal-insulator transition [@deMedici17-2]. In contrast, in the single-orbital Hubbard model the phase separation has been identified as an instability of the doped insulator [@Nourafkan18]. The development of the dynamical mean-field theory [@Metzner89; @Georges96] (DMFT) has widened our understanding of the Mott metal-insulator transition in the Hubbard model, extending the previous results within a non-perturbative and conserving approach. DMFT describes the evolution from the metal to the Mott insulator in terms of the reduction and the vanishing (in the insulating phase) of the quasi-particle weight $Z$, which within DMFT coincides with the inverse of the effective mass enhancement $Z = m/m^*$, one of the main parameters of Fermi liquid theory. As a matter of fact $Z$, which can be obtained from the momentum-independent DMFT self-energy, is the key quantity in DMFT investigations of metal-insulator transitions and related phenomena. 
However, while a Fermi liquid picture of this Mott metal-insulator transition was developed [@Kotliar99; @Kotliar00] in terms of $Z$, surprisingly little is known about the Landau parameters in DMFT, despite their central role for the theory of Fermi liquids. A notable exception is Ref. [@Capone02], where a Landau approach has been used to estimate the Cooper instability in a multi-band Hubbard model. This work fills this gap with a thorough investigation of the Landau parameters in the single-band Hubbard model with a special focus on the approach to the interaction-driven Mott-Hubbard transition. The Landau parameters have a crucial physical significance, as they may be interpreted as the residual interaction between the quasi-particles. From a technical point of view, the interaction character implies that they are two-particle quantities. In particular, as we will detail in the following, in a microscopic Fermi liquid theory the Landau parameters are given by the dynamic limit of the two-particle vertex function, $\mathfrak{f}\propto \presuper{0\!}FZ^2$. This vertex corresponds to the forward-scattering limit of vanishing momentum and frequency transfer, ${\ensuremath{\mathbf{q}}}\rightarrow\mathbf{0}$ and $\omega\rightarrow0$. The static limit $\presuper{\infty\!}F$, where the zero frequency limit is taken before sending ${\ensuremath{\mathbf{q}}}$ to zero (also called the forward scattering amplitude), accounts for all forward scatterings of particles and holes, and it is therefore the quantity responsible for actual physical instabilities. On the other hand the dynamic limit $\presuper{0\!}F$, where the limits are taken in the opposite order, describes all forward scatterings *except* those between quasi-particles and quasi-holes. In general it is very difficult to calculate the vertex function, due to its dependence on three real frequencies and momenta. 
In an isotropic system, such as $^3$He, the Fermi liquid equations simplify and the Landau parameter can be expanded in Legendre polynomials. This simplification leads to several prominent Fermi liquid relations for the isotropic Fermi liquid, such as Leggett’s formula for the static susceptibility [@Legget65; @Kiselev17; @Wu18; @Chubukov18]. On the other hand, in a spatially inhomogeneous system like a lattice model the Landau parameter acquires a rich momentum dependence which emerges already at the second order of perturbation theory [@Fuseya00; @Frigeri02]. This may appear to be a serious obstacle to computing the Landau parameter in the non-perturbative DMFT, which is defined for lattice systems. As will be shown in this work, this is not the case: the Landau parameter can be calculated easily in DMFT. As a matter of fact, significant progress has been made more recently with respect to the direct calculation of the vertex function. DMFT maps the lattice model onto an Anderson impurity model whose hybridization function must be determined self-consistently as we briefly recall below. Therefore, the DMFT evaluation of the vertex function requires computing four-point correlation functions of the auxiliary Anderson impurity model. To perform this measurement, excited states must be taken into account even at zero temperature, which limits the applicability of the exact diagonalization technique [@Valli15; @Pomerol]. However, continuous-time Quantum Monte Carlo (CTQMC) solvers [@Rubtsov05; @Werner06; @Gull11] with improved estimators [@Hafermann12; @Hafermann14; @Gunacker16] are in widespread use today, allowing the measurement of impurity vertices at finite temperature with very high accuracy. In the calculation of the DMFT susceptibility the numerical error can be reduced even further, by separating it into local and non-local contributions [@Pruschke96; @Rubtsov12; @vanLoon15; @vanLoon16] and by taking vertex asymptotics into account [@Kunes11; @Wentzell16]. 
These improvements have given rise to diagrammatic extensions of DMFT [@Rohringer17] and opened a window into the two-particle level of its impurity model, which led to the discovery of vertex divergences. Some of these occur at the Mott transition [@Rohringer12; @vanLoon18], but the two-particle self-energy $\gamma$ (irreducible vertex) also shows divergences in its vicinity [@Schaefer13] and even in the atomic limit [@Thunstroem18], which have been related to the multi-valuedness of the Luttinger-Ward functional [@Kozik15; @Gunnarsson17; @Vucicevic18]. A natural question is raised by the evidence of multiple divergences of vertices, namely whether some of them can be directly related to the Mott transition and explain its features. For example, it was hypothesized in Ref. [@Chitra01] that in the Mott phase there may exist a divergent scattering amplitude in the charge sector. If this prediction were confirmed in a microscopic scheme which properly accounts for the Mott transition, it would strengthen the case for the somewhat counter-intuitive tendency towards phase separation (associated with a divergent compressibility) close to a Mott insulator, where the same compressibility must vanish. In the two-dimensional Hubbard model phase separation close to the Mott transition has been widely debated [@Furukawa91; @Furukawa93; @Cosentini98; @Sorella15] also as a possible source of charge-ordering instabilities [@Caprara17] or even superconductivity [@Grilli95; @Emery93]. A finite-temperature divergent compressibility in DMFT has been suggested to underlie the $\alpha-\gamma$ transition in cerium [@Kotliar02]. One may notice that a divergence of a vertex function is also found in the variational approaches to the metal-insulator transition mentioned above, and it seems plausible that the divergence of the symmetric Landau parameter $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ at the Mott transition predicted by these methods could also occur in DMFT. 
However, this merely implies the divergence of the dynamic vertex $\presuper{0\!}F^{\ensuremath{\text{ch}}}$ and it does not straightforwardly imply the divergence predicted in Ref. [@Chitra01] for the static vertex $\presuper{\infty\!}F^{\ensuremath{\text{ch}}}$. Furthermore, a divergence of the latter in the Mott insulator must somehow be reconciled with the vanishing compressibility of this phase. Another question is how the divergences of the vertex of the impurity model obtained in DMFT are connected with physical instabilities of the lattice model. Despite a divergence of the two-particle self-energy $\gamma$, the impurity vertex function $f$ may remain finite. In turn, a divergence of $f$ does not always imply a divergence in the DMFT approximation to the lattice vertex function $F$. A basic example is the divergence of the impurity spin susceptibility in the Mott phase at zero temperature, due to the free local moment of the paramagnetic Mott insulator. This does not necessarily lead to a ferromagnetic instability in DMFT, which would be associated with a divergent susceptibility at zero momentum, since the latter is typically cut off by the effective exchange $J=\tilde{t}^{\,2}/U$ [@Rozenberg94; @Georges96]. In fact, as will be shown in this work, even the divergence of the lattice forward scattering vertex $\presuper{\infty\!}F$ does not always imply the divergence of a susceptibility. In this work we address the questions raised above, by letting the Fermi liquid theory itself guide our investigation. After we briefly recall the DMFT approximation to the Hubbard model in Sec. \[sec:dmftapproximation\] we recapitulate in detail the main stepping stones of the Fermi liquid theory in Sec. \[sec:flt0\]: The definition of the Landau parameter, the Boltzmann equation, and the Ward identities. We then show in Sec. \[sec:dmft\] how the central two-particle Fermi liquid parameters can be recovered in DMFT calculations. 
Furthermore, we derive a generalized form of Leggett’s decomposition of the static susceptibility of a Fermi liquid [@Legget65], which allows one to observe the static and dynamic response of individual electronic states separately. In Sec. \[sec:coherent\] we discuss the physical meaning of this response function and make exact statements about the behavior of the Fermi liquid parameters near the interaction-driven Mott transition at zero temperature. DMFT calculations at finite temperature serve to validate the expected scenario for the Mott transition in Sec. \[sec:mott\]. We summarize and discuss our main results in Sec. \[sec:discussion\] and close with the conclusions in Sec. \[sec:conclusions\]. The Hubbard model in DMFT approximation {#sec:dmftapproximation} ======================================= We consider the single-band Hubbard model $$\begin{aligned} H = &-\sum_{\langle ij\rangle\sigma}\tilde{t}_{ij} c^\dagger_{i\sigma}c^{}_{j\sigma}+ U\sum_{i} n_{i{\uparrow}} n_{i{\downarrow}},\label{eq:hubbard}\end{aligned}$$ where $\tilde{t}_{ij}$ is the nearest neighbor hopping between lattice sites $i,j$. We use the hopping amplitude $\tilde{t}=1$ as the unit of energy. $c^{},c^\dagger$ are the annihilation and creation operators, $\sigma={\uparrow},{\downarrow}$ the spin index. $U$ is the Hubbard repulsion between the densities $n_{\sigma}=c^\dagger_{\sigma}c^{}_{\sigma}$. Within dynamical mean-field theory the Hubbard model \[eq:hubbard\] is mapped to an auxiliary Anderson impurity model (AIM) with a local self-energy [@Georges96]. We denote by $g$ and $G$ the Green’s function of the AIM and of the Hubbard model, respectively. Starting from an initial guess, the parameters of the AIM are adjusted self-consistently until the condition $$\begin{aligned} G_{\text{loc}}=g,\label{eq:dmftsc}\end{aligned}$$ is satisfied. Here, $G_{\text{loc}}$ indicates the local part of $G$. 
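The structure of the self-consistency cycle can be sketched in a few lines. The toy loop below is our own illustration, not the calculation performed here: it uses the Bethe lattice (for which the condition $G_{\text{loc}}=g$ reduces to a hybridization update $\Delta=\tilde{t}^{\,2}g$) and replaces the interacting impurity solver by the trivial choice $\Sigma=0$; a real calculation would integrate over the lattice density of states and use a nontrivial AIM solver.

```python
import numpy as np

# Toy version of the DMFT cycle: Bethe lattice (infinite coordination),
# where G_loc = g reduces to Delta = t^2 * g, and a trivial "solver"
# with Sigma = 0 standing in for the interacting Anderson impurity model.
beta, t = 10.0, 1.0
iw = 1j * np.pi * (2 * np.arange(256) + 1) / beta   # fermionic Matsubara grid

Delta = np.zeros_like(iw)                           # initial guess
for _ in range(200):
    g = 1.0 / (iw - Delta)                          # impurity Green's function
    Delta_new = t**2 * g                            # self-consistency update
    if np.max(np.abs(Delta_new - Delta)) < 1e-12:   # converged?
        break
    Delta = 0.5 * Delta + 0.5 * Delta_new           # linear mixing

# the fixed point obeys g = 1/(iw - t^2 g), the local Green's function
# of the semicircular density of states
assert np.allclose(g, 1.0 / (iw - t**2 * g))
```

With an interacting solver, the impurity Green's function would instead read $g=1/(i\omega_n-\Delta-\Sigma)$, and the self-energy $\Sigma$ would be fed back through the same loop.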
The evaluation of the local component of $G$ requires an energy integration over the non-interacting density of states of the chosen lattice. This is indeed the only dependence of the results on the original lattice. For this reason, in this work we consider a triangular lattice, which is taken as a representative of a generic lattice where the density of states has no singularity at the Fermi level or special symmetries, like the particle-hole symmetry of the square lattice. DMFT emerges as an ideal candidate to derive a microscopic Landau theory for the Mott transition because it provides a complete and simple picture of the Mott transition and is a thermodynamically consistent approximation. This guarantees that one observes the same divergences of response functions at the one- and two-particle level simultaneously. Furthermore, the local approximation of DMFT leads to a simplification of two-particle vertices which helps our analysis without losing the key physics of Mott localization. We come back to these points in Sec. \[sec:dmft\]. We stress that in this work we limit ourselves to the DMFT picture of a Mott transition [@Georges96] which is exact in the limit of infinite coordination and neglects non-local correlations. The latter can lead to the opening of a correlation gap at small to intermediate interaction in the Hubbard model on the square lattice [@Schaefer15; @vanLoon18-2], which will not be considered here. Fermi liquid theory at $T=0$ {#sec:flt0} ============================ In this section we recollect several cornerstones of the microscopic foundations of the Landau Fermi liquid theory. To this end, we introduce the *causal* Green’s function $G^c_{{\ensuremath{\mathbf{k}}}\sigma}(t-t')=-\imath\langle T_t c_{{\ensuremath{\mathbf{k}}}\sigma}(t)c^\dagger_{{\ensuremath{\mathbf{k}}}\sigma}(t')\rangle$, which is used in perturbation theory for real times $t,t'$ [@Noziere97], where $T_t$ is the time-ordering operator. 
The spin label $\sigma$ will be dropped where unambiguous. The frequency transform of this function can be expressed in the following way \[cf. Appendix \[app:gf\]\], $$\begin{aligned} G^c_{{\ensuremath{\mathbf{k}}}\nu}=& n_f(-\nu)G^r_{{\ensuremath{\mathbf{k}}}\nu}+n_f(\nu) G^a_{{\ensuremath{\mathbf{k}}}\nu}.\label{eq:gcgr}\end{aligned}$$ Here, $\nu$ is the real frequency, $n_f(\nu)=(1+e^{\beta\nu})^{-1}$ is the Fermi function, $G^r$ and $G^a$ are the retarded and advanced Green’s functions. The latter are analytical in the upper/lower complex half-plane, respectively, whereas $G^c$ itself is not analytical in either half-plane. At zero temperature $T={\beta}^{-1}\rightarrow0$ the Fermi function becomes a Heaviside step-function, $n_f(\nu)\rightarrow\theta(-\nu)$. Therefore, $$\begin{aligned} G^{c}_{{\ensuremath{\mathbf{k}}}\nu} =&\begin{cases} G^r_{{\ensuremath{\mathbf{k}}}\nu} & \text{for}\;\nu>0, \\ G^a_{{\ensuremath{\mathbf{k}}}\nu} & \text{for}\;\nu\leq0, \end{cases} \;\;\;\; T=0.\label{eq:gct0}\end{aligned}$$ In this case the causal Green’s function is strictly particle-like (retarded) for $\nu>0$ and hole-like (advanced) for $\nu\leq0$. At finite temperature $G^c$ in Eq. \[eq:gcgr\] attains an admixture of hole(particle)-like components for $\nu>0$ ($\nu\leq0$), due to the thermal softening of the Fermi function around the Fermi level $\nu=0$. Here we focus on the zero temperature limit in Eq. \[eq:gct0\]. Fermi-liquid Green’s function ----------------------------- The central assumption of Fermi liquid theory is that even in the presence of an interaction the Green’s function has a simple structure, with a pole of weight $Z_{\ensuremath{\mathbf{k}}}$ at the Fermi level. 
In the neighborhood of the Fermi momentum, ${\ensuremath{\mathbf{k}}}\approx{\ensuremath{\mathbf{k}}}_F$, one may write the Green’s function as $$\begin{aligned} G^c_{{\ensuremath{\mathbf{k}}}\nu}=&\frac{Z_{\ensuremath{\mathbf{k}}}}{\nu-\tilde{\varepsilon}_{{\ensuremath{\mathbf{k}}}}+\mu+\imath\eta}+G^{c,\text{inc}}_{{\ensuremath{\mathbf{k}}}\nu}.\label{eq:gffl}\end{aligned}$$ Here, $\tilde{\varepsilon}_{\ensuremath{\mathbf{k}}}$ is the renormalized (quasi-particle) dispersion, $\mu$ is the chemical potential. $G^{c,\text{inc}}$ is an incoherent background, by assumption a smooth function of ${\ensuremath{\mathbf{k}}}$ and $\nu$. $\eta=0^\pm$ is an infinitesimal number. The first term can be obtained from the generic expression of the Green’s function as a function of the self-energy $\Sigma_{{\ensuremath{\mathbf{k}}}}(\nu)$ by expanding the latter around the Fermi level. This defines the quasi-particle weight, $$\begin{aligned} Z_{\ensuremath{\mathbf{k}}}^{-1} = 1 - \left.\frac{\partial\Re\Sigma_{\ensuremath{\mathbf{k}}}(\nu)}{\partial\nu}\right|_{\nu=0},\label{eq:qpweight}\end{aligned}$$ and the quasi-particle dispersion, $$\begin{aligned} \tilde{\varepsilon}_{\ensuremath{\mathbf{k}}}-\mu=Z_{\ensuremath{\mathbf{k}}}[\varepsilon_{{\ensuremath{\mathbf{k}}}}-\mu+\Re\Sigma_{{\ensuremath{\mathbf{k}}}}(0)].\label{eq:qpdispersion}\end{aligned}$$ In combination with Eq. \[eq:gct0\] one sees that $G^c_{{\ensuremath{\mathbf{k}}}\nu}$ has a pole of weight $Z_{\ensuremath{\mathbf{k}}}$ in the lower complex half-plane ($\eta=0^+$, retarded) when ${\ensuremath{\mathbf{k}}}$ lies outside of the Fermi surface. This pole then represents a quasi-particle. On the other hand, when ${\ensuremath{\mathbf{k}}}$ lies within or on the Fermi surface, the pole is in the upper half-plane ($\eta=0^-$, advanced), representing a quasi-hole. The label $c$ denoting the causal Green’s function $G^c$ will be dropped in the remainder of this section. 
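The quasi-particle weight defined above is straightforward to evaluate once $\Re\Sigma$ is known. The snippet below is purely illustrative (the model self-energy is made up for this example, not a DMFT result): its slope at the Fermi level is $-\alpha$, so the definition gives $Z=1/(1+\alpha)$.

```python
import numpy as np

# Illustration of Z^{-1} = 1 - d ReSigma/d nu at nu = 0, for a
# hypothetical model self-energy with slope -alpha at the Fermi level.
alpha, cutoff = 3.0, 2.0

def re_sigma(nu):
    return -alpha * nu / (1.0 + (nu / cutoff)**2)

h = 1e-6
slope = (re_sigma(h) - re_sigma(-h)) / (2 * h)   # central difference at nu = 0
Z = 1.0 / (1.0 - slope)

assert abs(Z - 1.0 / (1.0 + alpha)) < 1e-9       # Z = 1/(1 + alpha) = 0.25
```

The same finite-difference recipe applies to a numerically tabulated $\Re\Sigma_{\mathbf k}(\nu)$; in Matsubara-based DMFT calculations the slope is instead commonly estimated from the lowest imaginary frequencies.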
Discontinuity of the bubble {#sec:flt0:disc} --------------------------- The formal derivation of Fermi liquid theory following Landau, Nozières and Luttinger [@Landau80; @Noziere97; @Abrikosov75; @Noziere62-1; @Noziere62-2] is obtained from the two-particle level, by analysis of the analytical structure of the particle-hole spectrum. The crucial point is that the pole structure of the causal Fermi liquid Green’s function \[eq:gffl\] leads to the counter-intuitive situation that the limits ${\ensuremath{\mathbf{q}}}\rightarrow{\ensuremath{\mathbf{0}}}$ and $\omega\rightarrow0$ of the bubble $G^2_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}},\omega)=G_{{\ensuremath{\mathbf{k}}}\nu}G_{{\ensuremath{\mathbf{k}}}+{\ensuremath{\mathbf{q}}},\nu+\omega}$ do not commute: Taking the homogeneous limit ${\ensuremath{\mathbf{q}}}\rightarrow{\ensuremath{\mathbf{0}}}$ first, $\eta$ in Eq. \[eq:gffl\] has the same sign for both Green’s functions. Therefore, $G_{{\ensuremath{\mathbf{k}}}\nu}$ and $G_{{\ensuremath{\mathbf{k}}},\nu+\omega}$ have their poles in the same complex half-plane. These poles either represent two quasi-holes or two quasi-particles with the same momentum ${\ensuremath{\mathbf{k}}}$, but never a quasi-particle-hole pair. Taking the limit $\omega\rightarrow0$ subsequently does not change this situation. However, when taking the limit $\omega\rightarrow0$ first, a peculiarity arises at the Fermi momentum ${\ensuremath{\mathbf{k}}}={\ensuremath{\mathbf{k}}}_F$: The pole of $G_{{\ensuremath{\mathbf{k}}}_F,\nu}$ represents a quasi-hole ($\eta=0^-$), whereas the pole of $G_{{\ensuremath{\mathbf{k}}}_F+{\ensuremath{\mathbf{q}}},\nu}$ may describe a quasi-hole or a quasi-particle, depending on whether ${\ensuremath{\mathbf{k}}}_F+{\ensuremath{\mathbf{q}}}$ lies within/on or outside of the Fermi surface, respectively. As a consequence, in the limit ${\ensuremath{\mathbf{q}}}\rightarrow\mathbf{0}$ one may still be left with a quasi-particle-hole pair. 
These distinct limits of the bubble are defined as, $$\begin{aligned} \presuper{\infty\!}G^2_{{\ensuremath{\mathbf{k}}}\nu}\equiv&\lim\limits_{{\ensuremath{\mathbf{q}}}\rightarrow0}\lim\limits_{\omega\rightarrow0}G_{{\ensuremath{\mathbf{k}}}\nu}G_{{\ensuremath{\mathbf{k}}}+{\ensuremath{\mathbf{q}}},\nu+\omega},\\ \presuper{0\!}G^2_{{\ensuremath{\mathbf{k}}}\nu}\equiv&\lim\limits_{\omega\rightarrow0}\lim\limits_{{\ensuremath{\mathbf{q}}}\rightarrow0}G_{{\ensuremath{\mathbf{k}}}\nu}G_{{\ensuremath{\mathbf{k}}}+{\ensuremath{\mathbf{q}}},\nu+\omega},\label{eq:bubblelimits}\end{aligned}$$ where the left superscript of $\presuper{\mathfrak{r}}G^2$ denotes the ratio $\mathfrak{r}=|{\ensuremath{\mathbf{q}}}|/\omega$. We will refer to $\mathfrak{r}=\infty$ and $\mathfrak{r}=0$ in the following as the static homogeneous and the dynamic homogeneous limit, respectively (abbreviated to the static and the dynamic limit where unambiguous). One further defines the discontinuity of the bubble as the difference between the static and the dynamic limit, $$\begin{aligned} R_{{\ensuremath{\mathbf{k}}}\nu}=&\presuper{\infty\!}G^2_{{\ensuremath{\mathbf{k}}}\nu}-\presuper{0\!}G^2_{{\ensuremath{\mathbf{k}}}\nu}=-2\pi\imath Z^2_{{\ensuremath{\mathbf{k}}}}\delta(\nu)\delta(\tilde{\varepsilon}_{\ensuremath{\mathbf{k}}}-\mu),\label{eq:rdef}\end{aligned}$$ which has poles at ${\ensuremath{\mathbf{k}}}={\ensuremath{\mathbf{k}}}_F$ and $\nu=0$ and is zero elsewhere. The explicit expression for $R$ is derived in Ref. [@Noziere97]; it is *not* restricted to isotropic systems. Vertex function and Landau parameter {#sec:landauparm} ------------------------------------ We introduce the vertex function $F^\alpha_{kk'q}$, where we use the short notation $k=({\ensuremath{\mathbf{k}}},\nu)$ and $q=({\ensuremath{\mathbf{q}}},\omega)$. 
The label $\alpha$ denotes the charge ($\alpha={\ensuremath{\text{ch}}}$) and spin ($\alpha=x,y,z={\ensuremath{\text{sp}}}$) channels, where the latter can be combined into a single index due to rotational invariance. Fig. \[fig:3leg\] c) shows a diagrammatic representation of $F$ and the convention for its labels $k, k'$, and $q$ used in this text. The vertex function $F$ is constructed from the bubble $G^2$ via the Bethe-Salpeter equation, $$\begin{aligned} \presuper{\mathfrak{r}}F^\alpha_{kk'}=\Gamma^\alpha_{kk'}+\int_{k''}\Gamma^\alpha_{kk''}\presuper{\mathfrak{r}}G^2_{k''}\presuper{\mathfrak{r}}F^\alpha_{k''k'}.\label{eq:bselimits}\end{aligned}$$ Here, $\Gamma^\alpha_{kk'q}$ is the two-particle self-energy, which is irreducible with respect to the bubble $G^2$. The integral over $k''$ implies normalized summation/integration over ${\ensuremath{\mathbf{k}}}''$ and $\nu''$. For the Hubbard model  we have $\int_{k}=\frac{1}{2\pi N}\sum_{{\ensuremath{\mathbf{k}}}}\int_{-\infty}^{+\infty}d\nu$, with $N$ the number of lattice sites. In Eq.  the double limit $q=({\ensuremath{\mathbf{q}}},\omega)\rightarrow0$ has already been taken. In fact, since $F$ is constructed from the bubble $G^2$, it inherits the ambiguity of this limit. This means that $F$ and $G^2$ in Eq.  both carry a label $\mathfrak{r}$, indicating that either the static or dynamic limit is taken. On the other hand, in the Fermi liquid the limit $q\rightarrow0$ of the two-particle self-energy $\Gamma$ is *not* ambiguous (see, for example, Ref. [@Noziere97]), since by construction $\Gamma$ is free of the problematic bubble insertions $G^2$. Hence, the homogeneous limit of the two-particle self-energy does not inherit a label $\mathfrak{r}$, $$\begin{aligned} \presuper{\mathfrak{r}}\Gamma_{kk'}\equiv\Gamma_{kk'}.\end{aligned}$$ As a consequence, $\Gamma$ can be eliminated from Eq.
, leading to an important exact relation between the static and dynamic limits of the vertex function, $$\begin{aligned} \presuper{\infty\!}F^\alpha_{kk'}=\presuper{0\!}F^\alpha_{kk'}+\int_{k''}\presuper{0\!}F^\alpha_{kk''}R_{k''}\presuper{\infty\!}F^\alpha_{k''k'}, \label{eq:bselimits_inv}\end{aligned}$$ where $R$ is the discontinuity of the bubble defined in Eq. . We comment on the physical significance of the limits $\presuper{\infty\!}F$ and $\presuper{0\!}F$ of the vertex function and of Eq. : The static limit $\presuper{\mathfrak{r}=\infty}F$, the so-called forward scattering amplitude, describes the physical situation of small momentum $\delta{\ensuremath{\mathbf{q}}}\approx{\ensuremath{\mathbf{0}}}$ and strictly vanishing energy transfer $\omega=0$. This includes, but is not limited to, the scattering events between quasi-particles and quasi-holes that leave both of them on the Fermi surface. On the other hand, the scatterings associated with $\presuper{\mathfrak{r}=0}F$ imply the situation of small energy $\delta\omega\approx0$ and vanishing momentum transfer ${\ensuremath{\mathbf{q}}}={\ensuremath{\mathbf{0}}}$. As explained in Sec. \[sec:flt0:disc\], the peculiarity of this limit is precisely that it does *not* account for quasi-particle-hole contributions. Hence, $\presuper{0}F$ describes all forward scatterings \[such as incoherent-on-incoherent or coherent-on-incoherent scatterings\] except the ones between quasi-particles and quasi-holes. The second term on the right-hand-side of Eq.  therefore represents the contribution of quasi-particle-hole scatterings to the static limit $\presuper{\infty\!}F$. The latter is recovered from the dynamic limit $\presuper{0\!}F$ by taking repeated scatterings of this type into account, while $\presuper{0\!}F$ assumes the role of an effective quasi-particle interaction.
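The elimination of $\Gamma$ is pure linear algebra and can be verified on toy matrices. In the sketch below (our own; the random matrices merely stand in for the $k,k'$ structure and carry no physical content), the Bethe-Salpeter equation is solved for both limits of the bubble, and the resulting vertices satisfy the exact relation with $R$ equal to the difference of the bubbles.

```python
import numpy as np

# Toy-matrix check (our own; entries are arbitrary) of the elimination of Gamma:
# if  F_r = Gamma + Gamma G2_r F_r  holds for both limits r, then
# F_inf = F_dyn + F_dyn R F_inf  with  R = G2_inf - G2_dyn  follows exactly.
rng = np.random.default_rng(0)
n = 6
Gamma = 0.1 * rng.standard_normal((n, n))   # two-particle self-energy (toy)
G2_inf = np.diag(rng.standard_normal(n))    # static bubble (toy, diagonal)
G2_dyn = np.diag(rng.standard_normal(n))    # dynamic bubble (toy)
I = np.eye(n)

def solve_bse(G2):
    # F = Gamma + Gamma G2 F  <=>  (1 - Gamma G2) F = Gamma
    return np.linalg.solve(I - Gamma @ G2, Gamma)

F_inf, F_dyn = solve_bse(G2_inf), solve_bse(G2_dyn)
R = G2_inf - G2_dyn
```

The identity holds for any invertible choice of the toy inputs, which is why the relation between the two limits of $F$ is exact and independent of the details of $\Gamma$.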
One defines $\mathfrak{f}^\alpha_{kk'}\propto Z_{\ensuremath{\mathbf{k}}}Z_{{\ensuremath{\mathbf{k}}}'}\presuper{0\!}F^\alpha_{kk'}$, the Landau parameter, where $Z_{\ensuremath{\mathbf{k}}}$ is the quasi-particle weight from Eq.  and $k, k'$ lie on the Fermi surface. Three-leg vertex and Ward identity {#sec:wardidt0} ---------------------------------- We introduce a central object of this work, the three-leg vertex $\Lambda^\alpha_{kq}$. The latter is obtained from the vertex function $F$ by attaching a bubble $G^2$ to $F$, closing the open ends, and adding $1$, as in Fig. \[fig:3leg\] a), $$\begin{aligned} \Lambda^\alpha_{kq} = 1 + \int_{k'}F^\alpha_{kk'q}G_{k'}G_{k'+q}\label{eq:lambdat0}.\end{aligned}$$ Although $\Lambda$ itself may not have a direct physical interpretation, it is closely related to a physical response function, the fermion-boson response function \[cf. also Eq.  and Appendix \[app:ac:g3\]\], $$\begin{aligned} L^{\alpha}_{kq}=G_kG_{k+q}\Lambda^{\alpha}_{kq}.\label{eq:g3def}\end{aligned}$$ In fact, $L_{kq}$ is best construed as the response of an electronic state with momentum and energy vector $k=({\ensuremath{\mathbf{k}}},\nu)$ to an applied field with spatial and temporal dependence $q=({\ensuremath{\mathbf{q}}},\omega)$. In the limit $q\rightarrow0$ this can be seen using Ward’s identity, which allows one to calculate $\presuper{\mathfrak{r}}L^{\alpha}_k=\presuper{\mathfrak{r}}G^2_k\;\presuper{\mathfrak{r}}\Lambda^\alpha_k$ explicitly, where again $\mathfrak{r}$ indicates how the double limit is taken.
We show in Appendix \[app:ac:w0\] that one obtains the following relations for the static homogeneous limit, $$\begin{aligned} \presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}_k=&-\frac{dG_{k}}{d\mu},\label{eq:ward:g3statch}\\ \presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}_k=&-\frac{dG_{k\uparrow}}{dh},\label{eq:ward:g3statsp}\end{aligned}$$ where on the right-hand-sides appear derivatives with respect to the chemical potential $\mu$ and the homogeneous magnetic field $h$ directed along the $z$-axis. \[Fig. \[fig:3leg\]: diagrammatic representations of a) the three-leg vertex $\Lambda^\alpha$, given by $1$ plus the vertex function $F^\alpha$ with a closed bubble attached, b) the susceptibility $X^\alpha$, given by $2$ times $\Lambda^\alpha$ with Green’s function legs closed, and c) the vertex function $F^\alpha$ with legs labeled $k$, $k+q$, $k'$, and $k'+q$.\] The Ward identities  and  for $\presuper{\infty\!}L$ have a straightforward physical interpretation: Upon a small change of the chemical potential $\delta\mu$ or magnetic field $\delta h$, within the linear response regime, the spectral weight of electronic states with momentum ${\ensuremath{\mathbf{k}}}$ and energy $\nu$ is changed by an amount
$-\delta\mu\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}_k$ and $-\delta h\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}_k$, respectively. (See also Sec. \[sec:coherent\] and Ref. [@vanLoon18].) The response function $L$ is therefore richer in information than the susceptibility $X^\alpha_q=2\int_k L^{\alpha}_{kq}$, which merely describes the total response of the electronic spectrum. The relation between $X$ and $L$ is depicted diagrammatically in Fig. \[fig:3leg\] b). For the dynamic limit of $L$, on the other hand, one finds the following relation \[see Appendix \[app:ac:ward\], Eq. \], $$\begin{aligned} \presuper{0\!}L^{\alpha}_k=&-\frac{dG_{k}}{d\nu}.\label{eq:ward:g3dyn}\end{aligned}$$ Note that this relation is valid for $\alpha={\ensuremath{\text{ch}}},{\ensuremath{\text{sp}}}$, leading to the same right-hand-side. A physical interpretation of Eq.  is less obvious than for the static limit $\presuper{\infty\!}L$. The significance of the dynamic limit $\presuper{0\!}L$ will be articulated over the course of this work. The Ward identities - can be reformulated in terms of the three-leg vertex $\Lambda$ via Eq. , and by making use of Dyson’s equation $G^{-1}_{k\sigma}=\nu-\varepsilon_{\ensuremath{\mathbf{k}}}+\sigma h+\mu-\Sigma_{k\sigma}+\imath\eta$, where $\Sigma$ is the electronic self-energy. The result is, $$\begin{aligned} \presuper{\infty\!}\Lambda^\alpha_{k}=&\begin{cases} 1-\frac{d\Sigma_{k}}{d\mu} & \text{for}\;\alpha={\ensuremath{\text{ch}}}, \\ 1-\frac{d\Sigma_{k\uparrow}}{dh} & \text{for}\;\alpha={\ensuremath{\text{sp}}}, \end{cases}\label{eq:lambdastatt0}\\ \presuper{0\!}\Lambda^\alpha_{k}=&1-\frac{d\Sigma_k}{d\nu}.\label{eq:lambdadynt0}\end{aligned}$$ We note that if $k=k_F$ lies at the Fermi level Eq.  relates the dynamic limit of the three-leg vertex to the quasi-particle weight \[cf. Eq. \], $\presuper{0\!}\Lambda^\alpha_{{\ensuremath{\mathbf{k}}}={\ensuremath{\mathbf{k}}}_F,\nu=0}=Z^{-1}_{{\ensuremath{\mathbf{k}}}_F}$.
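The compatibility of these Ward identities with Dyson’s equation can be checked numerically. The toy self-energy below is an assumption (in particular it carries no explicit $\mu$-dependence, so the static vertex reduces to $1$); the check confirms that $-dG/d\nu = G^2(1-d\Sigma/d\nu)$, i.e. that the dynamic limits of $L$ and $\Lambda$ are consistent with $L = G^2\Lambda$.

```python
import numpy as np

# Numeric check (our own) that the Ward identities are compatible with
# L = G^2 Lambda via Dyson's equation G^{-1} = nu - eps + mu - Sigma(nu) + i*eta.
# The cubic self-energy is a toy assumption with no explicit mu-dependence,
# so here the static vertex is simply 1 - dSigma/dmu = 1.
eps, mu, eta = 0.3, 0.1, 0.05

def sigma(nu):
    return 0.2 * nu + 0.05 * nu**3          # toy Sigma(nu)

def G(nu, m=mu):
    return 1.0 / (nu - eps + m - sigma(nu) + 1j * eta)

nu0, d = 0.7, 1e-6
dG_dnu = (G(nu0 + d) - G(nu0 - d)) / (2 * d)
dSig_dnu = (sigma(nu0 + d) - sigma(nu0 - d)) / (2 * d)
# dynamic limit: -dG/dnu = G^2 (1 - dSigma/dnu)
lhs_dyn, rhs_dyn = -dG_dnu, G(nu0) ** 2 * (1 - dSig_dnu)

dG_dmu = (G(nu0, mu + d) - G(nu0, mu - d)) / (2 * d)
# static limit (charge): -dG/dmu = G^2 (1 - dSigma/dmu) = G^2 here
lhs_stat, rhs_stat = -dG_dmu, G(nu0) ** 2
```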
Leggett’s decomposition {#sec:legget} ----------------------- We discuss the relation between the static and dynamic limits of $\Lambda$ and $L$, respectively. We also do this for the susceptibility $X$, which recovers a result of Leggett [@Legget65]. First, we recall that the static and dynamic homogeneous limits of the vertex function $F$ are related via Eq. . From that relation follows a similar one for the three-leg vertex $\Lambda$ (see Refs. [@Noziere97; @Landau80] and Appendix \[app:decomp\]), $$\begin{aligned} \presuper{\infty\!}\Lambda^\alpha_{k}=\presuper{0\!}\Lambda^\alpha_{k}+\int_{k'}\presuper{0\!}F^\alpha_{kk'}R_{k'}\presuper{\infty\!}\Lambda^\alpha_{k'},\label{eq:fbvertex}\end{aligned}$$ which is, in fact, equivalent to Boltzmann’s equation (or Landau’s kinetic equation). The latter describes the collective modes of the Fermi liquid, which may be understood as oscillatory deformations of the Fermi surface. In Appendix \[app:decomp\] we show that Eq.  also implies a relation between $\presuper{\infty\!}L$ and $\presuper{0\!}L$, $$\begin{aligned} \presuper{\infty\!}L^{\alpha}_{k}=\presuper{0\!}L^{\alpha}_{k}+\int_{k'}\left(\delta_{kk'}+\presuper{0\!}G^2_{k}\;\presuper{0\!}F^\alpha_{kk'}\right)R_{k'}\presuper{\infty\!}\Lambda^\alpha_{k'},\label{eq:g3decomp}\end{aligned}$$ where $\delta_{kk'}$ implies a factor $2\pi N$.
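The relation for $\presuper{\infty\!}\Lambda$ is linear and is solved by matrix inversion, which is also the origin of the geometric structure of Leggett’s decomposition discussed in this subsection. A toy-matrix sketch (our own; the entries are arbitrary placeholders for the $k,k'$ structure):

```python
import numpy as np

# Toy-matrix sketch (our own; arbitrary entries):
# solving  Lam_inf = Lam_dyn + F_dyn R Lam_inf  by matrix inversion gives
# Lam_inf = (1 - F_dyn R)^{-1} Lam_dyn, so the quasi-particle-hole term
# 2 Lam_dyn R Lam_inf equals the closed geometric form
# 2 Lam_dyn R (1 - F_dyn R)^{-1} Lam_dyn.
rng = np.random.default_rng(2)
n = 5
F_dyn = 0.1 * rng.standard_normal((n, n))   # dynamic vertex function (toy)
R = np.diag(rng.standard_normal(n))         # discontinuity (diagonal, toy)
Lam_dyn = rng.standard_normal(n)            # dynamic three-leg vertex (toy)

Lam_inf = np.linalg.solve(np.eye(n) - F_dyn @ R, Lam_dyn)
qp_term = 2 * Lam_dyn @ R @ Lam_inf
closed_form = 2 * Lam_dyn @ R @ np.linalg.solve(np.eye(n) - F_dyn @ R, Lam_dyn)
```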
The integral of $\presuper{\infty\!}L$ yields the total static response, that is, the static homogeneous susceptibility, $\presuper{\infty\!}X^\alpha=2\int_k \presuper{\infty\!}L^{\alpha}_k$, $$\begin{aligned} \presuper{\infty\!}X^{\ensuremath{\text{ch}}}=-\imath\frac{d\langle n\rangle}{d\mu},\;\;\;\presuper{\infty\!}X^{\ensuremath{\text{sp}}}=-\imath\frac{d\langle m\rangle}{dh},\label{eq:suscs}\end{aligned}$$ where $\langle n\rangle=\langle n_{\uparrow}\rangle+\langle n_{\downarrow}\rangle$ and $\langle m\rangle=\langle n_{\uparrow}\rangle-\langle n_{\downarrow}\rangle$ denote the total density and magnetization per site, respectively. The factor $\imath$ originates from the causal Green’s function  \[cf. Appendix \[app:gf\]\]. Performing the integration in Eq.  leads to, $$\begin{aligned} \presuper{\infty\!}X^\alpha=\presuper{0\!}X^\alpha+2\int_{k'}\presuper{0\!}\Lambda^\alpha_{k'}\;R_{k'}\presuper{\infty\!}\Lambda^\alpha_{k'},\label{eq:legget}\end{aligned}$$ where we have identified the three-leg vertex $\presuper{0\!}\Lambda^\alpha_{k'}=\int_{k}(\delta_{kk'}+\presuper{0\!}G^2_{k}\;\presuper{0\!}F^\alpha_{kk'})$ using Def.  [^1] and the dynamic homogeneous susceptibility, $\presuper{0\!}X^\alpha=2\int_k\presuper{0\!}L^{\alpha}_k$. However, $\presuper{0\!}L$ does not contribute to the static susceptibility $\presuper{\infty\!}X$ in Eq. . This can be seen using the Ward identity . The frequency integral over $\nu$ implied in $\presuper{0\!}X^\alpha=2\int_k\presuper{0\!}L^{\alpha}_k=-2\int_k\frac{dG}{d\nu}$ leads to zero, since the Green’s function vanishes at the boundaries $\pm\infty$. Physically this is a consequence of total charge and spin conservation. Therefore, the entire static susceptibility $\presuper{\infty\!}X$ is given by the remainder on the right-hand-side of Eq. . Lastly, we show that Eq.  leads to a decomposition of the susceptibility due to Leggett. Solving Eq.  for $\presuper{\infty\!}\Lambda$ via matrix inversion and inserting the result into Eq. 
we can bring the latter into the following form, $$\begin{aligned} \presuper{\infty\!}X^\alpha=2\iint_{k,k'}\presuper{0\!}\Lambda^\alpha_kR_k\left(\delta_{kk'}-\presuper{0\!}F^\alpha_{kk'}R_{k'}\right)^{-1}\presuper{0\!}\Lambda^\alpha_{k'},\label{eq:legget2}\end{aligned}$$ where the inverse indicates a matrix inversion with respect to the indices $k$ and $k'$. In Eq.  we have already omitted the vanishing $\presuper{0\!}X^\alpha$. The static susceptibility $\presuper{\infty\!}X$ is therefore determined entirely by the Fermi liquid parameters $R$, $\presuper{0\!}\Lambda$, and $\presuper{0\!}F$. In the case of an isotropic Fermi liquid one may use Eq.  for the discontinuity $R_k$, expand $\presuper{0\!}F$ in Legendre polynomials, and perform the integrations in Eq.  analytically, which leads to a geometric series, i.e., Leggett’s result [@Legget65]. Diagrammatic derivations of Leggett’s formula were recently presented in Refs. [@Wu18; @Chubukov18]. Fermi liquid parameters in DMFT {#sec:dmft} =============================== We collect the necessary tools to evaluate within DMFT the Fermi liquid parameters introduced above and discuss how these quantities can be recovered by extrapolation from finite temperature. DMFT approximation {#sec:dmftvertices} ------------------ In DMFT the electronic self-energy is approximated by a local frequency-dependent self-energy $\Sigma_k\equiv\Sigma(\nu)$, which is obtained from the auxiliary impurity model, so that the lattice Green’s function reads $$\begin{aligned} G_{{\ensuremath{\mathbf{k}}}\nu}=&[\nu-\varepsilon_{{\ensuremath{\mathbf{k}}}}+\mu-\Sigma(\nu)+\imath\eta]^{-1}\label{eq:gdmft}.\end{aligned}$$ A self-consistent set of $G$ and $\Sigma$ is obtained through the self-consistent cycle described in Sec. \[sec:dmftapproximation\]. Therefore, in the Fermi liquid regime the quasi-particle dispersion in Eq. 
is given as $\tilde{\varepsilon}_{\ensuremath{\mathbf{k}}}-\mu=Z[\varepsilon_{{\ensuremath{\mathbf{k}}}}-\mu+\Re\Sigma(0)]$, where $Z$ is the ${\ensuremath{\mathbf{k}}}$-independent quasi-particle weight of the DMFT approximation. In order to evaluate the vertex function it is necessary to use an appropriate approximation to the two-particle self-energy $\Gamma$. A consistent choice for $\Gamma$ is the functional derivative of the single-particle self-energy $\Sigma$, $\gamma=\frac{\delta\Sigma}{\delta g}$, where $g$ is the local Green’s function of the auxiliary Anderson impurity model (AIM), hence, $$\begin{aligned} \Gamma^{\alpha}_{kk'q}\equiv\gamma^{\alpha}_{\nu\nu'\omega}.\label{eq:gammadmft}\end{aligned}$$ In turn, the single-particle self-energy $\Sigma$ of DMFT is given as the functional derivative of the local Luttinger-Ward functional $\phi$ of the AIM, $\Sigma=\frac{\delta\phi}{\delta g}$. In combination with the self-consistency condition  this is sufficient to satisfy global conservation laws at the one-particle level [@Potthoff06-2]. The choice of $\Gamma$ in Eq.  implies that DMFT is also conserving at the two-particle level [@Baym62] and consequently satisfies the Ward identity [@Hafermann14-2; @Krien17], which is a crucial element of the Fermi liquid theory (cf. Eqs. - and Refs. [@Noziere62-1; @Noziere62-2]). Conservation laws at the two-particle level guarantee the thermodynamic consistency of approximations, which is expressed by the Ward identities  and . In DMFT we can therefore study response functions at the one-particle level (e.g., $\frac{d\langle m\rangle}{dh}$) or at the two-particle level ($\presuper{\infty\!}X^{\ensuremath{\text{sp}}}$), leading to the same result [@vanLoon15] and predicting the same divergences [@Janis17]. We stress that the Ward identity is ultimately satisfied in DMFT due to the self-consistency condition  [@Krien17].
Therefore, particular care has to be taken that the implementation converges to numerical exactness; this can be reasonably achieved within CTQMC, whereas the exact diagonalization method [@Caffarel94] may lead to deviations from thermodynamic consistency. Fermi liquid parameters {#sec:flparms} ----------------------- The DMFT approximation in Eqs.  and  leads to several simplifications at the two-particle level. Due to the momentum-independence of the two-particle self-energy $\gamma$, the vertex function $F$ depends only on the transferred momentum ${\ensuremath{\mathbf{q}}}$, not on the momenta ${\ensuremath{\mathbf{k}}}$ and ${\ensuremath{\mathbf{k}}}'$. Therefore, the Bethe-Salpeter equation  in the limit $q\rightarrow0$ simplifies to, $$\begin{aligned} \presuper{\infty\!}F^\alpha_{\nu\nu'}=\presuper{0\!}F^\alpha_{\nu\nu'} +\frac{1}{2\pi}\int d\nu''\presuper{0\!}F^\alpha_{\nu\nu''}R(\nu'')\presuper{\infty\!}F^\alpha_{\nu''\nu'}. \label{eq:bselimits_inv_dmft}\end{aligned}$$ Here we have introduced the local discontinuity, $R(\nu)=\frac{1}{N}\sum_{\ensuremath{\mathbf{k}}}R_{{\ensuremath{\mathbf{k}}}\nu}$. Using the explicit expression for $R$ in Eq.  and for the quasi-particle dispersion $\tilde{\varepsilon}_{\ensuremath{\mathbf{k}}}$ we may write, $$\begin{aligned} R(\nu)=&-2\pi\imath Z^2\delta(\nu){D}^*(0),\label{eq:rdmft}\end{aligned}$$ where we defined the renormalized (quasi-particle) density of states (DOS) at the Fermi level, ${D}^*(0)=\frac{1}{N}\sum_{\ensuremath{\mathbf{k}}}\delta(\tilde{\varepsilon}_{\ensuremath{\mathbf{k}}}-\mu)=D(0)/Z$, and $D(0)=\frac{1}{N}\sum_{\ensuremath{\mathbf{k}}}\delta[{\varepsilon}_{\ensuremath{\mathbf{k}}}-\mu+\Re\Sigma(0)]$ is the interacting DOS at the Fermi level, which coincides with the non-interacting one because of the Luttinger theorem for a momentum-independent self-energy [@muellerhartmann89]. One may now derive the usual Fermi liquid relations [@Landau80; @Legget65]. Using Eq. 
we can evaluate the Bethe-Salpeter equation  at the Fermi level, $\nu=\nu'=0$, leading to, $$\begin{aligned} \presuper{\infty\!}F^\alpha_{00}=\frac{\presuper{0\!}F^\alpha_{00}}{1+\mathfrak{f}^{\,\alpha}}\label{eq:f00dmft},\end{aligned}$$ where we defined the Landau parameter as, $$\begin{aligned} \mathfrak{f}^{\,\alpha}=\imath Z^2D^*(0)\presuper{0\!}F^\alpha_{00}\label{def:landauparameter},\end{aligned}$$ which arises from the dynamic limit $\presuper{0\!}F^\alpha_{00}$ of the vertex function at the Fermi level. Furthermore, from Eqs.  and  we obtain, $$\begin{aligned} \presuper{\infty\!}\Lambda^\alpha_{0}=\frac{1}{Z(1+\mathfrak{f}^{\,\alpha})}\label{eq:lambda0dmft},\\ \presuper{\infty\!}X^\alpha=\frac{-2\imath D^*(0)}{1+\mathfrak{f}^{\,\alpha}}\label{eq:xdmft},\end{aligned}$$ where we note that in DMFT the three-leg vertex $\Lambda_{kq}=\Lambda_{\nu q}$ does not depend on the momentum ${\ensuremath{\mathbf{k}}}$, similar to the vertex function. The Ward identity , $\presuper{0\!}\Lambda^\alpha_{0}=Z^{-1}$, was used to derive Eq. . We further evaluate the double limit $q\rightarrow0$ of the response function $L_{kq}=G_kG_{k+q}\Lambda_{\nu q}$. Note that even in DMFT $L_{kq}$ *does* depend on ${\ensuremath{\mathbf{k}}}$, due to the attached bubble. We show in Appendix \[app:lindmft\] that Eq.  implies, $$\begin{aligned} \!\presuper{\infty\!}L^{\alpha}_{k}=&\presuper{0\!}L^{\alpha}_{k}\!+\!\frac{\presuper{\infty\!}X^\alpha Z}{2} \!\!\left[\frac{2\pi\delta(\nu)\delta(\tilde{\varepsilon}_{\ensuremath{\mathbf{k}}}-\mu)}{D^*(0)}\!+\!\presuper{0\!}G^2_{k}\presuper{0\!}F^\alpha_{\nu0}\right]\label{eq:g3decompdmft}\!.\end{aligned}$$ The algebraic relations - are of course well-known; however, we stress that here they arise as exact results for DMFT at zero temperature, which are valid for any lattice dispersion. As such, these expressions have, to the best of our knowledge, not been derived rigorously in the literature before, although they have been used [@Capone02].
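The collapse of the Bethe-Salpeter equation at the Fermi level can be checked with scalar arithmetic: after the $1/(2\pi)$ frequency integral over the local discontinuity $R(\nu)=-2\pi\imath Z^2\delta(\nu)D^*(0)$, only the factor $-\imath Z^2 D^*(0)$ survives. The numbers below are arbitrary assumptions of our own.

```python
import numpy as np

# Scalar check (our own; numbers are arbitrary) that the local discontinuity
# collapses the Bethe-Salpeter relation at the Fermi level to
#   F_inf = F_dyn / (1 + f),   f = i Z^2 D*(0) F_dyn,
# since the 1/(2 pi) frequency integral over R leaves a factor -i Z^2 D*(0).
Z, Dstar = 0.5, 0.3
F_dyn = 0.2 + 0.1j                      # dynamic vertex at the Fermi level (toy)
f = 1j * Z**2 * Dstar * F_dyn           # Landau parameter
F_inf = F_dyn / (1 + f)

# F_inf indeed solves F_inf = F_dyn + F_dyn * (-i Z^2 D*) * F_inf
resid = F_inf - (F_dyn + F_dyn * (-1j * Z**2 * Dstar) * F_inf)
```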
We note that next to the Landau parameter $\mathfrak{f}$ of the lattice approximation one may also define an *impurity* Landau parameter. Since DMFT is not two-particle self-consistent [@Krien17], such a quantity is in general not equivalent to $\mathfrak{f}$. Extrapolation from finite temperature {#sec:extrapolation} ------------------------------------- The Fermi liquid relations - can be evaluated when the quasi-particle weight $Z$, the quasi-particle DOS $D^*(0)$, and one additional quantity are known. In our calculations this will be the static total response $\presuper{\infty\!}X$ at finite temperature. In this subsection $\nu_n=(2n+1)\pi T$ and $\omega_m=2m\pi T$ denote fermionic and bosonic Matsubara frequencies; the labels $n,m$ will be dropped. $k, q$ comprise momentum and Matsubara frequency, respectively. In order to calculate $\presuper{\infty\!}X$ we use the following Bethe-Salpeter equation for the vertex function (see, e.g., [@Hafermann14-2]), $$\begin{aligned} F^\alpha_{\nu\nu'}(q)=f^\alpha_{\nu\nu'\omega}+T\sum_{\nu''}f^\alpha_{\nu\nu''\omega}\tilde{X}^0_{\nu''}(q)F^\alpha_{\nu''\nu'}(q).\label{eq:bsedmftphi}\end{aligned}$$ Here, $\tilde{X}^0_\nu(q)=\frac{1}{N}\sum_{\ensuremath{\mathbf{k}}}(G_k-g_\nu)(G_{k+q}-g_{\nu+\omega})$ is a bubble of non-local DMFT Green’s functions $G_k-g_\nu$, where the lattice Green’s function $G_k$ and the impurity Green’s function $g_\nu$ are known on the Matsubara frequencies. $f$ denotes the impurity vertex function (the impurity analogue of $F$ [^2]). $g$ and $f$ are known numerically exactly. We note that in Eq.  the two-particle self-energy $\gamma$ of the impurity does not appear explicitly \[cf. Appendix, Eq. \]. This formulation of the Bethe-Salpeter equation is reminiscent of the dual fermion and dual boson approaches [@Rubtsov08; @Rubtsov12]; we use it here because $\gamma$ may be divergent in the non-critical Fermi liquid regime [@Gunnarsson17], which is, to the best of our knowledge, not the case for $f$.
After $F$ has been calculated, we obtain the three-leg vertex as, $$\begin{aligned} \Lambda^\alpha_{\nu}(q) = 1 + \frac{T}{N}\sum_{{\ensuremath{\mathbf{k}}}'\nu'}F^\alpha_{\nu\nu'}(q)G_{k'}G_{k'+q},\label{eq:lambda_dmft}\end{aligned}$$ and the response function $L^{\alpha}_{k}(q)=G_kG_{k+q}\Lambda^\alpha_{\nu}(q)$, both given at the Matsubara frequencies. Finally, the total response is given by $\presuper{\infty\!}X^\alpha=2\imath\frac{T}{N}\sum_{k}L^{\alpha}_{k}(q=0)$. Note that the limit $q\rightarrow0$ is not ambiguous on the Matsubara frequencies, since they are discrete, and it always leads to the static homogeneous limit $\mathfrak{r}=\infty$. In order to evaluate dynamic limits we consider the finite frequencies $\omega_1=2\pi T$ and $\nu_{-1}=-\pi T\equiv\bar{\nu}$ \[this notation will be used throughout\] at low temperature. From these frequencies we can obtain the dynamic three-leg vertex $\presuper{0\!}\Lambda_0$ at the Fermi level in the limit $T\rightarrow0$. To see this, we use the Ward identity for the Matsubara three-leg vertex $\Lambda$, which is derived in Appendix \[app:lambdadyn\]. Evaluating the latter at $\bar{\nu}$ and $\omega_1$ yields, $$\begin{aligned} \Lambda^\alpha_{\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_1)=&1-\frac{\Sigma_{-\bar{\nu}}-\Sigma_{\bar{\nu}}}{-2\imath\bar{\nu}}=1-\frac{\Im\Sigma(\pi T)}{\pi T},\label{eq:lambdaz}\end{aligned}$$ where we used $\omega_1=-2\bar{\nu}$ and $\Sigma(-\bar{\nu})=\Sigma^*(\bar{\nu})$. The right-hand-side of Eq.  approaches $Z^{-1}$ in the limit $T\rightarrow0$ [@Serene91; @Arsenault12], which recovers the zero temperature result in Eq. . In fact, we show in Appendix \[app:ac:ward\] that the Ward identity can be used to perform the analytical continuation of the three-leg vertex $\Lambda$, or of the respective response function $L$, from Matsubara frequencies $\nu_n$ and $\omega_m$ to any pair of real frequencies $\nu$ and $\omega$. 
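The finite-temperature estimators can be exercised on toy inputs. In the sketch below (our own; the model self-energy and the semicircular DOS are assumptions), the Ward-identity estimate $Z^{-1}\approx 1-\Im\Sigma(\pi T)/(\pi T)$ and the estimate $D(0)\approx -g(\tau=1/2T)/(\pi T)$ used below both approach their exact values as $T\rightarrow0$.

```python
import numpy as np

# Our own numerical check of the finite-temperature estimators:
# Z^{-1} ~ 1 - Im Sigma(i pi T)/(pi T) and D(0) ~ -g(tau = 1/(2T))/(pi T).
# The self-energy and DOS below are toy assumptions, not DMFT output.
T = 0.005
beta = 1.0 / T

# 1) Toy Fermi-liquid self-energy on the Matsubara axis:
#    Im Sigma(i nu) = (1 - 1/Z) nu - c nu^3.
Z_true, c = 0.4, 2.0
im_sigma = (1 - 1 / Z_true) * (np.pi * T) - c * (np.pi * T) ** 3
Z_est = 1.0 / (1.0 - im_sigma / (np.pi * T))        # error is O(T^2)

# 2) D(0) from g(beta/2) for a semicircular DOS of half-bandwidth 1, using
#    g(beta/2) = -int de D(e) / (2 cosh(beta e / 2)); exact D(0) = 2/pi.
e = np.linspace(-1.0, 1.0, 200001)
dos = (2.0 / np.pi) * np.sqrt(1.0 - e ** 2)
g_half = -np.sum(dos / (2.0 * np.cosh(beta * e / 2.0))) * (e[1] - e[0])
D0_est = -g_half / (np.pi * T)
```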
In our numerical results we use $Z^{-1}\approx1-\frac{\Im\Sigma(\pi T)}{\pi T}$ at finite temperature as an approximation. Similarly, we approximate the density of states at the Fermi level as [@vanLoon14], $$\begin{aligned} D(0)\approx-(\pi T)^{-1}g\left(\tau=1/2T\right),\label{eq:dosatfermi}\end{aligned}$$ where $g$ is the impurity Green’s function and $\tau$ the imaginary time. The quasi-particle density of states is then obtained as $D^*(0)=Z^{-1}D(0)$. We note that these approximations become exact in the limit $T\rightarrow0$. Response of coherent and incoherent electronic states {#sec:coherent} ===================================================== In Sec. \[sec:flt0\] we have reviewed the Fermi liquid theory and emphasized the importance of the static and dynamic limits of the three-leg vertex $\Lambda$ and of the response function $L$ to this theory. However, the latter are rarely evaluated in practice. For this reason we provide here a physical intuition for the response function $L$ and its limits $\presuper{\infty\!}L$ and $\presuper{0\!}L$. We also discuss the relation  between these limits and its integral , i.e., Leggett’s decomposition of the static susceptibility. Our discussion complements the one in Ref. [@vanLoon18]. In the following we consider a Fermi liquid and two non-Fermi liquid scenarios: the Mott insulator and, as an exact toy model, the Hubbard atom. We assume particle-hole symmetry only for simplicity, but the results do not rely on it. Fermi liquid {#sec:coherent:fl} ------------ The top panel of Fig. \[fig:flmu\] sketches the DOS of a generic Fermi liquid at $T=0$ with a quasi-particle peak at the Fermi level and two Hubbard bands due to the interaction, corresponding to the DMFT approximation to the Hubbard model [@Georges96] [^3]. The blue DOS shows the spin $\uparrow$ states, red the spin $\downarrow$ states. Let us consider the role of the response function $L$. To this end, we recall Eq. 
for its static limit $\presuper{\infty\!}L$, which we sum over ${\ensuremath{\mathbf{k}}}$ for simplicity, $\frac{1}{N}\sum_{\ensuremath{\mathbf{k}}}L_k=L(\nu)$, $$\begin{aligned} \!\presuper{\infty\!}L^{\alpha}(\nu)\!=\!&\presuper{0\!}L^{\alpha}(\nu)\!+\!\frac{\presuper{\infty\!}X^\alpha Z}{2} \!\!\left[2\pi\delta(\nu)\!+\!\frac{1}{N}\!\!\sum_{\ensuremath{\mathbf{k}}}\!\presuper{0\!}G^2_{k}\presuper{0\!}F^\alpha_{\nu0}\!\right]\label{eq:g3decomploc}\!\!.\end{aligned}$$ Note that $D^*$ drops out, according to its definition below Eq. . We use the Ward identities  and  to identify the left-hand-side of Eq.  as, $$\begin{aligned} \presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}(\nu)=-\frac{dg(\nu)}{d\mu},\;\;\;\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}(\nu)=-\frac{dg_\uparrow(\nu)}{dh},\end{aligned}$$ where we made use of the DMFT self-consistency condition, $\frac{1}{N}\sum_{\ensuremath{\mathbf{k}}}G_k=g(\nu)$. Since $g$ determines the DOS we can understand $\presuper{\infty\!}L^{\alpha}(\nu)$ as the response of the latter to an applied field $\mu$ or $h$, respectively. First, we assume that the Fermi liquid is subjected to a [small]{} change $\delta\mu$ of the chemical potential; we can therefore focus on one spin species, e.g., $\uparrow$. The chemical potential probes the static homogeneous charge response; therefore, electronic states with energy $\nu$ respond to the change $\delta\mu$ with a decrease or enhancement $-\delta\mu \presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}(\nu)$ of their spectral weight \[note that $L$ is not positive/negative definite\]. Straight arrows in Fig. \[fig:flmu\] show where spectral weight is typically added or removed due to a small positive $\delta\mu$.
\[Fig. \[fig:flmu\]: sketch of the spin-resolved DOS ($\sigma=\uparrow$ in the upper half, $\sigma=\downarrow$ in the lower half of each panel) of a Fermi liquid (top panel) and of a Mott insulator (bottom panel) as functions of $\nu$; straight and wiggly arrows indicate where spectral weight is added or removed in response to small fields $\delta\mu$ and $\delta h$, respectively.\] We now consider the total charge response $\presuper{\infty\!}X^{{\ensuremath{\text{ch}}}}=\frac{2}{2\pi}\int d\nu\presuper{\infty\!}L^{\ensuremath{\text{ch}}}(\nu)$. Taking the integral in Eq. , we can identify the dynamic limit of the three-leg vertex at the Fermi level \[cf. Def. \], $$\begin{aligned} \frac{1}{2\pi}\int_{-\infty}^{+\infty}\!\!\!\!d\nu \left[2\pi\delta(\nu)\!+\!\frac{1}{N}\!\sum_{\ensuremath{\mathbf{k}}}\!\presuper{0\!}G^2_{k}\presuper{0\!}F^{\ensuremath{\text{ch}}}_{\nu0}\right]\! =\!\presuper{0\!}\Lambda^{{\ensuremath{\text{ch}}}}_0=\frac{1}{Z}\label{eq:integratelambda},\end{aligned}$$ where the Ward identity Eq.  was used in the last step. We can see from Eq.  that the second term on the right-hand-side of Eq. 
yields the entire total response $\presuper{\infty\!}X^{{\ensuremath{\text{ch}}}}$, whereas the integral of $\presuper{0\!}L^{\ensuremath{\text{ch}}}(\nu)=-\frac{dg(\nu)}{d\nu}$ contributes nothing. $\presuper{0\!}L$ represents a response of electronic states that does not lead to a change $-\delta\mu\presuper{\infty\!}X^{{\ensuremath{\text{ch}}}}$ of the occupation number $\langle n\rangle$. However, which electronic states *do* contribute to the total charge response $\presuper{\infty\!}X^{{\ensuremath{\text{ch}}}}$? If the chemical potential is increased by an infinitesimal amount $\delta\mu$, only spectral weight of states very near the Fermi level is shifted below it, leading to an increase of the occupation number $\langle n\rangle$. However, in the Fermi liquid the states close to the Fermi level correspond to coherent quasi-particles. Therefore, the charge response of a Fermi liquid is coherent, that is, proportional to the amount of spectral weight $Z$ near the Fermi level. This is in agreement with analytical estimates for the Fermi liquid (see, e.g., supplemental material of Ref. [@Kokalj13]), and it can also be observed in DMFT calculations [@Hafermann14-2]. We now turn to the spin channel, where the situation is quite different. The wiggly arrows in Fig. \[fig:flmu\] indicate a typical change $-\delta h\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}(\nu)$ in spectral weight due to a small magnetic field $\delta h$ applied along the $\uparrow$ direction. In contrast to the case of the charge density $\langle n\rangle$, the magnetization $\langle m\rangle$ changes not only due to the quasi-particles at the Fermi level but also due to the shift of spectral weight from the spin-$\uparrow$ Hubbard bands to the spin-$\downarrow$ Hubbard bands and vice versa.
Hence, $\presuper{\infty\!}X^{\ensuremath{\text{sp}}}$ accounts for the response of coherent and of incoherent states; it is not merely a coherent response, in contrast to the charge susceptibility $\presuper{\infty\!}X^{\ensuremath{\text{ch}}}$. The origin of the incoherent contribution to the spin susceptibility cannot be the dynamic response $\presuper{0\!}L^{{\ensuremath{\text{sp}}}}$, since its integral is again zero. Therefore, in the spin channel the second term on the right-hand-side of Eq.  accounts for the magnetic response of coherent and of incoherent electronic states. As the example of the magnetic response shows, it is misleading to suggest that the static susceptibility of a Fermi liquid is determined in general only by the quasi-particles. In the Fermi liquid formula , $\presuper{\infty\!}X^\alpha=-2\imath D^*(0)/(1+\mathfrak{f}^{\,\alpha})$, the response of incoherent states has merely been absorbed into the Landau parameter $\mathfrak{f}$. Mott insulator {#sec:coherent:mott} -------------- We discuss the response functions of the Mott insulating phase of the Hubbard model \[cf. Eq. \] at $T=0$, whose gapped DOS is sketched in the lower panel of Fig. \[fig:flmu\]. A small change $\delta\mu$ of the chemical potential merely leads to a redistribution of the Hubbard bands, but no spectral weight is shifted across the Fermi level. Therefore, the charge susceptibility $\presuper{\infty\!}X^{\ensuremath{\text{ch}}}$ of the Mott insulator is exactly zero. On the other hand, the spin susceptibility $\presuper{\infty\!}X^{\ensuremath{\text{sp}}}$ of the Mott insulator does not vanish [@Rozenberg94], since it is still possible for a small magnetic field $\delta h$ to polarize the Hubbard bands. Let us consider the fate of the static response functions $\presuper{\infty\!}L_k$ and $\presuper{\infty\!}X$ at the interaction-driven Mott transition.
We begin with the total response function $\presuper{\infty\!}X$: it is known that $\presuper{\infty\!}X^{\ensuremath{\text{ch}}}\propto Z$ near the transition [@Kokalj13]; we can therefore deduce from the Fermi liquid formula  that $-2\imath D^*(0)/(1+\mathfrak{f}^{\,{\ensuremath{\text{ch}}}})\propto Z$. The quasi-particle DOS $D^*(0)=Z^{-1}D(0)$ diverges at the transition $\propto Z^{-1}$, since the bandwidth of the quasi-particle dispersion $\tilde{\varepsilon}_{\ensuremath{\mathbf{k}}}$ shrinks to zero, and hence the symmetric Landau parameter diverges, $$\begin{aligned} \mathfrak{f}^{\,{\ensuremath{\text{ch}}}} \propto Z^{-2}.\end{aligned}$$ We now come to a remarkable result. Instead of expressing the total response function in terms of the Landau parameter $\mathfrak{f}$, we can also express it in terms of the forward scattering amplitude $\presuper{\infty\!}F^\alpha_{00}$ at the Fermi level. Combining Eqs.  and  leads to, $$\begin{aligned} \presuper{\infty\!}X^\alpha=-2\imath{D}^*(0)\left[1-\imath Z^2{D}^*(0)\presuper{\infty\!}F^\alpha_{00}\right]\label{eq:xfromf00}.\end{aligned}$$ Usually a divergence of the forward scattering is associated with a Pomeranchuk instability [@Pomeranchuk59], leading to the divergence of the corresponding $\presuper{\infty\!}X^\alpha$. We see from Eq.  that this is indeed the case when $Z$ is finite. At the Mott transition, however, the forward scattering must diverge in order for $\presuper{\infty\!}X^{\ensuremath{\text{ch}}}$ to vanish as $Z\rightarrow0$, $$\begin{aligned} \presuper{\infty\!}F^{\ensuremath{\text{ch}}}_{00}\propto Z^{-1}.\end{aligned}$$ If $\presuper{\infty\!}F^{\ensuremath{\text{ch}}}_{00}$ remained finite at the transition, the total charge response of the Mott insulator would be divergent. In the spin channel the situation is slightly different, since $\presuper{\infty\!}X^{\ensuremath{\text{sp}}}$ does not vanish at the Mott transition.
In the case that it remains finite [^4], the anti-symmetric Landau parameter $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$ and the forward scattering vertex $\presuper{\infty\!}F^{\ensuremath{\text{sp}}}_{00}$ both diverge $\propto Z^{-1}$. Lastly, we consider the static response function $\presuper{\infty\!}L_{k}$. The latter is given by Eq. ; we examine the second term on its right-hand-side, $$\begin{aligned} \frac{\presuper{\infty\!}X^\alpha Z}{2}\left[2\pi\delta(\nu)+\frac{1}{N}\sum_{\ensuremath{\mathbf{k}}}\presuper{0\!}G^2_{k}\presuper{0\!}F^\alpha_{\nu0}\right].\label{eq:remainder}\end{aligned}$$ The term in braces is proportional to $Z^{-1}$, which can be seen by integrating over $\frac{1}{2\pi N}\int_{-\infty}^{+\infty} d\nu\sum_{\ensuremath{\mathbf{k}}}$, as in Eq. . Therefore, in the charge channel the whole term is proportional to $\presuper{\infty\!}X^{\ensuremath{\text{ch}}}\propto Z$ and thus vanishes at the Mott transition. Hence, $\presuper{\infty\!}L^{\ensuremath{\text{ch}}}_{k}=\presuper{0\!}L^{\ensuremath{\text{ch}}}_{k}$ in the Mott insulator. According to this relation the static charge response of the Hubbard bands is given by the dynamic response. A similar situation does not occur in the spin channel, since the integral over $\presuper{0\!}L^{\ensuremath{\text{sp}}}$ is zero, whereas the integral of $\presuper{\infty\!}L^{\ensuremath{\text{sp}}}$ yields the spin susceptibility $\presuper{\infty\!}X^{\ensuremath{\text{sp}}}$, which does not vanish at the Mott transition.

Hubbard atom {#sec:coherent:atom}
------------

We discuss the response functions $\presuper{\infty\!}L$ and $\presuper{0\!}L$ of an exactly solvable system, the Hubbard atom with Hamiltonian $H=U n_{\uparrow}n_{\downarrow}-\mu n - h m$ at half-filling, $\mu=\frac{U}{2}$, and vanishing magnetic field $h=0$. This system loosely resembles the Mott insulator, due to its Hubbard peaks at $\pm\frac{U}{2}$ and the vanishing of the total charge response at $T=0$.
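The vanishing charge response and the divergent (Curie-like) spin response of the atom as $T\rightarrow0$ follow directly from its four-state partition function; in the convention of the Hamiltonian above, the Boltzmann weights give $\frac{d\langle n\rangle}{d\mu}=\beta/(1+e^{\beta U/2})$ and $\frac{d\langle m\rangle}{dh}=\beta e^{\beta U/2}/(1+e^{\beta U/2})$ at half-filling. A minimal Python sketch (our own illustration, independent of the derivation in Appendix \[app:atom\]) verifies these limits by finite differences:

```python
import numpy as np

def atom_susceptibilities(U, T, dx=1e-5):
    """Static charge and spin susceptibilities of the half-filled Hubbard atom
    H = U n_up n_dn - mu n - h m, from its four-state grand-canonical
    partition function, via finite differences in mu and h."""
    beta = 1.0 / T

    def averages(mu, h):
        # states: |0>, |up>, |dn>, |up,dn> with quantum numbers (n, m)
        E = np.array([0.0, -mu - h, -mu + h, U - 2.0 * mu])
        n = np.array([0.0, 1.0, 1.0, 2.0])
        m = np.array([0.0, 1.0, -1.0, 0.0])
        w = np.exp(-beta * (E - E.min()))  # shift energies to avoid overflow
        w /= w.sum()
        return (n * w).sum(), (m * w).sum()

    mu0 = U / 2.0                          # half filling
    n_p, _ = averages(mu0 + dx, 0.0)
    n_m, _ = averages(mu0 - dx, 0.0)
    _, m_p = averages(mu0, dx)
    _, m_m = averages(mu0, -dx)
    return (n_p - n_m) / (2 * dx), (m_p - m_m) / (2 * dx)

# charge response is suppressed exponentially, spin response grows like beta
for T in (0.2, 0.1, 0.05):
    chi_ch, chi_sp = atom_susceptibilities(U=1.0, T=T)
    print(T, chi_ch, chi_sp)
```

The loop illustrates the scenario discussed below: lowering $T$ suppresses $\frac{d\langle n\rangle}{d\mu}$ exponentially while $\frac{d\langle m\rangle}{dh}$ approaches the Curie law $\beta$.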
We calculate the causal Green’s function $G^c$ and the causal response function $L^{c}$ of this model in Appendix \[app:atom\] in the limits $\presuper{\infty\!}L^{c}$ and $\presuper{0\!}L^{c}$. We drop the label $c$ in the following. Black lines in Figs. \[fig:alch\] and \[fig:alsp\] show the causal Green’s function $G(\nu)$ for $U=1$ and temperatures $T=0.2$ (top panels) and $T=0.1$ (bottom panels) in units of $U$. For visibility we use a broadening of $|\eta|=(\pi T)^2$. We note that the lower Hubbard peak lies below the Fermi level and is therefore hole-like (advanced), giving the peak positive spectral weight, whereas the upper Hubbard peak is particle-like (retarded) and has negative spectral weight \[cf. also Eq. \]. At half-filling $G_\sigma(\nu)$ integrates to $\frac{1}{2\pi}\int_{-\infty}^{+\infty}d\nu\,G_\sigma(\nu)=\imath[\langle n_\sigma\rangle-\frac{1}{2}]=0$.

\[Figure \[fig:alch\]: (Color online) The causal Green’s function $G$ (black), the static charge response $\frac{dG}{d\mu}$ (red), and the dynamic response $\frac{dG}{d\nu}$ (blue) of the half-filled Hubbard atom as a function of the real frequency $\nu$. Top: $T=0.2$, the static and dynamic response differ appreciably. Bottom: $T=0.1$, the limits almost coincide; at the same time the charge susceptibility is suppressed (see insets), the latter given as the integral under the red curve ($\times-\pi^{-1}$). Arrows indicate the enhancement/decrease of Green’s function according to the red curve due to $\delta\mu>0$.\]

\[Figure \[fig:alsp\]: (Color online) The static magnetic response $\frac{dG}{dh}$ (red). The causal Green’s function (black) and the dynamic response (blue) are the same as in Fig. \[fig:alch\] for $T=0.2$ (top) and $T=0.1$ (bottom), respectively. The difference between the static and dynamic response grows at low temperatures. Wiggly arrows indicate the response to $\delta h>0$. The integral under the red curve yields the spin susceptibility ($\times-\pi^{-1}$), see insets, which diverges as $T\rightarrow0$. The integral under the blue curve is always exactly zero.\]

The red lines show the imaginary part of the charge response $\frac{dG}{d\mu}=-\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}$ in Fig. \[fig:alch\] and of the magnetic response $\frac{dG}{dh}=-\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}$ in Fig. \[fig:alsp\]. Straight and wiggly arrows indicate where, according to $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}$ and $\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}$, spectral weight of the Green’s function is enhanced or suppressed upon a change $\delta\mu$ or $\delta h$ of the respective conjugate field. Note that a net increase/decrease of spectral weight of the causal Green’s function is possible. In fact, the integral under the red curves yields the static susceptibility, $\frac{2}{2\pi}\int_{-\infty}^{+\infty}d\nu \presuper{\infty\!}L^{\alpha}(\nu)=\presuper{\infty\!}X^{\alpha}$. Blue lines indicate the dynamic response function $\frac{dG}{d\nu}=-\presuper{0\!}L^{}$, which is the same for $\alpha={\ensuremath{\text{ch}}}$ and $\alpha={\ensuremath{\text{sp}}}$. The integral of $\presuper{0\!}L^{}$, the dynamic susceptibility $\presuper{0\!}X$, is exactly zero. We first discuss the charge response $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}$ for $T=0.2$ in the top panel of Fig. \[fig:alch\]: it changes the spectral weight in such a way that the two Hubbard peaks are effectively shifted to the left when the chemical potential $\mu$ increases. At the high temperature $T=0.2$ the occupation number $\langle n\rangle$ also changes due to $\delta\mu$. Therefore, the integral over $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}(\nu)$ is finite, representing a net increase of spectral weight due to $\delta\mu>0$.
The resultant charge susceptibility is shown in the inset of the top panel on the Matsubara axis; $\presuper{\infty\!}X^{{\ensuremath{\text{ch}}}}$ is marked at $\omega_0=0$. The top panel of Fig. \[fig:alch\] also shows that the static and dynamic charge response $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}$ (red) and $\presuper{0\!}L^{}$ (blue) are similar but not equivalent at $T=0.2$. We observe the same correlation functions in the lower panel of Fig. \[fig:alch\] for a lower temperature, $T=0.1$. Still, $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}$ indicates a shift of the Hubbard peaks to the left due to $\delta\mu$. However, charge excitations are suppressed exponentially with decreasing $T$, leading to an almost vanishing charge response $\presuper{\infty\!}X^{{\ensuremath{\text{ch}}}}$. At the same time, $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}$ and $\presuper{0\!}L^{}$ have become virtually equivalent. The integral over the former hence (almost) vanishes, since this is exactly the case for the latter. We note that $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}=\presuper{0\!}L^{}$ holds exactly at $T=0$ \[see Appendix \[app:atom\]\]. For its integral we have likewise $\presuper{\infty\!}X^{\ensuremath{\text{ch}}}=0$ at $T=0$. We now turn to the spin channel, whose response function is drawn in Fig. \[fig:alsp\]. A small magnetic field $\delta h$ leads to a shift in spectral weight according to $\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}$; its effect is qualitatively different from that in the charge channel. As indicated by the wiggly arrows, the magnetic field enhances the lower Hubbard peak and suppresses the upper one. (Note that $G_\uparrow$ is shown; the shift is reversed for $G_\downarrow$.) We observe that $\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}$ and $\presuper{0\!}L^{}$ are quite different, both at $T=0.2$ and at $T=0.1$. The analytical result in Appendix \[app:atom\] shows that they do not become equivalent at $T=0$.
In fact, the spin susceptibility $\presuper{\infty\!}X^{{\ensuremath{\text{sp}}}}$ diverges in this limit, whereas the equivalence of $\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}$ and $\presuper{0\!}L^{}$ would imply a vanishing spin susceptibility. It follows that $\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}\neq\presuper{0\!}L^{}$. The integral $\presuper{\infty\!}X^{\ensuremath{\text{sp}}}$ represents the response of the Hubbard peaks to the magnetic field that leads to a net change in the magnetization, $-\delta h\presuper{\infty\!}X^{\ensuremath{\text{sp}}}$. We note that the scenario $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}=\presuper{0\!}L^{{\ensuremath{\text{ch}}}}$ and $\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}\neq\presuper{0\!}L^{{\ensuremath{\text{sp}}}}$ at $T=0$ that we find for the Hubbard atom is similar to the one that we found in Sec. \[sec:coherent:mott\] for the Mott insulator.

The Mott transition {#sec:mott}
===================

In Sec. \[sec:coherent:mott\] we considered the fate of the static and dynamic response functions at the Mott transition and within the Mott phase. Our discussion suggests that the following happens at the transition from the Fermi liquid to the Mott insulator at zero temperature:

1. The discontinuity $R$ vanishes with the quasi-particle weight $Z$.

2. The dynamic limit of the three-leg vertex diverges in the charge and in the spin channel, $\presuper{0\!}{\Lambda}^{\ensuremath{\text{ch}}}=\presuper{0\!}{\Lambda}^{\ensuremath{\text{sp}}}=Z^{-1}$. It remains divergent throughout the Mott insulating phase.

3. The symmetric Landau parameter $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ diverges $\propto~Z^{-2}$, the forward scattering vertex $\presuper{\infty\!}F^{\ensuremath{\text{ch}}}_{00}$ diverges $\propto~Z^{-1}$.

In the following we will verify these expectations one by one using DMFT results for the half-filled Hubbard model on the triangular lattice Eq. .
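The scalings expected at the transition are mutually consistent with the Fermi liquid formula . The symbolic sketch below assumes $\presuper{\infty\!}X^{\ensuremath{\text{ch}}}=x_1 Z$ near the transition, $D^*(0)=D(0)/Z$, and the standard Landau-theory relation $a=\mathfrak{f}/(1+\mathfrak{f})$ between the Landau parameter and the forward scattering amplitude (the last relation is our assumption; it is consistent with Eq. ):

```python
import sympy as sp

Z, d0, x1 = sp.symbols('Z d0 x1', positive=True)
I = sp.I

Dstar = d0 / Z                 # quasi-particle DOS D*(0) = D(0)/Z
X = x1 * Z                     # charge susceptibility ~ Z near the transition

# Landau parameter from the Fermi liquid formula X = -2i D*/(1 + f)
f = sp.simplify(-1 - 2 * I * Dstar / X)

# forward scattering amplitude a = f/(1+f), vertex F00 = a/(i Z^2 D*)
a = sp.simplify(f / (1 + f))
F00 = sp.simplify(a / (I * Z**2 * Dstar))

# consistency with X = -2i D* [1 - i Z^2 D* F00]
assert sp.simplify(X - (-2 * I * Dstar * (1 - I * Z**2 * Dstar * F00))) == 0

# leading behavior as Z -> 0: finite limits imply f ~ Z^-2 and F00 ~ Z^-1
print(sp.limit(f * Z**2, Z, 0))    # finite: f diverges like Z**(-2)
print(sp.limit(F00 * Z, Z, 0))     # finite: F00 diverges like Z**(-1)
```

The two finite limits reproduce items 2 and 3: a vanishing charge response at $Z\rightarrow0$ forces both the Landau parameter and the forward scattering vertex to diverge.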
We stress that while our DMFT results were obtained at finite temperature, our main aim is to draw conclusions about the Mott transition in the limit $T\rightarrow0$. In our calculations we rely on a modern CTQMC solver [@ALPS2] based on the ALPS libraries, with improved estimators for the impurity vertex function [@Hafermann12; @Hafermann14]. We note that in this section, unless clearly marked differently, we consider Matsubara correlation functions and vertices, $G^m, X^{m,\alpha}$, and so on. The label $m$ will be dropped in the following.

\[Figure \[fig:gtau\]: (Color online) The approximate DOS at the Fermi level, $D(0)$ \[see text\], as a function of $U$. The inflection point of each curve indicates the value $U_M(T)$ of the Mott crossover/transition. Diamonds and small dots indicate metallic and insulating solutions, respectively. This labeling was obtained from Fig. \[fig:ev\_dyn\], as described in Sec. \[sec:eigenvalues\], and is used consistently also in Figs. \[fig:suscs\], \[fig:rlat\], and \[fig:fdyn\_ch\].\]

Spectral weight at the Fermi level and static susceptibility
------------------------------------------------------------

To set the stage, we first identify the metallic and Mott regimes of the Hubbard model  within the DMFT approximation. We note that near the Mott transition/crossover, solutions at smaller $U$ were used as an input for the DMFT loop at larger $U$. We do not consider the coexistence of insulating and metallic solutions or the first-order critical line at low temperature; see, for example, Refs. [@Werner07; @Balzer09].

\[Figure \[fig:suscs\]: (Color online) Top: Charge susceptibility $\frac{d\langle n\rangle}{d\mu}$ as a function of the temperature $T$. The shaded arrow roughly indicates the evolution of the coherence temperature $T_\text{coh}$ of the Fermi liquid with $U$, below which an upturn of $\frac{d\langle n\rangle}{d\mu}$ indicates the re-entry into the Fermi liquid. Bottom: Inverse of the spin susceptibility $\frac{d\langle m\rangle}{dh}$, which is largely unaffected by the transition. Labels as in Fig. \[fig:gtau\].\]

Figs. \[fig:gtau\] and \[fig:suscs\] show the spectral weight at the Fermi level and the susceptibilities $\presuper{\infty\!}{X}^\alpha$, respectively. In both figures diamonds label metallic solutions, whereas small circles label insulating ones. The labeling was done using a novel criterion to determine the Mott crossover, which is introduced in Sec. \[sec:eigenvalues\]. We begin with Fig. \[fig:gtau\], which shows the approximate spectral weight at the Fermi level $D(0)=-g(\tau=1/2T)/(\pi T)$ [^5] as a function of $U$ for temperatures $0.05\leq T\leq 0.55$ in units of the hopping $\tilde{t}=1$; here $g$ is the impurity Green’s function. The lines show inflection points at elevated $U$, which indicate the interaction $U_M(T)$ of the Mott crossover/transition. Below $U_M(T)$ Fig.
\[fig:gtau\] shows, for lower temperatures, that the spectral weight at the Fermi level increases with $U$. This is a particularity of the triangular lattice, whose quasi-particle peak and van Hove singularity merge near the critical interaction [@Aryanpour06]. In the limit $T\rightarrow0$ the spectral weight at the Fermi level vanishes completely for $U\geq U_M(T=0)\approx 12$ [@Aryanpour06]. The top panel of Fig. \[fig:suscs\] shows the static homogeneous charge susceptibility, $\frac{d\langle n\rangle}{d\mu}=\imath\presuper{\infty\!}X^{\ensuremath{\text{ch}}}$ \[cf. Eq. \]. Data points in the Mott regime tend toward zero with decreasing temperature, whereas those in the metallic regime tend toward finite values. For moderate interaction $\frac{d\langle n\rangle}{d\mu}$ shows an upturn as $T$ is lowered, which occurs near the coherence temperature of the Fermi liquid [@Mezio17]. This pattern crosses the panel diagonally from the bottom left to the top right (arrow); near $U_M(T)$ it leads to a $T$-driven insulator-to-metal crossover. Above $U_M(T=0)$ a re-entry into the Fermi liquid at low temperature no longer occurs. At $T=0$ the charge susceptibility then vanishes exactly [@Werner07]. The bottom panel of Fig. \[fig:suscs\] shows the inverse of the spin susceptibility, $\frac{d\langle m\rangle}{dh}=\imath\presuper{\infty\!}X^{\ensuremath{\text{sp}}}$. The latter does not vanish in the Mott phase and appears to be unaffected by the transition, in agreement with early DMFT results [@Rozenberg94]. This can be seen well for $U=10, 10.5$, and $11$, where $\frac{d\langle m\rangle}{dh}$ changes continuously at the $T$-driven crossover.

The discontinuity {#sec:rplus}
-----------------

We verify in Fig. \[fig:rlat\] that the singular value $R(\nu=0)=-2\pi\imath Z \,\text{DOS}(\nu=0)\delta(0)$ of the local discontinuity \[cf. Eq. \] vanishes at the Mott transition.
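The spectral-weight estimator $D(0)\approx-g(\tau=1/2T)/(\pi T)$ used above follows from $g(\beta/2)=-\int d\nu\,A(\nu)/[2\cosh(\nu/2T)]$: the kernel has total weight $\pi T$ and narrows around the Fermi level as $T\rightarrow0$. A minimal numerical check, with a purely illustrative Lorentzian spectral function (our assumption, not a DMFT spectrum):

```python
import numpy as np

def g_half_beta(A, T, grid):
    """g(tau = beta/2) = -integral dnu A(nu) / (2 cosh(nu/(2T))),
    evaluated by the rectangle rule on a uniform frequency grid."""
    x = np.clip(grid / (2.0 * T), -700.0, 700.0)  # avoid cosh overflow
    dnu = grid[1] - grid[0]
    return -np.sum(A(grid) * 0.5 / np.cosh(x)) * dnu

# illustrative spectral function: Lorentzian of half-width gamma
gamma = 0.5
A = lambda nu: (gamma / np.pi) / (nu**2 + gamma**2)

nu = np.linspace(-50.0, 50.0, 200001)
for T in (0.5, 0.1, 0.02):
    est = -g_half_beta(A, T, nu) / (np.pi * T)
    print(T, est)   # approaches A(0) = 1/(pi*gamma) as T -> 0
```

As $T$ decreases, the estimator converges to the true spectral weight $A(0)$, which justifies its use as a finite-temperature proxy for the DOS at the Fermi level.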
To calculate this quantity at finite temperature we use the approximation $\text{DOS}(\nu=0)=-g(\tau=1/2T)/T$, as before, and $Z^{-1}\approx 1-\frac{\Im\Sigma(\pi T)}{\pi T}$ \[cf. Eq. \]. Similar to the charge susceptibility in Fig. \[fig:suscs\], Fig. \[fig:rlat\] shows the re-entry into the Fermi liquid at low temperature for $U<U_M(T=0)$. For insulating solutions the behavior of $R(\nu=0)$ is consistent with its vanishing at $T=0$ for $U>U_M(T=0)$.

\[Figure \[fig:rlat\]: (Color online) The absolute value of the discontinuity $R(\nu=0)=-2\pi\imath Z \,\text{DOS}(\nu=0)\delta(0)$ in DMFT calculations \[cf. Eq. \] at finite temperature. At $T=0$ the discontinuity is finite in the metal and zero in the Mott insulator. Labels as in Fig. \[fig:gtau\].\]

Divergence of the three-leg vertex {#sec:lambdadiv}
----------------------------------

We verify the divergence of the dynamic limit of the three-leg vertex $\presuper{0\!}\Lambda=Z^{-1}$ at the Mott transition. Since this is an analytical statement, it is certain that this divergence occurs as $Z\rightarrow0$. We show in the following that it is a direct consequence of the Ward identity.

\[Figure \[fig:localward\]: (Color online) Imaginary part of the Ward identity  in the metal (left) and in the Mott insulator (right) for several fixed values of the bosonic frequency $\omega_m=2m\pi T$. Discrepancies between the left-hand-side (lines) and right-hand-side (symbols) are due to numerical noise. Black circles mark data points at $\nu_n=-\omega_m/2$, where $\Sigma(\nu_n+\omega_m)-\Sigma({\nu_n})=-2\imath\Im\Sigma(\nu_n)$, which diverges in the Mott phase as $T\rightarrow0$. (Square lattice; figure reprinted from Ref. [@Hafermann14-2].)\]

In DMFT the self-consistency condition  leads to the equivalence of the Ward identity of the Hubbard model  with the Ward identity of the Anderson impurity model (AIM), which is a local relation [@Hafermann14-2; @Krien17], $$\begin{aligned} \Sigma_{\nu+\omega}-\Sigma_{\nu}=T\sum_{\nu'}\gamma^\alpha_{\nu\nu'\omega}[g_{\nu'+\omega}-g_{\nu'}].\label{eq:localward}\end{aligned}$$ Here, $\gamma^\alpha$ is the two-particle self-energy of the AIM and $g$ is the impurity Green’s function. We note that both $\gamma^{\ensuremath{\text{ch}}}$ and $\gamma^{\ensuremath{\text{sp}}}$ satisfy this equation. Fig. \[fig:localward\] shows a numerical validation of Eq.  in the metal (left panel) and in the insulator (right panel). Note that the figure corresponds to a DMFT calculation for the square lattice from Ref. [@Hafermann14-2] at $T=0.08$ in our units of the hopping ($\tilde{t}=1$). In order to demonstrate the significance of the Ward identity for the divergence of $\presuper{0\!}\Lambda$ at the Mott transition, we have marked with black circles in Fig. \[fig:localward\] those combinations of the Matsubara frequencies $\nu$ and $\omega$ that satisfy the constraint $\omega=-2\nu$. Evaluating the left-hand-side of Eq.  at these points simply yields $-2\imath\Im\Sigma(\nu)$. The extrapolation of the marked points in the left and right panels of Fig. \[fig:localward\] to the Fermi level therefore directly indicates the metallic and insulating regimes, respectively: $-2\imath\Im\Sigma(\nu)$ extrapolates to zero in the left panel and to $-\infty$ in the right panel. This indicates, of course, that the spectral weight at the Fermi level vanishes (notwithstanding residual incoherent spectral weight due to thermal excitations), and correspondingly $Z\rightarrow0$. However, the Ward identity  is a relation between the one- and two-particle self-energies $\Sigma$ and $\gamma$.
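At the one-particle level, the connection between the Ward identity and the quasi-particle weight can be made concrete with the low-frequency Fermi-liquid form of the Matsubara self-energy, $\Sigma(\imath\nu)\approx\imath\nu(1-1/Z)$ (a toy form we assume here, not the DMFT data): the combination $1-[\Sigma_{\nu+\omega}-\Sigma_{\nu}]/(\imath\omega)$ built from the left-hand-side of the Ward identity then evaluates to $1/Z$, which diverges as $Z\rightarrow0$. A minimal sketch:

```python
import numpy as np

def lam(sigma, nu, omega):
    """Vertex combination 1 - [Sigma(nu+omega) - Sigma(nu)] / (i*omega)."""
    return 1.0 - (sigma(nu + omega) - sigma(nu)) / (1j * omega)

Z = 0.25
T = 0.01
# toy low-frequency Fermi-liquid self-energy on the Matsubara axis
sigma = lambda nu: 1j * nu * (1.0 - 1.0 / Z)

nu_bar = -np.pi * T       # fermionic Matsubara frequency closest to the Fermi level
omega_1 = 2 * np.pi * T   # first bosonic Matsubara frequency
print(lam(sigma, nu_bar, omega_1).real)   # -> 1/Z = 4.0
```

For this linear toy self-energy the result is exactly $1/Z$, independent of $T$; as $Z\rightarrow0$ the combination diverges.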
We can therefore expect to find divergences at the Mott transition and within the Mott phase also at the two-particle level. Indeed, we show in Appendix \[app:lambdadyn\] that Eq.  directly implies, $$\begin{aligned} \Lambda^\alpha_{\nu}({\ensuremath{\mathbf{q}}}_0,\omega)=&1-\frac{\Sigma_{\nu+\omega}-\Sigma_{\nu}}{\imath\omega}.\label{eq:lambdaward}\end{aligned}$$ As discussed in Sec. \[sec:extrapolation\], we can evaluate this relation at $\bar{\nu}=-\pi T$ and $\omega_1=2\pi T$ and take the limit $T\rightarrow0$ to recover the dynamic limit of the causal three-leg vertex at the Fermi level, $$\begin{aligned} \Lambda^{m,\alpha}_{\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_1)=\presuper{0\!}\Lambda^{c,\alpha}_{\nu=0}=&\frac{1}{Z}, \;\;\;T\rightarrow0.\end{aligned}$$ Here, the labels $m$ and $c$ indicate the Matsubara or the causal three-leg vertex, respectively. These should not be confused, since in general an analytical continuation is required to recover the causal vertex $\Lambda^c$ from the Matsubara vertex $\Lambda^m$ \[see Appendix \[app:ac:ac\]\]. As a consequence of the Ward identity , DMFT captures the divergence of $\presuper{0\!}\Lambda^{c}$ at the critical interaction $U_M(T=0)$ of the Mott transition. The divergence occurs both in the charge and in the spin channel.

Landau parameter {#sec:lparm}
----------------

\[Figure \[fig:fdyn\_ch\]: (Color online) Top: The symmetric Landau parameter $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ calculated from Eq.  (bold lines) as a function of temperature. The dashed lines indicate $Z^2 D^*(0)F^{\ensuremath{\text{ch}}}_{\bar{\nu}\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_1)$, where $\bar{\nu}=-\pi T$ and $\omega_1=2\pi T$, which agrees with $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ at small $T$. Bottom: The anti-symmetric Landau parameter $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$. Diamonds indicate metallic solutions, small dots insulating ones. The inset shows $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$ as a function of $U$ for $T=0.05$.\]

\[Figure \[fig:scaling\]: (Color online) Scaling of the symmetric Landau parameter with the quasi-particle weight $Z$ at $T=0.15$ (blue diamonds). The blue line indicates a fit of $(\mathfrak{f}^{\,{\ensuremath{\text{ch}}}})^{-1}$ with the function $aZ^{b}$, where $b\approx2.19(3)$. Open green circles show the quantity $Z^2D^*(0)\Re F^{{\ensuremath{\text{ch}}}}_{\bar{\nu}\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_1)$ \[cf. Fig. \[fig:fdyn\_ch\]\]; the fit of its *inverse* yields $b\approx2.01(1)$ (green line), which confirms that $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}\propto Z^{-2}$ \[see text\]. Black squares show $\frac{d\langle n\rangle}{d\mu}$; the fit yields $b\approx1.22(8)$ (black line). Gray squares show the forward scattering vertex $\presuper{\infty\!}F^{{\ensuremath{\text{ch}}}}_{00}$ calculated from Eq. ; the fit of its *inverse* yields $b\approx1.04(1)$. Full red circles show the static Matsubara vertex $\Re F^{\ensuremath{\text{ch}}}_{\bar{\nu}\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_0=0)$. All fits were performed for $0\leq Z\leq0.3$.\]

We evaluate the Landau parameter $\mathfrak{f}$ defined in Eq.  using Eq.
, which allows us to calculate $\mathfrak{f}$ from the quasi-particle weight $Z$, from the quasi-particle DOS $D^*(0)=Z^{-1}D(0)$, where $D(0)$ is the *non-interacting* DOS, and from the total response $\presuper{\infty\!}X^\alpha$ as, $$\begin{aligned} \mathfrak{f}^{\,\alpha}=-1-2\imath D^*(0)/\presuper{\infty\!}X^\alpha.\label{eq:ffromx}\end{aligned}$$ Bold lines in the top panel of Fig. \[fig:fdyn\_ch\] show the symmetric Landau parameter $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$, which grows rapidly and monotonically with the interaction $U$. As a function of the temperature, $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ extrapolates towards finite values in the Fermi liquid (diamonds), while insulating solutions (small dots) are consistent with a divergence at $T=0$ above $U_M(T=0)$. For comparison, dashed lines in Fig. \[fig:fdyn\_ch\] also show $Z^2 D^*(0)\Re F^{\ensuremath{\text{ch}}}_{\bar{\nu}\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_1)$, where $F^{\ensuremath{\text{ch}}}$ is the Matsubara vertex function at ${\ensuremath{\mathbf{q}}}_0=\mathbf{0}$ and at the finite bosonic frequency $\omega_1=2\pi T$. This quantity agrees remarkably well with $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ at low temperature for all interactions, as the finite Matsubara frequency $\bar{\nu}=-\pi T$ approaches the Fermi level. The bottom panel of Fig. \[fig:fdyn\_ch\] shows the anti-symmetric Landau parameter $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$, which remains small compared to $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ and depends non-monotonically on $U$ and $T$. For insulating solutions (small dots) the computed $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$ shows a trend towards divergence at $T=0$. At low temperature this trend can also be observed in the metallic solutions; see the inset of Fig. \[fig:fdyn\_ch\]. The expression for the Landau parameter in Eq.  is rigorous only at $T=0$. 
Within the temperature range of our calculations a quantitative analysis of $\mathfrak{f}$ is therefore reliable only at small to moderate $U$, where its temperature dependence is weak enough for an extrapolation to $T=0$ \[see Fig. \[fig:fdyn\_ch\]\]. However, our data allow us to make several qualitative statements: (i) Both $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ and $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$ are strictly larger than $-1$, which means that Pomeranchuk instabilities do not occur, as expected. (ii) The trend towards a divergence at $T=0$ at the Mott transition is much stronger in the symmetric Landau parameter $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ than in the anti-symmetric $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$. This is consistent with the discussion in Sec. \[sec:coherent:mott\], which implies a scaling of these quantities with the quasi-particle weight as $\propto Z^{-2}$ and $\propto Z^{-1}$, respectively. (iii) Fig. \[fig:scaling\] shows that at $T=0.15$ the symmetric Landau parameter $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ indeed scales roughly $\propto Z^{-2}$. Fig. \[fig:scaling\] also shows $Z^2D^*(0)F^{\ensuremath{\text{ch}}}_{\bar{\nu}\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_1)$, which is in good agreement with $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ according to Fig. \[fig:fdyn\_ch\], and which accurately confirms the $\propto Z^{-2}$ scaling of $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$. The agreement that we find between $Z^2D^*(0)F^{\ensuremath{\text{ch}}}_{\bar{\nu}\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_1)$ and $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}=\imath Z^2 D^*(0)\presuper{0\!}F^{\ensuremath{\text{ch}}}_{00}$ is non-trivial, since in general an analytical continuation is required to recover the dynamic vertex $\presuper{0\!}F^{\ensuremath{\text{ch}}}_{00}$ from the Matsubara frequencies. Apparently, however, these quantities are directly related in the limit $T\rightarrow0$. 
We also discuss the divergence of the static charge vertex function that was predicted in Sec. \[sec:coherent:mott\]. To this end, we solve Eq.  for $\presuper{\infty\!}F^{\ensuremath{\text{ch}}}_{00}$, $$\begin{aligned} \imath\presuper{\infty\!}F^{\ensuremath{\text{ch}}}_{00}=\frac{1}{Z^2 D^*(0)}+\frac{\presuper{\infty\!}X^{\ensuremath{\text{ch}}}}{2\imath[ZD^*(0)]^2}.\label{eq:f00fromx}\end{aligned}$$ This quantity is marked in Fig. \[fig:scaling\] with gray squares and scales $\propto Z^{-1}$, whereas black squares indicate the total charge response $\presuper{\infty\!}X^{\ensuremath{\text{ch}}}$, which indeed vanishes simultaneously $\propto Z$. Red circles in Fig. \[fig:scaling\] also mark our result for the static Matsubara vertex $\Re F^{\ensuremath{\text{ch}}}_{\bar{\nu}\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_0)$ for $\bar{\nu}=-\pi T$, which, however, shows an unexpected scaling of roughly $\propto Z^{-3}$. The mismatch with $\imath\presuper{\infty\!}F^{\ensuremath{\text{ch}}}_{00}$ may arise from a subtlety in the analytical continuation of the vertex function. To perform the latter, $F$ has to be considered within up to $8$ separate analytical regions of the $\mathbb{C}^3$-space spanned by its three frequency indices [@Oguri01; @Eliashberg61] \[see also Appendix \[app:ac:ac\]\]. It can be expected that the value of $\presuper{\infty\!}F^{\ensuremath{\text{ch}}}_{00}$ at the Fermi level is recovered at low temperature as a combination of several matrix elements $F^{\ensuremath{\text{ch}}}_{\nu\nu'}({\ensuremath{\mathbf{q}}}_0,\omega_0)$ of the Matsubara vertex, for example, $\nu=\pm\pi T, \nu'=\pm\pi T$. A cancellation of the $\propto Z^{-3}$ dependence of $F$ may therefore occur. Lastly, we note that among the divergences indicated in Fig. \[fig:scaling\] the one of $F^{\ensuremath{\text{ch}}}_{\bar{\nu}\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_0)$ was the most difficult to verify. 
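The $Z$-scaling implied by the expression just given for $\imath\presuper{\infty\!}F^{\ensuremath{\text{ch}}}_{00}$ can be illustrated with a minimal numeric sketch. The non-interacting DOS $D(0)$ and the coefficient $a$ in the ansatz $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}=a/Z^2$ below are our own toy assumptions, not computed data; $D^*(0)=Z^{-1}D(0)$ and $\presuper{\infty\!}X^{\ensuremath{\text{ch}}}=-2\imath D^*(0)/(1+\mathfrak{f}^{\,{\ensuremath{\text{ch}}}})$ are taken from the Fermi liquid relations of the text:

```python
# Minimal numeric sketch (toy parameters of our choosing) of the Z-scaling of
# the static vertex:  i F00 = 1/(Z^2 D*(0)) + X / (2i [Z D*(0)]^2),
# with D*(0) = D(0)/Z, f_ch = a/Z^2 and X = -2i D*(0)/(1 + f_ch).
D0, a = 0.5, 1.0   # assumed non-interacting DOS and Landau-parameter coefficient

def static_vertex(Z):
    Dstar = D0 / Z                      # quasi-particle DOS, D*(0) = D(0)/Z
    f_ch = a / Z**2                     # symmetric Landau parameter ~ Z^{-2}
    X = -2j * Dstar / (1.0 + f_ch)      # total static response, vanishes ~ Z
    return 1.0 / (Z**2 * Dstar) + X / (2j * (Z * Dstar)**2)

# Z * iF00 tends to the constant 1/D0 as Z -> 0, i.e. iF00 diverges ~ Z^{-1}:
for Z in (1e-1, 1e-2, 1e-3):
    print(Z, (Z * static_vertex(Z)).real)
```

With these assumed inputs the product $Z\cdot\imath\presuper{\infty\!}F^{\ensuremath{\text{ch}}}_{00}$ approaches $1/D(0)$, consistent with the $\propto Z^{-1}$ divergence read off from Fig. \[fig:scaling\].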
In our CTQMC calculations at low temperature the deviation of the density $\langle n\rangle$ from half filling had to be kept below $10^{-6}$; otherwise the static vertex often showed a sign change. Furthermore, a scaling analysis was not possible in the spin channel, since the divergences of $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$ and $\presuper{\infty\!}F^{\ensuremath{\text{sp}}}_{00}$ predicted in Sec. \[sec:coherent:mott\] for $T=0$ are apparently visible only at very low temperature $T\lesssim0.05$; see also the following Sec. \[sec:eigenvalues\].

Character of the divergent scatterings {#sec:eigenvalues}
--------------------------------------

We have seen in Sec. \[sec:lambdadiv\] that a divergence of the dynamic three-leg vertex $\presuper{0\!}\Lambda$ occurs as $Z\rightarrow0$. In fact, this divergence can only occur when the dynamic vertex function $\presuper{0\!}F$ also diverges, since the latter gives rise to $\presuper{0\!}\Lambda$ by attaching a bubble \[cf. Fig. \[fig:3leg\] a)\], which is finite. Here we consider the leading eigenvalue of the Bethe-Salpeter equation , which was used to calculate the vertex function $F^\alpha_{\nu\nu'}({\ensuremath{\mathbf{q}}}_0,\omega)$. This will reveal the driving factors behind its divergence at a finite bosonic frequency; we consider $\omega=\omega_1=2\pi T$.

[Figure \[fig:ev\_dyn\] (Color online): Top: The leading eigenvalue $\Re\lambda_\text{max}$ of the Bethe-Salpeter equation  for $q=({\ensuremath{\mathbf{q}}}_0,\omega_1)$ in the charge (upper data set) and spin (lower data set) channel. We used the maximum $d\Re\lambda_\text{max}(U)/dU=0$ of the charge channel to distinguish metallic (diamonds) and insulating solutions (small dots) in Figs. \[fig:gtau\], \[fig:suscs\], \[fig:rlat\], and \[fig:fdyn\_ch\]. Bottom: The charge vertex $F^{\ensuremath{\text{ch}}}_{\bar{\nu}\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_1)$ (blue) and the impurity vertex $f^{\ensuremath{\text{ch}}}_{\bar{\nu}\bar{\nu}\omega_1}$ (dashed red) for $T=0.15$. Notice the steep increase beyond $U_M(T=0.15)\gtrsim10.5$, while $\Re\lambda_\text{max}$ in the upper panel drops (arrows).]

The Bethe-Salpeter equation  represents the repeated application of the $\nu,\nu'$-matrix $A^\alpha_{\nu\nu'}({\ensuremath{\mathbf{q}}},\omega)=Tf^\alpha_{\nu\nu'\omega}\tilde{X}^0_{\nu'}({\ensuremath{\mathbf{q}}},\omega)$ upon itself. Here, $f$ is the impurity vertex function and $\tilde{X}^0$ is the non-local bubble defined below Eq. . A divergence of the lattice vertex function $F$ may occur for two reasons: (i) the leading eigenvalue of the matrix $A$ approaches unity; (ii) the impurity vertex function $f$ diverges. The top panel of Fig. \[fig:ev\_dyn\] shows the leading eigenvalue $\Re\lambda_\text{max}$ of $A^\alpha({\ensuremath{\mathbf{q}}}_0,\omega_1)$ as a function of the interaction $U$. The upper set of lines belongs to the charge channel $\alpha={\ensuremath{\text{ch}}}$, the lower set to the spin channel $\alpha={\ensuremath{\text{sp}}}$. For each temperature $T$ the curve $\Re\lambda_\text{max}(U)$ has a clearly defined maximum that lies at smaller $U$ for larger $T$. We will argue in the following that this maximum lies at the critical interaction $U_M(T)$ of the Mott transition/crossover. Let us first consider approaching the Mott transition from the Fermi-liquid side at $T=0$. On this side the divergence of $F({\ensuremath{\mathbf{q}}}_0,\omega_1)$ must be caused by $\Re\lambda_\text{max}\rightarrow1$, since the building blocks $f$ and $\tilde{X}^0$ of the Bethe-Salpeter equation are finite in the Fermi liquid. The top panel of Fig. \[fig:ev\_dyn\] shows that for $T=0.05$ the leading eigenvalue is indeed very close to unity as $U\rightarrow U_M(T=0.05)\lesssim 12$. 
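The eigenvalue mechanism can be mimicked with a small toy model (all matrix entries below are our own illustrative choices, not DMFT data): summing the geometric Bethe-Salpeter series $F=f+fA+fA^2+\dots=(\mathbb{1}-A)^{-1}f$ shows that the summed vertex grows without bound as the leading eigenvalue of $A$ approaches unity, although $f$ itself stays finite:

```python
import numpy as np

# Toy illustration (matrix entries are our own choice, not DMFT data):
# the Bethe-Salpeter series F = f + f A + f A^2 + ... sums to
# F = (1 - A)^{-1} f, where A plays the role of T f X0 in the text.
f = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # stand-in for the impurity vertex
x0 = np.diag([0.2, 0.3])              # stand-in for the non-local bubble

def leading_eig_and_vertex(s):
    """Leading eigenvalue of A = s*f@x0 and the summed lattice vertex."""
    A = s * f @ x0
    lam_max = max(np.linalg.eigvals(A).real)
    F = np.linalg.solve(np.eye(2) - A, s * f)
    return lam_max, np.abs(F).max()

lam1, F1 = leading_eig_and_vertex(1.0)    # leading eigenvalue well below 1
lam2, F2 = leading_eig_and_vertex(1.3)    # leading eigenvalue close to 1
# F2 >> F1: the summed vertex diverges as lam_max -> 1 while f stays finite.
```

In the DMFT data of Fig. \[fig:ev\_dyn\] the interaction $U$ plays the role of the coupling scale here: on the metallic side the leading eigenvalue approaches unity as $U\rightarrow U_M$.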
The proximity of $\Re\lambda_\text{max}$ to unity shows that on the Fermi-liquid side of the transition the driving force behind the divergence is a series of many scattering events at different lattice sites. This can be understood by considering the Bethe-Salpeter equation in real space, where it connects the local vertices $f(i)$ and $f(j)$ at lattice sites $i$ and $j$ via the non-local DMFT Green’s function $G_{ij}-g\delta_{ij}$. This is shown in Fig. \[fig:bse\_realspace\]. Let us now consider entering the Mott phase. Within this phase the dynamic vertices must remain divergent due to $\presuper{0\!}\Lambda=Z^{-1}$, since $Z$ is zero throughout the insulator. The question is therefore which mechanism sustains the divergence for $U>U_M(T=0)$. Mathematically this could be achieved if $\Re\lambda_\text{max}$ were exactly unity everywhere in the insulator. However, our DMFT results in Fig. \[fig:ev\_dyn\] at finite temperature suggest that the leading eigenvalue $\Re\lambda_\text{max}(U)$ *decreases* beyond $U_M(T)$. We therefore propose a scenario for $T=0$ in which $\Re\lambda_\text{max}=1$ is realized exactly only at $U=U_M(T=0)$. Beyond this point the Bethe-Salpeter equation no longer diverges due to scattering events at different lattice sites but because each of its building blocks $f$ diverges for $U>U_M(T=0)$.

[Figure \[fig:bse\_realspace\]: Diagrammatic real-space Bethe-Salpeter equation, $F = f(i) + f(i)\!\cdot\! f(j) + \dots$, in which the local vertices at different lattice sites are connected by non-local Green’s function lines.]

This scenario seems likely because we find at finite temperature that the drop of the leading eigenvalue at $U_M(T)$ does not lead to a decrease in $F^\alpha_{\bar{\nu}\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_1)$, where $\bar{\nu}=-\pi T$. 
This can be seen for $\alpha={\ensuremath{\text{ch}}}$ in the lower panel of Fig. \[fig:ev\_dyn\] for $T=0.15$. In fact, $F^{\ensuremath{\text{ch}}}_{\bar{\nu}\bar{\nu}}({\ensuremath{\mathbf{q}}}_0,\omega_1)$ grows even faster above $U_M(T=0.15)\gtrsim10.5$. The driving factor must therefore be the impurity vertex function $f$. The lower panel of Fig. \[fig:ev\_dyn\] also shows its matrix element $f^{\ensuremath{\text{ch}}}_{\bar{\nu}\bar{\nu}\omega_1}$, which indeed increases steeply at $U_M(T=0.15)$. We also verified that the ratio of $F^{\ensuremath{\text{ch}}}$ to $f^{\ensuremath{\text{ch}}}$ decreases above $U_M(T=0.15)$, which shows that vertex corrections contribute less and less in the insulating regime. The Ward identity $\presuper{0\!}\Lambda^{\ensuremath{\text{sp}}}=Z^{-1}$ implies that the divergence should also occur in the spin channel. However, the lower data set in the top panel of Fig. \[fig:ev\_dyn\] shows that we did not reach sufficiently low temperatures to achieve $\Re\lambda_\text{max}\lesssim1$. We note that the two-particle self-energy $\gamma$ is often used to solve the Bethe-Salpeter equation \[cf. Appendix, Eq. \]. Here we used the impurity vertex function $f$ instead to solve Eq. , because $\gamma$ shows divergences that do not occur at the Mott transition [@Schafer13] and that have also been found in the Hubbard atom [@Thunstroem18], which we discussed in Sec. \[sec:coherent:atom\].

Summary and Discussion {#sec:discussion}
======================

Over the course of this work we have highlighted the important role that two-particle quantities play in Fermi liquid theory. An example is the response function $L_{kq}$, which describes the response of individual electronic states with momentum and energy $k=({\ensuremath{\mathbf{k}}},\nu)$ to an applied field; its integral over $k$ yields the susceptibility $X_q$ \[see Fig. \[fig:3leg\] b)\]. 
Often one is interested in the static homogeneous response, that is, the response to a time-independent homogeneous field. Of particular importance is therefore the forward scattering limit $q\rightarrow0$, where $q$ comprises the transferred momentum ${\ensuremath{\mathbf{q}}}$ and energy $\omega$ of particle-hole scatterings. However, in the Fermi liquid the forward scattering limit is ambiguous, since the limits ${\ensuremath{\mathbf{q}}}\rightarrow\mathbf{0}$ and $\omega\rightarrow0$ do not commute, which is a consequence of the pole of weight $Z$ at the Fermi level \[cf. Secs. \[sec:flt0:disc\] and \[sec:landauparm\]\]. One refers to the two ambiguous forward scattering limits as the static and the dynamic homogeneous limit, respectively. One may say that the main line of thought in the derivation [@Landau80; @Noziere97; @Abrikosov75] of Fermi liquid theory is to express the physical static homogeneous limit of several two-particle quantities in terms of the unphysical dynamic homogeneous limit. The latter is then treated as a free parameter. For example, closely related to the response function $L$ is the three-leg vertex $\Lambda$ \[see Fig. \[fig:gglambda\]\]. Its static limit $\presuper{\infty\!}\Lambda$ can be expressed in terms of the dynamic limit $\presuper{0\!}\Lambda$; the relation between these objects is also called Boltzmann’s equation \[cf. Eq. \]. The dynamic three-leg vertex can be calculated using Ward’s identity; at the Fermi level this yields $\presuper{0\!}\Lambda=Z^{-1}$, where $Z$ is assumed to be known, for example, from experiment. In turn, the three-leg vertex $\Lambda$ arises from the vertex function $F$ \[see Fig. \[fig:3leg\] a)\]. The static vertex $\presuper{\infty\!}F$ can be expressed in terms of the dynamic vertex $\presuper{0\!}F$, which defines the Landau parameter $\mathfrak{f}\propto \presuper{0\!}FZ^2$ \[see Sec. \[sec:landauparm\]\], also assumed to be known. 
As a result, the quasi-particle weight $Z$ and the Landau parameter $\mathfrak{f}$ are the only free parameters of the Fermi liquid theory. We applied the DMFT approximation to the Bethe-Salpeter equation and arrived at the well-known Fermi liquid relation for the total static response \[see Sec. \[sec:flparms\]\], $\presuper{\infty\!}X^\alpha=-2\imath D^*(0)/(1+\mathfrak{f}^{\,\alpha})$, where $D^*(0)$ is the quasi-particle DOS at the Fermi level and $\mathfrak{f}$ the Landau parameter. In DMFT one routinely calculates the total static response $\presuper{\infty\!}X$. Thus, when the latter is known, the Landau parameter can be obtained from the exact expression, $$\begin{aligned} \mathfrak{f}^{\,\alpha}=-1-2\imath D^*(0)/\presuper{\infty\!}X^\alpha.\label{eq:ffromx_summary}\end{aligned}$$ If $\mathfrak{f}\rightarrow-1$, a Pomeranchuk instability towards phase separation $(\alpha={\ensuremath{\text{ch}}})$ or ferromagnetism $(\alpha={\ensuremath{\text{sp}}})$ occurs. This criterion is of course equivalent to the divergence of the total response $\presuper{\infty\!}X^\alpha$ in Eq. . We considered the fate of the Fermi liquid parameters at the interaction-driven Mott transition at zero temperature, where $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ diverges $\propto Z^{-2}$ and $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$ diverges $\propto Z^{-1}$. Our DMFT calculations at finite temperature confirmed the result for $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ \[see Sec. \[sec:lparm\]\], while our eigenvalue analysis of the Bethe-Salpeter equation showed that the divergences associated with the spin channel are visible only at much lower temperatures than in the charge channel \[cf. Sec. \[sec:eigenvalues\]\]. Remarkably, in order for the total charge response $\frac{d\langle n\rangle}{d\mu}$ to vanish at the Mott transition, it is required that the forward scattering amplitude $\presuper{\infty\!}F^{\ensuremath{\text{ch}}}_{00}$ diverges $\propto Z^{-1}$ \[cf. Eq. \]. 
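As a sanity check, the relation between $\mathfrak{f}$ and $\presuper{\infty\!}X$ can be inverted in a one-line round trip (the numbers below are arbitrary assumed values, not computed data): starting from a Landau parameter, building the response $\presuper{\infty\!}X=-2\imath D^*(0)/(1+\mathfrak{f})$, and applying the inversion recovers $\mathfrak{f}$ exactly:

```python
# Round-trip check of f = -1 - 2i D*(0)/X with X = -2i D*(0)/(1 + f).
# D*(0) and f_in are arbitrary assumed values, not computed data.
Dstar = 0.5                       # quasi-particle DOS at the Fermi level
f_in = 2.0                        # assumed Landau parameter

X = -2j * Dstar / (1.0 + f_in)    # total static homogeneous response
f_out = -1.0 - 2j * Dstar / X     # inversion recovering the Landau parameter

# f_out equals f_in; as f_in -> -1 the response X diverges, which is the
# Pomeranchuk criterion stated in the text.
```

The design choice of treating $D^*(0)$ and $\presuper{\infty\!}X$ as the inputs mirrors the DMFT workflow described above, where both are routinely computed.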
A further peculiar relation followed for the response function $L$ \[see Sec. \[sec:coherent:mott\]\]. At the Mott transition, and presumably within the entire Mott phase, the response of the Hubbard bands to the chemical potential is given by the dynamic response, $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}=\presuper{0\!}L^{{\ensuremath{\text{ch}}}}$. According to the Ward identity it follows that \[cf. Sec. \[sec:wardidt0\]\], $$\begin{aligned} \frac{dG_{{\ensuremath{\mathbf{k}}}\nu}}{d\mu}=\frac{dG_{{\ensuremath{\mathbf{k}}}\nu}}{d\nu},\;\;\;\;\;\;(T=0),\label{eq:finalresult}\end{aligned}$$ where $G$ is the causal Green’s function, $\mu$ the chemical potential, and $\nu$ the real frequency. The physical background of Eq.  is that in the Fermi liquid $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}$ and $\presuper{0\!}L^{{\ensuremath{\text{ch}}}}$ differ by a coherent quasi-particle contribution, which vanishes at the Mott transition. In the spin channel the equivalence of $\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}$ and $\presuper{0\!}L^{{\ensuremath{\text{sp}}}}$ does not occur. In the interacting Fermi liquid both the coherent quasi-particles and the incoherent states contribute to the change of the magnetization due to the magnetic field $h$. At the Mott transition the coherent contribution vanishes, while the incoherent one does not. As a consequence, $\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}$ and $\presuper{0\!}L^{{\ensuremath{\text{sp}}}}$ are different in the Mott phase. We verified that $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}=\presuper{0\!}L^{{\ensuremath{\text{ch}}}}$ and $\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}\neq\presuper{0\!}L^{{\ensuremath{\text{sp}}}}$ hold in the exactly solvable Hubbard atom at $T=0$ \[see Sec. \[sec:coherent:atom\]\]. At the two-particle level the Mott transition is characterized by the divergence of the dynamic homogeneous three-leg vertex $\presuper{0\!}\Lambda$ [^6]. 
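The identity $\frac{dG}{d\mu}=\frac{dG}{d\nu}$ can be made concrete in the atomic limit mentioned above. The sketch below uses the half-filled Hubbard atom inside its Mott plateau, where the propagator has poles at $\nu=-\mu$ and $\nu=U-\mu$ with weight $1/2$ each and therefore depends on $\nu$ and $\mu$ only through the combination $\nu+\mu$; the values of $U$, the broadening $\eta$, and the retarded (rather than causal) prescription are simplifying assumptions of ours:

```python
# Sketch of dG/dmu = dG/dnu in the atomic limit (U, eta and the retarded
# broadening are our simplifying choices).  Inside the Mott plateau the
# half-filled atom has poles at nu = -mu and nu = U - mu, weight 1/2 each,
# so G depends on nu and mu only through nu + mu.
U, eta = 4.0, 0.1

def G(nu, mu):
    return 0.5 / (nu + mu + 1j * eta) + 0.5 / (nu + mu - U + 1j * eta)

nu, mu, h = 0.7, U / 2, 1e-5
dG_dmu = (G(nu, mu + h) - G(nu, mu - h)) / (2 * h)
dG_dnu = (G(nu + h, mu) - G(nu - h, mu)) / (2 * h)
# The two central differences coincide, illustrating dG/dmu = dG/dnu.
```

The broadening $\eta$ merely regularizes the poles; the equality holds for any function of $\nu+\mu$, which is exactly what the frozen weights inside the Mott plateau enforce.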
The divergence of the dynamic vertex sets this phase transition apart from the more conventional charge and spin Pomeranchuk instabilities, which are signaled by divergences of the static vertices $\presuper{\infty\!}\Lambda^{\ensuremath{\text{ch}}}=1-\frac{d\Sigma}{d\mu}$ and $\presuper{\infty\!}\Lambda^{\ensuremath{\text{sp}}}=1-\frac{d\Sigma}{dh}$. The latter associate a conjugate field with the respective transition, while $\presuper{0\!}\Lambda=1-\frac{d\Sigma}{d\nu}=Z^{-1}$ does not [^7]. It is nevertheless possible to study the Mott transition in a similar way, by an analysis of the leading eigenvalue of the Bethe-Salpeter equation for a transferred frequency $\omega\neq0$. In our DMFT calculations this analysis reveals the scattering mechanism that drives the divergence of $\presuper{0\!}\Lambda$ at the Mott transition: On the Fermi liquid side the divergence is driven by scatterings at many lattice sites, while in the Mott insulator the scattering amplitude diverges at each site on its own. This shows how smoothly DMFT captures the breakdown of the Fermi liquid picture at the transition point \[Sec. \[sec:eigenvalues\]\]. It follows that the maximum $d\lambda_\text{max}(U)/dU=0$ of the leading eigenvalue of the Bethe-Salpeter equation  may be used to distinguish between the metal and the Mott regime: In the metal the effect of scatterings at many lattice sites increases with $U$; in the Mott regime this effect decreases \[see Fig. \[fig:ev\_dyn\]\]. We find that this criterion is consistent with the drop in the spectral weight at the Fermi level, which is often used to determine the critical interaction $U_M$ of the transition/crossover \[see Fig. \[fig:gtau\]\].

Conclusions {#sec:conclusions}
===========

We have presented a comprehensive analysis of the microscopic Fermi-liquid theory of the single-band Hubbard model and of the Mott-Hubbard transition in the paramagnetic sector. 
In particular, we have characterized the theory completely at the two-particle level, obtaining the Landau parameters that describe the residual interactions between the heavy quasi-particles, whose quasi-particle weight $Z$ vanishes at the Mott transition. We applied the dynamical mean-field theory (DMFT) approximation to the Bethe-Salpeter equation and derived the Fermi liquid expression, $\presuper{\infty\!}X=-2\imath D^*(0)/(1+\mathfrak{f})$, where $\presuper{\infty\!}X$ is the total static homogeneous response function, $D^*(0)$ the quasi-particle density of states at the Fermi level, and $\mathfrak{f}$ the Landau parameter. This well-known result is thus valid in DMFT for an arbitrary lattice dispersion, and it allows one to calculate the Landau parameter explicitly from $D^*(0)$ and $\presuper{\infty\!}X$. Within DMFT the vertex function does not depend on the fermionic momenta, which implies that spatially inhomogeneous deformations of the Fermi surface are not allowed. As a result we have only two Landau parameters, $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ (symmetric) and $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$ (anti-symmetric), which correspond to the lowest-order ($l=0$) Legendre coefficients in the continuum. The two Landau parameters correspond to the two basic Pomeranchuk instabilities of the single-band Hubbard model that can be captured in DMFT, namely uniform charge phase separation and ferromagnetic ordering. In order to obtain Landau parameters of higher order, it would be necessary to account for a momentum dependence of the one- and two-particle self-energies. At the interaction-driven Mott transition at zero temperature we find that the symmetric Landau parameter $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ diverges $\propto Z^{-2}$, where $Z$ is the quasi-particle weight, while the anti-symmetric one $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$ diverges $\propto Z^{-1}$. 
The result for $\mathfrak{f}^{\,{\ensuremath{\text{ch}}}}$ is in agreement with the variational Gutzwiller approach to the interaction-driven metal-insulator transition [@Vollhardt84]. On the other hand, $\mathfrak{f}^{\,{\ensuremath{\text{sp}}}}$ remains finite in the Gutzwiller picture and the homogeneous spin susceptibility diverges, since this approximation, unlike DMFT, does not capture the effective exchange [@Georges96]. The Ward identity implies the divergence of the dynamic three-leg vertex $\presuper{0\!}\Lambda=Z^{-1}$ and of the dynamic vertex function $\presuper{0\!}F$ at the critical interaction $U_M$ of the Mott transition. Our numerical results show that the scattering mechanism that leads to these divergences is non-local on the Fermi liquid side and local on the Mott side of the transition, which allows us to pinpoint the Mott crossover/transition via an eigenvalue analysis of the Bethe-Salpeter equation. An exact result of our analysis is that the vanishing of the total charge response $\presuper{\infty\!}X^{\ensuremath{\text{ch}}}$ at the Mott transition requires the static forward scattering vertex $\presuper{\infty\!}F^{\ensuremath{\text{ch}}}$ to diverge, as predicted in Ref. [@Chitra01], and we find that it scales with the quasi-particle weight as $\propto Z^{-1}$. It is tempting to connect the divergence of the charge vertex $\presuper{\infty\!}F^{\ensuremath{\text{ch}}}$ to the proximity of the Mott insulator to a phase-separation instability of the doped Hubbard model, which can be captured in DMFT by virtue of its frequency-dependent two-particle self-energy [@Yamakawa15; @Nourafkan18]. We speculate that non-local effects beyond DMFT increase the tendency towards phase separation in low-dimensional Hubbard models, in particular in two dimensions. The calculation of the vertex function across the doping-driven Mott transition thus seems to be an appealing outlook. 
However, the finite-doping analysis would require careful handling of the two coexisting solutions that lead to the finite-temperature first-order Mott transition, which we ignored in this work since the metallic solution is stable in its whole range of existence. We further discussed the response of individual electronic states to a change of the chemical potential or magnetic field. The analytical continuation of this response function to the real axis can be done by means of the Ward identity. We showed that at the Mott transition the charge response of the Hubbard bands is given by the dynamic response; hence, $\frac{dG}{d\mu}=\frac{dG}{d\nu}$ in the Mott insulator, where $G$ is the Green’s function, $\mu$ the chemical potential, and $\nu$ the real frequency. We verified that this relation holds in the exactly solvable atomic limit of the Hubbard model. We would like to thank Angelo Valli for his critical reading of the manuscript. F.K. would like to thank Igor Krivenko for sharing his knowledge of analytical continuation, Jernej Mravlje and Rok Žitko for a fruitful discussion about the doped Mott insulator, and Daniele Guerci for discussions regarding the spin susceptibility of the Mott insulator. E.G.C.P. v. L. and M.I.K. acknowledge support from ERC Advanced Grant 338957 FEMTO/NANO. M.C. acknowledges support from the H2020 Framework Programme, under ERC Advanced GA No. 692670 “FIRSTORM”, MIUR PRIN 2015 (Prot. 2015C5SEJJ001) and SISSA/CNR project “Superconductivity, Ferroelectricity and Magnetism in bad metals” (Prot. 232/2015). A.I.L. acknowledges support from the excellence cluster “The Hamburg Centre for Ultrafast Imaging - Structure, Dynamics and Control of Matter at the Atomic Scale” and from the North-German Supercomputing Alliance (HLRN) under project number hhp00040.

Causal Green’s function {#app:gf}
=======================

We relate the causal Green’s function to the retarded, advanced, and Matsubara Green’s functions. 
The causal and the Matsubara Green’s function are defined as, $$\begin{aligned} G^c_{{\ensuremath{\mathbf{k}}}\sigma}(t)=-\imath\langle T_t c_{{\ensuremath{\mathbf{k}}}\sigma}(t)c^\dagger_{{\ensuremath{\mathbf{k}}}\sigma}(0)\rangle,\\ G^m_{{\ensuremath{\mathbf{k}}}\sigma}(\tau)=-\langle T_\tau c_{{\ensuremath{\mathbf{k}}}\sigma}(\tau)c^\dagger_{{\ensuremath{\mathbf{k}}}\sigma}(0)\rangle,\end{aligned}$$ respectively, where $t$ is the real time and $\tau$ the imaginary time. We perform the frequency transforms $G^c(\nu)=\int_{-\infty}^{+\infty} e^{\imath\nu t}G(t)dt$ and $G^m(\nu_n)=\int_{0}^\beta e^{\imath\nu_n\tau}G(\tau)d\tau$, where $\nu$ and $\nu_n$ are real and Matsubara frequency, respectively. The spin label $\sigma$ will be dropped. We further define the greater and lesser Green’s functions, $$\begin{aligned} G^>_{\ensuremath{\mathbf{k}}}(\nu)=&\sum_{ij} w_j |\langle j|c_{\ensuremath{\mathbf{k}}}|i\rangle|^2\delta(\nu-E_i+E_j),\notag\\ G^<_{\ensuremath{\mathbf{k}}}(\nu)=&\sum_{ij} w_i |\langle j|c_{\ensuremath{\mathbf{k}}}|i\rangle|^2\delta(\nu-E_i+E_j),\end{aligned}$$ where $E_i$ and $|i\rangle$ are the eigenenergies and eigenvectors of the Hubbard model , $w_i=\frac{e^{-\beta E_i}}{\mathcal{Z}}$, and $\mathcal{Z}=\sum_i e^{-\beta E_i}$ is the partition sum. The spectral density can be written as, $S_{\ensuremath{\mathbf{k}}}(\nu)=G^>_{\ensuremath{\mathbf{k}}}(\nu)+G^<_{\ensuremath{\mathbf{k}}}(\nu)$. 
We use $S$, $G^>$, and $G^<$ to express the retarded ($r$), advanced ($a$), causal ($c$) and the Matsubara Green’s function ($m$), $$\begin{aligned} G^{c}_{\ensuremath{\mathbf{k}}}(\nu)=&\int_{-\infty}^\infty\left\{\frac{G^>_{\ensuremath{\mathbf{k}}}(\nu')}{\nu-\nu'+\imath0^+}+\frac{G^<_{\ensuremath{\mathbf{k}}}(\nu')}{\nu-\nu'-\imath0^+}\right\}d\nu'\notag,\\ G^{m}_{\ensuremath{\mathbf{k}}}(\nu_n)=&\int_{-\infty}^\infty\frac{S_{\ensuremath{\mathbf{k}}}(\nu')}{\imath\nu_n-\nu'}d\nu'\notag,\\ G^{r/a}_{\ensuremath{\mathbf{k}}}(\nu)=&\int_{-\infty}^\infty\frac{S_{\ensuremath{\mathbf{k}}}(\nu')}{\nu-\nu'\pm\imath0^+}d\nu'.\label{app:retadv}\end{aligned}$$ Here, $0^+$ is a positive infinitesimal real number. The retarded and advanced Green’s functions arise by analytical continuation of $G^m(\imath\nu_n\rightarrow\nu\pm\imath0^+)$ into the upper/lower complex half-plane, respectively. The right superscripts $r,a,c,m$ that are used here must not be confused with the left superscript $\mathfrak{r}=|{\ensuremath{\mathbf{q}}}|/\omega$, nor with the channel label $\alpha$. We express the causal Green’s function in terms of the retarded and advanced ones. Using the identity, $$\begin{aligned} \frac{1}{x+\imath0^+}-\frac{1}{x-\imath0^+}=-2\pi\imath\delta(x),\label{app:dirac}\end{aligned}$$ we reformulate $G^c$ in Eq.  as, $$\begin{aligned} G^{c}_{\ensuremath{\mathbf{k}}}(\nu)=&G^r_{\ensuremath{\mathbf{k}}}(\nu)+2\pi\imath G^<_{\ensuremath{\mathbf{k}}}(\nu)\notag\\ =& G^r_{\ensuremath{\mathbf{k}}}(\nu)+2\pi\imath S_{\ensuremath{\mathbf{k}}}(\nu)n_f(\nu)\notag\\ =& \Re G^r_{\ensuremath{\mathbf{k}}}(\nu)+\imath[1-2n_f(\nu)]\Im G^r_{\ensuremath{\mathbf{k}}}(\nu)\notag\\ =& n_f(-\nu)G^r_{\ensuremath{\mathbf{k}}}(\nu)+n_f(\nu) G^a_{\ensuremath{\mathbf{k}}}(\nu).\label{app:gcgr}\end{aligned}$$ In the first line we used Eq. . 
From the first to the second line we used the fluctuation-dissipation theorem, $G^<_{\ensuremath{\mathbf{k}}}(\nu)=e^{-\beta\nu}G^>_{\ensuremath{\mathbf{k}}}(\nu)=S_{\ensuremath{\mathbf{k}}}(\nu)n_f(\nu)$, where $n_f(\nu)=(e^{\beta\nu}+1)^{-1}$ is the Fermi function. From the second to the third line we used the relation between the spectral density and the retarded Green’s function, $S_{\ensuremath{\mathbf{k}}}(\nu)=-\frac{1}{\pi}\Im G^r_{\ensuremath{\mathbf{k}}}(\nu)$. In the last step we used $1=n_f(-\nu)+n_f(\nu)$ and $G^a=(G^r)^*$. Note that the causal Green’s function is not positive/negative definite and integrates to $\frac{1}{2\pi}\int_{-\infty}^{+\infty}d\nu G^c_{{\ensuremath{\mathbf{k}}}\sigma}(\nu)=\imath[\langle n_{{\ensuremath{\mathbf{k}}}\sigma}\rangle-\frac{1}{2}]$, which can be seen by integrating its Lehmann representation .

Decomposition of the static response {#app:decomp}
====================================

We derive Eqs.  and  in the main text. $k,q$ denote momenta and *real* frequencies; the temperature is zero. We begin with Eq. , which we multiply by the static limit of the bubble $\presuper{\infty\!}G^2_{k'}$ and integrate over $k'$; then we add $1$ to both sides, $$\begin{aligned} &1+\int_{k'}\presuper{\infty\!}F_{kk'}\presuper{\infty\!}G^2_{k'}\\ =&1+\int_{k'}\presuper{0\!}F_{kk'}\presuper{\infty\!}G^2_{k'} +\iint_{k'k''}\presuper{0\!}F_{kk''}R_{k''}\presuper{\infty\!}F_{k''k'}\presuper{\infty\!}G^2_{k'}\notag.\end{aligned}$$ We have dropped the label $\alpha$. We identify the static three-leg vertex $\presuper{\infty\!}\Lambda_{k}=1+\int_{k'}\presuper{\infty\!}F_{kk'}\presuper{\infty\!}G^2_{k'}$ on the left-hand side. In the second term on the right-hand side we express the static limit $\presuper{\infty\!}G^2$ through the discontinuity $R$ and the dynamic limit $\presuper{0\!}G^2$ \[cf. Eq. 
\], $\presuper{\infty\!}G^2_{k'}=\presuper{0\!}G^2_{k'}+R_{k'}$, leading to $$\begin{aligned} \presuper{\infty\!}\Lambda_{k}=&1+\int_{k'}\presuper{0\!}F_{kk'}\presuper{0\!}G^2_{k'}\notag\\ +&\int_{k'}\presuper{0\!}F_{kk'}R_{k'}+\iint_{k'k''}\presuper{0\!}F_{kk''}R_{k''}\presuper{\infty\!}F_{k''k'}\presuper{\infty\!}G^2_{k'}\notag.\end{aligned}$$ In the first line we identify the dynamic three-leg vertex $\presuper{0\!}\Lambda_{k}=1+\int_{k'}\presuper{0\!}F_{kk'}\presuper{0\!}G^2_{k'}$; in the second line we exchange the labels $k'\leftrightarrow k''$ of the double integral and factor out a term $\presuper{0\!}F_{kk'}R_{k'}$, $$\begin{aligned} \presuper{\infty\!}\Lambda_{k}=&\presuper{0\!}\Lambda_{k} +\int_{k'}\presuper{0\!}F_{kk'}R_{k'}\left(1+\int_{k''}\presuper{\infty\!}F_{k'k''}\presuper{\infty\!}G^2_{k''}\right).\label{app:fbvertex}\end{aligned}$$ The term in parentheses yields $\presuper{\infty\!}\Lambda_{k'}$, which leads to Eq.  in the main text, the Boltzmann equation. We multiply Eq.  by $\presuper{\infty\!}G^2_{k}$, which yields the static fermion-boson response function $\presuper{\infty\!}L_k=\presuper{\infty\!}G^2_{k}\presuper{\infty\!}\Lambda_{k}$ on the left-hand-side, $$\begin{aligned} \presuper{\infty\!}L_{k}=&\presuper{\infty\!}G^2_{k}\presuper{0\!}\Lambda_{k}+\presuper{\infty\!}G^2_{k}\int_{k'}\presuper{0\!}F_{kk'}R_{k'}\presuper{\infty\!}\Lambda_{k'}\notag.\end{aligned}$$ We use $\presuper{\infty\!}G^2_{k}=\presuper{0\!}G^2_{k}+R_{k}$ in both terms on the right-hand-side, $$\begin{aligned} \presuper{\infty\!}L_{k}=&\presuper{0\!}G^2_{k}\presuper{0\!}\Lambda_{k}+\presuper{0\!}G^2_{k}\int_{k'}\presuper{0\!}F_{kk'}R_{k'}\presuper{\infty\!}\Lambda_{k'}\notag\\ +&R_{k}\presuper{0\!}\Lambda_{k}+R_{k}\int_{k'}\presuper{0\!}F_{kk'}R_{k'}\presuper{\infty\!}\Lambda_{k'}\notag\end{aligned}$$ The dynamic fermion-boson response $\presuper{0\!}L_k=\presuper{0\!}G^2_{k}\presuper{0\!}\Lambda_{k}$ arises in the first line; in the second line we use Eq. 
, which simply yields $R_{k}\presuper{\infty\!}\Lambda_{k}$, hence, $$\begin{aligned} \presuper{\infty\!}L_{k}=&\presuper{0\!}L_{k}+R_{k}\presuper{\infty\!}\Lambda_{k}+\presuper{0\!}G^2_{k}\int_{k'}\presuper{0\!}F_{kk'}R_{k'}\presuper{\infty\!}\Lambda_{k'}\notag\end{aligned}$$ We introduce a trivial integration and factor out a term $R_{k'}\presuper{\infty\!}\Lambda_{k'}$, $$\begin{aligned} \presuper{\infty\!}L_{k}=&\presuper{0\!}L_{k}+\int_{k'}\left(\delta_{kk'} +\presuper{0\!}G^2_{k}\presuper{0\!}F_{kk'}\right)R_{k'}\presuper{\infty\!}\Lambda_{k'},\label{app:g3decomp}\end{aligned}$$ this is Eq.  in the main text. Note that $\delta_{kk'}$ implies a factor $2\pi N$. The static response in DMFT {#app:lindmft} =========================== We derive Eq.  for the static response $\presuper{\infty\!}L_{k}$ in DMFT. To this end, we insert the expression for the discontinuity $R_k$ in Eq.  into Eq.  (note that the label $\alpha$ is dropped), $$\begin{aligned} \presuper{\infty\!}L_{{\ensuremath{\mathbf{k}}}\nu}=&\presuper{0\!}L_{{\ensuremath{\mathbf{k}}}\nu}+\frac{1}{2\pi N}\int_{\nu'}\sum_{{\ensuremath{\mathbf{k}}}'}\left(2\pi N\delta_{{\ensuremath{\mathbf{k}}}{\ensuremath{\mathbf{k}}}'}\delta_{\nu\nu'} +\presuper{0\!}G^2_{{\ensuremath{\mathbf{k}}}\nu}\presuper{0\!}F_{\nu\nu'}\right)\notag\\ \times&\left[-2\pi\imath Z^2\delta(\nu')\delta(\tilde{\varepsilon}_{{\ensuremath{\mathbf{k}}}'}-\mu)\right]\presuper{\infty\!}\Lambda_{\nu'}.\label{app:g3decompexplicit}\end{aligned}$$ Here we have made all energy-momentum dependencies $k=({\ensuremath{\mathbf{k}}},\nu)$ and the prefactors of $\int_k=\frac{1}{2\pi N}\int_{\nu}\sum_{{\ensuremath{\mathbf{k}}}}$ and $\delta_{kk'}=2\pi N\delta_{{\ensuremath{\mathbf{k}}}{\ensuremath{\mathbf{k}}}'}\delta_{\nu\nu'}$ explicit. We used that $Z$, $\Lambda$ and $F$ do not depend on ${\ensuremath{\mathbf{k}}}$ (or ${\ensuremath{\mathbf{k}}}'$) in DMFT. We perform the integration/summation in Eq. 
, $$\begin{aligned} \presuper{\infty\!}L_{{\ensuremath{\mathbf{k}}}\nu}=&\presuper{0\!}L_{{\ensuremath{\mathbf{k}}}\nu}-2\pi\imath Z^2\delta_{\nu0}\delta(\tilde{\varepsilon}_{{\ensuremath{\mathbf{k}}}}-\mu)\presuper{\infty\!}\Lambda_{0}\label{app:lindmft_intermediate}\\ -&\imath Z^2D^*(0)\presuper{0\!}G^2_{{\ensuremath{\mathbf{k}}}\nu}\presuper{0\!}F_{\nu0}\presuper{\infty\!}\Lambda_{0},\notag\end{aligned}$$ where we used the definition of the quasi-particle DOS, $D^*(0)=\frac{1}{N}\sum_{\ensuremath{\mathbf{k}}}\delta(\tilde{\varepsilon}_{{\ensuremath{\mathbf{k}}}}-\mu)$. According to Eqs.  and  the static three-leg vertex at the Fermi level can be expressed in terms of the total response, $\presuper{\infty\!}\Lambda_{0}=\presuper{\infty\!}X[-2\imath Z D^*(0)]^{-1}$. Using this expression in Eq.  and factoring out ${\presuper{\infty\!}X Z}/{2}$ leads to Eq. . Ward identity and analytical continuation of three-point functions {#app:ac} ================================================================== We derive an exact relation between the Matsubara and real-axis representations of three-point correlation functions by means of the Ward identity. For further information see also Refs. [@Oguri01] and [@Eliashberg61]. First, we note that the last line of Eq.  demonstrates that the causal Green’s function $G^c(\nu)$ can be decomposed into two functions that are analytic in either the upper or the lower complex half-plane, $G^c(\nu)=n_f(-\nu)G^r(\nu)+n_f(\nu) G^a(\nu)$. The analytic regions of $G^r$ and $G^a$ combined cover the entire complex plane $\mathbb{C}$ and their prefactors are given by the Fermi function $n_f$. $G^r$ and $G^a$ can be obtained from the Matsubara Green’s function $G^m(\nu_n)$ by analytical continuation into the upper or lower half-plane. Eq.  therefore allows us to recover $G^c$ from $G^m$. 
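This decomposition of the causal Green's function is easy to check numerically. The following sketch (Python/NumPy) verifies, for an illustrative single-pole retarded Green's function — the pole position, broadening, and temperature are assumptions chosen for the example, not values from the text — that $G^c(\nu)=n_f(-\nu)G^r(\nu)+n_f(\nu)G^a(\nu)$ reproduces $\Re G^c=\Re G^r$ and $\Im G^c=[1-2n_f(\nu)]\,\Im G^r$:

```python
import numpy as np

beta, eps, eta = 5.0, 0.3, 0.05   # inverse temperature, pole position, broadening (illustrative)
nu = np.linspace(-4.0, 4.0, 801)  # real frequency grid

def n_f(x):
    """Fermi function n_f(x) = 1/(exp(beta*x) + 1)."""
    return 1.0 / (np.exp(beta * x) + 1.0)

G_r = 1.0 / (nu - eps + 1j * eta)  # retarded Green's function of a single broadened pole
G_a = np.conj(G_r)                 # G^a = (G^r)^*

# causal Green's function from the decomposition in the last line of the derivation
G_c = n_f(-nu) * G_r + n_f(nu) * G_a

# consistency with the third line: Re G^c = Re G^r, Im G^c = [1 - 2 n_f] Im G^r
assert np.allclose(G_c.real, G_r.real)
assert np.allclose(G_c.imag, (1.0 - 2.0 * n_f(nu)) * G_r.imag)
```

The two assertions hold algebraically for any $G^r$ with $G^a=(G^r)^*$, since $n_f(-\nu)+n_f(\nu)=1$; the single-pole form merely provides concrete numbers.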
Our strategy is to find a similar decomposition of the causal fermion-boson response function $L^{c}(\nu,\omega)$ into several component functions, whose analytic regions cover the entire $\mathbb{C}^2$-space spanned by their two complex arguments. These component functions should arise by analytic continuation of the Matsubara correlation function $L^{m}(\nu_n,\omega_m)$. In principle, this task could be approached from the Lehmann representations of $L^{c}$ and $L^{m}$ [@Oguri01; @Tagliavini18], which is, however, tedious. We choose a simpler approach here using the Ward identity. Fermion-boson response function {#app:ac:g3} ------------------------------- We define the causal fermion-boson response function, $$\begin{aligned} &L^{c,\alpha}_{{\ensuremath{\mathbf{k}}}{\ensuremath{\mathbf{q}}}}(t_1,t_2,t_3)=\frac{\imath\langle n\rangle}{2}\sum_\sigma G^c_{{\ensuremath{\mathbf{k}}}\sigma}(t_1-t_2)\delta_{\ensuremath{\mathbf{q}}}\delta_{\alpha,{\ensuremath{\text{ch}}}}\label{app:gsusc_real}\\ +&(\imath)^2\frac{1}{2}\sum_{\sigma\sigma'}s^\alpha_{\sigma'\sigma} \left\langle{T_t c^{}_{{\ensuremath{\mathbf{k}}}\sigma}(t_1)c^{\dagger}_{{\ensuremath{\mathbf{k}}}+{\ensuremath{\mathbf{q}}},\sigma'}(t_2)\rho^\alpha_{\ensuremath{\mathbf{q}}}(t_3)}\right\rangle,\notag\end{aligned}$$ where $s^\alpha$ are the Pauli matrices $(\alpha={\ensuremath{\text{ch}}},x,y,z)$ and $\rho^\alpha_{\ensuremath{\mathbf{q}}}=\frac{1}{N}\sum_{{\ensuremath{\mathbf{k}}}}c^{\dagger}_{{\ensuremath{\mathbf{k}}}\sigma}s^\alpha_{\sigma\sigma'}c^{}_{{\ensuremath{\mathbf{k}}}+{\ensuremath{\mathbf{q}}},\sigma'}$ is the respective density operator, and $\langle n\rangle=\langle n_\uparrow\rangle+\langle n_\downarrow\rangle$ denotes the total density. The correlation function in Eq.  depends on three real times $t_i$. One obtains the Matsubara response $L^{m}$ by replacing $t_i\rightarrow\tau_i$, $G^c\rightarrow G^m$, and omitting the factor $\imath$ in the first line and the factor $\imath^2$ in the second line of Eq. . 
We note that the term in the first line cancels an uncorrelated part of the charge ($\alpha={\ensuremath{\text{ch}}}$) correlation function. The (connected) susceptibility is given by $X^{c,\alpha}_{\ensuremath{\mathbf{q}}}(t-t')=\frac{2}{N}\sum_{{\ensuremath{\mathbf{k}}}}L^{c,\alpha}_{{\ensuremath{\mathbf{k}}}{\ensuremath{\mathbf{q}}}}(t',t',t)$. The transformation of $L^{c}$ in Eq.  to the frequency domain is defined as, $$\begin{aligned} L^{c}(t_1,t_2,t_3)=\frac{1}{(2\pi)^2} \iint\limits_{-\infty}^{\;\;\;+\infty} L^{c}_{\nu\omega}e^{-\imath[\nu t_1-(\nu+\omega)t_2+\omega t_3]}d\nu d\omega.\end{aligned}$$ The analogous transform of $L^{m}$ follows by the replacement $(2\pi)^{-1}\int_{-\infty}^{+\infty}d\nu\rightarrow T\sum_{\nu_n}$, where $\nu_n$ is a fermionic Matsubara frequency, and likewise for the bosonic frequencies $\omega$ and $\omega_m$. Ward identity {#app:ac:ward} ------------- The Ward identity is an exact relation between the response function $L^{c}$ in Eq.  and the single-particle Green’s function $G^c$. It arises from the continuity equation $\partial_t\rho^\alpha(t)=\imath[\rho^\alpha(t),H]$ of the density operator $\rho^\alpha$ [@Behn78]. For the Matsubara response $L^{m}$ this derivation is carried out in Ref. [@Krien17]; here we merely state the result for the causal response $L^{c}$ in the homogeneous limit ${\ensuremath{\mathbf{q}}}={\ensuremath{\mathbf{q}}}_0=\mathbf{0}$, $$\begin{aligned} -\omega L^{c,\alpha}_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}}_0,\omega)=G^c_{{\ensuremath{\mathbf{k}}},\nu+\omega}-&G^c_{{\ensuremath{\mathbf{k}}}\nu}\notag\\ =n_f(-\nu-\omega)G^r_{{\ensuremath{\mathbf{k}}},\nu+\omega}+&n_f(\nu+\omega)G^a_{{\ensuremath{\mathbf{k}}},\nu+\omega}\notag\\ -n_f(-\nu)G^r_{{\ensuremath{\mathbf{k}}}\nu}-&n_f(\nu)G^a_{{\ensuremath{\mathbf{k}}}\nu}.\label{app:wardidcausal}\end{aligned}$$ Note that the correlation functions in the first line are causal. From the first to the second line we used Eq.  
to express the causal Green’s function $G^c$ through the retarded and advanced Green’s functions $G^r$, $G^a$, and the Fermi function $n_f$. Note that the limit $\omega\rightarrow0$ of Eq.  implies Eq.  in the main text. We would like to relate the expression in Eq.  to the Matsubara response $L^{m}$. As shown in Appendix A of Ref. [@Krien17], a similar Ward identity holds for $L^m$, $$\begin{aligned} -\imath\omega_m L^{m,\alpha}_{{\ensuremath{\mathbf{k}}}\nu_n}({\ensuremath{\mathbf{q}}}_0,\omega_m)=G^m_{{\ensuremath{\mathbf{k}}},\nu_n+\omega_m}-G^m_{{\ensuremath{\mathbf{k}}}\nu_n},\label{app:wardidmatsubara}\end{aligned}$$ which is a relation between Matsubara correlation functions; note, however, the similarity to Eq. . Analytical continuation {#app:ac:ac} ----------------------- The analytic continuation of the right-hand-side of Eq.  can be performed into four analytic regions by replacing $\imath(\nu_n+\omega_m)\rightarrow\nu+\omega\pm\imath0^+$ and $\imath\nu_n\rightarrow\nu\pm\imath0^+$. On the right-hand-side this gives rise to the retarded and advanced Green’s functions $G^r$ and $G^a$, respectively. We denote the four combinations explicitly as, $$\begin{aligned} -\omega L^{rr,\alpha}_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}}_0,\omega)=&G^r_{{\ensuremath{\mathbf{k}}},\nu+\omega}-G^r_{{\ensuremath{\mathbf{k}}}\nu},\notag\\ -\omega L^{ra,\alpha}_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}}_0,\omega)=&G^r_{{\ensuremath{\mathbf{k}}},\nu+\omega}-G^a_{{\ensuremath{\mathbf{k}}}\nu},\notag\\ -\omega L^{ar,\alpha}_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}}_0,\omega)=&G^a_{{\ensuremath{\mathbf{k}}},\nu+\omega}-G^r_{{\ensuremath{\mathbf{k}}}\nu},\notag\\ -\omega L^{aa,\alpha}_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}}_0,\omega)=&G^a_{{\ensuremath{\mathbf{k}}},\nu+\omega}-G^a_{{\ensuremath{\mathbf{k}}}\nu}.\label{app:fourregions}\end{aligned}$$ We use these expressions to rewrite Eq.  
as, $$\begin{aligned} &-\omega L^{c,\alpha}_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}}_0,\omega)\notag\\ =&-\omega\left\{n_f(-\nu-\omega)L^{rr,\alpha}_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}}_0,\omega)\right.\notag\\ &+[n_f(\nu+\omega)+n_f(-\nu)-1]L^{ar,\alpha}_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}}_0,\omega)\notag\\ &+\left.n_f(\nu)L^{aa,\alpha}_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}}_0,\omega)\right\}.\label{app:g3cont}\end{aligned}$$ We have decomposed the causal response $L^{c}$ into retarded and advanced component functions, $L^{rr}, L^{ar}$, and $L^{aa}$, which can be readily obtained from the Matsubara response $L^{m}$ [^8]. We are thus able to recover $L^{c}$ from the latter by analytical continuation. Static homogeneous limit {#app:ac:w0} ------------------------ Strictly speaking, Eq.  can only be used to perform the analytic continuation for ${\ensuremath{\mathbf{q}}}={\ensuremath{\mathbf{q}}}_0=\mathbf{0}$ and $\omega\neq0$, the dynamic homogeneous limit. However, it is possible to show that Eq.  also holds in the static homogeneous limit $\omega=0$. We demonstrate this here explicitly for the homogeneous magnetic response. To this end, we assume an infinitesimal magnetic field $\delta h$ along the $z$-axis; the Ward identity in Eq. 
can then be written in the transversal channels $\alpha=x,y$ as [^9], $$\begin{aligned} (2\sigma\delta h-\omega) L^{c,\alpha=x,y}_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}}_0,\omega)=G^c_{{\ensuremath{\mathbf{k}}},\nu+\omega,-\sigma}-G^c_{{\ensuremath{\mathbf{k}}}\nu\sigma}.\label{app:wardidcausalspin}\end{aligned}$$ We can now safely set $\omega=0$, leading to the static homogeneous limit $\presuper{\infty\!}L^{c}_{{\ensuremath{\mathbf{k}}}\nu}=\lim\limits_{{\ensuremath{\mathbf{q}}}\rightarrow0}\lim\limits_{\omega\rightarrow0}L^{c}_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}},\omega)$, divide by $\delta h$ on both sides, and obtain for $\sigma=\uparrow$, $$\begin{aligned} &\presuper{\infty\!}L^{c,\alpha=x,y}_{{\ensuremath{\mathbf{k}}}\nu}=\frac{G^c_{{\ensuremath{\mathbf{k}}}\nu\downarrow}-G^c_{{\ensuremath{\mathbf{k}}}\nu\uparrow}}{2\delta h}\label{app:wardidcausalspinstat}\\ =&\frac{G^c_{{\ensuremath{\mathbf{k}}}\nu\downarrow}-G^c_{{\ensuremath{\mathbf{k}}}\nu}(h=0)}{2\delta h}-\frac{G^c_{{\ensuremath{\mathbf{k}}}\nu\uparrow}-G^c_{{\ensuremath{\mathbf{k}}}\nu}(h=0)}{2\delta h}\notag\\ =&-\frac{dG^c_{{\ensuremath{\mathbf{k}}}\nu\uparrow}}{dh}=-n_f(-\nu)\frac{dG^r_{{\ensuremath{\mathbf{k}}}\nu\uparrow}}{dh}-n_f(\nu)\frac{dG^a_{{\ensuremath{\mathbf{k}}}\nu\uparrow}}{dh}.\notag\end{aligned}$$ In the second line we added and subtracted the Green’s function at vanishing field $h=0$, leading to the zero-field derivative $\frac{dG_\sigma}{dh}=\frac{G_\sigma(\delta h)-G_\sigma(h=0)}{\delta h}$. In the first step of the last line we used that both spin species respond in opposite ways to the magnetic field, $\frac{dG_{\uparrow}}{dh}=-\frac{dG_{\downarrow}}{dh}$. In the last step we again used Eq. . 
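The finite-difference structure of the zero-field derivative can be illustrated numerically. The sketch below (Python/NumPy) uses a toy Zeeman-shifted single-pole Green's function — this model and all parameter values are illustrative assumptions, not the interacting Green's function of the text — to check that $(G_\downarrow-G_\uparrow)/(2\delta h)$ agrees with $-dG_\uparrow/dh$ and that the two spin species respond oppositely:

```python
import numpy as np

eps, eta, dh = 0.2, 0.2, 1e-5      # toy pole, broadening, infinitesimal field (illustrative)
nu = np.linspace(-3.0, 3.0, 601)   # real frequency grid

def G(sigma, h):
    """Toy Green's function of a Zeeman-shifted pole; sigma = +1 (up), -1 (down)."""
    return 1.0 / (nu - eps + sigma * h + 1j * eta)

# zero-field derivatives dG_sigma/dh as forward finite differences
dG_up = (G(+1, dh) - G(+1, 0.0)) / dh
dG_dn = (G(-1, dh) - G(-1, 0.0)) / dh

# opposite response of the two spin species: dG_up/dh = -dG_dn/dh
assert np.allclose(dG_up, -dG_dn, atol=1e-2)

# static transversal response: (G_down - G_up)/(2*dh) = -dG_up/dh (to O(dh))
L_stat = (G(-1, dh) - G(+1, dh)) / (2.0 * dh)
assert np.allclose(L_stat, -dG_up, atol=1e-2)
```

The tolerances account for the $O(\delta h)$ truncation error of the finite differences; the identities become exact as $\delta h\rightarrow0$ for any $G_\sigma$ that depends on the field only through $\sigma h$.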
An analogous calculation for the Matsubara response $L^{m}$ leads to, $$\begin{aligned} \presuper{\infty\!}L^{m,\alpha=x,y}_{{\ensuremath{\mathbf{k}}}\nu_n}=&-\frac{dG^m_{{\ensuremath{\mathbf{k}}}\nu_n\uparrow}}{dh}.\label{app:wardidmatsubaraspinstat}\end{aligned}$$ By rotational invariance, Eqs.  and  also hold in the longitudinal spin channel $\alpha=z$. Furthermore, similar results hold for the charge channel $\alpha={\ensuremath{\text{ch}}}$ [@Noziere97], where one has to replace the magnetic field $h$ by the chemical potential $\mu$. The analytical continuation of Eq.  is straightforward. There are only two distinct options, $\imath\nu_n\rightarrow\nu\pm\imath0^+$, giving rise to the retarded and advanced Green’s functions, e.g., $\presuper{\infty\!}L^{rr,{\ensuremath{\text{sp}}}}=-\frac{dG^{r}}{dh}$ and $\presuper{\infty\!}L^{aa,{\ensuremath{\text{sp}}}}=-\frac{dG^{a}}{dh}$. Using Eq.  we can write Eq.  as, $$\begin{aligned} &\presuper{\infty\!}L^{c,\alpha}_{{\ensuremath{\mathbf{k}}}\nu}=n_f(-\nu)\presuper{\infty\!}L^{rr,\alpha}_{{\ensuremath{\mathbf{k}}}\nu}+n_f(\nu)\presuper{\infty\!}L^{aa,\alpha}_{{\ensuremath{\mathbf{k}}}\nu}. \label{app:g3contstat}\end{aligned}$$ We are therefore allowed to divide Eq.  by $-\omega$ and use the result also in the static homogeneous limit $\omega=0$. We verified from the Lehmann representation of $L^{c}$ and $L^{m}$ of the Hubbard atom \[cf. Sec. \[sec:coherent:atom\] and Appendix \[app:atom\]\] that Eq.  yields the correct causal response $L^{c}$. This equation was derived for the homogeneous limit ${\ensuremath{\mathbf{q}}}={\ensuremath{\mathbf{q}}}_0$ of $L$, but we suspect that it provides the analytic continuation of any fermion-boson response function. Hubbard atom {#app:atom} ============ We derive the static and dynamic limits of the response function $L$ for the Hubbard atom with Hamiltonian $H=U n_{\uparrow}n_{\downarrow}-\mu (n_{\uparrow}+n_{\downarrow}) - h (n_{\uparrow}- n_{\downarrow})$. 
Using the basis set $\{|0\rangle,|\uparrow\rangle,|\downarrow\rangle,|\updownarrow\rangle\}$ we can calculate the causal Green’s function $G^c$ using the Lehmann representation in Sec. \[app:gf\] (we drop the label $c$), $$\begin{aligned} &G_\sigma(\nu)=\frac{1}{\mathcal{Z}}\left[\frac{e^{-\beta\mu}}{\nu+\mu+\sigma h+\imath0^+}+\frac{e^{-\beta\sigma h}}{\nu-U+\mu+\sigma h+\imath0^+}\right.\notag\\ &+\left.\frac{e^{+\beta\sigma h}}{\nu+\mu+\sigma h-\imath0^+}+\frac{e^{-\beta(U-\mu)}}{\nu-U+\mu+\sigma h-\imath0^+}\right],\end{aligned}$$ where $\mathcal{Z}=e^{-h\beta}+e^{+h\beta}+e^{-\mu\beta}+e^{-(U-\mu)\beta}$ is the partition function and $\beta=\frac{1}{T}$ is the inverse temperature. The response function $L(\nu,\omega)$ can be calculated from the Lehmann representation of Eq.  [@Oguri01]. However, to evaluate this function at $\omega=0$ (static) and in the limit $\omega\rightarrow0$ (dynamic), which are in general *not* equivalent, it is much more convenient to use the Ward identities , , and . These yield the static limits $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}$, $\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}$ and the dynamic limit $\presuper{0\!}L^{{\ensuremath{\text{ch}}}}=\presuper{0\!}L^{{\ensuremath{\text{sp}}}}=\presuper{0\!}L^{}$ as derivatives of the Green’s function with respect to $\mu$, $h$, and $\nu$, respectively. For $\mu=\frac{U}{2}$ and $h=0$ we obtain for the dynamic limit, $$\begin{aligned} &\presuper{0\!}L^{}(\nu)=-\frac{dG_\sigma(\nu)}{d\nu}\\ =&\frac{1}{\mathcal{Z}}\left[\frac{e^{-\frac{U}{2}\beta}}{(\nu+\frac{U}{2}+\imath0^+)^2} +\frac{1}{(\nu-\frac{U}{2}+\imath0^+)^2}\right.\notag\\ +&\left.\frac{1}{(\nu+\frac{U}{2}-\imath0^+)^2} +\frac{e^{-\frac{U}{2}\beta}}{(\nu-\frac{U}{2}-\imath0^+)^2}\right].\notag\end{aligned}$$ According to Eq.  the static limit can be expressed through the dynamic one and a remainder, $\presuper{\infty\!}L^{\alpha}=\presuper{0\!}L^{}+\mathcal{L}^\alpha$. 
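The dynamic limit $\presuper{0\!}L^{}(\nu)=-dG_\sigma(\nu)/d\nu$ can be checked numerically. In the sketch below (Python/NumPy) the infinitesimal $\imath0^+$ is replaced by a finite broadening $\eta$, and the values of $U$, $\beta$, and $\eta$ are illustrative assumptions; a central finite difference of the atomic Green's function at $\mu=U/2$, $h=0$ is compared with the analytic expression above:

```python
import numpy as np

U, beta, eta = 1.0, 4.0, 0.1            # interaction, inverse temperature, broadening (illustrative)
w = np.exp(-beta * U / 2.0)             # Boltzmann weight e^{-beta U/2} at half filling
Z = 2.0 + 2.0 * w                       # partition function at mu = U/2, h = 0

def G(nu):
    """Causal Green's function of the half-filled Hubbard atom, i0+ -> i*eta."""
    return (w / (nu + U/2 + 1j*eta) + 1.0 / (nu - U/2 + 1j*eta)
            + 1.0 / (nu + U/2 - 1j*eta) + w / (nu - U/2 - 1j*eta)) / Z

def L0(nu):
    """Dynamic limit 0L(nu) = -dG/dnu, written out term by term."""
    return (w / (nu + U/2 + 1j*eta)**2 + 1.0 / (nu - U/2 + 1j*eta)**2
            + 1.0 / (nu + U/2 - 1j*eta)**2 + w / (nu - U/2 - 1j*eta)**2) / Z

nu = np.linspace(-3.0, 3.0, 601)
dnu = 1e-6
# central finite difference -dG/dnu matches the analytic dynamic limit
assert np.allclose(-(G(nu + dnu) - G(nu - dnu)) / (2.0 * dnu), L0(nu), atol=1e-6)
```

Since each pole term $1/(\nu+a\pm\imath\eta)$ differentiates to $-1/(\nu+a\pm\imath\eta)^2$, the agreement is exact up to the $O(d\nu^2)$ error of the central difference.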
In the Hubbard atom we can indeed express the static limit in this way. We obtain the following remainder functions for the charge and spin channels, $$\begin{aligned} \mathcal{L}^{\ensuremath{\text{ch}}}(\nu)&=\frac{\beta e^{-\frac{U}{2}\beta}}{\mathcal{Z}}\left[ \frac{1}{\nu+\frac{U}{2}+\imath0^+}-\frac{1}{\nu-\frac{U}{2}-\imath0^+}\right],\notag\\ \mathcal{L}^{\ensuremath{\text{sp}}}(\nu)&=\frac{\beta}{\mathcal{Z}}\left[ \frac{1}{\nu-\frac{U}{2}+\imath0^+}-\frac{1}{\nu+\frac{U}{2}-\imath0^+}\right].\notag\end{aligned}$$ In the charge channel the remainder $\mathcal{L}^{\ensuremath{\text{ch}}}$ vanishes as $\beta\rightarrow\infty$; in this limit, therefore, $\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}(\nu)=\presuper{0\!}L^{}(\nu)$, as expected. Hence the charge susceptibility also vanishes, $\presuper{\infty\!}X^{\ensuremath{\text{ch}}}=\frac{2}{2\pi}\int_{-\infty}^{+\infty}d\nu\presuper{\infty\!}L^{{\ensuremath{\text{ch}}}}(\nu)=-\imath2\beta e^{-\beta\frac{U}{2}}\mathcal{Z}^{-1}\rightarrow0$. The remaining charge response of the Hubbard peaks is given by $\presuper{0\!}L^{}(\nu)$. It does not lead to a response of the density $\langle n\rangle$, since the integral $\presuper{0\!}X=\frac{2}{2\pi}\int_{-\infty}^{+\infty}d\nu\presuper{0\!}L^{}(\nu)$ is zero. Note that $\mathcal{L}^{\ensuremath{\text{ch}}}$ does not vanish for $T>0$, where the charge susceptibility is finite. In the spin channel the remainder $\mathcal{L}^{\ensuremath{\text{sp}}}$ diverges $\propto\beta$, which gives rise to the divergence of the spin susceptibility $\presuper{\infty\!}X^{\ensuremath{\text{sp}}}=-\imath2\beta\mathcal{Z}^{-1}$, corresponding to the local moment. Since $\mathcal{L}^{\ensuremath{\text{sp}}}$ is not zero, it cannot be the case that $\presuper{\infty\!}L^{{\ensuremath{\text{sp}}}}(\nu)$ and $\presuper{0\!}L^{}(\nu)$ coincide. Dynamic limit of the three-leg vertex {#app:lambdadyn} ===================================== We derive Eq.  
for the homogeneous three-leg vertex $\Lambda^\alpha_\nu({\ensuremath{\mathbf{q}}}_0,\omega\neq0)$ in the DMFT approximation from the Ward identity  of the AIM. In this section $\nu$ and $\omega$ are Matsubara frequencies. Making use of the DMFT self-consistency condition , $g_\nu=\frac{1}{N}\sum_{{\ensuremath{\mathbf{k}}}}G_{{\ensuremath{\mathbf{k}}}\nu}$, one writes Eq.  as, $$\begin{aligned} &\Sigma_{\nu+\omega}-\Sigma_{\nu}\notag\\ =&\frac{T}{N}\sum_{{\ensuremath{\mathbf{k}}}'\nu'}\gamma^\alpha_{\nu\nu'\omega}G_{{\ensuremath{\mathbf{k}}}'\nu'}G_{{\ensuremath{\mathbf{k}}}',\nu'+\omega} \left[G^{-1}_{{\ensuremath{\mathbf{k}}}'\nu'}-G^{-1}_{{\ensuremath{\mathbf{k}}}',\nu'+\omega}\right]\notag.\end{aligned}$$ In the brackets on the right-hand-side we insert the definition of the DMFT Green’s function in Eq.  and divide both sides by $-\imath\omega$, $$\begin{aligned} -&\frac{\Sigma_{\nu+\omega}-\Sigma_{\nu}}{\imath\omega}\label{app:dsigmaladder}\\ =&\frac{T}{N}\sum_{{\ensuremath{\mathbf{k}}}'\nu'}\gamma^\alpha_{\nu\nu'\omega}G_{{\ensuremath{\mathbf{k}}}'\nu'}G_{{\ensuremath{\mathbf{k}}}',\nu'+\omega}\left[1-\frac{\Sigma_{\nu'+\omega}-\Sigma_{\nu'}}{\imath\omega}\right].\notag\end{aligned}$$ We now consider the Bethe-Salpeter equation for the vertex function, $$\begin{aligned} &F^\alpha_{\nu\nu'}({\ensuremath{\mathbf{q}}},\omega)\label{eq:bsedmft}\\ =&\gamma^\alpha_{\nu\nu'\omega}+\frac{T}{N}\sum_{{\ensuremath{\mathbf{k}}}''\nu''}\gamma^\alpha_{\nu\nu''\omega} G_{{\ensuremath{\mathbf{k}}}''\nu''}G_{{\ensuremath{\mathbf{k}}}''+{\ensuremath{\mathbf{q}}},\nu''+\omega}F^\alpha_{\nu''\nu'}({\ensuremath{\mathbf{q}}},\omega)\notag,\end{aligned}$$ which is equivalent to Eq. , [@Hafermann14-2]. We multiply Eq.  
with the bubble $G_{k'}G_{k'+q}$, sum over $k'=({\ensuremath{\mathbf{k}}}',\nu')$, and evaluate the resulting equation at $q=({\ensuremath{\mathbf{q}}}_0={\ensuremath{\mathbf{0}}},\omega)$, leading to $$\begin{aligned} \frac{T}{N}&\sum_{{\ensuremath{\mathbf{k}}}'\nu'}F^\alpha_{\nu\nu'}({\ensuremath{\mathbf{q}}}_0,\omega)G_{{\ensuremath{\mathbf{k}}}'\nu'}G_{{\ensuremath{\mathbf{k}}}',\nu'+\omega}\label{app:bsewithlegs}\\ =&\frac{T}{N}\sum_{{\ensuremath{\mathbf{k}}}'\nu'}\gamma^\alpha_{\nu\nu'\omega}G_{{\ensuremath{\mathbf{k}}}'\nu'}G_{{\ensuremath{\mathbf{k}}}',\nu'+\omega}\notag\\ \times&\left[1+\frac{T}{N}\sum_{{\ensuremath{\mathbf{k}}}''\nu''}F^\alpha_{\nu'\nu''}({\ensuremath{\mathbf{q}}}_0,\omega)G_{{\ensuremath{\mathbf{k}}}''\nu''}G_{{\ensuremath{\mathbf{k}}}'',\nu''+\omega}\right].\notag\end{aligned}$$ In the steps leading to Eq.  the summation labels $\nu'$ and $\nu''$ on the right-hand-side were exchanged. By comparison of Eqs.  and  we find that they actually express the same integral equation. We can therefore identify, $$\begin{aligned} -\frac{\Sigma_{\nu+\omega}-\Sigma_{\nu}}{\imath\omega}=\frac{T}{N}\sum_{{\ensuremath{\mathbf{k}}}'\nu'}F^\alpha_{\nu\nu'}({\ensuremath{\mathbf{q}}}_0,\omega)G_{{\ensuremath{\mathbf{k}}}'\nu'}G_{{\ensuremath{\mathbf{k}}}',\nu'+\omega}.\end{aligned}$$ Adding $1$ on both sides and using the definition of the three-leg vertex in Eq.  we arrive at Eq. . [^1]: Note that this is a ’left-handed’ three-leg vertex, with the tapered Green’s function legs in Fig. \[fig:3leg\] a) pointing to the left. In the limit $q\rightarrow0$ it is equivalent to the ’right-handed’ vertex due to the crossing symmetry. [^2]: The vertex function $f$ and the two-particle self-energy $\gamma$ of the impurity are related via the *impurity* Bethe-Salpeter equation, $f^\alpha_{\nu\nu'\omega}=\gamma^\alpha_{\nu\nu'\omega}+T\sum_{\nu''}\gamma^\alpha_{\nu\nu''\omega}g_{\nu''}g_{\nu''+\omega}f^\alpha_{\nu''\nu'\omega}$. 
[^3]: The density of states of the Hubbard model on the triangular lattice  within the DMFT approximation is shown in Ref. [@Aryanpour06]. [^4]: At zero temperature the total magnetic response of the Mott insulator is commonly believed to be finite, due to the effective exchange $\tilde{t}^{\,2}/U$. It has been argued that this is indeed the case in DMFT [@Rozenberg94; @Georges96], however, calculations are hindered practically by the divergence of the local spin susceptibility, due to the free local moment in the impurity model. Recently, a modification of the impurity problem has been suggested to circumvent this problem [@Guerci18]. [^5]: Note that for $T>0$ the interacting density of states $D(0)$ of the Fermi liquid is in general very different from the non-interacting one, since the pinning of its value to the non-interacting DOS according to the Luttinger theorem is realized only at very low $T$. [^6]: A relation of the dynamic three-leg vertex to quasi-particle criticality also exists at finite wave-vectors [@Abrahams14; @Woelfle17]. [^7]: A conjugate field and order parameter for the Mott transition are known in the limit of infinite dimensions [@Zitko15]. [^8]: $L^{ra}$ is redundant, since $L^{ra}_{{\ensuremath{\mathbf{k}}},\nu+\omega}(-\omega)=L^{ar}_{{\ensuremath{\mathbf{k}}}\nu}(\omega)$. [^9]: We explain why the derivation of Eq.  needs to be done from the transversal spin channels: Bubbles of the type $G_{\uparrow} G_{\downarrow}$ are used to construct the transversal magnetic response $L^{x,y}$ from the Bethe-Salpeter equation. The magnetic field $\delta h$ lifts the degeneracy of the poles of $G_{\uparrow}$ and $G_{\downarrow}$. Therefore, the limits ${{\ensuremath{\mathbf{q}}}\rightarrow\mathbf{0}}$ and ${\omega\rightarrow0}$ of the bubble $G_{{\ensuremath{\mathbf{k}}}\nu\uparrow}{G_{{\ensuremath{\mathbf{k}}}+{\ensuremath{\mathbf{q}}},\nu+\omega,\downarrow}}$ commute for $\delta h\neq0$, which can be seen easily in the non-interacting case. 
Taking the limits ${{\ensuremath{\mathbf{q}}}\rightarrow\mathbf{0}}$, ${\omega\rightarrow0}$ and *subsequently* the limit $\delta h\rightarrow0$ then leads to the static homogeneous limit, $\lim\limits_{{\ensuremath{\mathbf{q}}}\rightarrow0}\lim\limits_{\omega\rightarrow0}L^{x,y}_{{\ensuremath{\mathbf{k}}}\nu}({\ensuremath{\mathbf{q}}},\omega)$. Hence, in Eq.  $\omega$ goes effectively to zero before ${\ensuremath{\mathbf{q}}}$, which cannot be achieved without a symmetry-breaking field \[cf. Eq. \]. However, this trick does not work in the longitudinal spin channel $\alpha=z$, since in this channel the response function $L^z$ is constructed from bubbles of the type $G_{{\ensuremath{\mathbf{k}}}\nu\sigma}{G_{{\ensuremath{\mathbf{k}}}+{\ensuremath{\mathbf{q}}},\nu+\omega,\sigma}}$, such that $\delta h$ does not lift the degeneracy of the poles.
--- abstract: 'We prove Furuta-type bounds for the intersection forms of spin cobordisms between homology $3$-spheres. The bounds are in terms of a new numerical invariant of homology spheres, obtained from $\pin$-equivariant Seiberg-Witten Floer K-theory. In the process we introduce the notion of a Floer $K_G$-split homology sphere; this concept may be useful in an approach to the 11/8 conjecture.' address: | Department of Mathematics, UCLA, 520 Portola Plaza\ Los Angeles, CA 90095 author: - Ciprian Manolescu bibliography: - 'biblio.bib' title: 'On the intersection forms of spin four-manifolds with boundary' --- [^1] Introduction ============ Let $X$ be a smooth, oriented, spin $4$-dimensional manifold. Donaldson’s diagonalizability theorem [@Donaldson; @DonaldsonOr] implies that if $X$ is closed, then $X$ cannot have a non-trivial definite intersection form. If $X$ is not closed but has boundary a homology $3$-sphere $Y$, its intersection form is still unimodular, and Frøyshov [@Froyshov] found constraints on the definite intersection forms of such $X$; see also [@FurutaBrazil; @Nicolaescu]. These constraints depend on an invariant associated to the boundary $Y$ and, later, various other invariants of this type have been developed [@FroyshovYM; @FroyshovHM; @AbsGraded; @KMOS; @swfh]. With respect to indefinite forms, the situation is less understood. If $X$ is closed, Matsumoto’s 11/8 conjecture [@Matsumoto] states that $b_2(X) \geq \frac{11}{8}|\sigma(X)|$. (Here, $\sigma$ denotes the signature.) Since $X$ is spin, its intersection form must be even. A unimodular, even indefinite form (of, say, nonpositive signature) can be decomposed as $$p(-E_8) \oplus q\hyp, \ \ p \geq 0, q > 0.$$ For forms coming from closed spin $4$-manifolds, Rokhlin’s theorem [@Rokhlin] implies that $p$ is even. 
Since $b_2(X) = 8p+2q$ and $p=|\sigma(X)|/8$, the 11/8 conjecture can be rephrased as $$\label{eq:11_8} q \geq 3p/2.$$ An important result in this direction was obtained by Furuta [@Furuta], who proved the inequality $b_2(X) \geq \frac{10}{8}|\sigma(X)| + 2$, i.e., $$\label{eq:10_8} q \geq p + 1.$$ The free coefficient $1$ in the bound can sometimes be improved slightly, depending on the value of $p$ mod $8$; see [@Crabb; @Stolz; @Schmidt]. The purpose of this paper is to obtain constraints on the indefinite intersection forms of spin four-manifolds with boundary. Although many of the results can be extended to the setting where the boundary $\del X = Y$ is a disjoint union of rational homology spheres (equipped with spin structures), for simplicity we will focus on the case where $Y$ consists of either one or two integral homology $3$-spheres. Then, the intersection form must still be of the type $p(-E_8) \oplus q\hyp$, and the parity of $p$ is given by the Rokhlin invariant of the boundary. (We allow here the case $p < 0$, and then we interpret $p(-E_8)$ as a direct sum of copies of the positive form $E_8$.) One method to obtain constraints on the intersection form is to pick a spin $4$-manifold $X'$ with boundary $-Y$, and apply Furuta’s bound to the closed manifold $X \cup_{Y} X'$. This method can be refined by choosing $X'$ to be an orbifold rather than a manifold, and applying the orbifold version of developed by Fukumoto and Furuta in [@FukumotoFuruta]. We refer to [@BohrLee; @FukumotoBG; @Donald] for some results obtained using these methods. In this paper we find further constraints by a different technique, based on adapting Furuta’s proof of to the setting of manifolds with boundary. (However, our proof does not use the Adams operations, so it is in fact closer in spirit to Bryan’s modification of Furuta’s argument [@Bryan].) 
Here is the first result: \[thm:main1\] To every oriented homology $3$-sphere $Y$ we can associate an invariant $\kappa(Y) \in \Z$, with the following properties: (i) The mod $2$ reduction of $\kappa(Y)$ is the Rokhlin invariant $\mu(Y)$; (ii) Suppose that $W$ is a smooth, spin, negative-definite cobordism from $Y_0$ to $Y_1$, and let $b_2(W)$ denote the second Betti number of $W$. Then: $$\kappa(Y_1) \geq \kappa(Y_0) + \frac{1}{8} b_2(W).$$ (iii) Suppose that $W$ is a smooth, spin cobordism from $Y_0$ to $Y_1$, with intersection form $p(-E_8) \oplus q\hyp$. Then: $$\kappa(Y_1) + q \geq \kappa(Y_0) + p-1.$$ Our main interest is in part (iii), but we listed properties (i) and (ii) in order to compare $\kappa$ with the invariants $\alpha, \beta, \gamma$ constructed in [@swfh], using $\pin$-equivariant Seiberg-Witten Floer homology. The invariants $\alpha, \beta, \gamma$ satisfy the analogues of (i) and (ii). Property (iii) seems specific to the invariant $\kappa$, which is constructed from the $\pin$-equivariant Seiberg-Witten Floer K-theory of $Y$. The use of $\pin$-equivariant K-theory is to be expected, because it also appeared in Furuta’s and Bryan’s proofs of . Roughly, the invariant $\kappa$ is defined as follows. We use the set-up from [@Spectrum; @swfh]: Pick a metric $g$ on $Y$ and consider a finite dimensional approximation to the Seiberg-Witten equations, depending on an eigenvalue cut-off $\nu \gg 0$. The resulting flow has a Conley index $I_{\nu}$, which is a pointed topological space with an action by the group $G=\pin$. After changing $I_{\nu}$ by a suitable suspension if necessary, we can arrange for the $S^1$-fixed point set of $I_{\nu}$ to be equivalent to a complex representation sphere of $G$. We then consider the reduced $G$-equivariant K-theory of $I_{\nu}$. 
The inclusion of the $S^1$-fixed point set into $I_{\nu}$ induces a map $$\label{eq:iota} \iota^*: \tK_G(I_{\nu}) \to \tK_G(I_{\nu}^{S^1}).$$ We have a Bott isomorphism $$\tK_G(I_{\nu}^{S^1}) \cong K_G(\pt) = \Z[w, z]/(w^2-2w, wz-2w).$$ We let $$k(I_{\nu}) = \min \{k \geq 0 \mid \exists \ x \in \operatorname{image}(\iota^*) \subseteq K_G(\pt), wx = 2^k w \},$$ and obtain $\kappa(Y)$ from $2k(I_{\nu})$ by subtracting a correction term depending on $\nu$ and $g$. The invariant $\kappa(Y)$ can be computed explicitly in some cases. For example: \[thm:brieskorn\] $(a)$ We have $\kappa(S^3)=0$. $(b)$ Consider the Brieskorn spheres $\Sigma(2,3,m)$ with $\gcd(m,6)=1$, oriented as boundaries of negative definite plumbings. Then: $$\begin{aligned} \kappa(\Sigma(2,3,12n-1)) &= 2, \hskip1cm \kappa(\Sigma(2,3,12n-5)) = 1, \\ \kappa(\Sigma(2,3,12n+1)) &= 0, \hskip1cm \kappa(\Sigma(2,3,12n+5)) = 1.\end{aligned}$$ $(c)$ For the same Brieskorn spheres with the orientations reversed, we have $$\begin{aligned} \kappa(-\Sigma(2,3,12n-1)) &= 0, \hskip1cm \kappa(-\Sigma(2,3,12n-5)) = 1, \\ \kappa(-\Sigma(2,3,12n+1)) &= 0, \hskip1cm \kappa(-\Sigma(2,3,12n+5)) = -1.\end{aligned}$$ Observe that $\kappa(-Y)$ is not determined by $\kappa(Y)$. However, we can prove that $\kappa(Y) + \kappa(-Y) \geq 0$. (See Proposition \[prop:duals\].) Furthermore, observe that for the examples appearing in Theorem \[thm:brieskorn\], the values for $\kappa$ coincide with those for the invariant $\alpha$ defined in [@swfh]. We conjecture that $\kappa \neq \alpha$ in general; see Section \[sec:alphas\] for a discussion of this. Note also that when $Y_0 = Y_1 = S^3$, the bound in Theorem \[thm:main1\] (iii) is weaker than Furuta’s bound \eqref{eq:10_8}. We can remedy this by introducing the following concept: \[def:FloerSplit\] We say that a homology sphere is [*Floer $K_G$-split*]{} if the image of the map $\iota^*$ from \eqref{eq:iota} is an ideal of $\Z[w, z]/(w^2-2w, wz-2w)$ of the form $(z^k)$ for some $k \geq 0$. 
For example, one can show that the three-sphere $S^3$, the Brieskorn spheres $\pm \Sigma(2,3,12n+1)$ and $\pm \Sigma(2,3,12n+5)$ are all Floer $K_G$-split, but the Brieskorn spheres of the form $\pm \Sigma(2,3,12n-1)$ and $\pm \Sigma(2,3,12n-5)$ are not Floer $K_G$-split. If the starting $3$-manifold in a cobordism $W$ is Floer $K_G$-split, we can strengthen the bound in Theorem \[thm:main1\] (iii): \[thm:main2\] Suppose that $W$ is a smooth, spin cobordism from $Y_0$ to $Y_1$, with intersection form $p(-E_8) \oplus q\hyp$ and $q > 0$. If $Y_0$ is Floer $K_G$-split, then: $$\kappa(Y_1) + q \geq \kappa(Y_0) + p + 1.$$ Applying this to $Y_0= S^3$, which is Floer $K_G$-split, we obtain: \[cor:1bdry\] Let $X$ be a smooth, compact, spin four-manifold with boundary a homology sphere $Y$. If the intersection form of $X$ is $p(-E_8) \oplus q\hyp$ and $q > 0$, then: $$q \geq p + 1- \kappa(Y).$$ When $Y=S^3$, we recover Furuta’s $10/8$ Theorem \eqref{eq:10_8}. When $Y=\pm \Sigma(2,3,m)$ with $\gcd(m, 6)=1$, we get specific bounds by combining Theorem \[thm:brieskorn\] (b) and (c) with Corollary \[cor:1bdry\]. In some of these cases, the bounds given by Corollary \[cor:1bdry\] can be obtained more easily by applying the orbifold version of Furuta’s theorem to a filling of $X$. However, the bounds we get in the cases $Y=+\Sigma(2,3,12n+1)$ and $Y=+\Sigma(2,3,12n+5)$ appear to be new. We refer to Section \[sec:bounds\] for a detailed discussion. The techniques developed in this paper may also be of interest in studying closed $4$-manifolds. Indeed, Bauer [@Bauer] proposed a strategy for proving the $11/8$ conjecture (in the simply connected case) by decomposing the $4$-manifold along homology spheres. Specifically, suppose we had a counterexample to \eqref{eq:11_8}, i.e., a closed, spin $4$-manifold $X$ with intersection form $2r(-E_8) \oplus q\hyp$ and $q < 3r$. By adding copies of $S^2 \times S^2$, we can assume that $q=3r-1$. 
If $\pi_1(X) =1$, then by a theorem of Freedman and Taylor [@FreedmanTaylor] we can find a decomposition $$\label{eq:bauer} X = X_1 \cup_{Y_1} X_2 \cup_{Y_2} \dots \cup_{Y_{r-1}} X_r$$ such that: - $Y_i$ is an integral homology $3$-sphere for all $i$; - For $1 \leq i \leq r-1$, the manifold $X_i$ has intersection form $2(-E_8) \oplus 3\hyp$; - $X_r$ has intersection form $2(-E_8) \oplus 2\hyp$. (There are several variations of this; e.g., one could ask for $X_1$ to have intersection form $2(-E_8) \oplus 4\hyp$ and for $X_r$ to have intersection form $2(-E_8) \oplus \hyp$, as in [@Bauer].) If the homology spheres $Y_i$ are arbitrary, Theorem \[thm:main1\] (iii) is not sufficient to preclude the existence of such decompositions. On the other hand, Theorem \[thm:main2\] has the following immediate consequence: \[thm:cor\] There exists no closed four-manifold $X$ with a decomposition of the type \eqref{eq:bauer}, such that all the homology spheres $Y_i$ are Floer $K_G$-split. In view of this result, it would be worthwhile to find topological conditions guaranteeing that a homology sphere is Floer $K_G$-split. [**Acknowledgements.**]{} I would like to thank Mike Hopkins, Peter Kronheimer and Ron Stern for some very enlightening conversations, and the Simons Center for Geometry and Physics (where part of this work was written) for its hospitality. I am also grateful to Jianfeng Lin, Brendan Owens and the referee for comments on a previous version of this paper. Some of the results in this article have been obtained independently by Mikio Furuta and Tian-Jun Li [@FurutaLi]. Equivariant K-theory {#sec:eqK} ==================== Background ---------- We start by reviewing a few general facts about equivariant K-theory, mostly collected from [@Segal]; see also [@AtiyahBP]. We assume familiarity with ordinary K-theory, as in [@AtiyahK]. Let $G$ be a compact topological group and $X$ a compact $G$-space. 
The equivariant K-theory of $X$, denoted $K_G(X)$, is the Grothendieck group associated to $G$-equivariant complex vector bundles on $X$. When $X$ is a point, $R(G) = K_G(\pt)$ is the representation ring of $G$. In general, $K_G(X)$ is an algebra over $R(G)$. \[fact:zero\] A continuous map $f:X \to X'$ induces a map $f^*:K_G(X') \to K_G(X)$. \[fact:res\] For every subgroup $H \subseteq G$, we have functorial restriction maps $K_G(X) \to K_H(X)$. \[fact:free\] If $G$ acts freely on $X$, then the pull-back map $K(X/G) \to K_G(X)$ is a ring isomorphism. \[fact:trivial\] If $G$ acts trivially on $X$, then the natural map $R(G) \otimes K(X) \to K_G(X)$ is an isomorphism of $R(G)$-algebras. Now suppose that $X$ has a distinguished base point, fixed under $G$. We define the reduced equivariant $K$-theory of $X$, denoted $\tK_G(X)$, as the kernel of the restriction map $K_G(X) \to K_G(\pt)$. \[fact:freebased\] If the action of $G$ on $X$ is free away from the basepoint, then the pull-back map $\tK(X/G) \to \tK_G(X)$ is a ring isomorphism. \[fact:prod\] There is a natural product map $\tK_G(X) \otimes \tK_G(X') \to \tK_G(X \wedge X')$. If $V$ is any real representation of $G$, let $\Sigma^V X = V^+ \wedge X$ denote the (reduced) suspension of $X$ by $V$. When $V=n\R$ is a trivial representation, we simply write $\Sigma^n X$ for $\Sigma^{n\R} X$. \[fact:bott\] If $V$ is a complex representation of $G$, we have an equivariant Bott periodicity isomorphism, $\tK_G(X) \cong \tK_G(\Sigma^V X)$. This is given by multiplication with a Bott class $b_V \in \tK_G(V^+)$, under the natural map $ \tK_G(V^+) \otimes \tK_G(X) \to \tK_G(\Sigma^V X)$. The Bott isomorphism is functorial with respect to based continuous maps $f:X \to X'$. \[fact:bott2\] Let $V$ be a complex representation of $G$. 
The composition of the Bott isomorphism with the map $\tK_G(\Sigma^V X) \to \tK_G(X)$ induced by the inclusion $X \hookrightarrow \Sigma^V X$ is a map $\tK_G(X) \to \tK_G(X)$ given by multiplication with the K-theoretic Euler class $$\lambda_{-1}(V) = \sum_i (-1)^i [\Lambda^i V] \in R(G).$$ Bott periodicity for $V=\C \cong \R^2$ says that $\tK_G(\Sigma^{2} X) \cong \tK_G(X)$. For $i \in \Z$, we can define the reduced K-cohomology groups of $X$ by $$\tK_G^i(X) = \begin{cases} \tK_G(X) & \text{if} \ i \text{ is even}, \\ \tK_G(\Sigma X) & \text{if} \ i \text{ is odd}. \end{cases}$$ \[fact:les\] If $A \subseteq X$ is a closed $G$-subspace (containing the base point), there is a long exact sequence: $$\label{eq:pair} \dots \to \tK_G^i(X \amalg_A CA) \to \tK^i_G(X) \to \tK^i_G(A) \to \tK_G^{i+1}(X \amalg_A CA) \to \dots$$ where $CA$ denotes the cone on $A$. A quick consequence of Fact \[fact:les\] is: \[fact:wedge\] If $X$ is a wedge sum $A \vee B$, then $\tK_G(X) \cong \tK_G(A) \oplus \tK_G(B)$. The [*augmentation ideal*]{} $\a \subseteq R(G)$ is defined as the kernel of the forgetful map (augmentation homomorphism) $R(G) \cong K_G(\pt) \to K(\pt) \cong \Z$. The following fact is closely related to the Atiyah-Segal completion theorem; see [@AtiyahSegal proof of Proposition 4.3] and [@AtiyahK 3.1.6]: \[fact:complete\] If $X$ is a finite, based $G$-CW complex and the $G$-action is free away from the basepoint, then the elements of the augmentation ideal $\a \subset R(G)$ act nilpotently on $\tK_G(X) \cong \tK(X/G)$. One can also define the equivariant $K$-groups when $X$ is only locally compact (see [@Segal]), e.g., for the classifying bundle $EG$. The following is a consequence of the Atiyah-Segal completion theorem; see [@AtiyahSegal Proposition 4.3] or [@MayBook Section XIV.5]: \[fact:EG\] The ring $K_G(EG) \cong K(BG)$ is isomorphic to $R(G)^{\wedge}_{\a}$, the completion of $R(G)$ at the augmentation ideal. 
The projection $EG \to \pt$ induces a map $K_G(\pt)\to K_G(EG)$, which corresponds to the natural map from $R(G)$ to its completion. The following is an immediate corollary of Fact \[fact:EG\]: \[fact:free2\] Let $X$ be a compact space with a free $G$-action. Let $Q = X/G$ and denote by $\pi$ the projection $ X \to \pt$. The induced map $\pi^*$ from $R(G) \cong K_G(\pt)$ to $K(Q) \cong K_G(X)$ can also be described as the composition $$R(G) \to R(G)^{\wedge}_{\a} \cong K(BG) \to K(Q),$$ where the first map is completion and the second is induced by the classifying map $Q \to BG$ for $X$. Pin(2)-equivariant K-theory --------------------------- From now on we specialize to the group $G = \pin$. If $\H = \C \oplus j \C$ denotes the algebra of quaternions, recall that $\pin$ can be defined as $S^1 \cup jS^1 \subset \H$. There is a short exact sequence $$1 \To S^1 \To G \To \Z/2 \To 1.$$ As in [@swfh], we introduce notation for the following real representations of $G$: - the trivial representation $\R$; - the one-dimensional sign representation $\tR$ on which $S^1 \subset G$ acts trivially and $j$ acts by multiplication by $-1$; - the quaternions $\H$, acted on by $G$ via left multiplication. We also denote by $\tC$ the complexification $\tR \otimes_{\R} \C$; this is isomorphic to $\tR^2$ as a real representation. The representation ring $R(G)$ of $G=\pin$ is generated by $\tc = [\tC]$ and $h = [\H]$, subject to the relations $\tc^2 = 1$ and $\tc h = h$. It will be convenient to use the generators: $$w=\lambda_{-1}(\tC)= 1-\tc, \ \ \ \ \ z=\lambda_{-1}(\H)=2-h.$$ We obtain: $$R(G) = \Z[w, z]/(w^2-2w, zw-2w).$$ The augmentation homomorphism is $$\label{eq:aug} R(G) \to \Z, \ \ \ w, z \mapsto 0.$$ Therefore, the augmentation ideal of $R(G)$ is $\a = (w, z)$. 
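The relations $w^2 = 2w$ and $zw = 2w$ follow formally from $\tc^2 = 1$ and $\tc h = h$. As a sanity check, this can be verified symbolically; the sketch below (illustrative only) uses sympy, with the commuting symbols `tc` and `h` standing for the generators above:

```python
import sympy as sp

# R(G) is generated by tc = [tilde C] and h = [H], with tc^2 = 1 and
# tc*h = h.  Check that w = 1 - tc and z = 2 - h satisfy w^2 = 2w and
# zw = 2w, by reducing modulo the ideal generated by the relations.
tc, h = sp.symbols('tc h')
w = 1 - tc
z = 2 - h
relations = [tc**2 - 1, tc * h - h]
for expr in (w**2 - 2 * w, z * w - 2 * w):
    _, remainder = sp.reduced(sp.expand(expr), relations, tc, h)
    assert remainder == 0
print("w^2 = 2w and zw = 2w hold modulo tc^2 = 1, tc*h = h")
```

For instance, $w^2 - 2w$ expands to $\tc^2 - 1$, which reduces to zero against the first relation.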
Observe also that restriction to the subgroup $S^1 \subset G$ induces the map $$\begin{aligned} \label{eq:rs1} R(G) &\to R(S^1) = \Z[\theta, \theta^{-1}], \\ \notag w &\mapsto 0, \\ \notag z &\mapsto 2-(\theta + \theta^{-1}),\end{aligned}$$ where $\theta$ is the class of the standard one-dimensional representation of $S^1$. The equivariant K-theory of spaces of type SWF {#sec:spacesSWF} ============================================== In [@swfh Section 2.3] we defined a class of topological spaces with a $\pin$-action, called spaces of type $\swf$. These appear naturally in the context of finite dimensional approximation in Seiberg-Witten Floer theory; see Section \[sec:swf\] below. In [@swfh], we found three numerical quantities (denoted $a$, $b$, and $c$) coming from the $\pin$-equivariant homology of a space of type $\swf$. Our goal in this section is to extract another quantity, denoted $k$, from the $\pin$-equivariant K-theory of a space of type $\swf$. A numerical invariant {#sec:k} --------------------- We recall the following definition from [@swfh Section 2.3]: Let $s \geq 0$. A [*space of type $\swf$ (at level $s$)*]{} is a pointed, finite $G$-CW complex $X$ with the following properties: (a) The $S^1$-fixed point set $X^{S^1}$ is $G$-homotopy equivalent to the sphere $(\tR^{s})^+$; (b) The action of $G$ is free on the complement $X - X^{S^1}$. We shall focus our attention on spaces of type $\swf$ at an [*even*]{} level. This is because if $s=2t$ is even, the $S^1$-fixed point set of $X$ is $G$-equivalent to the complex representation sphere $(\tC^t)^+$, so we can use equivariant Bott periodicity (Fact \[fact:bott\]) to get $$\tK_G(X^{S^1}) \cong \tK_G(S^0) \cong R(G).$$ We let $\iota: X^{S^1} \to X$ denote the inclusion, and let $\i(X)$ be the ideal of $R(G)$ with the property that the image of the induced map $\iota^*: \tK_G(X) \to \tK_G(X^{S^1})$ is $\i(X) \cdot b_{t\tC}$, where $b_{t\tC}$ is the Bott class. 
\[lem:ii\] For any space $X$ of type $\swf$ at an even level, there exists $k \geq 0$ such that $w^k \in \i(X)$ and $z^k \in \i(X)$. Apply the long exact sequence to $A= X^{S^1} \subseteq X$: $$\dots \to \tK_G(X) \xrightarrow{\iota^*} \tK_G(X^{S^1}) \to \tK_G^1(X/X^{S^1}) \to \dots$$ By the definition of spaces of type $\swf$, we know that $X/X^{S^1}$ has a free $G$-action away from the basepoint. By Fact \[fact:complete\], we know that the elements $z, w \in \a$ act nilpotently on $\tK_G^1(X/X^{S^1}) \cong \tK^1((X/X^{S^1})/G)$. If $k$ is such that $z^k$ and $w^k$ act by $0$ on $\tK_G^1(X/X^{S^1})$, then the exact sequence implies that $z^k$ and $w^k$ are in $\i(X)$. In $R(G)$ we have $w^2=wz=2w$, so $w \cdot w^k = w \cdot z^k = 2^k w$. In light of Lemma \[lem:ii\], we can make the following: \[def:kx\] Given a space $X$ of type $\swf$ at an even level, we let $$k(X) = \min \{k \geq 0 \mid \exists \ x \in \i(X), \ w x=2^k w\}.$$ Let us understand how the quantity $k$ behaves under suspensions. Note that if $X$ is a space of type $\swf$ at level $2t$, then $\Sigma^{\H}X$ is of type $\swf$ at the same level, and $\Sigma^{\tC} X$ is of type $\swf$ at level $2t+2$. \[lem:susp\] If $X$ is a space of type $\swf$ at an even level, then $$\i(\Sigma^{\tC}X) = \i(X), \ \ \ \i(\Sigma^{\H} X) = z \cdot \i(X),$$ and consequently $$k(\Sigma^{\tC}X) = k(X), \ \ \ k(\Sigma^{\H} X) = k(X) + 1.$$ The statements about $\Sigma^{\tC}X$ follow from the fact that $(\Sigma^{\tC} X)^{S^1} = \Sigma^{\tC}(X^{S^1})$, together with the functoriality of the Bott isomorphism (Fact \[fact:bott\]). To get the statements about $\Sigma^{\H}X$, note that inclusions of subspaces produce a commutative diagram $$\begin{CD} \tK_G(\Sigma^{\H}X) @>>> \tK_G(X) \\ @V{\iota_2^*}VV @VV{\iota_1^*}V \\ \tK_G((\Sigma^{\H} X)^{S^1}) @>{\cong}>> \tK_G(X^{S^1}). \end{CD}$$ Since $(\Sigma^{\H} X)^{S^1} = X^{S^1}$, the bottom horizontal map is just the identity. 
Under the Bott isomorphism identification $\tK_G(\Sigma^{\H}X)\cong \tK_G(X)$, the top horizontal map is multiplication by $\lambda_{-1}(\H) = z$. This implies that $$\i(\Sigma^{\H}X)= z \cdot \i(X) \subseteq R(G).$$ If we are given $x \in \i(X)$ with $wx=2^kw$, we get $w(zx) = 2wx = 2^{k+1}w$. Conversely, if we have $x \in \i(X)$ with $w(zx) = 2^kw$, then $2wx = 2^kw$, so $wx = 2^{k-1}w$. Therefore, $k(\Sigma^{\H}X) = k(X) +1$. Examples {#sec:exk} -------- The simplest example of a space of type $\swf$ is $S^0$, for which $\i(S^0)=(1)$ and $ k(S^0)= 0.$ From Lemma \[lem:susp\] we get that $$\i((\tC^{t} \oplus \H^l)^+) = (z^l) \ \ \text{and} \ \ k( (\tC^t \oplus \H^l)^+) = l.$$ Further, observe that if $X$ is a space of type $\swf$ at level $2t$, and $X'$ is a space with a free $G$-action away from the basepoint, then the wedge sum $X \vee X'$ is also of type $\swf$ at level $2t$. The term $\tK_G(X')$ in $\tK_G(X \vee X') \cong \tK_G(X) \oplus \tK_G(X')$ does not interact with the $S^1$-fixed point set through the map $\iota^*$. Therefore, $$\label{eq:kwedge} \i(X \vee X') = \i(X) \ \ \text{and} \ \ k(X \vee X') = k(X).$$ Combining the observations above, we find that if a space $X$ decomposes as a wedge sum of a representation sphere $(\tC^t \oplus \H^k)^+$ and a free $G$-space, then $\i(X)$ is of the form $(z^k)$. We call such spaces [*split*]{}. We now introduce the following notion, which will play an important role in the paper: \[def:ksplit\] A space of type $\swf$ at an even level is called [*$K_G$-split*]{} if the ideal $\i(X) \subseteq R(G)$ is of the form $(z^k)$ for some $k \geq 0$. Thus, the $K_G$-split spaces are those that are indistinguishable from split spaces in terms of the ideal $\i(X)$. Let us give two examples of spaces of type $\swf$ that are not $K_G$-split. These examples were also considered in [@swfh Section 2.4], and arise from the following construction. 
Suppose that $G$ acts freely on a finite $G$-CW-complex $Z$, and let $Q = Z/G$ be the respective quotient. Let $$\tilde Z = \bigl( [0,1]\times Z \bigr ) / (0,z) \sim (0, z') \text{ and } (1,z) \sim (1, z') \text{ for all } z,z' \in Z$$ denote the unreduced suspension of $Z$, where $G$ acts trivially on the $[0,1]$ factor. We view $\tilde Z$ as a pointed $G$-space, with one of the two cone points being the basepoint. Then $\tilde Z$ is of type $\swf$ at level $0$. There is a long exact sequence: $$\dots \To \tK_G(\tilde Z) \To \tK_G(S^0) \To \tK^{1}_G(\Sigma Z_+) \To \dots$$ Because $G$ acts freely on $Z$, we have $\tK^{1}_G(\Sigma Z_+) \cong \tK_G(Z_+) \cong K(Q)$, and the above sequence can be written $$\label{eq:z} 0 \To K^1(Q) \To \tK_G(\tilde Z) \xrightarrow{\phantom{b}\iota^*} R(G) \xrightarrow{\phantom{b}\pi^*} K(Q) \To \tK^1_G(\tilde Z) \To 0.$$ Here, the map $\pi^*$ is the one described in Fact \[fact:free2\]. Exactness tells us that the ideal $\i(\tZ)$ is the kernel of $\pi^*$. \[ex:G\] Take $Z = G$, acting on itself via left multiplication, so that the quotient $Q$ is a single point. In the exact sequence \eqref{eq:z}, the map $\pi^*$ is the augmentation homomorphism $R(G) \to \Z$ from \eqref{eq:aug}. Therefore, $$\i(\tilde G) = (w, z) \ \ \text{and} \ \ k(\tilde G) = 1.$$ Further, since $K^1(Q) =0$, we can compute $$\tK_G(\tilde G) \cong (w,z) \ \ \text{and} \ \ \tK^1_G(\tilde G)=0.$$ \[ex:torus\] Now let $Z$ be the torus $$T = S^1 \times jS^1 \subset \C \oplus j\C = \H,$$ with the $G$-action coming from $\H$. The quotient is $Q \cong S^1$. Since the inclusion of a point in $S^1$ induces an isomorphism $$K(S^1) \xrightarrow{\cong} K(pt) = \Z,$$ we deduce that the ideal $\i(\tilde T)$ is the same as in the previous example, that is, $$\label{eq:zpn} \i(\tilde T) = (w, z) \ \ \text{and} \ \ k(\tilde T) = 1.$$ Further, from \eqref{eq:z}, we get that $ \tK_G^1(\tilde T) = 0$ and $\tK_G(\tilde T)$ is an extension of $(w,z)$ by $K^1(Q) \cong \Z$. (Here, $\Z$ is $R(G)/(w,z)$ as an $R(G)$-module.) 
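The computations of $\i$ and $k$ in these examples use only the ring structure of $R(G)$: every element has a normal form $\lambda w + Q(z)$, with multiplication determined by $w^2 = 2w$ and $w z^k = 2^k w$. The following minimal sketch makes this arithmetic explicit (illustrative only; the class `RG` and its normal form are our own bookkeeping device, not notation from the paper):

```python
# Elements of R(G) = Z[w, z]/(w^2 - 2w, wz - 2w) in the normal form
# lam*w + Q(z): the relations reduce any monomial containing w to a
# multiple of w, via w^2 = 2w and w*z^k = 2^k * w.
class RG:
    def __init__(self, lam=0, Q=(0,)):
        self.lam = lam            # coefficient of w
        self.Q = tuple(Q)         # coefficients of the polynomial Q(z)

    def __mul__(self, other):
        # (lam*w + Q(z)) * (mu*w + R(z))
        #   = (2*lam*mu + lam*R(2) + mu*Q(2)) * w + Q(z)*R(z),
        # since w * Q(z) = Q(2) * w.
        ev2 = lambda P: sum(c * 2 ** i for i, c in enumerate(P))  # P(2)
        lam = (2 * self.lam * other.lam
               + self.lam * ev2(other.Q) + other.lam * ev2(self.Q))
        QR = [0] * (len(self.Q) + len(other.Q) - 1)
        for i, a in enumerate(self.Q):
            for j, b in enumerate(other.Q):
                QR[i + j] += a * b
        return RG(lam, QR)

w = RG(lam=1)         # the class w
z = RG(Q=(0, 1))      # the class z
one = RG(Q=(1,))      # the identity

# the defining relations hold in this normal form:
assert (w * w).lam == 2 and all(c == 0 for c in (w * w).Q)
assert (z * w).lam == 2 and all(c == 0 for c in (z * w).Q)

# w * z^l = 2^l * w: for the ideal (z^l), e.g. i((C^t + H^l)^+), the
# generator z^l realizes the minimum in the definition of k, giving
# k = l; multiplying the ideal by z (suspension by H) adds 1 to k.
x = one
for l in range(6):
    assert (w * x).lam == 2 ** l and all(c == 0 for c in (w * x).Q)
    x = z * x
```

In particular, for the ideal $(w, z)$ of the two examples above, no element $\lambda w + P(z)$ with $P(0)=0$ satisfies $wx = w$, since $2\lambda + P(2)$ is always even; this is why $k(\tilde G) = k(\tilde T) = 1$ rather than $0$.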
Properties ---------- We now turn to some general properties of the invariant $k(X)$. \[lem:map0\] Let $X$ and $X'$ be spaces of type $\swf$ at even levels. Suppose that there exists a based, $G$-equivariant homotopy equivalence from $\Sigma^{r \R} X$ to $\Sigma^{r\R} X'$, for some $r \geq 0$. Then, we have $\i(X) = \i(X')$ and $k(X) = k(X')$. By suspending the $G$-equivalence $\Sigma^{r\R} X \to \Sigma^{r\R} X'$ with another copy of $\R$ we can assume that $r$ is even, so that $U:=r \R = (r/2) \C$ is a complex representation. Consider the commutative diagrams $$\xymatrix{ \tK_G(X') \ar[r]^{\cong} \ar[d] & \tK_G(\Sigma^U X') \ar[r]^{\cong} \ar[d] & \tK_G(\Sigma^U X) \ar[r]^{\cong} \ar[d] & \tK_G(X) \ar[d] \\ \tK_G\bigl ((X')^{S^1} \bigr) \ar[r]^{\cong} & \tK_G\bigl( (\Sigma^U X')^{S^1} \bigr) \ar[r]^{\cong} & \tK_G\bigl( (\Sigma^U X)^{S^1} \bigr) \ar[r]^{\cong} & \tK_G\bigl( (X)^{S^1} \bigr). }$$ Here, in each row, the first map is a Bott isomorphism, the second comes from the $G$-equivalence in the hypothesis, and the third is the inverse to a Bott isomorphism. The vertical arrows are given by restriction. Comparing the first vertical arrow to the last we obtain the desired conclusions. \[lem:map1\] Let $X$ and $X'$ be spaces of type $\swf$ at the same even level $2t$, and suppose that $f: X \to X'$ is a $G$-equivariant map whose $S^1$-fixed point set map is a $G$-homotopy equivalence. Then: $$k(X) \leq k(X').$$ Analyzing the commutative diagram $$\begin{CD} \tK_G(X') @>{f^*}>> \tK_G(X) \\ @V{(\iota')^*}VV @VV{\iota^*}V \\ \tK_G((X')^{S^1}) @>{\cong}>> \tK_G(X^{S^1}), \end{CD}$$ we see that $\i(X') \subseteq \i(X)$. This implies $k(X) \leq k(X')$. \[lem:map2\] Let $X$ and $X'$ be spaces of type $\swf$ at levels $2t$ and $2t'$, respectively, such that $t < t'$. Suppose that $f: X \to X'$ is a $G$-equivariant map whose $G$-fixed point set map is a homotopy equivalence. 
Then: $$k(X) +t \leq k(X') + t'.$$ Note that the $G$-fixed point set of a space of type $\swf$ is homotopy equivalent to $S^0$. We have commutative diagrams: $$\label{eq:2cd} \begin{CD} \tK_G(X') @>{f^*}>> \tK_G(X) \\ @V{(\iota')^*}VV @VV{\iota^*}V \\ \tK_G((X')^{S^1}) @>{(f^{S^1})^*}>> \tK_G(X^{S^1})\\ @V{\cdot w^{t'}}VV @VV{\cdot w^t}V \\ \tK_G((X')^G) @>{\cdot 1}>> \tK_G(X^G). \end{CD}$$ The bottom four groups are all isomorphic to $R(G)$, and we identified three of the maps with multiplications by elements of $R(G)$ using these isomorphisms; see Facts \[fact:bott\] and \[fact:bott2\]. We deduce that the middle horizontal map $(f^{S^1})^*$ in \eqref{eq:2cd} is multiplication by an element $y \in R(G)$ such that $$\label{eq:y} w^t \cdot y = w^{t'}.$$ On the other hand, since $t < t'$, in view of Fact \[fact:bott\], the map $$(f^{S^1})^*: \tK_{S^1}((X')^{S^1}) \to \tK_{S^1}(X^{S^1})$$ is zero. If we apply restriction maps $\tK_G \to \tK_{S^1}$ to the middle row in \eqref{eq:2cd} (compare Fact \[fact:res\]), we see that $y$ must be mapped to $0$ under the map $R(G) \to R(S^1)$ given by \eqref{eq:rs1}. This implies that $y=cw$ for some $c \in \Z$, and from \eqref{eq:y} we get $$2^t c w = c w^{t+1} = w^{t'} = 2^{t'-1}w,$$ so $c = 2^{t'-t-1}$. We deduce that $(f^{S^1})^*$ is multiplication by $2^{t'-t-1}w$. Let $x \in \i(X')$ be such that $wx = 2^{k'} w$, where $k'=k(X')$. From the top diagram in \eqref{eq:2cd} we see that $$(f^{S^1})^*(x) = 2^{t'-t-1}wx = 2^{k'+t'-t-1}w \in \i(X).$$ Since $w(2^{k'+t'-t-1}w) = 2^{k'+t'-t}w$, we get that $k(X) \leq k'+t'-t$. In the presence of the $K_G$-split assumption, we can strengthen Lemma \[lem:map2\]: \[lem:map3\] Let $X$ and $X'$ be spaces of type $\swf$ at levels $2t$ and $2t'$, respectively, such that $t < t'$ and $X$ is $K_G$-split. Suppose that $f: X \to X'$ is a $G$-equivariant map whose $G$-fixed point set map is a homotopy equivalence. Then: $$k(X) + t + 1 \leq k(X')+t'.$$ Since $X$ is $K_G$-split, we must have $\i(X) = (z^k)$ where $k = k(X)$. 
The only modification is now in the last step of the proof of Lemma \[lem:map2\]. We know that $ 2^{k'+t'-t-1}w \in \i(X)$. An arbitrary element of $\i(X)$ is of the form $z^k(\lambda w + P(z))= \lambda 2^{k}w + z^kP(z),$ where $\lambda \in \Z$ and $P$ is a polynomial in $z$. For such an element to be a multiple of $w$ we must have $P(z)=0$. We get that $2^{k'+t'-t-1}w = \lambda 2^kw$, and hence $k' + t'-t-1 \geq k$. Finally, we mention the behavior of $k$ under equivariant Spanier-Whitehead duality. Let $V$ be a finite dimensional representation of $G$. Recall from [@MayBook Section XVI.8] that two pointed, finite $G$-spaces $X$ and $X'$ are [*equivariantly $V$-dual*]{} if there exist $G$-maps ${\varepsilon}: X' \wedge X \to V^+$ and $\eta: V^+ \to X \wedge X'$ such that the following two diagrams are stably homotopy commutative: $$\xymatrix{ V^+ \wedge X \ar[r]^{\eta \wedge \operatorname{id}} \ar[dr]^{\gamma} & X \wedge X' \wedge X \ar[d]^{\operatorname{id}\wedge {\varepsilon}}\\ & X \wedge V^+ } \ \ \ \ \ \ \ \xymatrix{ X' \wedge V^+ \ar[r]^{\operatorname{id}\wedge \eta} \ar[d]_{\gamma} & X' \wedge X \wedge X' \ar[d]^{{\varepsilon}\wedge \operatorname{id}} \\ V^+ \wedge X' \ar[r]_{r \wedge \operatorname{id}} & V^+ \wedge X', }$$ where $r: V^+ \to V^+$ is the sign map, $r(v)=-v$, and $\gamma$ are the transpositions. \[lem:dual\] Let $X$ and $X'$ be spaces of type $\swf$ at levels $2t$ resp. $2t'$, such that $X$ and $X'$ are equivariantly $V$-dual for some $V \cong \tC^s \oplus \H^l$, with $s, l \geq 0$. Then: $$k(X) + k(X') \geq l.$$ Consider the duality maps ${\varepsilon}$ and $\eta$. Their restrictions to the $S^1$-fixed point sets induce a $V^{S^1}$-duality between $X^{S^1} \simeq (\tC^t)^+$ and $(X')^{S^1} \simeq (\tC^{t'})^+$. Since $V^{S^1} \cong \tC^s$, this implies that $t+t'=s$. 
Let us view ${\varepsilon}^{S^1}$ and $\eta^{S^1}$ as $G$-equivariant (that is, $\Z/2$-equivariant) maps from the sphere $(\tC^s)^+$ to itself. Their restrictions to the $\Z/2$-fixed point sets induce a duality between $S^0$ and $S^0$, that is, a bijection. This means that, up to $\Z/2$-equivalence, the maps ${\varepsilon}^{S^1}$ and $\eta^{S^1}$ are unreduced suspensions of $\Z/2$-equivariant maps from the unit sphere $S(\tC^s)$ to itself. Up to $\Z/2$-equivalence, such maps $S(\tC^s) \to S(\tC^s)$ are determined by their degree (which must be odd by the Borsuk-Ulam theorem); see [@Olum]. We conclude that the maps ${\varepsilon}^{S^1}$ and $\eta^{S^1}$ are determined (up to $G$-homotopy equivalences) by their degrees $d({\varepsilon}^{S^1}), d(\eta^{S^1}) \in \Z$. The duality diagrams imply that $d({{\varepsilon}^{S^1}}) d(\eta^{S^1}) = \pm 1$, so ${\varepsilon}^{S^1}$ and $\eta^{S^1}$ must be $G$-homotopy equivalences. Applying Lemma \[lem:map1\] to both of these maps we deduce that: $$\label{eq:dualy} k(X \wedge X') = k(V^+)=l.$$ Next, recall from Fact \[fact:prod\] that we have a product map $\tK_G(X) \otimes \tK_G(X') \to \tK_G(X \wedge X')$. If $x \in \i(X)$ and $x' \in \i(X')$ are such that $wx=2^{k(X)}w$ and $wx'=2^{k(X')}w$, then $xx' \in \i(X \wedge X')$ and $$w(x x') = 2^{k(X)}wx'= 2^{k(X)+k(X')} w.$$ We deduce that: $$\label{eq:smash} k(X \wedge X') \leq k(X) + k(X').$$ Combining this with \eqref{eq:dualy}, the conclusion follows. To see an example when the inequality \eqref{eq:smash} is strict, let $X$ be the space $\tilde G$ from Example \[ex:G\], and let $X'$ be the space $\tilde T$ from Example \[ex:torus\]. We showed that $k(\tilde G) = k(\tilde T)=1$, and it is observed in [@swfh Example 2.14] that $\tilde G$ and $\tilde T$ are $\H$-dual. 
From Equation \eqref{eq:dualy} we get that $ k(\tilde G \wedge \tilde T) = 1 < k(\tilde G) + k(\tilde T) = 2.$ Pin(2)-equivariant Seiberg-Witten Floer K-theory {#sec:swf} ================================================ In this section we use the methods in [@Spectrum] and [@swfh] to construct $\pin$-equivariant Seiberg-Witten Floer K-theory. We will start by working in the setting of rational homology spheres (equipped with a spin structure), but when we discuss applications we will specialize to integral homology spheres. Finite dimensional approximation {#sec:fda} -------------------------------- Let us briefly review the construction of equivariant Seiberg-Witten Floer spectra. We refer to [@Spectrum] and [@swfh] for more details. Let $Y$ be a rational homology three-sphere, $g$ a metric on $Y$, ${\mathfrak{s}}$ a spin structure on $Y$, and $\Spin$ the spinor bundle for ${\mathfrak{s}}$. Consider the global Coulomb slice in the Seiberg-Witten configuration space: $$V = i \ker d^* \oplus \Gamma(\Spin) \subset i\Omega^1(Y) \oplus \Gamma(\Spin).$$ Using the quaternionic structure on spinors, we find an action of the group $G=\pin$ on $V$. Precisely, an element $e^{i\theta} \in S^1$ takes $(a, \phi)$ to $(a, e^{i\theta}\phi)$, whereas $j \in G$ takes $(a, \phi)$ to $(-a, j\phi)$. Let $\rho:TY \to \text{End}(\Spin)$ denote the Clifford multiplication, and $\dirac : \Gamma(\Spin) \to \Gamma(\Spin)$ the Dirac operator. The Chern-Simons-Dirac functional $\csd: \Conf(Y, {\mathfrak{s}}) \to \R$, given by: $$\csd(a,\phi) = \frac{1}{2} \bigl(\int_Y \langle \phi, \dirac \phi + \rho(a)\phi \rangle dvol - \int_Y a \wedge da \bigr),$$ is invariant under the $G$-action. Its gradient (in a suitable metric) is the Seiberg-Witten map, which decomposes as a sum $$\ell + c: V \to V,$$ where $\ell$ is the linearization $\ell(a,\phi) = (*da, \dirac \phi)$. We refer to the gradient flow of $\csd$ as the Seiberg-Witten flow. The map $\ell$ is an elliptic, self-adjoint operator. 
We denote by ${V^\nu_\tau}$ the finite-dimensional subspace of $V$ spanned by the eigenvectors of $\ell$ with eigenvalues in the interval $(\tau,\nu]$. Note that, as a $G$-representation, ${V^\nu_\tau}$ decomposes as a direct sum of some copies of $\tR$ and some copies of $\H$. We write this decomposition as $${V^\nu_\tau}= {V^\nu_\tau}(\tR) \oplus {V^\nu_\tau}(\H).$$ Next, we consider the gradient flow of the restriction $\csd|_{{V^\nu_\tau}}$, where $\nu \geq 0$ and $\tau \ll 0$. We view this as a finite dimensional approximation to the Seiberg-Witten flow. The eigenvalue cut-offs $\nu$ and $\tau$ can be chosen independently. However, for simplicity, we shall restrict to the case $\tau = - \nu$. We pick $R \gg 0$ (independent of $\nu$) such that all the finite energy Seiberg-Witten flow lines are inside the ball $B(R)$ in a suitable Sobolev completion of $V$. We then look at the approximate Seiberg-Witten flow on $\vnu$. It can be shown that the points lying on trajectories of this flow that stay inside $B(R)$ form an isolated invariant set. To this set one can associate an equivariant Conley index $I_{\nu}$, which is a pointed $G$-space, well-defined up to canonical $G$-homotopy equivalence. (Roughly, one can think of the Conley index as the quotient of $\vnu \cap B(R)$ by the subset of $\vnu \cap \del B(R)$ where the flow exits the ball.) The following facts are established in [@Spectrum; @swfh]: \[prop:conley\] $(a)$ The Conley index $I_{\nu}$ is a space of type $\swf$ at level $\dim V^0_{-\nu}(\tR)$. 
$(b)$ When we vary the choices in its construction, the Conley index $I_{\nu}$ changes as follows: (i) When we vary the radius $R$, it only changes by a $G$-equivalence; (ii) When we change the cut-off $\nu$ to some $\nu' > \nu$, the space $I_{\nu'}$ is $G$-equivalent to the suspension of $I_{\nu}$ by the representation $V^{-\nu}_{-\nu'}$; (iii) If we vary the Riemannian metric $g$ by a small homotopy, we can choose a cut-off $\nu$ such that the operator $\ell$ does not have $\nu$ or $-\nu$ as an eigenvalue during the homotopy. Then $I_{\nu}$ only changes by a $G$-equivalence. For a fixed metric $g$, we can build a universe made of the negative eigenspaces of $\ell$ (together with infinitely many copies of the trivial $G$-representation), and construct a spectrum $\swf(Y, {\mathfrak{s}}, g)$ as the formal de-suspension $\Sigma^{-V^0_{-\nu}} I_{\nu}$; see [@swfh Section 3.4]. In view of properties (i) and (ii) in Proposition \[prop:conley\](b), the spectrum $\swf(Y, {\mathfrak{s}}, g)$ is independent of $R$ and $\nu$, up to $G$-equivalence. We call $\swf(Y, {\mathfrak{s}}, g)$ the [*Seiberg-Witten Floer spectrum*]{} of the triple $(Y, {\mathfrak{s}}, g)$. When we vary the metric $g$, it is difficult to identify the universes that provide coordinates for our spectra. Note that, for fixed $\nu$, the dimension of $V^0_{-\nu}$ changes according to the spectral flow of the operator $\ell = *d \oplus \dirac$. The operator $*d$ has trivial spectral flow, but the Dirac operator has spectral flow given by the formula $$\label{eq:spflow} \text{S.F.}(\dirac) = n(Y, {\mathfrak{s}}, g_0) - n(Y, {\mathfrak{s}}, g_1).$$ Here, $g_0$ and $g_1$ are the initial and final metrics, and the quantities $$n(Y, {\mathfrak{s}}, g_i) \in \tfrac{1}{8} \Z \subset \Q$$ are linear combinations of the eta invariants associated to $*d$ and $\dirac$, for each metric. 
Alternatively, given a metric $g$ on $Y$, we can pick a compact spin four-manifold $W$ with boundary $Y$, let $\Dirac(W)$ be the Dirac operator on $W$ (with Atiyah-Patodi-Singer boundary conditions), and set $$\label{eq:n} n(Y, {\mathfrak{s}}, g) = \ind_{\C}\Dirac(W) + \frac{\sigma(W)}{8}.$$ Although $n(Y, {\mathfrak{s}}, g)$ is in general one-eighth of an integer, as we vary $g$ (and keep $Y$ and ${\mathfrak{s}}$ fixed) it changes by elements of $\Z$. Also, when $Y$ is an integral homology sphere, we have $n(Y, {\mathfrak{s}}, g) \in \Z$, and its parity is given by the Rokhlin invariant $\mu$: $$\label{eq:nmod2} n(Y, {\mathfrak{s}}, g) \mod 2 = \mu(Y) \in \Z/2.$$ Looking at , one is prompted to consider a formal de-suspension of $\swf(Y, {\mathfrak{s}}, g)$ by $n(Y, {\mathfrak{s}}, g)/2$ copies of the representation $\H$. (The factor of $1/2$ comes from the fact that counts complex dimensions of the eigenspaces of $\dirac$, rather than quaternionic dimensions.) This produces an invariant of $Y$ in the form of an equivalence class of formally de-suspended spaces. The relevant definition is given below. Stable even equivalence ----------------------- Consider the set of triples $(X, m, n)$, where $X$ is a space of type $\swf$ at an even level, $m \in \Z$ and $n \in \Q$. We introduce the following equivalence relation on such triples: \[def:see\] We say that $(X, m,n)$ is [*stably even equivalent*]{} to $(X', m', n')$ if $n-n' \in \Z$, and there exist $M, N, r \geq 0$ and a $G$-homotopy equivalence $$\Sigma^{r \R} \Sigma^{(M-m) \tC} \Sigma^{ (N-n) \H} X \xrightarrow{\sim} \Sigma^{r \R} \Sigma^{(M-m') \tC} \Sigma^{(N-n') \H} X'.$$ Thus, a triple could be thought of as a “$G$-equivariant suspension spectrum,” given by the formal de-suspension of $X$ by $m$ copies of the representation $\tC$ and $n$ copies of the representation $\H$. We denote by $\E$ the set of stable even equivalence classes of triples $(X, m, n)$. 
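As a quick illustration of the bookkeeping in this definition (our own example, not from the text), one can check directly that suspension by $\tC$ is absorbed into the index $m$:

```latex
% Claim: (\Sigma^{\tC} X,\, m+1,\, n) is stably even equivalent to (X, m, n).
% In the definition, take r = 0, N = n, and any M \geq m+1.  Then
\Sigma^{(M-(m+1))\,\tC}\; \Sigma^{(N-n)\,\H}\; \bigl(\Sigma^{\tC} X\bigr)
   \;=\; \Sigma^{(M-m)\,\tC}\; \Sigma^{(N-n)\,\H}\; X,
% since suspending (M-m-1) times by \tC and then once more by \tC is the
% same as suspending (M-m) times.  The identity map is then the required
% G-homotopy equivalence, so formal de-suspension by \tC inverts \Sigma^{\tC}.
```

The same argument, with $N$ in place of $M$, shows that suspension by $\H$ is absorbed into the index $n$.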
Informally, we will refer to the elements of $\E$ as [*spectrum classes*]{}. If $(X, m, n)$ is a triple as above, we define its (reduced) equivariant Borel cohomology, with coefficients in an Abelian group $A$, by $$\tH^*_G(X, m, n; A) := \tH_G^{*+2m+4n}(X; A)$$ and its (reduced) equivariant K-cohomology by $$\tK^*_G(X, m, n) := \tK_G^{*+2m+4n}(X).$$ We also set $$k(X, m, n) = k(X)-n,$$ where $k$ is the invariant defined in Section \[sec:k\]. \[lem:evens\] Let $(X, m, n)$ be a triple as above. Then, the following are invariants of the spectrum class $\S=[(X, m, n)] \in \E$: - The isomorphism class of Borel cohomology, $\tH^*_G(\S; A) := [\tH^*_G(X, m, n; A)]$, as a graded module over $H^*(BG; A)$; - The isomorphism class of equivariant K-cohomology, $\tK_G^*(\S) := [\tK^*_G(X, m, n)]$, as a graded module over $R(G)$; - The quantity $k(\S) := k(X, m, n) \in \Q$. The first two statements follow from the invariance of the two theories under suspensions by complex representations; compare [@swfh Remark 2.3] and \[fact:bott\]. The third statement follows from the behavior of $k$ under $G$-equivalences (after stabilization by copies of $\R$) and under suspensions by $\tC$ and $\H$. These were established in Lemma \[lem:map0\] and Lemma \[lem:susp\], respectively. In the definition of stable even equivalence we only allowed de-suspensions by copies of $\tC = \tR \oplus \tR$ and $\H$, which are complex representations of $G$. We did this because equivariant cohomology and equivariant K-theory are invariant (up to a shift in degree) under such representations, whereas they are not invariant under suspending by an arbitrary real representation such as $\tR$. If we had been interested only in the equivariant cohomology with $\Z/2$ coefficients (as we were in [@swfh]), then we could have allowed de-suspensions by $\tR$, and dropped the condition on $X$ to be at an even level. Note also the presence of arbitrary suspensions by $\R$ in Definition \[def:see\]. 
This is not necessary for constructing a $3$-manifold invariant as a spectrum class (which we do in Section \[sec:swfclass\] below), but it makes computations more accessible. For example, when we compute some spectrum classes in Section \[sec:brieskorn\], we will be free to use standard facts from equivariant stable homotopy. The Seiberg-Witten Floer spectrum class {#sec:swfclass} --------------------------------------- If $Y$ is a rational homology sphere with a spin structure ${\mathfrak{s}}$, let $g, R$ and $\nu$ be as in Subsection \[sec:fda\]. Recall from Proposition \[prop:conley\](a) that the Conley index $I_{\nu}$ is a space of type $\swf$ at level $\dim V^0_{-\nu}(\tR)$. Define $$\S(Y, {\mathfrak{s}}) = \begin{cases} [(I_{\nu}, \tfrac{1}{2}\dim V(\tR)^0_{-\nu}, \dim_{\H} V(\H)^0_{-\nu} + \tfrac{1}{2}n(Y, {\mathfrak{s}}, g))] & \text{if} \ I_{\nu} \text{ is at an even level,} \\ [(\Sigma^{\tR} I_{\nu}, \tfrac{1}{2}(\dim V(\tR)^0_{-\nu} + 1), \dim_{\H} V(\H)^0_{-\nu} + \tfrac{1}{2}n(Y, {\mathfrak{s}}, g))] & \text{if} \ I_{\nu} \text{ is at an odd level.} \end{cases}$$ \[prop:inv\] The spectrum class $\S(Y, {\mathfrak{s}}) \in \E$ is an invariant of the pair $(Y, {\mathfrak{s}})$. This is a consequence of Proposition \[prop:conley\](b) and the formula for the spectral flow. Compare [@Spectrum Theorem 1]. We refer to $\S(Y, {\mathfrak{s}})$ as the [*Seiberg-Witten Floer spectrum class of $(Y, {\mathfrak{s}})$*]{}. In view of Lemma \[lem:evens\] and Proposition \[prop:inv\], we define the [*$G$-equivariant Seiberg-Witten Floer cohomology*]{} of $(Y, {\mathfrak{s}})$, with coefficients in an Abelian group $A$, as $$\swfh^*_G(Y, {\mathfrak{s}}; A) := \tH^*_G(\S(Y, {\mathfrak{s}}); A).$$ (One can also define equivariant Seiberg-Witten Floer homology in a similar manner.) 
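For orientation, here is a sketch (our own, using the fact that suspension by a complex representation shifts Borel cohomology by its real dimension) of why the degree shift $*+2m+4n$ in the definition of $\tH^*_G(X, m, n; A)$ makes this cohomology well defined on spectrum classes:

```latex
% Suspension by the complex G-representation \tC shifts Borel cohomology by 2:
\tH^{*}_G(\Sigma^{\tC} X; A) \;\cong\; \tH^{*-2}_G(X; A).
% For the equivalent triples (\Sigma^{\tC} X, m+1, n) and (X, m, n) we then get
\tH^*_G(\Sigma^{\tC} X,\, m+1,\, n; A)
   = \tH^{*+2(m+1)+4n}_G(\Sigma^{\tC} X; A)
   \;\cong\; \tH^{*+2m+4n}_G(X; A)
   = \tH^*_G(X, m, n; A),
% so the graded module is unchanged.  The case of \H (real dimension 4) is
% analogous, matching the summand 4n in the degree shift.
```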
Further, we define the [*$G$-equivariant Seiberg-Witten Floer K-cohomology*]{} of $(Y, {\mathfrak{s}})$ as $$\swfk^*_G(Y, {\mathfrak{s}}) := \tK^*_G(\S(Y, {\mathfrak{s}})).$$ This group in degree $2n(Y, {\mathfrak{s}}, g) \in \tfrac{1}{4}\Z$ can be called the [*$G$-equivariant Seiberg-Witten Floer K-theory*]{} of $(Y, {\mathfrak{s}})$. We define: $$\label{eq:kappa} \kappa(Y, {\mathfrak{s}}) := 2k(\S(Y, {\mathfrak{s}})) \in \tfrac{1}{8} \Z \subset \Q.$$ We say that the pair $(Y, {\mathfrak{s}})$ is [*Floer $K_G$-split*]{} if, for $\nu \gg 0$, either $I_{\nu}$ or $\Sigma^{\tR} I_{\nu}$ (depending on the parity of the level of $I_{\nu}$) is $K_G$-split in the sense of Definition \[def:ksplit\]; cf. Definition \[def:FloerSplit\] from the introduction. If $Y$ is an integral homology sphere, then it has a unique spin structure ${\mathfrak{s}}$, which we drop from the notation. Since $n(Y, g) \in \Z$, in this case $ \swfh^*_G(Y ; A)$ and $\swfk^*_G(Y)$ are integer-graded, and we have $ \kappa(Y) \in \Z$. Cobordisms ---------- Suppose $W$ is a four-dimensional, oriented cobordism between rational homology spheres $Y_0$ and $Y_1$, such that $b_1(W)=0$. Further, assume $W$ is equipped with a Riemannian metric $g$ and a spin structure $\t$. It is shown in [@Spectrum Section 9] and [@swfh Section 3.6] that one can do finite dimensional approximation for the Seiberg-Witten equations on $W$ to obtain a map: $$\label{eq:cob} f: \Sigma^{m_0 \tR} \Sigma^{n_0 \H} (I_0)_{\nu} \longrightarrow \Sigma^{m_1 \tR} \Sigma^{n_1 \H} (I_1)_{\nu}.$$ Here, $(I_0)_{\nu}$ and $(I_1)_{\nu}$ are the Conley indices for the approximate Seiberg-Witten flows on $Y_0$ and $Y_1$, respectively, corresponding to an eigenvalue cut-off $\nu \gg 0$. Let also $V_i$ denote the global Coulomb slice on $Y_i$, for $i=0,1$. 
The differences in suspension indices in are: $$m_0 - m_1 = \dim_{\R} \bigl( (V_1)^0_{-\nu}(\tR)\bigr) - \dim_{\R} \bigl((V_0)^0_{-\nu}(\tR)\bigr) - b_2^+(W)$$ and $$n_0 - n_1 = \dim_{\H} \bigl( (V_1)^0_{-\nu}(\H) \bigr) - \dim_{\H} \bigl( (V_0)^0_{-\nu}(\H) \bigr) + n(Y_1, \t|_{Y_1}, g)/2 - n(Y_0, \t|_{Y_0}, g)/2 - {\sigma(W)}/16.$$ Moreover, the $S^1$-fixed point set of is induced on the one-point compactifications by a linear injective map with cokernel of dimension $b_2^+(W)$. Note that both the domain and the target of the map are spaces of type $\swf$. The difference in their levels is $-b_2^+(W)$. If both levels happen to be even, then the difference in the values of $k$ for the domain and the target is $$\frac{1}{2} \bigl( \kappa(Y_0) - \kappa(Y_1) - \sigma(W)/8\bigr).$$ We can now give the proofs of the main results advertised in the introduction: Part (i) follows from the formula for $\kappa$, the definition of the spectrum class $\S(Y)$, and the fact that $n(Y, g)$ mod $2$ is the Rokhlin invariant; cf. . For part (ii), after doing surgery on loops, we can assume without loss of generality that $b_1(W)=0$. Consider the map associated to the cobordism $W$. Since $b_2^+(W) =0$, the domain and target of are at the same level. By suspending the map $f$ with $\tR$ if necessary, we can arrange that the common level is even. The conclusion then follows from Lemma \[lem:map1\]. For part (iii), again we can assume that $b_1(W)=0$. If the intersection form on $W$ is $p(-E_8) \oplus q\hyp$, the difference in levels in is $-q$. If $q=0$, we can simply apply part (ii). If $q > 0$ and $q$ is even, since $p=-\sigma(W)/8$, by applying Lemma \[lem:map2\] to we get: $$\kappa(Y_0) + p \leq \kappa(Y_1) + q.$$ If $q > 0$ and $q$ is odd, the best we can do is to take the connected sum of $W$ and a copy of $S^2 \times S^2$ to reduce to the case of $q$ even. 
We do this at the expense of weakening the bound above to: $\kappa(Y_0) + p -1 \leq \kappa(Y_1) + q.$ Parts (i) and (ii) of Theorem \[thm:main1\] admit straightforward generalizations to the case when $Y_0$ and $Y_1$ are rational homology spheres equipped with spin structures. There is also an analogue of part (iii) which can be used to get constraints on the indefinite intersection forms of spin cobordisms between rational homology spheres; however, these intersection forms are not generally unimodular, so we cannot write them as $p(-E_8) \oplus q\hyp$. The bound in (iii) can be expressed instead in terms of the second Betti number and the signature of $W$. The same argument as in part (iii) of Theorem \[thm:main1\] applies here, except that now we can use Lemma \[lem:map3\] instead of Lemma \[lem:map2\]. When $q$ is even we get $$\label{eq:qeven} \kappa(Y_0) + p + 1 \leq \kappa(Y_1) + q.$$ By part (i) of Theorem \[thm:main1\] the parity of $\kappa(Y_0)-\kappa(Y_1)$ is the Rokhlin invariant of the boundary of $W$, so it is the same as the parity of $p$. Therefore, for parity reasons we can improve the inequality to $$\kappa(Y_0) + p + 2 \leq \kappa(Y_1) + q.$$ When $q$ is odd, we add a copy of $S^2 \times S^2$ and we are left with the inequality . In Section \[sec:psc\] below we will prove that $\S(S^3) = [(S^0, 0, 0)]$, so $S^3$ is Floer $K_G$-split and $\kappa(S^3)=0$. Assuming this, we can apply Theorem \[thm:main2\] to the complement of a ball in $W$. Suppose such a decomposition exists. Applying Corollary \[cor:1bdry\] to the first piece $X_1$ we get $\kappa(Y_1) \geq 2+1-3 = 0.$ Next, apply Theorem \[thm:main2\] to the pieces $X_i$ for $i=1, \dots, r-1$. We obtain $\kappa(Y_{i+1}) \geq \kappa(Y_{i})$ for all such $i$, so $\kappa(Y_{r-1}) \geq \kappa(Y_1) \geq 0$. On the other hand, by applying Theorem \[thm:main2\] to the complement of a ball in $X_r$ we get $\kappa(Y_{r-1}) \leq -1$, a contradiction. 
A similar argument can be used to exclude any decompositions of $X$ into $r$ spin pieces, each of signature $-16$, and glued along Floer $K_G$-split homology spheres. Here is one last result mentioned in passing in the introduction: \[prop:duals\] If $Y$ is an oriented homology sphere and $-Y$ is the same manifold with the reverse orientation, then: $$\kappa(Y) +\kappa(-Y) \geq 0.$$ It is shown in [@swfh proof of Proposition 3.9] that the Conley indices $I_{\nu}$ (for $Y$) and $\bar I_{\nu}$ (for $-Y$) are equivariantly $(V^{\nu}_{-\nu})$-dual to each other. The result now follows from Lemma \[lem:dual\]. Calculations {#sec:calc} ============ In this section we prove Theorem \[thm:brieskorn\] from the introduction, about the values of $\kappa$ for $S^3$ and for the Brieskorn spheres $\pm \Sigma(2,3,m)$ with $\gcd(m, 6)=1$. We obtain some concrete bounds on the intersection forms of spin four-manifolds with boundary, and compare them to the bounds that can be obtained by simpler methods. Positive scalar curvature {#sec:psc} ------------------------- If $Y$ is a rational homology sphere admitting a metric $g$ of positive scalar curvature, by the arguments in [@Spectrum Section 10] or [@GluingBF Section 7.1], we obtain $$\S(Y, {\mathfrak{s}}) = [(S^0, 0, n(Y, {\mathfrak{s}}, g)/2)]$$ and therefore $$\kappa(Y, {\mathfrak{s}}) = - n(Y, {\mathfrak{s}}, g).$$ In particular, $\S(S^3)=[(S^0, 0, 0)]$ and $\kappa(S^3)=0$. This proves part (i) of Theorem \[thm:brieskorn\]. A family of Brieskorn spheres {#sec:brieskorn} ----------------------------- We now move to parts (ii) and (iii) of Theorem \[thm:brieskorn\]. We use the arguments in [@GluingBF Section 7.2] and [@swfh Section 3.8] to compute explicitly the Seiberg-Witten Floer spectrum classes of $\pm \Sigma(2,3,m)$. The calculations are based on the description of the monopole solutions on $\Sigma(2,3,m)$, which was given by Mrowka, Ozsváth and Yu in [@MOY]. We start with the case $m=12n-1$. 
The Seiberg-Witten equations on $\Sigma(2,3,12n-1)$ have one reducible solution in degree zero, and $2n$ irreducibles in degree one. The irreducibles come in $n$ pairs related by the action of the element $j \in G$. Thus, a representative for $\S(\Sigma(2,3,12n-1))$ can be constructed by attaching $n$ free cells of the form $\Sigma G_+$ to a trivial cell $S^0$. The attaching map for each cell is determined by a stable homotopy class in $\{G_+, S^0\}_G \cong \{S^0, S^0\} \cong \Z$. Together the attaching maps give an element in $\Z^n$, and the spectrum class is determined by the divisibility of this element. The fact that it is primitive can be deduced from the calculation of the $S^1$-equivariant homology of $\S(\Sigma(2,3,12n-1))$, given in [@GluingBF Section 7.2]. (In fact, it even suffices to know the non-equivariant homology.) We obtain: $$\S(\Sigma(2,3,12n-1)) = [(\tilde G \vee \underbrace{ \Sigma G_+ \vee \dots \vee \Sigma G_+}_{n-1}, 0, 0)],$$ where $\tG$ is the unreduced suspension of $G$, considered in Example \[ex:G\]. We computed that $k(\tG) = 1$, and we know from that $k$ is unchanged by wedging with a free space. Therefore, we have $\kappa(\Sigma(2,3,12n-1)) = 2.$ The spectrum class of $-\Sigma(2,3,12n-1)$ is dual to that of $\Sigma(2,3,12n-1)$; compare the proof of Proposition \[prop:duals\]. We know from [@swfh Example 2.14] that $\tG$ is $\H$-dual to the space $\tilde T$ from Example \[ex:torus\]. Furthermore, $G_+$ is stably $(\R^{\dim G})$-dual to itself by the Wirthmüller isomorphism.[^2] Since $\Sigma^{\H} G_+ \simeq \Sigma^4 G_+$, we can write the dual of $\Sigma G_+$ as the formal de-suspension of $\Sigma^{2}G_+$ by $\H$. 
We deduce that: $$\S(-\Sigma(2,3,12n-1)) = [(\tilde T \vee \underbrace{\Sigma^2 G_+ \vee \dots \vee \Sigma^2 G_+}_{n-1}, 0, 1)].$$ In Example \[ex:torus\] we computed $k(\tilde T) =1$, so we find that $\kappa(-\Sigma(2,3,12n-1))= 2 \cdot (1-1) = 0.$ The case of $\Sigma(2,3,12n-5)$ is similar to $\Sigma(2,3,12n-1)$, except now the reducible is in degree $-2$ and the irreducibles in degree $-1$. Thus, $\S(\Sigma(2,3,12n-5))$ is a formal de-suspension of $\S(\Sigma(2,3,12n-1))$ by $1/2$ copies of the representation $\H$. Therefore, $$\S(\Sigma(2,3,12n-5)) = [(\tilde G \vee \underbrace{\Sigma G_+ \vee \dots \vee \Sigma G_+}_{n-1}, 0, 1/2)].$$ The spectrum class for $-\Sigma(2,3,12n-5)$ is the dual of $\S(\Sigma(2,3,12n-5))$, and the formal suspension of $\S(-\Sigma(2,3,12n-1))$ by $1/2$ copies of $\H$. Thus, $$\S(-\Sigma(2,3,12n-5)) = [(\tilde T \vee \underbrace{\Sigma^2 G_+ \vee \dots \vee \Sigma^2 G_+}_{n-1}, 0, 1/2)].$$ From here we deduce that $\kappa(\Sigma(2,3,12n-5)) = \kappa(-\Sigma(2,3,12n-5)) = 1.$ Next, consider the Seiberg-Witten flow for $\Sigma(2,3,12n+1)$. This has one reducible in degree $0$ and $2n$ irreducibles in degree $-1$, coming in $n$ pairs related by the action of $j$. The attaching maps have to be trivial for homotopical reasons. We get: $$\label{eq:s13} \S(\Sigma(2,3,12n+1)) = [(S^0 \vee \underbrace{\Sigma^{-1} G_+ \vee \dots \vee \Sigma^{-1} G_+}_n, 0, 0)].$$ Strictly speaking, by this we mean the spectrum class of $$[(\H^+ \vee \underbrace{\Sigma^{3} G_+ \vee \dots \vee \Sigma^{3} G_+}_n, 0, 1)],$$ but we write it as in for simplicity. Its dual is: $$\S(-\Sigma(2,3,12n+1)) = [(S^0 \vee \underbrace{G_+ \vee \dots \vee G_+}_n, 0, 0)].$$ We obtain $\kappa(\Sigma(2,3,12n+1)) = \kappa(-\Sigma(2,3,12n+1)) =0.$ Finally, the Seiberg-Witten flow for $\Sigma(2,3,12n+5)$ is analogous to that for $\Sigma(2,3,12n+1)$, except for an upward shift in dimension by $2$. 
Therefore, $$\S(\Sigma(2,3,12n+5)) = [(S^0 \vee \underbrace{\Sigma^{-1} G_+ \vee \dots \vee \Sigma^{-1} G_+}_n, 0, -1/2)],$$ with dual $$\S(-\Sigma(2,3,12n+5)) = [(S^0 \vee \underbrace{G_+ \vee \dots \vee G_+}_n, 0, 1/2)].$$ We deduce that $\kappa(\Sigma(2,3,12n+5)) = 1$ and $\kappa(-\Sigma(2,3,12n+5)) =-1.$ This completes the proof of Theorem \[thm:brieskorn\]. Explicit bounds {#sec:bounds} --------------- For an integral homology sphere $Y$, define: $$\xi(Y) = \max \{p-q \mid p, q \in \Z, q > 0, \exists \ X^4 \text{ spin}, \del X = Y, \ Q(X) \equiv p (-E_8) \oplus q\hyp \},$$ where $Q(X)$ denotes the intersection form of $X$. The simplest way of obtaining an upper bound on $\xi(Y)$ is to find a compact spin $4$-manifold $X'$ with $\del X' = -Y$, and then apply Furuta’s 10/8 theorem to $X \cup_Y X'$. If $X'$ has intersection form $p' (-E_8) \oplus q'\hyp$, from we get: $$\label{eq:direct} \xi(Y) \leq q'-p'-1.$$ In particular, for $Y=S^3$, by taking $X'$ to be a four-ball we get $\xi(S^3) \leq -1$. Since the $K3$ surface has intersection form $2(-E_8) \oplus 3 \hyp$, we see that $$\xi(S^3)=-1.$$ A more refined way of getting upper bounds on $\xi(Y)$ is to find a compact, spin $4$-dimensional [*orbifold*]{}[^3] $X'$ with $\del X' = -Y$. Let $\t$ denote the spin structure on $X'$. Let also $X$ be a spin manifold with boundary $Y$ and intersection form $p(-E_8) \oplus q \hyp$, such that $q > 0$, as in the definition of $\xi$. Fukumoto and Furuta [@FukumotoFuruta] proved an analogue of the 10/8-theorem for closed, spin orbifolds. Applying it to $X \cup_Y X'$, it reads $$\label{eq:ff} b_2^+(X \cup_Y X') \geq 1 + \ind_{\C} \Dirac(X \cup_Y X').$$ In [@FukumotoFuruta], this is stated under the assumption $\ind_{\C} \Dirac(X \cup_Y X') > 0$. However, since $b_2^+(X \cup_Y X') = q + b_2^+(X') \geq q \geq 1,$ the inequality remains true if $\ind_{\C} \Dirac(X \cup_Y X') \leq 0$. 
Fukumoto and Furuta defined an invariant $$w(-Y, X', \t) = \ind_{\C} \Dirac(X \cup_Y X') + \frac{1}{8}\sigma(X).$$ This turns out to be independent of $X$. When $X'$ is a plumbed spin orbifold, Saveliev [@SavelievFF] proved that $w(-Y, X', \t)$ coincides with the Neumann-Siebenmann invariant $-\bar \mu(-Y) = \bar \mu(Y)$ from [@NeumannPlumbed; @SiebenmannPlumbed]. Thus, in this case, from we obtain: $$q + b_2^+(X') \geq 1 + \bar \mu(Y) + p.$$ In particular, if $Y$ is a Seifert fibered homology sphere $\Sigma(a_1, \dots, a_k)$ with at least one of the $a_i$ even, we can take $X'$ to be the orbifold $D^2$-bundle over $S^2(a_1, \dots, a_k)$ associated to the Seifert fibration; we choose the orientation of $X'$ so that $\del X' = -Y$. Then $X'$ has a unique spin structure $\t$, and we have $b_2^+(X') = 1, b_2^-(X') = 0$; compare [@FukumotoFurutaUe; @FukumotoBG]. We get the bound: $$\label{eq:xiS} \xi(\Sigma(a_1, \dots, a_k)) \leq - \bar \mu(\Sigma(a_1, \dots, a_k)).$$ Applying the same reasoning to $-Y$ and $-X'$ instead of $Y$ and $X'$, since $b_2^+(-X')=0$, we get the bound: $$\label{eq:xiSm} \xi(-\Sigma(a_1, \dots, a_k)) \leq \bar \mu(\Sigma(a_1, \dots, a_k)) - 1.$$ The $\bar \mu$ invariant for $\Sigma(a_1, \dots, a_k)$ can be computed explicitly; see [@NeumannPlumbed; @NeumannRaymond]. In particular, for the Brieskorn spheres $\pm \Sigma(2,3,m)$ with $\gcd(m,6)=1$, from and we get the concrete inequalities: $$\begin{aligned} \label{eq:Bound11} \xi(\Sigma(2,3,12n-1)) &\leq 0, \hskip1cm \xi(-\Sigma(2,3,12n-1)) \leq -1, \\ \label{eq:Bound7} \xi(\Sigma(2,3,12n-5)) &\leq -1, \hskip.72cm \xi(-\Sigma(2,3,12n-5)) \leq 0, \\ \label{eq:Bound13} \xi(\Sigma(2,3,12n+1)) &\leq 0, \hskip1cm \xi(-\Sigma(2,3,12n+1)) \leq -1,\\ \label{eq:Bound5} \xi(\Sigma(2,3,12n+5)) &\leq 1, \hskip1cm \xi(-\Sigma(2,3,12n+5)) \leq -2.\end{aligned}$$ This paper provides a new method for obtaining bounds on $\xi$. 
Indeed, by Corollary \[cor:1bdry\], we have: $$\label{eq:xikappa} \xi(Y) \leq \kappa(Y)-1.$$ Given the values of $\kappa$ for $\pm \Sigma(2,3,m)$ computed in Theorem \[thm:brieskorn\], we find that gives the following bounds: $$\begin{aligned} \label{eq:bound11} \xi(\Sigma(2,3,12n-1)) &\leq 1, \hskip1cm \xi(-\Sigma(2,3,12n-1)) \leq -1, \\ \label{eq:bound7} \xi(\Sigma(2,3,12n-5)) &\leq 0, \hskip1cm \xi(-\Sigma(2,3,12n-5)) \leq 0, \\ \label{eq:bound13} \xi(\Sigma(2,3,12n+1)) &\leq -1, \hskip.72cm \xi(-\Sigma(2,3,12n+1)) \leq -1,\\ \label{eq:bound5} \xi(\Sigma(2,3,12n+5)) &\leq 0, \hskip1cm \xi(-\Sigma(2,3,12n+5)) \leq -2.\end{aligned}$$ Comparing these with -, we see that $\kappa$ gives better bounds in two of the eight cases: namely, for $\xi(\Sigma(2,3,12n+1))$ and $\xi(\Sigma(2,3,12n+5))$. Let us see to what extent the information we get from , - and - allows us to calculate $\xi(\pm \Sigma(2,3,m)).$ We do a case-by-case analysis. $\mathbf{Y=\pm\Sigma(2,3,12n-1)}.$ The Brieskorn sphere $-\Sigma(2,3,12n-1)$ is the boundary of the nucleus $N(2n)$ inside the elliptic surface $E(2n)$. The nucleus can be represented by a Kirby diagram, and its intersection form is equivalent to $\hyp$. By reversing the orientation of $N(2n)$, we obtain a manifold with boundary $\Sigma(2,3,12n-1)$ and intersection form $\hyp$. From the definition of $\xi$, we get: $$-1 \leq \xi(\pm \Sigma(2,3,12n-1)).$$ In conjunction with , we obtain: $$\xi(\Sigma(2,3,12n-1)) \in \{0, -1\}, \ \ \ \ \xi(-\Sigma(2,3,12n-1)) = -1.$$ We do not know the value of $\xi(\Sigma(2,3,12n-1))$ in general. However, for $n=1$, the complement of $N(2n)$ in the $K3$ surface has intersection form $2(-E_8) \oplus 2\hyp$. Therefore, $$\xi(\Sigma(2,3,11)) = 0.$$ $\mathbf{Y=\pm\Sigma(2,3,12n-5)}.$ The manifold $-\Sigma(2,3,12n-5)$ is the boundary of a plumbing of spheres with intersection form $(-E_8) \oplus \hyp$. 
If we reverse its orientation, we obtain a manifold with intersection form $E_8 \oplus \hyp$ and boundary $\Sigma(2,3,12n-5)$. We deduce that: $$-2 \leq \xi(\Sigma(2,3,12n-5)), \ \ \ \ 0 \leq \xi(-\Sigma(2,3,12n-5)).$$ In view of , we get: $$\xi(\Sigma(2,3,12n-5)) \in \{-2, -1\}, \ \ \ \ \xi(-\Sigma(2,3,12n-5)) = 0.$$ For $n=1$, observe that the complement of the $(-E_{10})$-plumbing inside the $K3$ surface has intersection form $(-E_8) \oplus 2\hyp$. Therefore, $$\xi(\Sigma(2,3,7)) = -1.$$ $\mathbf{Y=\pm\Sigma(2,3,12n+1)}.$ The Brieskorn sphere $-\Sigma(2,3,12n+1)$ is the boundary of a manifold with intersection form $\hyp$. Therefore, we have: $$-1 \leq \xi(\pm \Sigma(2,3,12n+1)).$$ Moreover, when $n=1$ or $2$, the manifolds $\Sigma(2,3,13)$ and $\Sigma(2,3,25)$ bound homology balls, so, by applying , we get: $$\xi(\pm \Sigma(2,3,13)) = \xi(\pm \Sigma(2,3,25)) =-1.$$ The inequalities in now give the answers for all $n$: $$\xi(\pm \Sigma(2,3,12n+1)) = -1.$$ Note that the result for $+\Sigma(2,3,12n+1)$ was not accessible from . This provides a first example where $\kappa$ gives a better bound than the one from the filling method. $\mathbf{Y=\pm\Sigma(2,3,12n+5)}.$ The manifold $\Sigma(2,3,12n+5)$ is the boundary of a plumbing with intersection form $(-E_8) \oplus \hyp$. By analogy with the case $\pm \Sigma(2,3,12n-5)$, we obtain $$0 \leq \xi(\Sigma(2,3,12n+5)), \ \ \ \ -2 \leq \xi(-\Sigma(2,3,12n+5)).$$ The right hand side of shows that: $$\xi(-\Sigma(2,3,12n+5))= -2.$$ On the other hand, to obtain the answer for $\xi(\Sigma(2,3,12n+5))$, we need the new bound , which gives: $$\xi(\Sigma(2,3,12n+5)) = 0.$$ When $n=1$, this could have also been seen by applying the inequality to the positive definite $E_8$ plumbing with boundary $-\Sigma(2,3,12n+5)$. 
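Collecting the case analysis above in one place (our own summary; every value restates a result already established):

```latex
\xi(\Sigma(2,3,12n-1)) \in \{0,-1\}, \qquad \xi(-\Sigma(2,3,12n-1)) = -1,
\xi(\Sigma(2,3,12n-5)) \in \{-2,-1\}, \qquad \xi(-\Sigma(2,3,12n-5)) = 0,
\xi(\pm\Sigma(2,3,12n+1)) = -1, \qquad\qquad \xi(\Sigma(2,3,12n+5)) = 0,
\xi(-\Sigma(2,3,12n+5)) = -2,
% with the two undetermined values pinned down for n = 1:
\xi(\Sigma(2,3,11)) = 0, \qquad \xi(\Sigma(2,3,7)) = -1.
```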
An invariant similar to $\xi$ was considered by Bohr and Lee in [@BohrLee]: $$m(Y) = \max \{ \tfrac{5}{4} \sigma(X) - b_2(X) \mid X^4 \text{ spin}, \del X = Y \}.$$ This invariant was used in [@BohrLee] to study $\Z/2$-homology cobordism. We have: $$m(-Y)/2 = \max \{p-q \mid p, q \in \Z, \exists \ X^4 \text{ spin}, \del X = Y, \ Q(X) \equiv p (-E_8) \oplus q\hyp \}.$$ Note that, unlike in the definition of $\xi$, here we do not assume that $q > 0$. Nevertheless, by taking a connected sum with $S^2 \times S^2$, we obtain the bound $m(-Y)/2 \leq \xi(Y) + 1 \leq \kappa(Y)$. Relation to homological invariants {#sec:alphas} ================================== In this section we explore the relationship between the invariant $\kappa$ (constructed using equivariant K-theory) and the invariant $\alpha$ constructed in [@swfh] using equivariant (Borel) cohomology with $\Z/2$ coefficients. In the process we define yet another invariant of homology spheres, $\alpha_{\Q}$; this is constructed using equivariant cohomology with $\Q$ coefficients. The Borel homology of spaces of type SWF ---------------------------------------- Let $X$ be a space of type $\swf$ at an even level $2t$. In [@swfh Section 2.3], we associated to $X$ three quantities $a(X), b(X), c(X) \in \Z$. This can be done by considering either the Borel homology or the Borel cohomology of $X$. Let us start by reviewing the definition using Borel homology. Let $\F = \Z/2$ be the field with two elements. The reduced Borel homology $\tH_*^G(X; \F)$ is a module over the ring $$H^*(BG; \F) = \F[q, v]/(q^3),$$ where $q$ is in degree $1$ and $v$ in degree $4$. (Hence, $q$ and $v$ act on homology by lowering degrees by $1$ and $4$, respectively.) Consider the long exact sequence $$\dots \to \tH_*^G(X^{S^1}; \F) \to \tH_*^G(X; \F) \to \tH_*^G(X/X^{S^1}; \F) \to \cdots$$ Since the quotient $X/X^{S^1}$ has free $G$-action away from the basepoint, its homology is finite dimensional over $\F$. 
Therefore, in large enough degrees, the Borel homology $\tH_*^G(X; \F)$ looks like that of the fixed point set $X^{S^1} \sim (\tC^t)^+$, which in turn is just isomorphic to $H_*(BG; \F).$ We find that $\tH_*^G(X; \F)$ has an infinite “tail” of the form $$\xymatrixcolsep{.7pc} \xymatrix{ \dots & \F & \F \ar@/_1pc/[l]_{q} & \F \ar@/_1pc/[l]_{q} & 0 & \F \ar@/^1pc/[llll]^{v} & \F \ar@/_1pc/[l]_{q} \ar@/^1pc/[llll]^{v} & \F \ar@/_1pc/[l]_{q} \ar@/^1pc/[llll]^{v} & 0 & \dots \ar@/^1pc/[llll]^{v} & \dots \ar@/^1pc/[llll]^{v} & \dots \ar@/^1pc/[llll]^{v} }$$ Formally, the tail can be defined as the submodule $$\iH_*^G(X; \F) := \bigcap_{l \geq 0} \operatorname{\operatorname{image}}\bigl (v^l : \tH_{*+4l}^G(X; \F) \To \tH_*^G(X; \F) \bigr).$$ If we forget the action of $q$, the tail decomposes into three “sub-tails,” in degrees congruent to $2t, 2t+1$ and $2t+2$ mod $4$. We define $a(X), b(X), c(X)$ by asking that the minimal degrees of nonzero elements in each of the three sub-tails are $a(X), b(X)+1$ and $c(X)+2$, respectively. We will mostly be interested in the first quantity, $$a(X) = \min \{ r \equiv 2t \ (\mod 4) \mid \exists \ x,\ 0 \neq x \in {\iH}_r^G(X; \F) \}.$$ Let us now consider some variations of this, using Borel homology with coefficients in $\Z$ or $\Q$ rather than $\F$. Since $BG$ is an $\rp^2$-bundle over $\hp^{\infty}$ (see [@swfh Section 2.1]), we have $$H^*(BG; \Z) = \Z[s, v]/(s^2, 2s),$$ with $s$ in degree $2$ and $v$ in degree $4$. In large enough degrees $\tH_*^G(X; \Z)$ looks like the homology $H_*(BG; \Z)$, that is, $$\label{eq:Ztail} \xymatrixcolsep{.7pc} \xymatrix{ \dots & \Z & \Z/2 & 0 & 0 & \Z \ar@/^1pc/[llll]^{v} & \Z/2 \ar@/^1pc/[llll]^{v} & 0 & 0 & \dots \ar@/^1pc/[llll]^{v} & \dots \ar@/^1pc/[llll]^{v} }$$ with $v$ being an isomorphism between the corresponding groups. 
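Before continuing with other coefficients, here is a sanity check of the definitions of $a, b, c$ in the simplest case (our own computation): take $X = S^0$, a space of type $\swf$ at level $t = 0$.

```latex
% Reduced Borel homology of S^0 is that of a point, so
\tH_*^G(S^0; \F) \;\cong\; H_*(BG; \F),
% which is nonzero exactly in degrees 0, 1, 2, 4, 5, 6, 8, 9, 10, \dots
% Here the whole module is its own tail, and the three sub-tails (in degrees
% \equiv 0, 1, 2 \bmod 4) begin in degrees 0, 1 and 2, respectively.  Hence
a(S^0) = 0, \qquad b(S^0) + 1 = 1, \qquad c(S^0) + 2 = 2,
% i.e., a(S^0) = b(S^0) = c(S^0) = 0.
```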
If we use $\Q$ coefficients, then $H^*(BG; \Q) = \Q[v]$ and $\tH_*^G(X; \Q)$ has an infinite tail of the form $$\xymatrixcolsep{.7pc} \xymatrix{ \dots & \Q & 0 & 0 & 0 & \Q \ar@/^1pc/[llll]^{v} & 0 & 0 & 0 & \dots \ar@/^1pc/[llll]^{v} }$$ For any Abelian group $A$ (in particular, for $A= \Z$ or $\Q$), we define $$\iH_*^G(X; A) := \bigcap_{l \geq 0} \operatorname{\operatorname{image}}\bigl (v^l : \tH_{*+4l}^G(X; A) \To \tH_*^G(X; A) \bigr).$$ Note that $\iH_*^G(X; \Q)$ is supported in degrees congruent to $2t$ mod $4$. We define an analogue of $a(X)$ using $\Q$ coefficients: $$a_{\Q}(X) = \min \{ r \mid \exists \ x,\ 0 \neq x \in {\iH}_r^G(X; \Q) \}.$$ The relationship between $a$ and $a_{\Q}$ can be found via Borel homology over $\Z$, using the universal coefficients theorem. In simple cases we expect that $a = a_{\Q}$, but not so in general. If we inspect the sub-tail consisting of copies of $\Z$ in , we observe two possible things that can “go wrong” towards the end of the sub-tail: First, the tail may end not in $\Z$ but in a torsion group. For example, the last $v$ map in the tail may be a projection $\Z \to \Z/2$. If so, the copy of $\Z/2$ survives in Borel homology with $\F$ coefficients, but not in Borel homology with $\Q$ coefficients, and we get $a(X) < a_{\Q}(X)$. \[ex:kh2\] Consider the quaternionic Hopf fibration $S(\H) \hookrightarrow S(\H^2) \to \hp^1$. Pull back this $S(\H)$-bundle under a degree $2$ map from $\hp^1 \cong S^4$ to itself, and let $Z$ be the total space of the resulting bundle. The group $G \subset S(\H)$ acts freely on $Z$, and the quotient $Q = Z/G$ is an $\rp^2$-bundle on $S^4$. The classifying map $Q \to BG$ induces a map on homology, and, if we identify both $H_4(Q; \Z)$ and $H_4(BG; \Z)$ with $\Z$, this map in degree $4$ is given by multiplication by $\pm 2$. The unreduced suspension $\tZ$ of $Z$ is a space of type $\swf$ at level zero. 
There is a long exact sequence (compare [@swfh Section 2.4]): $$\dots \to H_*(Q; \Z) \to H_*(BG; \Z) \to \tH_*^G(\tZ; \Z) \to \cdots ,$$ from which we deduce that the sub-tail of $\iH_*^G(\tZ; \Z)$ in degrees divisible by $4$ ends with a $\Z/2$ in degree $4$. Consequently, we have $a(\tZ)=4$ but $a_{\Q}(\tZ)=8$. Another thing that can happen at the end of the tail of $\Z$’s in is that the last nonzero $v$ map ends in a $\Z$ summand of $\tH_*^G(X; \Z)$, but this last map has nontrivial cokernel. For example, suppose that the tail ends in a copy of $\Z$ in degree $r \equiv 2t \pmod 4$, with the last $v$ map having cokernel $\Z/2$. Then, the tail of Borel homology with $\Q$ coefficients ends in degree $r$ as well, but the tail with $\F$ coefficients ends in a higher degree $a(X) > a_{\Q}(X)=r$. This situation appears, for instance, for a space that is equivariantly $\H^m$-dual (for some $m$) to the space $\tZ$ from Example \[ex:kh2\]. Nevertheless, in many cases neither of the above two anomalies appears. We have: \[prop:aaQ\] Suppose that $X$ is a space of type $\swf$ at an even level $2t$, such that, for any $r \equiv 2t \! \pmod 4$: (i) The group $\iH_r^G(X;\Z)$ has no $2$-torsion elements, and (ii) There are no elements $x \in \tH_r^G(X; \Z)$ such that $0 \neq 2x \in {\iH}_r^G(X;\Z)$ but $x \not \in {\iH}_r^G(X; \Z)$. Then, we have $a(X) = a_\Q(X)$. This is an application of the universal coefficients theorem. Observe that the assumptions of Proposition \[prop:aaQ\] are satisfied for the spaces $\tG$ and $\tT$ considered in Example \[ex:G\] and Example \[ex:torus\]. Let us now mention how the quantities $a$ and $a_{\Q}$ can be expressed in terms of Borel cohomology rather than Borel homology. When $A = \F$ or $\Q$, the Borel cohomology $\tH_G^*(X; A)$ has a tail similar to the one in Borel homology, except that the arrows increase degree. 
We get $$a(X) = \min \{ r \equiv 2t \! \pmod 4 \mid \exists \ x \in \tH^r_G(X; \F), \ v^l x \neq 0 \text{ for all } l \geq 0 \}$$ and $$\label{eq:aq} a_{\Q}(X) = \min \{ r \mid \exists \ x \in \tH^r_G(X; \Q), \ v^l x \neq 0 \text{ for all } l \geq 0 \}.$$ Equivariant K-theory and Borel cohomology {#sec:kh} ----------------------------------------- Let us now explore the connection between $a_{\Q}(X)$ and the quantity $k(X)$ introduced in Definition \[def:kx\]. We will use the fact that, with $\Q$ coefficients, the Chern character gives an isomorphism between (non-equivariant) K-cohomology and the completion of ordinary cohomology. Recall that $k(X)$ was defined in terms of the ideal $\i(X) \subset R(G)$, which is the image of the restriction map $\tK_G(X) \to \tK_G(X^{S^1}) \cong R(G)$. We also have an interpretation for $a_{\Q}(X)$ in terms of the ideal $\i(X)$: If $X$ is a space of type $\swf$ at an even level $2t$, then $$a_{\Q}(X) = 2t + 4\min \{k \geq 0 \mid \exists \ \lambda \in \Z^*, \mu \in \Z, \ \lambda z^k + \mu w \in \i(X) \}.$$ Let $F$ be the pointed space $(X/X^{S^1})/G$. The inclusion of $X^{S^1}$ into $X$ gives rise to a long exact sequence: $$\label{eq:giraffe} \dots \to \tH^*(F; \Q) \to \tH^*_G(X; \Q) \to \tH^*_G(X^{S^1}; \Q) \xrightarrow{f} \tH^{*+1}(F; \Q) \to \cdots$$ Let us identify $\tH^*_G(X^{S^1}; \Q)$ with $\tH^{*-2t}(BG; \Q)$ using the equivalence $X^{S^1} \sim (\tC^t)^+$. By and exactness, we can write $$\begin{aligned} a_{\Q}(X) &= \min \{ r \mid \exists \ x, 0 \neq x \in \tH^r(BG; \Q), f(x) = 0 \} \\ &= 2t + 4\min \{k \mid f(v^k) = 0\}. \end{aligned}$$ Similarly, we have a long exact sequence in equivariant K-theory: $$\label{eq:zebra} \dots \to \tK(F) \otimes \Q \to \tK_G(X) \otimes \Q \to \tK_G(X^{S^1}) \otimes \Q \xrightarrow{g} \tK^{1}(F) \otimes \Q \to \cdots$$ and we can identify $\tK^*_G(X^{S^1}) \otimes \Q$ with $R(G) \otimes \Q$ by the Bott isomorphism. 
The maps $f$ and $g$ from and are the compositions of the maps in the bottom, resp. top row of the commutative diagram: $$\begin{CD} R(G) \otimes \Q @>>> K(BG) \otimes \Q @>>> \tK^{1}(F)\otimes \Q\\ @. @V{\operatorname{ch}}V{\cong}V @V{\operatorname{ch}}V{\cong}V \\ H^*(BG; \Q) @>>> H^*(BG; \Q)^{\wedge}_{v} @>>> \tH^{\text{odd}}(F; \Q). \end{CD}$$ Here, the first maps in each row are given by completion: for $R(G)$ with respect to the augmentation ideal $\a=(w, z)$, and for the cohomology $H^*(BG; \Q) = \Q[v]$ with respect to the ideal $(v)$. Note that $w \in R(G)$ gets sent to zero under completion over $\Q$, so $K(BG) \otimes \Q \cong R(G)^{\wedge}_{\a} \otimes \Q$ is the power series ring $\Q[[z]]$. The isomorphism in the second column (given by the Chern character) is the map $\Q[[z]] \to \Q[[v]], z \mapsto v$. From the diagram above we find an alternative expression for $a_{\Q}$ in terms of the top row: $$a_{\Q}(X) = 2t + 4\min \{ k \geq 0 \mid \exists \ \epsilon \in \Q, \ g(z^k + \epsilon w) = 0 \}.$$ The conclusion now follows from the exactness of . In view of this proposition, we can compare the quantities $a_{\Q}(X)$ and $k(X)$ simply by inspecting the ideal $\i(X)$. In particular, we have: \[cor:kh\] Suppose that $X$ is a space of type $\swf$ at an even level $2t$, such that $\i(X)$ is of the form $(z^k)$ or $(w^k, z^k)$ for some $k \geq 0$. Then, $$a_{\Q}(X) = 2t + 4k(X) = 2t + 4k.$$ Invariants of homology spheres ------------------------------ Let $Y$ be an integral homology sphere. (The whole discussion here can be extended to rational homology spheres with spin structures, but we restrict to integral homology spheres for simplicity.) In [@swfh Section 3.5], we extracted from the $G$-equivariant Seiberg-Witten Floer homology of $Y$ three numerical invariants $\alpha(Y), \beta(Y), \gamma(Y) \in \Z$. Let us focus on $\alpha(Y)$, which can be expressed as $$\alpha(Y) = \tfrac{1}{2} \min \{ r \equiv 2\mu(Y) \! 
\pmod 4 \mid \exists x, \ 0\neq x \in {\iswfh}_r^G(Y; \F) \},$$ where $ {\iswfh}_r^G(Y; \F)$ is the “infinite tail” of $\swfh_r^G(Y; \F)$, and $\mu(Y)$ is the Rokhlin invariant. More concretely, if $g$ is a metric on $Y$ and $\nu \gg 0$ an eigenvalue cut-off as in Section \[sec:fda\], we have: $$\alpha(Y) = (a(I_{\nu}) - \dim V^0_{-\nu})/2 - n(Y, g).$$ We can define a similar invariant using coefficients in $\Q$: $$\alpha_{\Q}(Y) = (a_{\Q}(I_{\nu}) - \dim V^0_{-\nu})/2 - n(Y, g).$$ Next, recall from Section \[sec:k\] that we have the Floer K-theoretic invariant $$\kappa(Y) = 2k(I_{\nu}) - \bigl( \dim_{\R} V^0_{-\nu}(\H) \bigr) /2 - n(Y, g).$$ \[prop:Qswf\] Let $Y$ be a homology sphere. $(a)$ Suppose that, for any $r \equiv 2\mu(Y) \! \pmod 4$, the group ${\iswfh}_r^G(Y;\Z)$ has no $2$-torsion elements, and that there are no elements $x \in \swfh_r^G(Y; \Z)$ such that $0 \neq 2x \in {\iswfh}_r^G(Y;\Z)$ but $x \not \in {\iswfh}_r^G(Y; \Z)$. Then, we have $\alpha(Y) = \alpha_\Q(Y)$. $(b)$ Let $g$ be a metric on $Y$. If for all $\nu \gg 0$, either the ideal $\i(I_\nu)$ or $\i(\Sigma^{\tR} I_{\nu})$ (whichever is well-defined, depending on the parity of the level of the Conley index $I_{\nu}$) is of one of the types $(z^k)$ or $(w^k, z^k)$ for some $k \geq 0$, then $\alpha_{\Q}(Y)= \kappa(Y)$. Part (a) follows from Proposition \[prop:aaQ\]. Part (b) follows from Corollary \[cor:kh\], using the fact that the level of $I_{\nu}$ is $\dim V^0_{-\nu}(\tR)$. Note that all the examples considered in Sections \[sec:psc\] and  \[sec:brieskorn\] satisfy the hypotheses in both parts of Proposition \[prop:Qswf\]. Hence, for those manifolds $Y$ we have $\alpha(Y) = \alpha_{\Q}(Y) = \kappa(Y)$. We expect that this fails in more complicated examples. [^1]: The author was supported by NSF grant DMS-1104406. 
[^2]: The Wirthmüller isomorphism [@Wirthmuller2; @LMS] is usually formulated in equivariant stable homotopy theory built on a complete universe; that is, by allowing suspensions by arbitrary representations of $G$. In our setting, we only use the representations $\R, \tR$ and $\H$. Nevertheless, what is essential is that we can embed $G$ in one of these representations, in our case $\H$. The Thom space of its normal bundle is then $\Sigma^{3\R} G_+$, which shows that $G_+$ and $\Sigma^{3\R} G_+$ are $\H$-dual. After suspending by $\R$, we get that $G_+$ and $\Sigma^{4\R} G_+ \cong \Sigma^{\H} G_+$ are $(\R \oplus \H)$-dual, which shows the Wirthmüller isomorphism explicitly. [^3]: Orbifolds were first introduced by Satake [@Satake] under the name of V-manifolds. The term V-manifold is used in some of the literature, e.g., in [@FukumotoFuruta] and [@SavelievFF].
Accounting for Incompleteness due to Transit Multiplicity in *Kepler* Planet Occurrence Rates
=============================================================================================

Jon K. Zink¹, Jessie L. Christiansen², and Bradley M. S. Hansen¹

¹Mani L. Bhaumik Institute for Theoretical Physics, Department of Physics and Astronomy, University of California, Los Angeles, CA 90095

²NASA Exoplanet Science Institute, California Institute of Technology, Pasadena, CA 91106

E-mail: jzink@astro.ucla.edu

Last updated 2018 December 18

###### Abstract

We investigate the role that planet detection order plays in the *Kepler* planet detection pipeline. The *Kepler* pipeline typically detects planets in order of descending signal strength (MES). We find that the detectability of transits experiences an additional 5.5% and 15.9% efficiency loss, for periods <200 days and >200 days respectively, when detected after the strongest-signal transit in a multiple-planet system. We provide a method for determining the transit probability for multiple-planet systems by marginalizing over the empirical *Kepler* dataset. Furthermore, because detection efficiency appears to be a function of detection order, we discuss the sorting statistics that affect the radius and period distributions of each detection order. Our occurrence rate dataset includes radius measurement updates from the California Kepler Survey (CKS), *Gaia* DR2, and asteroseismology. Our population model is consistent with the results of Burke et al. ([2015](#bib.bib7)), but now includes an improved estimate of the multiplicity distribution. From our obtained model parameters, we find an excess of single-planet systems of only 4.0 ± 4.6% among solar-like GK dwarfs. 
This excess is smaller than found in prior studies and can be well modeled with a modified Poisson distribution, suggesting that the *Kepler* Dichotomy can be accounted for by including the effects of multiplicity on detection efficiency. Using our modified Poisson model, we estimate an average of 5.86 ± 0.18 planets per GK dwarf within the radius and period parameter space of *Kepler*.

###### keywords: methods: data analysis – planets and satellites: fundamental parameters

1 Introduction
--------------

The *Kepler* mission has revolutionized our understanding of the frequencies and properties of planets around Sun-like stars. With the final data release DR25 providing all of the data up until the failure of two reaction wheels (Mathur et al., [2017](#bib.bib37)), the primary phase of the project has officially concluded. Within this span, *Kepler* has provided evidence for ≈4,500 transiting exoplanets (NASA Exoplanet Archive: https://exoplanetarchive.ipac.caltech.edu). Nearly 50% of these candidates have been confirmed or validated (Rowe et al., [2014](#bib.bib51); Morton et al., [2016](#bib.bib40)), demonstrating that planets are common and widespread in the Milky Way. 
There have been many attempts to quantify the frequency of planetary systems and the properties (radius and orbital period) of the planets themselves (Borucki et al., [2011](#bib.bib6); Catanzarite & Shao, [2011](#bib.bib11); Youdin, [2011](#bib.bib64); Howard et al., [2012](#bib.bib30); Batalha et al., [2013](#bib.bib2); Fressin et al., [2013](#bib.bib25); Petigura et al., [2013a](#bib.bib46); Dong & Zhu, [2013](#bib.bib21); Dressing & Charbonneau, [2013](#bib.bib19); Mullally et al., [2015](#bib.bib42); Dressing & Charbonneau, [2015](#bib.bib20); Burke et al., [2015](#bib.bib7); Mulders, Pascucci & Apai, [2015](#bib.bib43); Silburt et al., [2015](#bib.bib54)), with special attention given to characterizing the frequency of planets with Earth-like properties. One of the most challenging aspects of estimating these occurrence rates is understanding the completeness of the known exoplanet sample. The automation provided by the *Kepler* pipeline has produced a systematic method of detecting transiting exoplanets and thus offers the prospect of a rigorous determination of the survey completeness. With ∼3.5 years of nearly continuous light curves of ∼200,000 stars, it is possible to investigate period ranges out to 500 days. Furthermore, the high photometric precision of the *Kepler* detector has permitted the discovery of planets with radii *r* < 1 *r*⊕. Since the completion of the *Kepler* survey, several studies have used this data set to extract population parameters. Petigura et al. ([2013a](#bib.bib46)), using their own *TERRA* pipeline, implemented an *Inverse Detection Method*, in which the population CDF (Cumulative Distribution Function) is divided by the detection efficiency. This study also introduced the idea of synthetic planet injections into the *Kepler* light curves to map completeness. 
Here, artificial transits were injected into the *Kepler* light curves, and the recovery fraction in the *TERRA* pipeline was used to understand the *Kepler* detection efficiency. To avoid confusion from multiple planet transits, the Petigura et al. ([2013a](#bib.bib46)) occurrence rate calculation only included the highest-SNR (Signal to Noise Ratio) planet in each system, ignoring any multiplicity. To characterize the official *Kepler* completeness, Christiansen et al. ([2015](#bib.bib15)) performed a pixel-level transit injection test to empirically measure how well the pipeline would detect various types of planets. This is discussed in more detail in Section [4](#S4 "4 Injection Recovery ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates"). The results of this study were then used by Burke et al. ([2015](#bib.bib7)) to perform a *Poisson Process Analysis*, in which a Bayesian framework is implemented to determine the best population model parameters. The current work employs a similar method. Planet multiplicity introduces detection biases above and beyond those to which single-transit systems are subject. When faced with a system of multiple transiting planets, the *Kepler* pipeline will typically find the largest MES (Multiple Event Statistic; comparable to SNR) signal, fit the transit function, and then discard the corresponding data points. The width of discarded data is 3× the transit duration, with 1.5× removed on each side of the transit center. Very few TTVs (Transit-Timing Variations) are large enough to escape this window. Such deletion is necessary to avoid confusion when looking for additional planets, but introduces data gaps into the light curve, as noted by Schmitt et al. ([2017](#bib.bib52)). These gaps become more invasive in higher-multiplicity systems, where significant data are discarded. With each planet removed, the available data set shrinks. 
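The data-removal step described above can be sketched as follows. This is our own minimal illustration, not the actual *Kepler* pipeline code; the function and variable names are hypothetical, and we assume evenly sampled data and a linear ephemeris:

```python
import numpy as np

def mask_transits(time, period, t0, duration):
    """Flag points within 1.5 transit durations of each transit center,
    i.e. a removed window of 3x the transit duration per transit."""
    # Phase-fold so that every transit center sits at phase 0.
    phase = np.mod(time - t0 + 0.5 * period, period) - 0.5 * period
    return np.abs(phase) <= 1.5 * duration

# 90 days of evenly sampled data; one detected planet with a 10-day period
t = np.linspace(0.0, 90.0, 9000)
removed = mask_transits(t, period=10.0, t0=0.0, duration=0.2)
remaining = t[~removed]  # the light curve passed to the next search
```

Repeating this for each detected planet in turn removes roughly 3 × (duration/period) of the remaining data per planet, which is the cumulative "hole-punching" effect discussed here.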
This effect creates “swiss cheese”-like holes in the light curves, where the number of holes increases with each detected planet. Beyond possible gaps in the light curve, the *Kepler* pipeline fails to detect some short-period planets because of a harmonic fitting function (Christiansen et al., [2013](#bib.bib14)). Here the pipeline attempts to remove sinusoidal variations in the light curve caused by stellar activity, but in doing so, the procedure can overfit a true planet signal and make low-SNR planets difficult to detect. To clarify, the baseline wobble of the dataset is removed using a spline smoothing function, while the harmonic fitting function specifically targets sinusoidal variations in the light curve. The harmonic fitter may or may not be applied, depending on whether the pipeline is able to detect such periodic variation in the light curve. In multiple-planet systems, the harmonic fitter can also overfit the periodic variations caused by transits and remove true signals. Because the pipeline follows these procedures, a planet's detection order can affect its detectability. Our goal in this paper is to assess the effect of planet multiplicity and detection order on the completeness of the *Kepler* results. In Sections [2](#S2 "2 Stellar Selection ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates") and [3](#S3 "3 Planet Selection ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates") we describe our methods of stellar and planet selection. In Section [4](#S4 "4 Injection Recovery ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates") we show that detection order affects the detection efficiency for a given planet. In Section [5](#S5 "5 Effects of mutual inclination ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates") we describe how we account for mutual inclination within this study. 
In Section [6](#S6 "6 Detection Efficiency Grid ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates"), we lay out our process of accounting for overall detection efficiency. In Section [7](#S7 "7 The Likelihood Function ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates") we present our expanded likelihood function used to calculate the posterior for the population parameters. In Section [8](#S8 "8 Discussion ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates"), we discuss the results of our fitting method and the implications of our multiplicity parameters. We provide concluding remarks in Section [9](#S9 "9 Conclusion ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates").

2 Stellar Selection
-------------------

Using the final release of *Kepler* data (DR25), which includes Q1–Q17, we select a stellar sample for use in creating a detection efficiency map that accounts for *Kepler* completeness. We use the stellar parameters provided by Mathur et al. ([2017](#bib.bib37)) with improved radius values derived from *Gaia* DR2 (Berger et al., [2018](#bib.bib4)). The updates from *Gaia* DR2 have yet to provide updated corresponding mass values; thus we must still utilize the *Kepler* DR25 stellar mass parameters (200,038 stars in total). To focus on the occurrence of planets around solar-like GK dwarfs, we only include stars with 4200 K < *T*eff < 6100 K (135,494 stars remain). It is also important for completeness mapping that each star has a stellar radius and mass measurement available; “null” values for either of these fields result in omission (133,056 stars remain). To avoid the inclusion of giants, we limit the sample to log(*g*) ≥ 4 and *R*⋆ ≤ 2 *R*⊙ (96,167 stars remain). 
We also place requirements on the duty cycle (*f*duty) and the time span of the light curve (*dataspan*): *f*duty > 0.6 and *dataspan* > 2 years (86,679 stars remain). The *f*duty limit requires that at least 60% of the *dataspan* has collected data. This ensures that a significant portion of the light curve is filled, while still including stars lost in the Q4 CCD loss (Batalha et al., [2013](#bib.bib2)). Time-varying noise measurements have been provided in the DR25 dataset through a value known as CDPP (Combined Differential Photometric Precision; Christiansen et al. [2012](#bib.bib13)). This parameter has been calculated for every field star over 14 different time periods: 1.5, 2.0, 2.5, 3.0, 3.5, 4.5, 5.0, 6.0, 7.5, 9.0, 10.5, 12.0, 12.5, and 15.0 hours (Mathur et al., [2017](#bib.bib37)). These values correspond to the amount of noise a planet signal must exceed, for a given transit duration, to generate a 1σ detection. By requiring stars to have CDPP(7.5 hr) < 1000 ppm, we minimize the inclusion of stellar and instrumental fluctuations (74 stars exceed this limit). From this we produce a stellar sample of 86,605 solar-like stars.

Figure 1: The smoothed recovery fraction at each MES bin. The vertical lines (light blue and red) represent the uncertainty in each bin under the assumption of a binomial distribution. The bin values are plotted at the center of each bin. The solid lines (dark blue and red) represent the Γ_CDF distribution fit. The parameters of this model were fit using a χ² minimization.

3 Planet Selection
------------------

When available, we utilize the updated planetary parameters provided by the California *Kepler* Survey (CKS) (Petigura et al., [2017](#bib.bib48); Johnson et al., [2017](#bib.bib34)) and the asteroseismic updates provided by Van Eylen et al. ([2018a](#bib.bib59)). 
One of the main advantages of including these updates is the improved planet radius measurements. Since our study, like others, does not account for parameter uncertainty, such improvements are essential for accurate occurrence rates. Where CKS and asteroseismic data are unavailable, the measurements provided by the *Kepler* DR25 catalog (Thompson et al., [2018](#bib.bib56)), in conjunction with the *Gaia* DR2 radius updates (Berger et al., [2018](#bib.bib4)), are implemented. Through private communication, it was indicated that this early release of *Gaia* data may contain some planet radius outliers. To combat this issue, we test the radius values against the *Kepler* DR25 catalog. When the updated *Gaia* measurements differ from the *Kepler* DR25 data by >3σ, we utilize the *Kepler* DR25 radius measurements. Overall, 19 planets exceed this outlier limit (statistically, we would expect only 8). All period measurements are drawn from the light curves; thus, improved measurements from *Gaia* and CKS have no effect on the inferred periods. We use the periods provided in the *Kepler* DR25 catalogs. Both the CKS and *Kepler* DR25 provide flags for false positives. We include data from both CONFIRMED and CANDIDATE planets in DR25 and from planets with CKS_fp = FALSE in the CKS update. To further avoid contamination from false positives, we only include planets with periods 0.5 < *p* < 500 days and radii 0.5 < *r* < 16 *r*⊕. Periods beyond 500 days have been noted to be highly contaminated by false positives because they barely meet the three-transit limit of the pipeline (Mullally et al., [2015](#bib.bib42)). Our period and radius ranges exceed the conservative cutoffs adopted by many previous studies, but this is necessary when exploring the effects of multiplicity. Planetary systems often span the entire *Kepler* parameter space; thus, the inclusion of nearly all planets is needed for an accurate calculation. 
There exist 3 multi-planet systems (KIC: 3231341, 11122894, and 11709124) in which one planet falls beyond the range of this study. We only select the planets from these systems that lie within our radius and period cuts. The inclusion of these planets is useful in providing a stronger statistical argument. Although some of the known planets in these 3 systems extend beyond the bounds of this study, we expect many other systems within the dataset to contain planets beyond the range of our selection bounds. Furthermore, if we include the planets that lie beyond our radius and period cuts, our analysis will artificially inflate the number of inferred planets within this range. The accuracy of the *Kepler* detection order (“TCE Planet Number”) can be affected by systems with existing false positives. When removing these data points, we manually ensure that the detection order only reflects the order in which valid KOIs (*Kepler* Objects of Interest) are detected. For example, a system with 5 “real” KOIs and 1 false positive would have detection orders ranging from 1–5 regardless of the order in which the false positive was detected. It should be noted that these false positives do create cuts in the data, similar to those of a planet, and therefore affect the detection order. However, without reordering these systems, we would artificially inflate our multiplicity calculation in Section [7](#S7 "7 The Likelihood Function ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates"). Higher multiplicities are especially sensitive to mild increases, as their detection probabilities are very low. As discussed further in Section [4](#S4 "4 Injection Recovery ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates"), we use the same detection efficiency for all planets found after the first detected planet; thus only planets artificially re-assigned to order 1 are of concern. 
Since most false positives provide relatively weak signals, only 14 systems experience this artificial re-ordering. After making the discussed cuts, we find that the highest detection order existing in the parameter space is 7. This means that the highest system multiplicity we consider in this study is a 7-planet system. We find that 3062 KOIs meet the indicated period and radius requirements. It has been suggested that gas giants eject companion planets while migrating inward (Beaugé & Nesvorný, [2012](#bib.bib3)); their large Hill radius forces companion orbits to become unstable as the Hill radius ratio falls below 10. These hot Jupiters create an independent population of single-planet systems (Steffen et al., [2012](#bib.bib55)). If it forms via a distinct channel, this population can skew the inferred distribution of the model for the generic underlying population. To minimize such contamination, we remove all single-planet systems with *r* > 6.7 *r*⊕, as indicated by Steffen et al. ([2012](#bib.bib55)). Further evidence for this independent population was provided by Johansen et al. ([2012](#bib.bib33)), who showed that multi-planet systems with one planet of mass >0.1 Jupiter masses are dynamically unstable on short timescales. This 0.1 Jupiter-mass limit roughly corresponds to the *r* = 6.7 *r*⊕ limit used here. We find that 120 of these single hot Jupiters exist in the dataset, leaving us with 2942 KOIs that fit all the parameter requirements described. Our final catalog of planets and their corresponding parameters can be found online (https://github.com/jonzink/ExoMult).

Table 1: The Γ function parameters used to fit the recovery CDF displayed in Figure [1](#S2.F1 "Figure 1 ‣ 2 Stellar Selection ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates"). 
| Period Range | Maximum Detection (*c*) | Shape (*a*) | Scale (*b*) | Offset (*x*₀) |
|----------------------------|-------------------------|-------------|-------------|---------------|
| Γ_CDF, *m* = 1 | | | | |
| 0.5 < *p* < 200 days | 0.9825 | 29.3363 | 0.2856 | 0.0102 |
| 200 < *p* < 500 days | 0.9051 | 18.4119 | 0.3959 | 1.0984 |
| Γ_CDF, *m* ≥ 2 | | | | |
| 0.5 < *p* < 200 days | 0.9276 | 21.3265 | 0.4203 | 0.0093 |
| 200 < *p* < 500 days | 0.7456 | 5.5213 | 1.2307 | 2.9774 |

4 Injection Recovery
--------------------

Here we shall discuss how we account for the detection efficiency as a function of detection order. Christiansen ([2017](#bib.bib16)) injected artificial planet signals into the calibrated pixels of each of the *Kepler* field stars and processed the altered light curves with the standard detection pipeline. This allows the recovery fraction to be assessed, producing a probability function based on transit MES (Multiple Event Statistic; a detailed description of MES can be found in equation [14](#S6.E14 "(14) ‣ 6.1 Probability of Detection for = m 1 ‣ 6 Detection Efficiency Grid ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates")). A Γ_CDF (Cumulative Distribution Function) was fit to the empirical probability of recovery, of the form:

$$\Gamma_{CDF}(MES) = \frac{c}{b^{a}\,(a-1)!}\int_{0}^{MES}(x-x_{0})^{a-1}\,e^{-(x-x_{0})/b}\,dx$$ (1)

The purpose of this test was to establish an average detection efficiency function for the *Kepler* pipeline as determined by the properties of the target star sample. Therefore planet detection order was not considered. However, many of the target stars are known to host real KOIs, and these signals will remain in the Christiansen ([2017](#bib.bib16)) analysis. This provides an opportunity to consider the effects of detection order on recovery. 
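Since the shape parameter *a* in Table 1 is non-integer, (*a* − 1)! in Equation (1) should be read as the gamma function Γ(*a*), and the integral is then a regularized lower incomplete gamma function. A sketch of evaluating the fitted efficiency curve follows (our own helper names, assuming scipy is available), using the *m* = 1, 0.5 < *p* < 200 day row of Table 1:

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def gamma_cdf(mes, c, a, b, x0):
    """Equation (1): c * P(a, (MES - x0) / b), clipped to zero below x0."""
    z = np.maximum(np.asarray(mes, dtype=float) - x0, 0.0) / b
    return c * gammainc(a, z)

# m = 1, 0.5 < p < 200 day parameters from Table 1
def eff_m1_short(mes):
    return gamma_cdf(mes, c=0.9825, a=29.3363, b=0.2856, x0=0.0102)
```

The curve rises from zero near the MES threshold and saturates at the maximum detection value *c*; plugging in the *m* ≥ 2 rows of Table 1 instead yields a lower, shallower curve, reflecting the additional efficiency loss for later detection orders.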
Here we define detection order by the variable *m*, where *m* = 1 indicates the first planet discovered in the system (i.e., the highest MES). Likewise, *m* = 2 and *m* = 3 correspond to the second and third planets found by the *Kepler* pipeline. The highest detection order existing in the parameter space is 7, thus we shall work in the range *m* = 1 : 7. We split the injections from Christiansen ([2017](#bib.bib16)) into two period ranges: 0.5 < *p* < 200 days and 200 < *p* < 500 days. The break at 200 days was selected by testing different values; beyond 200 days, we find that the distributions begin to change significantly. To focus on the relevant parameter space of our study, we remove all injections with periods beyond 500 days and only consider stars within 4200 K < *T*eff < 6100 K and log(*g*) ≥ 4. Because the goal of the original Christiansen ([2017](#bib.bib16)) experiment was to find an overall detection probability, only one artificial signal was injected into each light curve, with a radius and period uniformly sampled from 0.25–7 *r*⊕ and 0.25–500 days, respectively. To understand the effect of multiple planets, we therefore need to investigate systems with existing transit signals in the light curve. Over 30,000 unique signals are found within the *Kepler* data pipeline. Although most of these were later deemed false positives by external checks, the pipeline treats them no differently than an actual planet; it is even very likely that some of them are in fact “real” planets. Therefore, injections in these systems are subject to the same systematic issues as an actual multiple-planet system. This offers a far greater number of *m* ≥ 2 injections than those provided by the KOI list alone. Among the systems with injected signals, we find 2,099 *m* ≥ 2 systems for 200 < *p* < 500 days and 1,579 *m* ≥ 2 systems for 0.5 < *p* < 200 days. 
We also separated the injections into *m* ≥ 3, but these data are extremely limited and cannot produce meaningful results without further injections. Thus, we shall focus only on *m* = 1 and *m* ≥ 2 systems. The data are then binned in MES and the recovery fraction is determined in each binned region of MES space. Because the available data are relatively few compared to the original number of primary injections (31,302 for 200 < *p* < 500 days and 29,083 for 0.5 < *p* < 200 days), a smoothing technique is utilized. The bin width is set to 2 MES, but instead of moving each bin by steps of width 2, the bins are recalculated at steps of 0.01 MES. This produces 800 data points across a parameter space of 0–16 MES. Utilizing this technique avoids artifacts produced when binning smaller data samples. One issue that can arise from such smoothing is an artificial distribution skew. Acknowledging this possibility, we have tested various bin widths while smoothing and find little deviation from the results with the adopted binning. Since each injection within a bin can have two possible outcomes, a detection or a failed detection, the distribution within each bin follows a binomial model, where the number of trials corresponds to the number of injections within the bin. Thus, the uncertainty for each bin is calculated assuming a binomial distribution. The recovery CDF is then fit with a 4-parameter Γ distribution using a χ² minimization. The results of the fit can be seen in Table [1](#S3.T1 "Table 1 ‣ 3 Planet Selection ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates") and Figure [1](#S2.F1 "Figure 1 ‣ 2 Stellar Selection ‣ Accounting for Incompleteness due to Transit Multiplicity in Kepler Planet Occurrence Rates"). One of the main motivations for creating these additional detection efficiency curves was a preliminary search of the results of the Christiansen ([2017](#bib.bib16)) injection test. 
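The overlapping-bin smoothing described above can be sketched as follows; this is our own illustrative implementation (the exact binning conventions in the paper may differ), demonstrated on a synthetic step-function recovery pattern:

```python
import numpy as np

def sliding_recovery_fraction(mes, recovered, width=2.0, step=0.01,
                              lo=0.0, hi=16.0):
    """Recovery fraction in overlapping MES bins of fixed width,
    recomputed at fine steps, with binomial sqrt(p(1-p)/n) errors."""
    centers = np.arange(lo + width / 2, hi - width / 2 + step / 2, step)
    frac = np.full(centers.size, np.nan)
    err = np.full(centers.size, np.nan)
    for k, c in enumerate(centers):
        inbin = (mes >= c - width / 2) & (mes < c + width / 2)
        n = inbin.sum()
        if n > 0:
            p = recovered[inbin].mean()
            frac[k] = p
            err[k] = np.sqrt(p * (1.0 - p) / n)
    return centers, frac, err

# synthetic check: a sharp recovery step at MES = 8
rng = np.random.default_rng(0)
mes = rng.uniform(0.0, 16.0, 20000)
recovered = (mes > 8.0).astype(float)
centers, frac, err = sliding_recovery_fraction(mes, recovered)
```

With a bin width of 2 and a step of 0.01, each injection contributes to about 200 overlapping bins, which smooths the recovery curve without altering the underlying binomial statistics of any single bin.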
This search showed that 61 previously detected KOIs were lost when the injection of additional planets was made. Thirty-nine of these KOI planets had a low “Disposition Score”, indicating that small perturbations to the light curve could easily disrupt their detectability. One KOI was lost because of transit interference, where a higher-MES injection with overlapping transits caused some of the transits of the weaker-signal planet to be missed. Twenty-one systems indicated that the harmonic fitting function was triggered when the injection was made, likely overfitting the transits themselves. Ten of these light curves had no detections of planets at all: both the injection and the KOI were missed when the artificial planet was placed into the system. This indicates that multiple-planet systems are subject to additional detection biases not experienced by single-planet systems.

5 Effects of mutual inclination
-------------------------------

Here, we shall discuss how the effects of mutual inclination are handled within our model. The initial recovery study (Christiansen, [2017](#bib.bib16)) was performed without consideration of higher-multiplicity planets; thus, there was no accounting for mutual inclination. The artificial planets were injected with a random impact parameter (*b*) from 0 to 1. To understand the effects of mutual inclination on detection efficiency, we look at the difference of impact parameters (Δ*b*) for recovered planet systems. Δ*b* is calculated as the difference between the impact parameters of the artificial planet and the largest-MES KOI in each system. Since an existing KOI is required for this test, we only look at systems with known planets. We find that the Δ*b* values of detected planets do not differ significantly from the differences of two randomly drawn populations of *b* values. 
Because the artificial planets were injected with uniformly drawn impact parameters, we conclude that $\Delta b$, and therefore mutual inclination, plays an insignificant role in detection efficiency. However, larger mutual inclinations can cause certain planets to geometrically avoid transit completely.

### 5.1 Transit Probability

Analytic models of transit probability have been found for double-transit systems as a function of mutual inclination (Ragozzine & Holman, [2010](#bib.bib49)). However, larger multiplicity systems are more difficult and require semi-analytic models (Brakensiek & Ragozzine, [2016](#bib.bib10)). In order to simplify our calculation, we simulate various semi-major axis to stellar radius ratios ($a_p/R_\star$) and look at 10⁶ lines of sight to predict the probability of transit. To determine the period population we need a function for the *m*-transit probability at some semi-major axis value ($a_p$). In order to create a function for the probability of transit in addition to *m* − 1 other transits, we must know the distributions of exoplanet periods. Clearly, this argument is circular in nature. We deal with this issue by using a non-uniform method of sampling from the empirical period population. This is performed for detection orders *m* = 2 : 7, since the analytic probability ($R_\star/a_p$) is sufficient for m=1. To establish the desired detection order, the required number of planets is drawn from the empirical *Kepler* period data. For example, when looking at the case of m=3, ($a_p/R_\star$) is selected and then the two additional planets are drawn from the known *Kepler* period sample. The periods of the additional two planets are redrawn at each line of sight; this is equivalent to marginalizing the additional two planets over the *Kepler* period population. In order to properly account for the transit probability of higher detection orders, we need to know the unbiased underlying population of periods.
To approximate this, we sample the empirical distribution of *Kepler* planet periods, weighted with a probability ∝ *p*^(2/3). This is done to account for the geometric bias against the detection of longer period planets. To account for the mutual inclination between orbits, we follow the $\sigma_\sigma$ distribution provided by Fang & Margot ([2012](#bib.bib22)). This mild distribution (⟨*σ*⟩ = 1.6°) was found by looking at the impact parameter ratios within *Kepler* systems. Once all orbits have been selected, the number of lines of sight where all planets transit is divided by 10⁶ to establish the transit probability. To determine whether a planet is transiting, the following condition must be satisfied:

$$\begin{array}{r}
\cos(i)\cos(\omega) - \sin(i)\sin(\omega) \geq R_{\star}/a_{p} \\
\text{or } \sin(i)\sin(\omega) - \cos(i)\cos(\omega) \leq R_{\star}/a_{p} \\
\end{array}$$ (2)

where *i* is the inclination of the system and *ω* is the ascending node. Each line of sight is drawn uniformly over sin(*i*) and the nodes of each orbit are also drawn uniformly over sin(*ω*). For nodes between planets within the same system, we sample uniformly over sin(*Δω*). We note that Equation [2](#S5.E2) is only valid for circular orbits; consideration of eccentric orbits is presented in Section [8.5](#S8.SS5). To avoid the creation of unstable systems, we check the planet separations ($|a_{p2} - a_{p1}|$). If any separation is < 10% of the semi-major axis of the outer planet, we resample the entire system. This process is repeated until no separations fall below the 10% threshold.
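A minimal Monte Carlo sketch of this line-of-sight counting is given below. For simplicity it draws each planet's tilt from a single Rayleigh distribution (an assumption standing in for the Fang & Margot model) and omits the period resampling and stability checks:

```python
import numpy as np

def multi_transit_prob(a_over_r, sigma_deg=1.6, n_los=200_000, seed=1):
    """Fraction of random lines of sight from which *every* planet transits.

    a_over_r  : sequence of a_p / R_star values, one per planet.
    sigma_deg : Rayleigh scale of the mutual-inclination tilt (assumption).
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(a_over_r, dtype=float)
    cos_i = rng.uniform(0.0, 1.0, n_los)          # isotropic viewing angles
    i_ref = np.arccos(cos_i)                      # reference-plane inclination
    # Tilt of each planet away from the reference plane, random sign.
    tilt = rng.rayleigh(np.deg2rad(sigma_deg), (n_los, a.size))
    sign = rng.choice([-1.0, 1.0], size=(n_los, a.size))
    i_pl = i_ref[:, None] + sign * tilt
    # A planet transits when |cos(i)| <= R_star / a_p (circular orbits).
    transits = np.abs(np.cos(i_pl)) <= 1.0 / a[None, :]
    return transits.all(axis=1).mean()
```

For a single planet this recovers the analytic $R_\star/a_p$; adding planets lowers the joint probability, though by far less than the product of independent probabilities, because the orbits are nearly coplanar.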
Although mutual Hill radius would provide a better measure of stability, our metric requires no assumptions about the masses of the planets. Furthermore, we find that changing (or removing) this threshold makes little difference to the probabilities calculated, indicating that stability accounting has little effect statistically. The results of this simulation can be seen in Figure [2](#S6.F2). It is worth noting that Equation [2](#S5.E2) does not account for grazing transits. To properly account for this, $R_\star$ must become $R_\star \pm r$, where *r* is the radius of the transiting planet. Using a uniform distribution of *r* values from 0.5 *r*⊕ to 16 *r*⊕, we find that grazing transits provide an increase of 0.2% to the overall transit probability. However, this uniform distribution is weighted far more heavily towards large planets than the underlying planet radius distribution, thus we expect the true correction to be much smaller. To properly account for grazing transits one must have some understanding of the underlying radius population; any attempt to do so here would add more uncertainty to the calculation while providing a very minimal correction. Thus, we ignore such complications here.

6 Detection Efficiency Grid
---------------------------

To represent the *Kepler* survey detection efficiency, a grid is created in period and radius space. Both log₁₀*p* and log₁₀*r* are divided into 100 bins, creating 10,000 regions of the parameter space. For every region, periods and radii are uniformly sampled in log space, and all 86,605 stars are assigned *m* planets based on the detection order of interest.
For example, in the detection grid for the first transiting planet (m=1), the probability of detecting at least one planet is calculated at each bin. Similarly for m=2, we calculate the probability of detecting at least one planet at each bin in addition to finding another planet in some other arbitrary bin. The average detection probability for each region is calculated using these planetary assignments and the procedures provided in the next sections ([6.1](#S6.SS1); [6.2](#S6.SS2)). This process is then repeated for each of the 10,000 regions. We calculated 7 detection efficiency grids: first planet probability (m=1), second planet probability (m=2), …, and the seventh planet probability (m=7). This procedure is similar to that of Burke et al. ([2015](#bib.bib7)) and Traub ([2016](#bib.bib57)), but now with 7 different detection order grids.

Figure 2: The probability of transit for high multiplicity systems using the Fang & Margot ([2012](#bib.bib22)) mutual inclination model. The solid black line represents the probability function used for an *m* = 1 planet transit ($R_\star/a_p$). A machine-readable version of this data is available online at https://github.com/jonzink/ExoMult.

Figure 3: The detection efficiency maps for m=1:4 exoplanet discovery orders. The color map represents log₁₀(Detection Probability). The fading of color across detection order (m) shows the decreasing detection probability. A machine-readable version of this data is available online at https://github.com/jonzink/ExoMult.
### 6.1 Probability of Detection for *m* = 1

We begin with the formula for the detection of the first planet and then discuss the modifications made for the detection of higher order systems. In our base model we assume all planets have perfectly circular orbits and consider the effects of eccentricity in Section [8.5](#S8.SS5). This assumption of little or no eccentricity is reasonable for the typical multiple systems sampled by *Kepler*, where non-circular orbits would result in unstable system architectures. To account for the geometric probability of transit we use:

$$P_{tr} = \frac{R_{\star}}{a_{p}}$$ (3)

where $R_\star$ is the radius of the star and $a_p$ is the semi-major axis of the planet orbit. The chord along which the planet transits across the stellar host is given by

$$f_{tr} = \sqrt{1 - b^{2}}$$ (4)

where *b* is the impact parameter of the planet transit; *b* is assigned by uniformly sampling between 0 and 1 for each planet. The duration of the transit can be calculated as

$$t_{dur} = \frac{R_{\star} f_{tr}}{a_{p}\,\pi}\left(\frac{p}{1\,\text{day}}\right) \times 24\,\text{hr}$$ (5)

where *p* is the orbital period of the planet. The expected number of transits can be found with

$$n_{tr} = \frac{data_{span}}{p}$$ (6)

where $data_{span}$ is the span of the data within the *Kepler* survey. Because of various shutdowns and data downloads throughout the *Kepler* mission, it is possible that some of the transits may have been missed. To account for the probability of the transit occurring in the window of the *Kepler* mission we adopt the window function provided by Burke et al. ([2015](#bib.bib7)).
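Equations (3)-(6) can be evaluated directly. The sketch below uses an illustrative solar-type star with a 30-day planet; these specific input values are my own, not the paper's:

```python
import math

# Illustrative inputs (assumed): a 1 R_sun star and a p = 30 day planet.
R_star = 0.00465                      # stellar radius in AU (1 R_sun)
p = 30.0                              # orbital period in days
a_p = (p / 365.25) ** (2.0 / 3.0)     # semi-major axis in AU (Kepler's 3rd law, 1 M_sun)
b = 0.3                               # impact parameter, drawn uniformly in [0, 1]

P_tr = R_star / a_p                                  # eq. (3): geometric transit probability
f_tr = math.sqrt(1.0 - b ** 2)                       # eq. (4): transit chord fraction
t_dur = R_star * f_tr / (a_p * math.pi) * p * 24.0   # eq. (5): duration in hours
n_tr = 1458.931 / p                                  # eq. (6): expected transit count
```

For this configuration the transit probability is about 2.5% and the duration a little over five hours, in line with expectations for a warm planet around a Sun-like star.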
$$j = \frac{data_{span}}{p}$$ (7)

$$\begin{array}{r}
P_{win} = 1 - (1 - duty)^{j} - j\,duty\,(1 - duty)^{j - 1} \\
- \frac{j(j - 1)\,duty^{2}(1 - duty)^{j - 2}}{2} \\
\end{array}$$ (8)

where *duty* is the duty fraction of the targeted stellar source. The *Kepler* pipeline requires a minimum of 3 transits for candidate consideration; $P_{win}$ is the probability that at least 3 transits will be detected in the available *Kepler* data. Since most targets have *duty* = 0.95, short period transits (*j* ≫ 3) produce a $P_{win}$ of nearly 1, which approaches 0 as *j* drops below 3. Almost all of our sample have data throughout the full data set span of 1458.931 days; the mean $data_{span}$ for this study is 1427.445 days. Other studies have used various ways to account for the effects of limb darkening, such as that of Claret & Bloemen ([2011](#bib.bib17)). We attempt to mimic the pipeline by looking at the empirical limb darkening values chosen for existing KOIs (with the same stellar parameters discussed in Section [2](#S2)). We find that the two limb darkening parameters (*u*₁, *u*₂) used to fit planet transits within the pipeline are strongly correlated with stellar temperature ($T_{eff}$). The best fit lines to this correlation are as follows:

$$\begin{aligned}
u_{1} & = -1.93 \times 10^{-4}\,T_{eff} + 1.5169 \\
u_{2} & = 1.25 \times 10^{-4}\,T_{eff} - 0.4601 \\
\end{aligned}$$ (9)

We warn that these correlations mimic the choice of the pipeline rather than the true stellar features and should not be used for more evolved stars with log(*g*) < 4. With these calculated parameters, it is now possible to calculate the expected MES of the *Kepler* pipeline as presented by Burke & Catanzarite ([2017a](#bib.bib8)).
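The window function of equations (7)-(8) transcribes directly to code; the defaults below are the sample means quoted above:

```python
def p_win(p_days, duty=0.95, data_span=1427.445):
    """Probability that at least 3 transits land in the observing window (eq. 8)."""
    j = data_span / p_days                    # eq. (7): expected number of transits
    miss = 1.0 - duty                         # chance a given transit is missed
    return (1.0
            - miss ** j                                         # 0 transits observed
            - j * duty * miss ** (j - 1)                        # exactly 1 observed
            - j * (j - 1) * duty ** 2 * miss ** (j - 2) / 2.0)  # exactly 2 observed
```

The three subtracted terms are the binomial probabilities of observing 0, 1, or 2 transits out of *j* opportunities, so short periods give $P_{win} \approx 1$ while the probability drops steeply as *j* falls toward 3.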
$$k_{rp} = \frac{r}{R_{\star}}$$ (10)

$$c_{0} = 1 - (u_{1} + u_{2})$$ (11)

$$\omega = \frac{c_{0}}{4} + \frac{u_{1} + 2u_{2}}{6} - \frac{u_{2}}{8}$$ (12)

$$\begin{array}{r}
depth = 1 - \Big(\frac{c_{0}}{4} + \frac{(u_{1} + 2u_{2})(1 - k_{rp}^{2})^{\frac{3}{2}}}{6} \\
- \frac{u_{2}(1 - k_{rp}^{2})}{8}\Big)\,\omega^{-1} \\
\end{array}$$ (13)

$$MES = \frac{depth \times 10^{6}}{CDPP} \times 1.003 \times n_{tr}^{\frac{1}{2}}$$ (14)

where *CDPP* is in ppm from the *Kepler* stellar catalog, interpolated by the transit duration. Finally, we account for the systematic detection efficiency using the Gamma distribution CDF described in Section [4](#S4):

$$P_{tip}^{m=1} = \Gamma_{CDF}^{m=1}(MES)$$ (15)

where the parameters for $\Gamma_{CDF}$ are given in Table [1](#S3.T1). Combining all of the discussed probabilities provides an estimate of the detection likelihood of the highest MES planet within the system. This probability is given as follows:

$$P_{det}^{m=1} = P_{tr} \, P_{win} \, P_{tip}^{m=1}$$ (16)

This equation provides a metric for understanding the bias of the highest MES planet. This probability is dependent on detection order, and we shall now discuss in the next section how higher multiplicity planets (*m* ≥ 2) can be accounted for.

### 6.2 Probability of Detection for *m* ≥ 2

For *m* ≥ 2 planets we follow much of what is described in the previous section ([6.1](#S6.SS1)), with a few mild changes to better model the differences in detection probability.
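Stepping back to the *m* = 1 chain for a moment, equations (10)-(14) combine as in the sketch below. The limb-darkening coefficients and CDPP value used in the test are illustrative, not fitted values:

```python
import math

def expected_mes(r_ratio, u1, u2, cdpp_ppm, n_tr):
    """Expected pipeline MES from eqs. (10)-(14); r_ratio = r / R_star."""
    k_rp = r_ratio                                       # eq. (10)
    c0 = 1.0 - (u1 + u2)                                 # eq. (11)
    w = c0 / 4.0 + (u1 + 2.0 * u2) / 6.0 - u2 / 8.0      # eq. (12)
    depth = 1.0 - (c0 / 4.0                              # eq. (13)
                   + (u1 + 2.0 * u2) * (1.0 - k_rp ** 2) ** 1.5 / 6.0
                   - u2 * (1.0 - k_rp ** 2) / 8.0) / w
    # eq. (14): depth is a fraction, CDPP is in ppm, hence the factor 1e6.
    return depth * 1e6 / cdpp_ppm * 1.003 * math.sqrt(n_tr)
```

A useful sanity check on the reconstruction is that the depth vanishes as $k_{rp} \to 0$, since the bracket in eq. (13) then reduces to *ω*, giving 1 − *ω*/*ω* = 0.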
We change the transit probability to reflect the probability of *m* planets transiting, accounting for the probability of finding this planet with at minimum *m* − 1 other planets. To best capture the probabilities of our simulation in Section [5.1](#S5.SS1), we interpolate between simulated data points for the transit probability:

$$P_{tr}^{m} = \text{Linear Interpolate}\left(m, \frac{a_{p}}{R_{\star}}\right)$$ (17)

For example, if we are looking at a planet with m=3 (the third planet detected) with $a_p/R_\star$ = 32, we would expect a transit probability of ∼0.008. This can be clearly seen in the data provided by Figure [2](#S6.F2). Since no simulated value exists at this exact point, we interpolate between the two neighboring estimates to establish this value. Here we use the new detection efficiency for higher *m* planets:

$$P_{tip}^{m \geq 2} = \Gamma_{CDF}^{m \geq 2}(MES)$$ (18)

$$P_{det}^{m} = P_{tr}^{m} \, P_{win} \, P_{tip}^{m \geq 2}$$ (19)

where Equation [8](#S6.E8) is again used for $P_{win}$. In reality, there are differing window functions for each detection order; when tested, we find that ≈0.4% of the light curve is lost with the addition of each planet. One can see that varying the *duty* parameter of Equation [8](#S6.E8) by even 3% has negligible effects on the $P_{win}$ value.
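Operationally, eq. (17) is just a table lookup with linear interpolation. In the sketch below, the grid values are placeholders standing in for the simulated probabilities of Figure 2 (only the *m* = 1 row, $R_\star/a_p$, is exact; the *m* = 2, 3 rows are invented for illustration):

```python
import numpy as np

# Nodes in a_p / R_star at which the (hypothetical) simulation was evaluated.
a_over_r_nodes = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
p_tr_grid = np.array([
    [0.200, 0.100, 0.050, 0.025, 0.0125],   # m = 1: exactly R_star / a_p
    [0.150, 0.060, 0.025, 0.010, 0.0040],   # m = 2 (assumed values)
    [0.120, 0.040, 0.014, 0.005, 0.0017],   # m = 3 (assumed values)
])

def p_tr_m(m, a_over_r):
    """Linearly interpolate the simulated m-transit probability (eq. 17)."""
    return float(np.interp(a_over_r, a_over_r_nodes, p_tr_grid[m - 1]))
```

Between nodes the probability is a straight-line blend of its two neighbors, matching the interpolation described in the text.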
Because the detection efficiency is the same for all *m* ≥ 2, the only difference between the *m* = 2 : 7 probability maps is the transit probability. This produces 7 distinct detection grids (*m* = 1 : 7); the first four can be seen in Figure [3](#S6.F3). The detection order of the exoplanet in question dictates which grid is most appropriate for application. To summarize, we have described how the recovery probabilities (CDF) are a function of detection order (*m*). We use this to create 7 different detection efficiency maps (Figure [3](#S6.F3)). In order to create a map for m=1 planets, we sample across planet period and radius space. Doing so, we calculate the probability of detection and average over all stars within the *Kepler* stellar sample. We expand upon this idea, creating a map for m=2 planets. Here the new recovery CDF is implemented to account for the additional loss of planets at higher detection orders. Furthermore, we account for the probability of two planets within the system transiting using a mild mutual inclination model (Figure [2](#S6.F2)). Jumping from m=1 to m=2, we lose an additional 5.5% and 15.9% of the planets for periods < 200 days and periods > 200 days, respectively. This is due to properties of the pipeline when fitting multiple transit systems.
This procedure is repeated for m=3:7, each accounting for the appropriate number of transiting planets according to the data in Figure [2](#S6.F2) (3-7, respectively). There is an additional loss of nearly 70% at each respective discovery order due to the unlikely event of multiple orbital alignment with our line of sight. It is clear that these two factors, geometric transit likelihood and pipeline recovery, have a significant effect on the multiplicity extracted from the *Kepler* data set.

Figure 4: The sorting simulation for m=1 and m=2. The solid blue line represents the Beta distribution fit to the respective data set. The boxes are a histogram of the simulated data after being sorted. It is apparent that sorting has a more dramatic effect on radius than period. This is expected, as MES ∝ *r*²/*p*^(1/3). A mild deviation from the model is noted in the radius skew. This discrepancy dissolves as we move into higher detection orders. Furthermore, the effects of these deviations are insignificant, given the cuts on duty cycle, data span, and stellar type already made.

Table 2: The parameters found in testing the sorting effects of MES. These parameters correspond to the Beta distribution skew expected for the CDF of each multiplicity population.

| | m=1 | m=2 | m=3 | m=4 | m=5 | m=6 | m=7 |
|-----------|-------|-------|-------|-------|-------|-------|-------|
| $a_{rad}$ | 1.095 | 1.030 | 1.028 | 1.013 | 0.998 | 1.065 | 0.951 |
| $b_{rad}$ | 0.923 | 1.470 | 2.206 | 3.063 | 4.013 | 4.898 | 6.614 |
| $a_{per}$ | 0.957 | 1.152 | 1.172 | 1.184 | 1.183 | 1.166 | 1.234 |
| $b_{per}$ | 1.004 | 1.010 | 1.000 | 0.999 | 0.997 | 1.006 | 0.994 |

7 The Likelihood Function
-------------------------

Using the efficiency grids derived in the previous section, we can infer properties of the underlying planetary population.
Here we discuss the likelihood function required to implement Bayes' theorem and extract these population parameters. We adopt the approach of previous studies (e.g. Youdin [2011](#bib.bib64); Petigura et al. [2013a](#bib.bib46); Burke et al. [2015](#bib.bib7)), modeling the underlying population as characterized by independent power-law distributions in period and radius. We also make explicit the assumption that there is a single planetary population – assuming that systems which show only one transit are drawn from the same underlying distribution as those which show multiple transits. We will examine the validity of this assumption in Section [8.1](#S8.SS1). Our focus on multiple systems also means that we include more of the *Kepler* parameter space than was used in most previous papers. The population of exoplanets is modeled as follows:

$$\frac{d^{2}N}{dp\,dr} = f\,g(p)\,q(r)$$ (20)

$$g(p) = \left\{ \begin{array}{ll}
C_{p1}\,p^{\beta_{1}} & \text{if } p < p_{br} \\
C_{p2}\,p^{\beta_{2}} & \text{if } p \geq p_{br} \\
\end{array} \right.$$ (21)

$$q(r) = \left\{ \begin{array}{ll}
C_{r1}\,r^{\alpha_{1}} & \text{if } r < r_{br} \\
C_{r2}\,r^{\alpha_{2}} & \text{if } r \geq r_{br} \\
\end{array} \right.$$ (22)

where *f*, *α*₁, *α*₂, *β*₁, *β*₂, $p_{br}$, and $r_{br}$ are all fit parameters. We require continuity at $r_{br}$ and $p_{br}$ through the normalization constants for *q*(*r*) and *g*(*p*). Our method expands on the Poisson process likelihood used by Youdin ([2011](#bib.bib64)). The main difference is the separation of planets by detection order (*m*). In doing so, we require different occurrence factors (*f*) for each *m*, increasing the required number of parameters. Previous studies such as Burke et al.
([2015](#bib.bib7)) have used a single occurrence value, providing an average occurrence factor. By separating the occurrence factor as a function of detection order, we can allow for differences in detection efficiency while simultaneously fitting for the occurrence of planet multiplicity.

$$Likelihood = \prod\limits_{m = 1}^{7}\left[\prod\limits_{i = 1}^{n_{m}} f_{m}\,\eta_{m}(p_{i},r_{i})\,g(p_{i})\,q(r_{i})\right]e^{-N_{m}}$$ (23)

$$N_{m} = 86{,}605\,f_{m}\int_{0.5\,\text{days}}^{500\,\text{days}}\int_{0.5\,r_{\oplus}}^{16\,r_{\oplus}}\eta_{m}(p,r)\,O_{m}(p,r)\,g(p)\,q(r)\,dr\,dp$$ (24)

where $N_m$ represents the expected number of planets detected for each discovery order (*m*) and $f_m$ is an occurrence factor for each *m*. This value provides information on the occurrence of each *m* multiplicity. However, to find meaningful information from these values, they must be disentangled from each other, as discussed in Section [7.0.3](#S7.SS2.SSS3). The 86,605 accounts for the number of stars in our test sample and $\eta_m(p,r)$ is the detection probability at the given detection order. The function $O_m(p,r)$ is the sorting order correction for the Probability Distribution Function (PDF). This function is necessary to account for the bias in the PDF introduced by our sorting in terms of detection order (discussed further in Section [7.0.2](#S7.SS2.SSS2)).
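The broken power laws of equations (21) and (22) are continuous at the break by construction. A minimal sketch of that normalization follows, with the constants fixed so the value at the break is 1 (the absolute scale being carried by the occurrence factor *f*):

```python
import numpy as np

def broken_power_law(x, x_br, s1, s2):
    """Broken power law with slope s1 below x_br and s2 above it.

    Continuity at the break requires C1 * x_br**s1 == C2 * x_br**s2;
    here both constants are chosen so the function equals 1 at x_br.
    """
    x = np.asarray(x, dtype=float)
    c1 = x_br ** (-s1)
    c2 = x_br ** (-s2)          # fixed by continuity given c1
    return np.where(x < x_br, c1 * x ** s1, c2 * x ** s2)
```

For instance, `broken_power_law(p, 7.08, 0.76, -0.64)` reproduces the shape of the period law *g*(*p*) with the best-fit slopes and break reported in Section 8, up to the overall normalization.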
It is often more useful to consider the natural log of the likelihood, which can be simplified:

$$\ln(Likelihood) \propto \sum\limits_{m = 1}^{7}\left[\sum\limits_{i = 1}^{n_{m}} \ln\big(f_{m}\,g(p_{i})\,q(r_{i})\big) - N_{m}\right]$$ (25)

Using the ln(*Likelihood*) is common practice with fitting algorithms, where the ratios of likelihoods are compared to determine the best fit (maximum likelihood). Since $\eta_m(p,r)$ is not dependent on the fitting parameters, it can be treated as a constant.

#### 7.0.1 Calculating $N_m$

To find $\eta_m(p,r)$ we use the detection maps found in Sections [6.1](#S6.SS1) and [6.2](#S6.SS2). Here we are assuming an average probability of detection over the stellar population. To treat this integral properly, one would have to compute the detection probability for each star; such a procedure would be computationally expensive and provide a minimal increase in precision.

Table 3: The mixture probabilities for each detection order, accounting, for example, for the possibility that two and three planet systems may only be found with a single planet. These values were found using our transit probability model described in Section [5.1](#S5.SS1).
| | m=1 | m=2 | m=3 | m=4 | m=5 | m=6 |
|------------------------------------------|------|------|------|------|------|------|
| $\frac{P(2\mid\overline{(m:1)})}{P(m)}$ | 0.67 | - | - | - | - | - |
| $\frac{P(3\mid\overline{(m:2)})}{P(m)}$ | 0.68 | 0.50 | - | - | - | - |
| $\frac{P(4\mid\overline{(m:3)})}{P(m)}$ | 0.53 | 1.05 | 0.50 | - | - | - |
| $\frac{P(5\mid\overline{(m:4)})}{P(m)}$ | 0.53 | 1.12 | 1.52 | 0.46 | - | - |
| $\frac{P(6\mid\overline{(m:5)})}{P(m)}$ | 0.37 | 1.07 | 1.85 | 1.69 | 1.22 | - |
| $\frac{P(7\mid\overline{(m:6)})}{P(m)}$ | 0.33 | 0.71 | 1.64 | 1.90 | 1.25 | 1.22 |

#### 7.0.2 Sorting Order

Here we provide a brief overview of order statistics and why they are an important feature of this model. As mentioned previously, the *Kepler* pipeline finds planets in order of decreasing MES. Such ordering will skew the distribution of planets found in each *m*. Larger, short-period planets will tend to be found in order *m* = 1 or *m* = 2, because there are more transits and deeper transit depths. Smaller, long-period planets will tend towards orders *m* = 6 or *m* = 7. To account for such a skew, a joint distribution model ($P_m(x)$) can be utilized (David & Nagaraja, [2003](#bib.bib18)):

$$P_{m}(x) \propto P_{0}(x)\,C_{0}(x)^{a_{m} - 1}\,\big(1 - C_{0}(x)\big)^{b_{m} - 1}$$ (26)

Here, $P_0(x)$ is the true underlying probability distribution function and $C_0(x)$ is the true cumulative distribution function. $a_m$ and $b_m$ can range over (0, ∞) and set the skew of the distribution. Essentially, the PDF of the distribution is skewed by a Beta distribution of the CDF. In the case of $a_m = b_m = 1$ the sorting skew returns the original PDF ($P_0(x)$). The parameters $a_m$ and $b_m$ can be found analytically for equally sampled orders, but this becomes far more complex in the decreasing case at hand (each *m* has fewer planets than the last).
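The *Kepler*-style sorting that induces this skew can be simulated directly: draw systems, re-sort each one by the MES proxy *r*²/*p*^(1/3), and the *m* = 1 slot becomes biased toward large radii and short periods. A scaled-down sketch (three-planet systems and a smaller sample than the paper's 10⁷):

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw three-planet systems with uniform period (days) and radius (R_earth).
n_sys = 50_000
p = rng.uniform(0.5, 500.0, (n_sys, 3))
r = rng.uniform(0.5, 16.0, (n_sys, 3))

# Re-sort each system by the MES proxy r^2 / p^(1/3), largest first.
order = np.argsort(-(r ** 2 / p ** (1.0 / 3.0)), axis=1)
p_sorted = np.take_along_axis(p, order, axis=1)
r_sorted = np.take_along_axis(r, order, axis=1)

# The m = 1 slot is skewed toward large radii and short periods.
mean_r_m1 = r_sorted[:, 0].mean()
mean_r_m3 = r_sorted[:, 2].mean()
```

Fitting a Beta distribution to the CDF of each sorted slot, as described next, recovers the $a_m$, $b_m$ skew parameters of eq. (26).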
To determine the best values for this case, we choose to simulate this sorting mechanism on a uniform distribution, where the skew can be clearly isolated and extracted. In doing so, we force the ratio of each *m* sample to mimic that of the empirical population. Each system is then sorted by *r*²/*p*^(1/3), imitating *Kepler*’s MES sorting. For example, if a system of (r=1.2, p=25), (3.5, 20), (4.1, 150) were randomly drawn into *m* = 1, 2, 3 detection orders, they would be re-sorted as (3.5, 20), (4.1, 150), (1.2, 25), corresponding to *m* = 1, 2, 3. As we can see, the highest MES will always rise to *m* = 1. This is then repeated for 10⁷ systems. Figure [4](#S6.F4) shows how the first two detection orders are skewed by this procedure. If sorting were not an issue, these distributions would maintain the uniform flat appearance. Fitting a Beta distribution to this skew, we can determine the best $a_m$ and $b_m$ parameters for our sample. These parameters are provided in Table [2](#S6.T2). Since this joint distribution is separable, we define the skew portion of the distribution as $O_m(p,r)$:

$$\begin{array}{ll}
O_{m}(p,r) = N & \times\, C_{r}(r)^{a_{m,r} - 1}\lbrack 1 - C_{r}(r)\rbrack^{b_{m,r} - 1} \\
& \times\, C_{p}(p)^{a_{m,p} - 1}\lbrack 1 - C_{p}(p)\rbrack^{b_{m,p} - 1} \\
\end{array}$$ (27)

where $C_r(r)$ and $C_p(p)$ represent the CDFs of the radius and period distributions, respectively, and *N* represents a normalization factor that we find numerically within the MCMC.

#### 7.0.3 Occurrence Factor

As noted, the value $f_m$ is an integrated occurrence factor.
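Concretely, the disentangling described below reduces to back-substitution on a triangular system: the highest detection order gives its true factor directly, and lower orders follow in turn. A toy three-order sketch (the geometric ratios here are invented; the paper's values live in its Table 3):

```python
# Hypothetical ratios P(n | not (m:n-1)) / P(m) for a three-order toy case,
# keyed by (m, n). These numbers are invented for illustration only.
ratio = {(1, 2): 0.67, (1, 3): 0.68, (2, 3): 0.50}

def f_from_F(F):
    """Integrated occurrence factors f_m from true factors F_m (the forward map)."""
    M = len(F)
    return [F[m] + sum(F[n] * ratio[(m + 1, n + 1)] for n in range(m + 1, M))
            for m in range(M)]

def F_from_f(f):
    """Invert the forward map by back-substitution, from the highest order down."""
    M = len(f)
    F = [0.0] * M
    for m in range(M - 1, -1, -1):
        F[m] = f[m] - sum(F[n] * ratio[(m + 1, n + 1)] for n in range(m + 1, M))
    return F
```

Because each $f_m$ depends only on $F_m$ and the higher-order factors, the inversion is exact and needs no iteration.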
In order to extract meaningful values, we recognize that many *m* = 6, 7 planet systems will only provide detectable transits for one or two planets within the system. This leads to an increased contribution to lower detection orders. Thus we adopt the following method for disentangling the true occurrence factors ($F_m$):

$$f_{m} = F_{m} + \sum\limits_{n = m + 1}^{7} F_{n}\,\frac{P\big(n \mid \overline{(m:n-1)}\big)}{P(m)}$$ (28)

Here $P\big(n \mid \overline{(m:n-1)}\big)$ represents the probability of finding planet *n* given that planets (*m* : *n* − 1) are not found, and *P*(*m*) is the probability of finding planet *m*. This ratio accounts for the dependence between occurrence factors. If the mutual inclination were purely isotropic and planets truly independent, this ratio would be one. We use our transit simulation from Section [5.1](#S5.SS1) to extract these marginalized probabilities. Table [3](#S7.T3) contains the results of this simulation. This model indicates that each multi-planet system will have more than one opportunity to produce an *f*₁ planet. The physical interpretation of the $F_m$ values is the fraction of stars that have at least *m* planets.

### 7.1 Fitting the Data

We employ EMCEE (Foreman-Mackey et al., [2013](#bib.bib23)), an affine-invariant ensemble sampler (Goodman & Weare, [2010](#bib.bib27)), to explore the parameter space of our study. To better constrain the 13 fit parameters, a Bayesian framework is implemented. Uniform priors in linear space are used for all parameters. For *α*₁, *α*₂, *β*₁, and *β*₂ the priors range from -30 to +30.
For $r_{br}$ and $p_{br}$ the priors range from $r_{min}$ and $p_{min}$ to $r_{max}$ and $p_{max}$ of our planet sample, respectively. One unique restriction for our priors is that $F_m$ must be larger than $F_{m+1}$: it is not possible to have a higher occurrence of *m* + 1 planets than of *m* planets. To avoid truncation bias and maintain this ordering, all $F_m$ priors range from 0 to $F_{m-1}$; in the special case of *m* = 1, the prior ranges from 0 to 1. It is important to remember that $F_m$ represents the fraction of the population containing at least *m* planets. Therefore, this cascading prior still allows for larger multiplicity systems to be more common than smaller multiplicity systems.

8 Discussion
------------

In this section, we apply the formalism we have developed to infer revised occurrence rate parameters for planets orbiting GK dwarfs. This sample includes data from the final *Kepler* release DR25 and updated planet radius measurements from the CKS and *Gaia* DR2. Beyond these recent data improvements, we now include a corrected detection efficiency for multiple-planet systems. Given that many multiple-planet systems span much of the *Kepler* parameter space, we include planets within 0.5 < *r* < 16 *r*⊕ and 0.5 < *p* < 500 days. In implementing two detection efficiencies, this study expands on the Poisson process likelihood function used by other authors, allowing for the treatment of planet multiplicity. This Bayesian framework is fit using an MCMC, where 20,000 steps are used to model the posterior of each parameter. The resulting posteriors are presented in Figure [8](#A0.F8). From this model we infer best fit power-law values of $\alpha_{1} = -1.65^{+0.05}_{-0.06}$, $\alpha_{2} = -4.35 \pm 0.12$, $\beta_{1} = 0.76 \pm 0.05$, and $\beta_{2} = -0.64 \pm 0.02$.
The breaks in our best-fit model occur at $p_{br} = 7.08^{+0.32}_{-0.31}$ days and $r_{br} = 2.66 \pm 0.06\,r_\oplus$. One novel feature of our fitting method is the ability to extract exoplanet multiplicity, provided through the $F_m$ parameters; these values indicate the probability of a system having at least *m* planets. We find the following best-fit values: $F_1 = 0.72^{+0.04}_{-0.03}$, $F_2 = 0.68 \pm 0.03$, $F_3 = 0.66 \pm 0.03$, $F_4 = 0.63 \pm 0.03$, $F_5 = 0.60 \pm 0.04$, $F_6 = 0.54^{+0.04}_{-0.05}$, and $F_7 = 0.39^{+0.07}_{-0.09}$.

Figure 5: A plot of the forward-modeled population derived from our Bayesian analysis. The red x marks symbolize the model values with their corresponding 68.3% confidence intervals. To find this interval the model is sampled 50 times using the posterior parameter distributions; the uncertainty reflects the fluctuations found across these trials. The black points show the *Kepler* data with Poisson uncertainty. For *m* = 5:7 many of the bins contain 1 or 0 planets, where small number statistics cause significant variations. To minimize these variations we present the combined result for *m* ≥ 4; note, however, that our forward model does differentiate between these detection orders. Left: Forward model of multiple and single planet systems. Right: Forward model of only multiple-planet systems, produced by fitting only the data of multiple-planet systems.

### 8.1 Forward Modeling the Results

Thus far, we have accounted for various parameter and population dependencies. To ensure that this process yields meaningful results, we sample the extracted population and subject it to the detection constraints described in Section 6.
Here we present the [ExoMult](https://github.com/jonzink/ExoMult) forward-modeling software. This code, developed in *R*, simulates these detection effects and produces a population of detected planets. Using this program, we can make far fewer assumptions and directly recover the expected population. For example, the probability of transit for all 7 planets can be accounted for by sampling the system inclination, mutual inclinations, and arguments of periapsis directly. Furthermore, the detection probability is not marginalized over all stars, but rather evaluated for each system independently. The first step in our forward model is drawing each system of planets according to the population parameters given in Figure 8. Each system is randomly oriented, with mutual inclinations drawn from a Rayleigh distribution. For planets with detectable impact parameters (*b* < 1), the planets within each system are sorted in decreasing MES. A probability of recovery is assigned to each planet according to the procedure laid out in Sections 6.1 and 6.2. Based on the calculated probability of detection, each planet is either detected or lost by drawing from a random number generator. Figure 5 shows the best-fit model to the observed population obtained with this forward model.
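The loop above can be sketched as follows. This is only a schematic stand-in for ExoMult: the $a/R_*$ value, the MES draw, and the detection-probability function are all placeholder assumptions, not the *Kepler* completeness products used in the paper.

```python
# Schematic forward-model loop: orient a system, draw Rayleigh mutual
# inclinations, keep transiting planets (b < 1), rank them in decreasing
# MES, and Bernoulli-draw detections with an order-dependent probability.
# p_detect, the MES draw, and a_over_rstar are placeholders.
import math
import random

def p_detect(mes, order):
    # placeholder: probability rises with MES and falls with detection order
    return max(0.0, min(1.0, (mes - 7.1) / 10.0)) * 0.9 ** (order - 1)

def simulate_system(n_planets, a_over_rstar=10.0, rayleigh_sigma=1.5, rng=random):
    # isotropic system orientation; per-planet Rayleigh mutual-inclination scatter
    i_sys = math.degrees(math.acos(rng.uniform(-1.0, 1.0)))
    transiting = []
    for _ in range(n_planets):
        di = rayleigh_sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
        # impact parameter b = (a / R*) cos i, with a common placeholder a/R*
        b = a_over_rstar * abs(math.cos(math.radians(i_sys + di)))
        if b < 1.0:
            transiting.append(rng.uniform(5.0, 30.0))  # placeholder MES draw
    transiting.sort(reverse=True)                      # decreasing MES
    return sum(1 for k, mes in enumerate(transiting)
               if rng.random() < p_detect(mes, order=k + 1))
```

Running `simulate_system(7)` many times yields the distribution of detected multiplicities implied by the assumed geometry and completeness.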
It is clear that our Bayesian method provides a reasonable model, with nearly all data points within a 1*σ* deviation of the observed distribution. Fulton et al. (2017) and Berger et al. (2018) have provided evidence for a dip in the radius population around 1.5 − 2 $r_\oplus$. This gap is apparent in the *m* = 1 case of Figure 5. While the deviation from a broken power-law is mild, we explore its effects here. When we remove the single planet systems from the data set, this gap is no longer apparent. One plausible explanation for this gap is a unique population of single planet systems (although evidence from Weiss et al. (2018) shows that a weak gap can be seen in the multi-planet systems when aggregated). To explore this theory, we isolate the multi-planet systems and run our fitting procedure again. We find a mild difference in the extracted *α* and *β* power-law values ($\alpha_1 = -1.98 \pm 0.08$; $\alpha_2 = -3.90 \pm 0.16$; $\beta_1 = 0.96 \pm 0.08$; $\beta_2 = -0.79 \pm 0.03$). This indicates that if a separate population does exist, the population parameters are only weakly affected by its inclusion in our dataset. The resulting forward model of this fit is presented in Figure 5. The increase in uncertainty seen in these parameters is due to the reduced sample used for fitting (1305 multiple-system candidates vs. 2942 total candidates). It is notable that the empirical *Kepler* data set is sharply peaked, while the model does not provide a similar sharpness for the *m* = 1 radius population (Figure 5, right).
This could be due to the existence of the aforementioned radius gap. It is also possible that a true accounting of planet period and radius covariance could produce such a peak. Millholland et al. (2017) and Weiss et al. (2017) show that planets within multiple systems tend to have similar masses and radii. Although these features are not properly accounted for here, Figure 5 (left) shows that these mild population characteristics remain small and do not deviate greatly from a simple broken power-law model. We hope to include such features in the next iteration of this software. Future studies may use this forward-modeling technique to directly determine the population parameters; unfortunately, it remains computationally expensive to properly account for all detection features. Traub (2016) overcame this cost by ignoring multiplicity.

### 8.2 Comparison with Prior Work

We use a Bayesian method to infer population parameters for the *Kepler* exoplanet population, following much of the procedure presented in Youdin (2011). However, we build upon this method to extract information about the population multiplicity. Using a broken power-law distribution, we find that population parameters of $\alpha_1 = -1.65^{+0.05}_{-0.06}$, $\alpha_2 = -4.35 \pm 0.12$, $\beta_1 = 0.76 \pm 0.05$, and $\beta_2 = -0.64 \pm 0.02$ provide the best replication of the empirical population. The best-fit breaks in these distributions are $p_{br} = 7.08^{+0.32}_{-0.31}$ days and $r_{br} = 2.66 \pm 0.06\,r_\oplus$. Many prior studies have examined the occurrence of planets as determined by *Kepler*.
Youdin (2011) provided an early estimate of the occurrence rate using a Poisson process likelihood, finding that the PDF exhibited a power-law break at periods ∼7 days, with *α* = −2.44 and *β* = 3.23 at short periods, and *α* = −2.93 and *β* = −0.37 at longer periods (we have converted his numbers into the definitions of *α* and *β* adopted here). These values suggest a steep rise towards smaller radius planets at all periods, and a sharp rise with increasing period up to the break, followed by a gradual decline at longer periods. This is consistent with other contemporaneous analyses (Catanzarite & Shao, 2011; Howard et al., 2012; Dong & Zhu, 2013). With the accumulation of additional data and more detailed treatment of selection effects, subsequent analyses favored a flatter distribution extending to smaller radii (Fressin et al., 2013; Petigura et al., 2013b; Silburt et al., 2015; Traub, 2016), and a distribution falling off inversely with period (*β* ∼ −1) at longer periods (Petigura et al., 2013a; Silburt et al., 2015). The plateau at small radii is also found around lower mass hosts (Dressing & Charbonneau, 2013, 2015; Mulders, Pascucci & Apai, 2015). Burke et al. (2015) presented an extensive discussion of planet occurrence using the Q1–Q16 *Kepler* sample. For their baseline model, they found corresponding values of $\alpha_1 = -1.54 \pm 0.50$ and $\beta_2 = -0.68 \pm 0.17$, with only weak evidence for a break in radius and assuming no break in period (they considered only periods > 50 days and radii < 2.5 $r_\oplus$). This is perhaps the most directly comparable to our analysis, as it uses the completeness estimates from Christiansen et al. (2015), whereas this study uses the updated Christiansen (2017) completeness data.
We find very similar values ($\alpha_1 = -1.65^{+0.05}_{-0.06}$; $\beta_2 = -0.64 \pm 0.02$) in a comparable regime. In particular, we note that both of these studies find an increasing occurrence of small radius planets down to the detection threshold, a result also supported by another Bayesian estimate in Hsu et al. (2018). Previous studies have used more limited parameter ranges to avoid issues of parameter covariance and susceptibility to completeness mapping. We approach the problem with a rigorous treatment of completeness mapping and a larger parameter space, recovering a similar power-law distribution. This congruity is encouraging, as it shows that the inclusion of a larger parameter space does not greatly affect the inferred model. Our inclusion of a broader range of periods and radii allows us to constrain the power-law uncertainty for radius and period to 3.8% and 5.4%, respectively. We find that the breaks in our period and radius distributions occur at $p_{br} = 7.08^{+0.32}_{-0.31}$ days and $r_{br} = 2.66 \pm 0.06\,r_\oplus$. These results are consistent with those found by prior authors.

Table 4: A representation of the expected empirical multiplicity as a function of selection effects. Each column shows the expected population using the best-fit model from this study (see Figure 8). Starting from the left and moving right, each effect is added on top of all previous effects. The Multiple Detection Efficiency is broken into two columns; the Data column directly uses the multiplicity values shown in Figure 6.
In contrast, the Model column uses the modified Poisson distribution inferred from the multiplicity data (*λ* = 8.40 ± 0.31 and *κ* = 0.70 ± 0.01).

| | Geometric | Mutual Inclination | Single Detection Efficiency | Multiple Detection Efficiency (Data) | Multiple Detection Efficiency (Model) | Real *Kepler* Data |
|---|---|---|---|---|---|---|
| Singles | 1870 | 1910 | 1558 | 1649 ± 71 | 1629 ± 61 | 1637 |
| Doubles | 686 | 816 | 397 | 374 ± 29 | 375 ± 33 | 346 |
| Triples | 354 | 483 | 115 | 103 ± 15 | 113 ± 15 | 119 |
| Quadruples | 207 | 282 | 30 | 26 ± 6 | 25 ± 6 | 43 |
| Quintuples | 127 | 159 | 8 | 5 ± 3 | 5 ± 3 | 13 |
| Sextuples | 132 | 77 | 1 | 1 ± 1 | 1 ± 1 | 2 |
| Septuples | 167 | 28 | 0 | 0 ± 1 | 0 ± 1 | 1 |

### 8.3 Survival Function

Figure 6: A plot of the modified Poisson survival function and the system fractions provided by our Bayesian analysis. This model is fit using a likelihood maximization technique with the assumption of Gaussian uncertainty (essentially a *χ*² minimization). The posterior distribution for the model is plotted by sampling 5000 models from the parameter posterior distributions, shown in red; the dark red represents the 1*σ* range and the light red indicates the extent of the 2*σ* range. We have included the models provided by Hansen & Murray (2013) (gold △ markers) and Fang & Margot (2012) (olive green + markers) for comparison; both models have been renormalized by our best-fit *κ* value.

Within this study, we only use planets provided by the *Kepler* pipeline. The highest multiplicity seen is *m* = 7 for a GK type star. This is certainly not the actual highest multiplicity within this parameter space: Shallue & Vanderburg (2018) used a deep convolutional neural network to extract an 8th planet from the *Kepler*-90 light curve, confirming this assertion.
Using a Poisson survival function we can extrapolate the probability of existence for these higher multiplicity systems. The $F_m$ values found by this study represent the fraction of stars with at least *m* planets. This lends itself well to a survival function, which gives the probability of existing beyond a certain value (or multiplicity). A survival function $S(x)$ can simply be written as:

$$S(x) = 1 - \mathrm{CDF}(x) \qquad (29)$$

where $\mathrm{CDF}(x)$ is the cumulative distribution function of the model. In this case we use a modified Poisson distribution to model multiplicity. Poisson distributions are natural for planet multiplicity, as they describe counting statistics. The modification is that the distribution is not normalized to one, but rather to some fraction *κ* of one. Since the distribution is no longer normalized, the survival function must be modified slightly: $S(x) = \kappa - \mathrm{CDF}(x)$. This modification allows for an excess or scarcity of zero-planet systems; because we are only interested in stars that do harbor planets, this modification is necessary. The CDF for this modified function is given as:

$$\mathrm{CDF}(m) = \sum_{n=1}^{m} \kappa\,\frac{\lambda^{n} e^{-\lambda}}{n!} \qquad (30)$$

where *κ* and *λ* are both fit parameters. Further discussion of this modified Poisson distribution can be found in Section 2.3 of Fang & Margot (2012). The results of this fit are presented in Figure 6. We find that *λ* = 8.40 ± 0.31 and *κ* = 0.70 ± 0.01 provide the best match to this distribution. This large *λ* value implies a non-negligible fraction of systems with *m* > 10.
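Equations (29) and (30) can be sketched directly, using the best-fit *λ* = 8.40 and *κ* = 0.70; in this model the survival value $S(m) = \kappa - \mathrm{CDF}(m)$ is the fraction of stars hosting more than *m* planets.

```python
# Modified Poisson CDF of Equation (30) and the corresponding survival
# function S(m) = kappa - CDF(m), with the paper's best-fit parameters.
import math

def mod_poisson_cdf(m, lam=8.40, kappa=0.70):
    return kappa * sum(lam**n * math.exp(-lam) / math.factorial(n)
                       for n in range(1, m + 1))

def survival(m, lam=8.40, kappa=0.70):
    # fraction of stars with more than m planets
    return kappa - mod_poisson_cdf(m, lam, kappa)
```

Note that `survival(0)` returns *κ* itself, the fraction of stars hosting at least one planet, and the function decreases monotonically with *m*.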
Since Equation (30) allows for an inflated number of stars without planets, the global average for GK dwarfs in the *Kepler* parameter space (denoted $\langle N_{pl}\rangle$) can be found by multiplying *λ*, an estimate of the average number of planets a planet-harboring system will contain, by *κ*, the fraction of stars that do harbor planets. We find $\langle N_{pl}\rangle = 5.86 \pm 0.18$ planets per star. This is likely a lower bound, as we have excluded the single Jupiter-sized planets that have cleared their systems through migration. Since these stars are currently assumed to have zero planets in this paper, inclusion of these additional planets would increase the *κ* value. We would expect our *λ* parameter to slightly decrease with the inclusion of these additional singles, as this value only considers systems that do harbor planets; overall, the increase in *κ* will dominate, leading to an overall increase in $\langle N_{pl}\rangle$. Previous studies have averaged over multiplicity and inferred the $\langle N_{pl}\rangle$ value alone. These values are more difficult to compare, as they are strongly dependent on the range of planet radius and period included in each study. Looking at short period (*p* < 50 days) planets, Youdin (2011) found $\langle N_{pl}\rangle = 1.36$. Using our population parameters and making similar cuts, we find a comparable value ($\langle N_{pl}\rangle = 1.34 \pm 0.06$). Turning the focus towards small planets (0.75 < *r* < 2.5 $r_\oplus$) and long periods (50 < *p* < 300 days), Burke et al. (2015) found $\langle N_{pl}\rangle = 0.73^{+0.19}_{-0.07}$. When we apply these same bounds to our model we again find a slightly larger value ($\langle N_{pl}\rangle = 1.15 \pm 0.03$).
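The quoted global average follows from the two fit parameters alone, which makes for a quick consistency check:

```python
# Global average <N_pl> = lambda * kappa from the best-fit modified Poisson
# parameters; the product is 5.88, consistent with the quoted 5.86 +/- 0.18
# (the small offset reflects averaging over the posterior rather than
# multiplying the point estimates).
lam, kappa = 8.40, 0.70
n_pl = lam * kappa
```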
The most comparable parameter space to our study is that of Traub (2016), who finds $\langle N_{pl}\rangle = 5.04 \pm 0.23$ using a nearly identical parameter range. While there appears to be a mild tension with this value, we note that Traub (2016) includes a much broader stellar temperature range and pre-*Gaia* radius measurements, likely leading to this deflated $\langle N_{pl}\rangle$ value. With this function in hand, we can extrapolate to higher multiplicities. For example, our model suggests that 32.3 ± 2.7% of GK stars will harbor at least 8 planets within the *Kepler* parameter space. In the parameter space of the *Kepler* survey, our solar system has two planets (Venus and Earth); the radius of Mercury (0.387 $r_\oplus$) falls slightly below our range. Since we find $\langle N_{pl}\rangle = 5.86 \pm 0.18$ planets per solar-like star in this range, it appears that our system is underpopulated relative to most other systems within *p* < 500 days. We would expect 30 ± 1% of systems to harbor zero planets, 4.0 ± 4.6% to harbor just one planet, and 2.0 ± 4.2% to harbor only two planets within the range of this study. This lack of multiplicity in our solar system could be important for habitability, but such claims still lack strong evidence.

### 8.4 *Kepler* Dichotomy

Analysis of the statistics of the *Kepler* multiple planet systems (Lissauer et al., 2011; Fang & Margot, 2012; Hansen & Murray, 2013; Ballard & Johnson, 2016) suggests that the underlying planetary population requires a two-component model. One component is composed of systems with high planet multiplicity and a low inclination dispersion, while the other requires either low intrinsic multiplicity or a large inclination dispersion to reduce the frequency of transits by multiple planets. This has been termed the *Kepler* dichotomy. Lissauer et al.
(2011) inferred that the two populations had roughly equal frequencies, and subsequent analyses confirmed this. Several models have been proposed to explain this on dynamical grounds (Johansen et al., 2012; Moriarty & Ballard, 2016; Hansen, 2017). The simplest solution is to consider a single population of planets in which some fraction have experienced excitation of their mutual inclinations. However, to meet the requirements of the transit statistics, the excitation must be sufficiently large that dynamical stability is hard to maintain (Hansen, 2017). Thus, the *Kepler* results seem to imply the existence of a low multiplicity population of planetary systems, whether due to formation or later dynamical instability. However, this finding rests on the relative frequencies of systems with single transiting planets versus multiple transiting planets. If the completeness is a function of the detection order, this may weaken the claim for a *Kepler* dichotomy. In Figure 6 we show that a single Poisson distribution can account for the multiplicity probabilities ($F_m$) extracted from our analysis. We find a much smaller fraction of intrinsically single systems than Fang & Margot (2012), and a distribution broadly similar to the model for a single, dynamically motivated population described in Hansen & Murray (2013). However, we still find that ∼6% of stars harbor intrinsically single or double planet systems. To test the robustness of this low multiplicity contribution, we forward model the inferred population using the Poisson multiplicity model.
In Table 4, under the label ‘‘Multiple Detection Efficiency (Model)’’, we present the multiplicity results of this model. We can see that almost all of the empirical population falls within 1*σ* of the multiplicity model. This indicates that the apparent deviations in our inferred $F_m$ values can be described by statistical fluctuations in the population. Additionally, our $F_m$ values are strongly dependent on the choice of mixture values displayed in Table 3. A proper accounting of these values would require distribution dependence; averaging over these parameters, as done here, can cause mild deviations in the inferred $F_m$ values. In extracting the population $F_m$ values, we have only employed a mild Rayleigh distribution to account for the mutual inclination of each system, following Fang & Margot (2012), with no larger inclination component. It appears that accounting for the systematic loss of planets at higher multiplicity substantially reduces the low multiplicity population inferred under the *Kepler* dichotomy. We shall now discuss how this works. Using the forward model presented in Section 8.1, we look at how the inclusion of detection efficiency affects the gap seen between systems with one transiting planet and those with two transiting planets.
The population provided by the parameters in Figure 8 is modeled 20 times and the median from each group is recorded in Table 4. Using our population parameters and a mild mutual inclination model, we show that this anomaly is largely due to *Kepler* detection efficiency. Table 4 shows how the frequency of detected systems of different transit multiplicity changes as we include different systematic effects. In the first column, we include only the correction of the probability of transit due to geometric alignment. For a simple numerical comparison, this results in a ratio of double transit to single transit systems of 0.37, to be compared to the observed value of 0.21 (the rightmost column). The inclusion of a small mutual inclination dispersion, comparable to that of Fang & Margot (2012), does not improve the ratio (second column). In the third column, we show the model in which we include the completeness corrections from Christiansen (2017) without the multiplicity treatment discussed here. This results in a partial improvement of the ratio, to 0.25. It is also notable that the number of expected high transit multiplicity systems drops significantly with the inclusion of this effect.
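The quoted double-to-single ratios follow directly from the counts in Table 4:

```python
# Double-to-single ratios quoted in the text, recomputed from Table 4 counts.
geometric_ratio = 686 / 1870    # geometric transit probability only -> ~0.37
single_eff_ratio = 397 / 1558   # adding single-planet completeness  -> ~0.25
observed_ratio = 346 / 1637     # the real Kepler sample             -> ~0.21
```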
Finally, in the fourth and fifth columns, we show the expected numbers including the full, multiplicity-dependent completeness correction discussed here (Section 6.2). We find that the expected numbers for the different transit multiplicities are now very well matched to the observed numbers, substantially weakening the need for an additional population to explain the observations. The ultimate reason for this is that high transit multiplicity systems usually contain several planets that lie in the low MES region of parameter space, so that the incompleteness (especially when including the detection order effects) knocks planets down the multiplicity scale, resulting in many single transit systems that, in an ideal world, would show two or three transiting planets. Furthermore, the improved stellar radius measurements from *Gaia* suggest that many stars have larger radii than previously believed (Berger et al., 2018). Increasing the stellar radius of a system decreases the probability of detection for an exoplanet; this correction will, overall, increase the inferred occurrence measurements. It is important to remember that our dataset does not include single hot Jupiter planets, as discussed in Section 3. This observed population of 120 planets does not follow our power-law trend and appears to be uniquely single (Steffen et al., 2012). While these outliers do provide some type of population dichotomy, their presence is not the most prominent cause of the excess of singles.
Our extracted population parameters $F_1$ and $F_2$ indicate that 4.0 ± 4.6% of the underlying population does have only one planet, and that this contribution can be described by the modified Poisson distribution used to fit the higher multiplicity systems. There is dynamical evidence that single transiting systems are more dynamically excited than multiple systems (Morton & Winn, 2014; Xie et al., 2016; Van Eylen et al., 2018b), and this is consistent with the notion that some fraction of compact planetary systems are dynamically perturbed by the existence of giant planets on larger scales. Previously, Hansen (2017) found that explaining the original excess of single transits required a frequency of giant planets on large scales that was roughly double that found by radial velocity surveys. The reduction found here substantially alleviates that discrepancy. Other recent work also supports the notion that single transiting systems are drawn from the same underlying planetary population as systems harboring multiple planets. Weiss et al. (2018) find that both populations share essentially the same stellar and planetary properties, while Zhu et al. (2018) use transit timing variations to infer that there is a strong correlation between multiplicity and dynamical excitation. They reject the notion that this is driven by giant planet excitation because they see no correlation with the metallicity of the host star, but such a correlation would be difficult to see at the 4% level found here. This is further supported by Munoz Romero & Kempton (2018), who find no metallicity difference between hosts of single and multiple transiting systems, but whose data could easily accommodate mixtures at the 50% level.

### 8.5 Considering Eccentricity

Figure 7: A CDF showing the retrieved eccentricities from our forward modeling pipeline.
The red line illustrates the underlying Beta distribution from which the eccentricities were drawn (Kipping, 2013). The black line represents the empirical CDF of the detected single planet systems, and the blue line represents the eccentricities of the detected multi-planet systems.

Including eccentricity in our model increases the number of detected planets. We find that the best-fit multiplicity parameters are: $F_1 = 0.72 \pm 0.05$, $F_2 = 0.66 \pm 0.03$, $F_3 = 0.63 \pm 0.03$, $F_4 = 0.60 \pm 0.03$, $F_5 = 0.56 \pm 0.03$, $F_6 = 0.51 \pm 0.04$, and $F_7 = 0.43 \pm 0.07$. These parameters are fit using an analog of the Hansen & Murray (2013) eccentricity model. The original modified Gamma distribution (scale = 0.055) is unique to Hansen & Murray (2013); we map this model to a Beta distribution (*a* = 1.80 and *b* = 14.46), widely used among recent authors, for consistency. This model was inferred by simulating in situ gravitational assembly of planetary embryos and observing the resulting eccentricity population of the fully formed planets. Although derived within a specific scenario, this distribution matches well with a model in which planets explore the full range of available phase space subject to the constraint of dynamical stability (Tremaine, 2015). As such, it represents a plausible description of the level of eccentricity to be expected in such systems. The average eccentricity of this population is ⟨*e*⟩ = 0.11. Comparing these values to those of our base model, we find that eccentricity flattens the CDF of planet multiplicity, slightly decreasing $\langle N_{pl}\rangle$ to 5.69 ± 0.17 planets. Recently, Van Eylen et al. (2018b) provided evidence for two distinct populations of eccentricity (multi-planet systems and single planet systems). Using our forward modeling software ([ExoMult](https://github.com/jonzink/ExoMult)), we test the strength of this hypothesis.
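The quoted mean eccentricities follow from the standard Beta-distribution mean, $\langle e\rangle = a/(a+b)$, applied to the parameter pairs discussed in this section:

```python
# Mean eccentricity <e> = a / (a + b) for the Beta eccentricity models
# discussed here: the mapped Hansen & Murray model and the Kipping (2013)
# RV-based model.
def beta_mean(a, b):
    return a / (a + b)

e_hansen_murray = beta_mean(1.80, 14.46)  # ~0.11
e_kipping = beta_mean(0.867, 3.03)        # ~0.22
```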
Implementing only one true underlying eccentricity model, we inspect the detected eccentricity populations of both the single and multi-planet systems. When tested with the Hansen & Murray (2013) model (⟨*e*⟩ = 0.11), we find no significant difference between the observed eccentricities of multi-planet and single planet systems. This indicates that the differences noted by Van Eylen et al. (2018b) may be real. However, Van Eylen et al. (2018b) suggest a Beta distribution for single planet systems with ⟨*e*⟩ = 0.26, a significantly larger average eccentricity than expected from the Hansen & Murray (2013) model. When larger eccentricities are tested, we do find observable differences between the single and multi-planet systems. The Kipping (2013) model (*a* = 0.867 and *b* = 3.03) was calculated using radial velocity discoveries and contains a significant fraction of massive planets. This distribution is probably too eccentric (⟨*e*⟩ = 0.22) for the tightly packed systems discussed here, but it illustrates the effects of detection bias on the eccentricity population. In Figure 7 we present the results of our test of the Kipping (2013) model. We find that multi-planet systems tend to produce more low eccentricity detections than single planet detections, despite being drawn from the same underlying population. Analyzing the statistical difference with an *Anderson-Darling* test produces a P-value of $10^{-7}$, suggesting these differences would appear statistically significant. Furthermore, we can see that neither of the detected populations closely mimics the true Beta distribution, highlighting the importance of considering detection efficiency when performing eccentricity occurrence measurements.
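A sketch of this kind of two-sample comparison is below. The paper uses an Anderson-Darling test on the detected populations; as a lightweight stdlib stand-in, this computes a two-sample Kolmogorov-Smirnov statistic, and the "single" and "multi" samples here are synthetic draws from the two Beta models quoted above (standing in for the biased detected populations, not reproducing the paper's pipeline output).

```python
# Two-sample KS statistic (a stand-in for the Anderson-Darling test used in
# the text) between synthetic eccentricity samples drawn from the two Beta
# models: a high-<e> "singles-like" draw and a low-<e> "multis-like" draw.
import random

def ks_two_sample(x, y):
    """Maximum distance between the empirical CDFs of samples x and y."""
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    for v in xs + ys:
        fx = sum(1 for t in xs if t <= v) / len(xs)
        fy = sum(1 for t in ys if t <= v) / len(ys)
        d = max(d, abs(fx - fy))
    return d

rng = random.Random(42)
singles = [rng.betavariate(0.867, 3.03) for _ in range(500)]   # <e> ~ 0.22
multis = [rng.betavariate(1.80, 14.46) for _ in range(500)]    # <e> ~ 0.11
d_stat = ks_two_sample(singles, multis)
```

With samples of this size and these means, the statistic is well above the level expected for draws from a common distribution.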
This effect is caused by the increased transit duration of higher eccentricity transits. Increasing the transit duration improves the planet MES, making the signal easier to detect. Since the highest MES planets are the most likely to be detected, this biases the empirical population toward higher eccentricity. The sorting order, in combination with the multiplicity detection efficiency of the *Kepler* pipeline, further exaggerates this bias in the single planet systems. It is clear that low eccentricity distributions are less affected by this bias. Manually tuning the Beta distribution, we find that models with ⟨*e*⟩ ≥ 0.18 will produce statistically significant (P-value ≤ 0.001) differences between the empirical eccentricity populations of single and multiple planet systems. Since Van Eylen et al. (2018b) suggest a ⟨*e*⟩ = 0.26 model for the singles and a ⟨*e*⟩ = 0.05 model for the multi-planet systems, it is difficult to determine the effect of detection bias on their eccentricity model. At this point we cannot rule out that two distinct eccentricity populations exist between the single and multi-planet systems, but we propose that such claims require further evidence.

### 8.6 Extrapolation to Longer Periods

As mentioned above, our general population parameters do not differ greatly from those of previous studies. The quantity $\Gamma_\oplus$ is often quoted to avoid any need for understanding the habitable zone or the habitable radius range:

$$\frac{dN}{d\ln p_{\oplus}\; d\ln r_{\oplus}} = \Gamma_{\oplus} \qquad (31)$$

We find $\Gamma_\oplus = 1.31 \pm 0.07$, consistent with the previous value of Burke et al. (2015) ($\Gamma_\oplus = 0.6$ with a range of 0.04 to 11.5). Youdin (2011) found a much higher value of $\Gamma_\oplus = 2.75 \pm 0.3$ when extrapolating from periods < 50 days; the lack of long period planets there provided a weaker power-law constraint, producing the inflated $\Gamma_\oplus$ value. Furthermore, we find tension with Foreman-Mackey et al.
([2014](#bib.bib24)) ($\Gamma_{\oplus} = 0.019^{+0.019}_{-0.010}$). Foreman-Mackey et al. ([2014](#bib.bib24)) avoid the assumption of a particular functional form for the extrapolation to longer periods by using a Gaussian process regression to determine the shape of the distribution. However, they use the results of the *TERRA* pipeline in its original form, in which only the highest signal-to-noise candidate around each star was reported. Although they back out an estimate of the detection efficiency from the results of Petigura et al. ([2013b](#bib.bib47)), we have shown in Section [7](#S7) that detection order can bias the results. In particular, we expect Foreman-Mackey et al. ([2014](#bib.bib24)) to undercount small planets and long-period planets. Both of these biases will lower the *Γ*⊕ value, and we should regard the Foreman-Mackey et al. ([2014](#bib.bib24)) result as a lower limit. For the occurrence of habitable planets we follow the procedure provided by Burke et al. ([2015](#bib.bib7)). This *ζ*⊕ value is found by integrating the population distribution over a region extending 20% from *r*⊕ and *p*⊕ in both directions. We find *ζ*⊕ = 0.217 ± 0.014 using our inferred population parameters, similar to the *ζ*⊕ = 0.10 (with a range of 0.01 to 2) found in Burke et al. ([2015](#bib.bib7)).

9 Conclusion
------------

We present a new method for determining the frequency of exoplanet multiplicity within the *Kepler* dataset. In doing so we provide the following new fitting features and conclusions:

1. Previous studies have discussed and provided methods for calculating high-multiplicity transit probabilities (Ragozzine & Holman [2010](#bib.bib49); Brakensiek & Ragozzine [2016](#bib.bib10); Read et al. [2017](#bib.bib50)). For occurrence calculations these procedures are often too complex and computationally expensive to carry out.
We provide a new method which marginalizes over mutual inclination and the empirical *Kepler* period set to determine the transit probabilities for *Kepler* multi-planet systems. Using this, we provide the transit probabilities for multiple systems containing up to 7 planets. This simplification is important and useful when trying to fit multiplicity parameters via MCMC or some other fitting method that requires 10⁴ calculations. Our method does make some simplifying assumptions in the interest of speed. We assume the measurements of planet radius and period are perfect. The uncertainty in period is negligible; however, the radius measurements retain significant uncertainty, and the present dispersion may yet mask finer features in the distribution. In accounting for mutual inclination, we adopt the model provided by Fang & Margot ([2012](#bib.bib22)). This is derived using a different multiplicity model than that found here. All orbits are assumed to be circular in our base model. Because many of the systems are very compact, circular orbits are required for any type of stability. Tidal circularization will also force many of these planets into circular orbits. However, it is possible that some portion of the population investigated here contains varying amounts of eccentricity. We show that any amount of eccentricity will increase the overall multiplicity values, but decrease the fraction of systems with planets. We have assumed the appropriate model for exoplanet occurrence is a broken power-law. Furthermore, we assume period and radius are uncorrelated. It has been shown by Owen & Wu ([2013](#bib.bib45)) and Weiss et al. ([2017](#bib.bib61)) that a mild correlation exists between period and radius at short periods where photoevaporation can take effect. Nevertheless, the fact that our forward modeling matches the data inspires confidence that the model provides a coherent description of the data.
2. In systems with more than one detected planet, we find that detection efficiency decreases for higher detection order planets. This conclusion was achieved by re-visiting the Christiansen ([2017](#bib.bib16)) injections and looking at systems with pre-existing planets. Multi-planet systems experience an additional loss, for lower-MES planets within each system, of at least 5.5% and 15.9% for periods <200 days and >200 days, respectively. This type of increased selection effect indicates that a larger fraction of the population is being missed. Being able to infer a larger population of multiple-exoplanet systems significantly decreases the gap between single and double planet systems. The initial motivation for additional detection efficiencies for multi-planet systems was the 61 known KOIs lost during the Christiansen ([2017](#bib.bib16)) injections. When testing our additional selection effects for multiples, we expect 41 ± 7 planets to be lost in a similar type of injection test. Because we find that 61 KOIs are lost (rather than 41), we suspect higher order detection efficiencies may be necessary for an accurate accounting of the true underlying populations.

3. Using Bayesian statistics, we expand the Poisson process likelihood to account for variations in detection order. Furthermore, we are able to infer population multiplicity from this fitting process. The results from this fit match those of Burke et al. ([2015](#bib.bib7)), but provide an improved measurement with reduced uncertainty from *Gaia*, CKS, and asteroseismology (Petigura et al., [2017](#bib.bib48); Johnson et al., [2017](#bib.bib34); Berger et al., [2018](#bib.bib4); Van Eylen et al., [2018a](#bib.bib59)). Furthermore, by looking at the occurrence of single and double-planet systems, we only find a 0.9*σ* difference between these two populations (4.0 ± 4.6%).
This disparity can be explained by a modified Poisson distribution with *λ* = 8.40 ± 0.31 and *κ* = 0.70 ± 0.01, indicating that the *Kepler* Dichotomy (discussed by Lissauer et al. [2011](#bib.bib36); Fang & Margot [2012](#bib.bib22); Hansen & Murray [2013](#bib.bib29); Ballard & Johnson [2016](#bib.bib1)) may largely be an artifact of detection efficiency and statistical fluctuation. Using a Poisson process likelihood requires that each planet is drawn independently, which is clearly not the case for planets in multiple systems. Much of the work in this study is accounting for these dependencies. Ignoring the independence requirement of the Poisson process could be suspect, but is again justified by the success of our forward model, where this assumption is not necessary. Correlations in radius between planets within a system have also not been accounted for within this study.

4. Given our inferred multiplicity model we can extrapolate to higher multi-planet systems. We find that 32.3 ± 2.7% of solar-like stars should contain at least 8 planets within 500 days. The existence of a single 7-planet system and a single 8-planet system (Kepler-90) indicates these systems should be rare but still detectable. We would expect to find <1 eight-planet system within the constraints of this study.

5. We introduce ExoMult (<https://github.com/jonzink/ExoMult>) and demonstrate that forward modeling a broken power-law distribution can still provide a reasonable model for the exoplanet population, despite growing evidence for a gap in the 1.5 − 2 *r*⊕ range (Fulton et al., [2017](#bib.bib26); Berger et al., [2018](#bib.bib4); Weiss et al., [2018](#bib.bib62)). We find that our fitting model also produces similar populations of multiplicity to that of the empirical *Kepler* data set, indicating the success of this method.
6. Using the eccentricity model of Hansen & Murray ([2013](#bib.bib29)), we show that eccentricity can affect the multiplicity occurrence by slightly decreasing the expected number of planets around each star. We also find that for eccentricity models with ⟨*e*⟩ ≥ 0.18 the *Kepler* pipeline will significantly skew the empirical population of eccentricity for single transiting systems, suggesting that differences seen between the single and multiple planet systems may be artificial.

### 9.1 Future Goals

As mentioned previously, the uncertainties in the radius measurements are still quite large. Using a Bayesian hierarchical model, this uncertainty can be incorporated when fitting for population parameters (see Foreman-Mackey et al. [2014](#bib.bib24)). We hope to include this feature in our next generation of occurrence fitting. The multiplicity parameters derived here can be used in determining an Eta Earth measurement. The presence of neighboring planets could be essential for the long-term stability of an Earth analog (Horner et al., [2017](#bib.bib31)), thus it is important to understand the likelihood of such an Earth analog residing within a multiple system. The new detection efficiency is limited to *m* ≥ 2. Ideally, we would want the detection efficiency for each detection order. To do so, one would need to perform an alternative injection experiment, where numerous planets are injected into each system and the recovery of each order can be better sampled. It would also be useful to understand the effects of resonance on detection efficiency. Looking at a select group of stars and injecting many planets at various period ranges could provide an understanding of these features (as performed by Burke & Catanzarite [2017b](#bib.bib9)). With the end of the *Kepler* mission and the upcoming release of *TESS* data, it will be essential to combine data across missions to calculate a more robust occurrence measurement.
Doing so will require accounting for differing detection efficiencies across each mission. The method described here may provide a unique way of incorporating these different selection effects while producing a uniform population distribution.

Acknowledgement
---------------

We would like to thank the anonymous referee for useful feedback. The simulations described here were performed on the UCLA Hoffman2 shared computing cluster and using the resources provided by the Bhaumik Institute. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.

References
----------

- Ballard, S., & Johnson, J. A. 2016, ApJ, 816, 66
- Batalha, N. M., Rowe, J. F., Bryson, S. T., et al. 2013, ApJS, 204, 24
- Beaugé, C., & Nesvorný, D. 2012, ApJ, 751, 2
- Berger, T. A., Huber, D., Gaidos, E., & Van Saders, J. L. 2018, submitted, arXiv:1805.00231
- Borucki, W. J., et al. 2010, Science, 327, 977
- Borucki, W. J., et al. 2011, ApJ, 736, 19
- Burke, C. J., Christiansen, J. L., Mullally, F., et al. 2015, ApJ, 809, 8
- Burke, C., & Catanzarite, J. 2017a, NTRS, KSCI-19101-002
- Burke, C., & Catanzarite, J. 2017b, NTRS, KSCI-19109-002
- Brakensiek, J., & Ragozzine, D. 2016, ApJ, 821, 47
- Catanzarite, J., & Shao, M. 2011, ApJ, 738, 151
- Ciardi, D. R., Fabrycky, D. C., Ford, E. B., et al. 2013, ApJ, 763, 1
- Christiansen, J. L., Jenkins, J. M., Caldwell, D. A., et al. 2012, PASP, 124, 992
- Christiansen, J. L., Clarke, B. D., Burke, C. J., et al. 2013, ApJS, 207, 35
- Christiansen, J. L., et al. 2015, ApJ, 810, 95
- Christiansen, J. L. 2017, NTRS, KSCI-19110-001
- Claret, A., & Bloemen, S. 2011, A&A, 529, A75
- David, H. A., & Nagaraja, H. N. 2003, Order Statistics, ISBN 9780471722168
- Dressing, C. D., & Charbonneau, D. 2013, ApJ, 767, 95
- Dressing, C. D., & Charbonneau, D. 2015, ApJ, 807, 45
- Dong, S., & Zhu, Z. 2013, ApJ, 778, 53
- Fang, J., & Margot, J.-L. 2012, ApJ, 761, 92
- Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306
- Foreman-Mackey, D., Hogg, D. W., & Morton, T. D. 2014, ApJ, 795, 64
- Fressin, F., et al. 2013, ApJ, 766, 81
- Fulton, B. J., Petigura, E. A., et al. 2017, AJ, 154, 109
- Goodman, J., & Weare, J. 2010, Communications in Applied Mathematics and Computational Science, 5, 65
- Hansen, B. M. S. 2017, MNRAS, 467, 1531
- Hansen, B. M. S., & Murray, N. 2013, ApJ, 775, 53
- Howard, A. W., et al. 2012, ApJS, 201, 15
- Horner, J., Gilmore, J. B., & Waltham, D. 2017, arXiv:1708.03448
- Hsu, D. C., Ford, E. B., Ragozzine, D., & Morehead, R. C. 2018, AJ, 155, 205
- Johansen, A., Davies, M. B., Church, R. P., & Holmelin, V. 2012, ApJ, 758, 39
- Johnson, J. A., Petigura, E. A., Fulton, B. J., et al. 2017, AJ, 154, 108
- Kipping, D. M. 2013, MNRAS, 434, L51
- Lissauer, J. J., et al. 2011, ApJS, 197, 8
- Mathur, S., Huber, D., Batalha, N. M., et al. 2017, ApJS, 229, 30
- Millholland, S., Wang, S., & Laughlin, G. 2017, ApJL, 849, L33
- Moriarty, J., & Ballard, S. 2016, ApJ, 832, 34
- Morton, T. D., Bryson, S. T., Coughlin, J. L., et al. 2016, ApJ, 822, 86
- Morton, T. D., & Winn, J. N. 2014, ApJ, 796, 47
- Mullally, F., Coughlin, J. L., Thompson, S. E., et al. 2015, arXiv:1502.02038
- Mulders, G. J., Pascucci, I., & Apai, D. 2015, ApJ, 798, 112
- Munoz Romero, C. E., & Kempton, E. M.-R. 2018, AJ, 155, 134
- Owen, J. E., & Wu, Y. 2013, ApJ, 775, 105
- Petigura, E. A., Howard, A. W., & Marcy, G. W. 2013a, PNAS, 110, 19273
- Petigura, E. A., Marcy, G. W., & Howard, A. W. 2013b, ApJ, 770, 69
- Petigura, E. A., Howard, A. W., et al. 2017, AJ, 154, 107
- Ragozzine, D., & Holman, M. J. 2010, ApJ, in press, arXiv:1006.3727
- Read, M. J., Wyatt, M. C., & Triaud, A. H. M. J. 2017, MNRAS, 469, 1
- Rowe, J. F., Bryson, S. T., et al. 2014, ApJ, 784, 1
- Schmitt, J. R., Jenkins, J. M., & Fischer, D. A. 2017, AJ, 153, 180
- Shallue, C. J., & Vanderburg, A. 2018, AJ, 155, 94
- Silburt, A., Gaidos, E., & Wu, Y. 2015, ApJ, 799, 180
- Steffen, J. H., Ragozzine, D., Fabrycky, D. C., et al. 2012, PNAS, 109, 21
- Thompson, S. E., Coughlin, J. L., Hoffman, K., et al. 2018, ApJS, 235, 38
- Traub, W. A. 2016, ApJ, submitted, arXiv:1605.02255
- Tremaine, S. 2015, ApJ, 807, 157
- Van Eylen, V., et al. 2018a, MNRAS, 479, 4
- Van Eylen, V., et al. 2018b, arXiv:1807.00549
- Weiss, L. M., Marcy, G. W., Petigura, E. A., et al. 2017, AJ, 155, 48
- Weiss, L. M., et al. 2018, arXiv:1808.03010
- Xie, J.-W., et al. 2016, PNAS, 113, 11431
- Youdin, A. N. 2011, ApJ, 742, 38
- Zhu, W., Petrovich, C., Wu, Y., Dong, S., & Xie, J. 2018, ApJ, 860, 101

Figure 8: The posterior distributions for the 13 parameters varied in this study, obtained using a burn-in of 100,000 steps and 20,000 steps to sample the posterior. The results of the fit are presented above the marginalized distribution of each parameter. The uncertainty is presented with a 68.3% confidence interval.
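The geometric transit-probability marginalization summarized in conclusion 1 can be illustrated with a brute-force toy model (this is not the method used in the paper): draw an isotropic inclination for the inner orbit, tilt the outer orbit by a Rayleigh-distributed mutual inclination, and count transits. All numerical values below (the a/R* ratios, the Rayleigh width, the seed) are hypothetical.

```python
import math
import random

random.seed(7)

def transits(a_over_rstar, inc):
    """A transit occurs when the impact parameter b = (a/R*)|cos i| < 1."""
    return a_over_rstar * abs(math.cos(inc)) < 1.0

# Hypothetical two-planet system: semi-major axes in units of the stellar radius.
a1, a2 = 20.0, 35.0
sigma = math.radians(1.5)  # assumed Rayleigh width of the mutual inclination

n = 200_000
hit1 = hit2 = hit_both = 0
for _ in range(n):
    # Isotropic viewing geometry: cos(i) uniform on (0, 1].
    i1 = math.acos(1.0 - random.random())
    # Outer orbit tilted from the inner one by a Rayleigh-distributed angle
    # (inverse-CDF sampling: d = sigma * sqrt(-2 ln U)).
    di = sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))
    i2 = i1 + random.choice((-1.0, 1.0)) * di
    t1, t2 = transits(a1, i1), transits(a2, i2)
    hit1 += t1
    hit2 += t2
    hit_both += t1 and t2

p1, p2, p_both = hit1 / n, hit2 / n, hit_both / n
print(f"P(inner transits) = {p1:.4f}  (geometric R*/a = {1 / a1:.4f})")
print(f"P(outer transits) = {p2:.4f}  (geometric R*/a = {1 / a2:.4f})")
print(f"P(both transit)   = {p_both:.4f}  vs. {p1 * p2:.4f} if independent")
```

With near-coplanar systems the joint transit probability is far larger than the product of the individual probabilities, which is exactly why single-planet search completeness cannot simply be reused for multi-planet systems.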
---
author:
- Mario DeFranco
title: 'On the Bernoulli Numbers via the Newton-Girard Identities'
---

Introduction {#Introduction}
============

The Bernoulli numbers $B_{k}$ for $k\geq 0$ are a sequence of rational numbers that appears in many areas of mathematics, from topology to number theory. See [@Mazur] for an overview of their significance. They are named after Jacob Bernoulli, who used them to calculate the power sums $$\sum_{n=1}^N n^k$$ in his book *Ars Conjectandi*, published posthumously in A.D. 1713. See [@Bernoulli] for an English translation. Seki Kowa is also credited with independently deriving these numbers (see [@Selin]). One of the well-known appearances of the Bernoulli numbers is in the evaluation of the Riemann zeta function at positive even integers: $$\label{zeta bernoulli} \zeta(2k) = \sum_{n=1}^\infty \frac{1}{n^{2k}} = (-1)^{k-1}\pi^{2k}\frac{2^{2k}B_{2k}}{2(2k)!}.$$ The values $\zeta(2k)$ were first evaluated by L. Euler in A.D. 1740 [@Euler]. The proof of equation \[zeta bernoulli\] traditionally given in the literature compares two different expansions of the function $\cot(x)$ (see [@Dwilewicz]). In this paper, we evaluate $\zeta(2k)$ another way, using the Newton-Girard identities. These identities are combinatorial relations between elementary symmetric functions and power-sum symmetric functions. Named after I. Newton and A. Girard, they appear in Newton’s *Arithmetica Universalis* [@Newton] of A.D. 1707 and Girard’s paper [@Girard] of A.D. 1629. Thus by equation \[zeta bernoulli\] our evaluation of $\zeta(2k)$ also provides formulas for the Bernoulli numbers. We describe this evaluation now. We define a sequence of positive integers $A_k$ for $k \geq 1$: $$1,1,10,945, 992250, 13575766050, 2787683360962500, 9732664704199465153125, \dots$$ and prove that $$\zeta(2k) = \frac{\pi^{2k}}{2} \frac{A_k }{ \prod_{i=1}^k (2i+1)!!}$$ where $$(2i+1)!!
= \prod_{j=1}^i (2j+1).$$ To obtain $A_k$, we first define polynomials $P_k(x)$ and then define $$A_k = P_k(k).$$ We list the first five translated polynomials: $$\begin{aligned} P_1(\frac{x}{2}+1-\frac{3}{2}) &= 1\\ P_2(\frac{x}{2}+2-\frac{3}{2}) &= 1\\ P_3(\frac{x}{2}+3-\frac{3}{2}) &= 7+x \\ P_4(\frac{x}{2}+4-\frac{3}{2}) &= 465 + 130 x + 10 x^2\\ P_5(\frac{x}{2}+5-\frac{3}{2})&= 360045 + 142695 x + 19845 x^2 + 945 x^3\\ & \vdots\end{aligned}$$ Note that these polynomials have positive coefficients and that the sequence $A_k$ also appears as the leading coefficients. We prove these properties in Section \[Pk\]. We define the $P_k(x)$ recursively by defining operators $\mathcal{B}_k$. These constructions naturally arise from the Newton-Girard identities applied to symmetric functions in variables $z_n$ specialized to $$z_n = \frac{1}{n^2}$$ (see Definition \[e p\]). In Section \[B\_k\], we present a combinatorial definition of $A_k$ as a sum over plane trees such that each term is positive. Combining this with our combinatorial evaluation [@DeFranco2] of $$\zeta (\{ 2\}^k) = \frac{\pi^{2k}}{(2k+1)!}$$ gives a combinatorial evaluation of $\zeta(2k)$.

The Newton-Girard Identities
============================

We first present definitions necessary to prove the Newton-Girard identities and our evaluation of $\zeta(2k)$. \[e p\] Let $z_1, z_2, ...$ be an infinite sequence of indeterminates. For integer $k\geq 0$, let $e_k$ denote the elementary symmetric function $$e_k = e_k(z_1,z_2,...) = \sum_{1 \leq n_1<n_2<...<n_k} \prod_{i=1}^k z_{n_i}$$ with $e_0=1$; and for $k\geq 1$, let $p_k$ denote the power-sum symmetric function $$p_k=p_k (z_1,z_2,...) = \sum_{n=1}^\infty z_n^k.$$ Let $e_{\mathrm{inc}}(k;j)$ denote the incomplete $k$-th elementary symmetric function $$e_{\mathrm{inc}}(k;j) = e_k(z_1, z_2, ... , z_{j-1}, z_{j+1}, ...).$$ Let $S_k$ denote the symmetric group on the set $\{1,2,...,k \}$.
For $\sigma \in S_k$, let $p_\sigma$ denote $$p_\sigma = \prod_{C \in \sigma} p_{|C|}$$ where $C$ denotes a cycle of $\sigma$ containing $|C|$ elements. For $|C|=n$, we say that $C$ has length $n$, or that $C$ is an $n$-cycle. We also let $\mathrm{sgn}(\sigma)$ denote the signature of the permutation $$\mathrm{sgn}(\sigma) = \prod_{C \in \sigma}(-1)^{|C|-1}.$$ We let $\overline{e}_k$ and $\overline{p}_k$ denote the specializations of these functions at $$z_n = \frac{1}{n^2}.$$ Define the linear operator $d_2$ by $$d_2(z_n) = z_n^2$$ and extend $d_2$ to act on monomials as a derivation. The next theorem is a well-known evaluation of the elementary symmetric function in terms of the power-sum symmetric functions. Our proof below is similar to that presented in [@DeFranco] applied to derivatives of the Gamma function, and to the one of K. Boklan [@Boklan]. \[cycle index\] $$e_k(z) = \frac{1}{k!}\sum_{\sigma \in S_k} \mathrm{sgn}(\sigma)p_\sigma$$ We use induction on $k$. The statement is true for $k=1$. Assume it is true for some $k \geq 1$. Then we obtain $e_{k+1}$ from $e_k$ by first multiplying $e_k$ by $p_1$: $$p_1 e_k = (k+1)e_{k+1}+ \sum_{j=1}^\infty z_j^2 e_{\mathrm{inc}}(k-1;j).$$ Now, since $$d_2(e_k)=\sum_{j=1}^\infty z_j^2 e_{\mathrm{inc}}(k-1;j),$$ we obtain $$\label{k+1 e} (p_1-d_2)e_k = (k+1)e_{k+1}.$$ Now we compute $\displaystyle (p_1-d_2)e_k $ another way. The action of $d_2$ on $p_n$ is $$d_2(p_n) = np_{n+1}.$$ We claim $$\label{Sk Sk+1} (p_1-d_2)\sum_{\sigma \in S_k} \mathrm{sgn}(\sigma)p_\sigma= \sum_{\sigma \in S_{k+1}} \mathrm{sgn}(\sigma)p_\sigma.$$ Let $\sigma \in S_k$. Multiplying by $p_1$ corresponds to adjoining the 1-cycle consisting of the element $k+1$ to $\sigma$. The action of $d_2$ on $p_\sigma$ corresponds to creating new permutations by adjoining $k+1$ to each cycle $C$ of $\sigma$; if $C$ is of length $n$, then there are $n$ ways to do this. 
Increasing the length of one cycle of $\sigma$ by 1 creates a new permutation with signature opposite to that of $\sigma$. This proves the claim. Using the induction hypothesis, equation \[Sk Sk+1\] implies $$\label{Sk+1} (p_1-d_2)e_k = \frac{1}{k!}\sum_{\sigma \in S_{k+1}} \mathrm{sgn}(\sigma)p_\sigma.$$ Combining equations \[k+1 e\] and \[Sk+1\] completes the induction step and proof. We next prove the Newton-Girard identities by partitioning the symmetric group. $$(-1)^{k-1}p_k=ke_k -\sum_{i=1}^{k-1} (-1)^{i-1}e_{k-i} p_i$$ We prove $$ke_k = \sum_{i=1}^{k} (-1)^{i-1}e_{k-i} p_i.$$ From Theorem \[cycle index\], this is equivalent to $$\sum_{\sigma \in S_k} \mathrm{sgn}(\sigma)p_\sigma = \sum_{i=1}^k (-1)^{i-1}(i-1)!{k-1 \choose i-1}p_i\sum_{\sigma \in S_{k-i}} \mathrm{sgn}(\sigma)p_\sigma.$$ On the right side, we interpret a term of the form $$p_i p_\sigma$$ for $\sigma \in S_{k-i}$ as corresponding to a permutation $\sigma' \in S_k$ such that the element $k$ is in an $i$-cycle $C$ of $\sigma'$, and $\sigma'=\sigma$ when restricted to the elements not in $C$. There are $\displaystyle {k-1 \choose i-1}$ ways to choose the elements that are in the cycle $C$ and $(i-1)!$ ways to construct the cycle. And $$\mathrm{sgn}(\sigma') = (-1)^{i-1}\mathrm{sgn}(\sigma).$$ This completes the proof.

The polynomials $P_k(x)$ {#Pk}
========================

Evaluating $\zeta(2k)$
----------------------

We have the well-known evaluation of $\overline{e}_k$: $$\overline{e}_k = \frac{\pi^{2k}}{(2k+1)!}.$$ See [@DeFranco2] for a combinatorial proof of this evaluation. Since $$\overline{p}_1 = \overline{e}_1,$$ we can thus use the Newton-Girard identities to successively solve for $\overline{p}_n$ in terms of the $\overline{p}_i$ for $i <n$ and the $\overline{e}_k$. We consider the partial sums in the Newton-Girard identities and prove a formula for them in Theorem \[Fnk\]. We define terms for that theorem next, including the recursive definition of the polynomials $P_k(x)$.
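Because the Newton-Girard identities are identities of symmetric functions, they can be spot-checked exactly on a finite set of variables. A small sketch (Python, exact rationals; the truncation at six variables is an arbitrary choice of ours):

```python
from fractions import Fraction
from itertools import combinations
from math import prod

# Finitely many variables z_n = 1/n^2, n = 1..6 (an arbitrary truncation).
z = [Fraction(1, n * n) for n in range(1, 7)]

def e(k):
    """Elementary symmetric function e_k of the variables z."""
    return sum((prod(z[i] for i in idx) for idx in combinations(range(len(z)), k)),
               start=Fraction(0))

def p(k):
    """Power-sum symmetric function p_k of the variables z."""
    return sum(zi ** k for zi in z)

# Newton-Girard: (-1)^(k-1) p_k = k e_k - sum_{i=1}^{k-1} (-1)^(i-1) e_{k-i} p_i
for k in range(1, 7):
    lhs = (-1) ** (k - 1) * p(k)
    rhs = k * e(k) - sum((-1) ** (i - 1) * e(k - i) * p(i) for i in range(1, k))
    assert lhs == rhs
print("Newton-Girard identities hold exactly for k = 1..6")
```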
For integer $n\geq 2$ and $k \geq 1$, define $F_n(k)$ by $$\begin{aligned} F_n(k) &= k\overline{e}_k -\sum_{i=1}^{n-1} (-1)^{i-1}\overline{e}_{k-i} \overline{p}_i \\ &= \frac{k\pi^{2k}}{(2k+1)!} -\sum_{i=1}^{n-1} (-1)^{i-1}\frac{\pi^{2k-2i}\zeta(2i)}{(2k-2i+1)!}.\end{aligned}$$ Define $$P_1(x) = 1$$ and for $k \geq 1$ $$\label{Pn+1} P_{k+1}(x) = \frac{P_k(k)(\prod_{i=1}^{k} (2x-2k+2i+1))-(\prod_{i=1}^{k} (2i+1))P_k(x) }{2x-2k}.$$ Note that $P_{k+1}(x)$ is a polynomial because the numerator of equation \[Pn+1\] vanishes at $x=k$. \[Fnk\] For integer $n\geq 2$ and $k \geq 1$, $$F_n(k) = (-1)^{n-1} \frac{\pi^{2k}}{2}P_n(k) \frac{\prod_{i=1}^n (2k-2i+2)}{(2k+1)!\prod_{i=1}^{n-1} (2i+1)!!}.$$ We use induction on $n$. We have from the evaluation of $\overline{e}_k$ that $$\overline{e}_1 = \overline{p}_1 = \zeta(2) = \frac{\pi^2}{3!}.$$ For $n=2$ we have $$F_2(k)=\frac{k\pi^{2k}}{(2k+1)!} - \frac{\pi^{2k}}{(2k-1)!3!} =-\pi^{2k}\frac{2k(2k-2)}{3!(2k+1)!}.$$ Since $P_2(k)=1$, this proves the statement for $n=2$. Assume the statement is true for some $n \geq 2$. Then this implies by the Newton-Girard identities that $$\overline{p}_n = (-1)^{n-1}F_n(n).$$ Thus $$\begin{aligned} F_{n+1}(k) &= F_n(k) -(-1)^{n-1}\overline{e}_{k-n}\overline{p}_n\\ &= F_n(k) -\pi^{2k-2n}\frac{F_n(n)}{(2k-2n+1)!}.\end{aligned}$$ Using the induction hypothesis, this becomes $$\begin{aligned} &(-1)^{n-1} \frac{\pi^{2k}}{2\prod_{i=1}^{n-1}(2i+1)!!}\\ &\times \left( \frac{(2n+1)!P_n(k)\prod_{i=1}^n (2k-2i+2) - P_n(n) (\prod_{i=1}^n 2i)\prod_{i=1}^{2n} (2k+1-i)}{(2k+1)!(2n+1)!}\right).\end{aligned}$$ The quantity in parentheses simplifies to $$\begin{aligned} &\frac{(\prod_{i=1}^n (2i)(2k-2i+2))(2k-2n)}{(2n+1)!(2k+1)!} \left( \frac{P_n(k)\prod_{i=1}^n (2i+1) - P_n(n)\prod_{i=1}^n (2k-2n+2i+1) }{2k-2n}\right)\\ =& \frac{\prod_{i=1}^n (2k-2i+2)}{(2n+1)!!(2k+1)!} (-P_{n+1}(k)).\end{aligned}$$ Putting this together proves the induction step. This completes the proof.
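The recursion defining $P_{k+1}(x)$ can be implemented directly with exact rational arithmetic. A sketch (Python; the dense coefficient-list representation and helper names are ours) reproducing the sequence $A_k$ and the values $\zeta(2)=\pi^2/6$, $\zeta(4)=\pi^4/90$, $\zeta(6)=\pi^6/945$:

```python
from fractions import Fraction
from math import prod

def poly_mul(a, b):
    """Multiply polynomials stored as ascending coefficient lists."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def poly_eval(poly, x):
    return sum(c * Fraction(x) ** i for i, c in enumerate(poly))

def next_P(P_k, k):
    """One step of the recursion: build P_{k+1} from P_k."""
    c = poly_eval(P_k, k)                         # P_k(k) = A_k
    odd = prod(2 * i + 1 for i in range(1, k + 1))
    # numerator = P_k(k) * prod_{i=1}^{k}(2x - 2k + 2i + 1) - odd * P_k(x)
    num = [Fraction(1)]
    for i in range(1, k + 1):
        num = poly_mul(num, [Fraction(2 * i + 1 - 2 * k), Fraction(2)])
    num = [c * t for t in num]
    for i, t in enumerate(P_k):
        num[i] -= odd * t
    # Exact division by (2x - 2k): synthetic division by (x - k), then halve.
    q = [Fraction(0)] * (len(num) - 1)
    q[-1] = num[-1]
    for i in range(len(num) - 2, 0, -1):
        q[i - 1] = num[i] + k * q[i]
    assert num[0] + k * q[0] == 0                 # numerator vanishes at x = k
    return [t / 2 for t in q]

P = {1: [Fraction(1)]}
for k in range(1, 8):
    P[k + 1] = next_P(P[k], k)

A = [int(poly_eval(P[k], k)) for k in range(1, 9)]
print(A[:6])   # [1, 1, 10, 945, 992250, 13575766050]

def double_fact(m):
    """Odd double factorial m!! = m(m-2)...3*1."""
    return prod(range(m, 0, -2))

# zeta(2k) = (pi^(2k)/2) * A_k / prod_{i=1}^{k} (2i+1)!!
for k, target in [(1, Fraction(1, 6)), (2, Fraction(1, 90)), (3, Fraction(1, 945))]:
    denom = 2 * prod(double_fact(2 * i + 1) for i in range(1, k + 1))
    assert Fraction(A[k - 1], denom) == target    # zeta(2k) / pi^(2k)
```

For $k=3$ this gives $\zeta(6) = \pi^6 \cdot 10 / (2 \cdot 3 \cdot 15 \cdot 105) = \pi^6/945$, as expected.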
For integer $k \geq 1$, $$\zeta(2k) = \frac{\pi^{2k}}{2} \frac{P_k(k)}{\prod_{i=1}^{k}(2i+1)!!}$$ We have by the Newton-Girard identities for $k\geq 2$ $$\overline{p}_k = (-1)^{k-1}F_k(k).$$ We then evaluate $F_k(k)$ using the theorem. We check that the statement is also true for $k=1$. This completes the proof. The operators $\mathcal{B}_k$ {#B_k} ----------------------------- The recursive definition of $P_k(x)$ motivates the following definition of the operator $\mathcal{B}_k$. For integer $k \geq 1$ and a function $f(x)$, define the operator $\mathcal{B}_k$ by $$\mathcal{B}_k(f)(x) = \frac{f(k)(\prod_{i=1}^k (2x-2k+2i+1)) - f(x)\prod_{i=1}^k (2i+1)}{2x-2k}.$$ We thus can define the $P_k(x)$ by $$P_{k+1}(x) = \mathcal{B}_k \mathcal{B}_{k-1} ... \mathcal{B}_1 (1).$$ \[u ai\] Let $u$ and $a_i$ for $1 \leq i \leq k$ be indeterminates. Then $$\prod_{i=1}^k (u+a_i)-\prod_{i=1}^k a_i = u\sum_{j=1}^{k} ((\prod_{i=1}^{j-1} (u+a_i) )\prod_{i=j+1}^k a_i )$$ where we interpret an empty product to be equal to 1. We use induction on $k$. The statement is true for $k=1$. Assume it is true for some $k \geq 1$. Then we have $$\begin{aligned} \prod_{i=1}^{k+1} (u+a_i) &= (u+a_{k+1})\prod_{i=1}^{k} (u+a_i) \\ &=u\prod_{i=1}^{k} (u+a_i)+ a_{k+1}\left(u\sum_{j=1}^{k} ((\prod_{i=1}^{j-1} (u+a_i) )\prod_{i=j+1}^{k} a_i )+ \prod_{i=1}^{k} a_i \right)\\ &=u\prod_{i=1}^{k} (u+a_i)+ \left(u\sum_{j=1}^{k} ((\prod_{i=1}^{j-1} (u+a_i) )\prod_{i=j+1}^{k+1} a_i )+ \prod_{i=1}^{k+1} a_i \right)\\ &= \left(u\sum_{j=1}^{k+1} ((\prod_{i=1}^{j-1} (u+a_i) )\prod_{i=j+1}^{k+1} a_i ) \right)+ \prod_{i=1}^{k+1} a_i.\end{aligned}$$ This proves the induction step and completes the proof. Next we define terms necessary to state Lemma \[B action\]. For an integer $k \geq 1$, let $R(k)$ denote the set $$R(k) = \{3,5,7, ... , 2k+1 \}$$ with $R(0) = \emptyset$. For a set $S$ of integers and an integer $m$, let $S+m$ denote the set $$\bigcup_{s\in S} \{s+m\}$$ where $S+m= \emptyset$ if $S=\emptyset$. 
Given $k\geq 2$, suppose $S$ is a set of integers such that $$S \subset R(k-2).$$ Let $j$ be an integer $0 \leq j \leq k - 1-|S|$. Let $S_{\mathrm{low}}(j,k)$ denote the set consisting of the numbers in $S+2$ and the $j$ smallest numbers in $$R(k-1) - (S+2)$$ with $S_{\mathrm{low}}(0,k) = S+2$. Let $$S_{\mathrm{high}}(j,k)$$ denote the set consisting of the numbers in $S+2$ and the $j$ highest numbers in $$(R(k-1)+2) - (S+2)$$ with $S_{\mathrm{high}}(0,k)= S+2$. Define for non-empty $S$ $$\Pi S = \prod_{s \in S} s$$ and for $S = \emptyset$ $$\Pi S =1.$$ Let $f_{S;k}(x)$ denote $$f_{S;k}(x)= \prod_{s \in S} (2x-2k+ s).$$ \[B action\] For $k \geq 2$, suppose $S \subset R(k-2)$. Then $$\mathcal{B}_k (f_{S;k-1})(x) = \sum_{j=0}^{k -1-|S|} (\Pi S_{\mathrm{high}}(k-1-|S|-j,k) )f_{S_{\mathrm{low}}(j,k); k}(x).$$ Applying the definition of $f_{S;k}(x)$ we have $$f_{S;k-1}(x) = f_{S+2;k}(x).$$ Then $$\begin{aligned} \mathcal{B}_k (f_{S+2;k})(x) &= \frac{f_{S+2;k}(k)(\prod_{i=1}^k (2x-2k+2i+1)) - f_{S+2;k}(x)\prod_{i=1}^k (2i+1)}{2x-2k}\\ &= \frac{(\prod_{s \in S+2} s(2x-2k+s)) \Big((\prod_{s \in R(k)-(S+2)} (2x-2k+s)) - \prod_{s \in R(k)-(S+2)} s\Big)}{2x-2k}. \end{aligned}$$ Now we apply Lemma \[u ai\] with $$u = 2x-2k$$ and $a_i$ the $i$-th smallest number in the set $$R(k) - (S+2)$$ for $1 \leq i \leq k-|S|$. This completes the proof. For integer $k \geq 2$, the polynomial $P_{k}(x)$ is a positive linear combination of functions of the form $f_{S;k-1}(x)$ where $S \subset R(k-2)$. We use induction on $k$. The statement is true for $k=2$ as $$P_2(x) = 1 = f_{\emptyset;1}(x).$$ The induction step follows from Lemma \[B action\]. \[positive coefficients\] The polynomial $P_k(x+k-\frac{3}{2})$ has positive coefficients in $x$. For $S \subset R(k-2)$, the function $f_{S;k-1}(x)$ is either $1$ or a product of factors of the form $$(2x-2k+m)$$ where $m \geq 3$. By the theorem, $P_k(x)$ is a positive linear combination of functions $f_{S;k-1}(x)$. Substituting $x \to x+k-\frac{3}{2}$ turns each such factor into $2x + (m-3)$ with $m-3 \geq 0$, so all coefficients are positive. This completes the proof.
We use Lemma \[B action\] to express $P_k(x)$ as a sum of positive terms over the set $\mathcal{T}_k$ of plane trees with $k$ vertices. To each tree $T$ we associate two finite sets of integers, $\mathrm{Low}(T)$ and $\mathrm{High}(T)$. For the trees $T$ consisting of one or two vertices, we set $$\mathrm{Low}(T) = \mathrm{High}(T)= \emptyset.$$ Suppose $T\in \mathcal{T}_k$ for $k \geq 3$ and let $v$ be the last vertex of $T$ traversed in the preorder. Say that $v$ is at the $i$-th level of $T$, where $i$ is the number of edges on the path between $v$ and the root. So $1 \leq i \leq k-1$. Let $T'$ denote $$T' = T \backslash v.$$ Then set $\mathrm{Low}(T) $ to be the set consisting of the elements in $\mathrm{Low}(T') +2$ and the $k-i-1$ smallest elements in $R(k-2) - (\mathrm{Low}(T')+2)$; and set $\mathrm{High}(T)$ to be the set consisting of the elements in $\mathrm{Low}(T')+2$ and the $i-1$ greatest elements in $(R(k-2)+2) - (\mathrm{Low}(T')+2)$. Now define the weight of $T$ to be $$\mathrm{wt}(T) = \mathrm{wt}(T') \Pi( \mathrm{High}(T))$$ with $\mathrm{wt}(T) = 1$ for $T \in \mathcal{T}_1$ or $\mathcal{T}_2$. Then $$\label{P tree} P_k(x) = \sum_{T \in \mathcal{T}_k} \mathrm{wt}(T) f_{\mathrm{Low}(T);k-1}(x)$$ and thus $$\label{A tree} A_k =P_k(k)= \sum_{T \in \mathcal{T}_k} \mathrm{wt}(T)\Pi(\mathrm{Low}(T)+2).$$ \[leading\] For integer $k \geq 2$, the leading coefficient of $P_k(x)$ is $$A_{k-1}2^{k-2}.$$ For $k \geq 2$, $P_k(x)$ has degree $k-2$. In the sum \[P tree\], the only trees that contribute a term of $x^{k-2}$ are those trees whose last vertex $v$ in the preorder is at level 1. For such trees $T$ $$\mathrm{Low}(T) = R(k-2) \text{ and } \mathrm{High}(T) = \mathrm{Low}(T')+2.$$ The leading coefficient of $P_k(x)$ is thus $$\begin{aligned} &2^{k-2} \sum_{T \in \mathcal{T}_k, \mathrm{level}(v)=1} \mathrm{wt}(T)\\ &=2^{k-2} \sum_{T' \in \mathcal{T}_{k-1}} \mathrm{wt}(T') \Pi(\mathrm{Low}(T')+2)\\ &=2^{k-2} A_{k-1} \end{aligned}$$ by formula \[A tree\]. This completes the proof.
*We can express the rational sequence $\displaystyle2\frac{\zeta(2k)}{\pi^{2k}} $ as a transform of the sequence $R = \{R_n\}_{n=1}^\infty$ given by $$R_n =2n+1, \,\,\,\, n\geq 1.$$ We write $$\label{zeta rational} 2\frac{\zeta(2k)}{\pi^{2k}} = \frac{\sum_{T \in \mathcal{T}_k} \mathrm{wt}_R(T)}{\prod_{j=1}^k \Pi R(j)}$$ where we define $R(k)$ as above, but for $\mathrm{wt}_R(T)$ we interpret the sets $\mathrm{Low}(T)$ and $\mathrm{High}(T)$ as subsets of $R$; for such a subset $S$ we write $$S = \{ R_{i_1},..., R_{i_n}\}.$$ We may then express the operation $S+2$ as $$S+2 = \{ R_{i_1+1},..., R_{i_n+1}\}.$$ The sequence \[zeta rational\] can thus be generalized by varying the sequence $R$.*

A recursive relation
--------------------

Next we prove a linear recursive relation among the coefficients of $P_k(x)$ in the basis $f_{R(n);k}(x)$. We prove the following lemma necessary for the recursion. \[2ni\] $$\prod_{i=1}^n (u+ 2i+3) = \sum_{i=0}^n 2^{n-i} \frac{n!}{i!} \prod_{j=1}^i (u+2j+1)$$ Evaluating at $u=-3$, we get that both sides are equal to $n!2^n$. Evaluating at $u = -2m-3$ for $1\leq m \leq n$, we get that the left side is 0 and that the right side is $$n! 2^n\sum_{i=0}^{m}(-1)^i {m \choose i} =0.$$ Both sides are polynomials in $u$ of degree $n$ that are equal at $n+1$ values of $u$. Therefore both sides are equal as polynomials. This completes the proof. For integer $k \geq 2$, let $$P_k(x) = \sum_{i=0}^{k-2} c_{i,k}\prod_{j=1}^{i} (2x-2k+2j+1)$$ with $$c_{0,2} = 1.$$ Then the coefficients $c_{i,k}$ satisfy $$c_{i,k+1} =(\prod_{j=i+1}^{k-1}(2j+3)) \sum_{n=0}^{i} (\prod_{j=1}^{n}(2j+1) )(\sum_{m=n}^{k-2}c_{m,k}2^{m-n} \frac{m!}{n!})$$ We have $$\prod_{j=1}^{i} (2x-2k+2j+1) = f_{R(i); k}(x).$$ Then the theorem follows directly from Lemmas \[B action\] and \[2ni\].

Further Work
============

- Use these formulas or others (such as the Euler zig-zag numbers) to show that $$\sum_{i=0}^n (-1)^i {n \choose i} \frac{\zeta(2k+2i)}{\pi^{i}}$$ is positive.
These expressions arise from the constants $$\sum_{n=1}^\infty \frac{e^{-\pi n^2}}{(\pi n^2)^k}$$ after expressing the exponential using the derangement numbers. These constants arise from expansions of the Riemann xi function. - Find eigenvectors of the operators $\mathcal{B}_k$. - Vary the sequence $R$ and see if the transforms have asymptotics or generating functions analogous to those of the Bernoulli numbers. - Recover the recurrence relation and generating function for the Bernoulli numbers from these formulas. - See if the proofs for the Newton-Girard identities using the symmetric group can be generalized to other Weyl groups.
--- abstract: 'Many of the current challenges in science and engineering are related to complex networks, and distributed multiagent network systems are currently the focal point of many new applications. Such applications relate to the growing popularity of social networks, the analysis of large network data sets, and the problems that arise from interactions among agents in complex political, economic, and biological systems. Despite extensive progress on stability analysis of conventional multiagent networked systems with weakly coupled state-network dynamics, most of the existing results fall short of addressing multiagent systems with highly coupled state-network dynamics. Motivated by numerous applications of such dynamics, in our previous work [@etesami2019simple], we initiated a new direction for stability analysis of such systems using a sequential optimization framework. Building upon that work, in this paper we complete our results by approaching multiagent network dynamics from a duality perspective, which allows us to view the network structure as dual variables of a constrained convex program. Leveraging this idea, we show that the evolution of the coupled state-network multiagent dynamics can be viewed as the iterates of a primal-dual algorithm for a static constrained optimization/saddle-point problem. This bridges the Lyapunov stability of state-dependent network dynamics and frequently used optimization techniques such as block coordinate descent, mirror descent, the Newton method, and the subgradient method. As a result, we develop a systematic framework for analyzing the Lyapunov stability of state-dependent network dynamics using well-known techniques from nonlinear optimization.'
author: - bibliography: - 'thesisrefs.bib' title: 'Duality and Stability in Complex Multiagent State-Dependent Network Dynamics' --- Lyapunov stability; multiagent systems; state-dependent network dynamics; saddle-point dynamics; block coordinate descent; Newton method; subgradient method; convex optimization. Introduction ============ Many of the current challenges in science and engineering are related to complex networks. These challenges may involve modeling the interactions of agents in complex networks, the establishment of stability in the agents’ interaction dynamics, and the design of efficient algorithms to obtain or approximate the equilibrium points. We can offer many motivating examples of relationships in political, social, and engineering applications that are governed by complex networks of heterogeneous agents. Agents may be strategic or the networks can be dynamic in the sense that they can vary over time depending on the agents’ states or decisions. The following are just a few examples that one can consider. *– Network security*: A basic task in network security is that of providing a mechanism for securing the operation of a set of networked heterogeneous agents (e.g., service providers, computers, or data centers) despite external malicious attacks (Figure \[Fig:security\]). One way of doing that is to incentivize the agents to invest in their security (e.g., by installing antivirus software) [@etesami2019dynamic; @grossklags2008secure]. However, since the agents are interconnected, the compromise of one agent may affect its neighbors, and such a failure can cascade over the entire network. As a result, the decision made by each agent on how much to invest in its own security level will indirectly affect all the others, and hence the connectivity structure of the network. Thus we face a highly dynamic network of heterogeneous agents where the agents’ states/decisions and the network structure are highly influenced by each other. 
*– Formation control*: A goal in formation control is to design a distributed protocol such that a set of agents (e.g., the aircraft in Figure \[Fig:aircraft\]) collectively form a certain structure and eventually accomplish a task [@bullo2009distributed]. Agents may have different communication capabilities and can only communicate with those in their local neighborhoods. Consequently, depending on the agents’ states (e.g., remaining power or relative positions), the communication network they share is subject to change. As a result, the agents’ states and the communication network are highly coupled and dynamically evolve based on each other. ![[]{data-label="Fig:security"}](Network-Security){width="\textwidth"} ![[]{data-label="Fig:aircraft"}](Drones){width="\textwidth"} ![[]{data-label="Fig:opinion"}](social-network){width="\textwidth"} *– Social networks*: In social networks, there are often clear affinities among people based on heterogeneous political or cultural beliefs that define an interaction network among them. However, on specific issues, alliances form among people from different groups. Almost every congressional vote provides an example of this phenomenon, wherein some representatives break away from their respective parties to vote with the other party [@hegselmann2002opinion] (Figure \[Fig:opinion\]). *– Stability of smart grids*: In the emerging smart grid, a significant amount of energy stems from renewable sources, electric vehicles, and storage units, many of which may be owned by consumers rather than utility companies. That phenomenon is turning every grid component into a *prosumer*: a joint producer and consumer of energy. Prosumers (agents) in the smart grid strategically interact with each other subject to power network constraints [@etesami2018stochastic]. In particular, depending on their states (e.g., energy consumption/production decisions) they may decide to buy/sell energy to different agents.
Thus, a major challenge is that of providing decentralized algorithms to stabilize the demand and response given that the structure of the agents’ interactions is a function of their own and their neighbors’ states/decisions. Motivated by the above and many other real applications, our objective in this paper is to provide a systematic approach for analyzing the stability and convergence of agents interacting over a rich dynamic network which may evolve or vary based on the agents’ states. To this end, we provide new connections between the analysis of multiagent network systems and well-developed techniques from the mature field of nonlinear programming. Utilizing such connections, we show how the Lyapunov stability of seemingly complex multiagent network dynamics can be analyzed using iterative optimization algorithms for finding a minimum or saddle-point of nonlinear functions. Related Works ------------- A general multiagent network problem involves a set $[n]=\{1,2,\ldots,n\}$ of agents (social individuals, grid prosumers, unmanned vehicles, etc.). At each time instance $k=0,1,2,\ldots$, there is an underlying network $\mathcal{G}_k=([n],\mathcal{E}_k)$ that determines the communication network shared by the agents. Here, $\mathcal{E}_k$ denotes the set of edges of the network at time $k$, which can be undirected or directed. The state of each agent $i\in[n]$ at time $k$ is given by a vector $x^k_i$, which evolves based on the interaction of agent $i$ with its neighbors. In particular, the overall state of the system at time $k+1$, denoted by $\boldsymbol{x}^{k+1}$, can be obtained by using a general update rule $\boldsymbol{x}^{k+1}=f_k(\boldsymbol{x}^k,\mathcal{G}_k), k=0,1,2,\ldots$, where $f_k(\cdot)$ can be a general time-varying function, depending on the problem setup, and captures the interaction laws among the agents.
Therefore, the main goal here is to understand whether the generated sequence of states $\{\boldsymbol{x}^k\}_{k=0}^{\infty}$ will converge (stabilize) to an equilibrium. That question has been the subject of much research effort, including work in distributed control and computation [@nedic2018network]. Unfortunately, despite enormous efforts in the area, the stability problem for such dynamics in its full generality is still far from solved. However, partial solutions to this problem under certain simplifying assumptions are known. For instance, there has been a rich body of literature on the analysis of multiagent network systems, mainly from the static point of view, in which a set of agents iteratively interact over a *fixed* network to achieve a certain goal, such as consensus or optimization of an objective function. The classical models of DeGroot [@degroot1974reaching] and Friedkin-Johnsen [@friedkin1997social] in the social sciences are two special types of such systems [@nedic2018network; @AVP-RT:17]. Below is a sample result in this area [@olfati2004consensus; @levin2017markov]: \[thm-consensus\] Given a fixed and undirected connected network $\mathcal{G}=([n],\mathcal{E})$, at any time $k=0,1,\ldots$, let every agent $i$ take the average of its own state and those of its neighbors, $x_i^{k+1}=\sum_{j\in N_i}a_{ij}x_j^k, \forall i\in[n]$, where $a_{ij}>0$ are constant positive weights such that $\sum_{j}a_{ij}=1, \forall i$. Then the agents’ states will converge to a consensus point, i.e., $\lim_{k\to \infty}x_i^k=x^*, \forall i$. Further, the convergence rate is exponential, i.e., $\|\boldsymbol{x}^k-x^*\boldsymbol{1}\|\leq \lambda^k \|\boldsymbol{x}^0-x^*\boldsymbol{1}\|$, where $\boldsymbol{1}$ is the vector of all ones and $\lambda\in (0,1)$. By comparing this result with the aforementioned general dynamics, one can identify several simplifying assumptions that have been made in most of the existing results on multiagent network systems.
Particularly: i) the underlying networks are fixed, i.e., $\mathcal{G}_k=\mathcal{G}, \forall k$; ii) the underlying networks are connected and undirected; and iii) the underlying networks $\mathcal{G}_k$ do not depend on the agents’ states $\boldsymbol{x}^k$. To relax these simplifying assumptions, a large body of literature has been developed to establish the stability of the above general dynamics under less restrictive conditions. The results in the static case can often be generalized to time-varying networks by assuming a certain “independence" between the network process and the state dynamics. For instance, one of the commonly used assumptions is that the network dynamics are governed by an exogenous process that is uncoupled from the state dynamics [@olshevsky2009convergence; @nedic2009distributed; @nedic2015distributed; @bacsar2016convergence; @etesami2016convergence; @etesami2017potential; @zhu2011convergence; @tatarenko2017non]. The following extension is given in [@blondel2005convergence]: \[thm:hendrix\] Consider a sequence of time-varying directed graphs $\mathcal{G}_k=([n],\mathcal{E}_k)$ with the weight of edge $(i,j)$ at time $k$ being $a_{ij}(k)$. Assume that the sequence of graphs is $B$-strongly connected, meaning that for any $k\ge 0$, the graph $\mathcal{G}=([n], \cup_{s=k}^{k+B}\mathcal{E}_s)$ is strongly connected. Moreover, given a positive constant $\alpha\in (0,1)$, assume $a_{ij}(k)\in [\alpha, 1]\cup\{0\}, \forall i,j,k$, and $a_{ii}(k)\ge \alpha, \ \sum_{j}a_{ij}(k)=1, \forall i,k$. Then the dynamics $x_i^{k+1}=\sum_{j\in N_i}a_{ij}(k)x_j^k, \ i\in[n]$, will asymptotically converge to a consensus point. While Theorem \[thm:hendrix\] relaxes the static communication network to time-varying networks, it still has shortcomings in addressing many realistic multiagent systems.
First, the network connectivity must be preserved over every time window of length $B$, and it is hard to check whether that holds (especially if the networks are generated endogenously based on the agents’ states). Second, the assumptions on the weight matrices are somewhat restrictive, as in real situations the weights can approach $0$ and then increase again toward $1$. Moreover, the theorem uses an implicit assumption on the symmetry of the networks by imposing strong connectivity. Finally, in realistic situations the evolution of the network itself depends on the evolution of the agents’ states, while in the above theorem, the network dynamics are driven by an exogenous process that is independent of how the states evolve. A generalization of Theorem \[thm:hendrix\] is to allow weak coupling between the state and network dynamics, under certain network connectivity/symmetry assumptions [@olfati2004consensus; @hendrickx2013convergence; @jadbabaie2003coordination]. We refer to [@sonin2008decomposition; @touri2011existence] for other extensions of such results using backward products of stochastic matrices. While the existing results can properly address a large class of multiagent network systems, there are still many examples that do not fit into any of the aforementioned categories, or for which the application of the above techniques provides poor results on the behavior of the agents. Our work is fundamentally different from the earlier literature in the sense that, departing from conventional methods for stability analysis of multiagent averaging dynamics (e.g., Markov chains or products of stochastic matrices), we provide a deeper analysis than the existing model-free results by capturing the internal co-evolution of the state and network dynamics. This approach allows us to relax some of the common assumptions, such as global knowledge of the network connectivity over the course of the dynamics.
It is worth mentioning that our work is also related to dynamic clustering where the goal is to provide a theoretical justification for cluster synchronization in multiagent systems using saddle-point dynamics [@burger2012hierarchical; @burger2014duality]. However, the network structure in that application is fixed and captured by a set of linear constraints, while in our work the network dynamically evolves as a complex function of the state variables. Contributions and Organization ------------------------------ Inspired by the above shortcomings and building upon our previous work [@etesami2019simple], in this paper we provide a principled framework from an optimization perspective to study Lyapunov stability of multiagent state-dependent network dynamics. We show that despite the challenges due to state-network coupling, it is still possible to capture the co-evolution of network and state dynamics for a broad class of multiagent systems, even under an asymmetric or nonlinear environment. More precisely, we show that often the network structure among the agents can be viewed as *dual variables* of a constrained optimization problem where the existence of an edge is related to the tightness of the corresponding constraint. As a result, we can view multiagent network dynamics as an iterative primal-dual algorithm to a static constrained optimization problem where the primal updates correspond to state updates of the dynamics and the dual updates correspond to the network evolution. The KKT optimality conditions also guide the coupling between the network and state dynamics. This allows us to view the constrained Lagrangian of the underlying static problem as a Lyapunov or “semi-Lyapunov" function for the multiagent dynamics. Therefore, we obtain a principled way to establish the stability of multiagent network dynamics in terms of asymptotic convergence of an iterative optimization algorithm. 
This makes a variety of iterative optimization methods applicable to studying the stability of multiagent state-dependent network dynamics. In Section \[sec:model\], we first provide our problem formulation, modeling a large class of state-dependent network dynamics. In Section \[sec:BCD\], we apply a sequential optimization framework based on the block coordinate descent method to establish Lyapunov stability for a large class of state-dependent network dynamics. We consider this method under both symmetric and asymmetric network structures and use changes of variables to generate other classes of state-dependent network dynamics. In Section \[sec:Saddle\], we use a saddle-point model to extend our results to a case where there is a conflict between the network structure and the state evolution. In Section \[sec:continuous-saddle\], we consider continuous-time dynamics where the emergence of an edge between agents is no longer a binary event, but rather a continuous weight process. We conclude the paper by identifying some future directions of research in Section \[sec:conclusion\]. Problem Formulation {#sec:model} =================== Let us consider a multiagent network system consisting of $[n]:=\{1,2,\ldots,n\}$ agents. At any given time $k=0,1,2,\ldots$, we denote the state of agent $i$ by $x_{i}^k\in \mathbb{R}$, and the state of the entire system at that time by $\boldsymbol{x}^k=(x_1^k,\ldots,x_n^k)^T$, where the superscript $T$ refers to the transpose of a vector.[^1] Moreover, we assume that each agent $i\in [n]$ has $n-1$ measurement functions $g_{ij}(x_i,x_j):\mathbb{R}^2\to\mathbb{R}$, one for every other agent $j\in [n]\setminus\{i\}$, each of which is assumed to be a convex function of agent $i$’s own state $x_i$ and agent $j$’s state $x_j$. Throughout most of this paper we assume that the measurement functions $\{g_{ij}(x_i,x_j), i\neq j\}$ are twice continuously differentiable, denoted by $g_{ij}\in \mathbb{C}^2$.
Given any state $\boldsymbol{x}$, we assume that the set of neighbors of an agent $i\in[n]$ is determined by the logic constraints $g_{ij}(x_i,x_j)\leq 0, \ j\in [n]\setminus\{i\}$. In other words, for a given state $\boldsymbol{x}$, agent $i$ is influenced by agent $j$ (or $j$ is a neighbor of $i$) if and only if $g_{ij}(x_i,x_j)\leq 0$. In particular, we denote the set of neighbors of agent $i$ at a state $\boldsymbol{x}$ by $N_i(\boldsymbol{x}):=\{j: g_{ij}(x_i,x_j)\leq 0\}$. At any time instance $k$, each agent $i\in [n]$ interacts with its neighbors and updates its state at the next time step to $$\begin{aligned} \label{eq:state-dependent-model} x_i^{k+1}=\phi_i\big(\boldsymbol{x}^k, N_i(\boldsymbol{x}^k)\big), \ \ i\in [n],\end{aligned}$$ where $\phi_i(\cdot)$ is an agent-specific update rule which is a function of the states of agent $i$’s neighbors at time $k$. Note that the above discrete-time dynamics capture a fairly large class of state-dependent network dynamics, where the network at time $k$ is given by $\mathcal{G}_k=([n], \{(i,j): j\in N_i(\boldsymbol{x}^k)\})$. It is evident that the network structure at time $k$ depends on the agents’ states at that time, and the state at the next time step $k+1$ is a function of the network structure at the current time $k$. Therefore, our main objective in this paper is to provide a general class of update rules $\phi_i(\cdot)$ such that the state-dependent network dynamics converge to some equilibrium point or are Lyapunov stable in the following sense: A function $V:\mathbb{R}^{n}\to \mathbb{R}$ is called a Lyapunov function for the discrete-time dynamical system $\boldsymbol{z}^{k+1}=h_k(\boldsymbol{z}^k), \ k=0,1,2,\ldots$, if it is decreasing along the trajectories of the dynamics, i.e., $V(\boldsymbol{z}^{k+1})< V(\boldsymbol{z}^k), \forall k$. We refer to a dynamical system which admits a Lyapunov function as Lyapunov stable.
To illustrate the generality of the above model, let us consider the well-known Hegselmann-Krause (HK) model from social science [@hegselmann2002opinion]. In the HK model, there is a set $[n]$ of agents, and it is assumed that at each time instance $k=0, 1, 2, \ldots$, the opinion (state) of agent $i\in[n]$ can be represented by a scalar $x_{i}^k\in \mathbb{R}$. Each agent $i$ updates its state at time $k+1$ by taking the arithmetic average of its own state and those of all the others that are in its $\epsilon$-neighborhood at time $k$, i.e., $$\begin{aligned} \nonumber x_i^{k+1}=\frac{x_i^k+\sum_{j\in N_i(\boldsymbol{x}^k)} x_j^k}{1+|N_i(\boldsymbol{x}^k)|}, \ \ \ \ i\in [n].\end{aligned}$$ Here $\epsilon>0$ is a constant parameter, and $N_i(\boldsymbol{x}^k)=\{j\in[n]\setminus\{i\}: |x_i^k-x_j^k|\leq \epsilon\}$ denotes the set of neighbors of agent $i$ at time $k$. In fact, it is known that such dynamics are Lyapunov stable and converge to an equilibrium point [@hegselmann2002opinion; @proskurnikov2018tutorial]. Now it is easy to see that the HK dynamics are a very special case of the state-dependent network dynamics \[eq:state-dependent-model\], in which the measurement functions are given by $g_{ij}(x_i,x_j)=(x_i-x_j)^2-\epsilon^2$, and the update rule is given by $\phi_i\big(\boldsymbol{x}, N_i(\boldsymbol{x})\big):=\frac{x_i+\sum_{j\in N_i(\boldsymbol{x})} x_j}{1+|N_i(\boldsymbol{x})|}$.
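The HK update above is easy to simulate directly. The following is a minimal sketch; the population size, the value of $\epsilon$, and the random initial opinions are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def hk_step(x, eps):
    """One HK update: each agent averages over its eps-neighborhood
    (the neighborhood mask always includes the agent itself)."""
    x_new = np.empty_like(x)
    for i in range(len(x)):
        nbrs = np.abs(x - x[i]) <= eps   # boolean mask; position i is True
        x_new[i] = x[nbrs].mean()
    return x_new

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=50)      # initial opinions
for _ in range(2000):
    x = hk_step(x, eps=1.0)

# The dynamics freeze in finite time: opinions settle into clusters,
# consistent with the Lyapunov stability cited above.
assert np.allclose(hk_step(x, eps=1.0), x)
```

Since each update is a convex combination of current opinions, the iterates stay inside the convex hull of the initial opinions, and after convergence the surviving clusters are mutually farther apart than $\epsilon$.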
More specifically, consider the optimization problem: $\min \{F(\boldsymbol{y}_1,\ldots,\boldsymbol{y}_n), \ \boldsymbol{y}_i\in Y_i, \forall i\}$, where $Y_i\subseteq \mathbb{R}^{m_i}$ is a closed convex set, and $F: \prod_{i=1}^{n}Y_i\to \mathbb{R}$ is a continuous function. At iteration $t=0,1,\ldots$ of the BCD method, the block variable $\boldsymbol{y}_i$ is updated by solving the subproblem: $\boldsymbol{y}_i^t=\arg\min_{\boldsymbol{z}_i\in Y_i} F(\boldsymbol{y}^{t}_1,\ldots,\boldsymbol{y}_{i-1}^{t},\boldsymbol{z}_i,\boldsymbol{y}_{i+1}^{t},\ldots,\boldsymbol{y}^t_n), \ i\in[n]$. Since in practice finding the exact minimum in each iteration might be difficult, one can consider an *inexact* BCD method, where a smooth regularizer is added to the objective function or it is approximated by a simpler convex function. In either case, and under some mild assumptions, it can be shown that the inexact BCD method will converge to a stationary point of the objective function $F(\cdot)$. Now let us consider the following constrained convex program: $$\begin{aligned} \nonumber &\min \ f(\boldsymbol{x}):=\sum_{i=1}^{n}f_i(x_i)\cr &\mbox{s.t.} \ \ g_{ij}(x_i,x_j)\leq 0, \ \forall i\neq j, \ \ \boldsymbol{x}\in \mathbb{R}^n,\end{aligned}$$ where $f_i(x_i), i\in [n]$ are continuous convex functions and $g_{ij}(x_i,x_j) ,i\neq j$ are the measurement convex functions between each pair of the agents. In other words, each agent $i\in[n]$ has a private convex function $f_i(x_i)$, and the agents collectively want to choose their states to minimize the global objective function $f(\boldsymbol{x}):=\sum_{i=1}^{n}f_i(x_i)$, while they all remain connected. 
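As a minimal illustration of the exact BCD iteration described above, consider two scalar blocks and a simple quadratic objective (the objective is our own toy example, not from the paper):

```python
# Exact BCD on F(y1, y2) = (y1 - 1)^2 + (y2 + 2)^2 + y1*y2.
# Each step minimizes F over one block while the other is held fixed;
# setting the corresponding partial derivative to zero gives the update.
y1, y2 = 0.0, 0.0
for _ in range(60):
    y1 = 1.0 - y2 / 2.0      # solves dF/dy1 = 2*(y1 - 1) + y2 = 0
    y2 = -2.0 - y1 / 2.0     # solves dF/dy2 = 2*(y2 + 2) + y1 = 0

# The sweeps contract toward the stationary point (8/3, -10/3) of F.
assert abs(y1 - 8/3) < 1e-9 and abs(y2 + 10/3) < 1e-9
```

In the remainder of this section, the two blocks playing these roles are the state vector $\boldsymbol{x}$ and the network (dual) variables of the Lagrangian associated with the constrained program above.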
Dualizing the constraints using dual variables $\lambda_{ij}\ge 0$, and forming the Lagrangian function, we have $$\begin{aligned} \nonumber L(\boldsymbol{x},\boldsymbol{\lambda})=f(\boldsymbol{x})+\sum_{i\neq j}\lambda_{ij}g_{ij}(x_i,x_j),\end{aligned}$$ which is a function of two block variables, namely a *state* block variable $\boldsymbol{x}:=(x_1,\ldots,x_n)\in \mathbb{R}^{n}$ and a nonnegative *network* block variable $\boldsymbol{\lambda}:=(\lambda_{ij}, i\neq j)$. Now assume that we want to minimize the Lagrangian function using the BCD method subject to the box constraints $\lambda_{ij}\in [0,1], \forall i,j$. As $L(\boldsymbol{x},\boldsymbol{\lambda})$ is a linear function of $\boldsymbol{\lambda}$, fixing the state block variable and minimizing $L(\boldsymbol{x},\boldsymbol{\lambda})$ with respect to $\boldsymbol{\lambda}\in [0,1]^{n(n-1)}$, we get $\lambda_{ij}=1$ if $g_{ij}(x_i,x_j)\leq 0$ (i.e., there is a directed edge from agent $i$ to agent $j$), and $\lambda_{ij}=0$ if $g_{ij}(x_i,x_j)> 0$ (i.e., no such edge exists). In other words, fixing the state variable and minimizing the Lagrangian with respect to $\boldsymbol{\lambda}\in [0,1]^{n(n-1)}$, the dual variables precisely capture the network structure among the agents for that state. Motivated by this observation, we have the following theorem. The measurement functions $g_{ij}(\cdot)$ are called symmetric if for all $i\neq j$ we have $g_{ij}(x_i,x_j)=g_{ji}(x_j,x_i)$. Note that for symmetric measurement functions, the communication network among the agents is always an undirected graph. \[thm:majorizing\] Let $g_{ij}\in \mathbb{C}^2$ be symmetric convex functions and $f_i(x_i)\in \mathbb{C}^2$ be strictly convex functions such that their second-order partial derivatives are bounded above by $m$.[^2] Assume that two agents $i$ and $j$ become each other’s neighbors if $g_{ij}(x_i,x_j)\leq 0$.
Then the following state-dependent network dynamics $$\begin{aligned} \label{eq:majorizing-dynamics} x_i^{k+1}=x_i^k-\frac{\frac{\partial}{\partial x_i}f_i(x^k_i)+\sum_{j\in N_i(\boldsymbol{x}^k)} \frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j)}{2m(|N_i(\boldsymbol{x}^k)|+1)}, \ \ i\in [n]\end{aligned}$$ will converge. In particular, $V(\boldsymbol{x}):=\sum_i f_i(x_i)+\frac{1}{2}\sum_{i,j}\min\{g_{ij}(x_i,x_j),0\}$ serves as a Lyapunov function for the dynamics \[eq:majorizing-dynamics\] such that $V(\boldsymbol{x}^{k+1})\leq V(\boldsymbol{x}^k)-m\|\boldsymbol{x}^k-\boldsymbol{x}^{k+1}\|^2$. Let us consider the following Lagrangian function $$\begin{aligned} \nonumber L(\boldsymbol{x},\boldsymbol{\lambda})=\sum_{i} f_i(x_i)+\frac{1}{2}\sum_{i,j}\lambda_{ij}g_{ij}(x_i,x_j),\end{aligned}$$ and consider the BCD method applied to this function with $\boldsymbol{\lambda}\in [0,1]^{n(n-1)}$ and $\boldsymbol{x}\in \mathbb{R}^n$. Fixing the state variable to $\boldsymbol{x}^k$, and setting $\boldsymbol{\lambda}^k:=\argmin_{\boldsymbol{\lambda}\in [0,1]^{n(n-1)}}L(\boldsymbol{x}^k,\boldsymbol{\lambda})$, it is easy to see that $\boldsymbol{\lambda}^k$ precisely captures the network structure among the agents at the current state $\boldsymbol{x}^k$. Next let us fix the network variable to $\boldsymbol{\lambda}^k$, and consider $$\begin{aligned} \nonumber L_k(\boldsymbol{x}):=L(\boldsymbol{x},\boldsymbol{\lambda}^k)&=\sum_{i} f_i(x_i)+\frac{1}{2}\sum_{i,j}\lambda^k_{ij}g_{ij}(x_i,x_j)=\sum_{i} f_i(x_i)+\frac{1}{2}\sum_i\sum_{j\in N_i(\boldsymbol{x}^k)} g_{ij}(x_i,x_j),\end{aligned}$$ which is a strictly convex function. Ideally, we want to set the state at the next time step $\boldsymbol{x}^{k+1}$ to the unique minimizer of $L_k(\boldsymbol{x})$.
However, since solving the minimization problem $\min_{\boldsymbol{x}\in\mathbb{R}^n}L_k(\boldsymbol{x})$ exactly might be difficult, we use an inexact BCD method, where instead a quadratic upper approximation of this function is minimized. More precisely, consider the quadratic approximation of $L_k(\boldsymbol{x})$ at the current point $\boldsymbol{x}^k$: $$\begin{aligned} \nonumber L_k(\boldsymbol{x})\approx L_k(\boldsymbol{x}^k)+(\boldsymbol{x}-\boldsymbol{x}^k)^T\nabla L_k(\boldsymbol{x}^k)+\frac{1}{2}(\boldsymbol{x}-\boldsymbol{x}^k)^T\nabla^2 L_k(\boldsymbol{x}^k)(\boldsymbol{x}-\boldsymbol{x}^k),\end{aligned}$$ where $\nabla L_k(\boldsymbol{x}^k)$ is the gradient of $L_k(\boldsymbol{x})$ at $\boldsymbol{x}^k$, whose $i$th component is given by $[\nabla L_k(\boldsymbol{x}^k)]_i=\frac{\partial}{\partial x_i}f_i(x^k_i)+\sum_{j\in N_i(\boldsymbol{x}^k)} \frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j)$. Moreover, $\nabla^2 L_k(\boldsymbol{x}^k)$ is the Hessian of $L_k(\boldsymbol{x})$ at $\boldsymbol{x}^k$, where the Hessian matrix function of $L_k(\boldsymbol{x})$ is given by $$\begin{aligned} \nonumber [\nabla^2 L_k(\boldsymbol{x})]_{ij}=\begin{cases} \frac{\partial^2}{\partial x^2_i}f_i(x_i)+\sum_{l\in N_i(\boldsymbol{x}^k)} \frac{\partial^2 g_{il}(x_i,x_l)}{\partial x^2_i} & \mbox{if} \ j=i\\ \frac{\partial^2 g_{ij}(x_i,x_j)}{\partial x_i \partial x_j} & \mbox{if} \ j\in N_i(\boldsymbol{x}^k)\\ 0 & \mbox{if} \ j\notin N_i(\boldsymbol{x}^k). \end{cases}\end{aligned}$$ Now by the assumption $|\frac{\partial^2 g_{ij}(x_i,x_j)}{\partial x_i \partial x_j}|\leq m, |\frac{\partial^2f_{i}(x_i)}{\partial x^2_i}|\leq m, \forall i,j$, and using the Gershgorin circle theorem, one can see that for any $\boldsymbol{x}$, the Hessian $\nabla^2 L_k(\boldsymbol{x})$ is dominated by the diagonal matrix $Q_k:=2m\cdot \mbox{diag}(|N_1(\boldsymbol{x}^k)|+1,\ldots,|N_n(\boldsymbol{x}^k)|+1)$.
Using a Taylor expansion, there exists $\boldsymbol{\zeta}\in\mathbb{R}^n$ such that $$\begin{aligned} \nonumber L_k(\boldsymbol{x})&=L_k(\boldsymbol{x}^k)+(\boldsymbol{x}-\boldsymbol{x}^k)^T\nabla L_k(\boldsymbol{x}^k)+\frac{1}{2}(\boldsymbol{x}-\boldsymbol{x}^k)^T\nabla^2 L_k(\boldsymbol{\zeta})(\boldsymbol{x}-\boldsymbol{x}^k)\cr &\leq L_k(\boldsymbol{x}^k)+(\boldsymbol{x}-\boldsymbol{x}^k)^T\nabla L_k(\boldsymbol{x}^k)+\frac{1}{2}(\boldsymbol{x}-\boldsymbol{x}^k)^TQ_k(\boldsymbol{x}-\boldsymbol{x}^k):=u_k(\boldsymbol{x}).\end{aligned}$$ Therefore, $u_k(\boldsymbol{x})$ is a quadratic upper approximation for $L_k(\boldsymbol{x})$ for any $\boldsymbol{x}$. Letting $$\begin{aligned} \label{eq:majorizing-compact-dynamics} \boldsymbol{x}^{k+1}=\argmin_{\boldsymbol{x}\in \mathbb{R}^n} u_k(\boldsymbol{x})=\boldsymbol{x}^k-Q_k^{-1}\nabla L_k(\boldsymbol{x}^k),\end{aligned}$$ we can write $$\begin{aligned} \nonumber L_k(\boldsymbol{x}^{k+1})\leq u_k(\boldsymbol{x}^{k+1})\leq u_k(\boldsymbol{x}^k)=L_k(\boldsymbol{x}^k).\end{aligned}$$ This shows that the state-dependent network dynamics $$\begin{aligned} \nonumber x_i^{k+1}=x_i^k-\frac{1}{2m}\cdot\frac{\frac{\partial}{\partial x_i}f_i(x^k_i)+\sum_{j\in N_i(\boldsymbol{x}^k)} \frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j)}{|N_i(\boldsymbol{x}^k)|+1}, \ \ i\in[n],\end{aligned}$$ are Lyapunov stable ($L(\cdot)$ decreases regardless of the state or network updates), and $$\begin{aligned} \nonumber V(\boldsymbol{x}):=\min_{\boldsymbol{\lambda}\in [0,1]^{n(n-1)}}L(\boldsymbol{x},\boldsymbol{\lambda})=\sum_i f_i(x_i)+\frac{1}{2}\sum_{i,j}\min\{g_{ij}(x_i,x_j),0\}\end{aligned}$$ serves as a Lyapunov function for them.
In particular, denoting the $Q_k$-norm of a vector $\boldsymbol{v}$ by $\|\boldsymbol{v}\|^2_{Q_k}=\boldsymbol{v}^TQ_k\boldsymbol{v}$, the drift of this Lyapunov function is lower bounded as $$\begin{aligned} \nonumber V(\boldsymbol{x}^k)-V(\boldsymbol{x}^{k+1})&= L(\boldsymbol{x}^k,\boldsymbol{\lambda}^k)-L(\boldsymbol{x}^{k+1},\boldsymbol{\lambda}^{k+1})\ge L(\boldsymbol{x}^k,\boldsymbol{\lambda}^k)-L(\boldsymbol{x}^{k+1},\boldsymbol{\lambda}^{k})\cr &=L_k(\boldsymbol{x}^k)-L_k(\boldsymbol{x}^{k+1})=u_k(\boldsymbol{x}^k)-L_k(\boldsymbol{x}^{k+1})\cr &\ge u_k(\boldsymbol{x}^k)-u_k(\boldsymbol{x}^{k+1})= \frac{1}{2}(\nabla L_k(\boldsymbol{x}^k))^TQ_k^{-1}\nabla L_k(\boldsymbol{x}^k)\cr &=\frac{1}{2}\|Q_k^{-1}\nabla L_k(\boldsymbol{x}^k)\|^2_{Q_k}=\frac{1}{2}\|\boldsymbol{x}^k-\boldsymbol{x}^{k+1}\|^2_{Q_k}\ge m\|\boldsymbol{x}^k-\boldsymbol{x}^{k+1}\|^2,\end{aligned}$$ where the last equality follows from \[eq:majorizing-compact-dynamics\], and the last inequality holds because all the diagonal entries of $Q_k$ are at least $2m$. Therefore, $V(\boldsymbol{x}^{k+1})\leq V(\boldsymbol{x}^k)-m\|\boldsymbol{x}^k-\boldsymbol{x}^{k+1}\|^2$. As $V(\cdot)$ is lower bounded by a finite value, we get $\lim_{k\to \infty} \|\boldsymbol{x}^{k+1}-\boldsymbol{x}^k\|=0$. This, in view of \[eq:majorizing-compact-dynamics\] and the fact that the diagonal entries of $Q_k^{-1}$ are lower bounded by $\frac{1}{2mn}$, also implies $\lim_{k\to \infty}\nabla L_k(\boldsymbol{x}^k)=\boldsymbol{0}$. Finally, to show the convergence of the dynamics to an equilibrium point, we note that for every $k$, $L_k(\boldsymbol{x})$ belongs to the *finite* family of strictly convex functions $\mathcal{H}:=\{\sum_if_i(x_i)+\frac{1}{2}\sum_{i,j}\lambda_{ij}g_{ij}(x_i,x_j): \lambda_{ij}\in\{0,1\}, \forall i,j\}$, containing at most $2^{O(n^2)}$ functions.
This is because $L_k(\boldsymbol{x})=L(\boldsymbol{x},\boldsymbol{\lambda}^k)$, where $\boldsymbol{\lambda}^k$ is the solution of the linear program $\min_{\boldsymbol{\lambda}\in [0,1]^{n(n-1)}}L(\boldsymbol{x}^k,\boldsymbol{\lambda})$ and can be taken to be an extreme point of $[0,1]^{n(n-1)}$. Now, given any $h(\boldsymbol{x})\in \mathcal{H}$, let ${\rm h}_1<{\rm h}_2<\ldots$ be all the indices $k$ for which $L_k(\boldsymbol{x})=h(\boldsymbol{x})$. Then we can partition the sequence $\{\boldsymbol{x}^k\}$ into at most $|\mathcal{H}|$ subsequences $\{\{\boldsymbol{x}^{{\rm h}_{\ell}}\}_{\ell\ge 1},h\in \mathcal{H}\}$. Since $\lim_{k\to \infty}\nabla L_k(\boldsymbol{x}^k)=\boldsymbol{0}$, for any subsequence $\{\boldsymbol{x}^{{\rm h}_{\ell}}\}_{\ell\ge 1}$ we have $\lim_{\ell\to \infty}\nabla h(\boldsymbol{x}^{{\rm h}_{\ell}})=\lim_{\ell\to \infty}\nabla L_{{\rm h}_{\ell}}(\boldsymbol{x}^{{\rm h}_{\ell}})=\boldsymbol{0}$. As $h(\cdot)$ is a strictly convex function, this means that the subsequence $\{\boldsymbol{x}^{{\rm h}_{\ell}}\}_{\ell\ge 1}$ must converge to the unique minimizer of $h(\cdot)$, denoted by $\boldsymbol{x}_h$. Since there are only finitely many such subsequences, for any $\epsilon>0$, there exists $K_{\epsilon}$ such that $\|\boldsymbol{x}^{{\rm h}_{\ell}}-\boldsymbol{x}_h\|<\epsilon, \forall h\in\mathcal{H}, \ell>K_{\epsilon}$. Let $\mathcal{X}=\{\boldsymbol{x}_h=\argmin h(\boldsymbol{x}): h\in \mathcal{H}\}$ be the finite set of minimizers of all the functions in $\mathcal{H}$, and choose $\epsilon:=\frac{1}{3}\min_{\boldsymbol{x}_p\neq \boldsymbol{x}_q\in \mathcal{X}}\|\boldsymbol{x}_p-\boldsymbol{x}_q\|$.
Then for $\ell>K_{\epsilon}$, each subsequence $\{\boldsymbol{x}^{{\rm h}_{\ell}}\}_{\ell\ge 1}$ lies in an $\epsilon$-neighborhood of its limit point $\boldsymbol{x}_h$, and moreover, the iterates cannot jump between two distinct $\epsilon$-neighborhoods (otherwise, $\|\boldsymbol{x}^{k+1}-\boldsymbol{x}^{k}\|\ge\epsilon$ for arbitrarily large $k$, contradicting the fact that $\lim_{k\to \infty} \|\boldsymbol{x}^{k+1}-\boldsymbol{x}^k\|=0$). This shows that for $\ell>K_{\epsilon}$, all the subsequences $\{\{\boldsymbol{x}^{{\rm h}_{\ell}}\}_{\ell\ge 1}, h\in\mathcal{H}\}$ must lie in the same $\epsilon$-neighborhood, and hence the sequence $\{\boldsymbol{x}^k\}$ converges to a limit point $\boldsymbol{x}^*\in \mathcal{X}$. Consider a special case where $g_{ij}(x_i,x_j)=\frac{(x_i-x_j)^2}{2}-\frac{\epsilon^2_{ij}}{2}$ with $\epsilon_{ij}=\epsilon_{ji}$ (thus agents $i$ and $j$ communicate if and only if their Euclidean distance is at most $\epsilon_{ij}$), and $f_i(x_i)=\frac{x_i^2}{2}$. It is easy to see that $|\frac{\partial^2 g_{ij}(x_i,x_j)}{\partial x_i \partial x_j}|=\frac{\partial^2f_{i}(x_i)}{\partial x^2_i}=1, \forall i,j$, so choosing $m=1$ will satisfy the assumption of Theorem \[thm:majorizing\]. This shows that the dynamics: $$\begin{aligned} \nonumber x_i^{k+1}=x_i^k-\frac{1}{2}\cdot\frac{x_i^{k}+\sum_{j\in N_i(\boldsymbol{x}^k)} (x_i^k-x_j^k)}{|N_i(\boldsymbol{x}^k)|+1}=\frac{1}{2}x_i^k+\frac{1}{2}\cdot\frac{\sum_{j\in N_i(\boldsymbol{x}^k)}x_j^k}{|N_i(\boldsymbol{x}^k)|+1},\end{aligned}$$ will converge to an equilibrium point. Note that these dynamics are a lazy version of the HK dynamics, in which each agent is more stubborn about its own opinion.
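The closed-form lazy HK update above is easy to simulate; the following sketch (with hypothetical parameters $n=5$, $\epsilon=1/2$) checks that the iterates contract toward $\boldsymbol{0}$, regardless of how the neighbor sets switch along the way:

```python
import numpy as np

# Lazy HK dynamics: x_i <- x_i/2 + (1/2) * sum_{j in N_i} x_j / (|N_i| + 1).
# Each step satisfies |x_i^{k+1}| <= max_j |x_j^k| * (2n-1)/(2n), so the iterates
# contract toward the equilibrium x* = 0 whatever the neighbor sets are.

def lazy_hk_step(x, eps):
    n = len(x)
    out = np.empty(n)
    for i in range(n):
        N = [j for j in range(n) if j != i and abs(x[i] - x[j]) <= eps]
        out[i] = 0.5 * x[i] + 0.5 * sum(x[j] for j in N) / (len(N) + 1)
    return out

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 5)
for _ in range(400):
    x = lazy_hk_step(x, eps=0.5)
```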
Moreover, we know that the equilibrium point $\boldsymbol{x}^*$ must be the minimizer of a strictly convex function of the form $h^*(\boldsymbol{x}):=\sum_i\frac{x_i^2}{2}+\frac{1}{2}\sum_{i,j}\lambda^*_{ij}[\frac{(x_i-x_j)^2}{2}-\frac{\epsilon^2_{ij}}{2}]$ for some fixed undirected network adjacency matrix $\boldsymbol{\lambda}^*\in\{0,1\}^{n(n-1)}$. Thus, $\boldsymbol{x}^*$ is a solution to $\nabla h^*(\boldsymbol{x})=0$, or equivalently $(I+\mathcal{L}^*)\boldsymbol{x}=0$, where $\mathcal{L}^*$ is the Laplacian matrix associated with $\boldsymbol{\lambda}^*$. As $I+\mathcal{L}^*$ is positive definite, we must have $\boldsymbol{x}^*=\boldsymbol{0}$. In the proof of Theorem \[thm:majorizing\] we restricted our attention to quadratic upper approximations. However, motivated by the mirror descent algorithm in convex optimization [@bubeck2015convex], we can use any smooth convex mirror map $\Psi:\mathbb{R}^n\to \mathbb{R}$ to construct an upper approximation for $L_k(\boldsymbol{x})$ at the point $\boldsymbol{x}^k$. Doing so, we obtain alternative state-dependent network dynamics whose Lyapunov stability and convergence can be established in the same fashion as in Theorem \[thm:majorizing\]. More precisely, given a smooth and strictly convex function $\Psi$, let $D_{\Psi}(\boldsymbol{x},\boldsymbol{x}^k):=\Psi(\boldsymbol{x})-\Psi(\boldsymbol{x}^k)-\nabla \Psi(\boldsymbol{x}^k)^T(\boldsymbol{x}-\boldsymbol{x}^k)$ be the Bregman divergence with respect to $\Psi$. Then, as long as $\nabla^2 \Psi(\boldsymbol{x})\succeq 2mn I$, $u_k(\boldsymbol{x}):=L_k(\boldsymbol{x}^k)+(\boldsymbol{x}-\boldsymbol{x}^k)^T\nabla L_k(\boldsymbol{x}^k)+D_{\Psi}(\boldsymbol{x},\boldsymbol{x}^k)$ serves as a convex upper approximation for $L_k(\boldsymbol{x})$.
Therefore, updating the state at the next time step to $\boldsymbol{x}^{k+1}=\argmin_{\boldsymbol{x}\in\mathbb{R}^n} u_k(\boldsymbol{x})$, or equivalently to the solution of $$\begin{aligned} \label{eq:mirror-map} \nabla \Psi(\boldsymbol{x}^{k+1})=\nabla \Psi(\boldsymbol{x}^{k})-\nabla L_k(\boldsymbol{x}^{k}),\end{aligned}$$ guarantees the decrease of the Lyapunov function $V(\boldsymbol{x})=\sum_i f_i(x_i)+\frac{1}{2}\sum_{i,j}\min\{g_{ij}(x_i,x_j),0\}$. For instance, choosing the mirror map to be the negative entropy function, i.e., $\Psi(\boldsymbol{x}):=\sum_{i=1}^{n}x_i\ln x_i$, and using \eqref{eq:mirror-map}, we obtain the following Lyapunov stable multiplicative dynamics: $$\begin{aligned} \nonumber x_i^{k+1}=x_i^k\cdot\exp \big(-\frac{\partial f_i(x^k_i)}{\partial x_i}-\!\!\!\!\!\sum_{j\in N_i(\boldsymbol{x}^k)} \!\!\!\frac{\partial g_{ij}(x^k_i,x^k_j)}{\partial x_i}\big).\end{aligned}$$ Asymmetric State-Dependent Network Dynamics ------------------------------------------- Asymmetric (directed) interconnections among the agents often introduce a major challenge in the analysis of multiagent network dynamics. Unfortunately, the gradient operator is a “symmetric" operator in the sense that fixing the network variable in the BCD method and updating the state variable in the negative direction of the gradient will always generate a symmetric class of dynamics (i.e., an agent’s state is influenced not only by its out-neighbors but also by its in-neighbors). However, one way of tackling this issue using sequential optimization is to introduce an independent copy of the state variable while making sure that the two copies remain close to each other. In other words, we capture the asymmetry between the agents by introducing an extra block variable into the BCD method and adding an extra (possibly asymmetric) penalty term to the objective function that enforces the two copies of the state variable to remain close to each other.
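A small numerical sanity check of the multiplicative dynamics (a sketch under assumptions of ours, not from the text): with $f_i(x)=x^2/2$ and $g_{ij}=\frac{1}{2}((x_i-x_j)^2-\epsilon^2)$ we have $m=1$, and for the negative-entropy mirror map the curvature condition $\nabla^2\Psi(\boldsymbol{x})=\mbox{diag}(1/x_i)\succeq 2mnI$ holds as long as $x_i\leq \frac{1}{2mn}$; starting from small positive states keeps the trajectory in that region, and $V$ indeed never increases:

```python
import numpy as np

# Multiplicative (negative-entropy mirror map) dynamics with the hypothetical
# choices f_i(x) = x^2/2 and g_ij = ((x_i - x_j)^2 - eps^2)/2 (m = 1, n = 4).
# States starting in (0, 1/(2mn)] stay there, so diag(1/x_i) >= 2mn*I holds
# along the trajectory and V should be nonincreasing.

EPS = 1.0

def V(x):
    n = len(x)
    pair = sum(min(0.5 * ((x[i] - x[j]) ** 2 - EPS ** 2), 0.0)
               for i in range(n) for j in range(n) if j != i)
    return 0.5 * float(np.sum(x ** 2)) + 0.5 * pair

def mult_step(x):
    n = len(x)
    out = np.empty(n)
    for i in range(n):
        N = [j for j in range(n) if j != i and (x[i] - x[j]) ** 2 <= EPS ** 2]
        grad_i = x[i] + sum(x[i] - x[j] for j in N)      # dL_k/dx_i
        out[i] = x[i] * np.exp(-grad_i)                  # mirror-map update
    return out

x = np.array([0.05, 0.08, 0.06, 0.10])
vals = [V(x)]
for _ in range(100):
    x = mult_step(x)
    vals.append(V(x))
```

Note that the update preserves positivity automatically, which is exactly what the entropy mirror map is designed to do.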
Here, the choice of the penalty function can be very problem-specific, resulting in different asymmetric state-dependent network dynamics. However, one natural choice for the penalty function is the Bregman divergence between the two copies of the state variable, as shown in the following theorem. \[thm:asymmetric\] Let $g_{ij}(x_i,x_j)\in \mathcal{C}^2$ be $L$-Lipschitz convex functions and $f(\boldsymbol{x})=\sum_{i=1}^{n}f_i(x_i)$ be a smooth convex function, where the second-order partial derivatives of all the $f_i$ and $g_{ij}$ are bounded above by $m$. If $\|\boldsymbol{y}-\boldsymbol{x}\|_1\leq \frac{1}{nL}D_{f}(\boldsymbol{y},\boldsymbol{x}), \forall \boldsymbol{x},\boldsymbol{y}$, where $\|\cdot\|_1$ is the $l_1$-norm and $D_{f}(\cdot,\cdot)$ denotes the Bregman divergence with respect to $f(\cdot)$, then $V(\boldsymbol{x})=\sum_{i,j}\min\{g_{ij}(x_i,x_j),0\}$ serves as a Lyapunov function for the asymmetric state-dependent network dynamics: $$\begin{aligned} \label{eq:asymmetric} x_i^{k+1}=x_i^{k}-\frac{\sum_{j\in N_i(\boldsymbol{x}^k)}\frac{\partial}{\partial x_i}g_{ij}(x_i^k,x_j^k)}{m(|N_i(\boldsymbol{x}^k)|+1)}, \ \ \ i\in[n].\end{aligned}$$ Let $c_i(\boldsymbol{x},\boldsymbol{\lambda}_i):=\sum_{j\neq i}\lambda_{ij}g_{ij}(x_i,x_j)$ denote the cost of agent $i$ with respect to its neighbors, and let $\boldsymbol{y}$ be an independent copy of the state variable $\boldsymbol{x}$. Consider the following function with three independent block variables $\boldsymbol{\lambda}\in [0,1]^{n(n-1)}, \boldsymbol{x},\boldsymbol{y}\in \mathbb{R}^n$: $$\begin{aligned} \nonumber L(\boldsymbol{y},\boldsymbol{x},\boldsymbol{\lambda}):=\sum_{i=1}^{n}c_i(y_i,\boldsymbol{x}_{-i},\boldsymbol{\lambda}_i)+D_{f}(\boldsymbol{y},\boldsymbol{x})=\sum_{i=1}^{n}\sum_{j\neq i}\lambda_{ij}g_{ij}(y_i,x_j)+D_{f}(\boldsymbol{y},\boldsymbol{x}),\end{aligned}$$ where $D_{f}(\boldsymbol{y},\boldsymbol{x}):=f(\boldsymbol{y})-f(\boldsymbol{x})-(\boldsymbol{y}-\boldsymbol{x})^T\nabla f(\boldsymbol{x})$.
Note that here we no longer require the symmetry assumption $g_{ij}(x_i,x_j)=g_{ji}(x_j,x_i)$. The reason for introducing the Bregman distance $D_{f}(\boldsymbol{y},\boldsymbol{x})$ into the objective function is that, ideally, we want the two copies of the state variable to coincide. But instead of adding the hard constraint $\boldsymbol{y}=\boldsymbol{x}$ to our optimization problem, we relax it by adding a soft penalty term to the objective function. Now let us apply the BCD method to the following minimization: $$\begin{aligned} \label{eq:three-block-objective} \min_{\boldsymbol{\lambda}\in [0,1]^{n(n\!-\!1)}}\min_{\boldsymbol{x}\in\mathbb{R}^n}\min_{\boldsymbol{y}\in\mathbb{R}^n}\{\sum_{i=1}^{n}\sum_{j\neq i}\lambda_{ij}g_{ij}(y_i,x_j)+D_{f}(\boldsymbol{y},\boldsymbol{x})\}.\end{aligned}$$ First, assume that both state variables are fixed to $\boldsymbol{y}=\boldsymbol{x}=\boldsymbol{x}^k$. Then, minimizing the objective function with respect to $\boldsymbol{\lambda}\in [0,1]^{n(n\!-\!1)}$, we precisely capture the asymmetric network structure $\boldsymbol{\lambda}^k$ associated with the state $\boldsymbol{x}^k$ (i.e., $\lambda^k_{ij}=1$ if and only if $g_{ij}(x_i^k,x_j^k)\leq 0$). Next, let us fix $\boldsymbol{\lambda}=\boldsymbol{\lambda}^k$ and $\boldsymbol{x}=\boldsymbol{x}^k$, and consider minimizing \eqref{eq:three-block-objective} with respect to the $\boldsymbol{y}$ variable. However, to obtain a closed form for the optimal solution, instead of solving this minimization exactly, we minimize its quadratic upper approximation at the current state $\boldsymbol{x}^k$, given by $$\begin{aligned} \nonumber z_k(\boldsymbol{y}):=L(\boldsymbol{x}^k,\boldsymbol{x}^k,\boldsymbol{\lambda}^k)+(\boldsymbol{y}-\boldsymbol{x}^k)^T\nabla_y L(\boldsymbol{x}^k,\boldsymbol{x}^k,\boldsymbol{\lambda}^k)+\frac{1}{2}(\boldsymbol{y}-\boldsymbol{x}^k)^TP_k(\boldsymbol{y}-\boldsymbol{x}^k),\end{aligned}$$ where $P_k$ is a diagonal matrix whose $i$th diagonal entry is $m(|N_i(\boldsymbol{x}^k)|+1)$.
To see why $L(\boldsymbol{y},\boldsymbol{x}^k,\boldsymbol{\lambda}^k)\leq z_k(\boldsymbol{y}), \forall \boldsymbol{y}$, we note that, $$\begin{aligned} \nonumber [\nabla_y L(\boldsymbol{y},\boldsymbol{x}^k,\boldsymbol{\lambda}^k)]_i=\sum_{j\in N_i(\boldsymbol{x}^k)}\frac{\partial}{\partial y_i}g_{ij}(y_i,x^k_j)+\Big(\frac{\partial}{\partial y_i}f_i(y_i)-\frac{\partial}{\partial x_i}f_i(x_i^k)\Big),\end{aligned}$$ which implies that $\nabla^2_y L(\boldsymbol{y},\boldsymbol{x}^k,\boldsymbol{\lambda}^k)$ is a *diagonal* matrix with diagonal entries $$\begin{aligned} \nonumber [\nabla^2_y L(\boldsymbol{y},\boldsymbol{x}^k,\boldsymbol{\lambda}^k)]_{ii}=\frac{\partial^2}{\partial y^2_i}f_i(y_i)+\sum_{j\in N_i(\boldsymbol{x}^k)}\frac{\partial^2 g_{ij}(y_i,x^k_{j})}{\partial y^2_i}.\end{aligned}$$ As $|\frac{\partial^2 f_i(y_i)}{\partial y^2_i}|\leq m$ and $|\frac{\partial^2 g_{ij}(y_i,x^k_{j})}{\partial y^2_i}|\leq m, \forall i$, the Hessian matrix is dominated by $P_k$, and the result follows from the Taylor expansion. Thus, the optimal solution to $\min_{\boldsymbol{y}\in \mathbb{R}^n} z_k(\boldsymbol{y})$ is given by $\boldsymbol{x}^k-P_k^{-1}\nabla_y L(\boldsymbol{x}^k,\boldsymbol{x}^k,\boldsymbol{\lambda}^k)$, which is precisely the next state of the dynamics \eqref{eq:asymmetric}.
Therefore, updating the block variable $\boldsymbol{y}$ to $\boldsymbol{x}^{k+1}$, while fixing the other variables to $\boldsymbol{\lambda}=\boldsymbol{\lambda}^k, \boldsymbol{x}=\boldsymbol{x}^k$, decreases the objective function, as $$\begin{aligned} \nonumber L(\boldsymbol{x}^{k+1},\boldsymbol{x}^k,\boldsymbol{\lambda}^k)\leq z_k(\boldsymbol{x}^{k+1})=\min_{\boldsymbol{y}\in \mathbb{R}^n}z_k(\boldsymbol{y})\leq z_k(\boldsymbol{x}^{k})=L(\boldsymbol{x}^{k},\boldsymbol{x}^k,\boldsymbol{\lambda}^k).\end{aligned}$$ Finally, let us fix $\boldsymbol{y}=\boldsymbol{x}^{k+1}, \boldsymbol{\lambda}=\boldsymbol{\lambda}^k$ (which are the solutions to their corresponding sub-optimizations in the BCD method), and consider $\min_{\boldsymbol{x}\in \mathbb{R}^n}L(\boldsymbol{x}^{k+1},\boldsymbol{x},\boldsymbol{\lambda}^k)$. In particular, showing that $L(\boldsymbol{x}^{k+1},\boldsymbol{x}^{k+1},\boldsymbol{\lambda}^k)\leq L(\boldsymbol{x}^{k+1},\boldsymbol{x}^k,\boldsymbol{\lambda}^k)$ will complete the BCD loop and imply that $L(\boldsymbol{y},\boldsymbol{x},\boldsymbol{\lambda})$ is decreasing along the trajectory of the asymmetric dynamics \eqref{eq:asymmetric}. Here is where the role of the penalty term in the objective function comes into play.
More precisely, using the Lipschitz property of the $g_{ij}$ and the fact that $D_f(\boldsymbol{x}^{k+1},\boldsymbol{x}^{k+1})=0$, we can write, $$\begin{aligned} \nonumber L(\boldsymbol{x}^{k+1},\boldsymbol{x}^{k+1},\boldsymbol{\lambda}^k)-L(\boldsymbol{x}^{k+1},\boldsymbol{x}^k,\boldsymbol{\lambda}^k)&=-D_f(\boldsymbol{x}^{k+1},\boldsymbol{x}^k)+\sum_{i=1}^{n}\sum_{j\in N_i(\boldsymbol{x}^k)}\Big(g_{ij}(x^{k+1}_i,x^{k+1}_j)-g_{ij}(x^{k+1}_i,x^{k}_j)\Big)\cr &\leq -D_f(\boldsymbol{x}^{k+1},\boldsymbol{x}^k)+\sum_{i,j}|g_{ij}(x^{k+1}_i,x^{k+1}_j)-g_{ij}(x^{k+1}_i,x^{k}_j)|\cr &\leq -D_f(\boldsymbol{x}^{k+1},\boldsymbol{x}^k)+nL\sum_{j}|x^{k+1}_j-x^{k}_j|\cr &= -D_f(\boldsymbol{x}^{k+1},\boldsymbol{x}^k)+nL\|\boldsymbol{x}^{k+1}-\boldsymbol{x}^k\|_1\leq 0,\end{aligned}$$ where the last inequality is by the assumption on the choice of the Bregman map $f(\cdot)$. This shows that $V(\boldsymbol{x}):=\min_{\boldsymbol{\lambda}\in [0,1]^{n(n-1)}}L(\boldsymbol{x},\boldsymbol{x},\boldsymbol{\lambda})=\sum_{i,j}\min\{g_{ij}(x_i,x_j),0\}$ is nonincreasing along the trajectories of the asymmetric dynamics \eqref{eq:asymmetric}. BCD Method with Change of Variables {#subsec:change-variable} ----------------------------------- In this part, we show how a suitable change of block variables in the BCD method can generate new state-dependent network dynamics whose Lyapunov stability can be established using the same approach as before. The change of variable can be applied to either the state or the network variable (or a combination of both). However, in this section we focus on the more interesting case where the change of variable is applied to the network variable, and we only illustrate the idea of a change of variable for the state through the following simple example.
Let us recall the HK model, where the state of agent $i$ at the next time step is updated to $x_i^{k+1}=\frac{x_i^k+\sum_{j\in N_i(\boldsymbol{x}^k)} x_j^k}{1+|N_i(\boldsymbol{x}^k)|}, i\in [n]$, with the neighborhood set of agent $i$ given by $N_i(\boldsymbol{x}^k)=\{j\in[n]\setminus\{i\}: |x_i^k-x_j^k|\leq \epsilon\}$. It is easy to see that the HK dynamics are invariant with respect to translation of all the states by a constant value. Therefore, without loss of generality we may assume that $x_i^k> 1, \forall i, k$. Now let us define a new state variable by setting $y_i:=\ln (x_i), i\in [n]$. Applying the HK dynamics to these new logarithmic states, we obtain $y_i^{k+1}=\frac{y_i^k+\sum_{j\in N_i(\boldsymbol{y}^k)} y_j^k}{1+|N_i(\boldsymbol{y}^k)|}$, with $N_i(\boldsymbol{y}^k)=\{j\in[n]\setminus\{i\}: |y_i^k-y_j^k|\leq \epsilon\}$, which are Lyapunov stable and converge to an equilibrium. Rewriting these dynamics in terms of the $x$ variables, we get a new class of state-dependent geometric-averaging dynamics $x_i^{k+1}=(\prod_{j\in \bar{N}_i(\boldsymbol{x}^k)}x_j^k)^{\frac{1}{|\bar{N}_i(\boldsymbol{x}^k)|}}$, where $\bar{N}_i(\boldsymbol{x}^k)=\{j\in[n]: e^{-\epsilon}\leq \frac{x_i^k}{x_j^k}\leq e^{\epsilon}\}$, which are also Lyapunov stable and convergent. Now we turn our attention to the more interesting case of changing the network variables. So far, the dual variable $\lambda_{ij}$ in the Lagrangian function $L(\boldsymbol{x},\boldsymbol{\lambda})$ was used to capture the existence of an edge from agent $i$ to agent $j$. In particular, we saw that restricting $\lambda_{ij}$ to the unit interval $[0,1]$ and minimizing $L(\boldsymbol{x},\boldsymbol{\lambda})$ over the network variable in the BCD method automatically forces $\lambda_{ij}$ to take binary values in $\{0,1\}$ (hence capturing the switching behavior of the existence of an edge from $i$ to $j$).
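The equivalence between the two formulations above is easy to verify numerically; the sketch below (illustrative parameters of ours) runs the HK dynamics on $\boldsymbol{y}=\ln(\boldsymbol{x})$ alongside the geometric-averaging dynamics on $\boldsymbol{x}$ and checks that they coincide step by step:

```python
import numpy as np

# Running the HK dynamics on y = ln(x) coincides with the geometric-averaging
# dynamics on x. The ratio test e^{-eps} <= x_i/x_j <= e^{eps} is evaluated in
# its equivalent logarithmic form |ln x_i - ln x_j| <= eps.

def hk_step(y, eps):
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        N = [j for j in range(n) if j != i and abs(y[i] - y[j]) <= eps]
        out[i] = (y[i] + sum(y[j] for j in N)) / (1 + len(N))
    return out

def geom_step(x, eps):
    n = len(x)
    out = np.empty(n)
    for i in range(n):
        # Nbar includes i itself, since x_i / x_i = 1 always passes the ratio test.
        Nbar = [j for j in range(n) if abs(np.log(x[i]) - np.log(x[j])) <= eps]
        out[i] = np.prod(x[Nbar]) ** (1.0 / len(Nbar))
    return out

rng = np.random.default_rng(2)
x = rng.uniform(1.5, 6.0, 5)   # states > 1, as assumed in the text
y = np.log(x)
for _ in range(20):
    x, y = geom_step(x, 0.4), hk_step(y, 0.4)
```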
In particular, fixing the state block variable in $L(\boldsymbol{x},\boldsymbol{\lambda})=\sum_{i,j}\lambda_{ij}g_{ij}(x_i,x_j)$ and minimizing it with respect to $\boldsymbol{\lambda}\in [0,1]^{n(n-1)}$ gives us the network structure for that state, i.e., $\lambda^*_{ij}=\frac{1}{2}\big(1-\mbox{sgn}(g_{ij}(x_i,x_j))\big)$, where $\mbox{sgn}(\cdot)$ is the sign function. This gives a complicated characterization of $\lambda^*_{ij}$, namely a composition of the sign function with the measurement function. An alternative way to recover the same network structure is to first transform the original network variables from $\lambda_{ij}$ to $f_{ij}(\lambda_{ij}):=\frac{1}{2}\big(1-\mbox{sgn}(\lambda_{ij})\big)$, in which case the transformed Lagrangian function becomes $\hat{L}(\boldsymbol{x},\boldsymbol{\lambda})=\sum_{i,j}f_{ij}(\lambda_{ij})g_{ij}(x_i,x_j)$. Now, applying the BCD method to $\hat{L}(\boldsymbol{x},\boldsymbol{\lambda})$ by fixing the state variable and optimizing with respect to the *unconstrained* network variable $\lambda_{ij}\in \mathbb{R}$, we obtain a simpler optimal network variable $\hat{\lambda}_{ij}=g_{ij}(x_i,x_j)$. Note that $$\begin{aligned} \nonumber \hat{L}(\boldsymbol{x},\hat{\boldsymbol{\lambda}})=\min_{\boldsymbol{\lambda}\in \mathbb{R}^{n(n-1)}}\hat{L}(\boldsymbol{x},\boldsymbol{\lambda})=\min_{\boldsymbol{\lambda}\in [0,1]^{n(n-1)}}L(\boldsymbol{x},\boldsymbol{\lambda})=L(\boldsymbol{x},\boldsymbol{\lambda}^*). \end{aligned}$$ Such a transformation on the network variables has three advantages: i) it removes the box constraints (and hence the switching behavior) on the network variables and absorbs them into the structure of the transformation function; ii) the optimal network variable in the BCD method after the change of variable has a simpler form; and iii) by choosing different transfer functions one can obtain different classes of state-dependent network dynamics. The following theorem provides a sample result using the idea of a change of network variables.
\[thm:change-variable\] Let $g_{ij}(x_i,x_j)\in \mathcal{C}^2$ be symmetric convex functions whose second-order partial derivatives are bounded in absolute value by $m$. Moreover, let $f_{ij}(\lambda)$ be symmetric, nonnegative, decreasing continuous functions. Then the following state-dependent network dynamics $$\begin{aligned} \label{eq:change-variable-dynamics} x_i^{k+1}=x_i^k-\frac{\sum_{j}f_{ij}\big(g_{ij}(x_i^k,x_j^k)\big)\frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j)}{2m\sum_{j}f_{ij}\big(g_{ij}(x_i^k,x_j^k)\big)}, \ \ i\in [n]\end{aligned}$$ are Lyapunov stable with the Lyapunov function $V(\boldsymbol{x})=\sum_{i\neq j}\int_{0}^{g_{ij}(x_i,x_j)}\!\!f_{ij}(\lambda)d\lambda$. Let us consider the BCD method applied to the transformed Lagrangian $$\begin{aligned} \nonumber \hat{L}(\boldsymbol{x},\boldsymbol{\lambda}):=\sum_{i\neq j}\Big(f_{ij}(\lambda_{ij})g_{ij}(x_i,x_j)-h_{ij}(\lambda_{ij})\Big),\end{aligned}$$ with $\boldsymbol{x}\in \mathbb{R}^n$ and $\boldsymbol{\lambda}\in \mathbb{R}^{n(n-1)}$, for some real-valued functions $h_{ij}(\lambda_{ij})$ to be determined later. If, for any fixed state $\boldsymbol{x}$, the function $\hat{L}(\boldsymbol{x},\boldsymbol{\lambda})$ has a unique minimum with respect to $\boldsymbol{\lambda}\in \mathbb{R}^{n(n-1)}$, we can apply the BCD method and ensure that this function decreases under the network updates. Now let us first fix the state variable to $\boldsymbol{x}^k$. Assuming differentiability of the functions $f_{ij}, h_{ij}$, to find $\argmin_{\boldsymbol{\lambda}\in \mathbb{R}^{n(n-1)}}\hat{L}(\boldsymbol{x}^k,\boldsymbol{\lambda})$ we set $$\begin{aligned} \label{eq:phi-lamda-stationary} \frac{\partial}{\partial \lambda_{ij}}\hat{L}(\boldsymbol{x}^k,\boldsymbol{\lambda})=f'_{ij}(\lambda_{ij})g_{ij}(x^k_i,x^k_j)-h'_{ij}(\lambda_{ij})=0,\end{aligned}$$ which implies $\frac{h'_{ij}(\lambda_{ij})}{f'_{ij}(\lambda_{ij})}=g_{ij}(x^k_i,x^k_j)$.
Therefore, if we define $h'_{ij}(\lambda):=\lambda f'_{ij}(\lambda)$, or equivalently, $h_{ij}(\lambda):=\int_{0}^{\lambda}sf'_{ij}(s)ds$, then equation \eqref{eq:phi-lamda-stationary} has a unique solution $\lambda^*_{ij}=g_{ij}(x^k_i,x^k_j)$. To show that this solution is the minimizer of $\hat{L}(\boldsymbol{x}^k,\boldsymbol{\lambda})$, we note that $$\begin{aligned} \nonumber \frac{\partial}{\partial \lambda_{ij}}\hat{L}(\boldsymbol{x}^k,\boldsymbol{\lambda})=f'_{ij}(\lambda_{ij})[g_{ij}(x^k_i,x^k_j)-\lambda_{ij}].\end{aligned}$$ Since $f_{ij}(\cdot)$ is a decreasing function, $f'_{ij}(\lambda_{ij})< 0$. Thus, for $\lambda_{ij}\leq g_{ij}(x^k_i,x^k_j)$ the function $\hat{L}(\boldsymbol{x}^k,\boldsymbol{\lambda})$ is decreasing with respect to $\lambda_{ij}$, and for $\lambda_{ij}\ge g_{ij}(x^k_i,x^k_j)$ it is increasing (note that $\hat{L}(\boldsymbol{x}^k,\boldsymbol{\lambda})$ is separable across its $\boldsymbol{\lambda}$-components, so we can analyze each of its summands separately). Thus, given a fixed state $\boldsymbol{x}^k$, the unique global minimum of $\hat{L}(\boldsymbol{x}^k,\boldsymbol{\lambda})$ is attained at $\boldsymbol{\lambda}^*$ with $\lambda^*_{ij}=g_{ij}(x^k_i,x^k_j)$. The rest of the proof follows along the same analysis as in Theorem \[thm:majorizing\], and we only sketch it here. Let us fix the network variable to $\boldsymbol{\lambda}^*$ and consider $\min_{\boldsymbol{x}\in \mathbb{R}^n} \hat{L}(\boldsymbol{x},\boldsymbol{\lambda}^*)$. To find a minimizer, we use an inexact step, minimizing the quadratic upper approximation of $\hat{L}(\boldsymbol{x},\boldsymbol{\lambda}^*)$ at $\boldsymbol{x}^k$.
The $i$th component of the gradient of $\hat{L}(\boldsymbol{x},\boldsymbol{\lambda}^*)$ at $\boldsymbol{x}^k$ is given by $$\begin{aligned} \nonumber [\nabla_{\boldsymbol{x}} \hat{L}(\boldsymbol{x}^k,\boldsymbol{\lambda}^*)]_i&=\sum_{j}\Big(f_{ij}(\lambda^*_{ij})\frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j)+f_{ji}(\lambda^*_{ji})\frac{\partial}{\partial x_i}g_{ji}(x^k_j,x^k_i)\Big)\cr &=2\sum_{j}f_{ij}(\lambda^*_{ij})\frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j)=2\sum_{j}f_{ij}(g_{ij}(x_i^k,x_j^k))\frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j),\end{aligned}$$ where the second equality is by the symmetry of the functions $f_{ij}, g_{ij}$. Similarly, the Hessian matrix is given by $$\begin{aligned} \nonumber [\nabla^2 \hat{L}(\boldsymbol{x},\boldsymbol{\lambda}^*)]_{ij}=\begin{cases} 2\sum_{l\neq i} f_{il}(g_{il}(x_i^k,x_l^k))\frac{\partial^2}{\partial x^2_i}g_{il}(x_i,x_l) & \mbox{if} \ j=i\\ 2f_{ij}(g_{ij}(x_i^k,x_j^k))\frac{\partial^2}{\partial x_i \partial x_j}g_{ij}(x_i,x_j) & \mbox{if} \ j\neq i, \end{cases}\end{aligned}$$ which is dominated by a diagonal matrix with $i$th diagonal entry $4m\sum_{j} f_{ij}(g_{ij}(x_i^k,x_j^k))$. Therefore, the optimal solution to the quadratic upper approximation of $\hat{L}(\boldsymbol{x},\boldsymbol{\lambda}^*)$ is given by \eqref{eq:change-variable-dynamics}, and we have $\hat{L}(\boldsymbol{x}^{k+1},\boldsymbol{\lambda}^*)\leq \hat{L}(\boldsymbol{x}^k,\boldsymbol{\lambda}^*)$. As a result, $$\begin{aligned} \nonumber V(\boldsymbol{x})&=\min_{\boldsymbol{\lambda}\in \mathbb{R}^{n(n-1)}}\hat{L}(\boldsymbol{x},\boldsymbol{\lambda})=\sum_{i\neq j}\Big(f_{ij}(g_{ij}(x_i,x_j))g_{ij}(x_i,x_j)-\int_{0}^{g_{ij}(x_i,x_j)}\!\!\!\!\!\!\!\!\!\!\lambda f'_{ij}(\lambda)d\lambda\Big)=\sum_{i\neq j}\int_{0}^{g_{ij}(x_i,x_j)}\!\!\!\!\!f_{ij}(\lambda)d\lambda,\end{aligned}$$ serves as a Lyapunov function for the dynamics \eqref{eq:change-variable-dynamics}, where the last equality is by integration by parts.
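As a concrete instance of Theorem \[thm:change-variable\] (a sketch under assumptions of ours, not from the text), take $g_{ij}(x_i,x_j)=(x_i-x_j)^2$, whose second-order partials are bounded in absolute value by $m=2$, and the nonnegative decreasing transfer functions $f_{ij}(\lambda)=e^{-\lambda}$, for which $V(\boldsymbol{x})=\sum_{i\neq j}\big(1-e^{-(x_i-x_j)^2}\big)$:

```python
import numpy as np

# Hypothetical instance of the change-of-variable dynamics:
# g_ij(x_i, x_j) = (x_i - x_j)^2 (so m = 2) and f_ij(lambda) = exp(-lambda),
# giving V(x) = sum_{i != j} (1 - exp(-(x_i - x_j)^2)).

def step(x, m=2.0):
    n = len(x)
    out = np.empty(n)
    for i in range(n):
        w = np.array([np.exp(-(x[i] - x[j]) ** 2) for j in range(n) if j != i])
        d = np.array([2.0 * (x[i] - x[j]) for j in range(n) if j != i])
        out[i] = x[i] - float(np.dot(w, d)) / (2.0 * m * float(np.sum(w)))
    return out

def V(x):
    n = len(x)
    return sum(1.0 - np.exp(-(x[i] - x[j]) ** 2)
               for i in range(n) for j in range(n) if j != i)

rng = np.random.default_rng(3)
x = rng.uniform(-3.0, 3.0, 6)
vals = [V(x)]
for _ in range(200):
    x = step(x)
    vals.append(V(x))
```

The resulting update is a smoothly weighted averaging rule: instead of a hard on/off neighborhood, each pair is weighted by $e^{-(x_i-x_j)^2}$, and $V$ is nonincreasing along the trajectory.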
In fact, the differentiability of the transfer functions $f_{ij}$ in the above theorem can be further relaxed: the result holds for any nonnegative, decreasing, symmetric functions $f_{ij}$. In particular, Theorem \[thm:change-variable\] is a substantial extension of [@roozbehani2008lyapunov Corollary 1] (see also [@jabin2014clustering]), which corresponds to the choices $f_{ij}(\lambda)=f(\sqrt{\lambda}), \forall i,j$ and $g_{ij}(x_i,x_j)=(x_i-x_j)^2$. Stability Using Discrete-Time Saddle-Point Dynamics {#sec:Saddle} =================================================== In the previous section, we considered the stability of state-dependent network dynamics when the network structure and the agents’ states are aligned with each other. More precisely, in the application of the BCD method to the Lagrangian function $L(\boldsymbol{x},\boldsymbol{\lambda})$, we considered a double minimization problem $\min_{\boldsymbol{x}}\min_{\boldsymbol{\lambda}}L(\boldsymbol{x},\boldsymbol{\lambda})$, which means that the network coordinator (viewed as a *network player*) breaks/adds links in favor of the agents’ states (viewed as a *state player*). In other words, there is no conflict between the network and state players, as they are both minimizing the same Lagrangian function. But what if the network and state players have conflicting objectives? In that case, we have a 2-player zero-sum game between the network and the state with payoff function $L(\boldsymbol{x},\boldsymbol{\lambda})$, in which the network player aims to maximize it while the state player aims to minimize it, i.e., $\min_{\boldsymbol{x}}\max_{\boldsymbol{\lambda}}L(\boldsymbol{x},\boldsymbol{\lambda})$. To model such conflicting behavior, as before we assume that each agent $i\in[n]$ holds $n-1$ convex measurement functions $g_{ij}(x_i,x_j), j\in[n]\setminus\{i\}$.
In this section we restrict our attention to symmetric measurement functions; however, for asymmetric measurement functions, results similar to Theorem \[thm:asymmetric\] can be obtained. For a given state $\boldsymbol{x}$, two agents $i$ and $j$ become each other's neighbors if $g_{ij}(x_i,x_j)\ge 0$ (note that, as opposed to the previous section, the direction of this logical condition is now reversed). Intuitively, an edge is formed between two agents $i$ and $j$ if and only if their states are far from each other. Now let us consider the same convex program: $$\begin{aligned} \label{eq:convex-saddle-point} &\min \ f(\boldsymbol{x}):=\sum_{i=1}^{n}f_i(x_i)\cr &\mbox{s.t.} \ \ \frac{1}{2}g_{ij}(x_i,x_j)\leq 0, \ \forall i\neq j, \ \ \boldsymbol{x}\in \mathbb{R}^n,\end{aligned}$$ where $f_i(x_i), i\in [n]$ are the agents’ private convex functions.[^3] To solve this problem, one can form the Lagrangian function $L(\boldsymbol{x},\boldsymbol{\lambda})=f(\boldsymbol{x})+\frac{1}{2}\sum_{i,j}\lambda_{ij}g_{ij}(x_i,x_j)$ and solve the saddle-point problem $\min_{\boldsymbol{x}\in \mathbb{R}^n}\max_{\boldsymbol{\lambda}\ge \boldsymbol{0}}L(\boldsymbol{x},\boldsymbol{\lambda})$.[^4] Now, using the KKT optimality conditions, we know that if the constraint $g_{ij}(x_i,x_j)\leq 0$ is satisfied but not *tight* (i.e., $g_{ij}(x_i,x_j)< 0$), then the corresponding optimal dual variable must be zero, i.e., $\lambda_{ij}=0$. Viewing the dual variables as network variables, this means that there is no edge between agents $i$ and $j$. This is consistent with the logical condition of not having an edge between $i$ and $j$ when $g_{ij}(x_i,x_j)< 0$. On the other hand, if the constraint $g_{ij}(x_i,x_j)\leq 0$ is violated (i.e., $g_{ij}(x_i,x_j)> 0$), then the corresponding dual variable must grow unboundedly ($\lambda_{ij}\to\infty$) to achieve the maximum in $\max_{\boldsymbol{\lambda}\ge \boldsymbol{0}}L(\boldsymbol{x},\boldsymbol{\lambda})$.
But if the dual variables are upper bounded by 1, then to achieve the maximum value in $\max_{\boldsymbol{\lambda}\in [0,1]^{n(n-1)}}L(\boldsymbol{x},\boldsymbol{\lambda})$, we must set $\lambda_{ij}$ to its upper bound, i.e., $\lambda_{ij}=1$. This is again consistent with the logical condition of having an edge between $i$ and $j$ when $g_{ij}(x_i,x_j)> 0$. These facts together suggest that the network switches that may occur during the update process of state-dependent network dynamics are merely the KKT optimality conditions that guide the iterates to the optimal solution of \eqref{eq:convex-saddle-point}, assuming that there is a budget constraint on the dual variables. In other words, if the dual variables were free to be chosen from $[0,\infty)$, then the iterates of the dynamics would converge to the optimal solution of \eqref{eq:convex-saddle-point}. However, the budget constraints on the dual variables do not allow us to penalize the violated constraints arbitrarily heavily so as to enforce feasibility. Therefore, the solutions obtained from state-network updates may not necessarily be feasible for \eqref{eq:convex-saddle-point}. Nevertheless, this allows us to view the state-network dynamics as an iterative primal-dual algorithm, guided by KKT optimality conditions, for solving a saddle-point problem with box constraints on the dual variables. Alternatively, the state-dependent network dynamics can be viewed as Nash dynamics in a zero-sum game between a network player and a state player, with budget constraints on the action set of the network player. In the following, we use these observations to develop Lyapunov stable and convergent state-dependent network dynamics using discrete-time saddle-point dynamics. \[thm:subgradient\] Let $g_{ij}(x_i,x_j)\in \mathcal{C}^1$ be symmetric convex and $f_i(x_i)\in \mathcal{C}^1$ be convex functions.
Consider the following dynamics in which agent $i$ updates its state as $$\begin{aligned} \label{eq:dynamics-discrete} x_i^{k+1}=x_i^k-\alpha^k[\frac{\partial}{\partial x_i}f_i(x_i^k)+\!\!\!\!\sum_{j\in N_i(\boldsymbol{x}^k)}\!\!\!\frac{\partial}{\partial x_i}g_{ij}(x_i^k,x_j^k)],\end{aligned}$$ where $N_i(\boldsymbol{x}^k):=\{j: g_{ij}(x_i^k,x_j^k)>0\}$ denotes the set of neighbors of agent $i$ at time $k$. Then for any positive sequence $\alpha^k=\gamma_k[\sum_i\big(\frac{\partial}{\partial x_i}f_i(x_i^k)+\sum_{j\in N_i(\boldsymbol{x}^k)}\!\frac{\partial}{\partial x_i}g_{ij}(x_i^k,x_j^k)\big)^2]^{-\frac{1}{2}}$, with $\lim_k\gamma_k=0$ and $\sum_{k}\gamma_k=\infty$, the dynamics converge to an equilibrium $\boldsymbol{x}^*$. Moreover, for sufficiently small $\alpha^k$, $V(\boldsymbol{x})=\|\boldsymbol{x}-\boldsymbol{x}^*\|^2$ serves as a Lyapunov function. Let us consider the following Lagrangian function $$\begin{aligned} \nonumber L(\boldsymbol{x},\boldsymbol{\lambda})=f(\boldsymbol{x})+\frac{1}{2}\sum_{i,j}\lambda_{ij}g_{ij}(x_i,x_j),\end{aligned}$$ where $f(\boldsymbol{x})=\sum_{i=1}^nf_i(x_i)$, and suppose that we want to solve the following saddle-point problem with box constraints on the dual variables: $$\begin{aligned} \nonumber \min_{\boldsymbol{x}\in \mathbb{R}^n}\max_{\boldsymbol{\lambda}\in [0,1]^{n(n-1)}}L(\boldsymbol{x},\boldsymbol{\lambda})&=\min_{\boldsymbol{x}\in \mathbb{R}^n}\max_{\boldsymbol{\lambda}\in [0,1]^{n(n-1)}}\{f(\boldsymbol{x})+\frac{1}{2}\sum_{i,j}\lambda_{ij}g_{ij}(x_i,x_j)\}\cr &=\min_{\boldsymbol{x}\in \mathbb{R}^n}\big\{f(\boldsymbol{x})+\frac{1}{2}\sum_{i,j}\max\{g_{ij}(x_i,x_j),0\}\big\}.\end{aligned}$$ Defining $\Phi(\boldsymbol{x}):=f(\boldsymbol{x})+\frac{1}{2}\sum_{i,j}\max\{g_{ij}(x_i,x_j),0\}$, and noting that for any $i,j$, $\max\{g_{ij}(x_i,x_j),0\}$ is a convex function, one can easily see that $\Phi(\boldsymbol{x})$ is a convex function of $\boldsymbol{x}$. 
Therefore, applying a subgradient algorithm to the unconstrained convex problem $\min_{\boldsymbol{x}\in\mathbb{R}^n}\Phi(\boldsymbol{x})$ with an appropriate choice of step sizes $\alpha^k, k=1,2,\ldots$, will converge to a minimizer of $\Phi(\boldsymbol{x})$, denoted by $\boldsymbol{x}^*$. More precisely, let us denote the subgradient of $\Phi(\boldsymbol{x})$ at $\boldsymbol{x}^k$ by $g^k$. Then, it is known that the discrete-time dynamics $$\begin{aligned} \label{eq:subgradient-dynamics} \boldsymbol{x}^{k+1}=\boldsymbol{x}^k-\alpha^kg^k,\end{aligned}$$ with diminishing step size $\alpha^k=\frac{\gamma_k}{\|g^k\|}$, where $\lim_k\gamma_k=0$ and $\sum_{k}\gamma_k=\infty$, converge to $\boldsymbol{x}^*$ [@boyd2004convex]. Now let $J=\{(r,s): g_{rs}(x_r^k,x_s^k)>0\}$ and $\bar{J}=\{(r,s): g_{rs}(x_r^k,x_s^k)\leq 0\}$. Then for every $(r,s)\in J$ the function $\max\{g_{rs}(x_r,x_s),0\}$ has a unique subgradient at $\boldsymbol{x}^k$, namely $\nabla g_{rs}(x^k_r,x^k_s)$. Moreover, for every $(r,s)\in \bar{J}$ the minimum of the convex function $\max\{g_{rs}(x_r,x_s),0\}$ equals $0$, which is achieved at $\boldsymbol{x}^k$. Thus $\boldsymbol{0}$ is a subgradient of $\max\{g_{rs}(x_r,x_s),0\}$ at $\boldsymbol{x}^k$ for every $(r,s)\in \bar{J}$. Using the additivity rule for subgradients, we conclude that $g^k=\nabla f(\boldsymbol{x}^k)+\frac{1}{2}\sum_{(r,s)\in J}\nabla g_{rs}(x^k_r,x^k_s)$ is a subgradient of $\Phi(\cdot)$ at $\boldsymbol{x}^k$.
In particular, the $i$th component of $g^k$ is given by $$\begin{aligned} \label{eq:subgradient} g_i^k&=\frac{\partial}{\partial x_i}f(\boldsymbol{x}^k)+\frac{1}{2}\sum_{j}\Big(\boldsymbol{1}_{\{g_{ij}(x^k_i,x^k_j)> 0\}} \frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j)+\boldsymbol{1}_{\{g_{ji}(x^k_j,x^k_i)> 0\}} \frac{\partial}{\partial x_i}g_{ji}(x^k_j,x^k_i)\Big)\cr &=\frac{\partial}{\partial x_i}f_i(x_i^k)+\sum_{j}\Big(\boldsymbol{1}_{\{g_{ij}(x^k_i,x^k_j)> 0\}}\cdot \frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j)\Big)\cr &=\frac{\partial}{\partial x_i}f_i(x_i^k)+\sum_{j\in N_i(\boldsymbol{x}^k)}\frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j),\end{aligned}$$ where $\boldsymbol{1}_{\{\cdot\}}$ is the indicator function, the second equality holds by the symmetry $g_{ij}(x_i,x_j)=g_{ji}(x_j,x_i)$, and the last equality follows from the definition of edge emergence between nodes $i$ and $j$. Substituting into we obtain the desired dynamics . Finally, using the definition of the subgradient we can write $$\begin{aligned} \nonumber \|\boldsymbol{x}^{k+1}-\boldsymbol{x}^*\|^2&=\|\boldsymbol{x}^{k}-\boldsymbol{x}^*-\alpha^kg^{k}\|^2\cr &=\|\boldsymbol{x}^{k}-\boldsymbol{x}^*\|^2+(\alpha^k)^2\|g^k\|^2-2\alpha^k(g^{k})^T(\boldsymbol{x}^{k}-\boldsymbol{x}^*)\cr &\leq \|\boldsymbol{x}^{k}-\boldsymbol{x}^*\|^2+(\alpha^k)^2\|g^k\|^2-2\alpha^k(\Phi(\boldsymbol{x}^{k})-\Phi(\boldsymbol{x}^*)).\end{aligned}$$ Therefore, for any $\alpha^{k}\in [0, \frac{2(\Phi(\boldsymbol{x}^{k})-\Phi(\boldsymbol{x}^*))}{\|g^k\|^2}]$, we have $\|\boldsymbol{x}^{k+1}-\boldsymbol{x}^*\|^2\leq \|\boldsymbol{x}^{k}-\boldsymbol{x}^*\|^2$. Let $X$ be the set of minimizers of $\min_{\boldsymbol{x}\in \mathbb{R}^n}\Phi(\boldsymbol{x})$, which is a nonempty closed convex set.
Moreover, let $d(\boldsymbol{x},X)=\|\boldsymbol{x}-\Pi_X[\boldsymbol{x}]\|$ be the minimum distance of the point $\boldsymbol{x}$ from the set $X$, where $\Pi_X[\boldsymbol{x}]$ is the projection of $\boldsymbol{x}$ onto the set $X$. Since $\|\boldsymbol{x}^{k+1}-\boldsymbol{x}^*\|^2\leq \|\boldsymbol{x}^{k}-\boldsymbol{x}^*\|^2$ for any $\boldsymbol{x}^*\in X$, by choosing $\boldsymbol{x}^*=\Pi_X[\boldsymbol{x}^k]$ we have $$\begin{aligned} \nonumber d(\boldsymbol{x}^{k+1}, X)=\|\boldsymbol{x}^{k+1}\!-\!\Pi_X[\boldsymbol{x}^{k+1}]\|\leq \|\boldsymbol{x}^{k+1}\!-\!\Pi_X[\boldsymbol{x}^{k}]\|\leq \|\boldsymbol{x}^{k}\!-\!\Pi_X[\boldsymbol{x}^{k}]\|=d(\boldsymbol{x}^{k}, X).\end{aligned}$$ Thus for a sufficiently small step size $\alpha^{k}\in [0, \frac{2(\Phi(\boldsymbol{x}^{k})-\Phi(\boldsymbol{x}^*))}{\|g^k\|^2}]$, the distance of the iterates to the optimal set $X$ also serves as a Lyapunov function. An interesting special case of Theorem \[thm:subgradient\] is when $f_i(x_i)=0, \forall i\in [n]$, and the set of constraints $\{g_{ij}(x_i,x_j)\leq 0, \forall i,j\}$ is feasible. In this case the set of minimizers of the function $\Phi(\boldsymbol{x})=\frac{1}{2}\sum_{i,j}\max\{g_{ij}(x_i,x_j),0\}$ is precisely the feasible set $\{\boldsymbol{x}\in\mathbb{R}^n: g_{ij}(x_i,x_j)\leq 0, \forall i,j\}$. In particular, the minimum value of $\Phi(\cdot)$ is zero, which is attained at any feasible point $\boldsymbol{x}^*\in \{\boldsymbol{x}\in\mathbb{R}^n: g_{ij}(x_i,x_j)\leq 0, \forall i,j\}$.
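For this feasibility special case, the discrete-time subgradient dynamics can be sketched numerically. The following is a minimal sketch (not the original implementation), assuming scalar states and the hypothetical quadratic measurement $g_{ij}(x_i,x_j)=\frac{1}{2}(x_i-x_j)^2-\frac{1}{2}\epsilon^2$, so that an edge is active exactly when $|x_i-x_j|>\epsilon$:

```python
import numpy as np

def subgradient_network_dynamics(x0, eps=1.0, iters=2000):
    """Feasibility special case of the state-dependent dynamics
    (f_i = 0, g_ij(x_i, x_j) = (x_i - x_j)^2/2 - eps^2/2):
    an edge (i, j) is active whenever g_ij > 0, i.e. |x_i - x_j| > eps,
    and each agent takes a subgradient step along its active edges."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        d = x[:, None] - x[None, :]          # d[i, j] = x_i - x_j
        active = np.abs(d) > eps             # neighbor sets N_i(x^k)
        g = (d * active).sum(axis=1)         # i-th subgradient component
        norm = np.linalg.norm(g)
        if norm < 1e-12:                     # all constraints satisfied
            break
        gamma = 1.0 / np.sqrt(k)             # diminishing, non-summable gamma_k
        x = x - (gamma / norm) * g           # alpha^k = gamma_k / ||g^k||
    return x

x_final = subgradient_network_dynamics([0.0, 3.0, 6.0], eps=4.0)
```

Starting from spread-out states, the iterates contract the violated pairwise gaps until every $|x_i-x_j|\leq\epsilon$, at which point the subgradient vanishes.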
Now if the norm of the gradient of each measurement function $g_{ij}$ is bounded above by a constant $G$, we can write $$\begin{aligned} \label{eq:epsilon-feasible} \frac{2(\Phi(\boldsymbol{x}^{k})-\Phi(\boldsymbol{x}^*))}{\|g^k\|^2}&=\frac{2\Phi(\boldsymbol{x}^{k})}{\|g^k\|^2}=\frac{\sum_i\sum_{j\in N_i(\boldsymbol{x}^k)}g_{ij}(x^k_i,x^k_j)}{\sum_i\big(\sum_{j\in N_i(\boldsymbol{x}^k)}\frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j)\big)^2}\cr &\ge \frac{\sum_i\sum_{j\in N_i(\boldsymbol{x}^k)}g_{ij}(x^k_i,x^k_j)}{n\sum_{i,j}\big(\frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j)\big)^2}\ge \frac{\max_{i,j}\{g_{ij}(x^k_i,x^k_j)\}}{n^2G^2}. \end{aligned}$$ Let us define the $\epsilon$-equilibrium set as the set of all points at which each constraint $g_{ij}(x_i,x_j)\leq 0$ is violated by at most $\epsilon$, i.e., $X_{\epsilon}=\{\boldsymbol{x}\in\mathbb{R}^n: g_{ij}(x_i,x_j)\leq \epsilon, \forall i,j\}$, and consider the dynamics with the constant step size $\alpha^k=\frac{\epsilon}{n^2G^2}$. Then if $\boldsymbol{x}^k\notin X_{\epsilon}$, we have $\max_{i,j}\{g_{ij}(x^k_i,x^k_j)\}>\epsilon$, which in view of implies that $\alpha^k\in [0, \frac{2(\Phi(\boldsymbol{x}^{k})-\Phi(\boldsymbol{x}^*))}{\|g^k\|^2}]$. This shows that as long as $\boldsymbol{x}^k\notin X_{\epsilon}$, the squared distance $d^2(\boldsymbol{x}^k, X)$ serves as a Lyapunov function and we can write $$\begin{aligned} \nonumber d^2(\boldsymbol{x}^{k+1}, X)&\leq d^2(\boldsymbol{x}^{k}, X)+(\alpha^k)^2\|g^k\|^2-2\alpha^k\Phi(\boldsymbol{x}^{k})\cr &=d^2(\boldsymbol{x}^{k}, X)+\frac{\epsilon^2}{n^4G^4}\|g^k\|^2-\frac{2\epsilon}{n^2G^2}\Phi(\boldsymbol{x}^{k})\cr &\leq d^2(\boldsymbol{x}^{k}, X)+\frac{\epsilon^2}{n^4G^4}n^2G^2-\frac{2\epsilon}{n^2G^2}\epsilon=d^2(\boldsymbol{x}^{k}, X)-\frac{\epsilon^2}{n^2G^2},\end{aligned}$$ where in the last inequality we have used the facts that $\|g^k\|^2\leq n^2G^2$ and $\Phi(\boldsymbol{x}^k)\ge \epsilon$ (as $\boldsymbol{x}^k\notin X_{\epsilon}$). Since $d^2(\boldsymbol{x}^{k},X)\ge 0, \forall k$, we conclude that after at most $\frac{d^2(\boldsymbol{x}^0,X)n^2G^2}{\epsilon^2}$ iterations the state-dependent network dynamics $x_i^{k+1}=x_i^k-\frac{\epsilon}{n^2G^2}\sum_{j\in N_i(\boldsymbol{x}^k)}\!\frac{\partial}{\partial x_i}g_{ij}(x_i^k,x_j^k)$ reach the $\epsilon$-equilibrium set $X_{\epsilon}$. Saddle-Point Dynamics with Heterogeneous Step Size -------------------------------------------------- The subgradient method is not the only algorithm for minimizing a convex function; alternative algorithms give rise to different state-dependent network dynamics. The following theorem provides another multiagent network dynamics, motivated by the fact that different agents often have different scaling parameters in their update rules. These dynamics can be viewed as the *quasi-Newton* method [@boyd2004convex] in the context of multiagent network dynamics. A function $V:\mathbb{R}^{n}\to \mathbb{R}$ is called a semi-Lyapunov function for the discrete-time dynamics $\boldsymbol{z}^{k+1}=h(\boldsymbol{z}^k), \ k=0,1,2,\ldots$, if $V(\boldsymbol{z}^{k+1})< V(\boldsymbol{z}^k)$ for any $\boldsymbol{z}^k\in \mathbb{R}^{n}\setminus D$, where $D$ is a measure-zero subset of $\mathbb{R}^{n}$. \[thm:Newton\] Let $g_{ij}\in\mathbb{C}^2$ be symmetric convex, and $f\in\mathbb{C}$ be a convex function. Define $\Phi(\boldsymbol{x}):=f(\boldsymbol{x})+\frac{1}{2}\sum_{i,j}\max\{g_{ij}(x_i,x_j),0\}$, and let $D:=\{\boldsymbol{x}: g_{ij}(x_i,x_j)=0 \ \mbox{for some}\ i,j\}$ be the measure-zero set of nondifferentiability points of $\Phi(\boldsymbol{x})$.
If for any $\boldsymbol{x}\notin D$ there exists a positive-definite diagonal matrix $G^{\boldsymbol{x}}$ such that $\Phi(\boldsymbol{y})\leq \Phi(\boldsymbol{x})+\!(\boldsymbol{y}-\boldsymbol{x})^Tg^{\boldsymbol{x}}\!+\!\frac{1}{2}(\boldsymbol{y}-\boldsymbol{x})^TG^{\boldsymbol{x}}(\boldsymbol{y}-\boldsymbol{x}), \forall\boldsymbol{y}\in L_{\boldsymbol{x}}$, where $g^{\boldsymbol{x}}$ and $L_{\boldsymbol{x}}:=\{\boldsymbol{y}:\Phi(\boldsymbol{y})\leq \Phi(\boldsymbol{x})\}$ are the subgradient and the level set of $\Phi(\cdot)$ at $\boldsymbol{x}$, respectively, then the dynamics $$\begin{aligned} \label{eq:newton-dynamics} x_i^{k+1}=x_i^k-\frac{1}{G^k_{ii}}[\frac{\partial}{\partial x_i}f(\boldsymbol{x}^k)+\!\!\!\!\sum_{j\in N_i(\boldsymbol{x}^k)}\!\!\!\frac{\partial}{\partial x_i}g_{ij}(x_i^k,x_j^k)], \ \ \ i\in[n],\end{aligned}$$ admit the semi-Lyapunov function $\Phi(\boldsymbol{x})$. In particular, if there exists a constant $m>0$ such that $G^k\le m I, \forall k$, and $\boldsymbol{x}^k\in D$ for at most finitely many iterates $k$, then the dynamics will converge to the set of minimizers of $\Phi(\boldsymbol{x})$. Consider the convex function $\Phi(\boldsymbol{x}):=f(\boldsymbol{x})+\frac{1}{2}\sum_{i,j}\max\{g_{ij}(x_i,x_j),0\}$, which is differentiable at every point except on $D:=\{\boldsymbol{x}: g_{ij}(x_i,x_j)=0 \ \mbox{for some}\ i,j\}$. As before, we know that the $i$th component of the gradient of $\Phi(\boldsymbol{x})$ at $\boldsymbol{x}^k$ (a subgradient if $\boldsymbol{x}^k\in D$) is given by $g_i^k=\frac{\partial}{\partial x_i}f(\boldsymbol{x}^k)+\sum_{j\in N_i(\boldsymbol{x}^k)}\frac{\partial}{\partial x_i}g_{ij}(x^k_i,x^k_j)$.
By the assumption, there is a positive-definite diagonal matrix $G^k$ such that $$\begin{aligned} \nonumber u_k(\boldsymbol{y}):=\Phi(\boldsymbol{x}^k)+(\boldsymbol{y}-\boldsymbol{x}^k)^Tg^k+\frac{1}{2}(\boldsymbol{y}-\boldsymbol{x}^k)^TG^k(\boldsymbol{y}-\boldsymbol{x}^k),\end{aligned}$$ forms a quadratic upper bound for $\Phi(\boldsymbol{y}), \forall \boldsymbol{y}\in L_{\boldsymbol{x}^k}$. Clearly, we have $u_k(\boldsymbol{x}^k)=\Phi(\boldsymbol{x}^k)$. On the other hand, it is easy to see that $$\begin{aligned} \nonumber \boldsymbol{x}^{k+1}=\boldsymbol{x}^k-(G^k)^{-1}g^k=\argmin_{\boldsymbol{y}\in \mathbb{R}^n}u_k(\boldsymbol{y}).\end{aligned}$$ Now let us consider an arbitrary $\boldsymbol{x}^k\notin D$. Then $g^k=\nabla \Phi(\boldsymbol{x}^k)$, and thus $-(G^k)^{-1}g^k$ is a descent direction for any positive-definite matrix $(G^k)^{-1}$. This means that for sufficiently small $\delta>0$, $\Phi(\boldsymbol{x}^k-\delta (G^k)^{-1}g^k)\leq \Phi(\boldsymbol{x}^k)$, and hence $\boldsymbol{x}^k-\delta (G^k)^{-1}g^k\in L_{\boldsymbol{x}^k}$. Therefore, the line segment $\{(1-\alpha)\boldsymbol{x}^k+\alpha\boldsymbol{x}^{k+1}, \alpha\in [0,1]\}$ intersects $L_{\boldsymbol{x}^k}$ in at least two different points (corresponding to $\alpha=0$ and $\alpha=\delta$). Now if $\boldsymbol{x}^{k+1}\notin L_{\boldsymbol{x}^k}$, that line segment must intersect the boundary of $L_{\boldsymbol{x}^k}$ at another point $\bar{\boldsymbol{x}}:=\bar{\alpha}\boldsymbol{x}^k+(1-\bar{\alpha})\boldsymbol{x}^{k+1}$, for some $\bar{\alpha}\in (0,1)$ (note that the level set $L_{\boldsymbol{x}^k}$ is a closed convex set).
By continuity of $\Phi(\cdot)$, $\Phi(\bar{\boldsymbol{x}})=\Phi(\boldsymbol{x}^k)$, and we can write $$\begin{aligned} \nonumber u_k(\boldsymbol{x}^k)=\Phi(\boldsymbol{x}^k)=\Phi(\bar{\boldsymbol{x}})\leq u_k(\bar{\boldsymbol{x}})\leq \bar{\alpha} u_k(\boldsymbol{x}^k)+(1-\bar{\alpha})u_k(\boldsymbol{x}^{k+1})<u_k(\boldsymbol{x}^k),\end{aligned}$$ where the first inequality holds because $\Phi(\boldsymbol{y})\leq u_k(\boldsymbol{y}), \forall \boldsymbol{y}\in L_{\boldsymbol{x}^k}$, and the second inequality follows from the convexity of $u_k(\cdot)$. This contradiction shows that $\boldsymbol{x}^{k+1}\in L_{\boldsymbol{x}^k}$, which implies $\Phi(\boldsymbol{x}^{k+1})\leq \Phi(\boldsymbol{x}^{k})$. Therefore, $\Phi(\cdot)$ serves as a semi-Lyapunov function for the dynamics . In particular, the drift of this Lyapunov function at $\boldsymbol{x}^k\notin D$ equals $$\begin{aligned} \nonumber \Phi(\boldsymbol{x}^k)-\Phi(\boldsymbol{x}^{k+1})&\ge \Phi(\boldsymbol{x}^k)-u_k(\boldsymbol{x}^{k+1})\cr &= \Phi(\boldsymbol{x}^k)-[\Phi(\boldsymbol{x}^k)\!+\!(\boldsymbol{x}^{k+1}\!-\!\boldsymbol{x}^k)^Tg^k\!+\!\frac{1}{2}(\boldsymbol{x}^{k+1}\!-\!\boldsymbol{x}^k)^TG^k(\boldsymbol{x}^{k+1}\!-\!\boldsymbol{x}^k)]\cr &=\frac{1}{2}(g^k)^T(G^k)^{-1}g^k=\frac{1}{2}\|g^k\|^2_{(G^k)^{-1}}.\end{aligned}$$ Summing the above inequality over $k=0,\ldots,K-1$, we obtain $$\begin{aligned} \nonumber \Phi(\boldsymbol{x}^{K})-\sum_{\{k: \boldsymbol{x}^k\in D\}}(\Phi(\boldsymbol{x}^{k+1})-\Phi(\boldsymbol{x}^{k}))\leq \Phi(\boldsymbol{x}^{0})-\frac{1}{2}\sum_{\{k: \boldsymbol{x}^k\notin D\}}\|g^{k}\|^2_{(G^k)^{-1}}.\end{aligned}$$ As this relation holds for any $K$, and $|\{k: \boldsymbol{x}^k\in D\}|<\infty$ by the assumption, we must have $\sum_{\{k: \boldsymbol{x}^k\notin D\}}\|g^{k}\|^2_{(G^k)^{-1}}<\infty$, and hence $\lim_{k\to \infty}\|g^{k}\|^2_{(G^k)^{-1}}=0$. Thus, if there exists $m>0$ such that $G^k\le m I, \forall k$, we get $\lim_{k\to \infty}\|g^{k}\|=0$.
Since $\Phi(\cdot)$ is a convex function, its set of minimizers is exactly the set of points having $\boldsymbol{0}$ as a subgradient. This shows that $\{\boldsymbol{x}^k\}_{k=0}^{\infty}$ must converge to the set of minimizers of $\Phi(\cdot)$. A natural choice for the matrices $G^k$ in Theorem \[thm:Newton\] is the Hessian matrix $\nabla^2 \Phi(\boldsymbol{x}^k)$, which is used in the Newton method for minimizing a smooth convex function. However, in practice it is often easier to work with a sparse modification of $\nabla^2\Phi(\boldsymbol{x}^k)$ given by a diagonal matrix containing only the diagonal entries of $\nabla^2 \Phi(\boldsymbol{x}^k)$. In particular, to ensure positive definiteness, the identity matrix is added to this diagonal matrix to form the quasi-Newton update rule. Using such a quasi-Newton method for minimizing $\Phi(\cdot)$, one obtains the following state-dependent network dynamics $$\begin{aligned} \label{eq:hesian-newton} x_i^{k+1}=x_i^k-t_k\frac{\frac{\partial}{\partial x_i}f(\boldsymbol{x}^k)+\!\sum_{j\in N_i(\boldsymbol{x}^k)}\!\frac{\partial}{\partial x_i}g_{ij}(x_i^k,x_j^k)}{1+\frac{\partial^2}{\partial x^2_i}f(\boldsymbol{x}^k)+\!\sum_{j\in N_i(\boldsymbol{x}^k)}\!\frac{\partial^2}{\partial x^2_i}g_{ij}(x_i^k,x_j^k)},\end{aligned}$$ where $t_k$ is an appropriately chosen step size obtained using a line search or a diminishing rule. In fact, it is known that in a sufficiently small neighborhood of the minimizers of $\Phi(\cdot)$, the Newton method with step size $t_k=1$ converges quadratically fast to the set of optimal points [@boyd2004convex]. Therefore, we obtain a simple explanation for the convergence properties and equilibrium points of seemingly complex state-dependent network dynamics using the well-known quasi-Newton method.
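One iterate of this diagonal quasi-Newton update can be sketched as follows. This is a minimal sketch rather than the original implementation; it assumes a separable $f(\boldsymbol{x})=\sum_i f_i(x_i)$, and the callables `df`, `d2f`, `dg`, `d2g` (first and second partial derivatives of $f_i$ and $g_{ij}$ with respect to $x_i$) as well as the step size `t` are hypothetical interface choices:

```python
import numpy as np

def quasi_newton_step(x, df, d2f, g, dg, d2g, t=1.0):
    """One diagonal quasi-Newton iterate: each agent i scales its
    (sub)gradient by 1 / (1 + diagonal Hessian entry), where the sums
    run only over the active neighbors N_i(x) = {j : g_ij(x_i, x_j) > 0}."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x_new = np.empty(n)
    for i in range(n):
        nbrs = [j for j in range(n) if j != i and g(x[i], x[j]) > 0]
        num = df(x[i]) + sum(dg(x[i], x[j]) for j in nbrs)          # gradient part
        den = 1.0 + d2f(x[i]) + sum(d2g(x[i], x[j]) for j in nbrs)  # 1 + Hessian diag
        x_new[i] = x[i] - t * num / den
    return x_new
```

For example, with $f_i(x_i)=x_i^2/2$ and $g_{ij}(x_i,x_j)=\frac{1}{2}(x_i-x_j)^2-\frac{1}{2}$, repeated application drives the states toward the minimizer of $\Phi$.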
In particular, this provides a rigorous explanation of why the trajectories of state-dependent network dynamics of the form (such as the HK model) converge exponentially fast as they get close to their equilibrium points. Let us consider a special case where $f=0$ and $g_{ij}(x_i,x_j)=\frac{1}{2}(x_i-x_j)^2-\frac{\epsilon_{ij}^2}{2}$, where $\epsilon_{ij}=\epsilon_{ji}>0$. This means that two agents $i$ and $j$ become each other's neighbors if their distance is larger than $\epsilon_{ij}$. Note that this is the complement of the original HK model. In this case, $\Phi(\boldsymbol{x})\!=\!\frac{1}{2}\sum_{ij}\max\{\frac{1}{2}(x_i-x_j)^2-\frac{\epsilon_{ij}^2}{2}, 0\}$, and thus for $\boldsymbol{x}^k\notin D:=\{\boldsymbol{x}: |x_i-x_j|=\epsilon_{ij}, \ \mbox{for some}\ i,j \}$, we have $$\begin{aligned} \nonumber &\nabla_i\Phi(\boldsymbol{x}^k)=\sum_{j\in N_i(\boldsymbol{x}^k)}(x^k_i-x^k_j)=|N_i(\boldsymbol{x}^k)|x_i^k-\!\!\!\sum_{j\in N_i(\boldsymbol{x}^k)}\!\!x^k_j,\cr &\nabla^2_{ij}\Phi(\boldsymbol{x}^k)=\begin{cases}|N_i(\boldsymbol{x}^k)|\ \ &\mbox{if} \ i=j\\ -1\ \ &\mbox{if} \ j\in N_i(\boldsymbol{x}^k)\\ 0\ \ &\mbox{else}. \end{cases}\end{aligned}$$ In other words, the Hessian matrix at $\boldsymbol{x}^k$ equals the Laplacian of the connectivity network at state $\boldsymbol{x}^k$. As a result, the quasi-Newton dynamics for minimizing the piecewise quadratic function $\Phi(\boldsymbol{x})$ become $$\begin{aligned} \nonumber x_i^{k+1}=x_i^k-t_k\frac{|N_i(\boldsymbol{x}^k)|x_i^k-\!\sum_{j\in N_i(\boldsymbol{x}^k)}\!x^k_j}{|N_i(\boldsymbol{x}^k)|+1}.\end{aligned}$$ In particular, for a sufficiently small step size $t_k$, the function $\Phi(\boldsymbol{x})$ serves as a semi-Lyapunov function.
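One iterate of these quasi-Newton dynamics for the complement-HK model can be sketched as follows (a minimal sketch; the function name and the step-size parameter `t` are our own choices, assuming a common threshold $\epsilon_{ij}=\epsilon$ for all pairs):

```python
import numpy as np

def complement_hk_step(x, eps, t=0.5):
    """One diagonal quasi-Newton iterate for the complement-HK model:
    N_i(x) = {j : |x_i - x_j| > eps}, and the scaling 1/(|N_i| + 1)
    comes from the regularized Laplacian diagonal."""
    x = np.asarray(x, dtype=float)
    new = np.empty_like(x)
    for i in range(len(x)):
        nbrs = np.abs(x - x[i]) > eps   # agents farther away than eps
        nbrs[i] = False
        m = nbrs.sum()                  # |N_i(x)|
        new[i] = x[i] - t * (m * x[i] - x[nbrs].sum()) / (m + 1)
    return new
```

With unit step size `t=1`, each agent simply averages its own state with those of its (distant) neighbors, which matches the averaging form discussed next.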
Note that for unit step size $t_k=1$, the above dynamics can be written explicitly as $$\begin{aligned} \label{eq:unit-quasi} x_i^{k+1}=\frac{\sum_{j\in N_i(\boldsymbol{x}^k)\cup\{i\}}x_j^k}{|N_i(\boldsymbol{x}^k)|+1}.\end{aligned}$$ As a result, the dynamics of the complement-HK model can be viewed as the iterates of a quasi-Newton method with unit step size for minimizing $\Phi(\boldsymbol{x})$. Of course, for $t_k=1$ there is no reason why $\Phi(\boldsymbol{x})$ should serve as a Lyapunov function, unless the initial point of the dynamics is sufficiently close to a minimizer of $\Phi(\cdot)$ (in which case the exponentially fast convergence of the quasi-Newton method with $t_k=1$ is guaranteed). Nevertheless, the function $\Phi(\boldsymbol{x})$ is still very useful, as it globally guides the dynamics based on quasi-Newton iterates. In particular, the set of minimizers of $\Phi(\boldsymbol{x})$ characterizes the equilibrium points of .[^5] This is because if $\lim_{k}\boldsymbol{x}^k=\boldsymbol{x}^*$, we must have $\boldsymbol{x}^*=\lim_{k}\boldsymbol{x}^{k+1}=\boldsymbol{x}^*-\lim_{k}(G^k)^{-1}\nabla \Phi(\boldsymbol{x}^k)$, where $(G^k)^{-1}=diag(\frac{1}{|N_1(\boldsymbol{x}^k)|+1},\ldots,\frac{1}{|N_n(\boldsymbol{x}^k)|+1})$. This implies that $\lim_{k\to \infty}(G^k)^{-1}\nabla \Phi(\boldsymbol{x}^k)=0$. As the entries of $(G^k)^{-1}$ are uniformly bounded below by $\frac{1}{n+1}$, we must have $\lim_{k\to \infty}\nabla \Phi(\boldsymbol{x}^k)=0$, and the result follows from the convexity of $\Phi(\cdot)$. Continuous-Time Constrained Saddle-Point Dynamics {#sec:continuous-saddle} ================================================= In this section, we extend our discrete-time saddle-point dynamics to their continuous-time counterparts and show how they can be leveraged to establish Lyapunov stability of state-dependent network dynamics.
Here, due to the continuity of the time index $t\in [0,\infty)$, the edge connectivity between a pair of agents $(i,j)$ is no longer a binary event $\lambda_{ij}\in \{0,1\}$, but rather a continuous weight function of time $\lambda_{ij}(t)\in [0, 1]$. Thus $\lambda_{ij}(t)$ can be viewed as the connectivity strength between agents $i$ and $j$ at time $t$, such that the maximum influence that two agents can have on each other is $1$ (i.e., fully connected) and the minimum influence is $0$ (i.e., fully disconnected). Motivated by the method of change of variables for discrete-time dynamics in Section \[subsec:change-variable\], we state our results for continuous-time dynamics in a more general form in which the agents' states are transformed from $x_i$ to $p_i(x_i)$, and the network variables are transformed from $\lambda_{ij}$ to $q_{ij}(\lambda_{ij})$. Here we assume that $p_i(\cdot),q_{ij}(\cdot)$ are continuous and nondecreasing functions such that $p_i(0)=q_{ij}(0)=0, \forall i,j$. In particular, we let the Lagrangian function have the more general form $L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))$, as long as its partial derivatives exist and it is convex with respect to its first argument $p(\boldsymbol{x})=(p_1(x_1),\ldots,p_n(x_n))^T$ and concave with respect to its second argument $q(\boldsymbol{\lambda})=\big(q_{ij}(\lambda_{ij}), i\neq j\big)^T$. \[rem:special-standard\] A special case of the above setting is when $p_i(x_i)=x_i, q_{ij}(\lambda_{ij})=\lambda_{ij}$ are identity functions, and $L(\boldsymbol{x},\boldsymbol{\lambda})=\sum_if_i(x_i)+\sum_{i\neq j}\lambda_{ij}g_{ij}(x_i,x_j), \boldsymbol{\lambda}\ge 0, \boldsymbol{x}\in \mathbb{R}^n$. It is clear that for convex measurement functions $g_{ij},f_i$, the standard Lagrangian function $L(\boldsymbol{x},\boldsymbol{\lambda})$ is convex with respect to $\boldsymbol{x}$, and concave (linear) with respect to $\boldsymbol{\lambda}$.
To introduce a general class of continuous-time state-dependent network dynamics, let us consider the following static constrained saddle-point problem: $$\begin{aligned} \label{eq:saddle-point-general} \min_{\boldsymbol{x}\in \mathbb{R}^n}\max_{\boldsymbol{\lambda}\in[0,1]^{n(n-1)}}L(p(\boldsymbol{x}),q(\boldsymbol{\lambda})).\end{aligned}$$ To solve this static saddle-point problem using continuous-time dynamics, we use the idea of *gradient flow*, which was initially introduced in the seminal work of Arrow-Hurwicz-Uzawa [@arrow1958studies] and subsequently used in devising primal-dual algorithms for solving constrained optimization problems [@feijer2010stability]. However, to adapt these dynamics to our more general setting, which has both lower and upper bound constraints on the dual variable $\boldsymbol{\lambda}$, we introduce the following generalized gradient flow dynamics: $$\begin{aligned} \label{eq:flow-constrained} &\dot{\boldsymbol{x}}(t)=-\nabla_{p(\boldsymbol{x})}L\big(p(\boldsymbol{x}),q(\boldsymbol{\lambda})\big)\cr &\dot{\boldsymbol{\lambda}}(t)=\big[\nabla_{q(\boldsymbol{\lambda})}L\big(p(\boldsymbol{x}),q(\boldsymbol{\lambda})\big)\big]^{[0,1]}_{\boldsymbol{\lambda}},\end{aligned}$$ where in the above dynamics $\nabla_{p(\boldsymbol{x})}L(p(\boldsymbol{x}),q(\boldsymbol{\lambda})):=\big(\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial p_1(x_1)},\ldots,\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial p_n(x_n)}\big)^T$ (similarly for $\nabla_{q(\boldsymbol{\lambda})}L\big(p(\boldsymbol{x}),q(\boldsymbol{\lambda})\big)$), and $[a]^{[0,1]}_{\lambda}$ denotes the projection of the network dynamics onto the unit interval, $$\begin{aligned} \nonumber [a]^{[0,1]}_{\lambda}=\begin{cases}\min\{0,a\}, \ \ \ &\mbox{if} \ \lambda=1\\ a &\mbox{if} \ 0<\lambda<1\\ \max\{0,a\} &\mbox{if} \ \lambda=0. \end{cases}\end{aligned}$$ When $a$ is a vector rather than a scalar, the above projection is taken coordinatewise.
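Coordinatewise, this projection can be sketched as follows (a hypothetical helper, not part of the original analysis):

```python
import numpy as np

def box_projection(a, lam):
    """Coordinatewise projected vector field [a]^{[0,1]}_lambda: the rate a
    is zeroed whenever it would push lambda outside the unit interval."""
    a = np.asarray(a, dtype=float)
    lam = np.asarray(lam, dtype=float)
    out = a.copy()
    out[(lam >= 1.0) & (a > 0)] = 0.0   # at the upper face only decrease is allowed
    out[(lam <= 0.0) & (a < 0)] = 0.0   # at the lower face only increase is allowed
    return out
```

Note that a rate pointing back into the interval (e.g. $a<0$ at $\lambda=1$) passes through unchanged, exactly as in the case analysis above.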
The reason for introducing such a projection is that if for a pair of agents $(i,j)$ we have $\lambda_{ij}(t)\in(0,1)$, the edge variable $\lambda_{ij}(t)$ has not hit the boundary points $\{0,1\}$, and it can freely increase or decrease without violating the box constraint $\lambda_{ij}(t) \in[0,1]$. But if $\lambda_{ij}(t)=1$, then this edge variable is only allowed to decrease, and thus $\dot{\lambda}_{ij}(t)\leq 0$. Therefore, if $\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}\ge 0$, we set $\dot{\lambda}_{ij}(t)=0$ to block any further increase of $\lambda_{ij}(t)$. Similarly, if $\lambda_{ij}(t)=0$ and $\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}\leq 0$, we set $\dot{\lambda}_{ij}(t)=0$ to block any further decrease of $\lambda_{ij}(t)$. Therefore, provide a fairly general class of continuous-time state-dependent network dynamics in which the strength of the edge connectivity changes dynamically as a function of the state variables. It is worth noting that in the special setting of Remark \[rem:special-standard\], the network dynamics in decouple into the simple form $\dot{\lambda}_{ij}(t)=\big[g_{ij}(x_i(t),x_j(t))\big]^{[0,1]}_{\lambda_{ij}}, \forall i,j$. Thus, the more distant two agents $i$ and $j$ are from each other (i.e., the larger the measurement value $g_{ij}(x_i,x_j)$), the faster the edge connectivity between them grows (until it achieves its maximum value of $1$). This is consistent with the discrete-time counterpart, in which an edge emerges between agents $i$ and $j$ if $g_{ij}(x_i,x_j)>0$. In order to establish the Lyapunov stability of the continuous-time state-network dynamics , let $(\bar{\boldsymbol{x}},\bar{\boldsymbol{\lambda}})$ be a saddle-point solution to . Note that by the continuity and convexity-concavity of the Lagrangian function, the existence of a saddle point in is always guaranteed.
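In the special setting of Remark \[rem:special-standard\] with $f=0$ and the hypothetical measurement $g_{ij}(x_i,x_j)=\frac{1}{2}(x_i-x_j)^2-\frac{1}{2}\epsilon^2$, the projected gradient flow can be sketched via a forward-Euler discretization. This is only an illustrative sketch: the true dynamics are continuous in time, and the step `dt` and horizon `steps` are our own choices:

```python
import numpy as np

def euler_saddle_flow(x0, eps=1.0, dt=0.01, steps=20000):
    """Forward-Euler simulation of the projected flow:
    xdot_i = -dL/dx_i, lamdot_ij = [g_ij(x_i, x_j)]^{[0,1]}_{lam_ij}."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    lam = np.zeros((n, n))                   # lambda_ij(0) = 0
    for _ in range(steps):
        d = x[:, None] - x[None, :]          # d[i, j] = x_i - x_j
        g = 0.5 * d**2 - 0.5 * eps**2        # measurement g_ij(x_i, x_j)
        rate = g.copy()                      # projected network dynamics
        rate[(lam >= 1.0) & (g > 0)] = 0.0   # blocked at the upper face
        rate[(lam <= 0.0) & (g < 0)] = 0.0   # blocked at the lower face
        xdot = -2.0 * (lam * d).sum(axis=1)  # -dL/dx_i, counting lam_ij and lam_ji
        x = x + dt * xdot
        lam = np.clip(lam + dt * rate, 0.0, 1.0)
    return x, lam
```

Starting from states farther apart than $\epsilon$, the edge weights grow, pull the states together, and then decay once all constraints are satisfied.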
Let us define $P_i(x_i):=\int_{\bar{x}_i}^{x_i}p_i(s)ds$ and $Q_{ij}(\lambda_{ij}):=\int_{\bar{\lambda}_{ij}}^{\lambda_{ij}}q_{ij}(s)ds$, where we note that by the continuity and monotonicity of $p_i,q_{ij}$, the functions $P_i$ and $Q_{ij}$ are differentiable convex functions. Now we are ready to state the main result of this section. \[eq:thm-continuous\] Let $L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))$ be a convex function in $p(\boldsymbol{x})$ and a concave function in $q(\boldsymbol{\lambda})$. Then, the continuous-time state-dependent network dynamics are Lyapunov stable. In particular, $$\begin{aligned} \nonumber V(\boldsymbol{x},\boldsymbol{\lambda}):=\sum_{i=1}^{n}D_{P_i}(x_i,\bar{x}_i)+\sum_{i\neq j}D_{Q_{ij}}(\lambda_{ij},\bar{\lambda}_{ij})\end{aligned}$$ serves as a Lyapunov function for the dynamics , where $D_{\phi}(u,v)=\phi(u)-\phi(v)-\phi'(v)(u-v)$ denotes the Bregman divergence with respect to the convex function $\phi(\cdot)$. Using the definition of the Bregman divergence, for every $i$ and $j$ we have: $$\begin{aligned} \nonumber &\dot{D}_{P_i}(x_i,\bar{x}_i)=\frac{\partial D_{P_i}(x_i,\bar{x}_i) }{\partial x_i}\dot{x}_i=-\big(p_i(x_i)-p_i(\bar{x}_i)\big)\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial p_i(x_i)},\cr &\dot{D}_{Q_{ij}}(\lambda_{ij},\bar{\lambda}_{ij})=\frac{\partial D_{Q_{ij}}(\lambda_{ij},\bar{\lambda}_{ij})}{\partial \lambda_{ij}}\dot{\lambda}_{ij}=\big(q_{ij}(\lambda_{ij})-q_{ij}(\bar{\lambda}_{ij})\big)\big[\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}\big]_{\lambda_{ij}}^{[0,1]}.\end{aligned}$$ Now we can write, $$\begin{aligned} \nonumber &\dot{V}(\boldsymbol{x},\boldsymbol{\lambda})\!=\!-\!\sum_i\!\big(p_i(x_i)\!-\!p_i(\bar{x}_i)\big)\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial p_i(x_i)}\!+\!\sum_{i\neq j}\!\big(q_{ij}(\lambda_{ij})\!-\!q_{ij}(\bar{\lambda}_{ij})\big)\big[\frac{\partial
L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}\big]_{\lambda_{ij}}^{[0,1]}\cr &\qquad\leq\!-\!\sum_i\!\big(p_i(x_i)\!-\!p_i(\bar{x}_i)\big)\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial p_i(x_i)}\!+\!\sum_{i\neq j}\!\big(q_{ij}(\lambda_{ij})\!-\!q_{ij}(\bar{\lambda}_{ij})\big)\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}\cr &\qquad=\big(\nabla_{p(\boldsymbol{x})} L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))\big)^T(p(\bar{\boldsymbol{x}})-p(\boldsymbol{x}))+\big(\nabla_{q(\boldsymbol{\lambda})} L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))\big)^T(q(\boldsymbol{\lambda})-q(\bar{\boldsymbol{\lambda}}))\cr &\qquad\leq L(p(\bar{\boldsymbol{x}}),q(\boldsymbol{\lambda}))-L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))-\big(L(p(\boldsymbol{x}),q(\bar{\boldsymbol{\lambda}}))-L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))\big)\cr &\qquad= \Big[L(p(\bar{\boldsymbol{x}}),q(\boldsymbol{\lambda}))-L(p(\bar{\boldsymbol{x}}),q(\bar{\boldsymbol{\lambda}}))\Big]+\Big[L(p(\bar{\boldsymbol{x}}),q(\bar{\boldsymbol{\lambda}}))-L(p(\boldsymbol{x}),q(\bar{\boldsymbol{\lambda}}))\Big]\leq 0,\end{aligned}$$ where the last inequality is due to the definition of the saddle point, and the second inequality follows from the convexity/concavity of $L(\cdot)$ with respect to its first/second argument. Finally, the first inequality is obtained by considering the following three cases: - If $\lambda_{ij}=0$, then $[\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}]_{\lambda_{ij}}^{[0,1]}=\max\{0,\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}\}\ge \frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}$, and $q_{ij}(\lambda_{ij})-q_{ij}(\bar{\lambda}_{ij})=q_{ij}(0)-q_{ij}(\bar{\lambda}_{ij})\leq 0$.
- If $\lambda_{ij}\in (0,1)$, then $[\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}]_{\lambda_{ij}}^{[0,1]}=\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}$. - If $\lambda_{ij}=1$, then $[\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}]_{\lambda_{ij}}^{[0,1]}=\min\{0,\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}\}\leq \frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}$, and $q_{ij}(\lambda_{ij})-q_{ij}(\bar{\lambda}_{ij})=q_{ij}(1)-q_{ij}(\bar{\lambda}_{ij})\ge 0$. Thus in all of the above cases we have $$\begin{aligned} \nonumber (q_{ij}(\lambda_{ij})-q_{ij}(\bar{\lambda}_{ij}))[\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})}]_{\lambda_{ij}}^{[0,1]}\leq (q_{ij}(\lambda_{ij})-q_{ij}(\bar{\lambda}_{ij}))\frac{\partial L(p(\boldsymbol{x}),q(\boldsymbol{\lambda}))}{\partial q_{ij}(\lambda_{ij})},\end{aligned}$$ and the result follows. It is worth noting that Theorem \[eq:thm-continuous\] is a continuous-time counterpart of Theorem \[thm:subgradient\], in the sense that in both theorems the Bregman distance of the iterates to a saddle point serves as a Lyapunov function. However, due to the continuity of the network variables in the continuous-time model, the choice of step size becomes irrelevant in Theorem \[eq:thm-continuous\], while for the discrete-time counterpart the step sizes must be small enough to guarantee the convergence of the dynamics. Conclusions {#sec:conclusion} =========== In this paper, we developed a new framework for the stability analysis of multiagent state-dependent network dynamics.
We showed that the co-evolution of the network and the state dynamics can be cast as a primal-dual optimization algorithm applied to a convex program, where the primal updates capture the state dynamics and the dual updates capture the network evolution. In particular, the constrained Lagrangian function serves as a Lyapunov function for the state-network dynamics. We considered our framework under two different settings: i) when the network and state dynamics are aligned, and ii) when the network and state dynamics have conflicting objectives. In the first case, we showed that the application of the BCD method with a change of variables can generate a variety of interesting state-dependent network dynamics. In particular, we provided a new technique to handle asymmetry in the network dynamics. In the second case, we reduced the stability analysis of the state-network dynamics to a zero-sum game between the network player and the state player. This allowed us to establish the Lyapunov stability of the multiagent dynamics using saddle-point dynamics, in particular the subgradient method and the quasi-Newton method. Finally, we extended our results to a continuous-time model, provided a general class of continuous-time state-dependent network dynamics in terms of a generalized gradient flow, and established their Lyapunov stability. As a future direction of research, one can use *augmented* Lagrangian functions or apply alternative optimization techniques to generate a broader class of stable state-dependent network dynamics. Moreover, in our analysis we mainly used a quadratic upper approximation to derive the state updates. Thus a natural extension is to use other function approximations that include the quadratic approximation as a special case, or approximations tailored to specific applications. [^1]: For simplicity of presentation, we assume that the agents' states are scalar real numbers.
However, all the results can be naturally extended to the case where agents’ states are vectors in $\mathbb{R}^d$. [^2]: For instance, any $m$-smooth function (a function with $m$-Lipschitz gradient) has this property. [^3]: Here each constraint is scaled by $\frac{1}{2}$ without changing the actual feasible set. [^4]: That is, finding a solution $(\bar{\boldsymbol{x}},\bar{\boldsymbol{\lambda}})$ such that $L(\bar{\boldsymbol{x}},\boldsymbol{\lambda})\leq L(\bar{\boldsymbol{x}},\bar{\boldsymbol{\lambda}})\leq L(\boldsymbol{x},\bar{\boldsymbol{\lambda}}), \ \forall \boldsymbol{x}\in\mathbb{R}^n, \boldsymbol{\lambda}\ge \boldsymbol{0}.$ [^5]: In fact, using a sorted vector Lyapunov function $V(\boldsymbol{x})=\operatorname{sort}(\{|x_i-x_j|, i\neq j\})$, it can be shown that the dynamics do converge, with $V(\boldsymbol{x})$ decreasing lexicographically after each iteration.
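The sorted Lyapunov function of footnote 5 can be made concrete in a few lines. A hypothetical illustration (the decreasing sort order is our choice, so that lexicographic comparison inspects the largest pairwise disagreement first):

```python
def sorted_lyapunov(x):
    """V(x) = sort({|x_i - x_j| : i != j}), sorted in decreasing order."""
    n = len(x)
    gaps = [abs(x[i] - x[j]) for i in range(n) for j in range(i + 1, n)]
    return tuple(sorted(gaps, reverse=True))

# Python tuples compare lexicographically, so a step that shrinks the
# largest disagreement decreases V even if smaller gaps are unchanged:
x_before = [0.0, 1.0, 4.0]   # gaps (4, 3, 1)
x_after = [0.0, 1.0, 3.0]    # hypothetical update; gaps (3, 2, 1)
assert sorted_lyapunov(x_after) < sorted_lyapunov(x_before)
```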
--- abstract: 'We calculate the nonequilibrium local density of states on a vibrational quantum dot coupled to two electrodes at $T=0$ using a numerically exact diagrammatic Monte Carlo method. Our focus is on the interplay between the electron-phonon interaction strength and the bias voltage. We find that the spectral density exhibits a significant voltage dependence if the voltage window includes one or more phonon sidebands. A comparison with well-established approximate approaches indicates that this effect could be attributed to the nonequilibrium distribution of the phonons. Moreover, we discuss the long transient dynamics caused by the electron-phonon coupling.' author: - 'K. F. Albrecht' - 'A. Martin-Rodero' - 'J. Schachenmayer' - 'L. Mühlbacher' bibliography: - 'papers.bib' title: Local density of states on a vibrational quantum dot out of equilibrium --- Recent experiments in the field of molecular electronics have pointed out the importance of electron-phonon interactions for the charge transport on the nanoscale.[@c60; @Smit02; @Zhitenev02; @PhysRevLett.92.206102; @Kushmerick04; @Liu04; @Natelson04; @Pasupathy05; @Sapmaz06; @carbon_nanotube_nature2009] In these experiments, a nanostructure – such as a single molecule or a carbon nanotube – is in contact with two electronic leads. Due to the tiny size of the structure, single-electron tunneling processes can cause a transient change of its electronic geometry. This change in combination with intramolecular interactions can couple the electronic to the vibrational degrees of freedom. An important consequence is the appearance of nonlinearities in the current-voltage characteristics and the conductance. [@Secker11; @Cuevas10; @Smit02; @Natelson04; @Ballmann10; @Park02; @Zhitenev02] These effects can be associated with the possibility of inelastic processes due to the bias and to the formation of phonon sidebands in the excitation spectrum. 
[@PhysRevB.50.5528; @Flensberg2003; @PhysRevB.69.245302; @carmina2010] From a theoretical perspective, such a quantum dot setup can be described by the Anderson-Holstein model [@Holstein1959; @Hewson2002]. To its full extent, this model accounts for a tunneling coupling between the quantum dot and electronic reservoirs, a linear coupling of the electrons occupying the quantum dot to phonons, as well as an on-site Coulomb interaction. In this paper, we are mainly interested in the effects of the molecule’s vibration on the charge transport through the quantum dot, so that it is expedient to consider a single vibrational mode and spinless electrons. Therefore, the model can be simplified to account for a single electronic level which is linearly coupled to a local phonon, whereas the electron-electron coupling is disregarded. In the framework of this spinless Anderson-Holstein model, much progress has been made, offering deep insight into the physics caused by the coupling of the electron and the phonon (see, e.g., Refs. \[\]). Besides approximative approaches, numerically exact methods have recently become available for the nonequilibrium situation of a vibrational quantum dot (see, e.g., Refs. \[\]). A central quantity to describe the nonequilibrium transport through the quantum dot is its local density of states (LDOS). Single-particle observables such as the current or the dot occupation can be directly derived from it[@Meir92]. In equilibrium, the spectral density is well understood (see Refs. \[\] and references therein). Moreover, the close connection between nanomechanical vibrations of the quantum dot and the sidebands has been confirmed. The interplay between nanomechanical vibrations and a finite bias voltage, however, remains challenging to describe outside certain limiting cases. 
[@Galperin06; @PhysRevB.87.195112; @1367-2630-16-2-023007] In this paper, we address this problem in a numerically exact way by using the diagrammatic Monte Carlo method [@Lothar_diagMC; @Schiro09; @Werner2011] (diagMC). For this purpose, we use a two-terminal setup with an auxiliary electrode. This allows for an exact study of the LDOS on a quantum dot coupled to two electrodes in the limit of a vanishing coupling to the auxiliary lead.[@sun_guo_third_terminal2001; @lebanon_schiller_third_terminal2001; @Lothar_spectral_density] Throughout this paper we consider the deep quantum limit at $T=0$. For a thorough discussion of the numerical results, we use an interpolative self-energy approximation (ISA), in which it is possible to include electron-phonon interactions [@alvaro2008; @carmina2010], and the well-established single particle approximation [@PhysRevB.50.5528; @PhysRevB.66.085311; @PhysRevB.66.075303; @Flensberg2003; @PhysRevB.76.033417] (SPA). Although these methods rely on completely different approximation schemes, both share the common underlying assumption that the phonons are described by an equilibrium distribution. Consequently, effects due to a nonequilibrium distribution of phonons can be clearly identified by comparing these approximations to the numerically exact results. The structure of the paper is as follows: In Section \[model\] we introduce the Anderson-Holstein model. In Section \[measuring\_the\_spectral\_function\] we show how the diagMC can be used to calculate the LDOS for a quantum dot with an electron-phonon interaction by adapting the approach of Ref. \[\]. Moreover, the ISA, SPA and certain limiting cases are briefly summarized. The results for weak electron-phonon couplings are presented in Section \[electronic\_regime\]. The moderate polaronic regime is addressed in Section \[polaronic\_regime\]. 
The model {#model} ========= In our discussion, we consider a molecular quantum dot connected by a tunneling coupling to a left (L) and right (R) electrode. For such a system, a single-electron charging of the quantum dot can cause a nanomechanical vibration of the molecule[@c60]. A reasonable model Hamiltonian for this situation is provided by the spinless Anderson-Holstein model [@Glazman1988; @Galperin07] (throughout this paper we use units with $\hbar=e=k_{\text{B}}=1$): $$\begin{aligned} \nonumber H &= {\sum_{k \in \alpha}^{\vphantom{\dagger}}} \left( {\epsilon_{\alpha k}^{\vphantom{\dagger}}} - \mu_{\alpha} \right) a^{\dagger}_{\alpha k} {a_{\alpha k}^{\vphantom{\dagger}}} + \sum_{k \in \alpha} \gamma_{\alpha} \left( a^{\dagger}_{\alpha k} d + d^{\dagger} a^{\vphantom{\dagger}}_{\alpha k} \right) \\ & \quad + \epsilon_{\text{D}} d^\dagger d + \lambda d^{\dagger} d \left( b + b^{\dagger} \right) + \omega_0 b^{\dagger} b \,. \label{Eq:Anderson-Holstein_Hamiltonian}\end{aligned}$$ The quantum dot is modeled by a single electronic energy level at $\epsilon_{\text{D}}$. $d^{\dagger}$ ($d$) is the electron creation (annihilation) operator on the dot. $\alpha = \text{L}$ denotes the left and $\alpha = \text{R}$ the right electrode. The electronic creation (annihilation) operator on electrode $\alpha$ at energy level $\epsilon_{\alpha k}$ is denoted by $a^{\dagger}_{\alpha k}$ ($a_{\alpha k}$). The bias voltage, $V=\mu_{\text{L}} - \mu_{\text{R}}$, is defined as the difference between the two chemical potentials $\mu_{\alpha}$ of the respective electrodes. This quantity is assumed to be constant at all times. $\gamma_{\alpha}$ are the tunneling amplitudes. The tunneling rates in the absence of many-body effects are given by $\Gamma_{\alpha} = 2 \pi \rho_{\alpha} | \gamma_{\alpha}| ^2 $, where $\rho_{\alpha}$ is the density of states of lead $\alpha$, which is assumed to be a flat band. 
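The dot-plus-phonon part of the Hamiltonian above is easy to write down explicitly in a truncated phonon Fock space. A minimal sketch (not the code used in the paper; the parameter values and the cutoff `n_ph` are illustrative), which also exhibits the polaron shift $\epsilon_{\text{D}} - \lambda^2/\omega_0$ of the dot level:

```python
import numpy as np

# Illustrative parameters (units of Gamma = 1); n_ph truncates the phonon Fock space.
eps_d, lam, w0, n_ph = 1.0, 2.0, 2.0, 40

b = np.diag(np.sqrt(np.arange(1, n_ph)), k=1)   # phonon annihilation operator
I_ph = np.eye(n_ph)
nd = np.diag([0.0, 1.0])                        # d†d in the {empty, occupied} dot basis
I_d = np.eye(2)

H = (eps_d * np.kron(nd, I_ph)                  # eps_D d†d
     + lam * np.kron(nd, b + b.T)               # lambda d†d (b + b†)
     + w0 * np.kron(I_d, b.T @ b))              # omega_0 b†b

# The occupied sector is a displaced oscillator: its spectrum is the
# polaron-shifted level eps_d - lam**2/w0 plus multiples of w0.
E_occ = np.linalg.eigvalsh(H[n_ph:, n_ph:])
print(E_occ[0])   # ≈ eps_d - lam**2/w0 = -1.0
```

The truncation error is negligible here because the displacement $\lambda/\omega_0 = 1$ populates only the lowest few phonon levels.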
The vibrational mode, which couples linearly to the electronic degrees of freedom, is described by a single phonon mode with frequency $\omega_0$. $b^{\dagger}$ and $b$ are the phonon creation and annihilation operators, respectively. $\lambda$ denotes the electron-phonon coupling constant. In the subsequent discussion, we study the deep quantum limit at $T=0$. Moreover, a symmetric setup is assumed where $\Gamma_{\text{L}}=\Gamma_{\text{R}}=\Gamma/2$, $\mu_{\text{L}}=-\mu_{\text{R}}=V/2$ and we consider the particle-hole symmetric case, $\tilde{\epsilon}_{\text{D}}=0$. $\tilde{\epsilon}_{\text{D}} = \epsilon_{\text{D}} - \lambda^2/\omega_0$ is the polaron-shifted energy level of the quantum dot. This parameter regime is very interesting since the steady-state dot occupation is always $\langle n \rangle = 0.5$ due to the symmetry of the setup. Consequently, any bias dependence of the LDOS can only be caused by the electron-phonon interaction and not by the electronic occupation of the quantum dot itself. ![Sketch of the three-terminal setup with a weak coupling of the auxiliary lead to the quantum dot $\Gamma_{\text{M}} \ll \Gamma_{\text{L}}, \Gamma_{\text{R}}$. The bias voltage $V_{\text{M}}$ can be tuned in order to access the desired energy in the spectral density of the quantum dot. $\mu_{\text{M}}$ is the respective chemical potential of the auxiliary electrode. \[Fig:sketch\_third\_electrode\]](third_electrode.eps){width="47.50000%"} Approaches for the spectral density of a vibrational quantum dot {#measuring_the_spectral_function} ================================================================ Diagrammatic Monte Carlo simulation method ------------------------------------------ Despite recent progress in developing analytical approaches for the Anderson-Holstein model, e.g., by means of diagrammatic resummation schemes [@ruben; @PhysRevB.83.085401; @apta], a complete solution outside certain limiting cases is currently unknown. 
In order to calculate the spectral density without having to rely on methods which involve intrinsic approximations, numerical methods are needed (see, e.g., Refs. \[\]). A suitable approach to access regimes of arbitrary voltage, electron-phonon interaction, and dot-lead coupling strength is the numerically exact diagMC method[@Lothar_diagMC; @Werner2009; @Schiro09; @Werner2011], which is able to simulate finite temperatures as well as $T=0$. In the subsequent discussion we use a similar approach to that of Ref. \[\], where the diagMC has been used to calculate the LDOS for the Anderson impurity model. Here, we briefly summarize this approach and show how it can be adapted for the case of a local electron-phonon interaction on the quantum dot. Following the lines of Refs. \[\], the spectral density of a two-lead quantum dot can be calculated exactly using an auxiliary lead at chemical potential $\mu_{\text{M}}$ with a vanishing dot-lead coupling (see Fig. \[Fig:sketch\_third\_electrode\] for a sketch of the setup). Their basic idea is to generalize the Meir-Wingreen formula for the current [@Meir92] to a three-terminal setup to obtain $$\begin{aligned} \nonumber I_{\text{M}} &= \frac{\Gamma_{\text{M}}}{\Gamma+\Gamma_{\text{M}}} \int \text{d} \omega \rho_{\text{D}}(\omega) \\ & \qquad \times \left[ \Gamma f_{\text{M}}(\omega) - \Gamma_{\text{L}} f_{\text{L}}(\omega) - \Gamma_{\text{R}} f_{\text{R}}(\omega) \right] \label{eq:current_third_lead} \,,\end{aligned}$$ where the subscript M denotes the auxiliary lead and $\rho_{\text{D}}(\omega)$ is the LDOS of the quantum dot. $f_{\alpha}(\omega)=1/(e^{\beta(\omega-\mu_{\alpha})}+1)$ denotes the Fermi function of lead $\alpha$, where $\beta$ is the inverse temperature. In the limit of a vanishing tunneling coupling of the auxiliary electrode, i.e. $\Gamma_{\text{M}} \to 0$, one obtains the thermally broadened LDOS of the two-electrode setup by differentiating Eq. 
(\[eq:current\_third\_lead\]) with respect to the chemical potential of the auxiliary lead:[@lebanon_schiller_third_terminal2001] $$\begin{aligned} \lim \limits_{\Gamma_{\text{M}} \to 0} \Gamma_{\text{M}}^{-1} \frac{\partial I_{\text{M}}}{\partial \mu_{\text{M}}} &= \int \text{d} \omega \rho_{\text{D}}(\omega) \frac{\partial f_{\text{M}}(\omega)}{\partial \mu_{\text{M}}} \label{eq:conductance_third_terminal_spectral_function} \,.\end{aligned}$$ In the deep quantum limit, i.e. at $T=0$, the derivative of the Fermi function in Eq. (\[eq:conductance\_third\_terminal\_spectral\_function\]) becomes a delta distribution so that the exact LDOS of the two-electrode setup is obtained and the thermal broadening vanishes[@Lothar_spectral_density]: $$\begin{aligned} \rho_{\text{D}}(\mu_{\text{M}}) &= \lim \limits_{\Gamma_{\text{M}} \to 0} \Gamma_{\text{M}}^{-1} \frac{\partial I_{\text{M}}}{\partial \mu_{\text{M}}} \label{eq:steady_state_lebanon_schiller} \,.\end{aligned}$$ A convenient way to evaluate Eq. (\[eq:steady\_state\_lebanon\_schiller\]) for the Anderson-Holstein model is to use the diagrammatic expansion in the tunnel coupling[@Lothar_diagMC; @Werner2009; @Werner2011]. This expansion allows for a complete decoupling of the influence of the leads on the dot, denoted by $\mathcal{L}_{\text{M}}(\vec{s}_n)$, from the phononic one including the dot’s energy level, denoted by $\mathcal{G}(\vec{s}_n)$. Therefore, with the use of Eq. 
(\[eq:steady\_state\_lebanon\_schiller\]) one obtains the transient which establishes the LDOS starting from an initially decoupled preparation: $$\begin{aligned} \rho_{\text{D}}(\mu_{\text{M}}) &= 2 \lim \limits_{t \to \infty} \lim \limits_{\Gamma_{\text{M}} \to 0} \Gamma_{\text{M}}^{-1} \sum \limits_{n=1}^{\infty} \left( -1 \right)^{n} {\int \limits_{0}^{t} \text{d} \vec{s}_n \nonumber} \\ & \qquad\qquad \times \frac{\partial}{\partial \mu_{\text{M}}} \text{Re} \left \lbrace \mathcal{L}_{\text{M}}(\vec{s}_n) \mathcal{G}(\vec{s}_n) \right \rbrace \label{eq:diagrammatic_expansion} \,.\end{aligned}$$ We used the abbreviation $$\begin{aligned} \int \limits_0^{t} \text{d} \vec{s}_n \equiv \int \limits_0^{t} \text{d}s_{2n} \int \limits_0^{s_{2n}} \text{d}s_{2n-1} \cdots \int \limits_0^{s_2} \text{d}s_1 \,,\end{aligned}$$ where $\vec{s}_n=\{s_1,s_2,\dots,s_{2n}\}$ is the time-ordered sequence of $2n$ tunneling times $s_j$. While in Ref. \[\] $\mathcal{G}$ accounts for a Coulomb on-site interaction, in Eq. (\[eq:diagrammatic\_expansion\]) it provides the influence of the electron-phonon interaction given by [@Lothar_diagMC] $$\begin{aligned} \mathcal{G}(\vec{s}_n) &= \mathcal{F}[\vec{s}_n] \ {\text{e}}^{ {\text{i}}\tilde{\epsilon}_{\text{D}} \left( s_1 - s_2 + s_3 - \cdots \right) } \label{eq:phonon_influence} \,,\end{aligned}$$ where an initially empty quantum dot is considered. ${\mathcal F}[\vec{s}_n]$ denotes the Feynman-Vernon influence functional[@Feynman63]: $$\begin{aligned} {\mathcal F}[\vec{s}_n] &= \exp \left \lbrace - \int_{\mathcal{C}} \text{d}s_1 \int_{\mathcal{C}: s_2 < s_1} \! \text{d}s_2 q(s_1) L(s_1-s_2) q(s_2) \right \rbrace \,.\end{aligned}$$ The integrations are performed on the Keldysh contour $\mathcal{C}: 0 \to t \to 0$. $q(s)$ denotes the occupation of the quantum dot at time $s$ which is fully determined by the initial condition of the quantum dot and the position as well as the number of the tunneling events given by $\vec{s}_n$. 
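Before turning to the Monte Carlo evaluation, the auxiliary-lead idea of Eqs. (\[eq:conductance\_third\_terminal\_spectral\_function\]) and (\[eq:steady\_state\_lebanon\_schiller\]) can be illustrated numerically: convolving a known LDOS with the kernel $\partial f_{\text{M}}/\partial \mu_{\text{M}} = \beta f_{\text{M}}(1-f_{\text{M}})$ reproduces it with a thermal broadening that disappears as $T \to 0$. A sketch with a Lorentzian test density (all parameter values are our own, not from the paper):

```python
import numpy as np

Gamma = 1.0
w = np.linspace(-40.0, 40.0, 200001)
dw = w[1] - w[0]
rho = (Gamma / (2 * np.pi)) / (w**2 + (Gamma / 2) ** 2)   # Lorentzian test LDOS

def broadened_ldos(mu_m, beta):
    """rho_D convolved with the derivative of the auxiliary lead's Fermi function."""
    x = np.clip(beta * (w - mu_m), -500.0, 500.0)          # clip to avoid exp overflow
    f = 1.0 / (np.exp(x) + 1.0)
    return np.sum(rho * beta * f * (1.0 - f)) * dw         # unit-area kernel in omega

exact = 2.0 / (np.pi * Gamma)                              # rho_D(0) of the test density
for beta in (10.0, 100.0):
    print(beta, broadened_ldos(0.0, beta))                 # approaches `exact` as beta grows
```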
For the considered single phonon mode, the bath autocorrelation function is given by $$\begin{aligned} L(s) &= \frac{ \lambda^2 }{ \omega_0 } \left[ \cos(\omega_0s) - {\text{i}}\sin(\omega_0 s) \right] \,.\end{aligned}$$ We would like to emphasize that the only assumption for this approach is that the quantum dot is initially decoupled from the leads. More precisely, right before the coupling, the leads and the quantum dot are considered to be in their respective thermal equilibrium. The nonequilibrium aspect of the system enters via the coupling of the leads to the quantum dot at $t=0$. This causes some transient dynamics until a nonequilibrium steady state is reached. Since $\mathcal{G}(\vec{s}_n)$ is independent of the leads, the derivative with respect to the chemical potential of the auxiliary lead in Eq. (\[eq:diagrammatic\_expansion\]) only acts on $\mathcal{L}_{\text{M}}(\vec{s}_n)$, which is a determinant of a matrix consisting of lesser and greater self-energies of the decoupled leads. Calculating this derivative, one obtains at $T=0$: [@Lothar_spectral_density] $$\begin{aligned} \lim \limits_{\Gamma_{\text{M}} \to 0} \frac{1}{\Gamma_{\text{M}}} \frac{\partial}{\partial \mu_{\text{M}}} \mathcal{L}_{\text{M}}(\vec{s}_n) &= {\text{i}}^n \det ( \mathcal{S}^{\text{M}}(\vec{s}_n) ) \label{eq:leads_influence} \,,\end{aligned}$$ where $$\begin{aligned} \mathcal{S}^{\text{M}}_{j,k}(\vec{s}_n) &= \left \lbrace \begin{array}{ll} \Sigma^{<}(s_{2k-1},s_{2j}) &\mbox{, for } j \le k \\ \Sigma^{>}(s_{2k-1},s_{2j}) &\mbox{, for } j > k \\ \frac{{\text{i}}}{2\pi} {\text{e}}^{-{\text{i}}\mu_{\text{M}} (s_{2k-1}-s_{2j})} &\mbox{, if } (s_{2k-1} \lor s_{2j}) = t \end{array} \right. \label{eq:leads_determinant} \,.\end{aligned}$$ We would like to emphasize that the limit $\Gamma_{\text{M}} \to 0$ has been performed analytically so that the auxiliary lead does not influence the two-terminal quantum dot. Consequently, the stationary limit of Eq. 
(\[eq:diagrammatic\_expansion\]) is the exact LDOS of the two-terminal setup. We would like to note that the steady state is defined with respect to the reduced dynamics of the quantum dot. Therefore, the complete system is in nonequilibrium even though a time-independent steady state for observables on the quantum dot is reached. Thus, for a given sequence of tunneling events, $\vec{s}_n$, it is straightforward to calculate the influence of the leads on the dot as well as the phononic influence without any approximation. The summation and integration over all possible tunneling events in Eq. (\[eq:diagrammatic\_expansion\]) can be done conveniently in a numerically exact manner by using Monte Carlo sampling [@Lothar_diagMC; @Schiro09; @Werner2011]. Using this method, the only occurring error is a controllable statistical one. We note that whenever no error bar is visible in the subsequent figures, the error is smaller than the symbol size. In the following we will consider two different coupling procedures of the leads to the dot at $t=0$: a sudden and a smooth switch-on (for details of the coupling procedure see, e.g., Ref. \[\]). In addition, we truncate the leads’ density of states at a value $\pm \epsilon_{\text{c}}$. The reason for this sharp cutoff is that an instantaneous coupling of the electrodes to the quantum dot can lead to excitations in the leads, which are arbitrarily high in energy at $t=0$. [@Schmidt2008] These short-lived excitations are not only unphysical but also make a numerical evaluation using diagMC unfeasible. For our results, the cutoff is chosen to be the largest energy scale in the system so that a further increase of $\epsilon_{\text{c}}$ does not change our results for times $t \gtrsim \epsilon_{\text{c}}^{-1}$. [@Schmidt2008] Limiting cases {#limiting_cases} -------------- In this section, we briefly discuss two limiting cases, which can be solved analytically. 
The first one is the absence of electron-phonon interactions, $\lambda/\Gamma \to 0$. Here, it is straightforward to see that the LDOS is independent of the applied bias voltage [@PhysRev.124.41]: $$\begin{aligned} \rho_{\text{D}}(\omega) &= \frac{1}{2 \pi} \frac{ \Gamma }{ \left( \omega - \epsilon_{\text{D}} \right)^2 + \left( \Gamma / 2 \right)^2 } \label{eq:spectral_function_free} \,.\end{aligned}$$ For a weakly coupled phonon mode, the electron-phonon interaction can be treated perturbatively [@Flensberg2003; @PhysRevB.74.075326; @PhysRevLett.103.136601; @PhysRevB.80.041307; @PhysRevB.80.041309; @Riwar2009]. Consequently, Eq. (\[eq:spectral\_function\_free\]) indicates that in the perturbative regime no, or only a weak, voltage dependence of the spectral density is expected. Similar arguments hold for the atomic limit, where the electron-phonon coupling becomes very large, $\Gamma/\lambda \to 0$. In this case, the LDOS is given by sharp delta peaks at multiples of the phonon frequency [@Mahan1991] $$\begin{aligned} \rho_{\text{D}} (\omega) &= e^{-g} \sum_{k=0}^{\infty} \frac{g^k}{k!} \left \lbrace \left[ 1- \langle n_{\text{D}} \rangle \right] \delta(\omega-\tilde{\epsilon}_{\text{D}}-k \omega_0) \right. \nonumber \\ & \quad \left. + \langle n_{\text{D}} \rangle \delta(\omega-\tilde{\epsilon}_{\text{D}}+k \omega_0) \right \rbrace \label{eq:spectral_function_atomic_limit} \,,\end{aligned}$$ where $\langle n_{\text{D}} \rangle$ is the charge on the quantum dot, and $g=(\lambda/\omega_0)^2$ is the dimensionless electron-phonon coupling strength. Since in this limit the leads are decoupled from the quantum dot, the LDOS cannot depend on the bias voltage. To summarize these considerations, it is clear that in the weak as well as in the strong coupling limit, a possible voltage dependence of the LDOS can only be weak. Therefore, the nonequilibrium LDOS outside these limiting cases is expected to be the most interesting one. 
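Both limiting cases can be sanity-checked in a few lines (a sketch with parameter values of our own choosing): the Lorentzian peak height reproduces $2/(\pi\Gamma)$, and the atomic-limit sideband weights are Poissonian and sum to one for any occupation $\langle n_{\text{D}} \rangle$:

```python
import math

Gamma = 1.0
# Peak of the noninteracting LDOS, Eq. (free), at omega = eps_D:
peak = (1.0 / (2.0 * math.pi)) * Gamma / (Gamma / 2.0) ** 2
assert abs(peak - 2.0 / (math.pi * Gamma)) < 1e-12

# Atomic limit: weights e^{-g} g^k / k! of the delta peaks (g here for lam = 2, w0 = 1.5):
g, n_d = (2.0 / 1.5) ** 2, 0.5
weights = [math.exp(-g) * g**k / math.factorial(k) for k in range(60)]
# Each k contributes (1 - n_d) + n_d = 1 times its weight, so the LDOS stays normalized:
total = sum(wk * ((1.0 - n_d) + n_d) for wk in weights)
print(total)   # ≈ 1.0
```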
Approximative approaches ------------------------ Finally, we discuss two important and often used approximative approaches. A very popular approximate scheme is the SPA [@PhysRevB.50.5528; @PhysRevB.66.085311; @PhysRevB.66.075303; @Flensberg2003; @PhysRevB.76.033417; @carmina2009], where the electrons are decoupled from the phonons. For this approach the LDOS at temperature $T$ can be written as $$\begin{aligned} \rho_{\text{D}}(\omega) &= \frac{\Gamma}{2\pi} e^{-g(2n_{\text{B}}+1)} \sum \limits_{k=-\infty}^{\infty} I_k[2g\sqrt{(n_{\text{B}}+1)n_{\text{B}}}]e^{k\beta\omega_0/2} \nonumber \\ &\qquad \times \left[ \frac{ 1- \langle n_{\text{D}} \rangle }{ \left( \omega - \tilde{\epsilon}_{\text{D}} - k \omega_0 \right)^2 + \left( \Gamma/2 \right)^2 } \right. \nonumber \\ &\qquad \quad \left. + \frac{ \langle n_{\text{D}} \rangle }{ \left( \omega - \tilde{\epsilon}_{\text{D}} + k \omega_0 \right)^2 + \left( \Gamma/2 \right)^2 } \right] \label{eq:spectral_function_spa} \,,\end{aligned}$$ where the charge on the quantum dot, $\langle n_{\text{D}} \rangle$, has to be calculated self-consistently. $I_k$ is the modified Bessel function of the first kind, $\beta=1/T$ and $n_{\text{B}}=1/(e^{\beta\omega_0}-1)$. The simple structure of the SPA allows for a straightforward evaluation. Moreover, it provides good results if the correlations between electrons and phonons are weak, or if the quantum dot is either empty or occupied. Furthermore, both the atomic limit and the phonon-free case are recovered. Outside these limiting cases, methods are needed which go beyond the simple SPA decoupling scheme. A well-established method, which is known to provide reasonable results for a broad range of parameters, is the ISA [@alvaro2008]. The basic idea is to perform a functional interpolation of the self-energies from the weak to the strong coupling regime. 
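The SPA expression above is simple enough to evaluate directly. The sketch below (our own implementation with illustrative parameters, using a power-series Bessel function to stay self-contained) checks numerically that the SPA spectral density integrates to one, as it must for any temperature and occupation:

```python
import math
import numpy as np

def bessel_i(k, x, terms=60):
    """Modified Bessel function of the first kind, I_k(x), via its power series."""
    k = abs(k)
    return sum((x / 2.0) ** (2 * m + k) / (math.factorial(m) * math.factorial(m + k))
               for m in range(terms))

def spa_ldos(w, Gamma=1.0, lam=1.0, w0=2.0, beta=4.0, n_d=0.5, eps_t=0.0, kmax=20):
    """SPA spectral density on a frequency grid w; eps_t is the polaron-shifted level."""
    g = (lam / w0) ** 2
    n_b = 1.0 / (math.exp(beta * w0) - 1.0)
    z = 2.0 * g * math.sqrt(n_b * (n_b + 1.0))
    rho = np.zeros_like(w)
    for k in range(-kmax, kmax + 1):
        wk = bessel_i(k, z) * math.exp(k * beta * w0 / 2.0)
        rho += wk * ((1.0 - n_d) / ((w - eps_t - k * w0) ** 2 + (Gamma / 2.0) ** 2)
                     + n_d / ((w - eps_t + k * w0) ** 2 + (Gamma / 2.0) ** 2))
    return (Gamma / (2.0 * math.pi)) * math.exp(-g * (2.0 * n_b + 1.0)) * rho

w = np.linspace(-400.0, 400.0, 400001)
norm = np.sum(spa_ldos(w)) * (w[1] - w[0])
print(norm)   # ≈ 1, up to the Lorentzian tails cut off by the finite window
```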
This scheme was originally derived for the Anderson impurity model [@ISA_Anderson_1; @ISA_Anderson_2] and has been extended and widely used in different systems: multilevel quantum dots [@ISA_ML_QD], out-of-equilibrium transport through a single level [@ISA_SL_1; @ISA_SL_2], and in dynamical mean-field theory[@ISA_DMFT_1; @ISA_DMFT_2] to analyze the Mott transition in Hubbard-like models. For the nonequilibrium Anderson-Holstein model this approach provides accurate results beyond perturbation theory or SPA [@carmina2010]. Using the ISA and the SPA, the effects of the electron-phonon interaction on the charge transport can be discussed qualitatively as long as their basic underlying assumption is fulfilled: an equilibrium distribution of the phonons. Since both methods cover a broad range of parameters, a qualitative failure of both of them strongly indicates that the phonons no longer obey an equilibrium distribution. Weakly coupled phonon mode {#electronic_regime} ========================== We start the discussion of our results by considering the weak-coupling regime with $\lambda=\Gamma$, and a rather large phonon frequency $\omega_0=4\Gamma$. The corresponding polaronic self-energy can be determined to be $\Lambda_{\text{pol}}=\lambda^2 / \omega_0 = \Gamma/4$. Since $\Lambda_{\text{pol}} < \Gamma$ the formation of a polaron is a relatively rare event and the electrons are thus weakly coupled to the phonons. In Fig. \[Fig:spectral\_function\_weak\_transient\_mu0p5\] the transients which establish the spectral density for $V=2\Gamma$ at $\omega=\pm 0.5\Gamma$ are shown. An instantaneous coupling to the electrodes leads in this case to an overshooting, after which the steady state is approached monotonically. The relevant timescales for the dynamics can be estimated to be $\mathcal{O} \left( \Gamma^{-1} \right)$. 
A smooth coupling of the quantum dot to the electrodes [@Lothar_spectral_density] establishes the steady state adiabatically. Since the particle-hole symmetric case is considered, the spectral density in the stationary limit must be symmetric with respect to $\omega=0$. In Fig. \[Fig:spectral\_function\_weak\_transient\_mu0p5\] it can be seen that in the weak-coupling case this property is well fulfilled even in the transient regime. ![Time-dependent [diagMC results for $\rho_{\text{D}}(\omega, t)$ with $\lambda=\Gamma$, $\omega_0=4\Gamma$ for $V=2\Gamma$ and $\omega=0.5\Gamma$ (red circles) as well as $\omega = -0.5\Gamma$ (blue diamonds)]{}. Empty symbols denote results for a switch-on time of $\tau_{\text{sw}}= 10 \Gamma^{-1}$. The bandwidth is $2\epsilon_{\text{c}}=8 \Gamma$.\[Fig:spectral\_function\_weak\_transient\_mu0p5\]](spectral_function_weak_transient_mu0p5.eps){width="47.50000%"} The resulting LDOS in the frequency domain is shown in Fig. \[Fig:spectral\_function\_weak\] for two different voltages: $V=2\Gamma$ and $V=10\Gamma$. Comparing results from ISA to the results extracted from the time-dependent diagMC calculations, we find excellent agreement. The overall shape of the spectral density is very similar for both voltages. A comparison with the results in the absence of phonons ($\lambda=0$) reveals that the height of the central peak remains almost unchanged in the low-voltage regime. The ISA reveals a slight decrease of the central peak when increasing the bias voltage. Since this decrease is small, the Friedel-Langreth sum rule [@Friedel1951; @PhysRev.150.516], which pins the height of the central peak to $ \rho_{\text{D}}(0) = 2/(\pi \Gamma)$, [@carmina2010] provides a good approximation also for the nonequilibrium situation. Small sidebands at multiples of the phonon frequency are observed, which are independent of the voltage within the accuracy of the results extracted from diagMC. 
The ISA reveals features at $|\omega|=k \omega_0 \pm V/2$, with $k$ being an integer, where the LDOS changes rapidly (see inset of Fig. \[Fig:spectral\_function\_weak\]). These features can be attributed to inelastic electron tunneling processes, predicted by different theoretical approaches [@carmina2010; @ruben] to appear both in the spectral density and the conductance. For the considered nonequilibrium spectral density with a particle-hole symmetric setup they appear at $|\omega|=k \omega_0 \pm V/2$. For large $k$ the effect can be tiny due to the large number of phonons involved. ![LDOS at zero temperature for a weakly coupled phonon mode with $\lambda=\Gamma$, $\omega_0=4\Gamma$. Red indicates $V=2\Gamma$ and green $V=10\Gamma$. The straight lines are the ISA, circles and diamonds denote the diagMC results. The LDOS in the absence of phonons ($\lambda=0$), given by Eq. (\[eq:spectral\_function\_free\]), is shown as a black dashed line. The inset shows a zoom into the region of the first phonon sideband at $\omega = \omega_0 = 4 \Gamma$. \[Fig:spectral\_function\_weak\]](spectral_function_weak_paper.eps){width="47.50000%"} We conclude that the voltage dependence of the spectral density for a weakly coupled phonon mode is very small. This observation is in excellent agreement with our discussion of the limiting cases given in Section \[limiting\_cases\]. An important consequence for future theoretical approaches is that in this regime it is sufficient to solve the nonequilibrium problem by calculating the LDOS using equilibrium theory. The nonequilibrium aspect of the system enters only via the integration limits for single-particle observables such as the dot occupation or the current. Spectral density in the moderate polaronic regime {#polaronic_regime} ================================================= In the polaronic regime, the formation time of a polaron is shorter than the average occupation time of the electron on the quantum dot. 
Consequently, the formation of a polaron becomes likely so that pronounced phonon sidebands are expected. The corresponding parameter regime can be determined to be $\Lambda_{\text{pol}} > \Gamma$. The focus of the subsequent discussion is on the voltage dependence of the spectral density. Our approach to distinguish different voltage regimes is by considering the number of phonon sidebands included in the voltage window: If only the central transport channel is between the two chemical potentials, that is $V \lesssim \omega_0$, the low-voltage regime is realized. For voltage windows including one or more sidebands, we expect that nonequilibrium aspects are most pronounced. Low-voltage regime {#spectral_density_v2} ------------------ Important references in the low-voltage regime are equilibrium results such as the Friedel-Langreth sum rule, which is fulfilled at $V=0$ independent of the electron-phonon interaction strength[@carmina2009]. Since phonon sidebands form when increasing the electron-phonon coupling [@0022-3719-13-24-011; @Hewson2002; @Flensberg2003; @carmina2009], and the norm of the spectral density needs to be preserved, the central peak must be phonon-narrowed. Such a narrowing can be determined to be $\tilde{\Gamma} \simeq \Gamma {\text{e}}^{-g}$. [@0022-3719-13-24-011] This narrowing of the central peak for strong electron-phonon couplings is the origin of the Franck-Condon blockade effect discussed in detail in Ref. \[\]. Regarding the nonequilibrium problem, in Ref. 
\[\] it was shown via an approximative study that such a narrowing of the resonances can increase the timescales relevant for the charge transport up to $$\begin{aligned} \tau_{\text{pol}} &= \exp \left[ \left( \lambda/\omega_0 \right)^2 \right] \Gamma^{-1} = \tilde{\Gamma}^{-1} \label{eq:narrowing_timescales} \,.\end{aligned}$$ Correspondingly, e.g., the transients establishing the central peak of the spectral density at $\omega=0$ can be estimated as: $$\begin{aligned} \rho_{\text{D}}(\omega=0,t) &= \frac{ 2 }{ \pi \Gamma } \left( 1 - e^{- t \tilde{\Gamma}/2} \right) \label{eq:long_transients_central_peak} \,.\end{aligned}$$ Our diagMC results confirm this behavior as shown in Fig. \[Fig:time\_dependent\_spectral\_function\_long\_transient\] for various electron-phonon couplings in the moderate polaronic regime. We would like to emphasize the broad range of phonon parameters for which Eq. (\[eq:long\_transients\_central\_peak\]) provides accurate results. Besides confirming the existence of phonon-induced long timescales[@apta] determined by $\tilde{\Gamma}^{-1}$, an important consequence of this behavior is that in the low-bias regime the Friedel-Langreth sum rule is fulfilled. That is, the central peak is pinned to $\rho_{\text{D}}(\omega=0)=2/(\pi \Gamma)$ independent of the electron-phonon coupling strength. ![Time-dependent diagMC results for the central resonance at $\omega=0$ with $V=2\Gamma$ for various phonon parameters: $\lambda=3\Gamma$, $\omega_0=4\Gamma$ (yellow triangles), $\lambda=2\Gamma$, $\omega_0=2\Gamma$ (red circles), $\lambda=4\Gamma$, $\omega_0=3\Gamma$ (green diamonds), and $\lambda=8\Gamma$, $\omega_0=4\Gamma$ (blue pentagons). The bandwidth is set to $2 \epsilon_{\text{c}}=8\Gamma$. The corresponding transients in the spirit of Ref. \[\], given by Eq. (\[eq:long\_transients\_central\_peak\]), are lines with the same color code as the respective numerical data. 
The analytical result of the transient in the absence of phonons is shown as a black line. \[Fig:time\_dependent\_spectral\_function\_long\_transient\]](time_dependent_spectral_function_long_transient_paper.eps){width="47.50000%"} In the subsequent discussion, we will study a parameter regime where nonequilibrium effects are most pronounced. According to our preceding discussion, it must therefore be neither close to the limiting case of a very weak phonon coupling nor in the strong polaronic regime. Another requirement is that we are able to extract the complete LDOS from the time-dependent diagMC results. Therefore, the steady state has to be reached within times which are accessible by diagMC. A reasonable choice of parameters fulfilling these requirements is $\lambda=\omega_0=2\Gamma$: the moderate polaronic regime is accessed, and the longest timescales of the transients are roughly $\tilde{\Gamma}^{-1} \approx 2.7 \Gamma^{-1}$. The current implementation of the diagMC is able to simulate up to $t \approx \mathcal{O} (10 \Gamma^{-1})$ within reasonable computational effort. Consequently, we can not only discuss the transient dynamics, but also extract the steady state of the system from the time-dependent results. In Fig. \[Fig:time\_dependent\_spectral\_function\_l2o2\_dip\] the transients which establish the spectral density at $\omega=0.5\Gamma$ and $\omega=\Gamma$ are shown. Similar to the weak-coupling regime, an instantaneous coupling of the electrodes to the quantum dot leads to an overshooting at small times. The steady state, however, is approached non-monotonically in an oscillating manner. The characteristic timescale for the convergence towards the steady state can be estimated to be $\tilde{\Gamma}^{-1}$; pronounced features in the oscillations occur with a periodicity of $t \approx 2 \pi / \omega_0$. These transients can be reduced by a smooth switch-on procedure of the leads to the quantum dot. 
If a rather long switch-on time is used, the steady state can be extracted with good accuracy. ![diagMC results for $\rho_{\text{D}}(\omega,t)$ for $\lambda=\omega_0=V=2\Gamma$. The upper panel shows $\omega=0.5\Gamma$, the lower one $\omega=\Gamma$. Empty circles denote a smooth switching within $\tau_{\text{sw}}=12\Gamma^{-1}$, and filled ones correspond to an instantaneous coupling at $t=0$. The bandwidth is $2 \epsilon_{\text{c}}=8\Gamma$.\[Fig:time\_dependent\_spectral\_function\_l2o2\_dip\]](time_dependent_spectral_function_l2o2_dip_paper.eps){width="47.50000%"} An interesting effect is clearly observed in the transient regime for the strong-coupling case: while particle-hole symmetry requires that the stationary spectral density is symmetric, $\rho_{\text{D}}(\omega)=\rho_{\text{D}}(-\omega)$, this relation need not be fulfilled in the transients. In Fig. \[Fig:time\_dependent\_speck\_mum2\] an overshooting for a positive frequency, $\omega=\omega_0$, can be observed, whereas the transients for a negative frequency, $\omega=-\omega_0$, show a monotonic increase in time. This effect can be explained by the asymmetry in the initial preparation: right before the coupling of the quantum dot to the leads, the quantum dot is empty. Therefore, only resonances positive in energy exist, since no deexcitation of phonons is possible at $t=0$. On the finite timescale necessary to establish the spectral density [@Lothar_spectral_density], this asymmetry in the initial preparation leads to an asymmetry in the transients. Combined with the phonon-induced long timescales, this fact provides a deeper understanding of the splitting of the current depending on the initial preparation on a timescale of $\mathcal{O}\left(\Gamma^{-1}\right)$, which was observed, e.g., in Ref. \[\]. 
This is further corroborated by the fact that for the considered particle-hole symmetric case one obtains $\rho_{\text{D}}^{\text{empty}}(\omega,t) = \rho_{\text{D}}^{\text{occupied}}(-\omega,t)$, where the superscript denotes the initial occupation of the quantum dot. ![Same plot as in Fig. \[Fig:time\_dependent\_spectral\_function\_l2o2\_dip\], however, for the first phonon sideband with $\omega= \omega_0= 2 \Gamma$ (red circles) and $\omega= -\omega_0= -2 \Gamma$ (blue diamonds). The green triangles denote the diagMC results of the average $\rho_{\text{D,av}}(\omega,t)$, as in Eq. (\[eq:average\_transient\_spectral\_density\]). A smooth switching within $\tau_{\text{sw}}=6\Gamma^{-1}$ is employed. \[Fig:time\_dependent\_speck\_mum2\]](time_dependent_speck_mum2_av_paper.eps){width="47.50000%"} Since neither $\rho_{\text{D}}(\omega=\omega_0,t)$ nor $\rho_{\text{D}}(\omega=-\omega_0,t)$ exhibits a clear steady state in Fig. \[Fig:time\_dependent\_speck\_mum2\], we use the particle-hole symmetry of the considered setup and define the average of the transients by $$\begin{aligned} \rho_{\text{D,av}}(\omega,t) &= \frac{1}{2} \left[ \rho_{\text{D}}(\omega,t) + \rho_{\text{D}}(-\omega,t) \right] \label{eq:average_transient_spectral_density} \,.\end{aligned}$$ The particle-hole symmetry ensures that this function has the same steady-state value as $\rho_{\text{D}}(\omega,t)$ and $\rho_{\text{D}}(-\omega,t)$ separately. The transient dynamics, however, show a quicker convergence towards the plateau value due to the averaging between excitations and deexcitations in Eq. (\[eq:average\_transient\_spectral\_density\]). Therefore, the steady state of $\rho_{\text{D,av}}(\omega,t)$ can be extracted with reasonable accuracy. 
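The narrowing relation $\tilde{\Gamma} = \Gamma\,{\text{e}}^{-g}$, the transient of Eq. (\[eq:long\_transients\_central\_peak\]), and the symmetrized average of Eq. (\[eq:average\_transient\_spectral\_density\]) can be sketched numerically as follows. This is a minimal illustration in units of $\Gamma$; the function names and the stand-in `rho` argument are ours, not part of the diagMC code:

```python
import numpy as np

# Parameters in units of the tunnel coupling Gamma = 1
# (values from the text: lambda = omega_0 = 2*Gamma).
Gamma, lam, omega0 = 1.0, 2.0, 2.0

# Polaron parameter and phonon-narrowed width, Gamma_tilde = Gamma * exp(-g)
g = (lam / omega0) ** 2
Gamma_tilde = Gamma * np.exp(-g)
tau_pol = 1.0 / Gamma_tilde  # long transient timescale, Eq. (narrowing_timescales)

def rho_central(t):
    """Transient of the central peak, Eq. (long_transients_central_peak)."""
    return 2.0 / (np.pi * Gamma) * (1.0 - np.exp(-t * Gamma_tilde / 2.0))

def rho_av(rho, w, t):
    """Symmetrized average, Eq. (average_transient_spectral_density);
    rho(w, t) stands in for tabulated diagMC transients."""
    return 0.5 * (rho(w, t) + rho(-w, t))
```

For $\lambda=\omega_0=2\Gamma$ this gives $g=1$ and $\tau_{\text{pol}} = e\,\Gamma^{-1} \approx 2.7\,\Gamma^{-1}$, matching the transient timescale quoted in the text, and `rho_central` saturates at the Friedel-Langreth value $2/(\pi\Gamma)$.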
We would like to note that similar convergences of the observables can also be found for the currents: While the average current reaches a plateau for $t \gtrsim 8 \Gamma^{-1}$, the left and the right current converge to a joint steady state for times $t \gtrsim 11 \Gamma^{-1}$. In Fig. \[Fig:spectral\_function\_l2o2\], we plot the extracted spectral density of the quantum dot, and we make a comparison between diagMC, ISA, and SPA. It should be stressed that the error bar of the extracted steady state from the time-dependent diagMC results is twice the total change of $\rho_{\text{D,av}}(\omega, t)$ from $t=8\Gamma^{-1}$ to $t=12\Gamma^{-1}$. A very sharp central peak at $\omega=0$ is observed with a height given by the Friedel-Langreth sum rule. Compared to the case of absent phonons, the width of the central peak is reduced to $\tilde{\Gamma} \approx \Gamma e^{-g}$, as was also observed in Ref. \[\] for the equilibrium situation. This observation confirms the close connection between the long transients and the phonon narrowing of the resonances, which was discussed in Ref. \[\] by means of an approximate method. Phonon sidebands can be found at multiples of the phonon frequency, with an exponentially decreasing height. This behavior reflects the fact that transport outside the voltage window is strongly suppressed. Moreover, clear dips in the spectral density appear between two phonon sidebands. The ISA describes the results obtained from diagMC with remarkably good accuracy. Small differences are only visible at the first phonon sidebands, where the ISA predicts a slightly larger value. Since one basic assumption of the ISA is an equilibrium phonon distribution, we conclude that effects due to a (possible) nonequilibrium phonon distribution play only a minor role in the moderate polaronic regime at low biases. In the low-frequency domain, calculations using the SPA show a clear deviation from the diagMC results. 
This means that the charge on the quantum dot is strongly correlated with the excitation of phonons for frequencies that are not too large. In the large frequency domain, however, good agreement between the diagMC and the SPA is observed. Therefore, the electrons are decoupled from the phonons for transport with energies much larger than the voltage window, which confirms the results of Ref. \[\]. ![Zero temperature spectral density for $\lambda=\omega_0=2\Gamma$ and $V=2\Gamma$. The results extracted from diagMC are shown with red filled circles, the ISA is a blue line, and SPA is green. \[Fig:spectral\_function\_l2o2\]](spectral_function_l2o2_paper.eps){width="47.50000%"} Far-from-equilibrium spectral density ------------------------------------- In the subsequent discussion, we will analyze nonequilibrium effects in the moderate polaronic regime. For this purpose we consider voltage windows that contain one or more phonon sidebands. While nonequilibrium effects are not important in the weak-coupling regime, as discussed in Section \[electronic\_regime\], the effect of a large bias voltage is strong in the moderate polaronic regime: The transients, which establish the central transport channel at $\omega=0$, are shown in Fig. \[Fig:time\_dependent\_spectral\_function\_l2o2\_voltage\], where the same phonon parameters are used as in the previous section: $\lambda=\omega_0=2\Gamma$. For a voltage window including one or more phonon sidebands, the transients no longer follow the exponential convergence given by Eq. (\[eq:long\_transients\_central\_peak\]). Moreover, the relevant timescales for the transient dynamics are significantly shorter, so that the steady state is reached faster than in the low-bias regime. Furthermore, the steady-state value drops significantly, reflecting the fact that transport through the quantum dot outside the low-voltage regime is dominated by the excitation of one or more phonons. 
This behavior clearly violates the Friedel-Langreth sum rule. ![Time-dependent diagMC results for the transients, which establish the central peak $\rho_{\text{D}}(\omega=0)$ for $\lambda=\omega_0=2\Gamma$ and various voltages: $V=2\Gamma$ (red circles, bandwidth $2 \epsilon_{\text{c}}=8\Gamma$), $V=6\Gamma$ (green triangles, $2 \epsilon_{\text{c}}=8\Gamma$), $V=10\Gamma$ (blue diamonds, $2 \epsilon_{\text{c}}=12\Gamma$), $V=14\Gamma$ (yellow inverted triangles, $2 \epsilon_{\text{c}}=16\Gamma$). The approximate description in the spirit of Ref. \[\], given by Eq. (\[eq:long\_transients\_central\_peak\]), is shown as a red line. \[Fig:time\_dependent\_spectral\_function\_l2o2\_voltage\]](time_dependent_spectral_function_l2o2_voltage_paper.eps){width="47.50000%"} The strongest voltage dependence of the height of the spectral density at $\omega=0$ is observed when increasing the voltage from $V=2\Gamma$ to $V=6\Gamma$. The reason for this behavior is that for $\lambda=\omega_0=2\Gamma$ the central transport channel as well as the first phonon sideband are most pronounced for $V=2\Gamma$, as can be seen in Fig. \[Fig:spectral\_function\_l2o2\]. When the first sideband is included in the voltage window by setting $V=6\Gamma$, charge transport involving single-phonon processes becomes likely and thus this important transport channel opens. This reduces the probability for charge transport without exciting a phonon and, consequently, the height of the central resonance at $\omega=0$ decreases. A further increase of the voltage does not exhibit this pronounced behavior since the transport channels involving two or more phonons are much weaker for the considered parameters. An important consequence of the decreasing central transport channel is that the weight of the LDOS is shifted towards larger frequencies. A similar shift has been reported recently in the differential conductance [@PhysRevB.87.195112]. 
This shift causes an increase of the phonon sidebands outside the voltage window, as can be seen in the lower panel of Fig. \[Fig:time\_dependent\_spectral\_function\_l2o2\_v10\_sideband\]. Including a phonon sideband in the voltage window causes a drop of the phonon resonance, as can be seen in the upper panel of Fig. \[Fig:time\_dependent\_spectral\_function\_l2o2\_v10\_sideband\]. The resulting spectral density extracted from the time-dependent diagMC for $V=10\Gamma$ is shown in Fig. \[Fig:spectral\_function\_l2o2\_v10\]. A comparison with the low-bias LDOS in Fig. \[Fig:spectral\_function\_l2o2\] reveals that inside the voltage window all peaks seem to align to a similar height, whereas peaks outside the voltage window are increased. Moreover, the narrowing of the central transport channel to $\tilde{\Gamma} \approx {\text{e}}^{-g} \Gamma$ is no longer observed in the large voltage regime. Rather, a width of approximately $\Gamma$, which is the value for the interaction-free case, is recovered. We note, however, that this is not an indication that the charge transport through the quantum dot is uncorrelated with the phonons: a comparison with the SPA reveals that the central transport channel is strongly suppressed. ![Time-dependent diagMC results for $\rho_{\text{D},\text{av}}(\omega,t)$ with $|\omega| = \omega_0$ (upper panel) and $|\omega| = 3 \omega_0$ (lower panel) with $\lambda=\omega_0=2\Gamma$. $V=2\Gamma$ is highlighted with red circles (bandwidth $2 \epsilon_{\text{c}}=8 \Gamma$ in the upper panel, and $2 \epsilon_{\text{c}}=14 \Gamma$ in the lower one), and $V=10\Gamma$ with blue diamonds (bandwidth $2 \epsilon_{\text{c}}=14 \Gamma$ for both panels). A smooth switching within $\tau_{\text{sw}}=6 \Gamma^{-1}$ is employed. \[Fig:time\_dependent\_spectral\_function\_l2o2\_v10\_sideband\]](./time_dependent_spectral_function_l2o2_peak_paper.eps){width="47.50000%"} In addition, Fig. 
\[Fig:spectral\_function\_l2o2\_v10\] reveals a clear deviation of the ISA from the diagMC results. For such a large voltage the ISA spectral density has (almost) converged to the SPA case. It is interesting to note that a recent diagrammatic resummation scheme valid in the polaronic regime [@ruben] predicts a similar convergence towards the SPA for large voltages. As these approximate theories do not include the effect of the nonequilibrium distribution of phonons in a self-consistent way, the numerically exact diagMC results strongly indicate that this effect is important in the moderate polaronic regime in the large bias limit. A straightforward way to confirm that the deviations observed in Fig. \[Fig:spectral\_function\_l2o2\_v10\] are indeed produced by a nonequilibrium phonon distribution is to try to simulate the results by an equilibrium distribution at some effective temperature. For this purpose, we use the SPA since both the ISA and the approach of Ref.  converge to the SPA for sufficiently large bias voltages. In Fig. \[Fig:spectral\_function\_l2o2\_v10\_temp\] the SPA results given by Eq. (\[eq:spectral\_function\_spa\]) for various effective phonon temperatures $T \to T_{\text{eff}}$ are compared to the diagMC results, which are calculated for $V=10 \Gamma$ and $T=0$. Strikingly, the central peak of the SPA spectral density decreases with increasing phonon temperature, whereas phonon resonances well away from the central peak increase – a behavior similar to the diagMC results for increasing voltage. For an effective phonon temperature of roughly $T_{\text{eff}} \approx 3 \Gamma$, the results from SPA match the diagMC results for $T=0$ and $V=10\Gamma$ with good accuracy. We would like to note that, despite the good overall agreement, small differences can be observed for the second phonon resonance, indicating that not all effects can be completely described by this effective theory. ![Same color code as Fig. 
\[Fig:spectral\_function\_l2o2\] but with $V=10\Gamma$.\[Fig:spectral\_function\_l2o2\_v10\]](./spectral_function_l2o2_v10_paper.eps){width="46.00000%"} ![Same diagMC results as in Fig. \[Fig:spectral\_function\_l2o2\_v10\]. The SPA has been calculated with effective phonon temperatures: $T= \Gamma$ (blue), $T=3 \Gamma$ (black), and $T=10 \Gamma$ (green).\[Fig:spectral\_function\_l2o2\_v10\_temp\]](./spectral_function_l2o2_v10_paper_temp.eps){width="46.00000%"} Conclusions =========== In this paper we calculated the nonequilibrium spectral density of a vibrational quantum dot using the numerically exact diagMC technique and compared its predictions with those of approximate methods such as ISA and SPA. We showed that for a weak electron-phonon interaction the spectral density of the quantum dot resembles the equilibrium one independently of the bias voltage. For intermediate electron-phonon coupling strengths in the moderate polaronic regime, we determined a significant voltage dependence of the spectral density. An increasing bias voltage shifts the weight of the spectral density towards larger energies: The central resonance decreases, whereas phonon resonances outside the voltage window are increased with respect to the equilibrium results. Inside the voltage window our results indicate that the phonon peaks align to a similar height. We were able to link the voltage dependence of the spectral density to an effective “heating” of the phonons caused by inelastic excitations. The explicit voltage dependence of the spectral density underscores the importance of accessing the spectral density directly, e.g., by means of a three-terminal setup [@sun_guo_third_terminal2001; @lebanon_schiller_third_terminal2001; @leturcq_three_terminal2005]. An indirect measurement, e.g., of the differential conductance might lead to a discrepancy between the result and the actual spectral density due to its voltage dependence. 
Another consequence of our findings is that for future descriptions by means of approximate approaches it is desirable to also account for nonequilibrium effects of the phonon distribution, which could be performed, e.g., as proposed in Ref. \[\]. Finally, we would like to emphasize that for small voltages, we confirmed the existence of phonon-induced long transients previously proposed in Ref. \[\]. Moreover, we pointed out that the inverse of the width of the resonances in the spectral density determines the relevant timescale in the system. Acknowledgments {#acknowledgments .unnumbered} =============== The authors would like to thank R. C. Monreal, A. Levy Yeyati, R. Seoane Souto, and A. Komnik for many fruitful discussions. KFA acknowledges the computing time at the bwGRID and Juropa in Jülich. This work was financially supported by Spanish Mineco through grant FIS2011-26516, and by the NSF (PIF-1211914 and PFC-1125844).
--- bibliography: - 'auto\_generated.bib' title: 'Search for flavor changing neutral currents in top quark decays in $\Pp\Pp$ collisions at 7 TeV' --- Introduction {#sec:introduction} ============ The top quark decays with a branching fraction of nearly 100% to a bottom quark and a $\PW$ boson, $\cPqt \to \PW\cPqb$. However, some extensions of the standard model (SM) predict that the top quark can also decay through a neutral $\cPZ$ boson, $\cPqt \to \cPZ\cPq$, where $\cPq$ is a $\cPqu$ or $\cPqc$ quark. This decay is suppressed in the SM by the GIM mechanism [@ref:GIM] and occurs only at the level of quantum loop corrections. The branching fraction $\mathcal{B}(\cPqt \to \cPZ\cPq)$ is predicted to be $\mathcal{O}(10^{-14})$ [@ref:Glover:2004cy], far below the experimental reach of the Large Hadron Collider (LHC). Detection of this signal would therefore indicate a large enhancement of the branching fraction and provide clear evidence for a violation of the SM prediction. There are several models, for example R-parity-violating supersymmetric models [@AguilarSaavedra:2000db] and topcolor-assisted technicolor models [@Lu:2003yr], that predict enhancements of the $\cPqt \to \cPZ\cPq$ decay, where $\mathcal{B}(\cPqt \to \cPZ\cPq)$ could be as large as $\mathcal{O}(10^{-4})$. Previous searches for flavor changing neutral currents in top quark decays performed at the Tevatron by CDF and D0 set $\mathcal{B}(\cPqt \to \cPZ\cPq)$ upper limits of 3.7% [@ref:2008aaa] and 3.2% [@ref:Abazov:2011qf] at the 95% confidence level (CL), respectively. At a center-of-mass energy of 7 TeV, the $\ttbar$ production cross section at the LHC at next-to-leading order is 157.5 pb for an assumed top quark mass of 172.5 GeV, which is twenty times larger than that at the Tevatron at a center-of-mass energy of 2 TeV. 
This enables event samples with leptonically decaying vector bosons to be used more effectively. These samples have well determined backgrounds. A recent search in the three-lepton channels performed at ATLAS with an integrated luminosity of 2.1 fb$^{-1}$ reported a $\mathcal{B}(\cPqt \to \cPZ\cPq) $ upper limit of 0.73% [@ref:ATLAS-2012]. We expect ${\cal B}(\cPqt \to \cPZ\cPq)$ to be small and look for ${\ttbar \to \cPZ\cPq + \PW\cPqb \to \ell\ell \cPq + \ell} \nu\cPqb$ final state events, which produce three-lepton ($\Pe\Pe\Pe, \Pe\Pe\mu, \mu\mu\Pe, \mu\mu\mu$) final states. This choice results in a measurement with reduced background and fewer signal events. The analysis uses a data sample corresponding to an integrated luminosity of 5.0 fb$^{-1}$ of proton-proton collisions at $\sqrt{s} = 7$ TeV, recorded by the Compact Muon Solenoid (CMS) experiment during 2011. The CMS Detector {#sec:cms} ================ The central feature of the CMS apparatus is a superconducting solenoid, 13 m in length and 6 m in diameter, which provides an axial magnetic field of 3.8 T. Within the field volume there are several particle detection systems. Charged particle trajectories are measured by silicon pixel and silicon strip trackers, covering $0\le \phi \le 2\pi$ in azimuth and $\abs{\eta} < 2.5$ in pseudorapidity, where $\eta$ is defined as $-\ln[\tan(\theta/2)]$ and $\theta$ is the polar angle of the trajectory of the particle with respect to the counterclockwise proton beam direction. A crystal electromagnetic calorimeter and a brass/scintillator hadron calorimeter surround the tracking volume, providing energy measurements of photons, electrons and hadron jets. Muons are identified and measured in gas-ionization detectors embedded in the steel return yoke outside the solenoid. The detector is nearly hermetic, allowing energy balance measurements in the plane transverse to the beam direction. A two-tier trigger system selects the most interesting proton-proton collision events for use in physics analysis. 
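The pseudorapidity convention above can be illustrated with a short helper (a minimal sketch; the function name is ours, and the natural logarithm is used, as is standard for pseudorapidity):

```python
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)) for a polar angle theta in radians."""
    return -math.log(math.tan(theta / 2.0))

# A trajectory perpendicular to the beam (theta = pi/2) has eta = 0;
# the tracker acceptance |eta| < 2.5 corresponds to theta down to ~9.4 degrees.
```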
A more detailed description of the CMS detector can be found in Ref. [@ref:cms]. Basic Selection {#sec:preselection} =============== Events with two opposite-sign, isolated leptons ($\Pe$ or $\mu$) consistent with a $\cPZ$-boson decay and an extra charged lepton are selected: $\Pep\Pem\Pe^\pm, \Pep\Pem\mu^\pm, \Pgmp\Pgmm\Pe^\pm, \Pgmp\Pgmm\Pgm^\pm$. All three leptons must be isolated and have transverse momentum $\pt>20$ GeV, and the electrons (muons) must have $\abs{\eta} < 2.5~(\abs{\eta} < 2.4)$. Events are required to pass at least one of the $\Pe\Pe$ or $\mu\mu$ high-$\pt$ double-lepton triggers. Their efficiencies for events containing two leptons satisfying the analysis selection are measured to be 99%, 98%, 91% and 93% for the $\Pe\Pe\Pe$, $\Pe\Pe\mu$, $\mu\mu\Pe$ and $\mu\mu\mu$ channels, respectively. Muon candidates are reconstructed with a global fit of trajectories using hits in the tracker and the muon system. The muon candidate must have associated hits in the silicon strip and pixel detectors, have segments in the muon chambers, and have a high-quality global fit to the track trajectory. The efficiency for these muon selection criteria is at least $99$% [@Khachatryan:2010xn]. Electron reconstruction starts from clusters of energy deposits in the electromagnetic calorimeter, which are matched to hits in the silicon strip and the pixel detectors. Electrons are identified using variables which include the ratio between the energy deposited in the hadron and the electromagnetic calorimeters, the shower width in $\eta$, and the distance between the calorimeter shower and the particle trajectory in the tracker, measured in both $\eta$ and $\phi$. The selection criteria used are optimized [@Khachatryan:2010xn] to maintain an efficiency of approximately 95% for the electrons from $\PW$ or $\cPZ$ decays. The invariant mass of at least one $\Pep\Pem$ or $\Pgmp\Pgmm$ pair is required to be between 60 and 120 GeV. 
If two dilepton pairs lie in this mass window, the one with mass closest to the $\cPZ$ mass is taken. Due to the high instantaneous luminosity of the LHC, there are multiple interactions per bunch crossing (pileup). Therefore, events are required to have at least one good primary vertex, which is chosen as the vertex with the highest $\Sigma {\pt}^2$ of its associated tracks. All leptons used to select or reject events must come from the same primary vertex. The $\Pgmp\Pgmm$ pair opening angle is required to differ from $\pi$ radians by more than 0.05 radians to reject cosmic rays. Electrons and muons from $\cPZ$ and $\PW$ decays are expected to be isolated from other particles. A cone of size $\DR \equiv \sqrt{(\Delta\eta)^2 + (\Delta \phi)^2}=0.3$ is constructed around the lepton momentum direction. The lepton relative isolation is quantified by summing the transverse energy (as measured in the calorimeters) and the transverse momentum (as measured in the silicon tracker) of all objects within this cone, excluding the lepton, and then dividing by the lepton transverse momentum [@Chatrchyan:2012xi]. The resulting quantity, corrected for additional underlying event activity due to pileup events, is required to be less than 0.125 (0.1) for $\cPZ \to \ell^+\ell^-$ ($\PW \to \ell\nu$). This requirement rejects misidentified leptons and background arising from hadronic jets. The third lepton in the event should be the result of a leptonic decay of a $\PW$ boson. In order to increase the electron purity, more stringent reconstruction requirements are used for $\PW\to \Pe\nu$ candidates. In this case the selection criteria are optimized [@Khachatryan:2010xn] to reject the background from jets while maintaining an efficiency of 80% for the electrons from $\PW$ or $\cPZ$ decays. The muon purity for the $\cPZ$ selection described above is high and the same reconstruction requirements are used to identify ${\PW\to}\mu\nu$ candidates. 
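The cone-based relative isolation described above can be sketched as follows. This is an illustrative simplification: the per-object sums stand in for the calorimeter and tracker sums, the pileup correction is omitted, and the function and key names are ours:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Cone distance dR = sqrt(d_eta^2 + d_phi^2), with d_phi wrapped to [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(lepton, others, cone=0.3):
    """Sum the pT of all other objects inside the cone around the lepton,
    then divide by the lepton pT. Each object is a dict with keys
    'pt', 'eta', 'phi' (a simplified stand-in for the analysis objects)."""
    iso = sum(o["pt"] for o in others
              if delta_r(lepton["eta"], lepton["phi"], o["eta"], o["phi"]) < cone)
    return iso / lepton["pt"]
```

The result would then be compared against the working points quoted in the text: less than 0.125 for the $\cPZ$ leptons and less than 0.1 for the $\PW$ lepton.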
Events with a fourth lepton satisfying the $\PW\to \ell\nu$ criteria are rejected. The jets and the missing transverse energy vector ($-\Sigma \vec{p}_{\mathrm{T}}$) and its magnitude ($\met$) are reconstructed using a particle-flow technique [@ref:pf]. The anti-$k_{\mathrm{T}}$ clustering algorithm [@ref:kt] with a distance parameter of 0.5 is used for jet reconstruction. The energy calibration [@ref:jetscale] is performed separately for each particle type in the jet, and the resulting jet energies require only a small correction accounting for thresholds and residual inefficiencies. In addition, a correction for pileup is included and jets are required to satisfy identification criteria that eliminate jets originating from noisy channels in the calorimeters [@ref:met; @ref:noise]. Jets are required to have $\pt > 30$ GeV, $\abs{\eta} < 2.4$, and to be separated by $\DR > 0.4$ from leptons passing the analysis selection. Neutrinos from $\PW$-boson decays escape detection and produce a significant momentum imbalance in the detector. We require the missing transverse energy to be larger than 30 GeV. The samples of Drell–Yan events with invariant mass of lepton pairs $ m_{\ell\ell}$ larger than 50 GeV, SM $\ttbar$, $\cPZ \ttbar$, $\PW \ttbar$ and $\PW\cPZ$ are generated using $\MADGRAPH$ [@ref:madg]. The samples of $\PW\PW$ and $\cPZ\cPZ$ diboson events are simulated using PYTHIA [@ref:pythia], while single-top-quark events are generated using POWHEG [@ref:Nason:2004rx; @ref:Frixione:2007vw; @ref:Alioli:2010xd]. The signal sample $\Pp\Pp \to \ttbar \to \cPZ\cPq + \PW\cPqb \to$ $\ell^+\ell^- \cPq + \ell^\pm\nu\cPqb\ ( \ell = \Pe , \mu, \tau)$ is generated with and the top quarks decay and hadronize through . Due to the loss of top quark spin information for FCNC in , events are reweighted according to the SM prediction of the helicity distribution. This study is not sensitive to the choice of anomalous coupling settings, which are taken into account in the systematic uncertainties. 
The set of parton distribution functions used is CTEQ6L [@ref:CTEQ6L]. The CMS detector response is simulated using a $\GEANTfour$-based [@ref:geant] model, and the events are reconstructed and analyzed using the same software used to process collision data. The simulated events are weighted so that the trigger efficiencies, reconstruction efficiencies and the distribution of reconstructed vertices observed in data are reproduced. The observed and expected yields based on MC after the basic event selection described above are listed in Table \[tab:preselection\]. The initial data sample of 1.3 (1.6) million $\cPZ$ to $\Pe\Pe\,( \mu\mu )$ events is reduced to less than 100 events per three-lepton channel. All entries in Table \[tab:preselection\] also include the $\tau$ decay mode contributions. Single-top-quark production is dominated by the $\PW\cPqt$ channel. The total yields are dominated by diboson production and a reasonable agreement is observed between data and simulation. The details of the background estimations are discussed in Section \[sec:back\]. Figure \[fig:jet\] shows the distributions for data and simulated events of the missing transverse energy, the transverse mass of the $\PW$ boson candidate ($m_\mathrm{T}$), and the scalar sum of the transverse energy $S_{\mathrm{T}}$, after the trigger, $\cPZ$ boson, third lepton, fourth-lepton veto, and missing transverse energy requirements, and the additional requirement of two or more jets. The variable $S_{\mathrm{T}}$ is defined as $ \Sigma { p_{\mathrm{T}\ell}} + \Sigma { p_{\mathrm{T}\rj}} + \met$, where only the three leptons and two jets from the $\ttbar$ candidate are considered. The transverse mass $m_\mathrm{T}$ is calculated using the transverse momentum and azimuthal direction of the third lepton and the magnitude and direction of the missing transverse energy, as $\sqrt{2p_{\mathrm{T}\ell}\met(1-\cos(\Delta\phi))}$. 
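The two kinematic variables defined above can be sketched as follows (illustrative helper functions with names of our choosing, not the analysis code; momenta and energies in GeV):

```python
import math

def transverse_mass(pt_lep, phi_lep, met, phi_met):
    """m_T = sqrt(2 * pT_lep * MET * (1 - cos(dphi))) for the lepton-MET system."""
    dphi = phi_lep - phi_met
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

def s_t(lepton_pts, jet_pts, met):
    """Scalar sum over the three leptons and two jets of the ttbar candidate,
    plus the missing transverse energy."""
    return sum(lepton_pts) + sum(jet_pts) + met
```

For a lepton and the missing transverse energy back to back with $\pt = \met = 40$ GeV, `transverse_mass` gives the Jacobian-edge value of 80 GeV.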
![image](METWithLepton2_log.pdf){width="45.00000%"}\ ![image](TMass.pdf){width="45.00000%"} ![image](hist_hts_sc.pdf){width="45.00000%"} Signal Reconstruction {#sec:signal} ===================== For the $\cPqt \to \cPZ\cPq \to \ell^+\ell^- \rj$ signal, a full reconstruction of the top quark mass $m_{\cPZ \rj}$ is possible and straightforward, but the possibility of a combinatorial background arises since there is no unambiguous way to pair multiple light-quark jets with the $\cPZ$ boson. Therefore all possible combinations are examined. The invariant mass of the W and b jet system ($m_{\PW\cPqb}$) can be reconstructed by assuming that the transverse components of the neutrino momentum are given by the missing transverse energy vector information, while the longitudinal component $p_{z\nu}$ is calculated as $$p_{z\nu} = \frac{\mu\, p_{z\ell}}{E_{\ell}^2 - p_{z\ell}^2} \pm \sqrt{ \left( \frac{\mu\, p_{z\ell}}{E_{\ell}^2 - p_{z\ell}^2} \right)^2 - \frac{E_{\ell}^2 E_{\mathrm{T}\nu}^2 - \mu^2}{E_{\ell}^2 - p_{z\ell}^2} }, \qquad \mu = \frac{m_\PW^2}{2} + p_{x\ell}\, p_{x\nu} + p_{y\ell}\, p_{y\nu},$$ where ${E_{\ell}}$, ${p_{x \ell}}$, ${p_{y\ell}}$, and ${p_{z\ell}}$ are the energy and momentum components for the lepton, while the neutrino ${E_{\mathrm{T}\nu}}$, ${p_{x \nu}}$ and ${p_{y\nu}}$ are estimated from the reconstructed missing transverse energy magnitude and direction, and imposing the constraint that the invariant mass of the lepton and the neutrino is equal to the $\PW$-boson mass ($m_\PW$). If the discriminant is found to be negative, it is set equal to zero. In events in which there are two possible solutions for $p_{z\nu}$, the solution with the smaller magnitude of $p_{z\nu}$ is taken; studies with simulated signal events show that this solution is the correct one more than 60% of the time. Next, we add the requirements on jets, $m_{\cPZ \rj}$, and $m_{\PW\cPqb}$ to the basic selection described in Section \[sec:preselection\], and search for $\ttbar\to \PW\cPqb + \cPZ\cPq$ in two ways. One selection requires a minimum value of $S_{\mathrm{T}}$ and loose requirements on $m_{\cPZ \rj}$ and $m_{\PW\cPqb}$. 
The second selection is stricter, with tight requirements on the $m_{\cPZ \rj}$ and $m_{\PW\cPqb}$ quantities and the requirement that one of the jets should be consistent with the hadronization of a $\cPqb$ quark, namely a “$\cPqb$ jet”. In this Letter, we refer to these two selections as the “$S_{\mathrm{T}}$” and “$\cPqb$-tag” selections, respectively. The first selection is the more sensitive and hence is taken as the reference analysis. Table \[tab:effy\] shows the estimates of the overall signal efficiency determined from simulated events.

  Channel        $S_{\mathrm{T}}$ Selection $[\%]$   $\cPqb$-tag Selection $[\%]$
  -------------- ----------------------------------- ------------------------------
  $\Pe\Pe\Pe$    (12.4 $\pm$ 1.1)                    (3.8 $\pm$ 0.6)
  $\Pe\Pe\mu$    (13.8 $\pm$ 1.2)                    (5.0 $\pm$ 0.7)
  $\mu\mu \Pe$   (14.8 $\pm$ 1.2)                    (5.1 $\pm$ 0.7)
  $\mu\mu\mu$    (14.7 $\pm$ 1.2)                    (5.3 $\pm$ 0.7)

$S_{\mathrm{T}}$ Selection ---------------- In the $S_{\mathrm{T}}$ selection, at least two jets with $\pt > 30$ GeV are required, which are assumed to come from the primary vertex. A constituent track candidate in a jet is removed from the reconstruction if it does not point to the same vertex, but there is no association requirement between jets and the $\cPZ$ candidate chosen in the basic selection. A candidate event is required to have $S_{\mathrm{T}}$ above 250 GeV, and $m_{\cPZ \rj}$ and $m_{\PW\cPqb}$ are required to be between 100 and 250 GeV. This requirement reduces the number of boson-jet combinations. The distribution of the best candidate is shown in Figure \[fig:jet\]. All possible $\ttbar$ combinations are examined and the reconstructed $\ttbar$ pair that has the largest separation in azimuthal angle is selected. Figure \[fig:mass\](top) shows the comparison of the distributions of $m_{\cPZ \rj}$ and $m_{\PW\cPqb}$ in data and simulation after the basic event selection described in Section \[sec:preselection\] (Table \[tab:preselection\]), combined with the requirement of two or more jets and the $S_{\mathrm{T}}$ requirement. 
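The longitudinal neutrino-momentum reconstruction described in the signal reconstruction, including the treatment of a negative discriminant and the choice of the smaller-$|p_{z\nu}|$ solution, can be sketched as follows (a minimal illustration assuming a massless lepton and $m_\PW = 80.4$ GeV; the function name is ours, not the analysis code):

```python
import math

M_W = 80.4  # assumed W-boson mass in GeV

def neutrino_pz(e_l, px_l, py_l, pz_l, px_nu, py_nu):
    """Solve the W-mass constraint (p_lep + p_nu)^2 = M_W^2 for p_z of the
    neutrino, with the neutrino transverse momentum taken from the missing
    transverse energy. Returns the smaller-|pz| solution; a negative
    discriminant is set to zero, as described in the text."""
    mu = 0.5 * M_W**2 + px_l * px_nu + py_l * py_nu
    pt2_l = px_l**2 + py_l**2          # = E_l^2 - pz_l^2 for a massless lepton
    et2_nu = px_nu**2 + py_nu**2
    a = mu * pz_l / pt2_l
    disc = a**2 - (e_l**2 * et2_nu - mu**2) / pt2_l
    if disc < 0.0:                     # complex solutions: discriminant -> 0
        return a
    root = math.sqrt(disc)
    return min(a + root, a - root, key=abs)
```

For a $\PW$ boson decaying at rest (lepton and neutrino back to back in the transverse plane), the reconstruction returns $p_{z\nu}=0$ exactly, and in general the returned solution closes the $\PW$ mass by construction whenever the discriminant is non-negative.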
$\cPqb$-tag Selection
---------------------

To further reduce the background from diboson events, a $\cPqb$-tag based selection is performed. In this selection, at least two jets are required to be associated with the primary vertex associated with the $\cPZ$ candidate, and the event can contain only one $\cPqb$ jet. The $\cPqb$ jets are identified by the track counting high-efficiency $\cPqb$-tagging algorithm described in Ref. [@ref:btag], which relies on tracks with large impact parameter significance. This tagging method has an identification efficiency of 65% to 85% for $\cPqb$ jets with transverse momentum between 30 and 100 GeV and a misidentification rate below 15%. The jet combination that gives an invariant mass $m_{\cPZ \rj}$ closest to the top quark mass is selected, and the reconstructed top quark mass $m_{\cPZ \rj}$ is required to be within 25 GeV of the assumed top quark mass, $m_\cPqt = 172.5$ GeV, while $m_{\PW\cPqb}$ is required to be within 35 GeV of $m_\cPqt$. Figure \[fig:mass\] (bottom) shows the comparison between data and simulated events for $m_{\cPZ \rj}$ and $m_{\PW\cPqb}$ after the basic event selection and requiring at least two jets, one of which is a $\cPqb$ jet.

![image](hist_tcz_sc.pdf){width="45.00000%"} ![image](hist_tbw_sc.pdf){width="45.00000%"}\
![image](topMassZ.pdf){width="45.00000%"} ![image](topMassW.pdf){width="45.00000%"}

Background Estimation {#sec:back}
=====================

Backgrounds are estimated from the yields of simulated events passing the full selection for $\PW\PW$, $\PW\cPZ$, $\cPZ\cPZ$, $\PW\ttbar$, $\cPZ\ttbar$, and single-top-quark production, while estimates based on data are made for the Drell–Yan and $\ttbar$ backgrounds. The uncertainties in the background estimates given below include, in order, the statistical and systematic components. The $\PW\cPZ$ and $\cPZ\cPZ$ processes are the dominant diboson backgrounds. The production of $\PW$ pairs has a higher cross section, but is unlikely to yield both an extra high-$\pt$ lepton and a $\cPqb$ jet.
The diboson background estimates are $13.6\pm 0.2 \pm 2.6$ ($0.72\pm 0.01 \pm 0.15$) and $1.09\pm 0.02\pm 0.21$ ($0.058\pm 0.001 \pm 0.012$) events for the $\PW\cPZ$ and $\cPZ\cPZ$ processes in the $S_\mathrm{T}$ ($\cPqb$-tag) selection. These estimates have been rescaled by $1.3 \pm 0.1$ to take into account the overall normalization difference observed between data and simulation for zero-jet events, after the event selection given in Section \[sec:preselection\]. The uncertainty of the rescaling factor, estimated from statistical fluctuations in the data, contributes to the systematic uncertainties on the diboson background estimates. The single-top-quark background is smaller than $0.01$ events at the 95% CL in both selections. The $\cPZ\ttbar$ and $\PW\ttbar$ cross sections are of the same order [@ref:CMS_Vtt]. The corresponding background estimates are $3.75\pm 0.06 \pm 2.30$ ($0.260\pm 0.004\pm 0.160$) and $0.54\pm 0.03\pm 0.36$ ($0.039\pm 0.002 \pm 0.026$) events for the $\cPZ\ttbar$ and $\PW\ttbar$ processes in the $S_\mathrm{T}$ ($\cPqb$-tag) selection. The Drell–Yan background is small due to the minimum requirement of 30 GeV on the missing transverse energy. Other backgrounds from QCD multijet events, in which a jet could be misidentified as a lepton, are negligible. It is possible for SM $\ttbar$ events to satisfy the $\cPZ$ selection when both $\PW$ bosons decay leptonically into leptons of the same flavor, but the third-lepton and top quark mass requirements reject these events. The Drell–Yan and $\ttbar$ background estimates are based on two data samples. The first sample is composed of all events satisfying the basic event selection with two or more jets and loose requirements on $S_\mathrm{T}$, $m_{\cPZ \rj}$, and $m_{\PW\cPqb}$. The second sample has the same loose requirements on $S_\mathrm{T}$, $m_{\cPZ \rj}$, and $m_{\PW\cPqb}$, but in addition a less stringent isolation criterion for the third lepton.
Therefore, the second sample is an admixture of the purer three-lepton sample plus events with a misidentified lepton, originating from jets or heavy-flavor decays, or genuine three-lepton events that were lost from the signal sample due to the more stringent isolation requirement. The numbers of events in the two samples are related by the efficiency of the nominal lepton isolation for genuine leptons and the probability for a jet to be misidentified as a lepton. Using the genuine and misidentified lepton efficiencies, which are both determined from data, the yields of genuine and misidentified three-lepton events are found. This measurement is turned into an estimate of the Drell–Yan and $\ttbar$ background after subtracting the contribution from dibosons and taking into account the change in acceptances and efficiencies ([$\eg$]{} $\cPqb$ tagging) after the full signal selections are made. The total contributions of Drell–Yan and $\ttbar$ events after the $S_\mathrm{T}$ and $\cPqb$-tag selections are estimated to be $1.5\pm 0.5 \pm 0.4$ and $0.06\pm 0.02 \pm 0.01$ events, respectively. The statistical and systematic uncertainties are estimated from the numbers of events in these two data samples and the uncertainty of the lepton isolation efficiencies measured with data. These estimates are compatible with the expectations based on simulated events. The total estimated backgrounds are given in Table \[tab:yields\].
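The two-sample counting described above amounts to inverting a 2$\times$2 linear system relating loose- and tight-isolation yields. A schematic sketch (function name and numbers are illustrative, not taken from the analysis):

```python
def fake_matrix(n_tight, n_loose, eff_real, eff_fake):
    """Invert the two-sample counting method: n_loose counts third-lepton
    candidates passing the relaxed isolation, n_tight those also passing the
    nominal one.  eff_real (eff_fake) is the probability for a genuine
    (misidentified) loose lepton to pass the tight isolation, both measured in
    data.  Returns the estimated genuine and misidentified contributions to
    the tight sample."""
    # System:  n_tight = eff_real*N_real + eff_fake*N_fake
    #          n_loose = N_real + N_fake
    det = eff_real - eff_fake
    n_real = (n_tight - eff_fake * n_loose) / det
    n_fake = (eff_real * n_loose - n_tight) / det
    return eff_real * n_real, eff_fake * n_fake
```

For example, with `eff_real = 0.9`, `eff_fake = 0.2`, 150 loose and 100 tight events, the tight sample splits into 90 genuine and 10 misidentified leptons.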
                                      $S_\mathrm{T}$ Selection               $\cPqb$-tag Selection
  ----------------------------------- -------------------------------------- --------------------------------------
  $\PW\cPZ$ background                $13.59\pm 0.20 \pm 2.58$               $0.718\pm 0.011\pm 0.150$
  $\cPZ\cPZ$ background               $1.09\pm 0.02\pm 0.21$                 $0.058\pm 0.001 \pm 0.012$
  Drell–Yan and $\ttbar$ background   $1.52\pm 0.46\pm 0.41$                 $0.055\pm 0.017 \pm 0.012$
  $\cPZ\ttbar$ background             $3.75\pm 0.06\pm 2.30$                 $0.260\pm 0.004 \pm 0.160$
  $\PW\ttbar$ background              $0.54\pm 0.03\pm 0.36$                 $0.039\pm 0.002 \pm 0.026$
  Total background prediction         20.49 $\pm$ 0.51 $\pm$ 3.51            1.13 $\pm$ 0.02 $\pm$ 0.22
  Observed events                     11                                     0
  Expected limit at the 95% CL        ${\cal B}(\cPqt\to\cPZ\cPq)< 0.40\%$   ${\cal B}(\cPqt\to\cPZ\cPq)< 0.41\%$
  Observed limit at the 95% CL        ${\cal B}(\cPqt\to\cPZ\cPq)< 0.21\%$   ${\cal B}(\cPqt\to\cPZ\cPq)< 0.30\%$

Systematic Uncertainties {#sec:sys}
========================

The systematic uncertainties come from the trigger efficiency, the choice of parton distribution functions, the lepton selection, the pileup modeling, the missing transverse energy resolution, the uncertainties on the $\ttbar$ cross section and the diboson rescaling, the $\cPqb$-tagging efficiency for high-$\pt$ $\cPqb$ jets [@ref:btag], and the jet energy scale [@ref:jetscale]. The prescription given in [@ref:cteq] is used to determine the uncertainty from the choice of parton distribution functions. In addition, there is a 2.2% uncertainty on the luminosity measurement [@ref:lumi2012]. All these sources combine to give a 19% (21%) relative uncertainty on the signal acceptance times efficiency in the $S_\mathrm{T}$ ($\cPqb$-tag) selection. The systematic uncertainties are summarized in Table \[tab:sys\]. The systematic uncertainty of the background estimation is listed with the total background prediction in Table \[tab:yields\].
  Source                                 $S_\mathrm{T}$ selection $[\%]$   $\cPqb$-tag selection $[\%]$
  -------------------------------------- --------------------------------- ------------------------------
  Trigger efficiency                     4                                 4
  Parton distribution functions          6                                 6
  Lepton selection                       7                                 7
  Pileup events                          7                                 7
  Missing transverse energy resolution   8                                 8
  Cross sections and rescaling           8                                 8
  $\cPqb$ tagging                        —                                 9
  Jet energy scale                       10                                10
  Total                                  19                                21

Results {#sec:results}
=======

In the $S_\mathrm{T}$ ($\cPqb$-tag) selection, we expect $20.5\pm 3.5$ ($1.1 \pm 0.2$) events from the SM background processes and we observe 11 (0) events for all four channels combined. When all statistical and systematic uncertainties are taken into account, the probability for the expected number of events, 20.5, to fluctuate down to the 11 events observed, or fewer, is 5%. No excess beyond the SM background is observed, and a 95% CL upper limit on the branching fraction of $\cPqt\to\cPZ\cPq$ is determined using the modified frequentist approach (CL$_\mathrm{s}$ method [@ref:Junk:1999kv; @ref:Read:2002hq]). A summary of the observed and predicted yields and limits is presented in Table \[tab:yields\]. The calculation of the upper limit is based on the observed event count combined with the values and uncertainties of the luminosity measurement, the background prediction, and the fraction of all $\ttbar\to \cPZ\cPq + \PW\cPqb \to \ell \ell \cPq + \ell \cPqb$ events expected to be selected. The signal event yield is obtained from the efficiency times acceptance and the branching fraction for simulated events. As $\mathcal{B}(\cPqt \to \cPZ\cPq)$ is expected to be small, the possibility of both top quarks decaying via flavor changing neutral currents is not considered. The best observed and expected 95% CL upper limits on the branching fraction ${\cal B}(\cPqt \to \cPZ\cPq)$ are 0.21% and 0.40%, respectively, obtained in the $S_\mathrm{T}$ selection from the combined three-lepton analyses. The one-sigma boundaries of the expected limit are 0.30–0.59%.
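For orientation, the CL$_\mathrm{s}$ construction can be illustrated for a single-bin counting experiment with the uncertainties neglected (a simplification; the limits quoted in this Letter include the full treatment of systematic uncertainties, and the function names below are illustrative):

```python
import math

def pois_cdf(n, mu):
    """P(N <= n) for a Poisson-distributed count with mean mu."""
    return math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(n + 1))

def cls(n_obs, b, s):
    """CLs = CL_{s+b} / CL_b for expected background b and signal s."""
    return pois_cdf(n_obs, s + b) / pois_cdf(n_obs, b)

def upper_limit(n_obs, b, cl=0.95, s_hi=100.0, tol=1e-6):
    """Smallest signal yield excluded at confidence level cl (bisection,
    exploiting that cls is monotonically decreasing in s)."""
    lo, hi = 0.0, s_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cls(n_obs, b, mid) < 1.0 - cl:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

A well-known feature of CL$_\mathrm{s}$ is visible here: with zero observed events the ratio collapses to $e^{-s}$, so the 95% CL limit on the signal yield is $\ln 20 \approx 3.0$ events regardless of the expected background.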
The corresponding observed and expected upper limits and one-sigma boundaries for the $\cPqb$-tag selection are 0.30%, 0.41%, and 0.30–0.53%, respectively. The expected limits for the $S_\mathrm{T}$ and $\cPqb$-tag selections show that they have comparable sensitivity. The one with the slightly better expected limit is taken as the final result.

Summary {#sec:conclusions}
=======

A search for flavor changing neutral currents in top quark decays in $\ttbar$ events produced in proton-proton collisions at $\sqrt{s} = 7$ TeV is presented. A sample of three-lepton events is selected from data recorded by CMS during 2011, corresponding to an integrated luminosity of 5.0 fb$^{-1}$. These events are compatible with a $\Pp\Pp \to \ttbar \to$ $\cPZ\cPq + \PW\cPqb \to$ $\ell\ell \cPq + \ell \nu\cPqb\,(\ell = \Pe,\,\mu)$ topology. Since three-lepton events originating from SM processes are rare, the background contributions are small. No excess of events over the SM background is observed, and a ${\cal B}({\cPqt\to\cPZ\cPq})$ branching fraction larger than 0.21% is excluded at the 95% confidence level.

Acknowledgments {#acknowledgments .unnumbered}
===============

We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC machine.
We thank the technical and administrative staff at CERN and other CMS institutes, and acknowledge support from: FMSR (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES (Croatia); RPF (Cyprus); MoER, SF0690030s09 and RDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NKTH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); NRF and WCU (Korea); LAS (Lithuania); CINVESTAV, CONACYT, SEP, and UASLP-FAI (Mexico); MSI (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Armenia, Belarus, Georgia, Ukraine, Uzbekistan); MON, RosAtom, RAS and RFBR (Russia); MSTD (Serbia); MICINN and CPAN (Spain); Swiss Funding Agencies (Switzerland); NSC (Taipei); TUBITAK and TAEK (Turkey); STFC (United Kingdom); DOE and NSF (USA).

The CMS Collaboration \[app:collab\]
====================================
--- abstract: 'Metals which can form intermetallic compounds by exothermic reactions constitute a class of reactive materials with multiple applications. Ni-Al laminates of thin alternating layers are being considered as model nanometric metallic multilayers for studying various reaction processes. However, the reaction kinetics at short timescales after mixing are not entirely understood. In this work, we calculate the free energies of Ni-Al alloys as a function of composition and temperature for different solid phases using thermodynamic integration based on state-of-the-art interatomic potentials. We use this information to interpret molecular dynamics (MD) simulations of bilayer systems at 800 K and zero pressure, both in isothermal and isenthalpic conditions. We find that a disordered phase always forms upon mixing as a precursor to a more stable nanocrystalline B2 phase. We construe the reactions observed in terms of thermodynamic trajectories governed by the state variables computed. Simulated times of up to 30 ns were achieved, which provides a window to phenomena not previously observed in MD simulations. Our results provide insight into the early experimental reaction timescales and suggest that the path (segregated reactants)$\rightarrow$(disordered phase)$\rightarrow$(B2 structure) is always realized irrespective of the imposed boundary conditions.' address: Lawrence Livermore National Laboratory author: - 'Luis Sandoval [^1], Geoffrey H. Campbell, Jaime Marian' title: 'Thermodynamic interpretation of reactive processes in Ni-Al nanolayers from atomistic simulations' ---

Introduction {#sec:intro}
============

Solid reactive materials that form intermetallic compounds via high energy release are increasingly being used in multiple materials science processes [@nuzzo1986; @intro1; @roga2008].
These systems have the advantage that both reactants and products are confined to the condensed state, which makes them helpful in anaerobic conditions and where gaseous products are undesirable. For these reasons, there is a wide range of applications where they are now being used, such as welding, propellants, heat initiators, etc. Ni-Al systems are an important subclass of reactive materials due to the formation of intermetallic phases with high-temperature strength and high resistance to oxidation [@barmak1997; @gunduz2009]. Because of their many attractive properties and promising applications, Ni-Al reactive systems have attracted significant attention over the last two decades, both experimental [@kim2008; @kim2011; @trenkle2010; @swa2013] and theoretical [@salloum2010; @vohra2011]. The formation of Ni-Al intermetallics is an intrinsically atomistic process, governed by the free energies of the different phases involved as well as the kinetics of atomic and interfacial motion. With the advent of efficient simulation codes and reliable interatomic potentials, molecular dynamics (MD) has emerged as an ideal tool to investigate these reactive processes. However, despite the number of MD studies performed to date [@geyser2000; @yu2007; @delogu; @ev2011], a comprehensive thermodynamic picture of the reaction processes is still lacking. In this paper, we use state-of-the-art interatomic potentials to calculate the free energies of different Ni-Al phases. We then use this thermodynamic information to interpret and understand the reactive behavior of Ni-Al nanolaminates. We carry out simulations at 800 K, which is known to be above the temperature of self-ignition in NiAl ($\approx$600 K), and zero pressure, both under isothermal and isenthalpic conditions.
We find that, in both cases, a recrystallized B2 phase forms from a local disordered phase (which can be considered the liquid phase or a local amorphous phase, depending on the situation) caused primarily by the swift penetration of Ni into the Al layer. This paper is organized as follows. In Section \[sec:methods\], we describe the simulation method in detail, as well as the thermodynamic integration technique employed. We then begin Section \[sec:results\] by providing the energy vs. volume curves for several solid phases of equiatomic Ni-Al, followed by the internal and free energy results as a function of alloy composition, and a description of Ni-Al bicrystal reaction kinetics. We conclude in Section \[sec:disc\] with a discussion of our findings and the conclusions.

Methods {#sec:methods}
=======

All calculations presented here were carried out with the `lammps` code [@lammps] using 256 processors on Livermore Computing's parallel architectures. We employ the embedded-atom method (EAM) interatomic potential for Ni-Al developed by Purja Pun and Mishin [@mishin2009]. This potential is particularly suitable for simulations of heterophase interfaces and mechanical behavior of Ni-Al alloys. Additionally, the melting points of fcc Ni (1700 K) and Al (1040 K) are well reproduced by the potential, which adds confidence to the high temperature calculations that will be presented here. Except where noted, we consider periodic systems at zero total pressure. To compute free energies we use thermodynamic integration based on Kirkwood's coupling parameter method [@kirkwood1935; @rickman2002; @muller2007; @tuckerman2008], also known as $\lambda$-*integration*. In the canonical ensemble, the free energy difference, $\Delta F=F_{B}-F_{A}$, between two systems $A$ and $B$ characterized by potential energy functions $U_A$ and $U_B$ can be obtained by integrating along a reversible path from $A$ to $B$.
The distance along this path can be measured by using a potential energy function that depends on a switching parameter $\lambda$: $$U(\lambda)=(1-\lambda)U_{A}+\lambda U_{B}$$ The canonical partition function for such a system can be written as: $$Q(N,\Omega,T;\lambda)=C\int{d\vec{r}^N}\exp{\left\{-\beta U(\lambda)\right\}} \label{eq:q}$$ where $N$ is the number of particles, $\Omega$ is the system volume, $T$ is the absolute temperature, and $C$ is a constant that includes the result of integration over momenta and other physical constants. $\beta=(k_BT)^{-1}$ is the reciprocal temperature with $k_B$ being Boltzmann's constant. From eq. \[eq:q\] the Helmholtz free energy can be calculated as $F=-\beta^{-1}\ln{Q}$, whose derivative with respect to the switching parameter can be written as: $$\left.\frac{\partial F(\lambda)}{\partial\lambda}\right|_{N,\Omega,T}=-\frac{1}{\beta}\frac{\partial}{\partial\lambda}\ln{Q}= -\frac{1}{\beta Q}\frac{\partial Q}{\partial\lambda}=\frac{\int{d\vec{r}^N}\frac{\partial U(\lambda)}{\partial\lambda}\exp{\left\{-\beta U(\lambda)\right\}}}{\int{d\vec{r}^N}\exp{\left\{-\beta U(\lambda)\right\}}} \label{eq:deriv}$$ which is the expression of an ensemble average that can be calculated via molecular dynamics simulations. From this, the free energy difference between systems $A$ and $B$ is given by: $$\Delta F=F(U_B)-F(U_A)=\int_0^1\left\langle\frac{\partial U}{\partial\lambda}\right\rangle d\lambda=\int_0^1\left\langle U_{B}-U_A\right\rangle d\lambda \label{int}$$ For solid phases with ordered crystal structure, a system of Einstein oscillators is appropriate as a reference state (characterized by $U_B$). The Einstein crystal can also be used for systems without short-range order such as amorphous phases provided that internal transport processes (e.g.
self-diffusion) are negligible on the scale of the simulations: $$U_B(\vec{r})=\frac{\alpha}{2}\sum_{i=1}^N{\left(\vec{r}_i-\vec{r}_{0,i}\right)^2} \label{einstein}$$ where $\alpha$ is a spring constant[^2] and $\{\vec{r}_0\}$ are the equilibrium positions. After accounting for the use of periodic boundary conditions and fixing the center of mass in MD simulations, the free energy of a system of harmonic oscillators can be obtained analytically as: $$F(U_B)=-\frac{3N}{2\beta}\left\{ \log{\frac{m\alpha^{N-1}\beta^{N-2}}{h^2}}+\frac{1}{3N}\log{\frac{N}{\Omega^2}} \right\} \label{fuo2}$$ where $m$ and $h$ are, respectively, the atomic mass and Planck’s constant. The free energy of the system at temperatures other than the reference temperature, $T_0$, used in the above process can be found by recourse to the Gibbs-Helmholtz integral: $$\frac{F}{T}=\frac{F_0}{T_0}-\int_{T_0}^{T}{\frac{E(n,\Omega,\theta)}{\theta^2}d\theta} \label{hg}$$ where $F_0$ is the free energy at $T_0$ and $E$ is the internal energy. $E$ can be computed as the ensemble average of the total energy, decomposed into potential and kinetic energies: $$E(T)=\langle U(T)+K(T)\rangle \label{et}$$ By way of example, Figure \[fig:lambda\] shows the calculation of the integrand in eq. \[int\] as a function of $\lambda$ for several alloy compositions in random bcc and amorphous Ni-Al phases at 500 K. The curves represent cubic polynomial fits to the data points. Integration of these curves as in eq. \[int\] yields the free energy of the system. Sufficient sampling is critical, particularly when $\langle\partial U/\partial\lambda\rangle$ changes quickly, to ensure an accurate calculation of the free energy integral. To predict critical (phase transition) points with accuracy, $\lambda$ must be resolved very finely near $\lambda=1$ (in 0.01 intervals), which makes these calculations very costly. Methods to mitigate this difficulty have been discussed in the literature [@resat1993]. 
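As a concrete illustration of eq. \[int\], consider two one-dimensional harmonic wells, for which $\Delta F=(2\beta)^{-1}\ln(k_B^{(2)}/k_B^{(1)})$ is known analytically. The toy sketch below (our construction, not the production workflow) replaces dynamics by direct Gaussian sampling, since the canonical distribution of the interpolated potential is exactly Gaussian:

```python
import math, random

def delta_F_lambda(kA, kB, beta=1.0, n_lambda=21, n_samp=20000, seed=1):
    """lambda-integration for U_A = kA*x^2/2 and U_B = kB*x^2/2 in 1D.
    At each lambda the canonical ensemble of U(lambda) = (1-lambda)*U_A +
    lambda*U_B is Gaussian with variance 1/(beta*k(lambda)), so it is sampled
    directly; the trapezoidal rule then yields Delta F = F_B - F_A."""
    random.seed(seed)
    lams = [i / (n_lambda - 1) for i in range(n_lambda)]
    means = []
    for lam in lams:
        k = (1 - lam) * kA + lam * kB
        sigma = math.sqrt(1.0 / (beta * k))
        # <dU/dlambda> = <U_B - U_A> = (kB - kA)/2 * <x^2>
        x2 = sum(random.gauss(0.0, sigma) ** 2 for _ in range(n_samp)) / n_samp
        means.append(0.5 * (kB - kA) * x2)
    # trapezoidal integration over lambda (eq. int)
    return sum(0.5 * (means[i] + means[i + 1]) * (lams[i + 1] - lams[i])
               for i in range(n_lambda - 1))
```

The same workflow scales to the many-body case, where each $\langle\partial U/\partial\lambda\rangle$ comes from an MD average and fine $\lambda$ sampling near $\lambda=1$ is the costly step noted above.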
For liquid systems, defined as those in which diffusion takes place on the order of the simulation time scale, the Einstein crystal can no longer be used as a reference state. In such cases, the free energy is obtained as [@mei1992; @frenkelbook; @glosli1999]: $$F_l=F_{gm}(\rho_0,T_0)+\int_0^{\rho_0}d\rho\left[\frac{p(T_0,\rho)-\rho k_BT_0}{\rho^2}\right] \label{liquid}$$ where $p$ is the pressure, $\rho_0$ is the system’s density at a temperature $T_0$ above the supercritical temperature (here we have taken $T_0=2000$ K) and $F_{gm}(\rho,T)$ is the free energy of a binary ideal gas mixture at density and temperature $\rho$ and $T$: $$F_{gm}(\rho,T)=\frac{1}{2}\left[F_g^{Ni}(\rho,T)+F_g^{Al}(\rho,T)\right]+k_BT\ln\frac{1}{4}$$ with [@broughton1997]: $$F_g^i(\rho,T)=N_ik_BT\left\{\ln\left[\rho\left(\frac{h^2}{2\pi m_ik_BT}\right)^{3/2}\right]-1\right\}$$ The integrand in the second term of the r.h.s. of eq. \[liquid\] represents an isothermal expansion from $\rho_0$ to zero density (i.e. infinite volume), where the system effectively behaves as an ideal gas. This expansion should be reversible, which means that no first-order transition —e.g. liquid-gas— should be traversed. The equation of state $p(T_0,\rho)$ is obtained from a set of canonical ensemble calculations at $T_0$ of an equiatomic liquid mixture of Ni and Al. This is shown in Figure \[fig:ideal\]. The resulting data are fitted to a 3$^{\rm{rd}}$ degree polynomial and the integral in eq. \[liquid\] is solved to yield the free energies. Results {#sec:results} ======= Equiatomic NiAl --------------- First we present results for equiatomic systems (Ni$_1$Al$_1$) containing 16,000 atoms. We study four distinct phases, namely, the ordered fcc L1$_0$, ordered (B2) and disordered (random solid solution) bcc phases, and an amorphous phase obtained from quenching a liquid system (equilibrated for 100 ps at 3000 K) at a cooling rate of 300 K ps$^{-1}$ [@noya2002]. 
We note that the term 'amorphous' is employed loosely in this paper to refer to an unstructured material, be it in the solid or in the liquid state. Typically an amorphous system can always be considered the liquid structure of a certain solid phase, although it may not necessarily correspond to the absolutely lowest free energy structure.

### Equilibrium volume and thermal expansion.

The thermal expansion coefficient $\alpha_t(T)$ is obtained from the temperature dependence of the atomic volume $\Omega_a$: $$\alpha_t=\frac{1}{\Omega_0}\frac{d\Omega_a}{dT} \label{eq:alpha}$$ where $\Omega_0$ is a reference atomic volume (usually taken as the value at 0 K). We first compute $\Omega_0$ for the four phases indicated above from energy-volume relations. The evolution of the cohesive energy as a function of atomic volume (and density) is shown in Figure \[eos\] for each phase. From the figure, one can obtain the equilibrium values as those corresponding to a minimum of the cohesive energy (indicated by vertical dashed lines in Fig. \[eos\]). The numerical values in each case are given in Table \[table\]. The figure also gives the relative stability of each phase at zero temperature, with the B2 phase always being the most stable.

  Phase       $\Omega_0$ \[Å$^3$\]   $\rho_0$ \[$\times$10$^{28}$ m$^{-3}$\]   $a_0$ \[Å\]   $\alpha_t$ \[$\times10^{-5}$ K$^{-1}$\]   $C_p$ \[eV atom$^{-1}$ K$^{-1}$\]
  ----------- ---------------------- ----------------------------------------- ------------- ----------------------------------------- -----------------------------------
  B2          11.4                   8.81                                      2.83          4.53                                      $2.8\times10^{-4}$
  L1$_0$      11.9                   8.40                                      3.62          4.15                                      $2.8\times10^{-4}$
  amorphous   12.4                   8.06                                      –             6.05                                      $2.6\times10^{-4}$
  bcc         12.1                   8.26                                      2.89          2.73                                      $2.1\times10^{-4}$
  Ni (fcc)    10.9                   9.17                                      3.52          2.76                                      $2.7\times10^{-4}$
  Al (fcc)    16.9                   5.91                                      4.05          5.24                                      $2.7\times10^{-4}$

\[table\]

Next we calculate the variation of the atomic volume with temperature using simulations in the isothermal-isobaric ensemble ($NpT$). Results for all four phases considered here are shown in Fig. \[alpha\].
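The reduction of the $\Omega_a(T)$ data to $\alpha_t$ via eq. \[eq:alpha\] is, in the linear regimes, a least-squares slope estimate. A minimal sketch (our illustration, with synthetic volumes consistent with the order of magnitude in Table \[table\]):

```python
def thermal_expansion(T_V, omega0):
    """Estimate alpha_t = (1/omega0) * d(Omega_a)/dT from (T, volume) pairs
    by an ordinary least-squares linear fit of Omega_a(T)."""
    n = len(T_V)
    tbar = sum(t for t, _ in T_V) / n
    vbar = sum(v for _, v in T_V) / n
    slope = sum((t - tbar) * (v - vbar) for t, v in T_V) / \
            sum((t - tbar) ** 2 for t, _ in T_V)
    return slope / omega0

# synthetic linear Omega_a(T) with alpha_t = 4.53e-5 K^-1 and Omega_0 = 11.4 A^3
data = [(T, 11.4 * (1.0 + 4.53e-5 * T)) for T in range(300, 801, 100)]
```

For the amorphous phase, with its two linear regimes, the same fit would simply be applied separately below and above the transition.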
The values of $\alpha_t$ at 800 K are given in Table \[table\]. For the B2 and L1$_0$ systems, the volume increases linearly with temperature, resulting in a constant $\alpha_t$, while for the bcc solid solution it displays some roughness associated with increased internal diffusion and clustering processes. For the amorphous phase, the evolution of the volume with temperature displays two clearly distinguishable linear regimes with a transition at approximately 1000 K. This results in two different values of $\alpha_t$ above and below that transition. This change is due to the transformation from an amorphous solid with high internal viscosity to a liquid with high internal diffusivity. The diffusion coefficient of the amorphous phase has been calculated as a function of temperature and the results plotted against the right-hand axis in Fig. \[alpha\]. Clearly, the diffusivity follows an Arrhenius behavior, $D(\beta)=D_0\exp\left(-\beta E_m\right)$, with $D_0=2.7\times10^{11}$ m$^2$ s$^{-1}$ and $E_m=0.63$ eV. On the basis of these results, the free energies of the amorphous phase above 1000 K are calculated using eq. \[liquid\]. ### Free energies. From equation \[et\], we compute the internal energies as a function of temperature for all the different equiatomic phases as well as for pure fcc Ni and Al for comparison. Results are shown in Figure \[ee\], with all systems exhibiting linear dependencies except the amorphous one again at temperatures above 1000 K. From these data, the heat capacity at zero pressure can be calculated straightforwardly as $$C_p=\left(\frac{\partial H}{\partial T}\right)_P=\left(\frac{\partial E}{\partial T}\right)_P$$ because at zero pressure $H\equiv E$. The results at 800 K for each phase are given in Table \[table\]. These calculations are a precursor to obtaining the free energies as a function of temperature (cf. eq. \[hg\]), which are given in Figure \[ff\]. 
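The Gibbs-Helmholtz step of eq. \[hg\] is a one-dimensional quadrature; for a linear $E(T)$, as observed for most phases here, it also has a closed form, which makes a convenient cross-check (the numbers below are illustrative, not the actual data):

```python
import math

def free_energy_gh(F0, T0, E, T, n=2000):
    """Gibbs-Helmholtz integration (eq. hg): F(T)/T = F0/T0 - int_{T0}^{T}
    E(theta)/theta^2 d(theta).  E is any callable; midpoint quadrature."""
    h = (T - T0) / n
    integral = sum(E(T0 + (i + 0.5) * h) / (T0 + (i + 0.5) * h) ** 2
                   for i in range(n)) * h
    return T * (F0 / T0 - integral)

def free_energy_linear(F0, T0, a, b, T):
    """Closed form of eq. hg for E(theta) = a + b*theta."""
    return T * (F0 / T0 + a * (1.0 / T - 1.0 / T0) - b * math.log(T / T0))
```

Here `a` plays the role of the extrapolated zero-temperature internal energy and `b` that of the (constant) heat capacity, so the linear $E(T)$ curves of Figure \[ee\] map directly onto the closed form.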
Figure \[ff\] establishes the relative stability of each phase as a function of temperature. The free energies are extended in each case to the crossing temperature with the amorphous phase. This yields the melting points for each of the Ni-Al structures considered: 900 K for the L1$_0$, 1200 K for the random bcc solid solution, and 1780 K for the B2 phase. The latter is the most stable of all the solid phases over the entire temperature range. This has important implications for the reaction kinetics of Ni-Al bilayers that will be studied below.

Free energies of Ni$_x$Al$_y$ alloys
------------------------------------

During the early stages of reactive mixing, the system will probe a wide spectrum of compositions corresponding to different Ni:Al atomic ratios (Ni$_x$Al$_y$). Locally, the $x$:$y$ ratio can be far from unity, depending on fluctuations set by the temperature and the relative diffusivity of each species. This means that the kinetic evolution of the reaction front is set by fluctuations, [*i.e.*]{} variations in local composition, temperature, etc. Thus, it is also of interest to calculate the free energy as a function of composition $c$ (which represents the Ni concentration) for selected phases. Since mixing processes involve high entropy and therefore low order, here we focus on the amorphous and solid solution bcc phases as potential precursors of thermodynamically stable (ordered) Ni$_x$Al$_y$ alloys. Here, this amorphous phase can be considered as the liquid structure of the bcc solid solution, although there are lower energy solid structures (e.g. B2) at these temperatures and, thus, it cannot be considered the absolute liquid configuration. Figure \[fig:comp\] shows the free energy surface $F(c,T)$ for the two phases of interest. Derivatives of $F(c,T)$ with respect to each of the axes give the entropy (temperature axis) and the chemical potential (concentration axis) differences. The information contained in Fig.
\[fig:comp\] can be used to define the phase diagram between these two structures, although neither of these phases is an equilibrium phase, and thus they are typically not considered in Ni-Al phase diagrams. As the free energy surface shows, the free energy difference between the solid solution bcc and amorphous phases is small, with the crystalline phase generally being more stable than the amorphous one, except roughly for $0.1<c<0.2$, and, interestingly, at $c\approx0.9$ and $T>600$ K. Additionally, the free energies are minimal for Ni concentrations around $c=0.8$. We believe this to be an indirect indicator of the stability of the Ni$_3$Al system.

Reaction kinetics {#sebsec:kin}
-----------------

We now study the reactivity of a Ni-Al bilayer at 800 K. The simulated system consists of two crystallites of Ni and Al containing, respectively, $N=313,600$ and 329,251 atoms. The Ni and Al subsystems are fcc lattices oriented in the same direction and separated by an interface with a surface normal oriented along the \[100\] direction. Periodic boundary conditions are used along each coordinate. Crone et al. have shown that the ignition temperature (at which a self-sustaining reaction is achieved) depends on the misfit interface strain [@crone2011]. To avoid such dependency, we use the columnar arrangement employed by Baras and Politano [@baras2011], which ensures a strain-free interface after relaxation. The initial dimensions of the Ni and Al layers are, respectively, 19.6$\times$8.8$\times$19.6 nm, and 17.5$\times$17.5$\times$17.5 nm. This provides for an empty 1-nm thick buffer over which the Al subsystem can expand, relaxing all interfacial stresses. The bicrystal is initially equilibrated at the target temperature of 800 K by means of an inverse simulated annealing. This is done by heating the system from 0 K at a rate of 1 K ps$^{-1}$ so that it takes 0.8 ns to reach the desired temperature of 800 K.
This annealing procedure results in some diffusive mixing prior to reaching 800 K. Therefore, on the Al side of the original interface at $t=0$, the initial state corresponds to an Al-rich phase with some interpenetrated Ni. This Al-rich phase retains its original fcc order, but we have confirmed that it does not correspond to an ordered L1$_0$ structure. On the Ni side of the original interface, the penetration of Al is quite limited, resulting in essentially a very dilute fcc Ni-Al phase. This picture is consistent with relative interdiffusion coefficients that are 3.3 times larger for Ni in Al than vice versa [@shankar1978]. Experiments are typically conducted at constant pressure and temperature, which suggests running the reaction simulations in the $NpT$ (isobaric-isothermal) ensemble. However, enforcing a constant temperature during typical MD time scales (several nanoseconds) is not representative of a true isothermal process, as the thermostat coupling must be sufficiently strong to extract the released heat over MD time scales. This typically results in unphysically short relaxation times that do not reflect the true time scale of the process. Thus, the reaction dynamics may be more faithfully simulated using the isobaric-isoenthalpic ensemble $NpH$, where the temperature is free to fluctuate in response to internal transformations under constant enthalpy. To better understand the reaction kinetics, we carry out MD simulations from identical initial configurations up to 30 ns under both ensembles. The results for both cases are reported below.

### $NpT$ ensemble {#npt}

A sequence of snapshots from the 30-ns reaction simulation is given in Figure \[fig:sequence\]. Following equilibration, a Nosé-Hoover thermo-barostat with a damping constant of 1.0 ps$^{-1}$ is used to maintain a constant temperature of 800 K. The time evolution of the temperature for the entire simulation is shown in Figure \[fig:temp\].
The initial stages of the reaction process are a continuation of the main features of the sample preparation, namely, swift penetration of Ni atoms into the fcc Al crystal and limited Al diffusion into Ni. Eventually this leads to the formation of an unstructured (amorphous) phase in the Al-rich region, as confirmed by pair correlation function $g(r)$ analysis. $g(r)$ is computed in a 20-Å thick slab that encompasses the original interface, and its evolution is shown in Figure \[fig:gofr\][^3]. The results for the crystalline and amorphous phases are consistent with those reported in the literature [@zhu2007; @izvekov2012]. According to the Ni-Al phase diagram [@massalski], the dissolution of Ni in the Al half-crystal at 800 K draws a trajectory in temperature-concentration space that traverses different Al-rich phases of the alloy system until reaching the equiatomic stoichiometry. These include noncubic structures not captured by the present potential, such as NiAl$_3$ and Ni$_2$Al$_3$. At the same time, there is experimental evidence that melting occurs during the reaction of Ni and Al in environments with atomic ratios close to 1:3 [@ma1990; @zhua2002]. Crystalline NiAl phases subsequently emerge from the melt, giving rise to a stable alloy. As will be discussed in Section \[sec:disc\], simulations in the $NpT$ ensemble prevent melting by thermostatting away the exothermic heat release due to the Ni-Al reaction ($\approx$0.32 eV per atom)[^4]. The formation and growth of the unstructured phase continues up to approximately 20 ns in our simulation. From there on, thermodynamics drives the system toward structures consistent with Fig. \[ff\], [*i.e.*]{} B2 phases. Recrystallization of the amorphous phase into a B2 structure initiates at the interface, resulting in a metastable nanocrystalline structure characterized by high-angle boundaries and uncorrelated grain orientations.
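The pair-correlation analysis used to identify the amorphous phase can be sketched as follows. This is a minimal $g(r)$ estimator for a cubic periodic cell; the box size, atom count, and bin width are illustrative (the actual analysis uses a 20-Å slab around the interface, a geometry not reproduced here):

```python
import numpy as np

def pair_correlation(pos, box, r_max, n_bins=100):
    """Radial distribution function g(r) for positions in a cubic periodic box."""
    n = len(pos)
    # Minimum-image separations for all unique pairs
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d**2).sum(axis=-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    rho = n / box**3                                    # number density
    shell = 4.0 * np.pi / 3.0 * (edges[1:]**3 - edges[:-1]**3)
    # Normalize by the ideal-gas expectation: 0.5 * n * rho * V_shell pairs per bin
    g = hist / (0.5 * n * rho * shell)
    return 0.5 * (edges[1:] + edges[:-1]), g

# Sanity check: an ideal (random) gas has g(r) ~ 1 at all r,
# whereas crystals show sharp peaks and liquids broad ones.
rng = np.random.default_rng(0)
box = 20.0
pos = rng.uniform(0.0, box, size=(1000, 3))
r, g = pair_correlation(pos, box, r_max=5.0)
print(g[20:].mean())   # close to 1
```

A broadened first peak and the loss of long-range peaks in such a histogram is the signature used in the text to distinguish the amorphous mixture from the fcc crystals.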
Evidence for this nanocrystalline B2 structure is provided in Figure \[b2\_quenched\], which shows a still frame of the final system 30 ns after equilibration, taken 5 Å from the original interface location on the Al-rich side. [0.27]{} ![Time sequence of the reaction process in a Ni-Al bilayer at 800 K in the $NpT$ ensemble. Red circles represent Al atoms, blue circles symbolize Ni atoms. (a) Initial system during equilibration. (b) Beginning of the mixing process. Ni penetrates into Al much more than vice versa. (c) Mixing process is nearly complete, amorphization starts. (d) Mixing complete, all amorphous. (e) Crystallization in a B2 (bcc) phase starts at the interface, where the local composition is close to equiatomic. (f) Crystallization proceeds.[]{data-label="fig:sequence"}](frame_00.pdf "fig:"){width="\textwidth"}   [0.3]{} ![](frame_24.pdf "fig:"){width="\textwidth"} [0.3]{} ![](frame_91.pdf "fig:"){width="\textwidth"} [0.3]{} ![](frame_174.pdf "fig:"){width="\textwidth"} [0.3]{} ![](frame_257.pdf "fig:"){width="\textwidth"} [0.3]{} ![](frame_334.pdf "fig:"){width="\textwidth"} ![Crystal structure at the end of a 30-ns simulation of Ni-Al at 800 K. The image corresponds to a cut parallel to the original Ni-Al interface taken at a distance of 5 Å into the Al layer. The observed microstructure consists of several B2 grains approximately 5 nm in size. The structure was quenched from 800 K to remove thermal noise and make the visualization clearer.[]{data-label="b2_quenched"}](34ns_1000K_minimized.pdf){width="12cm"} The image shows atoms colored according to their Ackland-Jones parameter [@aj], revealing the formation of a nanograined bcc structure close to equiatomic composition[^5]. The analysis is therefore conclusive in terms of the final phase formed, and is consistent with the free energy calculations presented in the previous sections. ### $NpH$ ensemble {#nph} A sequence of snapshots from the 30-ns reaction simulation is given in Figure \[fig:sequence2\]. The corresponding time evolution of the temperature is given in Figure \[fig:temp\]. As in the $NpT$ case, Al diffusion into Ni is practically negligible, whereas Ni atoms quickly penetrate the Al half-crystal. This spurs the formation of a disordered region on the Al side that eventually encompasses the entire half-crystal (Fig. \[framenph4\], after 16 ns of simulation). This is confirmed by Figure \[proff\], which shows a constant nonzero background Ni concentration in the Al region after 13 ns. This also causes an upturn in the temperature of the box, cf. Fig. \[fig:temp\], which reaches the melting point of the NiAl B2 phase at 24 ns. This is followed by propagation of the interface between the pure fcc Ni crystal and the Ni-Al disordered region, resulting in a gradual increase of the relative Al concentration in the original Ni region. By 26 ns the entire computational box has transformed into a liquid, with the temperature remaining constant at approximately 1850 K.
[0.27]{} ![Time sequence of the reaction process in a Ni-Al bilayer at 800 K in the $NpH$ ensemble. Red circles represent Al atoms, blue circles symbolize Ni atoms. (a) Initial system during equilibration (identical to Fig. \[frame1\]). (b) The Al half-crystal expands, relaxing all stresses. A disordered mixture starts to form. (c) Disorder almost complete. A reaction front propagates into the Ni-rich half-crystal. (d) Full disorder is achieved in the Al half-crystal. (e) Full liquefaction is achieved. Reaction front progresses. (f) Full mixing achieved.[]{data-label="fig:sequence2"}](frame_NpH_00.pdf "fig:"){width="\textwidth"}   [0.3]{} ![](frame_NpH_10.pdf "fig:"){width="\textwidth"} [0.35]{} ![](frame_NpH_13.pdf "fig:"){width="\textwidth"} [0.3]{} ![](frame_NpH_16.pdf "fig:"){width="\textwidth"} [0.32]{} ![](frame_NpH_22.pdf "fig:"){width="\textwidth"} [0.28]{} ![](frame_NpH_30.pdf "fig:"){width="\textwidth"} The above description is substantiated by analysis of the time evolution of the $g(r)$ function, shown in Figure \[gofrnph\], which confirms a decreasing tendency for Ni and Al fcc pairs and an increasing one for disordered Ni-Al pairs. Discussion {#sec:disc} ========== Thermodynamics of Ni-Al compounds --------------------------------- The results in Table \[table\] match, where appropriate, those obtained by Purja and Mishin [@mishin2009]. Our calculations do not include zero-point motion at low temperatures and are thus strictly suited only for temperatures above the Debye temperature ($\approx$430 and 450 K for Al and Ni, respectively). However, our data are in reasonable agreement with those reported by Wang et al. [@wang2004] at low temperatures using first-principles calculations. Asta et al. [@asta1999] calculated several structural and physical properties of liquid Ni-Al mixtures using different interatomic potentials, and concluded that, in mixtures rich in Al ($c<0.5$), interatomic potentials of the embedded-atom type, such as the one employed here, underestimate the viscosity and diffusivity relative to available experimental data. However, we do not consider this example fully representative of our calculations, as it pertains to liquid phases with different concentrations. Beyond these studies, there are to our knowledge no studies of the thermodynamic properties of Ni-Al systems over such an extensive temperature range. Ni-Al reaction kinetics and thermodynamics ------------------------------------------ Ni-Al mixing processes have been studied in the literature using atomistic simulations with a combination of $NVE$ and $NpH$ ensembles [@henz2009; @weingarten2010; @ale2011; @ale2012; @izvekov2012]. In our case, it is important to discuss our simulations in the context of the exothermic heat release due to the Ni-Al reaction.
As mentioned in Section \[sebsec:kin\], the excess heat resulting from the formation of amorphous (liquid) NiAl relative to isolated Ni and Al at 800 K is approximately 0.32 eV per atom. In the absence of any mechanical work performed on the system, this would result in a temperature increase of: $$\Delta T\approx\Delta E_r/C_p$$ which, taking the values for the heat capacity from Table \[table\], corresponds to a temperature increase on the order of 1230 K. This would suggest a final temperature of $\approx$2030 K[^6] at the end of the simulations and, as discussed in Section \[sebsec:kin\], melting of all the phases involved. This picture is in agreement with other MD simulations using the same interatomic potential [@ale2012] in the $NpH$ ensemble. This is evidently not what happens during the $NpT$ simulations, which resemble an infinitely slow process in which the released heat has sufficient time to dissipate and the temperature is kept constant. In such a case, one would expect the system to display the equilibrium phase at each composition. The reaction process in the $NpT$ ensemble can be understood as following a rectilinear path along the 800-K isotherm on the $F$-$T$ plane shown in Fig. \[ff\] (this path is shown schematically in the figure). The reaction proceeds via the formation of a diffuse mixing zone, characterized by the penetration of Ni into the Al layer, which grows as Ni diffuses and reaches concentrations capable of forming stable Ni-Al compounds. There is atomistic evidence that the minimum temperature for the process to occur in this fashion is of the order of 700 K [@zhang2011]. Because Ni interpenetration in Al is much larger than vice versa, the bilayer reaction can be thought of as a process in which Ni arrives in the Al crystal and gradually increases its relative concentration from zero to 0.5. Thermodynamically, this is also illustrated in Fig.
\[fig:comp\] via the chemical potential difference $\Delta\mu=\partial F/\partial c$ (where $c$ is the Ni concentration), which is negative for all values of $T$ up to $c\approx0.8$. This trajectory means that any incremental addition of Ni results in a decrease of $F$ and is therefore thermodynamically favored. However, in going from low Ni concentrations to $c\approx0.5$ along the 800-K isotherm (or, in fact, any other), one crosses a region of the free energy surface where the amorphous phase is more stable than the bcc phase (at 800 K, between $c\gtrsim0.08$ and $c\lesssim0.40$). This is essentially what happens between 9 and 25 ns after the reaction initiation, and signals the local melting of the bcc solid solution at that composition and temperature. As the Ni concentration continues to increase, the system recrystallizes until an equiatomic compound is formed. This occurs first near the Ni-Al interfaces, which spur the nucleation of the lowest-free-energy B2 phase. Interestingly, a nanograined structure appears as a result of the impingement of several independently nucleated grains. The formation of the bcc and B2 phases proceeds via classical (heterogeneous) nucleation, and thus, although this case may not necessarily be considered *realistic*, it is thermodynamically consistent with the free energy calculations reported here. For their part, the $NpH$ simulations reveal a very different picture. In this case, the temperature does increase up to the melting point of the B2 phase, creating a liquid phase that encompasses the entire simulation box. Thermodynamically, the simulations may be regarded as following a constant $H\equiv E$ path in Fig. \[ee\]: at 800 K, $(E_{\mathrm{Ni}}+E_{\mathrm{Al}})/2\approx-3.72$ eV/atom. Following this isenthalpic line along the temperature axis, one can see that the liquid (amorphous) phase is encountered at $\approx$1800 K, in good agreement with the melting point of the B2 structure.
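The sign argument above can be checked numerically from any tabulated free-energy curve. In this sketch the parabolic $F(c)$ is only a stand-in for the computed 800-K isotherm (whose tabulated values are not reproduced in the text); the point is the finite-difference evaluation of $\Delta\mu=\partial F/\partial c$:

```python
import numpy as np

# Stand-in free-energy curve F(c); in practice this would be the
# 800-K isotherm of the computed F(c, T) surface.
c = np.linspace(0.0, 1.0, 101)
F = 0.4 * (c - 0.8)**2 - 3.6        # minimum near c = 0.8 (illustrative)

# Delta mu = dF/dc by centered finite differences
dmu = np.gradient(F, c)

# Incremental Ni addition lowers F wherever dF/dc < 0,
# i.e. everywhere left of the minimum near c = 0.8
print(np.all(dmu[c < 0.79] < 0))    # True
```

The same gradient applied to the actual $F(c)$ data would reproduce the statement that $\Delta\mu<0$ up to $c\approx0.8$.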
The path along temperature is not uniform: as Fig. \[fig:temp\] shows, it is gradual up to the melting point of Al and steepens thereafter until a fully liquid state is reached. The rate of temperature increase during the first stage is approximately 15 K ns$^{-1}$, while for the second it is 60 K ns$^{-1}$. This accelerated exothermic release correlates with a homogeneous mixture of Ni-Al in the original Al half-crystal. It is at this point that the interface begins to move into the original Ni half-crystal, resulting in rapid heat production. Full melting is achieved when the interface traverses the remaining crystalline zone. The $NpH$ simulations result in a net temperature increase of $\Delta T\approx1050$ K. Following the above reasoning, this corresponds to $C_p\Delta T\approx0.27$ eV per atom of sensible heat. Because the total released energy is 0.32 eV per atom, this implies that the remaining $0.32-0.27=0.05$ eV per atom is absorbed as latent heat. In experimental conditions, the exothermic heat released during this reaction would dissipate over time, cooling the liquid configuration below the melting point and resulting in a stable crystalline phase. To study whether this phase, as in the $NpT$ case, is B2, we cool the liquid configuration corresponding to the end point of the 30-ns $NpH$ MD simulation down to 600 K at a cooling rate of 50 K ns$^{-1}$. Figure \[cool\] shows the time evolution of the fraction of atoms with different crystalline structures during this cooling simulation. After approximately 11 ns, corresponding to a temperature of 1250 K, the system transforms into a B2 structure. The figure also shows a discontinuous change in the system's potential energy, which suggests a first-order phase transformation. For the B2 phase, this crystallization temperature of 1250 K should be compared with the melting point of 1780 K to give an idea of the degree of hysteresis present in the simulations.
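The heat-balance bookkeeping above can be reproduced with a per-atom heat capacity close to the Dulong-Petit value $3k_{\mathrm B}$; the actual $C_p$ comes from Table \[table\], which is not reproduced here, so $3k_{\mathrm B}$ is an assumption:

```python
k_B = 8.617e-5          # Boltzmann constant, eV/K
C_p = 3.0 * k_B         # Dulong-Petit estimate, eV/(atom K); stand-in for the Table value
dE_r = 0.32             # reaction heat, eV/atom (from the free-energy calculations)

# Adiabatic temperature rise if all reaction heat goes into sensible heat
dT_adiabatic = dE_r / C_p
print(round(dT_adiabatic))     # ~1240 K, cf. the ~1230 K quoted in the text

# NpH run: observed rise of ~1050 K; the remainder is absorbed as latent heat
dT_obs = 1050.0
latent = dE_r - C_p * dT_obs
print(round(latent, 2))        # ~0.05 eV/atom
```

That the Dulong-Petit value reproduces both quoted numbers suggests the tabulated $C_p$ is close to $3k_{\mathrm B}$ per atom at these temperatures.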
![Time evolution of the fraction of atoms with a particular atomic structure during the cooling of the liquid $NpH$ configuration (Fig. \[framenph6\]). The concomitant evolution of the system’s potential energy is also shown. The phase transformation occurs at 11 ns of simulation time, which corresponds to a temperature of 1250 K. The figure includes two snapshots of the configurations at times 1 and 20 ns (Al: red atoms; Ni: blue atoms), where the temperatures are, respectively, 1800 and 800 K.[]{data-label="cool"}](cooling.pdf){width="\textwidth"} The fraction of ‘unstructured’ atoms before this transformation represents the liquid, while after it, it corresponds to grain boundaries and local disordered zones interspersed in the B2 structure. It must be noted that, to first order, the time scale for heat dissipation can be obtained by assuming that the system dissipates heat at a rate given by Newton’s law of cooling: $$T(t) = T_{\mathrm b} + \left(T_0 - T_{\mathrm b}\right)\exp\left(-\frac{t}{\tau}\right)$$ where $T_{\mathrm b}$ is the temperature of the heat bath, $T_0$ is the initial temperature, and: $$\tau=\frac{mC_p}{\kappa L}$$ is the time constant associated with this process, where $\kappa$ is the thermal conductivity and $L$ is a length scale representing the distance from the heat source to the bath. Taking values from Table \[table\] for the heat capacity and from the literature for $\kappa$ [@kappa], we obtain $\tau=87$ ps, well within the times simulated with MD here. This implies that, after a few hundred ps, the system would start to dissipate heat to a heat bath if one were available. However, on the one hand, this value of $\tau$ is not sufficiently low to be used as the damping constant in the $NpT$ simulations (cf. Ref. [@lammps]). On the other, it means that heat would start to flow out of the system within the time scale of the $NpH$ simulations, a process not captured here. In any case, we provide this discussion to give the reader an idea of the degree of realism of the simulations presented here.
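The dissipation estimate can be written out as a short sketch. Only the quoted $\tau=87$ ps is taken from the text (the mass, conductivity, and length scale entering it are not reproduced, so the helper below checks the functional form rather than rederiving that number); the initial and bath temperatures are those of the $NpH$ analysis:

```python
import numpy as np

def cooling_curve(T_init, T_bath, t, tau):
    """Newton's law of cooling: exponential relaxation toward the bath temperature."""
    return T_bath + (T_init - T_bath) * np.exp(-t / tau)

def time_constant(m, C_p, kappa, L):
    """tau = m C_p / (kappa L), with C_p per unit mass (inputs are placeholders here)."""
    return m * C_p / (kappa * L)

tau = 87.0                          # ps, value quoted in the text
t = np.linspace(0.0, 500.0, 6)      # ps
T = cooling_curve(T_init=1850.0, T_bath=800.0, t=t, tau=tau)
# After ~5 tau the system has essentially relaxed to the bath temperature
print(T[0], T[-1])
```

With $\tau\approx87$ ps, relaxation to the bath is essentially complete within half a nanosecond, which is the basis for the statement that heat would start leaving the system well within the simulated time scales.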
Conclusions =========== We have obtained intensive thermodynamic properties of the Ni-Al system as a function of structure and composition. We have performed atomistic simulations of the Ni-Al system at high homologous temperatures and extracted several thermodynamic quantities for these conditions. This simulation methodology was then applied to a Ni-Al diffusion couple and its evolution observed. The simulations were done at 800 K and zero pressure under $NpT$ and $NpH$ conditions. In both cases, the Ni atoms diffuse quickly into the Al, and this alloying causes a structural transformation to an amorphous phase. This amorphous region corresponds to a highly disordered Ni-Al mixture in the $NpT$ ensemble, while it corresponds to a fully liquid state in the $NpH$ simulations. In the former case, amorphous regions near the interface (where the composition first reaches the 1:1 ratio) nucleate into grains of the B2 Ni-Al intermetallic phase. These grow and collide with one another, resulting in a nanocrystalline B2 structure. In the latter case, gradual cooling of the liquid phase also results in a B2 system. The absence of other intermetallic phases such as NiAl$_3$ or Ni$_3$Al may at first seem counterintuitive, because these compositions will at some point be present upon cooling of a liquid or recrystallization of an amorphous phase. However, the free energy calculations indicate that the B2 phase has a higher thermodynamic driving force for formation, which leads to more rapid kinetics. These other intermetallic phases are also not observed experimentally in systems with the small length scales and rapid kinetics considered here [@kim2008]. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Dr. A. Caro for critically reviewing the manuscript. This work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
The contributions of LS and GHC to this work were supported by the DOE Office of Science, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. JM acknowledges support from the DOE’s Early Career Research Program. References {#references .unnumbered} ========== [^1]: Currently at Theoretical Division T-1, Los Alamos National Laboratory, Los Alamos, NM 87545, USA. [^2]: For numerical reasons, it is best to choose $\alpha$ in eq. \[einstein\] such that: $$\alpha=\frac{3}{\beta\langle\Delta r^2\rangle}$$ where $\langle\Delta r^2\rangle$ is the mean square displacement of the target phase. [^3]: Note that, at 800 K, Al is approaching its melting limit and $g(r)$ is quite broad for the fcc structure. [^4]: This result is obtained from Fig. \[ee\] as $\Delta E_r=0.5(E_{\rm{Ni}}+E_{\rm{Al}})-E^{am}_{\rm{NiAl}}$, which, at 800 K, gives $\Delta E_r\approx0.5(-4.35-3.19)+4.09=0.32$ eV per atom. [^5]: According to the LAMMPS convention, the Ackland-Jones parameter takes values of 0 (blue): unknown, 1 (cyan): bcc, 2 (green): fcc, 3 (yellow): hcp, and 4 (red): icosahedral. [^6]: Of course, once the melting point of the most stable thermodynamic phase is reached (the B2 phase, at 1780 K), the released heat is absorbed as latent heat and the temperature does not change.
--- abstract: 'I model profiles of the \[Ne[ii]{}\] forbidden emission line at 12.81$\mu$m, emitted by photoevaporative winds from discs around young, solar-mass stars. The predicted line luminosities ($\sim10^{-6}$L$_{\odot}$) are consistent with recent data, and the line profiles vary significantly with disc inclination. Edge-on discs show broad (30–40kms$^{-1}$) double-peaked profiles, due to the rotation of the disc, while in face-on discs the structure of the wind results in a narrower line ($\simeq10$kms$^{-1}$) and a significant blue-shift (5–10kms$^{-1}$). These results suggest that observations of \[Ne[ii]{}\] line profiles can provide a direct test of models of protoplanetary disc photoevaporation.' date: 'Accepted 2008 September 1. Received 2008 September 1; in original form 2008 August 12' title: '\[Ne[ii]{}\] emission line profiles from photoevaporative disc winds' --- \[firstpage\] planetary systems: protoplanetary discs – stars: pre-main-sequence – hydrodynamics – line: profiles Introduction {#sec:intro} ============ The manner in which gas is removed from discs around young stars has critical consequences for theories of planet formation, as the formation of gas-giant planets must precede the dissipation of gas discs. Currently, models of protoplanetary disc evolution suggest that most of the gas is removed by a combination of viscous accretion through the disc and photoevaporation by irradiation from the central star \[e.g., @cc01 [@acp06b]; see also the recent reviews by @dull_ppiv and @rda08\]. Such models agree reasonably well with current data, but the ionizing luminosities of T Tauri stars (required to drive the photoevaporation) are very poorly constrained. @acp05 used archival data to estimate the chromospheric ionizing fluxes from a small sample of bright T Tauri stars, and found values in the range $\Phi \sim 10^{42}$–$10^{43}$ ionizing photons per second.
However, recent data suggests that this may over-estimate more typical values of $\Phi$ by 1–2 orders of magnitude [@herczeg07b]. Photoevaporative flows have been observed clearly in ultracompact H[ii]{} regions around massive stars [@holl94; @lugo04], and in the externally illuminated “proplyds” in the Orion nebula [e.g. @jhb98]. However, the evidence for or against the existence of central star-driven photoevaporative winds from discs around solar-mass stars is rather tenuous. @font04 modelled the profiles of optical forbidden emission lines and found reasonable agreement with both the line fluxes and profiles obtained in the spectroscopic survey of @heg95. However, some discrepancies were also present (notably between the predicted and observed fluxes from \[O[i]{}\]), and no further studies of such emission lines exist. For several reasons, the \[Ne[ii]{}\] fine-structure line at 12.81$\mu$m offers perhaps the best opportunity to confirm or deny the existence of a slow ($\sim10$kms$^{-1}$) ionized wind from a T Tauri disc. The high ionization potential of neon (21.56eV) means that Ne$^+$ only exists close to T Tauri stars in photoionized gas, so forbidden emission lines from neon ions are unlikely to arise elsewhere in the T Tauri system. The 12.81$\mu$m line also falls in an atmospheric window (unlike the \[Ne[iii]{}\] line at 15.55$\mu$m), and can therefore be observed from the ground at echelle resolution. Moreover, the continuum emission from the star-disc system is orders of magnitude weaker in the mid-infrared than in the optical, so the line-to-continuum ratio for \[Ne[ii]{}\] can often be higher than for optical forbidden lines such as \[N[ii]{}\] or \[S[ii]{}\]. In addition, as will be shown, the critical density of the \[Ne[ii]{}\] line is such that most of the emission arises in the “launching region” of the photoevaporative wind, making it an excellent tracer of disc photoevaporation. 
Recently the [*Spitzer Space Telescope*]{} has detected the \[Ne[ii]{}\] 12.81$\mu$m line towards more than 20 young, solar-mass stars [@pasc07; @lahuis07; @esp07], and in most cases it is thought to arise in the star-disc system. @herczeg07a observed the \[Ne[ii]{}\] line from the nearby source TW Hya at high resolution ($\lambda/\Delta\lambda \approx 30,000$), and found that the line (with width $21\pm4$kms$^{-1}$) was narrower than lines that originate in the accretion funnel, but significantly broader than lines emitted from the disc at larger radii: this strongly suggests that the line originates in a hot disc atmosphere. @gni07 and @gh08 showed that X-ray heating of the disc can reproduce the observed line fluxes, but a photoevaporative disc wind should give rise to a similar line flux [@pasc07; @gh08]. While both ionization mechanisms (X-ray and UV) can likely explain the observed line fluxes, the velocity structure of a photoevaporative wind should result in a line profile that is distinct from that produced by X-ray irradiation of a bound disc atmosphere. In this [*Letter*]{} I model the profiles of the \[Ne[ii]{}\] 12.81$\mu$m line emitted by photoevaporative winds, and find that high-resolution spectroscopy can provide an unambiguous means of detecting such winds in T Tauri systems. Models ====== Hydrodynamic models ------------------- In order to construct velocity profiles it is first necessary to compute the hydrodynamic structure of the wind. Here two different cases are considered: the standard photoevaporative wind [@holl94; @font04] and the “direct” wind, applicable to a disc with an inner hole that is optically thin to ionizing photons [@acp06a]. I use the [zeus2d]{} hydrodynamics code [@sn92] to model the winds, following the approaches of @font04 and @acp06a respectively. I assume azimuthal and midplane symmetry, and therefore model the disc using a polar \[$(r,\theta)$\] grid covering the range $\theta=[0,\pi/2]$[^1]. 
The grid cells are logarithmically spaced in $r$ and linearly spaced in $\theta$, with the numbers of radial ($N_r$) and polar ($N_{\theta}$) cells chosen so that the grid cells are approximately square throughout (i.e. $\Delta r = r \Delta \theta$; see e.g., @bate02). The rotation option in [zeus2d]{}, which introduces a centrifugal pseudo-force, is turned on, and accelerations due to gravity are evaluated using only a point mass at the origin. I adopt the van Leer (second-order) interpolation scheme, and the standard von Neumann & Richtmyer form for the artificial viscosity (with $q_{\mathrm {visc}}=2.0$). ### Standard case In the standard case the wind models follow the “photoevaporative disc wind” models of @font04. These models make use of the results of @holl94 [specifically the “weak-wind” case], who used detailed radiative transfer calculations to determine the structure of the ionized disc atmosphere. The atmosphere is isothermal at $10^4$K, with a sound speed $c_{\mathrm s}=10$kms$^{-1}$. The critical length-scale $R_{\mathrm g}$, known as the gravitational radius, is found by equating the sound speed of the ionized gas with the local (Keplerian) orbital speed, so $$R_{\mathrm g} = \frac{GM_*}{c_{\mathrm s}^2} \simeq 8.9 \, \left(\frac{M_*}{1\mathrm M_{\odot}}\right)\, \mathrm {AU} \, ,$$ where $M_*$ is the mass of the central star. The (number) density at the base of the ionized atmosphere, $n_0(R)$, determines the structure of the wind, and its value at $R_{\mathrm g}$ is $$\label{eq:base_den} n_{\mathrm g} = C \left( \frac{3\Phi}{4\pi\alpha_{\mathrm B}R_{\mathrm g}^3}\right)^{1/2} \simeq 2.8\times10^4 \left(\frac{\Phi}{10^{41}\mathrm s^{-1}}\right)^{1/2} \left(\frac{M_*}{1\mathrm M_{\odot}}\right)^{-3/2} \, \mathrm{cm}^{-3} \, .$$ Here $\alpha_B=2.6\times10^{-13}$cm$^3$s$^{-1}$ is the Case B recombination coefficient for atomic hydrogen at $10^4$K [@allen], and the constant $C$ was determined by the numerical calculations of @holl94.
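Both numerical coefficients above can be verified directly in cgs units. The constant $C\approx0.14$ is not quoted in the text, so its value below is an assumption (it is the fit constant obtained by @holl94):

```python
import numpy as np

G     = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M_sun = 1.989e33        # solar mass, g
AU    = 1.496e13        # astronomical unit, cm

# Gravitational radius for a 1 M_sun star and c_s = 10 km/s
c_s = 1.0e6             # cm/s
R_g = G * M_sun / c_s**2
print(R_g / AU)         # ~8.9 AU

# Base density at R_g for Phi = 1e41 s^-1; C ~ 0.14 assumed (from @holl94)
C, Phi, alpha_B = 0.14, 1.0e41, 2.6e-13
n_g = C * np.sqrt(3.0 * Phi / (4.0 * np.pi * alpha_B * R_g**3))
print(n_g)              # ~2.8e4 cm^-3
```

Both quoted values are recovered, confirming the scalings $R_{\mathrm g}\propto M_*$ and $n_{\mathrm g}\propto\Phi^{1/2}M_*^{-3/2}$.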
Inside $R_{\mathrm g}$ the base density scales as $R^{-3/2}$, while outside $R_{\mathrm g}$ it scales as $R^{-5/2}$. @font04 adopt the following fitting form for the base density, which varies smoothly between the two power-laws: $$n_0(R) = n_{\mathrm g} \left(\frac{2}{(R/R_{\mathrm g})^{15/2} + (R/R_{\mathrm g})^{25/2}}\right)^{1/5} \, .$$ The disc atmosphere is given an isothermal equation of state, and the outer and inner radial boundaries are set as outflow boundaries. The inner polar boundary (the $z$-axis) is a reflecting boundary, but the flow is not at all sensitive to the inner boundary conditions. The base of the flow (i.e. the disc midplane) is held constant at every time-step, using the density profile $n_0(R)$ from Equation \[eq:base\_den\]. \[The base of the flow is actually found several disc scale-heights above the midplane [@holl94], but additional calculations show that this approximation has a negligible effect on the line profiles.\] The flow is thus assumed to be recombination-dominated (with negligible advection across the ionization front): previous studies suggest that this approximation is valid [e.g. @acp06a]. The base cells are given a Keplerian velocity in the orbital direction, and zero velocity in the radial direction. The polar (vertical) velocity out of the base cells is not prescribed, but instead computed hydrodynamically: typically the launch velocity is $\simeq0.3$–0.4$c_{\mathrm s}$ [@font04]. The models are computed in dimensionless units: the unit of length is $R_{\mathrm g}$, the unit of time is the orbital period at $R_{\mathrm g}$, and the density is normalized so that $n_{\mathrm g}=1$. Initially the grid is filled with a uniform density of $10^{-8}n_{\mathrm g}$, and the model is evolved forward in time until a steady-state is reached. In order to compute line profiles, the resulting density and velocity structures are scaled to physical units by choosing the parameters $\Phi$ and $M_*$. 
I use a grid with $r_{\mathrm {in}} = 0.03$R$_{\mathrm g}$, $r_{\mathrm {out}} = 20$R$_{\mathrm g}$, $N_r = 246$ and $N_{\theta}=59$. As one would expect there is excellent agreement with the results of @font04, with the location of the sonic surface and the integrated mass-loss rate agreeing with their result to better than 1% accuracy. Choosing different locations for the boundaries ($r_{\mathrm {in}} = 0.01$R$_{\mathrm g}$; $r_{\mathrm {out}} = 50$R$_{\mathrm g}$) has a negligible effect on the structure of the flow and affects the computed line fluxes and profiles only at the percent level. A numerical convergence test (using twice the grid resolution in both dimensions) suggests that the flow solution is correct to 1–2% accuracy. A steady-state is reached after $t=5$–10 time units; the subsequent line profile modelling makes use of the density and velocity fields at $t=40$. The steady-state flow solution is shown in Fig.\[fig:PDW\_struc\]. ### Discs with holes For the case of a disc with an optically thin inner cavity, the model of @acp06a is used. In this case the calculation is not scale-free (because of the ionization/recombination balance calculation), so physical units are used. In addition, because the pressure scale-height of the cold disc must be resolved into several grid cells, somewhat higher resolution is required than in the standard case (where the underlying cold disc is treated as a boundary condition). I adopt model parameters of $M_* = 1$M$_{\odot}$ and $\Phi=10^{41}$s$^{-1}$, and grid parameters $r_{\mathrm {in}} = 5$AU, $r_{\mathrm {out}} = 200$AU, $N_r = 360$ and $N_{\theta}=153$. The outer radius is chosen to be approximately equal to that used in the standard case when scaled to physical units (for $M_* = 1$M$_{\odot}$), but the results are not sensitive to the exact location of the boundary. 
The initial disc follows a power-law surface density profile $\Sigma \propto R^{-1}$, truncated exponentially at some inner radius, and is locally isothermal in the vertical direction with aspect ratio $H/R=0.05$. The total disc mass is $0.001$M$_{\odot}$ (consistent with the low disc mass expected during photoevaporative clearing), but the resulting line profiles are not sensitive to this mass. The model is allowed to evolve to a “quasi-steady” state[^2] before line profiles are computed, and I consider an inner hole of radius 9AU ($\simeq$R$_{\mathrm g}$). The quasi-steady state is reached after 1–2 outer orbital periods; the line profiles are calculated from the density and velocity fields at $t=4000$yr. Line profiles {#sec:profiles} ------------- In order to compute line profiles from the hydrodynamic models, it is first necessary to construct three-dimensional density and velocity fields from the two-dimensional simulations. I assume reflective symmetry at the disc midplane in order to extend the coordinate range to $\theta=[0,\pi]$, and azimuthal symmetry around the polar axis. The resulting grid has $N_r \times 2N_{\theta} \times N_{\phi}$ cells: an azimuthal resolution of $N_{\phi}=120$ is adopted, but the exact value of $N_{\phi}$ has no significant effect on the results. The line luminosity $L$ at a given velocity $v$ is computed as $$\begin{aligned} L(v) = \frac{1}{\sqrt{2\pi}v_{\mathrm {th}}} \int \exp\left(-\frac{[v-v_{\mathrm {los}}(\mathbf r)]^2}{2v_{\mathrm {th}}^2}\right) \, Ab_{\mathrm {Ne}} \, X_{\mathrm {II}} \nonumber \\ \times \, n_e(\mathbf r) \, P_u \, A_{ul} \, h\nu_{ul} \, dV \, ,\end{aligned}$$ with the integral evaluated by direct summation over the entire extent of the grid. Here $v_{\mathrm {los}}(\mathbf r)$ is the line-of-sight component of the local gas velocity vector, computed for a specified inclination angle $i$. ($i=\pi/2$ corresponds to a disc viewed edge-on; $i=0$ face-on.) 
$n_e(\mathbf r)$ is the local number density of the gas (hydrogen is assumed to be fully ionized), and $h\nu_{ul}$ is the energy of the emitted photons. The Doppler broadening term depends on the thermal velocity of the emitting atoms $$v_{\mathrm {th}} = c_{\mathrm s}\sqrt{m_{\mathrm H}/m_{\mathrm {Ne}}} \, ,$$ where $m_{\mathrm H}$ and $m_{\mathrm {Ne}}$ are the masses of hydrogen and neon atoms respectively. I adopt the standard solar value for the abundance of neon, $Ab_{\mathrm {Ne}}=1.0\times10^{-4}$, and the Einstein coefficient of the transition is $A_{ul} = 8.39\times 10^{-3}$s$^{-1}$ [@mendoza83; @gni07]. $X_{\mathrm {II}}$ is the fraction of neon that exists as Ne$^+$, discussed below. Following @gni07, the excitation fraction of the upper state $P_u$ is computed as $$\label{eq:P_u} P_u = \frac{1}{2 C_{ul} \exp(-T_{ul}/T) +1} \, ,$$ where the excitation temperature $T_{ul} = 1122.8$K, and the gas temperature $T=10,000$K. $C_{ul}$ expresses the departure from a thermal level population, and is defined as $$C_{ul} = 1 + \frac{n_{\mathrm {cr}}}{n_e(\mathbf r)} \, ,$$ where the critical density $n_{\mathrm {cr}} = 5\times10^5$cm$^{-3}$. I assume that the wind is optically thin to the emitted line, but that the disc absorbs 100% of the line flux if the midplane intercepts the line-of-sight to the observer. This approach neglects line emission from partially ionized gas in the ionization front, but analytic estimates suggest this contributes to the total line luminosity only at the percent level. The factor $X_{\mathrm {II}}$ is determined by ionization/recombination balance. The first ionization potential of atomic neon is 21.56eV, so in the $10^4$K disc atmosphere collisional ionization of neon is negligible. Instead, neon is photo-ionized, either by UV photons with energies greater than 21.56eV or by high-energy X-rays (via the Auger mechanism). 
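The line-profile computation described above can be sketched compactly. In this illustrative Python fragment the grid, density and velocity fields are toy placeholders rather than the actual ZEUS-2D output, while the atomic constants ($A_{ul}$, $T_{ul}$, $n_{\mathrm {cr}}$, $Ab_{\mathrm {Ne}}$, $X_{\mathrm {II}}$) take the values quoted in the text:

```python
import numpy as np

# Sketch of the [Ne II] 12.81um line-profile sum (direct summation over
# cells, Gaussian thermal broadening); velocities in km/s, densities in cm^-3.
A_ul = 8.39e-3          # s^-1, Einstein A of the transition
T_ul = 1122.8           # K, excitation temperature
n_cr = 5e5              # cm^-3, critical density
Ab_Ne = 1.0e-4          # solar neon abundance
X_II = 1.0 / 3.0        # adopted Ne+ fraction
h_nu = 6.626e-27 * 2.998e10 / 12.81e-4   # erg per 12.81 um photon
c_s = 10.0              # km/s
v_th = c_s * np.sqrt(1.0 / 20.0)         # thermal width: m_H/m_Ne ~ 1/20

def P_u(n_e, T=1.0e4):
    """Upper-level excitation fraction, with C_ul = 1 + n_cr/n_e."""
    C_ul = 1.0 + n_cr / n_e
    return 1.0 / (2.0 * C_ul * np.exp(-T_ul / T) + 1.0)

def line_profile(v_grid, n_e, v_los, dV):
    """Gaussian-broadened emissivity summed over all cells, per L(v) above."""
    emiss = Ab_Ne * X_II * n_e * P_u(n_e) * A_ul * h_nu * dV
    L = np.zeros_like(v_grid)
    for j, v in enumerate(v_grid):
        g = np.exp(-(v - v_los)**2 / (2.0 * v_th**2))
        L[j] = np.sum(g * emiss) / (np.sqrt(2.0 * np.pi) * v_th)
    return L

# Toy usage: two equal cells, one blue-shifted, one red-shifted, which
# yields a symmetric double-peaked profile (as for an edge-on disc).
v = np.linspace(-30.0, 30.0, 301)
L = line_profile(v, n_e=np.array([1e4, 1e4]),
                 v_los=np.array([-10.0, 10.0]), dV=np.array([1e40, 1e40]))
```

In the real calculation $n_e(\mathbf r)$, $v_{\mathrm {los}}(\mathbf r)$ and $dV$ come from the three-dimensional grid built from the simulations, with $v_{\mathrm {los}}$ projected for the chosen inclination $i$.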
The low line-of-sight column density in the wind region ($10^{17}$–$10^{18}$cm$^{-2}$) suggests that UV ionization is more efficient than ionization by X-rays, and I neglect X-ray ionization here. The second ionization potential of neon is 41.0eV, so the relative abundances of Ne, Ne$^+$ and Ne$^{2+}$ are determined by the spectral slope of the incident UV radiation field. Little is known about the details of the incident radiation field, but comparison to Galactic H[ii]{} regions suggests that $X_{\mathrm {II}}$ likely lies in the range 0.1–1.0 [e.g. @rubin91]. In the models almost all the ionized gas is below the critical density, so the line flux per unit volume is simply proportional to $X_{\mathrm {II}}$. Moreover, the bulk of the line flux arises from a fairly limited range in radius ($\simeq0.1$–2.0$R_{\mathrm g}$; see discussion in Section \[sec:results\] below), and $X_{\mathrm {II}}$ is unlikely to vary significantly over this range. Ionization balance therefore has a negligible effect on the shape of the line profile, but the integrated line flux scales linearly with $X_{\mathrm {II}}$. In the absence of any knowledge of the incident spectrum of radiation I adopt a constant value of $X_{\mathrm {II}}=1/3$, noting that this introduces an uncertainty of a factor of $\sim3$ in the derived line luminosities.

Results {#sec:results}
=======

\begin{table}
\label{tab:results}
\begin{tabular}{cccccc}
\hline
Inclination & Line flux & Peak & FWHM & Peak & FWHM\\
 & ($10^{-6}$L$_{\odot}$) & \multicolumn{2}{c}{(km\,s$^{-1}$, intrinsic)} & \multicolumn{2}{c}{(km\,s$^{-1}$, convolved)}\\
\hline
\multicolumn{6}{c}{Standard wind model}\\
$\pi/2$ & 3.0 & $\pm$10.6 & 33.5 & $\pm$7.6 & 38.1\\
$\pi/4$ & 1.5 & $-$12.1/$+$0.9 & 26.3 & $-$7.4 & 28.7\\
0 & 1.5 & $-$6.4 & 9.5 & $-$6.6 & 16.8\\
\hline
\multicolumn{6}{c}{Disc with 9\,AU inner hole}\\
$\pi/2$ & 5.4 & $\pm$3.1 & 17.1 & 0.0 & 19.3\\
$\pi/4$ & 2.7 & $-$4.8 & 13.3 & $-$6.3 & 18.1\\
0 & 2.7 & $-$7.9 & 10.5 & $-$8.9 & 17.0\\
\hline
\end{tabular}
\end{table}

The results of the line-profile modelling are shown in Table \[tab:results\]. 
Two disc models are presented (standard, 9AU hole) for model parameters $M_*=1$M$_{\odot}$ and $\Phi=10^{41}$s$^{-1}$, and for each model the line profile was computed for inclinations of $i=0$ (face-on), $i=\pi/4$ and $i=\pi/2$ (edge-on). In addition, “realistic” line profiles, at the typical resolution of an echelle spectrograph, were computed by convolving the line profiles with a Gaussian of half-width $\sigma=5$kms$^{-1}$: the effective resolution of these profiles is $\lambda/\Delta\lambda=$30,000. The line profiles for the standard case are shown in Fig.\[fig:line\_disc\]: for clarity, only the edge-on and face-on cases are shown. When viewed edge-on the line is broad (30–40kms$^{-1}$) and double-peaked, while face-on inclination results in a line that is narrower (10kms$^{-1}$) and slightly blue-shifted (6–7kms$^{-1}$). At resolution $\lambda/\Delta\lambda=30,000$ the blue-shift is comparable for all inclinations $i \lesssim \pi/4$ (see Table \[tab:results\]), as any double-peaked structure in the line is not well-resolved. These line profiles can be easily understood by considering the flow structure. At large radii the wind is essentially isothermal and spherically symmetric, and is analogous to a Parker wind solution. The streamlines are close to radial, and the density along streamlines drops as $n_e \propto r^{-2}$. As the density is sub-critical the line flux per unit volume scales $\propto n_e^2 \propto r^{-4}$, so the total line luminosity (integrated over volume) is dominated by the emission from small radii. Inspection of the mass-loss profile (see, e.g., Fig.7 of @font04) shows that $>80$% of the mass-loss comes from $<2R_{\mathrm g}$, and the density structure is such that the critical density is reached only at radii $\lesssim 0.1R_{\mathrm g}$ (see Fig.\[fig:PDW\_struc\]). 
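The claim that the emission is dominated by small radii follows from a one-line integral: with emissivity $\propto n_e^2 \propto r^{-4}$ along near-radial streamlines and volume element $\propto r^2\,dr$, the luminosity integrand falls as $r^{-2}$. A quick numerical check (a toy 1-D model, not the simulation output):

```python
import numpy as np

# Cumulative luminosity for emissivity ~ r^-4 over a shell volume ~ r^2 dr,
# between r = 0.1 R_g (where the critical density is reached) and 100 R_g.
r = np.logspace(-1, 2, 100000)           # radii in units of R_g
dL = r**-4 * r**2                        # integrand ~ r^-2
cum = np.cumsum(dL * np.gradient(r))
frac_inside_2Rg = cum[np.searchsorted(r, 2.0)] / cum[-1]
# ~95% of the emission arises inside 2 R_g, consistent with the text.
```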
This highlights why the \[Ne[ii]{}\] 12.81$\mu$m line is an ideal probe of disc photoevaporation: lines with higher critical densities will likely be dominated by emission from high density regions very close to the star, while lines with lower critical densities are insensitive to the launching region of the wind. In the launching region the rotation speed of the disc is greater than the wind or thermal speeds (as $v_{\mathrm K} = c_{\mathrm s}$ at $R=R_{\mathrm g}$), so when viewed edge-on the line profile is dominated by Keplerian rotation and shows a pronounced double-peaked structure. When viewed face-on the rotation is entirely in the plane of the sky, and instead the vertical component of the wind velocity results in a blue-shifted profile. In this case the profile is slightly asymmetric, with a blue “tail” due to high-velocity, low-density gas at large $r$. In all cases the line-widths are significantly larger than expected from thermal broadening alone. This occurs because the streamlines are almost radial at large radius. Consequently the dispersion in the line-of-sight component of the flow velocity is always larger than the thermal line-width, resulting in broader than thermal line-widths even for face-on discs. In the case of a disc with a cleared central hole, the line profile when viewed edge-on is markedly different from the standard case (see Fig.\[fig:line\_10au\]). The removal of gas with high rotation speeds results in a profile that is much less strongly double-peaked, and significantly narrower (15–20kms$^{-1}$). Still larger hole sizes result in even less pronounced double-peaked profiles, and for hole sizes $\gtrsim 5$R$_{\mathrm g}$ there is no evidence of a double-peaked profile. The profile in the face-on case is similar to that seen in the standard model: fairly narrow ($\simeq10$kms$^{-1}$), slightly asymmetric, and blue-shifted (7–9kms$^{-1}$), and this general shape is seen irrespective of the size of the central hole. 
The integrated line flux is somewhat larger than in the standard case (by a factor of $\simeq 2$), due to the higher density in the wind launching region. The predicted line luminosities are a few $\times10^{-6}$L$_{\odot}$. The exact line fluxes are likely only accurate to within a factor of $\sim3$, as discussed in Section \[sec:profiles\] above, but the predicted line strengths are comparable to those predicted by previous calculations, both from UV ionization [@gh08] and X-ray ionization [@gni07]. The emitted line fluxes scale approximately linearly with $\Phi$, and the shape of the line profile is largely independent of the value of $\Phi$. This can be understood from Equations \[eq:base\_den\] & \[eq:P\_u\]: the line flux scales $\propto n_e^2$ as long as $n_e \ll n_{\mathrm {cr}}$, and the density of the ionized gas scales $\propto \Phi^{1/2}$. We see from Fig.\[fig:PDW\_struc\] that almost all of the gas in the emitting region is below the critical density for \[Ne[ii]{}\], and additional calculations suggest that this holds as long as $\Phi\lesssim10^{43}$s$^{-1}$. Higher still ionizing fluxes will result in the critical density being exceeded, making the \[Ne[ii]{}\] line less sensitive to the launching region of the wind, but this is unlikely to be relevant to discs around T Tauri stars. Discussion ========== In general, these results compare favourably with the previous modelling of @font04. These authors modelled the emission from optical forbidden lines (\[S[ii]{}\] and \[N[ii]{}\]), and for face-on inclinations found that the line profiles were blue-shifted by $\simeq10$kms$^{-1}$. The blue-shift decreased with increasing inclination angle, and in the edge-on discs the lines were centred on zero velocity. They did not, however, find any double-peaked lines, even in the edge-on case. The reasons for this are not entirely clear, but it is likely attributable to the different critical densities of the lines considered. 
The critical densities of the \[S[ii]{}\] and \[N[ii]{}\] lines are lower than that of the \[Ne[ii]{}\] 12.81$\mu$m line, so these lines are dominated by emission from larger radii, where the rotation speed of the disc is smaller. Consequently, as in the case of a disc with a large inner hole, the lines do not appear double-peaked. So far I have neglected the effects of X-ray ionization in producing \[Ne[ii]{}\] emission, but when discussing observations this cannot be ignored. @gni07 modelled the effects of X-ray irradiation in detail [see also @gh08]. Unlike in the UV case, the X-ray luminosities of T Tauri stars are well-known, and the predicted \[Ne[ii]{}\] emission from X-ray ionized gas is comparable in strength to the emission from the photoevaporative wind ($\sim 10^{-6}$L$_{\odot}$). X-ray heating likely creates a hot, but static, disc atmosphere, and in real systems the emission from this atmosphere will be added to the line profiles modelled here. The line profiles from such a bound atmosphere have not been modelled in detail, but their general shape was discussed by @gni07. They find that emission is dominated by gas at small radii $(\lesssim15$AU), with most of the emission arising at radii where the rotational speed of the disc gas is $\sim10$kms$^{-1}$. The gas temperature in this region is 1000–5000K, resulting in an intrinsic (Doppler) line-width of $\sim 1$kms$^{-1}$. The line should therefore be double-peaked when viewed edge-on, with a profile similar to that predicted here. In the face-on case, however, the profile should be different: narrow ($\sim 1$kms$^{-1}$), and centred on zero velocity. The broader ($\sim 10$kms$^{-1}$) line-width predicted here is unlikely to be matched by a static disc atmosphere unless any turbulence in the disc is highly supersonic [which is generally not the case in protoplanetary discs, e.g., @bh98], or unless the emission originates very close to the star ($\lesssim 1$AU). 
In addition, a static atmosphere results in line emission centred on zero velocity, distinct from the blue-shifted profile of the photoevaporative wind. Critical to observing this blue-shift are the relative line fluxes from the static atmosphere and the wind. If these are comparable, as predicted for $L_X\sim10^{30}$erg s$^{-1}$ and $\Phi \sim 10^{41}$s$^{-1}$, then the diagnostic blue-shift will likely be 2–5kms$^{-1}$. Recently @herczeg07a observed the \[Ne[ii]{}\] 12.81$\mu$m line from the nearby face-on ($i\simeq7^{\circ}$) disc TW Hya, at a resolution of $\lambda/\Delta\lambda \approx 30,000$. The emission was unresolved at an effective spatial resolution of $\sim40$AU, suggesting that it arises within this distance of the central star. @herczeg07a measured a line-width (FWHM) of $21\pm4$kms$^{-1}$, centred on $-2\pm3$kms$^{-1}$. The blue-shift is not statistically significant, in part because of the limitations of the wavelength calibration, but the line width is rather larger than expected from a static disc atmosphere alone. In addition there was some evidence for asymmetry in the profile, with an enhanced flux on the blue side of the line, but not at a statistically significant level. The observed profile is consistent with emission from a photoevaporative wind (centred on $\simeq -5$–10kms$^{-1}$) combined with the emission from a heated disc atmosphere (centred on zero velocity). Further such observations are expected in the near future, and in this [*Letter*]{} I have shown that high-resolution observations of the \[Ne[ii]{}\] 12.81$\mu$m line can provide a critical test of models of disc photoevaporation. Detection of blue-shifted \[Ne[ii]{}\] emission would provide unambiguous evidence of a photoevaporative wind, and observations of these line profiles may represent the most readily observable diagnostic of central star-driven disc photoevaporation. 
Acknowledgements {#acknowledgements .unnumbered} ================ I am grateful for a number of useful discussions with Greg Herczeg, Brent Groves and Ilaria Pascucci. I also thank Cathie Clarke and Ewine van Dishoeck for comments on the manuscript, and the referee, Will Henney, for a thoughtful and insightful report. This work was supported by the Netherlands Organisation for Scientific Research (NWO) through VIDI grants 639.042.404 and 639.042.607. [99]{} Alexander, R., 2008, NewAR, 52, 60 Alexander, R.D., Clarke C.J., Pringle, J.E., 2005, MNRAS, 358, 283 Alexander, R.D., Clarke C.J., Pringle, J.E., 2006a, MNRAS, 369, 216 Alexander, R.D., Clarke C.J., Pringle, J.E., 2006b, MNRAS, 369, 229 Balbus S.A., Hawley J.F., 1998, RvMP, 70, 1 Bate M.R., Ogilvie G.I., Lubow S.H., Pringle J.E., 2002, MNRAS, 332, 575 Clarke, C.J., Gendrin, A., Sotomayor, M., 2001, MNRAS, 328, 485 Cox A.N. (editor), 2000, [*Allen’s Astrophysical Quantities*]{}, AIP Press, New York Dullemond C.P., Hollenbach D., Kamp I., d’Alessio P., 2007, in Reipurth, B., Jewitt, D., Keil, K., eds, [*Protostars & Planets V*]{}, U. 
Arizona Press, Tucson, 555 Espaillat, C., et al., 2007, ApJ, 664, L111 Font, A.S., McCarthy, I.G., Johnstone, D., Ballantyne, D.R., 2004, ApJ, 607, 890 Glassgold, A.E., Najita, J.R., Igea, J., 2007, ApJ, 656, 515 Gorti, U., Hollenbach, D., 2008, ApJ, in press (arXiv:0804.3381) Hartigan, P., Edwards, S., Ghandour, L., 1995, ApJ, 452, 736 Herczeg, G.J., Najita, J., Hillenbrand, L.A., Pascucci, I., 2007, ApJ, 670, 509 Herczeg, G.J., 2007, IAUS, 243, 147 Hollenbach, D., Johnstone, D., Lizano, S., Shu, F., 1994, ApJ, 428, 654 Johnstone, D., Hollenbach, D., Bally, J., 1998, ApJ, 499, 758 Lahuis, F., van Dishoeck, E.F., Blake, G.A., Evans, N.J., Kessler-Silacci, J.E., Pontoppidan, K.M., 2007, ApJ, 665, 49 Lugo J., Lizano S., Garay G., 2004, ApJ, 614, 807 Mendoza C., 1983, IAUS, 103, 143 Pascucci, I., et al., 2007, ApJ, 663, 383 Rubin R.H., Simpson J.P., Haas M.R., Erickson E.F., 1991, ApJ, 374, 564 Stone, J.M., Norman, M.L., 1992, ApJS, 80, 753 \[lastpage\] [^1]: Lower-case $r$ denotes spherical radius; upper-case $R$ cylindrical radius. [^2]: The flow in this case is only “quasi-steady” because the mass reservoir is finite. This state is reached in $\lesssim 10^3$yr, which is much less than the time-scale for disc clearing ($\sim10^5$yr). As a result, aside from initial transients the flow solution does not change significantly over the duration of the simulations.
--- bibliography: - 'refscheck.bib' title: 'TASI Lectures: Particle Physics from Perturbative and Non-perturbative Effects in D-braneworlds$^*$' --- =1 Introduction ============ Progress in our understanding of quantum field theory and particle physics over the last fifty years has given us a truly remarkable model in the Standard Model of particle physics. To high accuracy and precision, based on all experimental evidence to date, it appears to be the correct low energy effective field theory for all particle interactions below the weak scale. By now there is much excitement about the results which will come from the Large Hadron Collider (LHC) over the next few years. Most expect that the Higgs boson will be found, at which point all particles of the Standard Model will have been discovered. Compared to other possibilities offered by non-abelian gauge theory, the Standard Model is rather complicated: there are three families of quarks and leptons which transform under the gauge group $SU(3)_C\times SU(2)_L\times U(1)_Y$ and 26 parameters which make up the particle masses, mixing angles, and gauge coupling constants. There are large hierarchies in the parameters, as masses of the lightest and heaviest fermions in the theory differ by over ten orders of magnitude. Furthermore, though the up-flavor quarks (for example) transform in the same way with respect to the symmetries of the quantum theory, there is a hierarchy of about five orders of magnitude between the masses of the $u$-quark and $t$-quark. These rather striking experimental facts necessitate a theoretical explanation, and the Standard Model, though very successful, does not provide one. Any underlying theoretical framework should be able to explain the origin of the Standard Model gauge group, particle representations, and parameters. If it is to be the fundamental theory of nature, that framework should also give a sensible theory of quantum gravity. 
The best candidate for such a framework is superstring theory, which has been shown to naturally give rise to all of these ingredients. Identifying particular string vacua which could give rise to our world is often difficult, though. One difficulty is that superstring theory requires ten-dimensional spacetime, and therefore six of those dimensions must be compact and of very small size to evade experimental bounds on extra dimensions. For the sake of ${\mathcal{N}}=1$ supersymmetry, the standard type of manifold for compactification is a Calabi-Yau threefold, of which there are at least thirty thousand[^1], and almost certainly many more. In addition to other ingredients one can choose in defining a string theory, this choice of compactification manifold gives rise to a vast number of string theories whose four-dimensional effective theory might possibly give rise to the particle physics seen in our world. These vacua are conjectured to be local minima of a potential on the moduli space of string theory, a notion which is often referred to as the landscape [@Susskind:2003kw; @Denef:2004ze; @Schellekens:2008kg]. In these lectures, we focus on a corner of the string landscape which addresses many of the fundamental questions in particle physics in an illuminating geometric fashion. We focus on type II string theory (IIa in concrete examples), where spacetime-filling stacks of D-branes wrapping non-trivial cycles in the Calabi-Yau give rise to four-dimensional gauge theories. We emphasize that all of the ingredients necessary to construct the Standard Model (or MSSM) are present in these compactifications, which are able to realize: - Non-abelian gauge symmetry with $G=U(N)$, $SO(2N)$, or $Sp(2N)$ for each brane. - Chiral matter at brane intersections, with natural family replication. - Hierarchical masses and mixing angles, dependent on geometry in the Calabi-Yau. 
In type IIa, all of these effects are described by geometry, and one can often employ CFT techniques for their calculation. In the presence of orientifold planes, which are needed for globally consistent supersymmetric models, these theories can realize all of the representations present in a standard Georgi-Glashow SU(5) grand unified theory (GUT). Such theories are often known as D-braneworlds or type II orientifold compactifications[^2]. Despite their success in realizing gauge symmetry and chiral matter content, initial studies of D-braneworlds did not give rise to some important parameters in particle physics. In particular, in *all* weakly coupled D-braneworlds, both the Majorana neutrino mass term $M_R \, N_R \, N_R$ and the Georgi-Glashow top-quark Yukawa coupling $10\, 10\, 5_H$ are forbidden in string perturbation theory by global $U(1)$ symmetries. In many concrete realizations these same global symmetries also forbid other Yukawa couplings, giving rise to massless families of quarks or leptons, in direct contradiction with experiment. In [@Blumenhagen:2006xt; @Ibanez:2006da; @Florea:2006si], it was shown that euclidean D-brane instantons in D-braneworlds can generate non-perturbative corrections to superpotential couplings involving charged matter. Thus, all of the mentioned coupling issues can, in principle, be ameliorated by non-perturbative effects. The instanton corrections are exponentially suppressed, with the factor depending on the volume of the cycle wrapped by the instanton in the Calabi-Yau, and therefore can account for the large hierarchies seen in nature. Taking into account D-instantons, then, the global $U(1)$ symmetries which forbid couplings in perturbation theory are a virtue of these compactifications, rather than a drawback. For a comprehensive review of D-instantons in type II string theory, see [@Blumenhagen:2009qh] and references therein. 
As there are comprehensive reviews of both generic aspects of D-braneworlds [@Blumenhagen:2005mu; @Marchesano:2007de; @Blumenhagen:2006ci] and D-instantons [@Blumenhagen:2009qh], we intend these notes to be a short review of both topics. For the sake of brevity, we often omit in-depth derivations, choosing instead to give the reader an intuition for the geometry of these models and the corresponding particle physics. We hope that they are sufficient to prepare the reader to read either the existing literature or the in-depth reviews. These lectures are organized as follows. In section \[sec:background\] we give a rudimentary introduction to D-branes and explain how they give rise to gauge theories. Based on scales in the theory, the possibility of large extra dimensions is discussed. In section \[sec:CFT\] we discuss important aspects of chiral matter. We begin by briefly reviewing conformal field theory techniques and then use open string vertex operators to derive the supersymmetry condition as a function of angles between branes. We introduce the notion of the orientifold projection, and discuss the appearance of particular representations in this context. In section \[sec:global consistency\] we discuss conditions required for global consistency of type II orientifold compactifications. We derive the conditions on homology necessary for Ramond-Ramond tadpole cancellation and present the generalized Green-Schwarz mechanism, as well as the constraints they impose on chiral matter. In section \[sec:toroidal orbifold\] we discuss the basics of toroidal orbifolds and present a Pati-Salam model. In section \[sec:couplings\] we discuss the appearance of Yukawa couplings in string perturbation theory via CFT techniques and present examples of two important couplings forbidden in perturbative theory. In section \[sec:instantons\], we present the basics of D-instantons. 
We discuss details of gauged axionic shift symmetries which are important for superpotential corrections. We also discuss details of charged and uncharged zero modes in terms of both CFT and sheaf cohomology, and present a concrete example of the instanton calculus. Finally, in section \[sec:quivers\] we discuss the advantages and disadvantages of the “bottom-up" approach, which involves studying quiver gauge theories rather than fully defined orientifold compactifications. Generic Background: D-branes, String Parameters, and Scales \[sec:background\] ============================================================================== Early attempts to understand the physics of superstring theory involved the study of the Ramond-Neveu-Schwarz (RNS) action, a (1+1)-dimensional superconformal worldsheet action with spacetime as the target space, which must be ten-dimensional to cancel the worldsheet conformal anomaly. Rather than describing particle worldlines, the RNS action describes the physics of string “worldsheets" embedded in ten-dimensional spacetime. There are only two possibilities for the topology of the superstring: either an $S^1$ or the interval, describing closed and open strings, respectively. The closed strings therefore have no special points, but the open strings do, and it is important to ask what boundary conditions must be imposed at their endpoints. Very early in the history of string theory, it was realized that the open string equations of motion allow for two types of boundary conditions, known as Neumann and Dirichlet. As an open string propagates in ten-dimensional spacetime, it can have either type of boundary condition in each dimension of spacetime. 
An open string with $(p+1)$ Neumann dimensions and $(9-p)$ Dirichlet dimensions has boundary conditions given by $$\begin{aligned} \mu = 0,\dots, p \qquad {\partial}_\sigma X^\mu|_{\sigma=0,\pi} = 0 \notag \\ \mu = p+1, \dots, 9 \qquad {\partial}_\tau X^\mu|_{\sigma=0,\pi}=0,\end{aligned}$$ where $\sigma$ is the worldsheet coordinate along the string and $\tau$ is the worldline proper time in the particle limit. A look at the definition of the Dirichlet condition might worry the reader, as the $\tau$-derivative vanishing means that the ends of the string are “stuck" at particular points in spacetime, which would seem to break Poincaré invariance. For this very reason early work on open strings did not consider the possibility of Dirichlet boundary conditions. It was realized, however, that there are issues with ignoring the possibility of Dirichlet boundary conditions. In particular, it was soon realized that type IIa and type IIb superstring theory are T-dual to one another. The simplest statement of the duality is that type IIb compactified on an $S^1$ of radius $R$ gives the same physics as type IIa compactified on an $S^1$ of radius $\frac{\alpha '}{R}$, where $\alpha '=l_s^2/(2\pi)^2$ depends on the string length $l_s$. A basic fact about the duality is that it exchanges Neumann and Dirichlet boundary conditions of the open strings on the circular dimension, and thus if the duality is to hold, Dirichlet and Neumann boundary conditions must be on the same footing. The key insight which solved the issue about Poincaré invariance [@Polchinski:1995mt] was that open strings end on objects which themselves carry energy, providing an object to which the momentum at the ends of the open string can escape. These objects, given the name D-branes based on the importance of Dirichlet boundary conditions, source Ramond-Ramond charge from the closed string sector. 
A Dp-brane is an object on which an open string with $(p+1)$ Neumann dimensions and $(9-p)$ Dirichlet dimensions can end. The endpoints of the open string can only move in the Neumann dimensions, and therefore the strings are confined to the Dp-brane. A massless open string which starts and ends on the same D-brane is interpreted as a gauge boson, since string quantization shows that it transforms in the adjoint of some group $G$, usually $U(N)$, $SO(2N)$, or $Sp(2N)$. One interesting question is whether there is anything to be learned from the fact that a Yang-Mills theory with gauge group $G$ is confined to some submanifold of the total spacetime, whereas the closed string (gravitational) sector propagates in the full spacetime. Indeed, ignoring small constant factors for the time being[^3], the spacetime effective action $S$ contains a gauge term for the Dp-brane and a gravitational term as $$S\supset \frac{M_s^{p-3}}{g_s} \int_{{\mathbb{R}}^{3,1}\times\pi_a} F_{ab}F^{ab} + \frac{M_s^8}{g_s^2} \int_{{\mathbb{R}}^{3,1}\times{\mathcal{M}}} {\mathcal{R}}_{10d},$$ where the string mass $M_s=(\frac{1}{2\pi\alpha '})^{1/2}$ is the natural mass scale in the theory, $\pi_a$ is the $(p-3)$-cycle in the Calabi-Yau manifold ${\mathcal{M}}$ wrapped by the Dp-brane, and $g_s$ is the string coupling constant. Dimensionally reducing to four dimensions, we obtain $$S \supset \frac{M_s^{p-3}}{g_s} \,\, {\mathcal{V}}_{\pi_a} \int_{{\mathbb{R}}^{3,1}} F_{\mu\nu}F^{\mu\nu} +\frac{M_s^8}{g_s^2} \,\, {\mathcal{V}}_{\mathcal{M}}\int_{{\mathbb{R}}^{3,1}} {\mathcal{R}}_{4d}$$ from which we can read off $M_p^2 \sim \frac{M_s^8}{g_s^2} {\mathcal{V}}_{\mathcal{M}}$ and $\frac{1}{g^2_\text{YM}} \sim \frac{M_s^{p-3}}{g_s} {\mathcal{V}}_{\pi_a}$. 
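The read-off relations are easy to play with numerically. The following sketch drops all $O(1)$ factors, as in the text; the function names and the volumes are ours and purely illustrative:

```python
# Order-of-magnitude sketch of the read-off relations
#   M_p^2 ~ (M_s^8 / g_s^2) V_M   and   1/g_YM^2 ~ (M_s^(p-3)/g_s) V_cycle.
# All O(1) factors are dropped; numbers are invented for illustration.

def planck_mass_sq(M_s, g_s, V_M):
    """4d Planck mass squared from the compactification volume V_M."""
    return M_s**8 / g_s**2 * V_M

def gauge_coupling_sq(p, M_s, g_s, V_cycle):
    """g_YM^2 for a Dp-brane wrapping a cycle of volume V_cycle."""
    return g_s / (M_s**(p - 3) * V_cycle)

# Illustrative D6-brane (p = 6) in string units, M_s = 1:
M_s, g_s = 1.0, 0.1
V_M, V_cycle = 1.0e3, 10.0

print(planck_mass_sq(M_s, g_s, V_M))            # 100000.0
print(gauge_coupling_sq(6, M_s, g_s, V_cycle))  # 0.01
```

Raising $V_M$ at fixed $V_{\pi_a}$ raises $M_p$ without touching $g_{YM}$, which is the seed of the large-extra-dimension observation made next.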
If the geometry is factorizable such that ${\mathcal{V}}_{\mathcal{M}}= {\mathcal{V}}_{\pi_a} \, {\mathcal{V}}_t$, where ${\mathcal{V}}_t$ is the volume of the dimensions in ${\mathcal{M}}$ transverse to the Dp-brane, it immediately follows that $$M_p^2 \,\, g^2_{YM} \sim \frac{M_s^{11-p}}{g_s} \, {\mathcal{V}}_t.$$ This relation between known parameters on the left-hand side and string-theoretic parameters on the right-hand side allows for the following important observation: in braneworld scenarios, a low string scale $M_s$ allows for the possibility of large extra dimensions transverse to the brane, whose volume is ${\mathcal{V}}_t$. Large extra dimensions were explored in [@ArkaniHamed:1998rs; @Antoniadis:1998ig], which looked in particular at the case of two large extra dimensions. Being a bit more precise, the dynamics of massless open string modes in the worldvolume are given by the Dirac-Born-Infeld (DBI) action plus the Wess-Zumino (WZ) action. $$\begin{aligned} S_\text{eff} &= S_\text{DBI} + S_\text{WZ} \notag\end{aligned}$$ Together they form the relevant worldvolume Lagrangian to leading order in the string coupling and derivatives. Restricting the ten-dimensional spacetime indices $M,N$ to the subset $a,b$ along the worldvolume of the D-brane, the actions are given by[^4] $$\begin{aligned} \label{eqn:DBIWZ} S_\text{DBI} &= -\mu_p \int d^{p+1}x \,\, e^{-\phi} \,\, \sqrt{-\det(G_{ab}+B_{ab}+2\pi \alpha ' F_{ab})} \notag \\ S_\text{WZ} &= -\mu_p \int d^{p+1}x \,\,\, \text{tr} \, e^{2\pi \alpha ' {\mathcal{F}}} \wedge \sqrt{\frac{\hat A ({\mathcal{R}}_T)}{\hat A ({\mathcal{R}}_N)}} \wedge \bigoplus_q C_q\end{aligned}$$ where the Ramond-Ramond charge of a $p$-brane is $\mu_p = 2\pi l_s^{-p-1} = (2\pi)^{-p} (\alpha ')^{(-p-1)/2}$ and the pulled-back fields are defined as $G_{ab} \equiv {\partial}_a X^M {\partial}_b X^N g_{MN}$ and $B_{ab} = {\partial}_a X^M {\partial}_b X^N B_{MN}$. 
$\hat A(x)$ is the A-roof genus, given in terms of the Pontryagin classes $p_i$ of a real bundle $x$ as $\hat A(x) = 1 - \frac{1}{24} p_1 + \frac{1}{5760}(7p_1^2-4p_2) + \dots$, and ${\mathcal{R}}_T$ and ${\mathcal{R}}_N$ are the curvature forms of the tangent bundle and normal bundle of the brane worldvolume, respectively. The key physics to note is that the DBI action describes the coupling of the open string gauge field modes in $F$ to the massless NSNS sector, that is, the dilaton, metric and two-form. The WZ action, on the other hand, describes the coupling of the gauge field $F$ to the Ramond-Ramond forms $C_q$, under which the D-branes are charged. As we will see in section \[sec:global consistency\], the WZ action is a useful tool for deriving and understanding global consistency conditions of type II orientifold compactifications. Massless Spectrum and Conformal Field Theory \[sec:CFT\] ======================================================== Having introduced D-branes and the fact that gauge theories are confined to them in section \[sec:background\], in this section we present details of those gauge theories. Specifically, using conformal field theory techniques we will discuss the appearance of gauge bosons and chiral matter, as well as the effects of orientifolds, whose presence often allows for more interesting four-dimensional particle physics. A particularly natural arena in which to discuss the massless spectrum of D-braneworlds is that of conformal field theory, where quantization techniques allow us to directly identify interesting properties of massless superstrings. For brevity, we present only the details relevant for our presentation, and refer the interested reader to [@FMS1; @FMS2; @Knizhnik; @DFMS; @Polchinski; @PeskinTASI] and references therein for more details on the BRST quantization of superstrings. 
We consider open strings attached to the D-brane, with Dirichlet boundary conditions transverse to the D-brane and Neumann boundary conditions along the D-brane worldvolume. In the conformal field theory description, we have a concrete representation of massless states, as they are given by vertex operators. The worldsheet fermions appearing in vertex operators admit two possible boundary conditions, Neveu-Schwarz and Ramond, distinguished by the periodicity of $\psi$ upon going around the spatial direction of the string: antiperiodic in the Neveu-Schwarz sector and periodic in the Ramond sector. Non-abelian Gauge Symmetry -------------------------- In the NS sector, in the zero ghost picture, the vertex operator for the gauge boson is given by $$V_{A_\mu}=\xi_\mu \partial_zX^\mu e^{ik \cdot X},$$ where $\mu \in \{0,1,2,3\}$ and $\xi_\mu$ is the polarization vector in the target space, making this a spin 1 field. Here we employ radial quantization by mapping the Euclidean worldsheet coordinates $(\tau_E,\sigma)$ to complex coordinates $z,{\overline}z$ as $$\begin{aligned} z=e^{\tau_E+i\sigma}\nonumber\\ \bar{z}=e^{\tau_E-i\sigma}.\end{aligned}$$ Since these are open strings, the worldsheet coordinate is restricted to $0\le\sigma\le\pi$, which is equivalent to ${{\rm Im\,}}z\ge 0$. Instead of considering two sets of Virasoro generators defined on the upper-half plane ${{\rm Im\,}}z\ge 0$, we employ the doubling trick[^5] in order to consider one set of Virasoro generators on the *whole* complex plane with holomorphic coordinate $z$. 
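The claim that the strip maps into the upper half-plane can be checked directly; here is a quick sketch (the function name is ours):

```python
import cmath
import math

# Sketch: the radial map z = exp(tau_E + i sigma) sends the open-string
# strip 0 <= sigma <= pi into the upper half-plane Im z >= 0, with the
# two boundaries sigma = 0, pi landing on the real axis.
def to_plane(tau_E, sigma):
    return cmath.exp(tau_E + 1j * sigma)

# Interior and boundary points of the strip all satisfy Im z >= 0:
for tau_E in (-1.0, 0.0, 2.0):
    for sigma in (0.0, math.pi / 3, math.pi):
        assert to_plane(tau_E, sigma).imag >= -1e-12

print(to_plane(0.0, 0.0))                          # (1+0j)
print(abs(to_plane(0.0, math.pi) + 1) < 1e-12)     # True: sigma = pi -> z = -1
```

Since the two boundaries land on the positive and negative real axis, the doubling trick amounts to reflecting field configurations through the real line.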
The NS vertex operator in the $(-1)$ ghost picture is given by: $$\begin{aligned} \label{eqn:vert gauge boson} V_{A_\mu}(-1)=\xi_\mu e^{-\phi} \psi^\mu e^{ik \cdot X},\end{aligned}$$ where the conformal dimensions of the fields appearing in the vertex operator are $$\begin{aligned} \left[ e^{\alpha \phi} \right]=-\frac{\alpha (\alpha +2)}{2} \qquad \qquad \left[\psi \right]=\frac{1}{2} \qquad \qquad [e^{i\, k\cdot X}] = \frac{\alpha' k^2}{2}.\end{aligned}$$ Calculating the conformal dimension of \[eqn:vert gauge boson\], we obtain $\left[V_{A_\mu}\right]=\frac{1}{2} + \frac{1}{2} + \frac{\alpha' k^2}{2}$. The requirement that this be equal to one shows that this vertex operator corresponds to a massless field, in particular a massless spin-1 field confined to the worldvolume of the D-brane. It therefore has a natural interpretation as a gauge boson. Thus far, the vertex operators discussed correspond to the massless degrees of freedom in a pure $U(1)$ gauge theory, living on the D-branes. As is well known from the theory of open strings, there is a generalization which corresponds to adding degrees of freedom at the endpoints of the strings. These degrees of freedom are allowed because they break no symmetries (conformal, Poincaré, etc.) of the worldsheet theory and are known as Chan-Paton factors. Though this is a trivial generalization of the worldsheet theory, it has profound implications for spacetime physics. In particular, for $a\in\{1, \dots, N\}$, the Chan-Paton factor $\Lambda_a$ associated with one end of an open string corresponds to that end being confined to a stack of $N$ coincident D-branes. Adding a Chan-Paton factor for each end of the massless open string associated with \[eqn:vert gauge boson\], the generalized vertex operator is $$\label{eqn:vert gauge boson w chan-paton} V_{A_\mu}=\xi _\mu e^{-\phi}\psi ^\mu e^{ik \cdot X}(\Lambda _a\otimes \bar{\Lambda}_b)$$ with $a,b\in\{1,\cdots, N\}$, $\Lambda_a$ the fundamental, and ${\overline}\Lambda_b$ the antifundamental. 
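The weight bookkeeping for the $(-1)$-picture operator can be checked mechanically; the following sketch uses only the weight formulas quoted above (the function names are ours):

```python
from fractions import Fraction

# Sketch: conformal-weight bookkeeping for the (-1)-picture gauge boson
# vertex operator V_{-1} = xi e^{-phi} psi e^{ikX}.
def h_ghost(alpha):
    """Weight of e^{alpha phi}: -alpha(alpha + 2)/2."""
    a = Fraction(alpha)
    return -a * (a + 2) / 2

h_psi = Fraction(1, 2)            # weight of psi^mu

# Weight of V_{-1} excluding the e^{ikX} factor:
h_rest = h_ghost(-1) + h_psi
print(h_rest)                     # 1

# Demanding [V_{-1}] = h_rest + alpha' k^2 / 2 = 1 forces alpha' k^2 = 0:
alpha_prime_k2 = 2 * (1 - h_rest)
print(alpha_prime_k2)             # 0  -> the state is massless
```

Exact rational arithmetic makes the masslessness condition an identity rather than a floating-point coincidence.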
In the absence of orientifolds, the $N^2$ degrees of freedom in the Chan-Paton factors transform in the adjoint of $U(N)$. Thus, a stack of $N$ coincident D-branes has a $U(N)$ gauge theory living on its worldvolume. Orientifold Projection ---------------------- In addition to D-branes, type II string theory also allows for the presence of another type of object which carries Ramond-Ramond charge. These objects have a fixed negative tension and are known as orientifold planes, and it is this negative tension which allows for supersymmetric globally consistent models, as we will see in section \[sec:global consistency\]. To specify an orientifold compactification in type II, one must provide an orientifold projection, in addition to a Calabi-Yau manifold. The orientifold projection is a combination of three actions: - $\Omega: \sigma \mapsto -\sigma$, the worldsheet parity operator - $(-)^F$, an action on worldsheet fermions - a ${\mathbb{Z}}_2$ involution on the Calabi-Yau. The ${\mathbb{Z}}_2$ involution, which must be antiholomorphic in type IIa and holomorphic in type IIb, generically has a non-trivial fixed point locus, where the orientifold planes sit. The involution acts on non-trivial cycles in the Calabi-Yau, and therefore it also acts on any D-brane wrapping a non-trivial cycle. Associated to any D-brane in an orientifold compactification, therefore, is an image brane, as depicted in Figure \[fig:dod\]. The ${\mathbb{Z}}_2$ involution fixes the homology of the O-planes, which via global consistency has profound implications for particle physics. We postpone this detailed discussion until section \[sec:global consistency\]. In addition to implications for the homology of D-branes and O-planes, the orientifold projection imposes constraints on the physical states of the theory. 
If the cycle wrapped by a stack of $N$ D-branes is not invariant under the orientifold, then there are no additional conditions on the physical states, and the stack together with its image stack gives rise to $U(N)$ gauge symmetry. If the cycle is orientifold invariant, one must distinguish between two cases: - the cycle is pointwise fixed - the cycle is fixed, but only in homology. In both cases the orientifold projection imposes a constraint on the Chan-Paton factors, given by $(\Lambda _a\otimes \bar{\Lambda}_b)=\pm(\Lambda _a\otimes \bar{\Lambda}_b)^T$, where the sign is correlated with which of the two cases one is in. The extra constraint changes the gauge theory on the D-brane from $U(N)$ to $SO(2N)$ or $Sp(2N)$, depending on the sign, where the details depend on whether one is in type IIa or type IIb. We refer the reader to section 2.2.8 of [@Blumenhagen:2006ci] for more details. Chiral Matter and Representations\[sec:chiral matter\] ------------------------------------------------------ Now that we have introduced the very basics of orientifolds, we have the requisite background for describing generic matter content in weakly coupled type II string theory. The basic idea is that an open string can, in general, end on two different stacks of D-branes, $a$ and $b$. We say that such a string is in the $ab$-sector; the adjoint representations of the $a$-stack and $b$-stack correspond to the $aa$-sector and the $bb$-sector, respectively. Orientifolds allow for further generality, as now each D-brane has an image brane, allowing for the extra possibilities of strings in the $aa'$-sector, $ab'$-sector, and $bb'$-sector. A natural question to ask is whether there is a geometric way to see the appearance of matter from the Higgsing of the adjoint representation of some higher gauge group. 
Considering a stack of $(N+M)$ D-branes on a generic cycle, called $a$, the only brane sector is the $aa$-sector, which corresponds to the adjoint of $U(N+M)$ and has $(N+M)^2=N^2+M^2+2NM$ degrees of freedom. Performing a geometric deformation which unfolds the stack of $(N+M)$ D-branes into stacks of $N$ and $M$ D-branes, called $b$ and $c$, the degrees of freedom from the $aa$-sector now lie in the $bb$-, $cc$-, or $bc$-sector. These sectors respectively give rise to the adjoints of $U(N)$ and $U(M)$, as well as bifundamentals $(N,{\overline}M)$ and $({\overline}N, M)$. This is a concrete geometric interpretation in string theory of how the adjoint of $U(N+M)$ breaks to the adjoints of $U(N)$ and $U(M)$ plus bifundamentals. In this simple example, the matter fields in the bifundamental representations live at the intersection of two branes. Having motivated the notion that matter lives at the intersection of different stacks of D-branes, we focus on intersecting D6-branes in type IIa for a concrete discussion of the quantization of an open string between two stacks of intersecting branes. Calling the three-cycles wrapped by two stacks of D6-branes $\pi_a,\pi_b\in H_3({\mathcal{M}},{\mathbb{Z}})$, we write $$\begin{aligned} \pi_a=N_{aI}A^I+M_a^JB_J \notag \\ \pi_b=N_{bI}A^I+M_b^JB_J,\end{aligned}$$ where we have used the symplectic basis of three-cycles $A^I$ and $B_I$. They satisfy $$\begin{aligned} A^I\circ B_J=\delta ^I_J, \qquad \int _{A^I}\alpha_J = \delta ^I_J, \qquad \int _{B_I}\beta^J =- \delta _I^J,\end{aligned}$$ where $\alpha_I$ and $\beta^I$ are the dual basis of three-forms on the Calabi-Yau. 
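In this basis the pairing is fixed by $A^I\circ B_J=\delta^I_J$ and antisymmetry, so it can be evaluated directly on integer wrapping vectors. A small sketch in our own notation, with invented wrapping numbers:

```python
# Sketch: intersection pairing of three-cycles pi = N_I A^I + M^I B_I in a
# symplectic basis with A^I . B_J = delta^I_J, so that
#   pi_a . pi_b = sum_I (N_aI M_b^I - M_a^I N_bI).
# The wrapping numbers below are purely illustrative.

def intersection(Na, Ma, Nb, Mb):
    return sum(na * mb - ma * nb for na, ma, nb, mb in zip(Na, Ma, Nb, Mb))

pi_a = ([1, 0], [0, 0])    # (N_aI, M_a^I)
pi_b = ([0, 0], [3, 0])    # (N_bI, M_b^I)

print(intersection(*pi_a, *pi_b))   # 3  -> e.g. three chiral families
print(intersection(*pi_b, *pi_a))   # -3: the pairing is antisymmetric
```

The sign flip under exchanging the two cycles is what distinguishes chiral from anti-chiral matter in what follows.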
From this it is straightforward to calculate the topological intersection number $$\begin{aligned} \pi _a \circ \pi _b= N_{aI}M_b^I-M_a^IN_{bI}.\end{aligned}$$ This already has interesting implications for physics: if, for example, $\pi_a\circ \pi_b=3$, then these D-branes intersect at three points in the Calabi-Yau, giving three copies of whatever quantized matter lives at a single intersection. This is quite naturally interpreted as family replication. The quantization of open strings living at the intersection of two D-branes[^6] utilizes the standard open string quantization, together with the non-trivial boundary conditions $$\begin{aligned} \label{eqn:boundary conditions} \partial _\sigma X^{2I-1}=X^{2I}=0 \quad \text{at} \quad \sigma =0 \nonumber \\ \partial _\sigma X^{2I-1}+(\tan {\pi \theta _I})\partial _\sigma X^{2I}=0 \quad \text{at} \quad \sigma =\pi \nonumber\\ X^{2I}-(\tan {\pi \theta _I}) X^{2I-1}=0 \quad \text{at} \quad \sigma =\pi \nonumber\end{aligned}$$ where $I\in\{2,3,4\}$[^7]. Going to complexified notation for the worldsheet bosons and quantizing, with boundary conditions taken into account, the expansion in terms of oscillators is given by $$Z^I=X^{2I-1}+iX^{2I}=\sum _{n \in {{\mathbb{Z}}}}\frac{\alpha _{n-\theta_I}^I}{n-\theta_I}z^{-n+\theta_I}+ \sum _{n \in {{\mathbb{Z}}}}\frac{\tilde{\alpha} _{n+\theta_I}^I}{n+\theta_I}\bar{z}^{-n-\theta_I},$$ and we note that since $\tilde{\alpha}_{n+\theta_I}^{I\dagger}=\alpha_{n+\theta_I}^I$, $Z\mapsto {\overline}Z$ under $\theta _I \mapsto -\theta_I$. 
The only non-vanishing commutator is $[\alpha^I_{n\pm\theta}, \alpha^{I'}_{m\mp \theta}] = \pm \, m \,\delta_{n+m} \delta^{II'}.$ We can also complexify the worldsheet fermions, giving $$\begin{aligned} \text{Neveu-Schwarz Sector:}& \qquad \Psi ^I=\psi ^{2I-1}+i\psi ^{2I}=\sum _{r\in{\mathbb{Z}}+\frac{1}{2}}\psi_{r-\theta_I}z^{-r-\frac{1}{2}+\theta _I} \notag \\ \text{Ramond Sector:}& \qquad \Psi ^I=\psi ^{2I-1}+i\psi ^{2I}=\sum _{r\in{\mathbb{Z}}}\psi_{r-\theta_I}z^{-r-\frac{1}{2}+\theta _I},\end{aligned}$$ where the only non-vanishing anticommutator is $\{\psi^I_{m-\theta_I},\psi^{I'}_{n+\theta_I}\} = -\delta_{m+n} \, \delta^{I,I'}$. We refer the reader to [@Cvetic:2006iz] for more details on oscillator quantization for D-branes at intersecting angles, including zero point energies and mass formulae. Here we instead show the equivalent physics using the vertex operator formalism of CFT. In the vertex operator formalism, it is the presence of bosonic twist fields $\sigma_\theta$ [@DFMS] that ensures the boundary conditions for D6-branes intersecting at non-trivial angles $\theta_I$. As one might expect, we will have vertex operators for both fermions and bosons living at the intersections of two branes. From studying their conformal dimensions, we will extract mass formulae and show that, though the fermion is always massless, the mass of the boson depends on the angles of intersection. Recall that in quantizing the superstring, one often chooses to bosonize the worldsheet fermions $\psi^M$ with $M\in \{0,\cdots,9\}$, rather than working with the fermions directly. Usually each of the five complexified worldsheet fermions is bosonized as $$\begin{aligned} \text{Neveu-Schwarz Sector:}& \qquad \Psi^M \cong e^{iH_M} \notag \\ \text{Ramond Sector:}& \qquad \Psi^M \cong e^{i(1\pm\frac{1}{2})H_M},\end{aligned}$$ where the half integer in the last line is present in order to take care of the Ramond boundary conditions on the worldsheet. 
The $\pm$ ambiguity corresponds to each complexified worldsheet fermion having spin $\pm\frac{1}{2}$, and the $2^5$ sign choices reflect the fact that the Ramond sector ground state is a $32$-dimensional Dirac spinor in ten dimensions. The key difference between a standard open superstring and an open superstring at the intersection of two D-branes is the boundary conditions, which must be taken into account. As they only apply in the internal dimensions, they only change three of the complexified worldsheet fermions, which become $$\begin{aligned} \text{Neveu-Schwarz Sector:}& \qquad \Psi^I \cong e^{i\theta_I H_I} \notag \\ \text{Ramond Sector:}& \qquad \Psi^I \cong e^{i(\theta_I\pm\frac{1}{2})H_I},\end{aligned}$$ with $I\in\{2,3,4\}$, where the sign in the last line depends crucially on how the angles are defined. Here we choose the conventions $0<\theta_I<1$ for $I=2,3$ and $-1<\theta_4\leq 0$. The two complexified worldsheet fermions which are not subject to boundary conditions form a two-component Weyl spinor in four dimensions, which we write as $S^\alpha$ in the vertex operators. Having discussed the relevant ingredients, we would like to explicitly write two vertex operators, in the NS-sector and R-sector, for open strings stretched between spacetime filling D6-branes with non-trivial intersection in the Calabi-Yau. Omitting Chan-Paton factors, since they are not immediately relevant to the discussion, the vertex operators are $$\begin{aligned} \label{eqn:int vertex op} V_{-1}=e^{-\phi}\prod _{I=2}^3 \sigma_{\theta_I}e^{i\theta_I H_I}\sigma_{1+\theta_4}e^{i(1+\theta_4)H_4}e^{ik\cdot X}\nonumber\\ V_{-\frac{1}{2}}=u_\alpha e^{-\frac{\phi}{2}}S^\alpha \prod _{I=2}^3 \sigma_{\theta_I}e^{i(\theta_I-\frac{1}{2}) H_I}\sigma_{1+\theta_4}e^{i(\frac{1}{2}+\theta_4)H_4}e^{ik\cdot X} ,\end{aligned}$$ respectively. 
To calculate the mass of these states, one must know the conformal weights of the fields appearing in the vertex operators, which are given by $$\begin{aligned} [e^{\alpha\phi}]=-\frac{\alpha(\alpha+2)}{2}\qquad \qquad [\sigma_\theta]=\frac{\theta(1-\theta)}{2}\qquad \qquad [e^{iaH_I}] = \frac{a^2}{2} \notag \\ \notag \\ [e^{i k\cdot X}] = \frac{\alpha ' k^2}{2} \qquad \qquad [S^\alpha] = [e^{i(\pm\frac{1}{2}H_0 \pm\frac{1}{2} H_1)}] = 2\cdot\frac{(\pm\frac{1}{2})^2}{2} = \frac{1}{4}.\end{aligned}$$ Knowing these, it is straightforward to calculate the conformal weights of the vertex operators to be $$\begin{aligned} \left[V_{-1}\right]=\frac{1}{2}+\sum_{I=2}^3\left(\frac{\theta_I(1-\theta_I)}{2}+\frac{1}{2}\theta_I^2\right) +\frac{(\theta_4 +1)(-\theta_4)}{2}+\frac{(\theta_4 +1)^2}{2}+\frac{\alpha ' k^2}{2}\nonumber\\ =\frac{1}{2}+\frac{1}{2}\sum _{I=2}^4 \theta_I +\frac{1}{2}+\frac{\alpha ' k^2}{2} \\ \nonumber \\ \left[V_{-\frac{1}{2}}\right]=\frac{3}{8}+\frac{1}{4}+3\cdot\frac{1}{8}+\frac{\alpha ' k^2}{2}.\end{aligned}$$ The mass formulae [@DFMS; @FMS1; @FMS2; @Berkooz:1996km; @Bachas:1995ik] are derived from the requirement that the conformal dimension be one, yielding $$\begin{aligned} \alpha ' \, m^2 = \sum_{I=2}^4 \theta_I \qquad \qquad \text{and} \qquad \qquad \alpha ' m^2 = 0,\end{aligned}$$ for the spacetime bosons and fermions, respectively. Therefore, if the sum of the three angles is negative, zero, or positive, the boson is tachyonic, massless, or massive, respectively. When the sum vanishes, the NS-sector boson $V_{-1}$ becomes massless and forms a supermultiplet with the R-sector fermion $V_{-\frac{1}{2}}$. Thus, $\sum_I \theta_I=0$ is the local condition for intersecting branes to give rise to supersymmetric matter. The angle condition is a local picture of mutually supersymmetric branes. 
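The angle bookkeeping is easy to verify numerically. The following is our own sketch, using the quoted conformal weights and the conventions $0<\theta_{2,3}<1$, $-1<\theta_4\leq 0$; the angles are invented for illustration:

```python
# Sketch: alpha' m^2 of the NS-sector state at an intersection, obtained by
# demanding that the total conformal weight of V_{-1} equal one, so that
# alpha' k^2 = 2(1 - h) and m^2 = -k^2.

def h_twist(t):          # bosonic twist field sigma_t
    return t * (1 - t) / 2

def h_vertex_exp(t):     # e^{i t H}
    return t ** 2 / 2

def boson_mass2(t2, t3, t4, alpha_prime=1.0):
    h = 0.5                                   # e^{-phi} in the (-1) picture
    for t in (t2, t3):
        h += h_twist(t) + h_vertex_exp(t)     # sigma_t e^{i t H}
    h += h_twist(1 + t4) + h_vertex_exp(1 + t4)
    return 2 * (h - 1) / alpha_prime

# Angles summing to zero -> massless boson (supersymmetric intersection):
print(boson_mass2(0.3, 0.2, -0.5))   # approximately 0.0
# Angles summing to 0.2 -> alpha' m^2 is approximately 0.2 (massive):
print(boson_mass2(0.4, 0.3, -0.5))
```

Each factor $\sigma_t e^{itH}$ contributes $\frac{t(1-t)}{2}+\frac{t^2}{2}=\frac{t}{2}$, which is why the mass ends up depending only on the sum of the angles.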
The global picture is that for a D-brane to give rise to a supersymmetric gauge theory, it must wrap a supersymmetric cycle, given by a special Lagrangian in type IIa or a holomorphic divisor in type IIb. For the effective four-dimensional theory to be supersymmetric, the branes must preserve the *same* supersymmetry. It has been shown that the global conditions for two D6-branes on special Lagrangians to preserve the same supersymmetry reduce locally to the condition on angles. The vertex operators also generically include Chan-Paton factors, which might satisfy further constraints due to the orientifold projection. The generic case we would like to discuss is the structure of Chan-Paton factors at the intersection of two gauge D-branes, in which case the factors are a tensor product of some combination of fundamentals and antifundamentals. That is, for gauge branes with $U(N_a)$ and $U(N_b)$ gauge symmetry, the possibilities are $({\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a, {\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_b)$, $({\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a, {\overline{{\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}}}_b)$, $({\overline{{\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}}}_a, 
{\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_b)$ and $({\overline{{\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}}}_a, {\overline{{\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}}}_b)$, where the choice between fundamental and antifundamental depends on the direction of the string and on whether the string ends on a brane or on its orientifold image. Thus, the most common possibility is that chiral matter appearing at the intersection of two D-branes is in the bifundamental representation. The possible representations and multiplicities of chiral matter are listed in Table \[table:spectrum\]. There are two special cases which are interesting to discuss, one of which arises in the table. First, we revisit the possibility that a string begins and ends on the same $U(N_a)$ brane. In such a case, the Chan-Paton factors take the form ${\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a \otimes {\overline{{\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}}}_a = \text{Adj}_a \oplus 1$. That is, due to the decomposition into a direct sum, the string beginning and ending on the same brane can transform in the adjoint representation, and is therefore a gauge boson. Second, one might wonder about the properties of a string beginning on a brane and ending on its orientifold image. 
In such a case the Chan-Paton factors take the form ${\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a \otimes {\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a = {\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}\hskip-0.4pt \raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a \oplus {\raisebox{-3.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}\hskip-6.9pt \raisebox{3pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a$, and therefore chiral matter can be in the symmetric or antisymmetric representation of the gauge group. This is of great importance, for example, in $SU(5)$ GUT models. There the $10$ representation is needed, which can be realized as $10={\raisebox{-3.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}\hskip-6.9pt \raisebox{3pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_5$ in type II D-braneworlds. Thus, we see that D-braneworlds can give rise to all of the ingredients necessary for realizing the gauge symmetry and matter content of the Standard Model. 
In particular, a stack of multiple D-branes gives rise to non-abelian gauge symmetry, with the possibility of chiral matter living at the intersection of two D-branes. We have shown the presence of chiral supermultiplets locally at the intersection of two D-branes. Upon taking into account global aspects, namely the need for compactification, the fact that branes can intersect multiple times in the internal space gives a geometric reason for family replication. Global Consistency \[sec:global consistency\] ============================================= Since string theory gives rise to low energy gauge theories in a variety of ways, it has been important throughout its history to address whether string theory gives rise to *consistent* gauge theories. For instance, the first superstring revolution was sparked in [@GreenSchwarz] when Green and Schwarz showed that the type I string in ten dimensions with $SO(32)$ gauge symmetry is anomaly free. Over time, anomaly cancellation has been shown to arise naturally in many corners of the landscape. In all corners, the conclusion thus far has been the same: the natural ingredients arising in a string theory ensure the consistency of the low energy effective theory. In this section, we will present known results for how this occurs in type II orientifold compactifications. We begin with tadpole cancellation, which amounts to conditions on the homology of spacetime filling D-branes and O-planes that ensure the necessary cancellation of Ramond-Ramond charge on the internal space. These global conditions on homology impose constraints on chiral matter which are *necessary* for tadpole cancellation. We will show that a subset of these constraints on chiral matter are precisely the conditions for the cancellation of non-abelian anomalies. 
We will also show that the presence of Chern-Simons couplings of Ramond-Ramond forms to $U(1)$ field strengths gives rise to a generalized Green-Schwarz mechanism which cancels abelian and mixed anomalies. Couplings of this type also generically give a Stückelberg mass to the corresponding $U(1)$ gauge bosons, whose associated global symmetries impose phenomenologically important selection rules on superpotential couplings. Ramond-Ramond Tadpole Cancellation ---------------------------------- Historically, by studying amplitudes arising in CFT descriptions, it was shown that certain one-loop cylinder, Möbius strip, and Klein bottle diagrams have infrared divergences due to the presence of massless Ramond-Ramond tadpoles, which are required to cancel for consistency of the theory. With the advent of D-branes, a geometric picture of tadpole cancellation became clear in the works of [@Aldazabal:2000dg; @Blumenhagen:2002wn; @Blumenhagen:2002vp]. Following those works and working in type IIa, we examine the RR seven-form kinetic term of the ten-dimensional supergravity Lagrangian, along with relevant Wess-Zumino terms of the D-brane effective action $$\begin{aligned} S \supset -\frac{1}{4\kappa^2}\int_{{\mathbb{R}}^{3,1}\times {\mathcal{M}}} H_8 \wedge *H_8 \,\,\,+ \,\,\, \mu_6 \sum_a N_a \int_{{\mathbb{R}}^{3,1}\times \pi_a} C_7 \,\,\, \notag \\ + \,\,\, \mu_6 \sum_a N_a \int_{{\mathbb{R}}^{3,1}\times \pi_a^{'}} C_7 \,\,\, - \,\,\, 4\mu_6 \int_{{\mathbb{R}}^{3,1}\times \pi_{O6}} C_7,\end{aligned}$$ where $H_8=dC_7$ is the field strength of the Ramond-Ramond seven-form which couples to D6-branes and O6-planes, $\pi_a$ is the three-cycle wrapped by a D6-brane and $\pi_a '$ is wrapped by its orientifold image, and $\pi_{O6}$ is the three-cycle wrapped by the O6-plane. The ten-dimensional gravitational coupling is $\kappa^2=\frac{1}{2}(2\pi)^7(\alpha ')^4$. 
Given this action and the Poincaré dual $\delta(\pi_a)$ of $\pi_a$, the equation of motion is $$\frac{1}{\kappa^2} \,d(*H_8) = \mu_6 \sum_a N_a \, (\delta(\pi_a) + \delta(\pi_{a^{'}})) \,\,\,-\,\,\, 4\mu_6 \, \delta(\pi_{O6})$$ and we see from the left-hand side that the right-hand side is an exact form, and is therefore trivial in $H^3({\mathcal{M}},{\mathbb{Z}})$. But Poincaré duality is an *isomorphism* between cohomology and homology, and therefore the Poincaré dual of the right-hand side must be trivial in homology, yielding $$\label{eqn:tadpole} \sum_a N_a \, ([\pi_a] + [\pi_{a^{'}}]) = 4 \, [\pi_{O6}],$$ where $[\pi_a]$ is the homology class of the three-cycle $\pi_a$. This is the D6-brane tadpole cancellation condition in type IIa orientifold compactifications. It is a condition on the homology of the cycles which the D6-branes and O6-planes wrap. Qualitatively, satisfying this condition ensures that the Ramond-Ramond charge is canceled on the internal space, which is necessary since the spacetime filling D6-branes and O6-planes source Ramond-Ramond charge and the directions transverse to them form a submanifold of the compact Calabi-Yau ${\mathcal{M}}$. That is, the condition must be satisfied, as otherwise the flux lines would have nowhere to go in a compact manifold. The condition on homology is necessary and sufficient for the cancellation of homological RR tadpoles, but it also induces constraints on chiral matter which are necessary for tadpole cancellation. These are interesting in their own right. Using Table \[table:spectrum\] and intersecting the tadpole condition \[eqn:tadpole\][^8] with another three-cycle $\pi_a$ wrapped by a D6-brane (in the case where orientifolds are absent), we obtain $$0 = \pi_a \circ \sum_b N_b \,\,\, \pi_b = \sum_b N_b \,\,\, I_{ab} = \sum_b N_b (\#(a,{\overline}b) - \#({\overline}a,b)).$$ This can be rewritten as $\#a = \#{\overline}a$, which is precisely the condition for non-abelian anomaly cancellation when $N_a>2$. 
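The homological condition \[eqn:tadpole\] is a linear statement once homology classes are written in a basis, so it can be checked mechanically. A sketch in our own notation, with invented multiplicities and wrapping vectors:

```python
# Sketch: checking sum_a N_a([pi_a] + [pi_a']) = 4 [pi_O6], with homology
# classes represented as integer vectors in a basis of H_3(M, Z).
# All multiplicities and wrapping vectors below are invented for illustration.

def add(u, v):
    return [x + y for x, y in zip(u, v)]

def scale(c, u):
    return [c * x for x in u]

def tadpoles_cancel(Ns, stacks, images, pi_O6):
    total = [0] * len(pi_O6)
    for N, pa, pai in zip(Ns, stacks, images):
        total = add(total, scale(N, add(pa, pai)))
    return total == scale(4, pi_O6)

Ns     = [2, 2]             # stack multiplicities N_a
stacks = [[1, 0], [0, 2]]   # classes [pi_a]
images = [[1, 2], [2, 0]]   # image classes [pi_a']
pi_O6  = [2, 2]

print(tadpoles_cancel(Ns, stacks, images, pi_O6))        # True
# Changing one multiplicity spoils the cancellation:
print(tadpoles_cancel([2, 1], stacks, images, pi_O6))    # False
```

Because the condition lives in homology, two configurations with different geometric cycles but identical classes pass or fail it identically.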
Generalizing to the case with orientifolds and image branes, the full condition is $$\begin{aligned} \label{eqn:chiral tadpole constraint} N_a \ge 2&: \qquad \# a - \# {\overline}a + (N_a+4)\,\, \# \, {\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}\hskip-0.4pt \raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a + (N_a-4) \,\, \# \, {\raisebox{-3.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}\hskip-6.9pt \raisebox{3pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a = 0 \notag \\ \notag \\ N_a = 1&: \qquad \# a - \# {\overline}a + (N_a+4)\,\, \# \, {\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}\hskip-0.4pt \raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a = 0 \,\,\, \text{mod} \,\,\, 3,\end{aligned}$$ where the mod 3 condition for the $N_a=1$ case comes from the fact that there is no antisymmetric representation of a $U(1)$. For more details of this derivation, we refer the reader to [@Cvetic:2009yh]. Thus, we see that type II string theory provides a beautiful geometric picture for the existence of anomaly cancellation: the Ramond-Ramond charge of spacetime filling branes must be canceled on the internal space, which yields a condition on homology[^9] that induces necessary constraints on chiral matter. 
These constraints on chiral matter happen to include the cancellation of non-abelian anomalies, but also include some genuinely stringy constraints. Generalized Green-Schwarz Mechanism \[sec:Generalized Green-Schwarz Mechanism\] ------------------------------------------------------------------------------- While the constraints on chiral matter corresponding to the cancellation of non-abelian anomalies follow immediately once the homological tadpole cancellation condition is satisfied, tadpole cancellation does not cancel the abelian, mixed abelian-non-abelian, and mixed abelian-gravitational anomalies. In [@Aldazabal:2000dg] it was shown that there is a generalization of the Green-Schwarz mechanism [@GreenSchwarz] to the case of intersecting branes. The mechanism cancels the anomalies [@Aldazabal:2000dg; @Blumenhagen:2002wn] by the gauging of axionic shift symmetries associated with the Ramond-Ramond forms. We now address some of the details. Again for concreteness we work with the type IIa supergravity action. Expanding the exponential of the field strengths of the gauge fields in the Wess-Zumino action, each stack of D6-branes, indexed by $a$, has Chern-Simons couplings of the form $$\label{eqn:chernsimons} \int_{{\mathbb{R}}^{3,1}\times \pi_a} C_3 \wedge \text{Tr}(F_a \wedge F_a), \qquad \qquad \int_{{\mathbb{R}}^{3,1}\times \pi_a} C_5 \wedge \text{Tr}(F_a),$$ where $F_a$ is the gauge field strength on the brane.
As we are concerned with mixed anomaly cancellation in the effective four-dimensional gauge theory, we expand the Ramond-Ramond forms in a basis $(\beta^I,\alpha_J)$ Poincaré dual to the integral basis of three-cycles $(A^I,B_J)$ defined in section \[sec:chiral matter\] as $$C_3 = \Upsilon^I\,\alpha_I + \tilde \Upsilon_I \, \beta^I \qquad \text{and} \qquad C_5 = \tilde \Delta^I \, \alpha_I + \Delta_I \, \beta^I$$ where the coefficients of $\alpha$ and $\beta$ are the four-dimensional axions and two-forms $$\begin{aligned} \Upsilon^I &= \int_{A^I} C_3 \qquad \qquad \tilde \Upsilon_{I} = -\int_{B_I} C_3 \notag \\ \tilde \Delta^I &= \int_{A^I} C_5 \qquad \qquad \Delta_{I} = - \int_{B_I} C_5.\end{aligned}$$ Upon dimensional reduction of ${\mathcal{N}}_a$ D6-branes on $\pi_a = N_{aI}\, A^I + M_a^I \, B_I$, the generic Chern-Simons couplings can be written in terms of axionic couplings of the form $$\label{eqn:4d terms} {\mathcal{N}}_a\,\,\int_{{\mathbb{R}}^{3,1}} (N_{aI} \Upsilon^I - M_a^I \tilde \Upsilon_I) \wedge \text{Tr}(F_a \wedge F_a),\qquad {\mathcal{N}}_a\,\,\int_{{\mathbb{R}}^{3,1}} (N_{aI} \tilde \Delta^I - M_a^I \Delta_I) \wedge \text{Tr}(F_a).$$ The axions and two-forms come in four-dimensional Hodge dual pairs $(d\Upsilon^I, -d\Delta_I)$ and $(d\tilde\Upsilon_I,d\tilde \Delta^I)$, which can be derived from the ten-dimensional Hodge duality $dC_3=-\star_{10} dC_5$. One can show that the axions transform as $$\label{eqn:axion transformation} \Upsilon^I \mapsto \Upsilon^I + {\mathcal{N}}_a\, M_a^I\Lambda_a \qquad \text{and} \qquad \tilde \Upsilon_I \mapsto \tilde\Upsilon_I + {\mathcal{N}}_a\, N_{aI}\Lambda_a$$ under $U(1)_a$, which clearly leaves the four-dimensional couplings of stack $a$ itself invariant. It was shown in [@Blumenhagen:2002wn] that this gauging of the axionic shift symmetry precisely cancels the abelian and mixed anomalies.
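As a quick numerical cross-check (all wrapping numbers, multiplicities, and axion values below are hypothetical), the shift of a stack's own $C_3$ coupling cancels, since it is proportional to $N_{aI} M_a^I - M_a^I N_{aI} = 0$, while the cross-stack shift is generically non-zero and is the piece that participates in mixed anomaly cancellation:

```python
import numpy as np

Nw = np.array([[2, -1, 3], [1, 0, -2]])   # N_{aI} for stacks a = 0, 1 (hypothetical)
Mw = np.array([[1, 4, 0], [-3, 2, 1]])    # M_a^I (hypothetical)
mult = np.array([3, 2])                   # multiplicities N_a (calligraphic)
lam = 0.7                                 # gauge parameter Lambda for U(1) of stack 0

ups = np.array([0.2, -1.1, 0.5])          # axions Upsilon^I
upt = np.array([1.3, 0.4, -0.7])          # axions tilde-Upsilon_I

def coupling(a, u, ut):
    # coefficient of Tr(F_a ^ F_a): N_{aI} Upsilon^I - M_a^I tilde-Upsilon_I
    return Nw[a] @ u - Mw[a] @ ut

# gauged axionic shift under U(1) of stack 0
ups2 = ups + mult[0] * Mw[0] * lam
upt2 = upt + mult[0] * Nw[0] * lam

same = coupling(0, ups2, upt2) - coupling(0, ups, upt)
cross = coupling(1, ups2, upt2) - coupling(1, ups, upt)
print(abs(same) < 1e-9, abs(cross) > 1)   # -> True True
```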
Another important fact is that the couplings of the form $\int \Delta \wedge \text{Tr}(F_a)$ and $\int \tilde \Delta \wedge \text{Tr}(F_a)$ generically give rise to a Stuckelberg mass term for the $U(1)_a$ gauge bosons. Since we are dealing with orientifold compactifications, there are also image branes on $\pi_a'$ with field strength $-F_a$, so that the relevant couplings take the form $$\label{eqn:BwedgeF} {\mathcal{N}}_a\,\,\int_{{\mathbb{R}}^{3,1}} (N_{aI} - N_{aI}') \tilde \Delta^I \wedge \text{Tr}(F_a)\,\,\,-\,\,\, {\mathcal{N}}_a\,\,\int_{{\mathbb{R}}^{3,1}} (M_a^I - {M'}_a^I) \Delta_I \wedge \text{Tr}(F_a).$$ Though the $U(1)_a$ gauge bosons receive a mass, no symmetries of the action are broken, and so the gauge symmetry selection rules associated with the $U(1)_a$ gauge symmetry survive in the low energy effective action as global selection rules. These are precisely the global $U(1)$ symmetries which forbid superpotential terms in string perturbation theory. However, as is fortunate for phenomenological purposes, it is often the case that some linear combination of the $U(1)_a$ gauge symmetries remains massless. As a condition on homology, this means that a linear combination $\sum_x \,q_x\, U(1)_x$ is massless when $$\sum_x {\mathcal{N}}_x\,q_x\,\,(\pi_x-\pi_x')=0.$$ As with the case of the condition on homology for tadpole cancellation, this can be intersected with a cycle $\pi_a$ wrapped by a D6-brane to give constraints on the allowed forms of chiral matter. 
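A minimal sketch of this masslessness condition, with made-up homology vectors and multiplicities: a combination $\sum_x q_x\, U(1)_x$ keeps its gauge boson massless exactly when $\sum_x {\mathcal{N}}_x q_x (\pi_x - \pi_x') = 0$ in homology.

```python
import numpy as np

# Hypothetical cycles pi_x and orientifold images pi_x' in an integral basis
pi = np.array([[1, 0, 2], [0, 1, -1], [1, 1, 1]])
pip = np.array([[1, 0, -2], [0, 1, 1], [1, 1, -1]])
mult = np.array([3, 2, 1])                 # multiplicities N_x

def is_massless(q):
    # checks sum_x N_x q_x (pi_x - pi_x') = 0 in homology
    return np.all((mult * np.asarray(q)) @ (pi - pip) == 0)

print(is_massless([1, 3, 0]))   # -> True : this combination keeps its gauge boson
print(is_massless([1, 0, 0]))   # -> False: this U(1) acquires a Stueckelberg mass
```

In a realistic model one scans all rational $q_x$ in the null space of the matrix built from $N_x(\pi_x - \pi_x')$; the surviving combinations are the candidate hypercharges.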
Using Table \[table:spectrum\], these constraints are given by $$\label{eqn:chiral masslessness constraint} -q_a{\mathcal{N}}_a\,\,(\#({\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}\hskip-0.4pt \raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a) + \#({\raisebox{-3.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}\hskip-6.9pt \raisebox{3pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a)) + \sum_{x\ne a} q_x {\mathcal{N}}_x \,\, (\#(a,{\overline}x) - \#(a,x)) = 0,$$ which becomes $$\label{eqn:chiral masslessness constraint N1} -q_a \,\,\frac{\#(a) - \#({\overline}a) + 8 \#({\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}\hskip-0.4pt \raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a)}{3} + \sum_{x\ne a} q_x {\mathcal{N}}_x \,\, (\#(a,{\overline}x) - \#(a,x)) = 0,$$ for the special case ${\mathcal{N}}_a=1$. Any linear combination of $U(1)$’s which satisfies these conditions is an anomaly-free $U(1)$ with a massless gauge boson. When trying to realize the standard model in D-braneworlds, there must be such a $U(1)$ which allows for an interpretation as hypercharge. 
The particular linear combination corresponding to hypercharge, sometimes called a “hypercharge embedding”, has important implications for the realization of MSSM matter fields, and thus also for the structure of couplings. Concrete Model-Building Example: Toroidal Orbifolds \[sec:toroidal orbifold\] ============================================================================= The beautiful geometric picture of particle physics offered by type IIa intersecting braneworlds has encouraged much work in model building. Many of these examples are compactified on various toroidal orbifolds[^10], which offer two distinct advantages over more generic Calabi-Yau backgrounds. In particular, the homology cycles on a toroidal orbifold are particularly easy to visualize, which makes model building a bit more intuitive. Furthermore, toroidal orbifolds offer a CFT description, and therefore all of the power of the vertex operator formalism can be brought to bear. Generically, the toroidal orbifold is a six-torus modded out by a discrete group $\Gamma$, so that ${\mathcal{M}}= T^6/\Gamma$. We consider a factorizable six-torus $T^6=T^2\times T^2\times T^2$. One might wonder about the simplest possibility, where $\Gamma$ is trivial and therefore we simply have the type IIa string compactified on an orientifold of $T^6$. Unfortunately, due to simple considerations from the supersymmetry condition, these models cannot realize the MSSM. In the literature, therefore, $\Gamma$ is non-trivial and is usually of the form ${\mathbb{Z}}_N$ or ${\mathbb{Z}}_N\times {\mathbb{Z}}_M$. Before discussing the effects of the $\Gamma$-action on $T^6$, it is necessary to mention another detail or two about the orientifold. Introducing complex coordinates $z^i=x^i + i\,y^i$ on each of the $T^2$’s, the anti-holomorphic involution acts as ${\overline}\sigma:z^i\mapsto {\overline}z^i$. On each $T^2$ there are exactly two different choices for the complex structure which are consistent with the involution.
They correspond to the untilted torus and the tilted torus. The bases of one-cycles for the untilted and tilted torus are $([a^i],[b^i])$ and $([a'^i],[b^i])$, respectively, where $[a'^i]=[a^i]+\frac{1}{2}[b^i]$. Since the six-torus is factorizable, the three-cycles can be written as a product of three one-cycles as $$\label{eqn:one-cycle param} \pi_a = \prod_{i=1}^3 (n_a^i[a^i]+\tilde m_a^i [b^i]),$$ where $\tilde m_a^i = m_a^i$ for untilted tori and $\tilde m_a^i = m_a^i + \frac{1}{2} n_a^i$ for tilted tori. Using the fact that the only non-vanishing intersection of one-cycles is $[a^i]\circ [b^i] = -1$, it is straightforward to calculate $$I_{ab} = \prod_{i=1}^3(n_a^i \tilde m_b^i - \tilde m_a^i n_b^i) = \prod_{i=1}^3(n_a^i m_b^i - m_a^i n_b^i).$$ One should make careful note that the intersection number *does not* depend on the choice of tilted or untilted tori. This makes sense, of course, because topological quantities such as $I_{ab}$ must not depend on metric-related issues, such as complex structure moduli. Since we have specified a manifold on which to compactify, it is useful to recast the generic tadpole cancellation conditions in terms of the wrapping numbers $(n,m)$. Independent of the tilt on each $T^2$, the O6-plane is wrapping the cycle $2[a^i]$, so that the entire three-cycle reads $\pi_{O6}=8[a^1][a^2][a^3]$. The action of ${\overline}\sigma$ on a generic cycle is simply $(n^i,\tilde m^i) \mapsto (n^i,- \tilde m^i)$. Parameterizing the cycles in terms of wrapping numbers as above, the RR tadpole cancellation conditions become $$\begin{aligned} \label{eqn:T6 orientifold tadpole} [a^1][a^2][a^3]&: \qquad \sum_{a=1}^K N_a \prod_i n_a^i = 16 \notag \\ [a^i][b^j][b^k]&: \qquad \sum_{a=1}^K N_a n_a^i \tilde m_a^j \tilde m_a^k = 0 \qquad \text{with} \qquad i\ne j\ne k.\end{aligned}$$ One might wonder why there are no equations for the three-cycle basis components of the form $[b][b][b]$ or $[a][a][b]$.
This is because $\tilde m^i\mapsto -\tilde m^i$ under ${\overline}\sigma$ kills any contribution to a component with an odd number of $b$’s. As type IIa compactified on an orientifold of $T^6$ cannot realize the MSSM, it is important to examine the possibility of non-trivial $\Gamma$. Here we consider a well-studied choice for the orbifold group, where $\Gamma={\mathbb{Z}}_2\times{\mathbb{Z}}_2$. The generators of the ${\mathbb{Z}}_2\times{\mathbb{Z}}_2$ are given by $\omega$ and $\theta$, defined to be $$\omega: (z^1,z^2,z^3) \mapsto (-z^1,-z^2,z^3) \qquad \qquad \theta: (z^1,z^2,z^3) \mapsto (z^1,-z^2,-z^3).$$ Since $\Gamma$ also acts on the homology cycles of $T^6$, the simplification of the tadpole conditions in terms of wrapping numbers must take this into account. In fact, this can be done for any $\Gamma$. We refer the interested reader to the reviews [@Blumenhagen:2005mu; @Marchesano:2007de; @Blumenhagen:2006ci] for the derivation and expressions of the orbifold tadpole conditions. As an example, we present the wrapping numbers for a globally consistent model of [@Cvetic:2004ui] in Table \[table:pati-salam example\]. This model is a type IIa orientifold on $T^6/({\mathbb{Z}}_2\times{\mathbb{Z}}_2)$ with intersecting D6-branes and Pati-Salam $SU(4)_C\times SU(2)_L\times SU(2)_R$ gauge symmetry after the Green-Schwarz mechanism has given masses to $U(1)$ gauge bosons. From the point of view of bifundamental matter under the Pati-Salam group, the chiral spectrum is particularly nice, as it contains three families of $(4,{\overline}2, 1)$ and $({\overline}4, 1,2)$. However, in addition to the Pati-Salam gauge symmetry, which arises from stacks $a$, $b$, and $c$, “filler” branes labeled with integers are needed to satisfy the tadpole conditions. This gives rise to many chiral exotics arising at the intersection of a filler brane with a Pati-Salam brane.
The appearance of chiral exotics at intersections with filler branes occurs somewhat frequently in type II orientifolds, and often spoils the phenomenology. Perturbative Yukawa Couplings \[sec:couplings\] =============================================== To this point, we have reviewed how low energy effective theories with particular gauge symmetry and chiral matter arise in the context of type II orientifold compactifications. While these features are the most important if a string vacuum is to realize the particle physics of our world, it is also crucial that the couplings of the low energy theory reproduce the structure of the Standard Model. This is a particularly important detail to investigate in the context of intersecting brane models, as the gauge symmetries whose gauge bosons are given a Stückelberg mass via the Green-Schwarz mechanism impose global selection rules on couplings, forbidding crucial superpotential terms in string perturbation theory. If a model is to realize such a forbidden, but desired, coupling, it must be due to a non-perturbative effect. Yukawa Couplings from String Amplitudes --------------------------------------- Before we address non-perturbative effects, we address those Yukawa couplings which *are* present in string perturbation theory, and thus give the leading-order effects. To determine the structure of a given Yukawa coupling, one must calculate the relevant correlation function in conformal field theory. We take the example of an up-flavor quark Yukawa coupling, which appears in the superpotential as $H_u\,Q_L\, u_R$. It is phenomenologically preferred that at least one such Yukawa coupling be present in string perturbation theory, as the top-quark Yukawa coupling is ${\mathcal{O}}(1)$, which is difficult to obtain via a non-perturbative effect.
In terms of the vertex operators presented in section \[sec:CFT\], the relevant correlator is $$\label{eqn:upyukawa} \langle V_{-1}^{H_u} \,\,\, V_{-1/2}^{Q_L} \,\,\, V_{-1/2}^{u_R}\rangle,$$ where the $NS$ and $R$ sector vertex operators are chosen so that we are using the bosonic component of the Higgs supermultiplet, and the fermionic components of the quarks. Before being concerned about the precise structure of the correlator, the coarsest thing that one can do is determine whether or not the operator is forbidden by symmetries in string perturbation theory. This depends entirely on how the $H_u$, $Q_L$, and $u_R$ fields are represented in the brane stacks. For example, consider the case of three stacks of D-branes with $U(3)_a\times U(2)_b\times U(1)_c$ gauge symmetry, and suppose that the linear combination $U(1)_Y = \frac{1}{6}U(1)_a + \frac{1}{2}U(1)_c$ is left massless by the Green-Schwarz mechanism, which we identify with hypercharge. Now suppose that the fields are realized as $$H_u\sim (b,c)\qquad Q_L^1\sim (a,b)\qquad Q_L^2\sim (a, {\overline}b)\qquad u_R\sim ({\overline}a, {\overline}c),$$ where two families of the left-handed quark doublets are realized as $Q_L^1$ and one as $Q_L^2$. Then the possible up-flavor quark Yukawa couplings have $U(1)$ structure $$H_u\,Q_L^1\,u_R:\,\,\,(0,2,0) \qquad\text{and}\qquad H_u\,Q_L^2\,u_R:\,\,\,(0,0,0),$$ the first of which has non-zero global $U(1)$ charge and is therefore forbidden in string perturbation theory. In this case we have one family of up-flavor quarks perturbatively allowed and two disallowed, which gives a nice explanation of the large top-quark mass. However, if the model is to be phenomenologically viable, non-perturbative effects *must* generate Yukawa couplings for the other two families, otherwise the up-quark and charm-quark will be massless.
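The global $U(1)$ bookkeeping in this example is mechanical enough to script. The sketch below encodes the representation assignments above (a fundamental of a stack carries charge $+1$ under that stack's $U(1)$, an antifundamental $-1$) and reproduces the charge vectors $(0,2,0)$ and $(0,0,0)$:

```python
A, B, C = 0, 1, 2   # the three stacks: U(3)_a, U(2)_b, U(1)_c

def charge(rep):
    # global U(1)_a x U(1)_b x U(1)_c charge of a bifundamental field
    q = [0, 0, 0]
    for stack, bar in rep:
        q[stack] += -1 if bar else +1
    return q

H_u  = [(B, False), (C, False)]   # (b, c)
Q_L1 = [(A, False), (B, False)]   # (a, b)
Q_L2 = [(A, False), (B, True)]    # (a, b-bar)
u_R  = [(A, True),  (C, True)]    # (a-bar, c-bar)

def total(*fields):
    # net charge of a product of fields, stack by stack
    return [sum(col) for col in zip(*(charge(f) for f in fields))]

print(total(H_u, Q_L1, u_R))   # -> [0, 2, 0]  forbidden perturbatively
print(total(H_u, Q_L2, u_R))   # -> [0, 0, 0]  allowed
```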
Since the vertex operators for each of the relevant fields are known explicitly, the correlator can be calculated explicitly using CFT techniques [@DFMS; @Cvetic:2003ch; @Abel:2003vv; @Bertolini:2005qh]. The non-trivial aspect of the calculation of this correlation function involves calculating the three-point correlator of the bosonic twist fields which take into account the boundary conditions associated with the angles between branes. With the picture of a toroidal orbifold in mind, the target space picture of this Yukawa coupling for one of the tori looks like Figure \[fig:upyukawa\], where the corresponding bosonic twist field amplitude that needs to be calculated is $$\label{eqn:three twist} \langle \sigma_\nu(z_1)\,\sigma_{-\nu-\lambda}(z_3)\,\sigma_\lambda(z_4)\rangle.$$ Calculation of this correlator can be performed by calculating the four point disk correlator $$\label{eqn:four twist} \langle \sigma_{\nu}(z_1)\,\sigma_{-\nu}(z_2)\,\sigma_{-\lambda}(z_3)\,\sigma_{\lambda}(z_4)\rangle$$ and extracting the three-point function in the limit $z_2\to z_3$. The spacetime picture of the four point correlator is the blue trapezoid in Figure \[fig:fourpoint\], and the geometric picture of the $z_2\to z_3$ limit is to take the uppermost brane north, past the point of convergence of the dotted red lines. For technical details on the calculation of these correlators, we refer the reader to [@Cvetic:2003ch]. The twist field correlator determines the angular dependence of the Yukawa coupling, which is often referred to as the “quantum” part of the coupling, due to its dependence on CFT quantum correlators.
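This quantum factor, together with a classical sum over worldsheet-instanton triangle areas, sets the magnitude of the coupling. A numeric sketch, with entirely hypothetical intersection angles (in units of $\pi$) and triangle areas in string units:

```python
from math import exp, gamma, pi, sqrt

def quantum_factor(nu, lam):
    # angular ("quantum") part per two-torus from the twist-field correlator;
    # the angles nu, lam are in units of pi, with nu + lam < 1 assumed here
    r = (16 * pi**2 * gamma(1 - nu) * gamma(1 - lam) * gamma(1 - nu - lam)
         / (gamma(nu) * gamma(lam) * gamma(nu + lam)))
    return r ** 0.25

def classical_factor(areas, alpha_prime=1.0):
    # sum over worldsheet-instanton triangle areas A^m on one two-torus
    return sum(exp(-A / (2 * pi * alpha_prime)) for A in areas)

g_s = 0.1
angles = [(0.25, 0.25), (0.3, 0.4), (0.2, 0.5)]   # hypothetical (nu^I, lambda^I)
areas = [[0.0, 7.0], [2.0, 9.0], [1.0, 8.0]]      # hypothetical leading areas A_I^m

h = sqrt(2) * g_s * 2 * pi
for (nu, lam), A in zip(angles, areas):
    h *= quantum_factor(nu, lam) * classical_factor(A)
print(f"h = {h:.3g}")
```

Note the exponential sensitivity of the classical factor to the triangle areas: this is the origin of fermion mass hierarchies in these constructions.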
The explicit structure of a Yukawa coupling in a $T^6=T^2\times T^2\times T^2$ background is given by $$h = \sqrt{2} \, g_s \, 2\pi \, \prod_{I=1}^3 \left(\frac{16\pi^2\,\Gamma(1-\nu^I) \Gamma(1-\lambda^I)\,\Gamma(1-\nu^I-\lambda^I)}{\Gamma(\nu^I)\Gamma(\lambda^I)\Gamma(\nu^I+\lambda^I)}\right)^{\frac{1}{4}} \,\,\, \sum_m \text{exp}\left(\frac{-A_I^m}{2\pi\alpha '}\right),$$ where the index $I\in\{1,2,3\}$ labels the two-tori and $A_I^m$ is the area of the $m$-th triangle (worldsheet instanton [@Cremades:2004wa; @Aldazabal:2000cn]) on the $I$-th two-torus. The factor involving the worldsheet instantons is often called the classical factor. Coupling Issues, Exemplified ---------------------------- In our simple example in the previous section, we saw that two of the families of up-flavor quark Yukawa couplings were forbidden in string perturbation theory, as $H_uQ_L^1u_R$ had non-zero global $U(1)$ charge. In certain scenarios non-perturbative effects can generate the missing couplings, but in this case the issue could be avoided entirely if all three families of left-handed quark doublets appear as $Q_L^2$ instead of $Q_L^1$, as in that case all of the up-flavor quark Yukawa couplings would be perturbatively allowed. This depends heavily on how the chiral matter in a given orientifold compactification is realized at the intersection of two branes, as that determines the structure of the global $U(1)$ charges. There are important phenomenological couplings which are *always* forbidden in string perturbation theory, though, so that if a weakly coupled type II orientifold compactification is to realize them, it *must* be at the non-perturbative level. In this section, we discuss the non-perturbative generation of the always forbidden Majorana neutrino mass term and its role in the seesaw mechanism. In addition, we discuss the non-perturbative generation of the always forbidden $10\,10\,5_H$ Yukawa coupling, which gives mass to the top quark in Georgi-Glashow GUTs.
**Example One: The Neutrino Masses**\ Consider a single Dirac neutrino mass coupling $h_\nu \,\, H_u \, L\, N_R$, which can be calculated as a function of moduli using the conformal field theory techniques above. Then, after electroweak symmetry breaking, the Dirac mass term is $m_{D_\nu} = h_\nu \,\langle H_u \rangle$. Comparing to a generic quark mass $m_q = h_Q \, \langle H_u \rangle$, we see that the quark masses and the neutrino masses are generically of the same order, unless the Yukawa couplings are tuned such that $h_\nu \ll h_Q$. While worldsheet instantons are able to account for the standard model fermion hierarchies to some degree, it is only in very small regions of moduli space where they could account for the hierarchy $m_q \sim 1\,\text{GeV}$ and $m_\nu\sim 10^{-3} \,\text{eV}$. As in the particle theory literature, we would prefer to have some mechanism to account for this hierarchy, rather than attributing it to some miraculous result of moduli stabilization. One popular field theoretic mechanism which accounts for the small neutrino masses is the type I seesaw mechanism. In this mechanism, in addition to the Dirac type neutrino mass term $h_\nu \, H_u\, L\, N_R$, there is a Majorana neutrino mass term of the form $M_R N_R N_R$. The neutrino mass matrix takes the form $$\begin{pmatrix} 0 & h_\nu \, \langle H_u \rangle \\ h_\nu \, \langle H_u \rangle & M_R \end{pmatrix}$$ giving rise to mass eigenvalues $M_R$ and $\frac{h_\nu^2 \langle H_u \rangle^2}{M_R}$ in the limit of large $M_R$. Thus, one of the mass eigenvalues has been “seesawed” to a very small value by the large Majorana mass, giving (in this mechanism) the reason for the very small neutrino masses observed in nature. While this mechanism is nice from the point of view of field theory, there is an important difficulty which arises when attempting to realize a Majorana mass term in string theory.
As the right-handed neutrinos $N_R$ will have global $U(1)$ quantum numbers in a type II orientifold compactification, the Majorana mass term $M_R N_R N_R$ will also be charged with respect to the global $U(1)$’s, and is therefore forbidden in string perturbation theory. Therefore, in type II orientifold compactifications it is difficult to account for the smallness of the neutrino masses in string perturbation theory: in the absence of extreme fine-tuning of the moduli, $m_{D_\nu}$ and $m_Q$ are of the same order, and the seesaw mechanism cannot be realized, as the Majorana mass term is forbidden. **Example Two: the $10\, 10\, 5_H$ in $SU(5)$ GUTs**\ There is another well-known coupling problem that arises when trying to realize Georgi-Glashow GUTs in weakly coupled type II orientifold compactifications. In these models, the $SU(5)$ gauge theory is realized by a stack of five spacetime-filling D-branes wrapping a non-trivial cycle in the Calabi-Yau, and chiral matter charged under the $SU(5)$ factor is realized at the intersection of this five-stack with some other D-brane. It is the possibility of symmetric and antisymmetric matter representations, in addition to the bifundamental, which allows for the realization of the standard $SU(5)$ GUT particle representations. Specifically, the $10$ representation of $SU(5)$ can be realized as ${\raisebox{-3.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}\hskip-6.9pt \raisebox{3pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_5$ at the intersection of the five-stack with its orientifold image. Though the proper spectrum can be realized, there is an immediate problem at the level of couplings. Since the $10$ is realized as an antisymmetric, it comes with charge $2$ under the $U(1)$ of the five-stack. 
In addition, the $5_H$ comes with charge $1$, since it is a fundamental of $SU(5)$, and the top-quark Yukawa coupling $10\, 10\, 5_H$ has charge $5$ and is therefore *always* forbidden in string perturbation theory. On the other hand, the bottom-quark Yukawa coupling $10 \, {\overline}5 \, {{\overline}5}_H$ can be present in string perturbation theory, giving a massive bottom-quark and a massless top-quark. This inverts the standard hierarchy and is a major phenomenological pitfall that must be remedied if one hopes to realize realistic Georgi-Glashow GUTs in weakly coupled type II orientifold compactifications. Non-Perturbative Superpotential Corrections: D-instantons \[sec:instantons\] ============================================================================ We have now seen that it would be phenomenologically useful if some non-perturbative effect were able to generate superpotential couplings which are forbidden in string perturbation theory. Doing so would require that the non-perturbative physics somehow cancels the excess $U(1)$ charge associated with a perturbative Yukawa coupling. A reader familiar with the KKLT [@Kachru:2003aw] scenario (for example) might be concerned that D-instantons cannot serve this purpose, since there a euclidean D3-instanton in type IIb generated a non-perturbative correction without charged matter, which was responsible for stabilizing a Kähler modulus. However, it was shown in [@Blumenhagen:2006xt; @Ibanez:2006da; @Florea:2006si] that in the presence of spacetime filling gauge D-branes the axionic shift symmetries which are gauged by the Green-Schwarz mechanism cause the axions to be charged with respect to the global $U(1)$ symmetries. These axions appear in instanton corrections and can cancel the net $U(1)$ charge of perturbatively forbidden couplings, giving rise to these couplings at the non-perturbative level. Consider a type II orientifold compactification with euclidean D-instantons in the background.
The instantons are pointlike in spacetime and wrap a non-trivial cycle in the Calabi-Yau. The instanton action is of the form $$S_{inst} = S^E_{cl} + S({\mathcal{M}},\Phi)$$ where ${\mathcal{M}}$ is the set of instanton zero modes and $\Phi$ is the set of charged matter fields present in the low energy theory. The instanton correction to the low energy effective theory in four dimensions takes the form $$\label{eqn:path integral} S^{4d}_{np}(\Phi) = \int [{\mathcal{D}}{\mathcal{M}}] \,\,e^{-S_{inst}},$$ where the structure of the correction is determined by ${\mathcal{M}}$ and $\Phi$, and its magnitude is set by the classical instanton suppression factor, which depends on the volume wrapped by the instanton in the Calabi-Yau. Instanton Heuristics -------------------- For the sake of concreteness, we again work in type IIa, where the instantons are euclidean D2-branes which are pointlike in spacetime and wrap non-trivial three-cycles in the Calabi-Yau. Taking a three-cycle $\Xi$, the classical action of the instanton is $$S_{cl}^E = T_{E2}\left[\frac{1}{g_s}\int_\Xi \sqrt{\text{det}\, G} - i \int_\Xi C_3\right],$$ where the first term comes from the Born-Infeld action and the second from the Wess-Zumino action. Here $e^{-Re\, S_{cl}^E}$ is a real suppression factor that sets the scale of superpotential corrections. Its value is set by $$Re \, S_{cl}^E = \frac{T_{E2}}{g_s} \, V_\Xi = \frac{8\pi^2}{g_a^2} \, \frac{V_\Xi}{V_{\pi_a}},$$ and can lead to very large suppression due to its exponential nature. This is phenomenologically very relevant, as (for example) it allows one to realize neutrino masses of the correct order with only a highly suppressed Dirac term $LH_uN_R$ [@Cvetic:2008hi], without resorting to the seesaw mechanism. Another alternative to the seesaw mechanism includes the generation of the Weinberg operator $LH_uLH_u$ by a D-instanton [@Cvetic:2010mm].
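Plugging in numbers illustrates how strong this exponential suppression is, and how a modest volume ratio already produces a tiny Dirac neutrino mass; the gauge coupling and scales below are purely illustrative:

```python
from math import exp, log, pi

def re_S(g_a, vol_ratio):
    # Re S = (8 pi^2 / g_a^2) * V_Xi / V_pi_a
    return 8 * pi**2 / g_a**2 * vol_ratio

g_a = 0.7                                  # hypothetical gauge coupling on stack a
print(exp(-re_S(g_a, 1.0)))                # instanton cycle as large as pi_a: ~1e-70

# volume ratio needed for m_nu ~ suppression * <H_u> ~ 1e-3 eV,
# with <H_u> ~ 174 GeV (illustrative target only)
target = 1e-3 / 174e9                      # both expressed in eV
ratio = -log(target) * g_a**2 / (8 * pi**2)
print(ratio)                               # a modest fraction of the gauge-brane volume
```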
One might recall from section \[sec:Generalized Green-Schwarz Mechanism\] that dimensional reduction of the Ramond-Ramond three-form $C_3$ on three-cycles gives rise to four-dimensional axions $\Upsilon^I$ and $\tilde \Upsilon_I$, which transform under $U(1)_a$ due to the generalized Green-Schwarz mechanism. These axions enter the classical instanton action as $$Im\, S_{cl}^E = T_{E2} \, \int_\Xi C_3 = T_{E2} \, (N_{\Xi I}\Upsilon^I - M_\Xi^I \tilde \Upsilon_I).$$ The gauge field one-form associated with $U(1)_a$ transforms as $A \mapsto A + d\Lambda_a$, and the axions transform as $$\Upsilon^I \mapsto \Upsilon^I + {\mathcal{N}}_a \,\, M_a^I \Lambda_a \qquad \text{and} \qquad \tilde \Upsilon_I \mapsto \tilde \Upsilon_I + {\mathcal{N}}_a \,\, N_{aI} \Lambda_a$$ from which it can be seen that the classical instanton action transforms as $$e^{-S_{cl}} \mapsto e^{-S_{cl} + i \, Q_\Xi^a\,\, \Lambda_a},$$ with $Q_\Xi^a={\mathcal{N}}_a\,\, \Xi\, \circ\, \pi_a$. Taking the orientifold and image branes into account induces extra shifts in the classical instanton action, so that in full generality $$\label{eqn:instanton charge} Q_\Xi^a={\mathcal{N}}_a\,\Xi\circ(\pi_a - \pi_a ').$$ This is precisely the net charge carried microscopically by the charged instanton zero modes in the path integral measure, as will be discussed in section \[sec:fermionic zero modes\]. If a subset of matter fields $\phi_i\in\Phi$ has charges $Q_i$ such that $Q^a_\Xi + \sum_i Q_i^a = 0 \,\,\, \forall a$, an instanton wrapped on $\Xi$ can, in principle, generate a superpotential coupling of the form $$\label{eqn:correction} e^{-S_{cl}^{E2}}\prod_i \phi_i,$$ since the non-trivial transformation of the axions cancels the global $U(1)$ charge of $\prod_i \phi_i$.
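A toy version of this charge accounting, with hypothetical cycles in a rank-two slice of $H_3$ and a symplectic intersection pairing; the matter charges are chosen by hand to cancel the instanton charge:

```python
import numpy as np

Omega = np.array([[0, 1], [-1, 0]])        # toy intersection pairing on H_3
Xi = np.array([2, 1])                      # instanton cycle (hypothetical)
pi_a, pi_ap = np.array([1, 3]), np.array([1, -3])   # stack a and its image
N_a = 2                                    # multiplicity of stack a

# Q_Xi^a = N_a * Xi . (pi_a - pi_a')
Q_inst = N_a * (Xi @ Omega @ (pi_a - pi_ap))
print(Q_inst)   # -> 24

# hypothetical U(1)_a charges of candidate matter fields phi_i
Q_matter = [-10, -14]
print(Q_inst + sum(Q_matter) == 0)   # -> True: e^{-S} phi_1 phi_2 is gauge invariant
```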
Fermionic Zero Modes \[sec:fermionic zero modes\] ------------------------------------------------- The arguments of the previous section heuristically showed that the gauging of shift symmetries by the Green-Schwarz mechanism makes it possible for perturbatively forbidden couplings to be gauge invariant, depending on the $U(1)$ charges of the matter fields. Whether or not such a term is actually generated by an instanton depends heavily on the microscopic properties of the instanton, in particular its fermionic zero modes, which correspond to massless open strings. The importance of fermionic zero modes for determining non-perturbative corrections is well known in other areas of the landscape. For example, in [@Witten:1996bn] Witten argued that an M5-instanton wrapped on a $6$-cycle $\Xi_{M5}$ must satisfy $$\chi(\Xi_{M5},{\mathcal{O}}_{\Xi_{M5}}) = \sum_{i=0}^3 (-1)^i \, h^i(\Xi_{M5},{\mathcal{O}}_{\Xi_{M5}}) = 1$$ if it is to contribute to the superpotential. This is a constraint on particular (uncharged) fermionic zero modes of the instanton, which are counted by the Hodge numbers $h^i(\Xi_{M5},{\mathcal{O}}_{\Xi_{M5}})$. Similar constraints exist for the uncharged modes in type II, as, for example, a euclidean D3-instanton wrapped on a holomorphic divisor $D$ must satisfy $\chi(D,{\mathcal{O}}_D)=1$ if it is to contribute to the superpotential. The influence of the fermionic modes on non-perturbative corrections is easy to see: the path integral is over all fermionic zero modes, so if these modes are not lifted or the instanton action does not have appropriate terms for soaking them up, then the Grassmann integral evaluates to zero. Suppose (in an unrealistic but illustrative example) that $S({\mathcal{M}},\Phi)= a\,\xi + b\,\eta + c\,\xi\eta$, where the Greek variables are fermionic zero modes.
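This Berezin integral is simple enough to check mechanically. The toy implementation below assumes ordinary-number coefficients $a,b,c$, and fixes the sign of the measure by hand so that integrating over both modes extracts minus the coefficient of $\xi\eta$; only the $c\,\xi\eta$ term then survives:

```python
def gmul(u, v):
    # product in a Grassmann algebra with generators 'x' (= xi) and 'y' (= eta);
    # elements are dicts mapping sorted tuples of generators to coefficients
    out = {}
    for ku, cu in u.items():
        for kv, cv in v.items():
            if set(ku) & set(kv):
                continue                        # xi^2 = eta^2 = 0
            sign, lst = 1, list(ku + kv)
            for i in range(len(lst)):           # sort the generators, tracking
                for j in range(len(lst) - 1):   # the sign from anticommutation
                    if lst[j] > lst[j + 1]:
                        lst[j], lst[j + 1] = lst[j + 1], lst[j]
                        sign = -sign
            key = tuple(lst)
            out[key] = out.get(key, 0) + sign * cu * cv
    return out

a, b, c = 2.0, 3.0, 5.0
S = {('x',): a, ('y',): b, ('x', 'y'): c}       # S = a xi + b eta + c xi eta

# S*S vanishes identically, so the expansion e^{-S} = 1 - S terminates exactly
assert all(v == 0 for v in gmul(S, S).values())
expS = {(): 1.0}
for k, v in S.items():
    expS[k] = expS.get(k, 0) - v

# Berezin integral over both modes: (minus) the top-component coefficient
result = -expS[('x', 'y')]
print(result)   # -> 5.0, i.e. the coefficient c
```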
Then from \[eqn:correction\] the non-perturbative correction would be $$\label{eqn:fermionic modes example} e^{-S_{cl}^{E}} \int [d\xi] [d\eta] \,\,\, e^{-(a\,\xi + b\,\eta + c\,\xi\eta)} = e^{-S_{cl}^{E}} \int [d\xi] [d\eta] \,\,\, (1-(a\,\xi + b\,\eta + c\,\xi\eta)) = c \,\, e^{-S_{cl}^{E}}.$$ It is easy to see that if the third term in $S({\mathcal{M}},\Phi)$ were not present, then the correction would be absent. One might intuitively expect that, since the basic branes in our theory are gauge D-branes and euclidean D-instantons, there exist two types of fermionic zero modes living in the instanton worldvolume, corresponding to a string from the instanton to itself and a string from the instanton to a gauge brane. Indeed, this is the case, as can be shown (for example) by CFT techniques. The strings from the instanton to itself are known as uncharged zero modes, and are the modes counted by $h^i(D,{\mathcal{O}}_D)$ in type IIb[^11], which sum with alternating signs to the holomorphic Euler characteristic $\chi(D,{\mathcal{O}}_D)$. The strings from the instanton to a gauge brane are charged under the gauge group of the D-brane, and thus are known as charged modes. ### Uncharged Zero Modes Perhaps the most crucial of the uncharged modes are the ones associated with the breakdown of supersymmetry. Recall that type II string theory compactified on a Calabi-Yau manifold gives rise to ${\mathcal{N}}=2$ supersymmetry in four dimensions, an ${\mathcal{N}}=1$ subalgebra of which is preserved by the orientifold, with supercharges $Q^\alpha$, ${\overline}{Q}^{\dot \alpha}$. A spacetime-filling D-brane wrapping a 1/2-BPS cycle[^12] might preserve the same supercharges $Q^\alpha, {\overline}Q^{\dot \alpha}$, in which case the D-brane is supersymmetric with respect to the orientifold. The orthogonal complement to the ${\mathcal{N}}=1$ algebra preserved by the orientifold, which has supercharges $Q'^\alpha$ and ${\overline}Q'^{\dot \alpha}$, then gives four Goldstinos associated with the four broken supersymmetries.
The key point is that, due to localization in the four extended dimensions, a 1/2-BPS D-instanton does not preserve the four supercharges $Q^\alpha$, ${\overline}Q^{\dot \alpha}$ preserved by a gauge D-brane and the orientifold, but instead preserves the combination $Q'^{\alpha}$ and ${\overline}Q^{\dot \alpha}$. There are then four Goldstinos in the instanton worldvolume associated with the breakdown of supersymmetry: two chiral modes $\theta^\alpha$ associated with the breaking of $Q^\alpha$, and two anti-chiral modes ${\overline}\tau^{\dot \alpha}$ [@Argurio:2007qk; @Argurio:2007vqa] associated with the breaking of ${\overline}Q'^{\dot \alpha}$. With the $\theta$ modes identified as the Grassmann coordinates $\theta$ of four-dimensional ${\mathcal{N}}=1$ superspace, the instanton might contribute a superpotential correction if the ${\overline}\tau$ modes are somehow saturated or lifted. There are numerous ways for this to happen. One common possibility is that the instanton wraps an orientifold invariant cycle, in which case the ${\overline}\tau$ mode is projected out. In the case of a type IIa orientifold compactification, this can be seen directly in the CFT formalism, where the vertex operators associated with the $\theta$ and ${\overline}\tau$ modes are given by $$\begin{aligned} V^{\theta}_{-\frac{1}{2}} = \theta_\alpha \, e^{-\frac{\phi}{2}} \, S^\alpha(z) \,\Sigma_{\frac{3}{8},\frac{3}{2}}(z) \notag \\ V^{{\overline}\tau}_{-\frac{1}{2}} = {{\overline}\tau}^{\dot \alpha} \, e^{-\frac{\phi}{2}} \, S_{\dot \alpha}(z) \,\Sigma_{\frac{3}{8},-\frac{3}{2}}(z),\end{aligned}$$ where the $\Sigma$ fields are spin fields describing fermionic degrees of freedom on the internal space. The subscripts of $\Sigma$ give the conformal dimension and worldsheet $U(1)$ charge. The orientifold projection induces extra constraints on the structure of Chan-Paton factors, which the ${\overline}\tau$ modes do not satisfy; they are therefore projected out.
From the point of view of type IIb compactified on a generic Calabi-Yau with a euclidean D3 instanton wrapping a holomorphic divisor $D$, the presence or absence of $\theta$ and ${\overline}\tau$ is counted by the Hodge number $h^{0,0}(D)= h^0(D,{\mathcal{O}}_D)$. The presence of the holomorphic ${\mathbb{Z}}_2$-action $\sigma$ associated with the orientifold allows for a decomposition of ordinary sheaf cohomology into a sum of ${\mathbb{Z}}_2$-equivariant sheaf cohomologies as $$H^i(D,{\mathcal{O}}_D) \cong H^i_+(D,{\mathcal{O}}_D) \oplus H^i_-(D,{\mathcal{O}}_D),$$ reflecting the fact that each of the zero modes transforms with a sign under $\sigma$. The ${\overline}\tau \in H^0_-(D,{\mathcal{O}}_D)$ mode is the one which transforms with a $-$ sign, and is therefore absent in the case of an orientifold invariant divisor. Such an instanton is called an $O(1)$ instanton. These are not the only uncharged zero modes, however. For example, in addition to the $\theta$ and ${\overline}\tau$ Goldstino modes associated with the breakdown of supersymmetry due to the localization of the instanton in four-dimensional spacetime, there are additional uncharged modes corresponding to the localization of the instanton on submanifolds of the Calabi-Yau. These deformation modes also admit both a CFT description, when available, and a description in terms of cohomology. The latter can be seen in type IIb as deformations of a holomorphic divisor $D$, which are given by global sections of the normal bundle, and thus are counted by $h^0(D,N_{D|{\mathcal{M}}})$[^13]. If a cycle has no deformation modes, it (as well as an instanton wrapping it) is said to be rigid. In order for an instanton to contribute to the superpotential, it must realize the $\theta$ modes and none of the other uncharged modes[^14], as the superpotential appears in the ${\mathcal{N}}=1$ spacetime action as $\int d^4x\, d^2\theta \, W(\Phi)$. There are at least two possible reasons for the absence of a zero mode.
First, it may be absent to begin with due to being projected out. This is the reason for the absence of the ${\overline}\tau$ mode in the case of an $O(1)$ instanton. The second possibility is that the extra zero modes are “saturated" or “soaked up". This is the case in \[eqn:fermionic modes example\], where the $\xi$ and $\eta$ integrals evaluate to one when integrating over the $c\,\xi\eta$ term. ### Charged Zero Modes The charged zero modes are massless strings from a euclidean D-instanton to a gauge D-brane, which are therefore charged under the gauge group of the D-brane. They are the microscopic modes that carry the global $U(1)$ charges which compensate for the overshoot in $U(1)$ charge of perturbatively forbidden couplings. The form of the superpotential corrections involving charged matter depends heavily on these charged modes and is calculated using the instanton calculus presented in [@Blumenhagen:2006xt] and reviewed in [@Blumenhagen:2009qh]. In short, the instanton calculus tells one how to determine the structure of $S({\mathcal{M}},\Phi)$ in the instanton action based on CFT disc diagrams involving charged matter fields and charged zero modes. For the sake of brevity, we refer the reader to those sources for a general discussion of the instanton calculus, and instead we present the CFT basics of charged zero modes and an illustrative example that makes the physics and the basics of the method clear. The Ramond sector open string vertex operator corresponding to a charged zero mode between a brane ${\mathcal{D}}_a$ and an instanton ${\mathcal{E}}$ is given by $$V^{\lambda_a^i}_{-\frac{1}{2}}(z) = \lambda_a^i e^{-\frac{\phi}{2}}\Sigma_{\frac{3}{8},-\frac{1}{2}}^{D_a,{\mathcal{E}}}(z)\sigma_{h=1/4}(z),$$ where $i$ is the gauge index of the brane ${\mathcal{D}}_a$ and the $\sigma$ fields are the 4D twist fields arising from the twisted 4D worldsheet bosons carrying half-integer modes.
Due to the four Neumann-Dirichlet boundary conditions between the brane and the instanton in ${\mathbb{R}}^{3,1}$, the zero point energy in the NS-sector is shifted by $1/2$. This makes all NS-sector states massive, so that the only charged zero modes come from the Ramond sector. We emphasize that the net $U(1)$ charge of these zero modes, as dictated by Table \[table:charged modes\], is precisely the charge \[eqn:instanton charge\] of the classical instanton action. We examine the up-flavor quark sector of the model presented in Table 1 of [@Cvetic:2009ez][^15]. That model is a four-stack quiver with $U(3)_a\times U(2)_b \times U(1)_c \times U(1)_d$ gauge symmetry which becomes $SU(3)_C\times SU(2)_L \times U(1)_Y$ due to the Green-Schwarz mechanism, with the Madrid hypercharge embedding $U(1)_Y=\frac{1}{6}U(1)_a + \frac{1}{2}U(1)_c + \frac{1}{2}U(1)_d$. The fields relevant for up-flavor quark Yukawa couplings are realized as $$H_u: \,\,\, (b,c) \qquad Q_L: \,\,\, (a,{\overline}b) \qquad u_R^1: \,\,\, 1\times({\overline}a, {\overline}c) \qquad u_R^2: \,\,\, 2\times({\overline}a, {\overline}d),$$ so that the Yukawa couplings have global $U(1)$ charge $$H_u\,Q_L\,u_R^1: (0,0,0,0) \qquad \qquad H_u\, Q_L \, u_R^2: (0,0,1,-1).$$ Since $u_R^2$ has multiplicity two, two families are perturbatively forbidden, while one family is perturbatively realized, as one might hope given the hierarchy of the top-quark mass relative to the up and the charm. To generate the missing $H_uQ_Lu_R^2$ couplings in type IIa, an instanton ${\mathcal{E}}$ would have to exhibit intersection numbers $$I_{{\mathcal{E}}a}=0 \qquad I_{{\mathcal{E}}b}=0 \qquad I_{{\mathcal{E}}c} = 1 \qquad I_{{\mathcal{E}}d} = -1,$$ which gives rise to two charged modes, ${\overline}\lambda_c$ and $\lambda_d$. One can heuristically “see" that this cancels the excess global $U(1)$ charge in Figure \[fig:spacetimeanddisk\] by the fact that the spacetime picture is closed and the arrows point in a consistent direction.
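This charge bookkeeping is simple enough to automate; a small sketch (the field content is taken from the quiver above, while the helper function and string-based stack labels are illustrative conveniences of ours):

```python
STACKS = "abcd"   # U(3)_a x U(2)_b x U(1)_c x U(1)_d

def charge(*reps):
    """Global U(1)^4 charge of a bifundamental: 'a' denotes a fundamental of
    stack a (charge +1), '-a' an antifundamental (charge -1)."""
    q = [0, 0, 0, 0]
    for r in reps:
        q[STACKS.index(r.lstrip("-"))] += -1 if r.startswith("-") else 1
    return q

H_u, Q_L = charge("b", "c"), charge("a", "-b")
u_R1, u_R2 = charge("-a", "-c"), charge("-a", "-d")

coupling1 = [sum(t) for t in zip(H_u, Q_L, u_R1)]
coupling2 = [sum(t) for t in zip(H_u, Q_L, u_R2)]
print(coupling1)   # → [0, 0, 0, 0]   perturbatively allowed
print(coupling2)   # → [0, 0, 1, -1]  forbidden: the instanton must carry (0, 0, -1, 1),
# which is exactly the charge supplied by the two modes lambda-bar_c and lambda_d.
```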
The corresponding disk diagram, also drawn in the figure, contributes to the instanton action. If, in a global embedding which realizes this spectrum, a rigid O(1) instanton exists with this intersection pattern, then one can perform the instanton calculus with the mentioned disk contribution. The path integral takes the form $$\begin{aligned} \int d^4x\,d^2\theta\,d{\overline}\lambda_c\, d\lambda_d \,\,\,e^{-S_{cl}^{\mathcal{E}}+Y^J \, {\overline}\lambda_c \, H_uQ_Lu_R^{2,J} \, \lambda_d} &= \notag \\ e^{-S_{cl}^{\mathcal{E}}} \,\,\, \int d^4x\,d^2\theta\,d{\overline}\lambda_c\, d\lambda_d \,\,\, Y^J \, {\overline}\lambda_c \, H_uQ_Lu_R^{2,J} \, \lambda_d &= \notag \\ e^{-S_{cl}^{\mathcal{E}}} \,\,\, \int d^4x\,d^2\theta \,\,\, Y^J \, H_uQ_Lu_R^{2,J},\end{aligned}$$ where $J$ runs across the two family indices for $u_R^2$. In such a case the up-quark and charm-quark masses are suppressed by a factor of $e^{-S_{cl}^{\mathcal{E}}}$ relative to the top-quark mass. In principle $Y^J$ can give a hierarchy between the up-quark and charm-quark, since it depends on worldsheet instantons, but this generically depends heavily on the details of moduli stabilization. Braneworld Quivers: The Bottom-Up Approach \[sec:quivers\] ========================================================== To this point we have discussed how all of the basic ingredients of real-world particle physics can be realized in the context of weakly coupled type II orientifold compactifications. In particular, gauge symmetry lives on the worldvolume of spacetime filling D-branes with possible gauge groups $U(N)$, $Sp(2N)$, or $SO(2N)$. Chiral matter appears at the intersection of two stacks of D-branes, with the type and amount of chiral matter dictated by the topological intersection numbers of the D-branes. Finally, the presence or absence of superpotential couplings depends crucially on the charge of couplings under the $U(1)$ symmetries associated with the $U(N)$ branes.
D-braneworlds offer a beautiful geometric picture which suggests the possibility of arranging branes in such a way that something very similar to the MSSM is obtained. A top-down approach would first require specifying a Calabi-Yau manifold ${\mathcal{M}}$ together with a ${\mathbb{Z}}_2$ involution on the space, which would allow for the identification of O-planes, and would then proceed with an investigation of the types of arrangements of D-branes allowed by tadpole cancellation. Perhaps with the specification of further data (e.g. fluxes in type IIb, for chirality), the massless spectrum can be calculated and the global $U(1)$ charges of the matter can be determined, allowing for the determination of perturbative superpotential couplings. One can then perform a scan of possible instanton cycles to determine which might have the proper fermionic zero mode structure for superpotential contribution. Needless to say, this quickly becomes rather involved. Though a “top-down" model is necessary if string theory is to provide the correct description of particle physics in our world, this does not necessarily mean that the best way to identify promising models is by taking a top-down approach to each vacuum. Recently, a “bottom-up" approach [@Antoniadis:2000ena; @Aldazabal:2000sa; @Antoniadis:2001np] has emerged which suggests looking at certain subsets of data associated with a string vacuum, with the hope that one can say non-trivial things across broader patches of the landscape, despite the fact that certain details have been ignored. This approach is only good to the extent that the ignored details don’t destroy the physics determined by the subset of vacuum data of interest. We already saw an example of this approach in section \[sec:CFT\], when looking at the up-quark Yukawa couplings $H_u\,Q_L^1\, u_R$ and $H_u\, Q_L^2\, u_R$, where the fields are realized by three stacks of D-branes.
Notice that we specified neither a Calabi-Yau manifold nor a ${\mathbb{Z}}_2$ involution in this example, and yet were able to make statements about couplings based on assumptions about how chiral MSSM matter is realized at the intersection of various brane stacks. This information would be a subset of the information associated with the quiver realized in a type II orientifold compactification. In fact, much has been learned recently about particle physics in type II by studying quivers, which are a subset of the data defining a type II orientifold compactification. Generically, a quiver is made of nodes and edges between them, where the nodes represent gauge D-branes and an edge represents matter at the intersection of the corresponding D-branes. Thus, a quiver encodes the gauge symmetry and matter content in a type II orientifold compactification, including global $U(1)$ charges. We emphasize that these *are not* globally consistent string compactifications. They cannot be, as quivers are only a subset of the data associated with a string vacuum. However, a given quiver can be shown to be *compatible* with global consistency if it satisfies the necessary tadpole cancellation and massless hypercharge conditions, which do contain genuinely stringy constraints that are not already present in field theory.
An example of a consistent type II quiver which realizes the exact MSSM spectrum is given in Figure \[fig:quiver\], which corresponds to MSSM matter being realized as $$\begin{aligned} Q_L: \,\,\,2\times(a,{\overline}b), \,\,\,1\times(a,b) \qquad\,\,\, u_R:\,\,\,3\times({\overline}a, {\overline}c) \qquad\,\,\, d_R:\,\,\, 3\times {\raisebox{-3.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}\hskip-6.9pt \raisebox{3pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_a \notag \\ \notag \\ L: \,\,\,3\times(b,{\overline}c) \qquad\,\,\, E_R: \,\,\,3\times {\raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}\hskip-0.4pt \raisebox{-.5pt}{{\hbox{\rule{0.4pt}{6.5pt}\hskip-0.4pt\rule{6.5pt}{0.4pt}\hskip-6.5pt\rule[6.5pt]{6.5pt}{0.4pt}}\rule[6.5pt]{0.4pt}{0.4pt}\hskip-0.4pt\rule{0.4pt}{6.5pt}}}}_c \qquad\,\,\, H_u: \,\,\,1\times (b,c) \qquad\,\,\, H_d:\,\,\, 1\times({\overline}b, {\overline}c), \notag\end{aligned}$$ where the gauge symmetry is $U(3)_a\times U(2)_b\times U(1)_c$ and the massless hypercharge is realized as $U(1)_Y = \frac{1}{6} U(1)_a + \frac{1}{2} U(1)_c$. There are a few details required to understand the quiver diagram properly. First, since we are in the framework of type II orientifold compactifications, there is an image brane associated with each $U(N)$ brane, which, in this case, is all of the branes. Rather than doubling the number of nodes, we double the number of arrows on the edges, with an arrow coming out of a node representing a fundamental, and an arrow going into a node representing an antifundamental.
Therefore, there are four options for arrow orientation on each edge, representing all possible bifundamental representations between those two branes and their orientifold images. Second, edges from a node to itself implicitly correspond to a string stretching between a brane and its image, which yields a symmetric or antisymmetric representation. In the $U(N)$ case, in addition to the choice of symmetric or antisymmetric representation of $SU(N)$, there is a choice of $U(1)$ charge $\pm 2$, and so we label these edges with either an $A$ or an $S$ and either a $+$ or a $-$. This quiver contains enough information to say a great deal about the structure of the Yukawa couplings. Specifically, two of the families of up-quarks and all of the leptons have perturbatively realized Yukawa couplings, while one of the up-quark families and all of the down-quark families are forbidden in string perturbation theory. Note that the perturbatively forbidden up-flavor quark Yukawa coupling has global $U(1)$ charge $(0,2,0)$, as does the R-parity violating coupling $LLE_R$. Therefore, any D-instanton which generates the missing up-flavor Yukawa coupling will also generate an R-parity violating operator with the same instanton suppression, i.e. at a phenomenologically dangerous level. For this reason, and perhaps others, any type II orientifold compactification giving rise to this quiver is phenomenologically ruled out. Any given quiver can, in principle, be realized in many different global embeddings, allowing one to make statements across broader patches of the landscape. For example, in addition to the quiver presented in Figure \[fig:quiver\], there are 23 other three-stack quivers with $U(1)_Y = \frac{1}{6} U(1)_a + \frac{1}{2} U(1)_c$ and the exact MSSM spectrum that also satisfy the necessary tadpole cancellation and massless hypercharge constraints. There is one other linear combination for the hypercharge which might realize the MSSM spectrum with three stacks, which also gives rise to a total of 24 quivers with the exact MSSM spectrum that satisfy these constraints.
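The hypercharge assignments of this three-stack spectrum can be verified mechanically; a quick consistency sketch (the charge vectors are read off from the quiver spectrum above; note that the check only passes when the $\frac{1}{2}$ multiplies $U(1)_c$, i.e. the massless combination must be $U(1)_Y = \frac{1}{6}U(1)_a + \frac{1}{2}U(1)_c$ to reproduce all MSSM hypercharges):

```python
from fractions import Fraction as F

# Each field is a vector of (q_a, q_b, q_c) charges under U(1)_a x U(1)_b x U(1)_c;
# an (anti)symmetric of a stack carries U(1) charge +2 (or -2) under that stack.
Y = lambda q: F(1, 6) * q[0] + F(1, 2) * q[2]   # Y = (1/6) q_a + (1/2) q_c

fields = {                                   # charge vector, expected MSSM Y
    "Q_L (a,bbar)":    ([1, -1, 0],  F(1, 6)),
    "Q_L (a,b)":       ([1, 1, 0],   F(1, 6)),
    "u_R (abar,cbar)": ([-1, 0, -1], F(-2, 3)),
    "d_R Antisym_a":   ([2, 0, 0],   F(1, 3)),
    "L (b,cbar)":      ([0, 1, -1],  F(-1, 2)),
    "E_R Sym_c":       ([0, 0, 2],   F(1)),
    "H_u (b,c)":       ([0, 1, 1],   F(1, 2)),
    "H_d (bbar,cbar)": ([0, -1, -1], F(-1, 2)),
}
for name, (q, expected) in fields.items():
    assert Y(q) == expected, name
print("all MSSM hypercharges reproduced")
```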
One can therefore make the following statement: if the exact MSSM is to be realized in a type II orientifold compactification at the intersections of three stacks of D-branes with $U(3)\times U(2)\times U(1)$ gauge symmetry, that compactification’s corresponding quiver must be one of the 48 quivers just mentioned, about whose couplings we can say a great deal based on their global charges. Popular avenues for studying quivers include the systematic study of hypercharge embeddings and MSSM quivers [@Anastasopoulos:2006da], as well as the study of particular quivers at the level of couplings [@Ibanez:2008my]. Additionally, systematic work has been done along these lines at the level of couplings [@Cvetic:2009yh], and the mass hierarchical structure of MSSM quarks and leptons has been investigated in [@Anastasopoulos:2009mr; @Cvetic:2009ez]. The basic strategy in the systematic works was to study the phenomenology of MSSM quivers, possibly extended by three right-handed neutrinos or a singlet $S$ which can give rise to a dynamical $\mu$-term. All presented quivers satisfy the necessary constraints on the chiral spectrum for tadpole cancellation and a massless hypercharge, and thus it is not ruled out that these quivers can be embedded in a consistent type II orientifold compactification. Beyond these necessary constraints, the quivers also satisfy a host of phenomenological constraints. In particular, for a quiver to be semi-realistic, one has to require that D-instanton effects generate enough of the forbidden Yukawa couplings to ensure that there are no massless quark or lepton families. In doing so, however, a D-instanton which generates a Yukawa coupling might also generate a phenomenological drawback, such as an R-parity violating coupling, a dimension five proton decay operator, or a $\mu$-term which is far too large. Such a quiver would be ruled out.
Conclusion and Outlook ====================== In these lectures, we have presented the basic perturbative and non-perturbative physics of type II orientifold compactifications. This corner of the string landscape is particularly nice for understanding aspects of four-dimensional particle physics, as spacetime filling D-branes give rise to four-dimensional gauge symmetry and chiral matter can appear at their intersections. The importance of the non-perturbative D-instanton effects cannot be overstated in these compactifications, as in their absence a type II orientifold compactification often gives rise to massless families of quarks or leptons, due to global $U(1)$ selection rules that forbid their Yukawa couplings. Indeed, they are *necessary* for some aspects of particle physics, as, for example, the Majorana neutrino mass term and the $10\,10\, 5_H$ top-quark Yukawa coupling are *always* perturbatively forbidden in weakly coupled type II. In addition to generating phenomenologically desirable couplings, instanton effects must be taken into account because they could also generate couplings which spoil the physics. Furthermore, instantons generate the leading superpotential contributions for Kähler moduli in type IIb, and thus play an important role in their stabilization. Though the picture of particle physics in these models is beautiful, one still has to decide how to deal with the enormity of the landscape. It is a useful fact that coupling issues can be studied at the quiver level, which specifies how chiral matter transforms under the gauge groups of the D-branes, and thus the global $U(1)$ charges of matter, while postponing the issue of global embeddings to a later date. We believe this approach to be of great use in identifying promising quivers for global embeddings, as it is not worth trying to realize a global embedding for the sake of particle physics if it can already be seen at the quiver level that a model is ruled out phenomenologically.
In addition to phenomenological constraints on quivers, string consistency conditions on chiral matter greatly constrain the possibilities, so it is not true that “anything goes". From the point of view of globally consistent type II orientifold compactifications, both type IIa and type IIb have their advantages and disadvantages. On one hand, the appearance of chiral matter at the intersection of D6-branes in type IIa depends purely on geometry, giving an intuitive picture of particle physics. In addition, many useful CFT techniques have been developed for the type IIa string compactified on a toroidal orbifold. In type IIb, on the other hand, the appearance of chiral matter depends on the choice of worldvolume flux on the D7-branes, and therefore depends on more than geometry. The major advantage of type IIb, however, is that much more is known about moduli stabilization and that the full power of complex algebraic geometry can be utilized, since the 1/2-BPS gauge branes and euclidean D-instantons wrap holomorphic divisors rather than special Lagrangians. In addition, type IIb offers a description as the $g_s\to 0$ limit of F-theory, which has been of great interest in string phenomenology over the last few years[^16]. The interplay between type IIb D-braneworlds and F-theory compactifications runs deep, as one might expect. In particular, a major motivating factor for the study of the appearance of Georgi-Glashow GUTs in F-theory is the absence of the $10\, 10\, 5_H$ Yukawa coupling in type IIb string perturbation theory. Though this coupling can be generated by an instanton in type II [@Blumenhagen:2007zk; @Blumenhagen:2008zz], it is exponentially suppressed by the classical action of the instanton and therefore still has trouble explaining the top-quark hierarchy, unless the Kähler moduli are stabilized such that the Yukawa coupling is ${\mathcal{O}}(1)$.
In the F-theory lift of a type IIb GUT, the $10\, 10\, 5_H$ coupling occurs at a point of $D_6$ enhancement, which is the lift of where the orientifold, $U(5)$ brane, and $U(1)$ brane intersect in type IIb. These objects can also intersect at a point of $E_6$ enhancement in F-theory, which gives an unsuppressed (${\mathcal{O}}(1)$) contribution to $10\, 10\, 5_H$. Though this is one of the phenomenological advantages of F-theory GUTs over type II, instantons still play important roles in other aspects of the physics. This is still an active area of research, and in particular the microscopic description of charged modes in F-theory needs clarification. We would like to acknowledge Iñaki García-Etxebarria, Paul Langacker, Robert Richter and Timo Weigand for recent collaborations. We thank I.G.E. and R.R. in particular for discussions related to the content of the lectures. We thank the TASI organizers for providing a wonderful school, and the participants for lively discussions. This work was supported in part by the National Science Foundation under Grant No. NSF PHY05-51164, DOE under grant DE-FG05-95ER40893-A020, NSF RTG grant DMS-0636606, the Fay R. and Eugene L. Langberg Chair, and the Slovenian Research Agency (ARRS). [^1]: This is being as conservative as possible. In [@Kreuzer:2000xy], 473,800,776 four (complex) dimensional toric varieties were identified which have a Calabi-Yau threefold hypersurface by Batyrev’s construction. These threefolds have 30,108 distinct pairs of Hodge numbers, giving the lower bound on the number of topologically distinct threefolds. [^2]: In this framework, the first globally consistent models with chiral matter were presented in [@Angelantonj:2000hi; @Blumenhagen:2000fp; @Aldazabal:2000cn] and the first supersymmetric globally consistent models with chiral matter were presented in [@Cvetic:2001tj; @Cvetic:2001nr].
For comprehensive reviews and further information, see [@Blumenhagen:2005mu; @Marchesano:2007de; @Blumenhagen:2006ci] and references therein. [^3]: One can be more precise, of course, by expanding the DBI action to leading order in $\alpha '$ to obtain the standard gauge kinetic term with appropriate factors. Here, we are just interested in consequences of dimensional analysis. [^4]: We realize we are very brief, and refer the reader to [@Blumenhagen:2006ci] for more details. [^5]: The conformal field theory references should contain more details, for the reader unfamiliar with the doubling trick. [^6]: We recommend appendix A of [@Cvetic:2006iz] for all the details. [^7]: Here we depart from conventions elsewhere in the literature, which often use $\theta_1$, $\theta_2$, and $\theta_3$ for the angles in each of the three complex internal dimensions. We use the labeling $0,\dots,9$ for real dimensions and $0,\dots,4$ for complexified dimensions, thus $2,3,4$ for the angles between branes on the internal space. [^8]: With indices switched. That is, the sum is over $b$, rather than $a$. [^9]: It is important to note that, in addition to the necessary cancellation of homological Ramond-Ramond charge, one must also cancel K-theory charges, due to the fact that D-branes are classified by K-theory groups [@Witten:1998cd] rather than homology groups. We refer the interested reader to [@Blumenhagen:2005mu; @Marchesano:2007de; @Blumenhagen:2006ci] for more details. [^10]: For recent work on global toroidal orbifold models and other recent developments, see [@Forste:2010gw] and references therein. For systematic work on the landscape of string vacua for particular toroidal orbifolds, see [@Blumenhagen:2004xx]. [^11]: For a beautiful explanation of how these modes lift to modes of vertical M5 instantons in F-theory, see [@Blumenhagen:2010ja], and for a lift to F-theory of an instanton generating the $10\, 10\, 5_H$ see [@Cvetic:2010rq].
For a general review of ED3 zero modes from the point of view of sheaf cohomology, see section 2 of [@Cvetic:2010ky]. [^12]: These are special Lagrangians for D6-branes, holomorphic divisors for D7-branes. [^13]: Deformation modes, as well as other uncharged modes, are often said to be counted by $h^i(D,{\mathcal{O}}_D)$ in the literature, which may not be the most illuminating presentation. To help motivate this, note that in a Calabi-Yau manifold a holomorphic divisor has $K_D = N_{D|{\mathcal{M}}}$, so that by Serre duality we have $H^0(D,N_{D|{\mathcal{M}}})\cong H^2(D,K_D\otimes N_{D|{\mathcal{M}}}^*)\cong H^2(D,{\mathcal{O}}_D)$. Therefore, by some simple isomorphisms, the intuitive notion of deformation modes as normal bundle sections is recast as ${\mathcal{O}}_D$ sheaf cohomology. [^14]: In type IIb, the precise statement in cohomology is $h^0_+(D,{\mathcal{O}}_D)=1$ and all others zero. This necessary and sufficient constraint automatically satisfies the necessary constraint $\chi(D,{\mathcal{O}}_D)=1$, as it must. [^15]: In fact this model is a quiver, not a globally defined orientifold compactification. See section \[sec:quivers\] for more information. [^16]: For work on F-theory GUTs, see [@Donagi:2008ca; @Beasley:2008dc; @Beasley:2008kw]. For lectures and further information, see [@Weigand:2010wm] and references therein.
--- abstract: 'We show that generalized spherical harmonics are well suited for representing the position- and orientation-dependent molecular density in the resolution of molecular density functional theory. We consider the common system made of a rigid solute of arbitrary complexity immersed in a molecular solvent, both represented by molecules with interacting atomic sites and classical force fields. The molecular solvent density $\rho(\mathbf{r},\mathbf{\Omega})$ around the solute is a function of the position $\mathbf{r}\equiv(x,y,z)$ and of the three Euler angles $\mathbf{\Omega}\equiv(\theta,\phi,\psi)$ describing the solvent orientation. The standard density functional, equivalent to the HNC closure for the solute-solvent correlations in the liquid theory, is minimized with respect to $\rho(\mathbf{r},\mathbf{\Omega})$. The up-to-now very expensive angular convolution products are advantageously replaced by simple products between projections onto generalized spherical harmonics. The dramatic gain in speed of resolution enables one to explore in a systematic way molecular solutes of up to nanometric sizes in arbitrary solvents and to calculate their solvation free energy and associated microscopic solvent structure in at most a few minutes. We finally illustrate the formalism by tackling the solvation of molecules of various complexity in water.' author: - Lu Ding - Maximilien Levesque - Daniel Borgis - Luc Belloni bibliography: - 'main.bib' title: Efficient molecular density functional theory using generalized spherical harmonics expansions --- Introduction ============ The knowledge of the free energy of solvation or chemical potential of a molecular or macromolecular solute immersed in a molecular solvent like water is the starting point of many applications in different fields.
Unsurprisingly, besides experimental work, various numerical theories/simulations have been developed following different directions in order to predict such solvation free energies while minimizing the restitution time. The atomic/molecular level of description, where the particles are described by sites interacting via classical force fields (essentially Lennard-Jones and coulombic contributions), offers a good compromise between expensive *ab-initio* treatments (with an electronic, quantum-mechanical description) and crude continuous solvent models. The numerical difficulty originates from the large number of solvent molecules to take into account. How does one solve this statistical mechanical problem? The molecular dynamics or Monte Carlo simulations which explicitly consider up to millions of solvent molecules in a simulation cell seem to be methods of choice for an exact resolution but, in practice, are limited by prohibitive times to solution and associated large statistical uncertainties. Consequently, there is a clear demand for alternative theoretical routes. As usual, since the beginning of the liquid state theory field in the 1950s and 1960s, the approach based on the Ornstein-Zernike (OZ) equation, the integral equations (IE) or the classical density functional theory (DFT) formalism offers a good candidate for such calculations. [@hansen_theory_2013] The goal is to derive the density of the solvent as a function of its position and its orientation in the vicinity of the solute. For a polar solvent like water, the electrostatic couplings and resulting hydrogen-bonding correlations are highly anisotropic and the angular description requires a high level of sophistication.
We briefly mention here the Reference Interaction Site Model (RISM) approach, which ignores this full molecular analysis and replaces it by site-site correlations only; [@Chandler-RISM; @hirata-rossky81; @pettitt_integral_1982; @pettitt07; @pettitt08] the gain in simplicity and speed is obvious since the interacting particles are spherical; the price to pay is to deal with a phenomenological site-site OZ equation, correlation functions without a proper statistical mechanical foundation, and ad-hoc closures. RISM is well developed in its three-dimensional version [@hirata_molecular_2003; @kloss_treatment_2008; @maruyama_massively_2014; @sergiievskyi_multigrid_2011; @sergiievskyi_modelling_2012; @kast-IAS2015] and has provided valuable insight into a number of physical-chemistry problems [@imai_locating_2006; @yoshida_molecular_2009; @casanova_evaluation_2007; @kaminski_modeling_2010; @kloss_quantum_2008; @kast-IAS2015], including the prediction of solvation free energies [@palmer_accurate_2010; @sergiievskyi_3drism_2012; @truchon_cavity_2014; @sergiievskyi_solvation_2015; @palmer2016; @luchko2016; @tielker-et-al2016]. A RISM-based density functional theory has also been developed for similar applications [@liu_site_2013; @Wu_hydration14; @Wu_hydration15]. To bypass the limitations of the RISM approximation, and some of its pitfalls, we propose here to stay at the more ambitious and demanding, but otherwise more fundamental, full molecular level of description. When the solvent and solute particles keep a simple shape, say 3-site ${\rm H_{2}O}$ molecules around a spherical ion, it is natural to express the solvent density $\rho(\mathbf{r},\mathbf{\Omega}_{12})$ in terms of the solute-solvent separation $r$ and the five Euler angles describing the relative orientation. This radial description has been studied in great detail both in bulk solvents and in solutions.
Powerful formalisms which make use of expansions onto rotational invariants or generalized spherical harmonics have made it possible to solve the molecular Ornstein-Zernike (MOZ) equation and integral equations for various densities, temperatures, and compositions [@blum_invariant_1972; @blum_invariant_1972-1; @fries_solution_1985; @richardi_molecular_1999; @lombardero99; @puibasset_bridge_2012; @belloni14]. This approach breaks down when the molecular/macromolecular solute particle takes a complicated shape with many interacting sites. In such cases, it is desirable to consider the solvent density $\rho(\mathbf{r},\mathbf{\Omega})$ as a function of its 3D absolute position $\mathbf{r}\equiv(x,y,z)$ around the fixed solute and of its absolute orientation with respect to a laboratory frame, characterized by three Euler angles $\mathbf{\Omega}\equiv(\theta,\phi,\psi)$. Such an approach was developed recently in a DFT framework (named MDFT, for molecular density functional theory, in reference to MOZ) [@ramirez_density_2002; @gendre_classical_2009; @zhao_molecular_2011; @borgis_molecular_2012; @sergiievskyi_fast_2014]. In the current implementation, the formalism requires as input the full angular-dependent direct correlation function of the homogeneous solvent - a difficult problem in itself [@ramirez_direct_2005; @zhao_accurate_2013], especially to get it precisely at all wavelengths [@puibasset_bridge_2012; @belloni-to-come]. The computation of the excess free energy requires a double integration over orientations for each spatial grid point, which made it prohibitive to tackle large molecular systems. This computational limitation has been overcome in special cases, such as the point-charge models of water, for which the free-energy functional can be further approximated and expressed in terms of two fields simpler than the full orientational density, namely the density and polarization density fields.
[@levesque_scalar_2012; @jeanmairet_molecular_2013; @jeanmairet_molecular_2013-1; @jeanmairet_molecular_2015; @jeanmairet_molecular_2016] The objective of the present work is to develop a formalism and numerical algorithms efficient enough to unlock the resolution of 3D-DFT or OZ+IE theories in the general case. Section II recalls the 3D-DFT approach while Section III develops the formalism based on angular projections onto carefully chosen bases of spherical harmonics. A few example applications are shown in Section IV. 3D Molecular DFT and Ornstein-Zernike approach ============================================== The goal is to derive the local and orientational molecular solvent density $\rho(\mathbf{r},\mathbf{\Omega})$, where the vector $\mathbf{r}\equiv(x,y,z)$ defines the position of the rigid solvent molecule and $\mathbf{\Omega}\equiv(\theta,\phi,\psi)$ represents its orientation with respect to a fixed laboratory frame. The direction of the main axis of the molecule is characterized by the colatitude $\theta$ and longitude $\phi$, while $\psi$ is the angle of rotation around this axis. The choice of the solvent’s origin and main axis should take advantage of the molecular symmetry group. For instance, for the water molecule of point group C$_{2\textrm{v}}$, the origin is chosen at the oxygen site while the $z$ axis is its $C_{2}$ main symmetry axis and points from the oxygen to the mid-point between the hydrogens. It will be shown below that choosing high-symmetry axes implies notable simplifications. The starting point of liquid-state density functional theory (DFT) consists in writing a functional $F\left[\rho(\mathbf{r},\mathbf{\Omega})\right]$ of the molecular density to be minimized. It is defined as the difference between the grand potential of the solvated solute and the grand potential of the homogeneous solvent at density $\rho_{\textrm{bulk}}$. It is thus, by definition, the solvation free energy of the solute.
Without approximation for the moment, it may be split into ideal, external and excess contributions[@evans92; @evans_density_2009]: $$F=F_{{\rm ideal}}+F_{{\rm ext}}+F_{{\rm excess}}\label{eq:Fid+Fext+Fexc}$$ The ideal term, coming from the entropy of mixing of the solvent molecules, reads $$F_{{\rm ideal}}=k_{{\rm B}}T\iiint\mathrm{d}\mathbf{r}\iiint\mathrm{d}\mathbf{\Omega}\left[\rho(\mathbf{r},\mathbf{\Omega})\ln\dfrac{\rho(\mathbf{r},\mathbf{\Omega})}{\rho_{{\rm bulk}}}-\Delta\rho(\mathbf{r},\mathbf{\Omega})\right]\label{eq:1.1}$$ where $T$ is the temperature, $k_{\textrm{B}}$ is the Boltzmann constant, $k_{{\rm B}}T$ is the thermal energy, and $\Delta\rho(\mathbf{r},\mathbf{\Omega})\equiv\rho(\mathbf{r},\mathbf{\Omega})-\rho_{{\rm bulk}}$ with $\rho_{\textrm{bulk}}\equiv n_{\textrm{bulk}}/8\pi^{2}$, $n_{\textrm{bulk}}$ being the bulk number density. The external contribution comes from the interaction potential $V_{{\rm ext}}$ between the solute molecule and one solvent molecule: $$F_{{\rm ext}}=\iiint\mathrm{d}\mathbf{r}\iiint\mathrm{d}\mathbf{\Omega}\rho(\mathbf{r},\mathbf{\Omega})V_{{\rm ext}}(\mathbf{r},\mathbf{\Omega}).$$ In the usual case of spherically symmetric site-site interaction potentials, $V_{{\rm ext}}$ reads $$V_{{\rm ext}}(\mathbf{r},\mathbf{\Omega})=\sum_{i={\rm solvent\,site}}\sum_{j={\rm solute\,site}}v_{ij}\left(\left|\mathbf{r}+\mathbf{s}_{i}(\mathbf{\Omega})-\mathbf{r}_{j}\right|\right)\label{eq:1.3}$$ where $\mathbf{s}_{i}$ is the intramolecular vector joining the solvent origin to the site $i$. When the DFT is solved inside a cubic cell of edge $L$ with periodic boundary conditions, the contributions from the neighboring solute images must be added to Eq. \[eq:1.3\] in the obvious and usual way. The coulombic $1/r$ contribution to the external potential is derived by solving the Poisson equation inside the cell. Again, this imposed potential remains fixed in what follows.
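Equation \[eq:1.3\] is a plain double sum over sites. The following minimal sketch (not the actual MDFT code) evaluates it for one solvent position and orientation, assuming Lennard-Jones plus coulombic site-site potentials, hypothetical parameters, and Lorentz-Berthelot combination rules:

```python
import numpy as np

def v_lj_coulomb(r, eps, sigma, qi, qj, ke=332.06):
    # Site-site pair potential: Lennard-Jones + Coulomb.
    # Units assumed: kcal/mol, angstroms, elementary charges (ke ~ 332 kcal*A/mol/e^2).
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6) + ke * qi * qj / r

def v_ext(r_solvent, solvent_sites, solute_sites):
    """Eq. (1.3): sum of site-site pair potentials for one solvent molecule at
    position r_solvent, its orientation encoded as the rotated intramolecular
    site vectors s_i(Omega).  Each site is (vector, (eps, sigma, q))."""
    v = 0.0
    for s_i, (eps_i, sig_i, q_i) in solvent_sites:
        for r_j, (eps_j, sig_j, q_j) in solute_sites:
            d = np.linalg.norm(r_solvent + s_i - r_j)
            # Lorentz-Berthelot mixing rules (an assumption of this sketch)
            eps = np.sqrt(eps_i * eps_j)
            sig = 0.5 * (sig_i + sig_j)
            v += v_lj_coulomb(d, eps, sig, q_i, q_j)
    return v
```

In the real calculation this quantity is tabulated once on the full $(\mathbf{r},\mathbf{\Omega})$ grid, with periodic images and the Poisson-solved coulombic part handled as described above.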
The final, excess term involves the correlations between the solvent molecules perturbed by the neighboring solute. As usual in liquid-state theory, an approximation must be assumed for this contribution. The bare, well-developed and documented functional, the first term in an infinite Taylor expansion around the bulk liquid density, reads: $$\beta F_{{\rm excess}}=-\frac{1}{2}\iiint\mathrm{d}\mathbf{r}_{1}\iiint\mathrm{d}\mathbf{\Omega}_{1}\iiint\mathrm{d}\mathbf{r}_{2}\iiint\mathrm{d}\mathbf{\Omega}_{2}\Delta\rho(\mathbf{r}_{1},\mathbf{\Omega}_{1})c(\mathbf{r}_{12},\mathbf{\Omega}_{1},\mathbf{\Omega}_{2})\Delta\rho(\mathbf{r}_{2},\mathbf{\Omega}_{2})\label{eq:1.4}$$ where $c(\mathbf{r}_{12},\mathbf{\Omega}_{1},\mathbf{\Omega}_{2})$ is the bulk solvent-solvent molecular direct correlation function (DCF), which depends on the distance $r_{12}$ between the two solvent molecules and on the five Euler angles characterizing their relative orientation (it is invariant under translation and rotation of the ensemble $(\mathbf{r}_{12},\mathbf{\Omega}_{1},\mathbf{\Omega}_{2})$ with respect to the fixed frame). We remind the reader that even if three Euler angles are necessary to define the orientation of a single molecule, only five are needed to define *relative* orientations. The function $c$ of the bulk solvent at a given temperature and pressure is an input in the present approach and is provided by previous extensive Monte Carlo + IE bulk calculations [@puibasset_bridge_2012; @belloni14].
The formal functional minimization of the total functional \[eq:Fid+Fext+Fexc\] leads to: $$\rho(\mathbf{r},\mathbf{\Omega})=\rho_{{\rm bulk}}\exp\left[-\beta V_{{\rm ext}}(\mathbf{r},\mathbf{\Omega})+\gamma(\mathbf{r},\mathbf{\Omega})\right]\label{eq:1.5}$$ where $\beta=1/k_{{\rm B}}T$ and $\gamma(\mathbf{r},\mathbf{\Omega})$ represents the indirect (total minus direct) solute-solvent correlation function, which is related to the previous functions via the solute-solvent Ornstein-Zernike equation: $$\gamma(\mathbf{r}_{1},\mathbf{\Omega}_{1})=\iiint\mathrm{d}\mathbf{r}_{2}\iiint\mathrm{d}\mathbf{\Omega}_{2}c(\mathbf{r}_{12},\mathbf{\Omega}_{1},\mathbf{\Omega}_{2})\Delta\rho(\mathbf{r}_{2},\mathbf{\Omega}_{2})\label{eq:1.6}$$ The integral equation \[eq:1.5\] is nothing but the HNC approximation for the solute-solvent correlations, which ignores the so-called bridge function. The inclusion of more sophisticated excess functionals or bridge functions will be investigated in future work. The excess free energy functional \[eq:1.4\] may be written as: $$F_{{\rm excess}}=-\frac{1}{2}\iiint\mathrm{d}\mathbf{r}_{1}\iiint\mathrm{d}\mathbf{\Omega}_{1}\Delta\rho(\mathbf{r}_{1},\mathbf{\Omega}_{1})\gamma(\mathbf{r}_{1},\mathbf{\Omega}_{1}).$$ In practice, the numerical resolution consists in describing the cubic cell by a 3D grid of $N\times N\times N$ spatial positions (grid nodes) with mesh size $L/N$. $N$ is typically below 256 for computer memory reasons. Generalization to parallelepiped cells or to different directional mesh resolutions is straightforward. For each of the $N^{3}$ grid points, the orientation is characterized by different $\Omega$ triplets. For simplicity, we use decoupled values $\theta_{i}$, $\phi_{j}$, $\psi_{k}$ ($N_{\theta}$, $N_{\phi}$, and $N_{\psi}$ of each, respectively) chosen according to Gauss quadratures. In general, $0\le\theta_{i}<\pi$, $0\le\phi_{j}<2\pi$ and $0\le\psi_{k}<2\pi$. In the case of the ${\rm H_{2}O}$ molecule (of symmetry group $\mathrm{C}_{2v}$), $0\le\psi_{k}<\pi$ is sufficient.
Typical numbers are $5-10$ for each of the three angles. The resolution consists either in numerically minimizing the total DFT functional \[eq:Fid+Fext+Fexc\] with respect to the solvent density $\Delta\rho(\mathbf{r},\mathbf{\Omega})$ or, equivalently, in solving the integral equation \[eq:1.5\]. We choose the former route in the present study. The process is iterative. At convergence, $\Delta\rho(\mathbf{r},\mathbf{\Omega})$ gives the equilibrium solvent profiles around the solute and the value taken by $F$ provides the free energy of solvation. The most demanding and challenging part of the calculation is obviously the excess part \[eq:1.4\] or \[eq:1.6\], which requires a 6D spatial+angular convolution. The spatial one is naturally performed in Fourier space, where \[eq:1.6\] becomes: $$\hat{\gamma}(\mathbf{q},\mathbf{\Omega}_{1})=\iiint\mathrm{d}\mathbf{\Omega}_{2}\hat{c}(\mathbf{q},\mathbf{\Omega}_{1},\mathbf{\Omega}_{2})\Delta\hat{\rho}(\mathbf{q},\mathbf{\Omega}_{2}).\label{eq:1.8}$$ $\mathbf{q}\equiv(q_{x},q_{y},q_{z})$ is the wave vector in Fourier space; each component $q_{i}$ is discretized into $N$ values, multiples of $2\pi/L$. Hatted functions denote the 3D Fourier transformed functions, defined as $\hat{f}(\mathbf{q},\mathbf{\Omega})=\iiint f(\mathbf{r},\mathbf{\Omega})e^{i\mathbf{q}\cdot\mathbf{r}}\mathrm{d}\mathbf{r}$. They are complex quantities.
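The passage from the spatial convolution \[eq:1.6\] to the product \[eq:1.8\] is the standard FFT convolution theorem. A minimal sketch on a tiny periodic grid, with angle-free scalar stand-ins for the fields (toy random data, unit mesh volume):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                    # tiny N x N x N periodic grid
c_kernel = rng.standard_normal((N, N, N))   # stand-in for a (scalar) DCF kernel
drho = rng.standard_normal((N, N, N))       # stand-in for Delta rho

# Fourier route: gamma = IFFT( FFT(c) * FFT(Delta rho) )
gamma_fft = np.fft.ifftn(np.fft.fftn(c_kernel) * np.fft.fftn(drho)).real

# Direct route: periodic convolution gamma(r) = sum_s c(s) * Delta rho(r - s)
gamma_direct = np.zeros((N, N, N))
for dx in range(N):
    for dy in range(N):
        for dz in range(N):
            gamma_direct += c_kernel[dx, dy, dz] * np.roll(drho, (dx, dy, dz), axis=(0, 1, 2))
```

Both routes agree to machine precision; the FFT route is what makes the $\mathcal{O}(N\log N)$ scaling possible.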
Of course, we use state-of-the-art FFT libraries to compute the space convolution with a complexity in $\mathcal{O}\left(N\log N\right)$ instead of the $\mathcal{O}\left(N^{2}\right)$ of the naive implementation. The angular convolution which remains in the MOZ equation \[eq:1.8\], when implemented straightforwardly as in refs [@gendre_classical_2009; @zhao_molecular_2011], represents the main barrier to an efficient resolution: for each of the $N^{3}$ values of $\boldsymbol{q}$ and for each orientation triplet $\mathbf{\Omega}_{1}$, one must perform a 3D integral over the whole orientation triplet $\mathbf{\Omega}_{2}$ using angular quadratures! Indeed, even the naive implementation is not so straightforward in practice since, expressed in the laboratory frame, the 8-variable angular DCF $\hat{c}(\mathbf{q},\boldsymbol{\Omega}_{1},\boldsymbol{\Omega}_{2})$ is too large to be stored. This problem can be solved by storing the DCF in the so-called intermolecular frame, for which the $z$-axis is taken along the direction of $\mathbf{q}$, so that $\hat{c}(q,\boldsymbol{\Omega}_{1}^{'},\boldsymbol{\Omega}_{2}^{'})$ can be expressed as a function of only 6 variables when accounting for rotational invariance around $\mathbf{q}$. For each value of $\mathbf{q}$, one thus also needs to infer the correspondence between the orientations $\boldsymbol{\Omega}_{i}$ and $\boldsymbol{\Omega}_{i}^{'}(\mathbf{q},\boldsymbol{\Omega}_{i})$ in the fixed and molecular frames, respectively. This process, whatever the algorithm (storing or recomputing), further impairs the numerical efficiency. In the next Section, we show that the use of expansions onto bases of generalized spherical harmonics (i) advantageously replaces the angular convolution by simple products between projections, and (ii) reduces the memory footprint of the storage of the DCF.
Expansion onto generalized spherical harmonics ============================================== The angular dependence of the solvent density $\Delta\rho(\mathbf{r},\mathbf{\Omega})$ is expanded, for each point $\mathbf{r}$ of the 3D network, onto a basis of carefully chosen functions, the generalized spherical harmonics $R_{\mu'\mu}^{m}(\mathbf{\Omega})$, following Messiah’s and Blum’s notations [@messiah_tome2; @blum_invariant_1972]: $$\Delta\rho(\mathbf{r},\mathbf{\Omega})=\sum_{m=0}^{n_{\max}}\sum_{\mu'=-m}^{m}\sum_{\mu=-m}^{m}f_{m}\Delta\rho_{\mu'\mu}^{m}(\mathbf{r})R_{\mu'\mu}^{m}(\mathbf{\Omega}),\label{eq:1.9}$$ with $$R_{\mu'\mu}^{m}(\mathbf{\Omega})=r_{\mu'\mu}^{m}(\theta)e^{-i\mu'\phi-i\mu\psi},$$ where $r_{\mu'\mu}^{m}(\theta)$ is the generalized Legendre polynomial and $f_{m}=\sqrt{2m+1}$ is a normalization factor. Each coefficient in the sum \[eq:1.9\], a so-called projection, is obtained by angular integration of the original function (projection onto the corresponding basis vector): $$\Delta\rho_{\mu'\mu}^{m}(\mathbf{r})=f_{m}\iiint\Delta\rho(\mathbf{r},\mathbf{\Omega})R_{\mu'\mu}^{m*}(\mathbf{\Omega})\mathrm{d}\mathbf{\Omega}.\label{eq:1.11}$$ The expansion in \[eq:1.9\] is in principle infinite. In practice, it is truncated at $m\leq n_{\max}$, which defines the basis $\left\{ n_{\max}\right\}$ of angular functions. In order to be consistent with the prescription of the Gauss quadrature, the numbers of angles for $\theta$, $\phi$, and $\psi$ must satisfy $N_{\theta}=n_{\max}+1$, $N_{\phi}=2n_{\max}+1$ and $N_{\psi}=2\left(n_{\textrm{max}}/s\right)+1$, where $s$ is the order of the symmetry axis used as the main molecular axis for the solvent molecule ($s=2$ for $\mathrm{C}_{2v}$ molecules like water) and the division is an integer division.
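The projection rule \[eq:1.11\] together with the quadrature prescription can be checked on the smallest nontrivial basis, $n_{\max}=1$. The sketch below uses one common convention for the reduced matrices $r_{\mu'\mu}^{1}(\theta)$ (phases differ between references, but orthogonality does not depend on them): with $N_{\theta}=2$ Gauss-Legendre nodes in $\cos\theta$ and $N_{\phi}=N_{\psi}=3$ uniform nodes, the discrete angular integrals of all products of $m\le1$ harmonics are exact.

```python
import numpy as np

def d1(theta):
    """Reduced Wigner matrix d^1_{mu' mu}(theta), rows/columns ordered mu = +1, 0, -1.
    (One common convention; overall phases differ between references.)"""
    c, s = np.cos(theta), np.sin(theta)
    r2 = np.sqrt(2.0)
    return np.array([[(1 + c) / 2, -s / r2, (1 - c) / 2],
                     [s / r2,       c,      -s / r2    ],
                     [(1 - c) / 2,  s / r2, (1 + c) / 2]])

mus = np.array([1, 0, -1])

def R1(theta, phi, psi):
    # R^1_{mu' mu}(Omega) = d^1_{mu' mu}(theta) * exp(-i mu' phi - i mu psi)
    return d1(theta) * np.exp(-1j * (mus[:, None] * phi + mus[None, :] * psi))

# Quadrature grid consistent with n_max = 1 (generic solvent, s = 1):
xs, ws = np.polynomial.legendre.leggauss(2)   # nodes in x = cos(theta)
phis = 2 * np.pi * np.arange(3) / 3
psis = 2 * np.pi * np.arange(3) / 3

# Gram matrix of the nine R^1_{mu' mu}, normalized by the full solid angle 8 pi^2
G = np.zeros((9, 9), dtype=complex)
for x, w in zip(xs, ws):
    for phi in phis:
        for psi in psis:
            v = R1(np.arccos(x), phi, psi).reshape(9)
            G += w * (2 * np.pi / 3) ** 2 * np.outer(v, v.conj())
G /= 8 * np.pi ** 2
# G equals identity / (2m+1) = I/3: the discrete projections of Eq. (1.11)
# recover the expansion coefficients exactly at this truncation order.
```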
Since the input function $\Delta\rho(\mathbf{r},\mathbf{\Omega})$ is *real*-*valued*, a symmetry relation follows between the complex-valued projections $\Delta\rho_{\mu'\mu}^{m}(\mathbf{r})$: $$\Delta\rho_{\underline{\mu'}\underline{\mu}}^{m}(\mathbf{r})=\left(-1\right)^{\mu'+\mu}\Delta\rho_{\mu'\mu}^{m*}(\mathbf{r}),\label{eq:1.12}$$ where $\underline{\mu}\equiv-\mu$. As a consequence, it is sufficient to deal here with $\mu'\geq0$ (or $\mu\geq0$). For the ${\rm H_{2}O}$ solvent, $\mu$ is even and the total number of independent projections per spatial grid node is 4, 19, 40, 85, 140 for $n_{\max}=1$, 2, 3, 4 and 5, as shown in table \[tab:table\_norientations\_nprojections\]. The transformation from $\Delta\rho(\mathbf{r},\mathbf{\Omega})$ to $\Delta\rho_{\mu'\mu}^{m}(\mathbf{r})$ through Eqs. \[eq:1.9\] and \[eq:1.11\] is numerically performed using a fast 3-step algorithm [@lado_95] described in the appendix. Each $\mathbf{r}$-projection is then Fourier transformed by FFT $$\Delta\hat{\rho}_{\mu'\mu}^{m}(\mathbf{q})=\iiint\Delta\rho_{\mu'\mu}^{m}(\mathbf{r})e^{i\mathbf{q}\cdot\mathbf{r}}\mathrm{d}\mathbf{r}.$$ Of course, since the angle $\Omega$ is defined with respect to a fixed frame, independent of $\mathbf{r}$, this means that $$\Delta\hat{\rho}(\mathbf{q},\mathbf{\Omega})=\sum_{m=0}^{n_{\max}}\sum_{\mu'=-m}^{m}\sum_{\mu=-m}^{m}f_{m}\Delta\hat{\rho}_{\mu'\mu}^{m}(\mathbf{q})R_{\mu'\mu}^{m}(\mathbf{\Omega}),\label{eq:1.13-1}$$ and the symmetry relation \[eq:1.12\] becomes: $$\Delta\hat{\rho}_{\underline{\mu'}\underline{\mu}}^{m}(\mathbf{q})=\left(-1\right)^{\mu'+\mu}\Delta\hat{\rho}_{\mu'\mu}^{m*}(-\mathbf{q}),\label{eq:1.14}$$ which halves the number of $\mathbf{q}$ values to consider.
In the same way, the bulk function can be decomposed into: $$\hat{c}(\mathbf{q},\mathbf{\Omega}_{1},\mathbf{\Omega}_{2})=\sum_{mnl\mu\nu}\hat{c}_{\mu\nu}^{mnl}(q)\Phi_{\mu\nu}^{mnl}(\hat{\mathbf{q}},\mathbf{\Omega}_{1},\mathbf{\Omega}_{2}),$$ where the coefficients $\hat{c}_{\mu\nu}^{mnl}\left(q\right)$ depend here on the norm $q$ only and the rotational invariants are defined so as to be invariant under a global rotation of the ensemble: $$\Phi_{\mu\nu}^{mnl}(\hat{\mathbf{q}},\mathbf{\Omega}_{1},\mathbf{\Omega}_{2})=f_{m}f_{n}\sum_{\mu'\nu'\lambda'}\left(\begin{array}{ccc} m & n & l\\ \mu' & \nu' & \lambda' \end{array}\right)R_{\mu'\mu}^{m}(\mathbf{\Omega}_{1})R_{\nu'\nu}^{n}(\mathbf{\Omega}_{2})R_{\lambda'0}^{l}(\hat{\mathbf{q}}).\label{eq:1.16}$$ The coefficients $\left(\begin{array}{ccc} m & n & l\\ \mu' & \nu' & \lambda' \end{array}\right)$ are the usual 3-j symbols. The complex projections $\hat{c}_{\mu\nu}^{mnl}$ satisfy symmetry relations because $c$ is a real-valued function and the solvent molecules 1, 2 are identical: $$\hat{c}_{\underline{\mu}\underline{\nu}}^{mnl}=(-1)^{m+n+\mu+\nu}\hat{c}_{\mu\nu}^{mnl*}$$ $$\hat{c}_{\nu\mu}^{nml}=(-1)^{m+n}\hat{c}_{\mu\nu}^{mnl}$$ In the case of ${\rm H_{2}O}$ symmetry, $\mu$ and $\nu$ are even and $\hat{c}_{\mu\nu}^{mnl}$ is real-valued if $l$ is even and purely imaginary if $l$ is odd. Consequently: $$\hat{c}_{\underline{\mu}\underline{\nu}}^{mnl}=(-1)^{m+n}\hat{c}_{\mu\nu}^{mnl*}=(-1)^{m+n+l}\hat{c}_{\mu\nu}^{mnl}.$$ In that case, the number of independent real coefficients is 4, 27, 79, 250, 549 for $n_{\max}=1,2,3,4,5$, respectively. What is the benefit of all these projections? The angular integral over $\boldsymbol{\Omega}_{2}$ in equation \[eq:1.8\] now involves only two spherical harmonics $R(\boldsymbol{\Omega}_{2})$, from equations \[eq:1.13-1\] and \[eq:1.16\]: it can now be performed analytically!
The calculation is again simplified and accelerated by switching to the local, molecular frame linked to $\hat{\mathbf{q}}$, taken as the principal axis. The orientation of the solvent molecule in this frame is denoted $\mathbf{\Omega'}$. The composition relations between spherical harmonics under the transformation (rotation) from the fixed to the local frame are simple matrix products, one for each index $m$: $$\mathbf{R}^{m}(\mathbf{\Omega})=\mathbf{R}^{m}(\hat{\mathbf{q}})\mathbf{R}^{m}(\mathbf{\Omega'})\label{eq:22}$$ $$R_{\mu'\mu}^{m}(\mathbf{\Omega})=\sum_{\chi}R_{\mu'\chi}^{m}(\hat{\mathbf{q}})R_{\chi\mu}^{m}(\mathbf{\Omega'})$$ In the local frame, the expansion analogous to equation \[eq:1.13-1\] becomes: $$\Delta\hat{\rho}(\mathbf{q},\mathbf{\Omega'})=\sum_{m\mu\chi}f_{m}\Delta\hat{\rho}_{\mu;\chi}^{m}(\mathbf{q})R_{\chi\mu}^{m}(\mathbf{\Omega'}).\label{eq:1.20}$$ The new coefficients (note the new lower-index notation, consistent with Blum’s) are deduced from the previous ones by a transformation analogous to the so-called $\chi$-transform of Blum[@blum_invariant_1972-1; @blum_invariant_1972]: $$\Delta\hat{\rho}_{\mu;\chi}^{m}(\mathbf{q})=\sum_{\mu'}\Delta\hat{\rho}_{\mu'\mu}^{m}(\mathbf{q})R_{\mu'\chi}^{m}(\hat{\mathbf{q}}).\label{eq:1.21}$$ For each discrete value of $\mathbf{q}$, the ensemble of $R_{\mu'\chi}^{m}(\hat{\mathbf{q}})$ projections is calculated using fast recurrence relations depending only on the Cartesian coordinates of $\mathbf{q}$[@choi_rotmat_1999].
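The $\chi$-transform \[eq:1.21\] is one unitary matrix product per $m$, so its inverse is just the conjugate product. A sketch for $m=1$ (again with one possible phase convention for the reduced matrices, which the unitarity check does not depend on), verifying the round trip and the fact that the arbitrary rotation angle around $\hat{\mathbf{q}}$ cancels:

```python
import numpy as np

def d1(theta):
    # Reduced Wigner matrix d^1(theta), rows/columns ordered mu = +1, 0, -1
    # (one common convention; phases differ between references).
    c, s = np.cos(theta), np.sin(theta)
    r2 = np.sqrt(2.0)
    return np.array([[(1 + c) / 2, -s / r2, (1 - c) / 2],
                     [s / r2,       c,      -s / r2    ],
                     [(1 - c) / 2,  s / r2, (1 + c) / 2]])

mus = np.array([1, 0, -1])

def R1(theta, phi, psi):
    return d1(theta) * np.exp(-1j * (mus[:, None] * phi + mus[None, :] * psi))

rng = np.random.default_rng(2)
theta_q, phi_q = rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi)

# A random projection vector rho^1_{mu'} (at fixed second index mu)
rho = rng.standard_normal(3) + 1j * rng.standard_normal(3)

def roundtrip(psi):
    """chi-transform (Eq. 1.21) then its inverse (Eq. 1.29 below),
    with psi the arbitrary third Euler angle around q-hat."""
    R = R1(theta_q, phi_q, psi)
    rho_chi = R.T @ rho          # sum over mu' of rho_{mu'} R_{mu' chi}
    return R.conj() @ rho_chi    # sum over chi of rho_chi R*_{mu' chi}
```

Since $\mathbf{R}^{m}(\hat{\mathbf{q}})$ is unitary, the round trip restores the laboratory-frame projections exactly, for any choice of the third angle.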
Moreover, $\mathbf{q}$ and $-\mathbf{q}$ require a single treatment since $$R_{\mu'\chi}^{m}(-\hat{\mathbf{q}})=\left(-1\right)^{m}R_{\mu'\underline{\chi}}^{m}(\hat{\mathbf{q}})=\left(-1\right)^{m+\mu'+\chi}R_{\underline{\mu'}\chi}^{m}(\hat{\mathbf{q}})$$ In the $\chi$ notation, the general symmetry relation of equation \[eq:1.14\] becomes: $$\Delta\hat{\rho}_{\underline{\mu'};\chi}^{m}(\mathbf{q})=\left(-1\right)^{m+\mu'+\chi}\Delta\hat{\rho}_{\mu';\chi}^{m*}(-\mathbf{q})$$ In the same way, the bulk $\hat{c}$ function reads in this new frame[@blum_invariant_1972]: $$\hat{c}(q,\mathbf{\Omega'}_{1},\mathbf{\Omega'}_{2})=\sum_{mn\mu\nu\chi}f_{m}f_{n}\hat{c}_{\mu\nu;\chi}^{mn}(q)R_{\chi\mu}^{m}(\mathbf{\Omega'}_{1})R_{\underline{\chi}\nu}^{n}(\mathbf{\Omega'}_{2})\label{eq:1.24}$$ where the new coefficients are deduced from the old ones through Blum’s $\chi$-transform: $$\hat{c}_{\mu\nu;\chi}^{mn}(q)=\sum_{l}\left(\begin{array}{ccc} m & n & l\\ \chi & \underline{\chi} & 0 \end{array}\right)\hat{c}_{\mu\nu}^{mnl}(q)$$ Some symmetry relations apply, even for molecules without any symmetry: $$\hat{c}_{\underline{\mu}\underline{\nu};\chi}^{mn}=\left(-1\right)^{m+n+\mu+\nu}\hat{c}_{\mu\nu;\chi}^{mn*},$$ and $$\hat{c}_{\nu\mu;\chi}^{nm}=\left(-1\right)^{m+n}\hat{c}_{\mu\nu;\chi}^{mn}.$$ In the specific case of water, $$\hat{c}_{\mu\nu;\underline{\chi}}^{mn}=\hat{c}_{\underline{\mu}\underline{\nu};\chi}^{mn}=\left(-1\right)^{m+n}\hat{c}_{\mu\nu;\chi}^{mn*}.$$ Finally, the insertion of expansions \[eq:1.20\] and \[eq:1.24\] into the OZ convolution product \[eq:1.8\] (formally valid for any reference frame, so in particular for the local one) followed by an analytical integration over $\mathbf{\Omega'}_{2}$ (thanks to the orthogonality of the spherical harmonics) leads to a very simple OZ relation between the $\hat{c}$, $\Delta\hat{\rho}$ and $\hat{\gamma}$ $\chi$-projections:
$$\hat{\gamma}_{\mu;\chi}^{m}(\mathbf{q})=\sum_{n\nu}\left(-1\right)^{\chi+\nu}\hat{c}_{\mu\nu;\chi}^{mn}(q)\Delta\hat{\rho}_{\underline{\nu};\chi}^{n}(\mathbf{q})\label{eq:1.28}$$ This OZ relation constitutes the main result of the present formalism and manuscript. It replaces the expensive angular convolution product \[eq:1.8\] by simple algebraic products between projections in the local frame! This can be seen as the angular analogue of the replacement of the spatial convolution in equation \[eq:1.6\] by a direct product in Fourier space \[eq:1.8\]. It is important to note that different $\chi$ values do not mix in \[eq:1.28\]; there is one simple matrix multiplication for each $\chi$ value. Once the $\hat{\gamma}_{\chi}$ projections have been derived from the OZ equation, the return to the laboratory frame follows the relation inverse to equation \[eq:1.20\]: $$\hat{\gamma}_{\mu'\mu}^{m}(\mathbf{q})=\sum_{\chi}\hat{\gamma}_{\mu;\chi}^{m}(\mathbf{q})R_{\mu'\chi}^{m*}(\hat{\mathbf{q}})\label{eq:1.29}$$ Note that the transformations between fixed and local frames in \[eq:1.21\] and \[eq:1.29\] involve the spherical harmonics $R(\hat{\mathbf{q}})$, where $(\hat{\mathbf{q}})$ is understood as the rotation which goes from the fixed to the local frame. The latter is not uniquely defined because there is freedom in the choice of the rotation angle around $\hat{\mathbf{q}}$. Fortunately, one verifies in the previous analysis that this angle is completely irrelevant in the final result: indeed, the ensemble \[eq:1.21\], \[eq:1.28\], \[eq:1.29\] involves products of the form $R_{?\chi}^{?}(\hat{\mathbf{q}})R_{?\chi}^{?*}(\hat{\mathbf{q}})$ which are indeed independent of it. Finally, we apply an inverse FFT to $\hat{\gamma}_{\mu'\mu}^{m}(\mathbf{q})$, then gather all projections of $\gamma_{\mu'\mu}^{m}(\mathbf{r})$ as in equation \[eq:1.9\].
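Once the $\chi$-projections are available, Eq. \[eq:1.28\] is pure index bookkeeping. A toy sketch with random complex arrays (nothing physical: $n_{\max}=1$, a single $(\mathbf{q},\chi)$ value, entries with $|\mu|>m$ or $|\nu|>n$ zeroed out), checking a vectorized product against the written-out sums:

```python
import numpy as np

nmax = 1
dim = 2 * nmax + 1            # mu index runs over -nmax..nmax, offset by +nmax
rng = np.random.default_rng(3)

# Random stand-ins for the chi-projections at one (q, chi) value:
c = rng.standard_normal((nmax + 1, nmax + 1, dim, dim)) \
    + 1j * rng.standard_normal((nmax + 1, nmax + 1, dim, dim))
rho = rng.standard_normal((nmax + 1, dim)) + 1j * rng.standard_normal((nmax + 1, dim))
for m in range(nmax + 1):          # zero the non-existent entries |mu| > m
    for mu in range(-nmax, nmax + 1):
        if abs(mu) > m:
            rho[m, mu + nmax] = 0.0
            c[m, :, mu + nmax, :] = 0.0
            c[:, m, :, mu + nmax] = 0.0
chi = 1

def oz_loops(c, rho, chi):
    # Eq. (1.28) written out:
    # gamma^m_{mu;chi} = sum_{n,nu} (-1)^(chi+nu) c^{mn}_{mu nu;chi} rho^n_{-nu;chi}
    nm = c.shape[0] - 1
    gamma = np.zeros((nm + 1, 2 * nm + 1), dtype=complex)
    for m in range(nm + 1):
        for mu in range(-m, m + 1):
            for n in range(nm + 1):
                for nu in range(-n, n + 1):
                    gamma[m, mu + nm] += (-1) ** (chi + nu) * \
                        c[m, n, mu + nm, nu + nm] * rho[n, -nu + nm]
    return gamma

# Same thing as one dense product: flip nu -> -nu and absorb the sign factor.
sign = (-1.0) ** (chi + np.arange(-nmax, nmax + 1))
gamma_fast = np.einsum('mnuv,nv->mu', c, sign * rho[:, ::-1])
```

One such small product per $(\mathbf{q},\chi)$ pair replaces the full angular quadrature over $\mathbf{\Omega}_{2}$.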
We end up with the desired indirect correlation function $\gamma(\mathbf{r},\mathbf{\Omega})$: $$\gamma(\mathbf{r},\mathbf{\Omega})=\sum_{m\mu'\mu}f_{m}\gamma_{\mu'\mu}^{m}(\mathbf{r})R_{\mu'\mu}^{m}(\mathbf{\Omega})\label{eq:1.30}$$ The very expensive original OZ equation \[eq:1.6\] has thus been replaced by the series of cheap steps \[eq:1.11\], \[eq:1.13-1\], \[eq:1.21\], \[eq:1.28\], \[eq:1.29\], \[eq:1.30\]: $$\Delta\rho(\mathbf{r},\mathbf{\Omega})\rightarrow\Delta\rho_{\mu'\mu}^{m}(\mathbf{r})\rightarrow\Delta\hat{\rho}_{\mu'\mu}^{m}(\mathbf{q})\rightarrow\Delta\hat{\rho}_{\mu;\chi}^{m}(\mathbf{q})\rightarrow\hat{\gamma}_{\mu;\chi}^{m}(\mathbf{q})\rightarrow\hat{\gamma}_{\mu'\mu}^{m}(\mathbf{q})\rightarrow\gamma_{\mu'\mu}^{m}(\mathbf{r})\rightarrow\gamma(\mathbf{r},\mathbf{\Omega}).$$ Implementation and examples =========================== We apply the present DFT approach in the case of the SPC/E model of water. The ${\rm H_{2}O}$ solvent molecule is characterized by one LJ site localized at the O site and three partial charges at the O, H, H sites. For this model, the DCF projections $\hat{c}_{\mu\nu;\chi}^{mn}(q)$ at different orders of accuracy $n_{max}$ have been previously obtained by combining Monte Carlo simulation data at short distances and the HNC closure at long distances and solving the resulting 1D MOZ+mixed IE. The temperature is 298.15 K and the bulk density is 997 g/L [@puibasset_bridge_2012; @belloni-to-come]. As mentioned above, the functional of equation \[eq:Fid+Fext+Fexc\] is minimized with respect to $\rho(\mathbf{r},\boldsymbol{\Omega})$ using the quasi-Newton minimizer L-BFGS[@BFGS]. The density is usually initialized at $\rho_{\textrm{bulk}}\exp\left(-\beta V_{ext}\left(\mathbf{r},\boldsymbol{\Omega}\right)\right)$.
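The minimization loop can be caricatured in zero dimensions: one scalar density, a constant external potential and a constant direct correlation (all toy numbers, purely for illustration). The sketch below feeds the functional value and its analytical gradient to SciPy's L-BFGS-B implementation and recovers the HNC stationarity condition at the minimum:

```python
import numpy as np
from scipy.optimize import minimize

rho_bulk, beta_v, c = 1.0, 0.5, 0.2   # toy parameters, nothing physical

def f_and_grad(x):
    # Zero-dimensional analogue of F = F_ideal + F_ext + F_excess
    rho = x[0]
    drho = rho - rho_bulk
    f = rho * np.log(rho / rho_bulk) - drho + beta_v * rho - 0.5 * c * drho ** 2
    g = np.log(rho / rho_bulk) + beta_v - c * drho   # analogue of the gradient
    return f, np.array([g])

res = minimize(f_and_grad, x0=[rho_bulk * np.exp(-beta_v)], jac=True,
               method='L-BFGS-B', bounds=[(1e-10, None)])
rho_star = res.x[0]
# At the minimum, the toy HNC relation rho = rho_bulk * exp(-beta V + c * drho) holds.
```

In the actual code the unknown is the full field $\Delta\rho(\mathbf{r},\boldsymbol{\Omega})$ and the expensive part of each gradient evaluation is the computation of $\gamma$ described above.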
L-BFGS requires at each minimization step the value of the functional and its gradient $$\beta\frac{\delta F}{\delta\rho(\mathbf{r},\boldsymbol{\Omega})}=\log\left(\frac{\rho(\mathbf{r},\boldsymbol{\Omega})}{\rho_{bulk}}\right)+\beta V_{ext}(\mathbf{r},\boldsymbol{\Omega})-\gamma(\mathbf{r},\boldsymbol{\Omega}).\label{eq:gradient}$$ A typical minimization process using the new method described above for computing $\gamma(\mathbf{r},\boldsymbol{\Omega})$ is illustrated in Fig. \[fig:convergence\] for the CO$_{2}$ molecule in water; it is seen that in this case a relative error of $10^{-4}$ is reached after only $\approx20$ cycles. In our experience, convergence is reached before $\approx35$ steps or never. ![Convergence of the free energy estimation during the minimization process for a CO$_{2}$ molecule in water with a box size $L=24$ Å, $N=72$ and $n_{max}=3$. The inset shows the evolution of the relative difference in the free energy functional between successive steps. \[fig:convergence\]](convergence_CO2_mmax3){width="8.5cm"} We begin by comparing the numerical efficiency of the new method to that of the original, direct method described by equation \[eq:1.6\]. The old, direct method requires pre-computing and storing the DCF in the local frame, $\hat{c}(q,\boldsymbol{\Omega}'_{1},\boldsymbol{\Omega}'_{2})$, using eq. \[eq:1.24\], and then involves three successive steps, namely (i) to fast Fourier transform $\Delta\rho(\mathbf{r},\boldsymbol{\Omega})$ to $\Delta\rho(\mathbf{q},\boldsymbol{\Omega})$, (ii) to compute $\gamma(\mathbf{q},\boldsymbol{\Omega})$ using the MOZ equation \[eq:1.8\] in $q$ space with the stored DCF, and finally (iii) to inverse Fourier transform to $\gamma(\mathbf{r},\boldsymbol{\Omega})$. On the other hand, the new method involves the succession of seven steps described in the previous section. Note that steps 1-2 and 6-7, i.e.
angular transforms and spatial transforms, could be interchanged. However, it is more efficient to perform the angular transforms first since, for a given $n_{\textrm{max}}$, there are fewer projections than angles (see table \[tab:table\_norientations\_nprojections\]), and thus fewer functions to Fourier transform.

  -------------------- --------------------------- ----------------------------------------- -------------------------------------------------- --------------------------- ----------------------------------------- --------------------------------------------------
  $n_{\textrm{max}}$   $N_{\boldsymbol{\Omega}}$   $N_{\text{complex-valued projections}}$   $N_{\text{independent real-valued projections}}$   $N_{\boldsymbol{\Omega}}$   $N_{\text{complex-valued projections}}$   $N_{\text{independent real-valued projections}}$
  1                    18                          10                                        7                                                  6                           4                                         4
  2                    75                          35                                        22                                                 45                          19                                        14
  3                    196                         84                                        50                                                 84                          40                                        28
  4                    405                         165                                       95                                                 225                         85                                        55
  5                    726                         286                                       161                                                330                         140                                       88
  -------------------- --------------------------- ----------------------------------------- -------------------------------------------------- --------------------------- ----------------------------------------- --------------------------------------------------

  : Correspondence between the number of orientations, $N_{\Omega}=N_{\theta}\times N_{\phi}\times N_{\psi}$, and the number of projections, for generic solvent molecules (left group) and for water (right group). The number of independent real projections uses the symmetry rule of equation \[eq:1.12\]. For $\mathrm{C}_{2v}$ molecules, we add the constraints that $\psi$ lies in $[0,\pi[$ rather than $[0,2\pi[$ and that $\mu$ is even.\[tab:table\_norientations\_nprojections\]

We show in Fig. \[fig:CPU\_vs\_N\] that the CPU time required to compute $F_{exc}$ and $\gamma(\mathbf{q},\boldsymbol{\Omega})$ is indeed much lower with the new algorithm and that it scales linearly with the chosen number of orientations per grid point, $N_{\Omega}$, whereas it is quadratic with the direct algorithm.
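The orientation and complex-projection counts of table \[tab:table\_norientations\_nprojections\] follow from simple combinatorics: $N_{\theta}N_{\phi}N_{\psi}$ orientations, and projections with $m\le n_{\max}$, $|\mu'|\le m$, $|\mu|\le m$, with only even $\mu$ surviving for water. A short sketch reproducing those two columns (the independent-real-projection column further uses the symmetry rule \[eq:1.12\] and is not recomputed here):

```python
def counts(nmax, s=1):
    """Number of orientations and of complex-valued projections for a solvent
    whose main axis has symmetry order s (s=1 generic, s=2 for C2v water)."""
    n_theta, n_phi, n_psi = nmax + 1, 2 * nmax + 1, 2 * (nmax // s) + 1
    n_orient = n_theta * n_phi * n_psi
    # Projections: m <= nmax, (2m+1) values of mu', and the allowed values of mu
    # (mu multiple of s, i.e. all mu for s=1, even mu only for s=2).
    n_proj = sum((2 * m + 1) * len([mu for mu in range(-m, m + 1) if mu % s == 0])
                 for m in range(nmax + 1))
    return n_orient, n_proj

for n in range(1, 6):
    print(n, counts(n), counts(n, s=2))   # e.g. 3 (196, 84) (84, 40)
```

The integer division $n_{\max}//s$ is the same integer division as in the quadrature prescription of Section III.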
The numerical gain is a factor of 200 for $n_{max}=3$ (84 orientations) and 750 for $n_{max}=5$ (330 orientations). Unsurprisingly, the dependence on the number of spatial grid points $N$ is clearly cubic, as shown in the bottom panel of Fig. \[fig:CPU\_vs\_N\]. The quoted CPU times refer to calculations on a single thread of an Intel Sandy Bridge processor at 2 GHz. No parallelism of any kind is used here. The important information to take from that last figure is that even in that single-thread case, and for $n_{max}=3$, the calculation of $\gamma(\mathbf{q},\boldsymbol{\Omega})$ takes a few seconds for a grid of size $72^{3}$, and above a minute for $200^{3}$. ![(Top) CPU time for the computation of $F_{exc}$ within a single minimization step using the direct algorithm (blue diamonds) and the new one (black circles) with $N=72$, as a function of the number of discrete orientations per spatial grid point, $N_{\Omega}$. The displayed numbers $N_{\Omega}=6$, 45, 84, 225, 330 correspond to $n_{max}=1$, 2, 3, 4, 5, respectively. The red and blue lines represent the best fit to linear behavior, $T=aN_{\Omega}$, and quadratic behavior, $T=bN_{\Omega}^{2}$, respectively. (Bottom) Same quantity as a function of the number of spatial grid points $N$ using the new algorithm with $n_{max}=3$. The red line represents the best fit to cubic behavior $T=cN^{3}$.\[fig:CPU\_vs\_N\]](timing_vs_nb-angles_and_N){width="8.5cm"} In Fig. \[fig:CPU-decomposition\] we show the decomposition of the CPU time along the different steps of the algorithm for different $n_{\textrm{max}}$: although the Fast Generalized Spherical Harmonics Transform (FGSHT) is the most time-consuming, the different steps are rather balanced. None of them is a bottleneck.
![Decomposition of the CPU time of the different steps involved in the calculation of $\gamma(\mathbf{r},\boldsymbol{\Omega})$ from $\Delta\rho(\mathbf{r},\boldsymbol{\Omega})$ at different angular resolutions $n_{\textrm{max}}$: fast generalized spherical harmonics transforms (red, steps $1+7$); 3D-FFT (blue, steps $2+6$); rotations between laboratory and local frames (green, steps $3+5$); and resolution of the molecular Ornstein-Zernike equation in the local frame (purple, step $4$).\[fig:CPU-decomposition\]](branch_perf){width="8.5cm"} We show furthermore in Fig. \[fig:global\_perf\] that, despite the complexity of the computation of $F_{exc}$ compared to the straightforward calculation of the local quantities $F_{ext}$ and $F_{id}$, the computational overhead for $F_{exc}$ amounts to only a factor of 2 with respect to $F_{id}$ and a factor of 8 with respect to $F_{ext}$. All in all, with a sufficient grid resolution of 3 points per angstrom and an angular resolution of $n_{\textrm{max}}=3$ (see below), this makes it possible to handle, even on a single core, the solvation of small molecules (typically $L=25$ Å, $N\sim75$) within a minute, and that of much larger molecules (e.g., $L=60$ Å, $N\sim180$) in tens of minutes. The latter calculations were simply out of reach with the direct algorithm. ![CPU time for the computation of the different components $F_{\textrm{id}}$, $F_{\textrm{ext}}$, $F_{\textrm{exc}}$ of the solvation free energy for a cubic grid of size $72^{3}$ and $n_{\textrm{max}}=3$.\[fig:global\_perf\]](global_perf_2){width="8.5cm"} In Fig. \[fig:sfe-pyrimidine\], we examine the precision of the method for solvation free energies, taking as an example a small organic molecule, pyrimidine, dissolved in water. The three-dimensional solvent structure resulting from the functional minimisation is shown at the top of the figure.
For this neutral molecule, as for many others, we observe that in order to converge the solvation free energy, MDFT requires a grid resolution of 3 points per angstrom, a box length of $28$ Å (i.e., roughly a dozen angstroms of "solvent buffer" from the molecule to the box edge in every direction), and an angular resolution corresponding to $n_{\textrm{max}}=3$ (84 orientations). ![The pyrimidine molecule, with CH groups in green and N atoms in blue, together with the computed water density map in the plane of the molecule.](solute\lyxdot pyrimidine\lyxdot snap){width="8.5cm"} ![Computed solvation free energy of a pyrimidine molecule in water as a function of (top panel) spatial resolution for $L=25$ Å and $n_{\textrm{max}}=4$; (middle panel) box size $L$ at fixed resolution $\Delta r^{-1}=4$ Å$^{-1}$ and $n_{\textrm{max}}=4$; and (bottom panel) number of orientations per grid point at fixed box size and spatial resolution ($L=25$ Å, $\Delta r^{-1}=4$ Å$^{-1}$); in this last plot, the 5 points correspond to $n_{\textrm{max}}=1$ to 5.\[fig:sfe-pyrimidine\]](pyr_horizontal_2){width="8.5cm"} Such results are corroborated for charged entities too, as shown in Fig.
\[fig:SFE\_CH4q\] for the toy model corresponding to a hypothetical CH$_{4}^{q}$ entity, that is, a single Lennard-Jones center with parameters corresponding to a united-atom representation of methane ($\sigma=3.73$ Å, $\epsilon=1.23$ kJ/mol) from Asthagiri et al. [@asthagiri_role_2008], with a charge $q$ at its center. For this very specific, spherically symmetric test case, 1D integral equation theory can solve exactly the same HNC problem, which we use as a benchmark. More precisely, MDFT results are compared to a direct integral-equation resolution of the two-component system with the solute at infinite dilution [@belloni14]. This last approach assumes spherical boundaries that tend toward infinity. In the molecular density functional theory, no restriction applies to the symmetry of the solute molecule, and we use a finite box with periodic boundary conditions. Consequently, the results of the minimisation have to be corrected in two ways for charged systems [@kastenholz_computation_2006-1; @kastenholz_computation_2006; @hunenberger_single-ion_2011]. The first correction is of the Madelung type and accounts for the contribution of the periodic images of the solute and solvent (so-called correction of type B) [@kastenholz_computation_2006] $$\Delta F_{B}=-\xi\left(1-\frac{1}{\epsilon}\right)\frac{q^{2}}{2L}+\mathcal{O}\left(L^{-2}\right),\label{eq:typeB}$$ with $\xi\approx2.837$ and $\epsilon=71$ for SPC/E water [@berendsen_missing_1987; @kusalik_science_1994]. The second correction originates from the periodic treatment of the electrostatic potential, which yields a vanishing charge density at the box boundary and a finite electrostatic potential in the uniform solvent (type C): $$\Delta F_{C}=-(6\epsilon_{0})^{-1}qn_{bulk}\gamma_{0},\label{eq:typeC}$$ where $\gamma_{0}$ is the quadrupole moment of the SPC/E water molecule.
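As a numerical illustration of the type-B correction, the sketch below evaluates equation \[eq:typeB\] in kJ/mol, assuming $q$ is given in elementary charges and $L$ in Å, and using the Coulomb conversion factor $e^{2}/4\pi\epsilon_{0}\approx1389.35$ kJ Å/mol; the function name and the cubic-lattice value $\xi\approx2.837297$ are our assumptions, not the authors' code:

```python
# Coulomb conversion factor e^2/(4*pi*eps0) in kJ*Angstrom/mol,
# for charges expressed in units of the elementary charge (assumed units).
COULOMB_KJ_A_MOL = 1389.35

def type_B_correction(q, L, xi=2.837297, eps=71.0):
    """Madelung-type finite-size correction of Eq. [eq:typeB], in kJ/mol.

    q   : solute charge in units of e
    L   : cubic box length in Angstrom
    xi  : cubic-lattice Wigner constant (assumed standard value)
    eps : solvent dielectric constant (71 for SPC/E water)
    """
    return -xi * (1.0 - 1.0 / eps) * q**2 / (2.0 * L) * COULOMB_KJ_A_MOL
```

For $q=+1$ and $L=28$ Å this gives roughly $-69$ kJ/mol, far from negligible on the scale of ionic solvation free energies, which is why the correction must be applied before comparing to the boundary-free IET results.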
![Solvation free energy of a hypothetical CH$_{4}^{q}$ molecule calculated by MDFT-HNC as a function of its charge $q$, for angular resolutions $n_{max}=2$ to 5. The finite-size corrections of equations \[eq:typeB\] and \[eq:typeC\] are included. For comparison, we also show the exact 1D-IET results that can be calculated for this spherically symmetric case.\[fig:SFE\_CH4q\]](energie\lyxdot IET-MDFT_DB){width="8.5cm"} In Fig. \[fig:SFE\_CH4q\], we show that the MDFT free energies, including the above corrections, do match the rigorous (but spherically symmetric only) converged HNC-IET results as the angular resolution is increased; within the resolution of the figure, convergence of the solvation free energy is reached for $n_{\textrm{max}}=3$. We note that both IET and MDFT diverge for $q=-1$ and $n_{max}>3$, a failure of the HNC approximation for this artificial solute. Fortunately, this is not the case with Lennard-Jones parameters fitted to model halides, e.g. from [@horinek_rational_2009]. In Figs. \[fig:gr\_CH4q\] and \[fig:Pr\_CH4q\], we show the effect of the angular resolution on the solvent structure for $q=+1$, $0$ and $-0.6$. We plot the corresponding radial distribution function (or reduced solvent density) around the solute, $g(r)=\int d\boldsymbol{\Omega}\rho(\mathbf{r},\boldsymbol{\Omega})/n_{bulk}$, and the radial solvent polarisation, $P(r)=\int d\boldsymbol{\Omega}\left(\boldsymbol{\Omega}\cdot\hat{\mathbf{r}}\right)\rho\left(\mathbf{r},\boldsymbol{\Omega}\right)/n_{\text{bulk}}$. Although for the cationic case $q=+1$ only $n_{max}=4$ gives full convergence of the fine structure beyond the first peak, $n_{max}=3$ does provide an acceptable compromise overall. It is remarkable that in the neutral case, despite a vanishing electric field, the solute induces a small but finite polarisation, expected from density-orientation couplings. This fine effect is slower to converge with $n_{\textrm{max}}$.
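The profiles $g(r)$ and $P(r)$ above are the zeroth and first angular moments of $\rho(\mathbf{r},\boldsymbol{\Omega})$. A minimal sketch of their evaluation on a discrete orientation grid, assuming Gauss-Legendre nodes in $\cos\theta$ (with $\theta$ the angle between the dipole axis and $\hat{\mathbf{r}}$) and weights normalised so that a uniform bulk density yields $g=1$; the function name and the assumption that $\rho$ has already been averaged over the remaining Euler angles are ours:

```python
import numpy as np

def angular_moments(rho, n_bulk, n_theta=8):
    """Zeroth and first angular moments of rho(r, Omega) at one grid point.

    rho     : array of shape (n_theta,), density resolved over the cos(theta)
              quadrature nodes (theta = angle between dipole axis and r-hat),
              assumed already averaged over the remaining Euler angles.
    n_bulk  : bulk number density.
    Returns (g, P): reduced density and radial polarisation.
    """
    x, w = np.polynomial.legendre.leggauss(n_theta)  # nodes in cos(theta)
    w = w / w.sum()                                  # normalise: sum(w) = 1
    g = np.sum(w * rho) / n_bulk
    P = np.sum(w * x * rho) / n_bulk
    return g, P
```

By symmetry of the Gauss-Legendre nodes, a uniform (orientation-independent) density gives exactly $g=1$ and $P=0$, a useful sanity check for the neutral-solute case discussed above.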
![Reduced water density around CH$_{4}^{q}$ for angular resolutions $n_{max}=2$ to 5 and $q=+1$, 0 and $-0.6$ in the left, middle, and right panels, respectively.\[fig:gr\_CH4q\]](gr_CH4q){width="8.5cm"} ![Polarisation density around CH$_{4}^{q}$ for angular resolutions $n_{max}=2-5$ and $q=+1,\:0,\:-0.6$ in the left, middle, and right panels, respectively.\[fig:Pr\_CH4q\]](Pr_CH4q){width="8.5cm"} ![Distribution function between CH$_{4}^{\text{+}}$ and a water molecule as a function of the CH$_{4}^{+}-$O distance $r$ and the cosine of the angle $\theta^{\prime}$ between the water dipole and the axis joining the two sites. For each $r$ and $\cos\theta^{\prime}$, we average over all values of the intrinsic rotation angle $\psi^{\prime}$. The distribution does not depend on $\phi^{\prime}$ in this local frame. This is thus a plot of $\left\langle g(r,\cos\theta^{\prime})\right\rangle _{\psi^{\prime}}$.\[fig:g-r-costheta\]](grcos){width="8.5cm"} In order to illustrate the intrinsically molecular nature of the molecular density functional theory, and thus its major advantage over site-based liquid-state theories like 3D-RISM, we show in figure \[fig:g-r-costheta\] the distribution function $g$ between CH$_{4}^{+}$ and the oxygen atom of water as a function of the distance between the two sites, $r$, and of the cosine of the angle $\theta^{\prime}$ between the water dipole and the axis joining those sites, averaged over all intrinsic rotations $\psi^{\prime}$. We note that in the local frame the distribution is invariant with respect to the angle $\phi^{\prime}$. We see that the maximum probability is found for a distance of 3.1 Å and a cosine between 0.7 and 0.9. A cosine of 1 corresponds to the oxygen atom pointing exactly toward the cation. Without solvent-solvent correlations, all water molecules would have their dipole pointing exactly toward the cation and would thus have a cosine of 1.
That is not the case here: it is favorable to point slightly off the cation ($\cos\theta_{{\rm max}}\approx0.8$, not 1) in order to preserve a more favorable short-range order, i.e., to keep more of the hydrogen-bond network. For a given distance $r=3.1$ Å, that is, at the maximum of the radial distribution function, we show the effect of $\cos\theta^{\prime}$ and $\psi^{\prime}$ in figure \[fig:g-psi-costheta\]. First, we see that the distribution is symmetric around $\psi^{\prime}=\pi/2$, as expected for a C$_{2{\rm v}}$ molecule in this reference frame. The maximum of the distribution is again found for $\cos\theta^{\prime}$ between 0.65 and 0.75, that is, for a dipole pointing roughly toward the cation. For a dipole perpendicular to the solute-oxygen vector, that is, for $\cos\theta^{\prime}=0$, we see that the internal rotation $\psi^{\prime}=\pi/2$, which places the two hydrogens farthest from the cation, is much more probable than other internal rotations. ![Distribution function between CH$_{4}^{+}$ and a water molecule separated by the distance $r=3.1$ Å, as a function of the cosine of the angle $\theta^{\prime}$ between the site-site axis and the water dipole, and of the intrinsic rotation angle $\psi^{\prime}$.\[fig:g-psi-costheta\]](gcospsi){width="8.5cm"} We conclude with a proof of concept showing that this formalism is efficient enough to unlock the description of solvation around large molecular solutes. In Fig. \[fig:4M7G\], we show the water structure around a protein made of 4000 atoms corresponding to 230 residues. We use a grid of $128^{3}$ nodes, an angular resolution corresponding to $n_{\textrm{max}}=3$, and a grid spacing of $0.5$ Å. The overall minimisation took 2 minutes on 24 distributed cores. Using MD simulations, equivalent statistics for the water density require at least 100 ns and hundreds of CPU-hours with the same computer resources.
It would be even more challenging to get the whole angle-dependent density $\rho(\mathbf{r},\boldsymbol{\Omega})$, a direct output of the functional approach. ![Water density around a protein made of 230 residues and 4000 atomic sites (4M7G: Streptomyces Erythraeus Trypsin). The displayed isosurface corresponds to 3 times the bulk density.\[fig:4M7G\]](4m7g_2){width="8.5cm"} Conclusion ========== The three-dimensional density functional theory and integral equation formalisms at the molecular level of description of the solvent have been greatly improved by using the concept of expansions/projections onto generalized spherical harmonics. The present analysis of the Ornstein-Zernike convolution product follows that previously developed for bulk systems. The resulting algorithm decreases the time-to-solution by many orders of magnitude. This makes it possible to study in a systematic and routine way many solute/solvent mixtures, and to provide free energies of solvation with turnaround times of at most a few minutes. Applications to simple molecular solutes in water have been presented. A detailed assessment of the method with respect to reference MD calculations and experimental data, as well as the examination of large molecular systems of biological interest, such as the prediction of protein hydration, will be reported in a companion paper. The general algorithm presented in this paper could be further accelerated along several directions, not to mention making it highly parallel. First, it is important to note that the $\gamma$ function is a convolution product. It is thus smoother (in both its spatial and angular dependence) than its two building blocks $\Delta\rho$ and $c$. As a consequence, it is legitimate to use a degraded basis $\left\{ n_{\max}'\right\} $ with $n_{\max}'<n_{\max}$ during the entire process. Also, inhomogeneous grids in space and orientations are logical extensions of the important milestone reported herein.
These techniques may lead to a further substantial decrease in time-to-solution without altering the precision. Now that the numerical barrier has been lifted, an important question remains. As usual in such liquid-state theories, the validity of the HNC-like DFT functional has to be challenged, and one will have to go beyond this approximation by building solute-solvent bridge function(al)s. We already have several suggestions in those directions, either based on global thermodynamic corrections [@levesque12_1; @jeanmairet_molecular_2013; @jeanmairet_molecular_2013-1; @gageat_coarseGrainedBridge_2017] or on a detailed understanding of the bridge functions for simple molecular systems [@belloni12]. Appendix: angular representation versus projections {#appendix-angular-representation-versus-projections .unnumbered} =================================================== The expansion (\[eq:1.9\]) and the projection (\[eq:1.11\]), which transform triplets of angles $\mathbf{\Omega}\equiv(\theta,\phi,\psi)$ into indices $_{\mu'\mu}^{m}$ and vice versa, follow a three-step algorithm originally developed for bulk systems [@lado_95]. First and second steps: transform $\phi$ and $\psi$ into $\mu'$ and $\mu$: $$\Delta\rho_{\mu'\mu}(\theta)=\dfrac{1}{4\pi^{2}}\int_{0}^{2\pi}\int_{0}^{2\pi}\Delta\rho(\theta,\phi,\psi)e^{+i\mu'\phi+i\mu\psi}\mathrm{d}\phi\mathrm{d}\psi\label{eq:1.33}$$ $$\Delta\rho(\theta,\phi,\psi)=\sum_{\mu'=-n_{\max}}^{n_{\max}}\sum_{\mu=-n_{\max}}^{n_{\max}}\Delta\rho_{\mu'\mu}(\theta)e^{-i\mu'\phi-i\mu\psi}\label{eq:1.33-2}$$ The 2D angular integral \[eq:1.33\] is performed by the trapezoidal rule (or Gauss-Chebyshev quadrature): $$\Delta\rho_{\mu'\mu}(\theta)=\dfrac{1}{N_{\phi}N_{\psi}}\sum_{j=0}^{N_{\phi}-1}\sum_{k=0}^{N_{\psi}-1}\Delta\rho(\theta,\phi_{j}\equiv j\dfrac{2\pi}{N_{\phi}},\psi_{k}\equiv k\dfrac{2\pi}{N_{\psi}})e^{+2i\pi\left(\frac{\mu'j}{N_{\phi}}+\frac{\mu k}{N_{\psi}}\right)}$$ One recognizes a discrete 2D Fourier transform, which can be efficiently performed
by 2D FFT, provided $N_{\phi}=N_{\psi}=2n_{\max}+1$. The same remark applies to the inverse transformation \[eq:1.33-2\]. The case of ${\rm H_{2}O}$ symmetry is accommodated by choosing $N_{\psi}=2\left\lfloor n_{\max}/2\right\rfloor +1$ angles between 0 and $\pi$. This operation must be performed for each $\theta$ value. Third step: transformation between $\theta$ and $m$: $$\Delta\rho_{\mu'\mu}^{m}=f_{m}\int_{-1}^{1}\dfrac{\mathrm{d}\cos\theta}{2}\Delta\rho_{\mu'\mu}(\theta)r_{\mu'\mu}^{m}(\theta)=f_{m}\sum_{i=1}^{N_{\theta}}w_{i}\Delta\rho_{\mu'\mu}(\theta_{i})r_{\mu'\mu}^{m}(\theta_{i})$$ $$\Delta\rho_{\mu'\mu}(\theta)=\sum_{m=\max(\left|\mu'\right|,\left|\mu\right|)}^{n_{\max}}\Delta\rho_{\mu'\mu}^{m}r_{\mu'\mu}^{m}(\theta)$$ The integral over $\theta$ is performed using Gauss-Legendre quadrature with $N_{\theta}=n_{\max}+1$ nodes $\theta_{i}$ and associated weights $w_{i}$. It is performed for each pair $\left\{ \mu',\mu\right\}$. Despite the lack of a fast transform for this last step, the whole procedure is fast enough not to be the limiting process in the OZ convolution calculation. Overall, we can qualify the whole angles-to-projections process, analogous to an FFT for the angular variables, as a Fast Generalized Spherical Harmonics Transform (FGSHT). This work was supported by the Energy oriented Centre of Excellence (EoCoE), grant agreement number 676629, funded within the Horizon2020 framework of the European Union.
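The first two steps of the angles-to-projections procedure described in the appendix above reduce to a discrete 2D Fourier transform over $(\phi,\psi)$. A minimal NumPy sketch, assuming the $e^{+i\mu'\phi+i\mu\psi}$ sign convention of equation \[eq:1.33\], which maps onto `numpy.fft.ifft2` up to the $1/(N_{\phi}N_{\psi})$ normalisation that `ifft2` already includes; step 3, the Gauss-Legendre quadrature against the Wigner functions $r_{\mu'\mu}^{m}$, is omitted here, and the function names are ours:

```python
import numpy as np

def angles_to_fourier(rho_grid):
    """Steps 1-2: project rho(theta_i, phi_j, psi_k) onto (mu', mu) indices.

    rho_grid : array of shape (N_theta, N_phi, N_psi) sampled on the uniform
               grids phi_j = 2*pi*j/N_phi, psi_k = 2*pi*k/N_psi.
    Returns coefficients rho[theta_i, mu', mu], with mu' and mu stored modulo
    N_phi and N_psi (standard FFT index convention).
    """
    # e^{+i...} with a 1/(N_phi*N_psi) prefactor == inverse-FFT convention
    return np.fft.ifft2(rho_grid, axes=(1, 2))

def fourier_to_angles(coeffs):
    """Inverse transformation (Eq. 1.33-2): back to the angular grid."""
    return np.fft.fft2(coeffs, axes=(1, 2))
```

A round trip through the two functions reproduces the sampled angular function exactly, which is the key property exploited by the FGSHT.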
--- abstract: 'We present measurements of the near-infrared brightness of Io’s hot spots derived from 2-5 $\mu$m imaging with adaptive optics on the Keck and Gemini N telescopes. The data were obtained on 271 nights between August 2013 and the end of 2018, and include nearly 1000 detections of over 75 unique hot spots. The 100 observations obtained between 2013 and 2015 have been previously published in de Kleer and de Pater (2016a); the observations since the start of 2016 are presented here for the first time, and the analysis is updated to include the full five-year dataset. These data provide insight into the global properties of Io’s volcanism. Several new hot spots and bright eruptions have been detected, and the preference for bright eruptions to occur on Io’s trailing hemisphere noted in the 2013-2015 data (de Kleer and de Pater 2016a) is strengthened by the larger dataset and remains unexplained. The program overlapped in time with *Sprint-A/EXCEED* and *Juno* observations of the jovian system, and correlations with transient phenomena seen in other components of the system have the potential to inform our understanding of the impact of Io’s volcanism on Jupiter and its neutral/plasma environment.' author: - Katherine de Kleer - Imke de Pater - 'Edward M. Molter' - Elizabeth Banks - Ashley Gerard Davies - Carlos Alvarez - Randy Campbell - Joel Aycock - John Pelletier - Terry Stickel - 'Glenn G. Kacprzak' - 'Nikole M. Nielsen' - Daniel Stern - Joshua Tollefson title: 'Io’s Volcanic Activity from Time Domain Adaptive Optics Observations: 2013-2018' --- Introduction ============ Io’s dramatic volcanic activity exhibits a high degree of spatial and temporal variability. The distribution of volcanic thermal emission in space and time contains information on the underlying volcanic advection processes, providing a window into the nature of Io’s geological processes as well as into how tidal heating impacts the characteristics of the volcanism it powers. 
While some of Io’s volcanoes have remained persistently active since the *Voyager* fly-bys in 1979, numerous transient eruptions appear and subside in a matter of days, hours, or even minutes (e.g. Johnson et al. 1988; Veeder et al. 1994; de Pater et al. 2014; de Kleer et al. 2014; Tsang et al. 2014; Davies et al. 2018). The timeline of thermal activity for a given volcano is indicative of the style of volcanism and hence geological processes active at that site (Davies et al. 2010). The time intervals between eruptions at a given site can provide information on characteristic resupply timescales, while a comparison of eruption timing between sites has the potential to illuminate eruption clustering if present. Finally, the periodic forcing of Io may translate into specific temporal signatures that may be apparent in thermal timelines. Volcanoes are also distributed non-randomly across Io’s surface, showing in particular a dearth of activity at the sub- and anti-jovian longitudes, as well as in polar regions (Hamilton et al. 2013; Veeder et al. 2015; de Kleer and de Pater 2016b), although no dataset published to date has had good coverage of the high latitudes. The spatial distribution of Io’s surface heat flow may place constraints on models for tidal heat dissipation in Io’s interior, or may indicate the degree of fluid flow in Io’s mantle through the amount of smoothing in the observed spatial trends relative to the expected patterns. Without allowing for lateral movement of melt, the end-member case of heat deposition in a shallow asthenosphere predicts higher heat flow at lower latitudes with the greatest heat flow centered at the sub-jovian and anti-jovian regions. In contrast, the end-member case of deep mantle heating results in enhanced heat flow at the poles (Gaskell et al., 1988; Segatz et al., 1988). Determining the temporal and spatial distribution of Io’s volcanism requires a large sample size of hot spot detections over a range of timescales.
We have been building up a database of thermal emission from individual volcanoes on Io’s surface since 2013, when we initiated a time domain campaign of adaptive optics imaging of Io’s volcanoes at the Keck and Gemini N telescopes. Io has been observed using adaptive optics on Keck since 2001 (Marchis et al. 2002), but only since 2013 has there been a dedicated Io observing program at such high cadence. These observations spatially resolve Io, permitting the identification of individual active volcanoes, and are often made at multiple wavelengths in the 2-5 $\mu$m range in order to constrain temperature and total power output. The observations have a typical spatial resolution of $\sim$100-500 km depending on telescope, wavelength, and sky conditions. The collective dataset is well suited to an investigation into the volcanic eruption processes at individual hot spots, which requires data capturing the time evolution of the eruptions, and to identification of spatial and temporal patterns in the distribution of activity. The prior Io dataset that is most comparable in cadence, wavelength, and spatial resolution is from the *Galileo* Near-Infrared Mapping Spectrometer (NIMS; Carlson et al. 1992), which observed Io on 25 distinct passes with a typical spatial resolution of 100-400 km on Io’s surface, including some images with resolutions as coarse as 725 km and as fine as 100 meters (see Table 3.2 in Davies 2007). NIMS detected thermal emission from 115 unique hot spots (Davies et al. 2012; Veeder et al. 2012; 2015), each detected between one and 50$+$ times over the course of the mission. Long-term programs observing Io’s thermal emission from the NASA InfraRed Telescope Facility have also been very successful (Spencer et al. 1990; Veeder et al. 1994; Rathbun and Spencer 2010). 
While such data do not spatially resolve Io, techniques such as lucky imaging and observing Io as the satellite enters or emerges from occultation behind Jupiter have permitted brightness measurements of individual volcanoes. Though sensitive only to the brightest events and only to the Jupiter-facing hemisphere, occultation observations have by far the longest time baseline, having been made on more than 100 occasions over the past $>$2 decades (Rathbun et al. 2018), albeit at only one wavelength (3.5 or 3.8 $\mu$m). Our spatial resolution and sensitivity to faint hot spots is intermediate between NIMS data and occultation observations, and is comparable to a typical NIMS observation. Our cadence and total number of observations are higher than all prior datasets, although the time baseline of our high cadence campaign is much shorter than the decadal timescales covered by the occultation datasets. Our campaign is introduced in de Kleer et al. (2014), and the analysis methods and results from the first 2.5 years of the program (100 nights of observation) are given in de Kleer and de Pater (2016a). Here we present results from the 2016-2018 observations, and a joint analysis of all data to date, 2013-2018. The flexible scheduling capabilities at Gemini N, and our Twilight Zone observing program at Keck[^1], have been instrumental in achieving the high cadence and quantity of observations. The observations and data analysis methods are reviewed in Section \[sec:obs\], the results are presented and discussed in Section \[sec:results\], and the conclusions are summarized in Section \[sec:conc\]. Observations and data analysis {#sec:obs} ============================== We observed Io in the near-infrared with adaptive optics on 271 nights between August 2013 and July 2018; the observing dates and details are given in Table \[tbl:obs\]. Observations were made with the NIRI imager on Gemini N (Hodapp et al. 
2003) combined with the ALTAIR adaptive optics system in Natural Guide Star (NGS) mode, and with the NIRC2 imager on Keck II also using NGS adaptive optics (Wizinowich et al. 2000). The Gemini N data constitute 80% of the total visits, and include images in the L’ (3.78 $\mu$m) and K-cont (2.27 $\mu$m) filters. The Keck images were taken in a variety of filters from H-cont (1.58 $\mu$m) to Ms (4.67 $\mu$m), shown in Figure \[fig:keckims\]. ![Images from Keck on 2017 May 28 demonstrating the range of filters used in the observations. All images were taken within a 30-minute window, and each is labeled with the filter name and central wavelength. \[fig:keckims\]](Keck_example.png){width="18cm"} Images are flux calibrated to a standard star if a star was observed and the night was photometric; otherwise the images are calibrated to volcano-free regions of Io’s disk, which do not change measurably with time. Within each image, all hot spots are identified; their pixel locations are translated to latitude and longitude coordinates on Io’s disk based on Io’s ephemeris; and their intensity is measured based on an aperture photometry approach adapted to point sources on a bright background (de Pater et al. 2014). All observing, data reduction, and analysis procedures are described in detail in de Kleer and de Pater (2016a), and an identical approach is used here. Of the full dataset, the 100 observations from 2013 through the end of 2015 were published in de Kleer and de Pater (2016a), while the 171 observations from 2016-2018 are presented here for the first time. The detection limits for hot spots in the Keck and Gemini N images are given as a function of emission angle in Appendix A of de Kleer and de Pater (2016a). We use these limits to define the sensitivity of the dataset as a whole to hot spots at different longitudes. 
This sensitivity varies by up to 20% across Io’s surface, and is used to correct the longitudinal hot spot distribution described in Section \[sec:spatdist\]. Results & Discussion {#sec:results} ==================== The coordinates, number of detections, and average brightness of each of the 75 hot spots detected and tracked by our program are listed in Table \[tbl:overview\]. The full set of near-infrared brightnesses in all filters for all 980 hot spot detections are tabulated in Table \[tbl:hsphot\]. In some cases, the location of a hot spot appears to shift over time or transition from one active site to another nearby; in cases where it is not clear from the data whether the emission over time is produced by a single site or multiple nearby sites, we tabulate all detections under a single site name. The timeline of L’-band (3.78 $\mu$m) brightness of all volcanoes is shown in Figure \[fig:timeline\], which gives a sense for the global variability of Io’s volcanism over this period. ![Timeline of Io’s volcanic activity from 2013-2018. All L’-band detections of thermal emission from volcanic centers are plotted, with each volcanic center in a different color. The gaps in the timeline correspond to periods when the Jupiter system was not observable from Maunakea. The timeline shows that there are multi-month intervals with no bright activity, and other intervals when several large eruptions took place. \[fig:timeline\]](timeline_hotspots2018.png){width="16cm"} Energetic eruptions ------------------- Since 2013 we have detected bright eruptions at eighteen sites, where we define “bright” as a maximum L’-band brightness greater than 20 GW/$\mu$m/sr. There is no hot spot on Io that consistently exhibits this level of activity, and this cut-off therefore selects for transient events. These eruptions are typically vigorous, high-power events with significant short-wavelength emission. 
The majority of these were short-lived, exhibiting their peak brightness for only a few days before decaying. A few volcanoes are exceptions to this rule, producing thermal emission that is consistently present at a moderate level while also exhibiting infrequent brightenings; these volcanoes are Loki Patera, Pillan Patera, Marduk Fluctus, and Kurdalagon Patera. The first three of these were active throughout the period of observation, while Kurdalagon Patera was not detected before its eruption at the beginning of 2015 but was subsequently active and variable through 2018. Table \[tbl:transients\] lists these eighteen hot spots and the brightest L’-band intensity measured at each during our program. Note that only the single brightest detection is given even though some volcanoes had multiple large eruptions. The full timeline for each of these hot spots can be found in Table \[tbl:hsphot\]. The events that occurred prior to the end of 2015 were presented in de Kleer and de Pater (2016a); our discussion here therefore focuses on eruptions detected since the beginning of 2016. For volcanoes with detections at multiple wavelengths on a given night, we fit a Planck spectrum to estimate the temperature. All detections with measured temperatures above 800 K are given in Table \[tbl:highT\]; in total there are 32 such detections at 18 unique hot spots. These hot spots are not exactly the same set as the 18 sites where bright eruptions are seen, although there is significant overlap. While this is a small fraction of the total number of observations for which we were able to derive temperature estimates, it confirms previous findings that these high temperatures are common and widespread across a variety of volcanic styles and are not limited to outburst events (e.g. Carr 1986; Lopes-Gautier et al. 1999). 
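As an illustration of this kind of two-wavelength temperature estimate, the sketch below inverts the ratio of blackbody intensities at 2.27 and 3.78 $\mu$m by bisection. It is a single-temperature toy model subject to the caveats discussed in the text, not the authors' MCMC fitting pipeline, and the function names are ours:

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(lam, T):
    """Blackbody spectral radiance B_lambda(T), W / (m^2 m sr)."""
    return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def fit_temperature(i_short, i_long, lam_short=2.27e-6, lam_long=3.78e-6,
                    t_lo=200.0, t_hi=3000.0):
    """Single-temperature fit to intensities at two wavelengths.

    The ratio B(lam_short,T)/B(lam_long,T) increases monotonically with T,
    so the temperature matching the observed intensity ratio can be found
    by bisection between t_lo and t_hi.
    """
    target = i_short / i_long
    for _ in range(100):
        t_mid = 0.5 * (t_lo + t_hi)
        if planck(lam_short, t_mid) / planck(lam_long, t_mid) < target:
            t_lo = t_mid   # model too cold: short-wavelength ratio too small
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)
```

The emitting area then follows from the absolute intensity once the temperature is fixed; in practice a non-detection at the shorter wavelength only bounds the ratio, which is why an upper limit (such as the 7 GW/$\mu$m/sr K-cont limit mentioned below) must be imposed instead.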
However, we note that Io’s active volcanoes likely exhibit a range of temperatures from near the magma temperature ($\sim$1500 K if the magma is basaltic) down to near the passive surface temperature ($\sim$125 K), and the temperatures recovered in a single temperature fit therefore do not directly represent any physical temperature (although they do serve as a lower limit on the eruption temperature). In fact, if the magma composition is the same at all of Io’s volcanoes, then the best-fit temperature instead reflects the proportion of high-temperature to low-temperature emitting areas, and high fitted temperatures are indicative of volcanic eruptions vigorous enough that sufficient area is exposed at very high temperatures to yield a short wavelength peak in thermal emission (e.g. Davies et al. 2010). The temperatures in Table \[tbl:highT\] are derived as in de Kleer and de Pater (2016a), using Markov Chain Monte Carlo simulations to determine the probability distribution for temperature and emitting area, from which the uncertainties are also derived. Measurements from all available wavelengths are used, incorporating uncertainties on the intensity measurements, and a maximum K-cont (2.27 $\mu$m) brightness limit of 7 GW/$\mu$m/sr is imposed in the fitting when the hot spot was not detected at that wavelength. The temperature estimates are derived from the intensity measurements given in Table \[tbl:hsphot\], which have been corrected for geometric foreshortening. However, in the case of a high emission angle observation of an event of significant vertical extent such as fire fountaining, the short-wavelength emission may arise primarily from the hot fountaining component that is not foreshortened, while the longer-wavelength emission arises from both the fountains and the resultant lava flows, which are foreshortened. 
Applying the foreshortening correction across all wavelengths may therefore inflate the derived short-wavelength emission and hence the temperature, so that temperatures derived from high emission angle observations should be viewed with caution. ### Eruption at P95 (May 2016) In May 2016 a bright and short-lived eruption was detected at patera P95, near 10$^{\circ}$S 128$^{\circ}$W. The eruption was first detected on May 17 with a temperature around 1000 K. The second and final detection of the eruption occurred two days later on May 19, and the eruption had already declined significantly in brightness by this time. The latest non-detection of the site prior to the eruption was May 12, while the eruption had faded to below $I_{Lp}\sim$5 GW/$\mu$m/sr by May 24, and to below the detection limit even at optimal viewing geometry ($I_{Lp} \sim$3 GW/$\mu$m/sr) by May 28. While high in both temperature and infrared emission, this event therefore was short-lived, detected only over a 3-night period and constrained to be active at a detectable level for less than 16 days. ### Eruptions at Shamash Patera and in the Illyrikon Regio (June 2016) A pair of dramatic eruptions occurred in the southern hemisphere at Shamash Patera (33$^{\circ}$S 150$^{\circ}$W) and in Illyrikon Regio near 71$^{\circ}$S 180$^{\circ}$W in June 2016. The eruption in Illyrikon Regio was first detected on June 17, and began no earlier than June 10. Shamash Patera was still not active as late as June 18, after the eruption at Illyrikon Regio had begun, but exhibited bright activity on June 20. The eruption at Shamash Patera had decayed and cooled somewhat by June 27, while the eruption in Illyrikon Regio stayed bright and hot through the end of June, after which we had no further observations until November. Figure \[fig:illsham\] shows images of the eruptions at these volcanoes. The two volcanoes appear close in the images but are separated by over 1000 km of surface distance. 
The location of the hot spot in Illyrikon Regio is poorly constrained due to the high emission angle of all observations, and we cannot conclusively match a surface feature at its location; however, a dark patera at 71$^{\circ}$S 170$^{\circ}$W is consistent with some of the thermal emission detections, whose best-fit longitudes fall in the range of 165-193$^{\circ}$W. At 71$^{\circ}$S, this is the most polar hot spot detected by our program.

![Near-infrared images from Gemini N of eruptions in Io’s southern hemisphere in 2016. The hot spot near the south pole is at a new site in the Illyrikon Regio. Despite the apparent similarity between all four L’ images, the viewing geometry changes significantly between observations: from left to right the central meridian longitudes are 249$^{\circ}$; 138$^{\circ}$; 233$^{\circ}$; and 123$^{\circ}$ W. The mid-latitude hot spot is Marduk Fluctus on June 17 and 24, and is Shamash Patera on June 20 and 27. \[fig:illsham\]](Timeline_IllSham.png){width="12cm"}

### Eruption at UP 254W (May 2018)

On May 10, 2018 a bright, high-temperature ($\sim$1000 K) eruption was detected at 37$^{\circ}$S 254$^{\circ}$W; a small patera at exactly this location is seen in spacecraft surface imaging (Williams et al. 2011a) and is a plausible source of the eruption. The hot spot was detected again on May 31 but had dimmed nearly to invisibility, and was not seen again. Although the hot spot location is close to the hot spot we refer to as “SE of Pele”, these hot spots are clearly distinct and are spatially resolved in the May 31 observations. No prior activity at this location has been documented.

### Eruption at Isum Patera (May-June 2018)

Of the high-power events detected in 2016-2018, the most dramatic in both temperature and duration was an eruption at Isum Patera in May-June 2018. The event began prior to May 27 and exhibited temperatures around or above 1000 K for the subsequent month.
The total emission decayed steadily over this period, suggesting that new magma was being erupted throughout but at a rate that decreased with time. Figures \[fig:isum\] and \[fig:isum2\] show images of the eruption and plot its infrared timeline and the corresponding temperature fits.

![Near-infrared images from Gemini N of the eruption at Isum Patera in spring 2018. The eruption is the only hot spot visible in the K filter (2.27 $\mu$m), and is seen at a corresponding location in the L’ images (3.78 $\mu$m). The bright hot spot south of Isum Patera is Marduk Fluctus. \[fig:isum\]](Timeline_Isum.png){width="18cm"}

![Eruption at Isum Patera in 2018. (a) Timeline of 3.8 $\mu$m intensity over a $\sim$1-month period; (b) Temperature fits to the 2.3- and 3.8-$\mu$m measured brightnesses. Temperatures may be over-estimated if lava fountaining is occurring, as discussed in the text.\[fig:isum2\]](IsumPlots.png){width="16cm"}

Activity at new hot spots
-------------------------

Many of Io’s most active hot spots today have exhibited persistent or episodic activity dating back to the *Voyager* and *Galileo* missions, nearly 40 years in some cases. However, the detection of new hot spots at locations where no thermal emission was previously seen is also common in the ground-based datasets (de Pater et al. 2016; Cantrall et al. 2018), including hot spots where no corresponding surface features are seen. The detection of these new hot spots improves our understanding of the distribution of active volcanic centers on Io’s surface and of their heat flow. Cantrall et al. (2018) identified 24 hot spots that had been detected in ground-based data and were not seen by *Galileo*, more than a quarter of the total number of hot spots seen in the ground-based dataset. The new data presented here bring the total number of hot spots seen in the ground-based adaptive optics datasets from 2001-2018 to 104, 29 of which were not seen by previous spacecraft missions.
In the new data presented here, the hot spots where thermal emission had not been previously detected were: Ekhi Patera; the hot spot in Illyrikon Regio; an unknown location near 54$^{\circ}$N 218$^{\circ}$W; the hot spot SE of Pele; and an unnamed patera near 37$^{\circ}$S 254$^{\circ}$W. Of these five, Ekhi Patera and the unknown location at 218$^{\circ}$W were each detected only once and at low brightness; the hot spot SE of Pele was consistently detected from Dec 2016 through the end of 2018, though at a low level; and the hot spots in Illyrikon Regio and at 254$^{\circ}$W were locations where bright eruptions took place. Of the 75 hot spots detected in 2013-2018, about 1/3 were detected throughout the period of observation, 1/3 were detected only in 2013-2015, and 1/3 were detected only in 2016-2018. For the set of hot spots that were detected during only one of the two intervals of observation, the majority were detected fewer than half a dozen times. This characteristic, in combination with the fact that a substantial fraction of these hot spots were previously detected by *Galileo* or *Voyager*, suggests that despite the apparent turnover in activity between observing intervals, nearly all hot spots have likely been geologically active throughout the period of space- and Earth-based observation but produce detectable surface thermal emission only sporadically. The set of hot spots detected in a given observation period may therefore depend heavily on the exact timing of the observations. A clear exception to this is the category of hot spots where no previous thermal emission has been seen, no clear patera feature is present at the site of the emission, and yet the hot spot stays persistently active for years after the activity is first seen. In this dataset, the two most prominent examples are the hot spot in Chalybes Regio and the hot spot SE of Pele.
These appear to be locations where volcanic activity initiated since the *Galileo* mission. Chalybes Regio is a northern region with extensive lava flow fields. Thermal emission was first detected from this location in 2010 (de Pater et al. 2014), and was attributed to PFu 2083, a small patera floor unit identified by Williams et al. (2011a) at 56$^{\circ}$N 74$^{\circ}$W. Thermal emission from this location was consistently detected in every observation we made between 2013 and 2018 that had appropriate viewing geometry. In many cases the emission appears spatially extended, indicative of multiple active areas that are not spatially resolved in the data. The hot spot SE of Pele, located near 35$^{\circ}$S 240$^{\circ}$W, was first detected on Dec 23, 2016 and was thereafter detected through the end of our observation period in mid-2018. While there are many flows and paterae in this general area of Io’s surface, there is no patera whose location provides a good match to the observed thermal emission.

Persistent volcanoes and periodicities
--------------------------------------

The sites that were most consistently detected during our campaign were Loki Patera (113 detections), Marduk Fluctus (87 detections), Janus Patera (84 detections), the hot spot in Chalybes Regio (80 detections), and Uta (57 detections). The first four of these were detected every time the viewing geometry was favorable. The hot spot at Uta does not appear to stay localized to the patera and may in fact be composed of multiple closely spaced volcanic centers not clearly resolved from one another. The timelines for these five hot spots are shown in Figure \[fig:indivhs\]. The large number of observations of each of these hot spots provides a database of thermal brightness that may be used to fit models for volcanic activity style. In addition, the quantity and cadence of the images result in a dataset that is sensitive to periodicities in volcanic brightness on timescales from days to months.
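A search for periodicities in an irregularly sampled timeline of this kind is typically done with a Lomb-Scargle periodogram plus a bootstrap estimate of significance. The sketch below is a minimal numpy-only illustration of that approach, not the code used for the paper's analysis: the epochs and intensities are synthetic stand-ins (the actual hot spot timelines are provided as downloadable tables), and the periodogram follows the classic Scargle (1982) formula.

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classic Lomb-Scargle periodogram (Scargle 1982), variance-normalized."""
    y = y - y.mean()
    w = 2.0 * np.pi * freqs[:, None]              # angular frequencies, (Nf, 1)
    wt = w * t[None, :]                            # phase matrix, (Nf, N)
    tau = np.arctan2(np.sin(2 * wt).sum(axis=1),
                     np.cos(2 * wt).sum(axis=1)) / (2.0 * w[:, 0])
    arg = wt - w * tau[:, None]
    c, s = np.cos(arg), np.sin(arg)
    return ((c @ y) ** 2 / (c * c).sum(axis=1)
            + (s @ y) ** 2 / (s * s).sum(axis=1)) / (2.0 * y.var())

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 365.0, 80))           # irregular epochs [days]
true_period = 1.7691                               # Io's sidereal period [days]
y = 5.0 + 2.0 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.5, t.size)

freqs = np.linspace(1 / 30.0, 1.2, 8000)           # search grid [cycles/day]
power = lomb_scargle(t, y, freqs)
best_period = 1.0 / freqs[np.argmax(power)]        # ~1.769 d for this signal

# Bootstrap significance: redraw intensities (with replacement) onto the same
# epochs, destroying any real periodicity, and record each null peak power.
null_peaks = [lomb_scargle(t, rng.choice(y, y.size), freqs).max()
              for _ in range(100)]
sig99 = np.percentile(null_peaks, 99)              # 99% significance level
print(best_period, power.max() > sig99)
```

Note that, as cautioned in the text, resampling-based significance levels inherit the observing cadence of the real epochs, so cadence-related peaks can appear artificially significant.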
The tidal forcing that both powers the activity and deforms Io’s crust is periodic, and the resultant activity may reflect these periodicities depending on the rheology and eruption mechanism.

![Activity timelines for five persistently active hot spots. Black circles and red squares indicate detections with Gemini N and Keck respectively. \[fig:indivhs\]](indivhs.png){width="16cm"}

We conducted a periodicity analysis on the five persistently active hot spots listed above by calculating Lomb-Scargle periodograms (Scargle 1982; Zechmeister and Kürster 2009) and comparing the periodogram peaks against significance levels derived by bootstrapping. These periodograms are shown, with significance levels indicated, in Figure \[fig:LS\], and each volcano’s intensity timeline is plotted phased on the period corresponding to its periodogram peak. The bootstrapping technique samples from the dataset randomly with replacement and computes the periodogram; the confidence levels correspond to the percentage of resampled datasets that show no peaks above the indicated level (Ivezić et al. 2014; VanderPlas 2018). Note that because the duration of Io’s eruptions is typically longer than the interval between observations, confidence intervals derived from random resampling will lead to an apparent enhancement in the significance of observing cadence periodicities.

![Generalized Lomb-Scargle periodograms for the five most consistently detected hot spots. The 99% and 95% significance levels are shown as dotted horizontal lines. The plots in the middle column show the data phased to the period corresponding to the peak in the periodogram, and the rightmost column shows the data as a function of Io’s mean anomaly at the time of observation. The most prominent periodicities are near Earth’s and Io’s rotation periods (1 and 1.77 days), and their beat frequencies (periods near 0.6 and 2.3 days).
The mean anomaly plots demonstrate that the 1.77-day periodicities are more consistent with an observing cadence effect than with a physical effect due to tidally modulated volcanism, which would exhibit a shorter period and a mean anomaly correlation. Within a given plot, the coloring of points is monotonic with time (blue=earliest; red=latest) to indicate the temporal ordering of the datapoints.\[fig:LS\]](LS_JanusPateraC.png "fig:"){width="16cm"}
![](LS_UtaC.png "fig:"){width="16cm"}
![](LS_ChalybesRegioC.png "fig:"){width="16cm"}
![](LS_MardukFluctusC.png "fig:"){width="16cm"}
![](LS_LokiPateraC.png "fig:"){width="16cm"}

Nearly all hot spots show peaks near 0.997 days and 1.77 days, with weaker signals near 0.64 and 2.3 days (seen prominently in many of the periodograms in Figure \[fig:LS\]), which correspond to Earth’s sidereal day, Io’s rotation period (sidereal period = 1.7691 days), and the periods corresponding to their beat frequencies ($\nu_{beat}=\mid \nu_1-\nu_2\mid$). These periods reflect the observing cadence: the average interval between observations is a multiple of Earth’s sidereal day, while repeat observations of a given hot spot are made (on average) at multiples of Io’s rotation period. Io’s rotation period as observed from Earth differs slightly from its sidereal period due to the relative motion of Earth and Jupiter, and is minimized at opposition when the motion of Earth relative to Jupiter is maximized perpendicular to the line of sight. This leads to Earth-apparent rotation periods in the range of 1.7680-1.7691 days. A periodicity at Io’s rotation period could also be an indication of a tidally modulated volcanic process, whereby the volcanic activity or thermal emission is controlled in part by diurnally varying tidal stresses. However, the periodicities near Io’s rotation period in the dataset analyzed here do not show evidence for this effect.
In particular, the peak periodogram power is at periods of 1.769-1.776 days, which match or slightly exceed Io’s apparent rotation period but are a poorer match to Io’s 1.7627-day anomalistic period, or time between successive perijoves, which is the relevant parameter for diurnal stresses and differs from the apparent rotation period due to the precession of Io’s orbit. In order to further highlight the difference between the observed 1.77-day signal and Io’s tidal forcing, the rightmost column in Figure \[fig:LS\] shows the brightness of each volcano as a function of Io’s mean anomaly at the time of observation, demonstrating that there is no mean anomaly correlation even in hot spots that show a strong 1.77-day periodicity. The period of precession of Io’s longitude of perijove is $\sim$1.5 years, so that on the timescale of a few months we see the same Ionian longitudes near a similar phase of Io’s orbit, which likely accounts for the prominence of both Earth’s and Io’s rotation periods in the periodograms. This can be seen in the middle column of Figure \[fig:LS\]: all datapoints are color-coded by time (cycling from blue to red from early to late), and it is clear from the middle plot of the Chalybes Regio panel, for example, that the apparent periodicity likely arises from a combination of a long-term brightening and observing cadence biases. In essence, we are unable to rigorously distinguish between a scenario where a volcano is variable over an orbital period and a scenario where it is variable over a longer timescale but observational effects produce apparent orbital-timescale periodicities. This limitation could be entirely eliminated in a spacecraft dataset, where the observing cadence is less regular and the same hot spot can be viewed at a variety of mean anomalies within a time period that is short relative to the timescale for intrinsic variability of that volcano.
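The cadence-related periods quoted above follow arithmetically from the two underlying sampling periods. A quick check (the $\sim$2.3-day signal is the difference beat $|\nu_1-\nu_2|$; the $\sim$0.64-day signal corresponds to the sum frequency $\nu_1+\nu_2$, a detail not spelled out in the text but implied by the numbers):

```python
# Beat periods of the two sampling periods discussed in the text:
# Earth's sidereal day and Io's sidereal rotation period.
earth_day = 0.997    # days (as quoted in the text)
io_period = 1.7691   # days

nu1, nu2 = 1.0 / earth_day, 1.0 / io_period
p_diff = 1.0 / abs(nu1 - nu2)   # difference beat: ~2.3 days
p_sum = 1.0 / (nu1 + nu2)       # sum frequency:   ~0.64 days
print(round(p_diff, 2), round(p_sum, 2))  # → 2.28 0.64
```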
Only Loki Patera shows a statistically significant periodicity at a period other than the four discussed above. Over the time period of observation analyzed here, Loki Patera’s activity was periodic with a period of 465.63 days.

Hot spot spatial distribution {#sec:spatdist}
-----------------------------

The distribution of hot spot number density with latitude and longitude is shown in Figure \[fig:LatLonHists\], and Figure \[fig:spatdist\] plots the location and brightness of all L’-band hot spot detections, updated from a similar figure based on the 2013-2015 data in de Kleer and de Pater (2016b). The longitudinal distribution has been corrected for the sensitivity of the observations to each longitude bin. The latitudinal distribution is not corrected because the latitudinal differences in the volcano brightness distribution or in topography are poorly constrained.

![Distribution of detected hot spots in latitude and longitude. The longitude distribution is plotted as fraction of total hot spot number per longitude bin, using only hot spots that were detected at L’, and is corrected for observational biases. The latitude distribution is corrected for the surface area in each latitude bin and normalized but is not corrected for observational biases, which contribute to the dearth of hot spots at high latitudes. \[fig:LatLonHists\]](LatDist.png "fig:"){width="14cm"}
![](LonDist_debias.png "fig:"){width="14cm"}

![The spatial distribution of hot spot thermal emission detected on Io in 2013-2018.
Each circle shows the location and brightness of a single hot spot detection, with circle size proportional to the log of the 3.8-$\mu$m intensity. All circles are semi-transparent, and high-opacity regions indicate multiple detections at the same location. The top and middle panels show the distribution in two distinct time periods (the 2013-2015 plot is identical to that given in de Kleer and de Pater 2016a), and the bottom panel shows the cumulative distribution from 2013-2018. The position uncertainties are typically a few degrees at low latitudes and higher towards the poles; much but not all of the apparent jitter in hot spot locations is within these uncertainties.\[fig:spatdist\]](spatdist0.pdf "fig:"){width="14cm"}
![](spatdist1.pdf "fig:"){width="14cm"}
![](spatdist2.pdf "fig:"){width="14cm"}

Our observations in 2013-2015 showed an apparent difference in the spatial distribution of bright, transient eruptions compared to persistent hot spots (de Kleer and de Pater 2016b). In particular, all nine of the volcanoes that hosted bright eruptions ($I_{max,Lp}>$30 GW/$\mu$m/sr) during those years are located on the trailing hemisphere. In the full dataset (2013-2018), 18 volcanoes exhibited bright eruptions, where bright is defined as $I_{max,Lp}>$20 GW/$\mu$m/sr. Note that this definition is effectively the same as in our previous paper because there were no volcanoes with detected $I_{max,Lp}$ between 20 and 30 GW/$\mu$m/sr in 2013-2015. The threshold was lowered because the larger dataset now available indicates that no volcano persistently maintains a flux density level above 20 GW/$\mu$m/sr, and this cutoff is therefore sufficient to isolate bright transient events. Of these 18 volcanoes, every single one falls within a 180$^{\circ}$ band in longitude, from 128-308$^{\circ}$W, despite the fact that our program had comparable sensitivity to events of this magnitude at all Ionian longitudes. Moreover, all but two of the eruptions occurred on Io’s trailing hemisphere (180-360$^{\circ}$W); the probability of 16 or more eruptions occurring on the trailing hemisphere (given 18 eruptions total) is 0.00066 if volcanoes are randomly distributed in longitude.
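The quoted probability is a one-sided binomial tail; a minimal check, assuming the 18 eruption sites are independent and uniformly distributed in longitude (so each lands on the trailing hemisphere with probability 1/2):

```python
# Probability that >=16 of 18 eruptions fall on one specified hemisphere
# if eruption longitudes are independent and uniform (binomial tail, p=1/2).
from math import comb

n, k = 18, 16
p_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(round(p_tail, 5))  # → 0.00066
```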
Despite the distinctive distribution of the largest eruptions, there is no significant difference between the two hemispheres in terms of spatially and temporally averaged near-infrared brightness (provided that Loki Patera is excluded), nor in the number of active hot spots. The time-averaged volcanic L’-band intensity arises 47.5% and 52.5% from the leading and trailing hemispheres, respectively (excluding Loki Patera), while the number of hot spots detected at L’ is identical between the two hemispheres (31 hot spots each, excluding those detected only at longer wavelengths, to which only a subset of the data were sensitive). To further explore whether any hemispheric-scale asymmetries are present in the distribution of the hot spots, we broaden this analysis beyond the leading vs. trailing comparison by considering all possible 180-degree longitude intervals and determining the fraction of hot spots that fall within each interval. While the leading and trailing hemispheres exhibit comparable time-averaged radiances and hot spot numbers, there is an asymmetry between the sub- and anti-jovian hemispheres, with more hot spots and higher radiances on the anti-jovian hemisphere despite the fact that this hemisphere had poorer coverage during our program. The hemisphere centered on 160$^{\circ}$W maximizes both metrics, containing $\sim$60% of the hot spots and $>$70% of the time-averaged radiance. However, in artificial datasets where hot spots are randomly distributed in longitude, an asymmetry in hot spot number at this level is well within the expected range (i.e. within one $\sigma$ of the median).

Conclusions {#sec:conc}
===========

We present results from measurements of the thermal emission of Io’s volcanoes, derived from near-infrared imaging with adaptive optics at the Keck and Gemini N telescopes on 271 nights between August 2013 and the end of 2018.
The first 100 nights of observations were presented in de Kleer and de Pater (2016a), while the 171 nights since the start of 2016 are presented here for the first time. Over the five years of the program to date, we made 980 detections of over 75 unique hot spots, with some hot spots detected more than 80 times and Loki Patera detected 113 times. We provide downloadable tables of hot spot brightnesses and observing details, and hope that these data products will serve as a resource for others in the community who will build on the analyses presented here. Nearly all bright transient eruptions where temperature measurements were possible displayed temperatures above 800 K, confirming that eruptions at such high temperatures are common and are likely the rule rather than the exception. The detection of new hot spots that were not previously detected by spacecraft is a common occurrence. Adding the data presented here to that summarized by Cantrall et al. (2018), there have now been 104 distinct hot spots seen in the AO data from 2001-2018, 25-30 of which were not previously seen by spacecraft. It is likely that many of these hot spots have been active since before the *Galileo* and *Voyager* visits but were not emitting sufficient radiation during the visits to have been detected. However, some of the new hot spots have no corresponding surface feature and remain persistently active after they are first detected (e.g. Chalybes Regio and the hot spot SE of Pele), suggesting that activity recently initiated at these locations. We performed a periodicity search on the five most consistently detected hot spots (each detected 57-113 times) but did not detect any new periodicities beyond those introduced by the observing cadence. Spacecraft data would be needed to draw a robust conclusion about tidally modulated volcanism on diurnal timescales. De Kleer and de Pater (2016b) noted that all bright, transient eruptions took place on Io’s trailing hemisphere. 
This trend continues through the additional 3 years of data presented here; the probability of the observed asymmetry arising by chance is 0.00066 if volcanoes are randomly distributed in longitude. Note that this asymmetry applies only to the character of the volcanism; the number of hot spots and their cumulative near-IR radiance are nearly identical between the leading and trailing hemispheres. This dataset now constitutes the largest set of unique detections of thermal emission from individual Ionian hot spots to date, permitting robust statistical analyses of properties such as the spatial distribution of hot spot activity, the variability and time-averaged power of numerous individual hot spots, and the occurrence rates of bright and/or high-temperature eruptions. These data, in combination with *Galileo’s* sensitivity to smaller, cooler hot spots and the multi-decadal time baseline provided by ground-based occultation data, are now providing a truly global, multi-wavelength picture of Io’s volcanic activity over a wide range of timescales. The timing of our program coincided with intensive observations of the extended sodium cloud and the plasma torus by ground-based programs and by the *EXCEED/Hisaki* and *Juno* missions. The correlation of these datasets with our timeline of Io’s activity is already providing insight into the connections between different components of the jovian system (Yoshikawa et al. 2017; Koga et al. 2018; Morgenthaler et al. 2019), but our understanding of this system is far from complete. Continued coverage of Io’s volcanoes throughout these missions will be key to unraveling the sources of variability in the jovian neutral and plasma environment.
Acknowledgements {#acknowledgements .unnumbered}
================

KdK is supported by the Heising-Simons Foundation through a *51 Pegasi b* postdoctoral fellowship, and this research was partially supported by the National Science Foundation grant AST-1313485 to UC Berkeley and a NASA Keck PI Data Award, administered by the NASA Exoplanet Science Institute. We are grateful to Roy and Frances Simperman for their support of the Keck Visiting Scholars program, which enabled KdK, EM, and CA to develop the Keck twilight program through which some of the data presented here were obtained. We thank G. Puniwai for acquiring several of the Keck observations. We thank P. Capak, J. Cohen, N. Hernitschek, D. Masters, and S.A. Stanford of the Complete Calibration of the Color-Redshift Relation (C3R2; Masters et al. 2017) NASA Keck Key Strategic Mission Support survey team for providing twilight observations on the nights of UT 2017 December 11-13. The work of DS and AGD was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. Much of the data presented herein were obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), Ministério da Ciência, Tecnologia e Inovação (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. Some of the data were obtained at the W. M.
Keck Observatory from telescope time allocated to the National Aeronautics and Space Administration through the agency’s scientific partnership with the California Institute of Technology and the University of California. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.

[llllll]{}
Nusku Patera & -65.0 & 6.2 & 1 &\
Uta & -34.4 & 21.0 & 57 & 3.6 & Lp\
Kanehekili Fluctus & -17.0 & 34.5 & 8 & 1.2 & Lp\
Janus Patera & -3.9 & 37.4 & 84 & 4.7 & Lp\
UP 38W & -25.3 & 37.7 & 1 & 1.9 & Ms\
Pfu374 & -24.3 & 49.7 & 3 & 1.0 & Ms\
Masubi & -42.9 & 53.7 & 9 & 2.4 & Lp\
PFd1691 & 9.4 & 58.3 & 22 & 2.5 & Lp\
Laki-Oi Patera & -44.6 & 59.7 & 4 & 3.9 & Lp\
Shamshu Patera & -8.3 & 61.5 & 1 & 1.1 & Ms\
Tejeto Patera & -42.9 & 68.7 & 4 & 4.4 & Lp\
Chalybes Regio & 55.4 & 70.2 & 80 & 9.6 & Lp\
Zal Patera & 37.9 & 74.6 & 24 & 2.9 & Lp\
Tawhaki Patera & 2.5 & 75.6 & 19 & 2.1 & Lp\
Ekhi Patera & -28.4 & 86.7 & 1 & 3.8 & Lp\
Gish Bar & 15.6 & 89.1 & 18 & 3.4 & Lp\
Aluna Patera & 41.7 & 90.1 & 2 & 3.2 & Ms\
P207 & -36.5 & 91.1 & 1 & 5.0 & Lp\
Shango Patera & 33.5 & 95.6 & 3 & 1.8 & Ms\
Itzamna Patera & -15.0 & 99.0 & 9 & 1.6 & Lp\
Arusha Patera & -39.6 & 99.0 & 4 & 3.5 & Lp\
Sigurd Patera & -5.1 & 99.2 & 8 & 4.4 & Lp\
P197 & -46.9 & 107.3 & 11 & 6.2 & Lp\
Amirani & 20.5 & 113.2 & 27 & 2.6 & Lp\
Dusura Patera & 36.4 & 121.1 & 3 & 7.5 & Lp\
Maui Patera & 18.2 & 125.8 & 2 & 3.2 & Lp\
P95 & -10.0 & 127.8 & 2 & 36.5 & Lp\
Malik Patera & -32.9 & 129.6 & 9 & 3.1 & Lp\
UP 132W & 18.4 & 131.6 & 5 & 3.6 & Lp\
Thor & 40.6 & 134.7 & 2 & 1.3 & Ms\
P123 & -41.9 & 139.2 & 20 & 4.6 & Lp\
Tupan Patera & -18.0 & 140.5 & 10 & 1.6 & Lp\
Surya Patera & 21.2 & 149.4 & 4 & 2.7 & Lp\
Shamash Patera &
-33.2 & 150.5 & 2 & 41.3 & Lp\ Sobo Fluctus & 12.9 & 152.8 & 1 &\ Prometheus & -1.5 & 153.3 & 22 & 2.8 & Lp\ Culann & -17.2 & 161.8 & 11 & 2.0 & Lp\ Zamama & 18.5 & 173.2 & 3 & 1.2 & Lp\ Illyrikon Regio & -70.8 & 179.9 & 4 & 109.2 & Lp\ Sethlaus/Gabija Paterae & -50.0 & 198.1 & 6 & 11.6 & Lp\ Isum Patera & 31.1 & 205.4 & 16 & 37.6 & Lp\ Marduk Fluctus & -23.7 & 211.1 & 87 & 10.5 & Lp\ Kurdalagon & -49.3 & 216.7 & 36 & 13.1 & Lp\ Unknown & 53.6 & 217.8 & 1 &\ Susanoo/Mulungu Paterae & 18.6 & 221.0 & 10 & 4.5 & Lp\ 201308C & 29.1 & 228.0 & 11 & 555.7 & Lp\ P17 & -3.5 & 228.8 & 1 & 1.8 & Lp\ P13 & 13.9 & 229.0 & 4 & 9.2 & Lp\ East Girru & 21.3 & 233.5 & 3 & 4.9 & Lp\ Reiden Patera & -18.0 & 234.4 & 2 & 3.5 & Lp\ Pyerun Patera & -57.7 & 237.1 & 1 & 3.9 & Ms\ SE of Pele & -34.5 & 239.5 & 30 & 3.9 & Lp\ Pillan Patera & -11.3 & 243.7 & 21 & 7.1 & Lp\ Chors Patera & 65.1 & 245.6 & 5 & 30.6 & Lp\ UP 254W & -37.1 & 254.5 & 2 & 67.7 & Lp\ Pele & -18.2 & 255.2 & 19 & 2.2 & Lp\ Shakuru Patera & 24.8 & 261.7 & 2 & 2.7 & Lp\ Mithra Patera & -58.0 & 265.6 & 4 & 25.4 & Lp\ Svarog Patera & -51.6 & 269.3 & 3 & 4.1 & Ms\ Daedalus Patera & 18.7 & 273.9 & 5 & 2.5 & Lp\ PV59 & -38.2 & 289.7 & 22 & 6.7 & Lp\ N Lerna Regio & -56.0 & 290.6 & 19 & 5.2 & Lp\ Kibero Patera & -12.5 & 297.1 & 2 & 11.7 & Lp\ Amaterasu Patera & 38.8 & 304.3 & 13 & 7.6 & Lp\ Sengen Patera & -29.8 & 305.1 & 4 & 5.2 & Lp\ Rarog Patera & -39.2 & 305.4 & 14 & 29.3 & Lp\ Heno Patera & -55.6 & 307.5 & 7 & 70.3 & Lp\ Loki Patera & 12.6 & 307.5 & 113 & 38.3 & Lp\ Shoshu Patera & -17.6 & 322.9 & 1 & 2.7 & Lp\ Tol-Ava Patera & 0.7 & 326.5 & 4 & 4.4 & Lp\ PV170 & -47.9 & 327.8 & 3 & 7.1 & Lp\ Fuchi Patera & 28.3 & 328.7 & 1 & 1.0 & Lp\ Surt & 44.4 & 334.1 & 2 & 1.2 & Ms\ Pfu1063 & 41.7 & 357.7 & 3 & 1.7 & Lp\ Paive Patera & -42.9 & 358.3 & 2 & 0.9 & Ms\ ------------------------- -------------- ----------------- ----------------- ------------------ ----------- Site Date of Peak Lat Lon I$_{max,Lp}$ Reference \[UT\] 
\[$^{\circ}$N\] \[$^{\circ}$W\] \[GW/$\mu$m/sr\] Heno Patera 08-15-2013 -56 308 270$\pm$70 c Rarog Patera 08-15-2013 -39 305 325$\pm$80 c Loki Patera$^b$ 08-22-2013 13 308 136$\pm$20 c 201308C 08-29-2013 29 228 $>$500 d Chors Patera 10-22-2014 65 246 57$\pm$19 e Mithra Patera 01-10-2015 -58 266 55$\pm$12 e Sethlaus/Gabija Paterae 04-01-2015 -50 198 33$\pm$5 e Kurdalagon$^b$ 04-05-2015 -49 217 68$\pm$11 e Amaterasu Patera 12-25-2015 39 304 43$\pm$6 e P95 05-17-2016 -10 128 58$\pm$13 Shamash Patera 06-20-2016 -33 151 53$\pm$9 Illyrikon Regio 06-27-2016 -71 180 125$\pm$69 P13 02-05-2017 14 229 23$\pm$2 Marduk Fluctus$^b$ 02-05-2017 -24 211 27$\pm$2 Pillan Patera$^b$ 02-23-2017 -11 244 27$\pm$5 Susanoo/Mulungu Paterae 01-12-2018 19 221 20$\pm$3 UP 254W 05-10-2018 -37 252 134$\pm$24 Isum Patera 05-27-2018 31 205 64$\pm$16 ------------------------- -------------- ----------------- ----------------- ------------------ ----------- : Bright eruptions$^a$, 2013-2018 \[tbl:transients\] \ $^a$All eruptions detected with $I_{max,Lp}>$20 GW/$\mu$m/sr during this time period.\ $^b$Nearly all bright eruptions were transient events at sites where activity was not otherwise detected. Exceptions are: Pillan Patera, Loki Patera, and Marduk Fluctus, which were persistently active but exhibited spikes in activity; and Kurdalagon Patera, which was not detected prior to its first eruption but remained detectable afterwards.\ $^c$de Pater et al. (2014)\ $^d$de Kleer et al. 
(2014)\ $^e$de Kleer and de Pater (2016a) ------------------- ------------- ------- -------------- ----------- Site Date $\mu$ T$^b$ Reference \[UT\] \[K\] Shamash Patera 2016-Jun-20 0.81 1000$\pm$110 2016-Jun-27 0.74 850$\pm$80 Culann 2017-Jun-16 0.81 860$\pm$140 UP 254W 2018-May-10 0.63 960$\pm$100 PV170 2014-Dec-02 0.42 850$\pm$40 c Isum Patera 2018-May-27 0.48 1200$\pm$220 2018-May-31 0.70 1180$\pm$120 2018-Jun-16 0.84 1120$\pm$100 2018-Jun-18 0.43 1440$\pm$410 2018-Jun-23 0.85 980$\pm$70 2018-Jun-25 0.62 1230$\pm$280 2018-Jun-30 0.85 1010$\pm$80 PFd1691 2018-Jan-19 0.98 830$\pm$80 Rarog Patera 2013-Aug-15 0.60 1300$\pm$200 d 2014-Feb-10 0.78 890$\pm$120 c 2015-Mar-31 0.76 950$\pm$60 c PV59 2014-Oct-31 0.59 950$\pm$200 c P95 2016-May-17 0.39 1020$\pm$180 Kurdalagon Patera 2015-Jan-26 0.54 1200$\pm$150 c 2015-Mar-31 0.26 820$\pm$110 c 2015-Apr-05 0.57 1300$\pm$200 c Tawhaki Patera 2014-Mar-11 0.54 900$\pm$170 c 2018-Jan-19 0.97 800$\pm$90 P197 2014-Mar-11 0.67 1000$\pm$250 c N Lerna Regio 2014-Dec-02 0.50 820$\pm$180 c 2015-Mar-31 0.54 940$\pm$120 c Reiden Patera 2017-Dec-12 0.90 1170$\pm$100 201308C 2014-Dec-02 0.64 850$\pm$160 e SE of Pele 2017-Dec-12 0.79 950$\pm$160 P123 2015-Jan-11 0.74 820$\pm$160 c Illyrikon Regio 2016-Jun-20 0.24 1210$\pm$690 2016-Jun-27 0.17 1060$\pm$340 ------------------- ------------- ------- -------------- ----------- : High-temperature eruptions$^a$, 2013-2018 \[tbl:highT\] $^a$All eruptions detected with T$>$800 K during this period.\ $^b$Temperatures are derived from intensities corrected for geometric foreshortening, and may be overestimated in observations with high emission angle ($\mu$) if fire fountaining is producing a substantial fraction of the short-wavelength emission.\ $^c$de Kleer and de Pater (2016a)\ $^d$de Pater et al. (2014)\ $^e$de Kleer et al. (2014) References {#references .unnumbered} ========== - [Cantrall, C., de Kleer, K., de Pater, I., et al. 
Variability and geologic associations of volcanic activity on Io in 2001-2016. Icarus 312, 267-294 (2018).]{}
- [Carlson, R.W., Weissman, P.R., Smythe, W.D., Mahoney, J.C. Near-Infrared Mapping Spectrometer experiment on Galileo. Space Sci Rev 60, 457-502 (1992).]{}
- [Carr, M.H. Silicate volcanism on Io. JGR 91, 3521-3532 (1986).]{}
- [Davies, A.G. Volcanism on Io: a Comparison with Earth, Cam. Univ. Press (2007).]{}
- [Davies, A.G., Keszthelyi, L.P., Harris, A.J.L. The thermal signature of volcanic eruptions on Io and Earth. J. Volc. Geo. Res. 194, 75-99 (2010).]{}
- [Davies, A.G., Veeder, G.J., Matson, D.L., Johnson, T.V. Io: Charting thermal emission variability with the Galileo NIMS Io Thermal Emission Database (NITED): Loki Patera. GeoRL 39, L01201, p. 1-6 (2012).]{}
- [Davies, A.G., Davies, R.L., Veeder, G.J., et al. Discovery of a powerful, transient, explosive thermal event at Marduk Fluctus, Io, in *Galileo* NIMS data. GRL 45, 2926-2933 (2018).]{}
- [de Kleer, K., de Pater, I., Davies, A.G., et al. Near-infrared monitoring of Io and detection of a violent outburst on 29 August 2013. Icarus 242, 352-364 (2014).]{}
- [de Kleer, K., de Pater, I. Time variability of Io’s volcanic activity from near-IR adaptive optics observations on 100 nights in 2013-2015. Icarus 280, 378-404 (2016a).]{}
- [de Kleer, K., de Pater, I. Spatial distribution of Io’s volcanic activity from near-IR adaptive optics observations on 100 nights in 2013-2015. Icarus 280, 405-414 (2016b).]{}
- [de Pater, I., Davies, A.G., Ádámkovics, M., Ciardi, D.R. Two new, rare, high-effusion outburst eruptions at Rarog and Heno Paterae on Io. Icarus 242, 365-378 (2014).]{}
- [de Pater, I., Davies, A.G., Marchis, F. Keck observations of eruptions on Io in 2003-2005. Icarus 274, 284-296 (2016).]{}
- [Gaskell, R.W., Synnott, S.P., McEwen, A.S., Schaber, G.G. Large-scale topography of Io - Implications for internal structure and heat transfer.
GRL 15, 581-584 (1988).]{}
- [Hamilton, C.W. et al. Spatial distribution of volcanoes on Io: Implications for tidal heating and magma ascent. Earth Planet Sci Lett 361, 272-286 (2013).]{}
- [Hodapp, K.W., Jensen, J.B., Irwin, E.M., et al. The Gemini near-infrared imager (NIRI). PASP 115, 1388-1406 (2003).]{}
- [Ivezić, Ž., Connolly, A.J., VanderPlas, J.T., Gray, A. *Statistics, Data Mining, and Machine Learning in Astronomy*, Princeton Series in Modern Observational Astronomy, pp. 140-144 (2014).]{}
- [Johnson, T.V., Veeder, G.J., Matson, D.L. et al. Io: Evidence for silicate volcanism in 1986. Science 242, 1280-1283 (1988).]{}
- [Koga, R., Tsuchiya, F., Kagitani, M., et al. The time variation of atomic oxygen emission around Io during a volcanic event observed with Hisaki/EXCEED. Icarus 299, 300-307 (2018).]{}
- [Lopes-Gautier, R., McEwen, A.S., Smythe, W.B., et al. Active volcanism on Io: Global distribution and variations in activity. Icarus 140, 243-264 (1999).]{}
- [Marchis, F., de Pater, I., Davies, A.G. et al. High-resolution Keck adaptive optics imaging of violent volcanic activity on Io. Icarus 160, 124-131 (2002).]{}
- [Masters, D.C., Stern, D.K., Cohen, J.G. et al. The Complete Calibration of the Color-Redshift Relation (C3R2) Survey: Survey Overview and Data Release 1. ApJ 841, 111, 10pp (2017).]{}
- [Morgenthaler, J.P., Rathbun, J.A., Schmidt, C.A., Baumgardner, J., Schneider, N.M. Large volcanic event on Io inferred from jovian sodium nebula brightening. ApJ Lett 871, L23, 6pp (2019).]{}
- [Rathbun, J.A., Spencer, J.R. Ground-based observations of time variability of multiple active volcanoes on Io. Icarus 209, 625-630 (2010).]{}
- [Rathbun, J.A., Howell, R.R., Spencer, J.R. Active volcanoes on Io: Putting ground-based observations of Jupiter occultations into the PDS. 49th LPSC \#2083 (2018).]{}
- [Scargle, J.D. Studies in astronomical time series analysis.
II - Statistical aspects of spectral analysis of unevenly spaced data. ApJ 263, 835-853 (1982).]{}
- [Segatz, M. et al. Tidal dissipation, surface heat flow, and figure of viscoelastic models of Io. Icarus 75, 187-206 (1988).]{}
- [Spencer, J.R., Shure, M.A., Ressler, M.E., et al. Discovery of hotspots on Io using disk-resolved infrared imaging. Nature 348, 618-621 (1990).]{}
- [Tsang, C.C.C., Rathbun, J.A., Spencer, J.R., Hesman, B.E., Abramov, O. Io’s hot spots in the near-infrared detected by LEISA during the New Horizons flyby. JGR Planets 119, 2222-2238 (2014).]{}
- [VanderPlas, J.T. Understanding the Lomb-Scargle Periodogram. ApJ Supp. 236:16, 28pp (2018).]{}
- [Veeder, G.J., Matson, D.L., Johnson, T.V., Blaney, D.L., Goguen, J.D. Io’s heat flow from infrared radiometry: 1983-1993. JGR Planets 99 (E8), 17095-17162 (1994).]{}
- [Veeder, G.J., Davies, A.G., Matson, D.L., Johnson, T.V., Williams, D.A. and Radebaugh, J. Io: Volcanic thermal sources and global heat flow. Icarus 219, 701-722 (2012).]{}
- [Veeder, G.J., et al. Io: Heat flow from small volcanic features. Icarus 245, 379-410 (2015).]{}
- [Williams, D.A. et al. Geologic map of Io, USGS Scientific Investigations Map 3168, scale 1:15,000,000 (2011a).]{}
- [Williams, D.A., Keszthelyi, L.P., Crown, D.A. Volcanism on Io: New insights from global geologic mapping. Icarus 214, 91-112 (2011b).]{}
- [Wizinowich, P., Acton, D.S., Shelton, et al. First light adaptive optics images from the Keck II telescope: A new era in high angular resolution imagery. PASP 112, 315-319 (2000).]{}
- [Yoshikawa, I., Suzuki, F., Hikida, R., et al. Volcanic activity on Io and its influence on the dynamics of the jovian magnetosphere observed by EXCEED/Hisaki in 2015. Earth, Planets and Space–Frontier Letter 69, 110, 11pp (2017).]{}
- [Zechmeister, M., Kürster, M. The generalized Lomb-Scargle periodogram.
A&A 496, 577-584 (2009).]{}

Tables
======

Table \[tbl:obs\] provides details on the observations, and Table \[tbl:hsphot\] provides the measured intensities for all hot spot detections in 2013-2018, corrected for geometric foreshortening. Both Tables \[tbl:obs\] and \[tbl:hsphot\] are available for download.

[lllll]{}
2013-08-15 & Keck/NIRC2 & 337.0 & 2.0 & 5.85\
2013-08-20 & Keck/NIRC2 & 275.0 & 2.0 & 5.79\
2013-08-21 & Keck/NIRC2 & 115.0 & 2.0 & 5.78\
2013-08-22 & Keck/NIRC2 & 319.0 & 2.0 & 5.77\
2013-08-23 & Keck/NIRC2 & 161.0 & 2.0 & 5.76\
2013-08-29 & Gemini N/NIRI & 305.7 & 1.9 & 5.69\
2013-08-30 & Gemini N/NIRI & 146.2 & 1.9 & 5.68\
2013-09-01 & Gemini N/NIRI & 192.7 & 1.9 & 5.65\
2013-09-02 & Gemini N/NIRI & 36.4 & 1.9 & 5.65\
2013-09-03 & Gemini N/NIRI & 241.4 & 1.9 & 5.63\
2013-09-04 & Gemini N/NIRI & 79.4 & 1.9 & 5.62\
2013-09-05 & Gemini N/NIRI & 285.2 & 1.9 & 5.61\
2013-09-06 & Gemini N/NIRI & 129.2 & 1.9 & 5.59\
2013-09-07 & Gemini N/NIRI & 334.4 & 1.9 & 5.58\
2013-09-09 & Gemini N/NIRI & 24.2 & 1.9 & 5.55\
2013-09-10 & Gemini N/NIRI & 226.0 & 1.9 & 5.54\
2013-11-18 & Keck/NIRC2 & 221.3 & 1.6 & 4.53\
2013-11-26 & Gemini N/NIRI & 54.3 & 1.6 & 4.44\
2013-11-27 & Gemini N/NIRI & 257.7 & 1.6 & 4.43\
2013-11-28 & Gemini N/NIRI & 101.3 & 1.6 & 4.42\
2013-11-29 & Gemini N/NIRI & 306.9 & 1.6 & 4.41\
2013-12-02 & Gemini N/NIRI & 154.6 & 1.6 & 4.38\
2013-12-03 & Gemini N/NIRI & 25.4 & 1.6 & 4.37\
2013-12-04 & Gemini N/NIRI & 202.4 & 1.6 & 4.36\
2013-12-05 & Gemini N/NIRI & 81.0 & 1.6 & 4.35\
2013-12-06 & Gemini N/NIRI & 273.1 & 1.6 & 4.34\
2013-12-12 & Gemini N/NIRI & 72.3 & 1.6 & 4.29\
2013-12-13 & Gemini N/NIRI & 231.2 & 1.6 & 4.29\
2013-12-14 & Gemini N/NIRI & 118.9 & 1.6 & 4.28\
2013-12-15 & Gemini N/NIRI & 320.4 & 1.6 & 4.28\
2014-01-20 & Keck/NIRC2 & 34.2 & 1.6 & 4.25\
2014-02-08 & Keck/NIRC2 & 274.0 & 1.6 & 4.39\
2014-02-10 & Keck/NIRC2 & 318.1 & 1.6 & 4.41\
2014-03-07 & Gemini N/NIRI & 35.7 & 1.6 & 4.74\
2014-03-10 & Gemini N/NIRI & 257.7 & 1.6 & 4.78\
2014-03-11 & Keck/NIRC2 & 131.8 & 1.6 & 4.79\
2014-03-11 & Gemini N/NIRI & 98.9 & 1.6 & 4.79\
2014-03-12 & Gemini N/NIRI & 302.4 & 1.6 & 4.81\
2014-03-14 & Gemini N/NIRI & 23.6 & 1.6 & 4.85\
2014-03-27 & Gemini N/NIRI & 129.1 & 1.6 & 5.05\
2014-03-28 & Gemini N/NIRI & 337.0 & 1.6 & 5.07\
2014-04-03 & Gemini N/NIRI & 105.8 & 1.6 & 5.16\
2014-10-03 & Gemini N/NIRI & 307.5 & 0.2 & 5.82\
2014-10-09 & Gemini N/NIRI & 86.8 & 0.2 & 5.74\
2014-10-10 & Gemini N/NIRI & 290.8 & 0.1 & 5.73\
2014-10-22 & Gemini N/NIRI & 209.8 & 0.0 & 5.55\
2014-10-23 & Gemini N/NIRI & 55.8 & 0.0 & 5.54\
2014-10-24 & Gemini N/NIRI & 258.4 & 0.0 & 5.53\
2014-10-25 & Gemini N/NIRI & 95.0 & 0.0 & 5.51\
2014-10-27 & Gemini N/NIRI & 145.8 & 0.0 & 5.48\
2014-10-30 & Keck/NIRC2 & 39.0 & 0.0 & 5.44\
2014-10-31 & Keck/NIRC2 & 241.8 & 0.0 & 5.42\
2014-11-25 & Gemini N/NIRI & 283.9 & -0.1 & 5.03\
2014-11-27 & Gemini N/NIRI & 334.2 & -0.1 & 5.0\
2014-11-28 & Gemini N/NIRI & 153.1 & -0.1 & 4.99\
2014-11-29 & Gemini N/NIRI & 22.1 & -0.1 & 4.97\
2014-11-30 & Gemini N/NIRI & 227.9 & -0.1 & 4.95\
2014-12-01 & Gemini N/NIRI & 71.0 & -0.1 & 4.94\
2014-12-02 & Keck/NIRC2 & 275.7 & -0.1 & 4.92\
2014-12-06 & Gemini N/NIRI & 329.3 & -0.2 & 4.87\
2014-12-08 & Gemini N/NIRI & 50.0 & -0.2 & 4.84\
2014-12-09 & Gemini N/NIRI & 249.7 & -0.2 & 4.82\
2014-12-10 & Gemini N/NIRI & 97.6 & -0.2 & 4.81\
2014-12-15 & Gemini N/NIRI & 40.0 & -0.2 & 4.74\
2014-12-16 & Gemini N/NIRI & 247.0 & -0.2 & 4.73\
2014-12-18 & Gemini N/NIRI & 293.3 & -0.2 & 4.7\
2015-01-10 & Gemini N/NIRI & 296.0 & -0.2 & 4.46\
2015-01-11 & Keck/NIRC2 & 137.7 & -0.2 & 4.44\
2015-01-12 & Keck/NIRC2 & 338.5 & -0.2 & 4.44\
2015-01-13 & Gemini N/NIRI & 134.0 & -0.2 & 4.32\
2015-01-14 & Gemini N/NIRI & 327.6 & -0.2 & 4.43\
2015-01-15 & Gemini N/NIRI & 230.7 & -0.2 & 4.42\
2015-01-16 & Keck/NIRC2 & 74.3 & -0.2 & 4.41\
2015-01-22 & Gemini N/NIRI & 153.6 & -0.2 & 4.38\
2015-01-26 & Gemini N/NIRI & 249.7 & -0.2 & 4.36\
2015-03-25 & Gemini N/NIRI & 149.4 & -0.0 & 4.66\
2015-03-26 & Gemini N/NIRI & 29.1 & -0.0 & 4.68\
2015-03-27 & Gemini N/NIRI & 213.2 & -0.0 & 4.69\
2015-03-28 & Gemini N/NIRI & 41.8 & -0.0 & 4.71\
2015-03-29 & Gemini N/NIRI & 244.2 & -0.0 & 4.72\
2015-03-31 & Keck/NIRC2 & 291.5 & -0.0 & 4.74\
2015-04-01 & Keck/NIRC2 & 134.9 & -0.0 & 4.75\
2015-04-02 & Keck/NIRC2 & 339.1 & -0.0 & 4.77\
2015-04-04 & Keck/NIRC2 & 27.9 & -0.0 & 4.80\
2015-04-05 & Gemini N/NIRI & 250.6 & -0.0 & 4.81\
2015-04-06 & Gemini N/NIRI & 70.3 & -0.0 & 4.83\
2015-04-09 & Gemini N/NIRI & 322.0 & -0.0 & 4.87\
2015-04-17 & Gemini N/NIRI & 149.0 & -0.0 & 4.99\
2015-04-19 & Gemini N/NIRI & 197.3 & -0.0 & 5.02\
2015-04-20 & Gemini N/NIRI & 39.8 & -0.0 & 5.04\
2015-04-21 & Gemini N/NIRI & 243.6 & -0.0 & 5.03\
2015-04-22 & Gemini N/NIRI & 88.7 & -0.0 & 5.07\
2015-04-26 & Gemini N/NIRI & 206.4 & -0.0 & 5.13\
2015-04-27 & Gemini N/NIRI & 23.8 & -0.0 & 5.15\
2015-04-29 & Keck/NIRC2 & 72.9 & -0.0 & 5.18\
2015-05-05 & Keck/NIRC2 & 213.5 & -0.0 & 5.27\
2015-06-05 & Keck/NIRC2 & 38.9 & -0.1 & 5.75\
2015-11-23 & Keck/NIRC2 & 327.2 & -1.5 & 5.65\
2015-12-22 & Keck/NIRC2 & 328.5 & -1.8 & 5.15\
2015-12-25 & Keck/NIRC2 & 106.5 & -1.8 & 5.19\
2016-01-30 & Gemini N/NIRI & 124.5 & -1.9 & 4.65\
2016-02-03 & Gemini N/NIRI & 220.1 & -1.9 & 4.60\
2016-02-04 & Gemini N/NIRI & 65.2 & -1.9 & 4.60\
2016-02-09 & Gemini N/NIRI & 323.6 & -1.9 & 4.56\
2016-02-11 & Gemini N/NIRI & 44.4 & -1.9 & 4.54\
2016-02-15 & Gemini N/NIRI & 122.0 & -1.9 & 4.51\
2016-02-16 & Gemini N/NIRI & 299.5 & -1.9 & 4.50\
2016-02-17 & Gemini N/NIRI & 144.5 & -1.9 & 4.49\
2016-02-18 & Gemini N/NIRI & 18.2 & -1.9 & 4.49\
2016-02-19 & Gemini N/NIRI & 206.6 & -1.9 & 4.48\
2016-02-20 & Gemini N/NIRI & 51.4 & -1.9 & 4.48\
2016-02-21 & Gemini N/NIRI & 257.1 & -1.9 & 4.47\
2016-02-23 & Gemini N/NIRI & 296.4 & -1.9 & 4.46\
2016-03-11 & Gemini N/NIRI & 156.0 & -1.9 & 4.43\
2016-03-12 & Gemini N/NIRI & 336.6 & -1.9 & 4.44\
2016-03-13 & Gemini N/NIRI & 199.0 & -1.8 & 4.43\
2016-03-14 & Gemini N/NIRI & 27.8 & -1.8 & 4.44\
2016-04-30 & Gemini N/NIRI & 201.4 & -1.6 & 4.81\
2016-05-01 & Gemini N/NIRI & 45.0 & -1.6 & 4.83\
2016-05-02 & Gemini N/NIRI & 250.7 & -1.6 & 4.84\
2016-05-03 & Gemini N/NIRI & 94.1 & -1.6 & 4.86\
2016-05-04 & Gemini N/NIRI & 299.5 & -1.6 & 4.87\
2016-05-08 & Gemini N/NIRI & 30.3 & -1.6 & 4.93\
2016-05-09 & Gemini N/NIRI & 238.2 & -1.6 & 4.94\
2016-05-10 & Gemini N/NIRI & 77.1 & -1.6 & 4.95\
2016-05-11 & Gemini N/NIRI & 281.0 & -1.6 & 4.97\
2016-05-12 & Gemini N/NIRI & 132.8 & -1.6 & 4.98\
2016-05-13 & Gemini N/NIRI & 343.2 & -1.5 & 5.00\
2016-05-14 & Gemini N/NIRI & 201.4 & -1.5 & 5.01\
2016-05-15 & Keck/NIRC2 & 35.5 & -1.5 & 5.03\
2016-05-16 & Gemini N/NIRI & 218.5 & -1.5 & 5.04\
2016-05-17 & Gemini N/NIRI & 61.3 & -1.5 & 5.06\
2016-05-18 & Gemini N/NIRI & 264.0 & -1.5 & 5.07\
2016-05-19 & Gemini N/NIRI & 125.6 & -1.5 & 5.09\
2016-05-20 & Gemini N/NIRI & 314.4 & -1.5 & 5.11\
2016-05-23 & Gemini N/NIRI & 203.5 & -1.5 & 5.15\
2016-05-24 & Gemini N/NIRI & 47.8 & -1.5 & 5.17\
2016-05-25 & Gemini N/NIRI & 261.6 & -1.5 & 5.18\
2016-05-27 & Gemini N/NIRI & 299.1 & -1.5 & 5.21\
2016-05-28 & Gemini N/NIRI & 140.3 & -1.5 & 5.23\
2016-05-31 & Gemini N/NIRI & 35.4 & -1.5 & 5.28\
2016-06-01 & Gemini N/NIRI & 234.0 & -1.5 & 5.29\
2016-06-02 & Gemini N/NIRI & 78.5 & -1.5 & 5.31\
2016-06-03 & Gemini N/NIRI & 281.5 & -1.5 & 5.32\
2016-06-04 & Gemini N/NIRI & 125.6 & -1.5 & 5.34\
2016-06-05 & Gemini N/NIRI & 329.2 & -1.5 & 5.36\
2016-06-07 & Gemini N/NIRI & 27.9 & -1.5 & 5.39\
2016-06-08 & Gemini N/NIRI & 218.5 & -1.5 & 5.40\
2016-06-09 & Gemini N/NIRI & 62.2 & -1.5 & 5.42\
2016-06-10 & Gemini N/NIRI & 264.3 & -1.5 & 5.43\
2016-06-12 & Gemini N/NIRI & 313.8 & -1.5 & 5.46\
2016-06-16 & Gemini N/NIRI & 45.3 & -1.5 & 5.53\
2016-06-17 & Gemini N/NIRI & 249.0 & -1.5 & 5.43\
2016-06-18 & Gemini N/NIRI & 92.7 & -1.5 & 5.56\
2016-06-19 & Gemini N/NIRI & 296.9 & -1.5 & 5.57\
2016-06-20 & Gemini N/NIRI & 138.3 & -1.5 & 5.58\
2016-06-24 & Gemini N/NIRI & 232.6 & -1.5 & 5.65\
2016-06-25 & Gemini N/NIRI & 76.4 & -1.5 & 5.66\
2016-06-27 & Gemini N/NIRI & 122.9 & -1.5 & 5.69\
2016-06-28 & Gemini N/NIRI & 326.9 & -1.5 & 5.71\
2016-11-18 & Gemini N/NIRI & 323.4 & -2.4 & 6.14\
2016-11-22 & Gemini N/NIRI & 55.1 & -2.5 & 6.10\
2016-11-23 & Gemini N/NIRI & 257.1 & -2.5 & 6.09\
2016-11-24 & Gemini N/NIRI & 103.3 & -2.5 & 6.07\
2016-11-29 & Gemini N/NIRI & 38.4 & -2.5 & 6.02\
2016-12-22 & Keck/NIRC2 & 39.5 & -2.7 & 5.69\
2016-12-23 & Keck/NIRC2 & 240.0 & -2.7 & 5.67\
2016-12-24 & Gemini N/NIRI & 85.6 & -2.7 & 5.66\
2017-01-02 & Gemini N/NIRI & 114.2 & -2.8 & 5.51\
2017-01-03 & Keck/NIRC2 & 314.1 & -2.8 & 5.50\
2017-01-04 & Keck/NIRC2 & 156.0 & -2.8 & 5.48\
2017-01-07 & Keck/NIRC2 & 31.9 & -2.8 & 5.44\
2017-01-08 & Keck/NIRC2 & 253.8 & -2.8 & 5.42\
2017-01-09 & Gemini N/NIRI & 99.9 & -2.8 & 5.40\
2017-01-12 & Gemini N/NIRI & 333.8 & -2.8 & 5.36\
2017-01-14 & Gemini N/NIRI & 37.1 & -2.9 & 5.32\
2017-01-15 & Gemini N/NIRI & 236.9 & -2.9 & 5.30\
2017-01-18 & Gemini N/NIRI & 115.6 & -2.9 & 5.26\
2017-01-20 & Gemini N/NIRI & 148.1 & -2.9 & 5.23\
2017-01-22 & Gemini N/NIRI & 220.5 & -2.9 & 5.19\
2017-01-23 & Gemini N/NIRI & 53.0 & -2.9 & 5.18\
2017-01-23 & Keck/NIRC2 & 69.3 & -2.9 & 5.18\
2017-01-24 & Keck/NIRC2 & 274.6 & -2.9 & 5.16\
2017-01-25 & Gemini N/NIRI & 113.9 & -2.9 & 5.15\
2017-01-26 & Gemini N/NIRI & 317.6 & -2.9 & 5.13\
2017-01-27 & Gemini N/NIRI & 149.0 & -2.9 & 5.11\
2017-01-30 & Gemini N/NIRI & 49.8 & -2.9 & 5.07\
2017-01-31 & Gemini N/NIRI & 256.6 & -2.9 & 5.50\
2017-02-05 & Keck/NIRC2 & 194.2 & -3.0 & 4.97\
2017-02-06 & Keck/NIRC2 & 37.6 & -3.0 & 4.96\
2017-02-23 & Gemini N/NIRI & 229.9 & -3.0 & 4.73\
2017-02-24 & Gemini N/NIRI & 62.4 & -3.0 & 4.73\
2017-03-04 & Gemini N/NIRI & 246.5 & -3.0 & 4.64\
2017-03-05 & Gemini N/NIRI & 97.1 & -3.0 & 4.63\
2017-03-06 & Gemini N/NIRI & 298.4 & -3.0 & 4.62\
2017-03-29 & Gemini N/NIRI & 312.5 & -3.0 & 4.47\
2017-04-02 & Gemini N/NIRI & 26.0 & -3.0 & 4.46\
2017-04-03 & Gemini N/NIRI & 233.1 & -3.0 & 4.45\
2017-04-04 & Gemini N/NIRI & 67.3 & -3.0 & 4.45\
2017-05-04 & Gemini N/NIRI & 63.5 & -2.8 & 4.55\
2017-05-05 & Gemini N/NIRI & 253.7 & -2.8 & 4.55\
2017-05-06 & Gemini N/NIRI & 76.4 & -2.8 & 4.56\
2017-05-07 & Gemini N/NIRI & 272.2 & -2.8 & 4.57\
2017-05-09 & Gemini N/NIRI & 328.2 & -2.8 & 4.59\
2017-05-10 & Gemini N/NIRI & 162.7 & -2.8 & 4.59\
2017-05-11 & Gemini N/NIRI & 23.6 & -2.8 & 4.61\
2017-05-12 & Gemini N/NIRI & 207.0 & -2.8 & 4.61\
2017-05-14 & Gemini N/NIRI & 256.3 & -2.8 & 4.63\
2017-05-22 & Gemini N/NIRI & 83.8 & -2.7 & 4.72\
2017-05-23 & Gemini N/NIRI & 287.3 & -2.7 & 4.73\
2017-05-24 & Gemini N/NIRI & 129.5 & -2.7 & 4.74\
2017-05-25 & Gemini N/NIRI & 334.2 & -2.7 & 4.75\
2017-05-27 & Gemini N/NIRI & 20.5 & -2.7 & 4.78\
2017-05-27 & Keck/NIRC2 & 19.5 & -2.7 & 4.78\
2017-05-28 & Keck/NIRC2 & 222.9 & -2.7 & 4.79\
2017-05-29 & Gemini N/NIRI & 68.6 & -2.7 & 4.80\
2017-05-30 & Gemini N/NIRI & 272.3 & -2.7 & 4.81\
2017-05-31 & Gemini N/NIRI & 117.3 & -2.7 & 4.82\
2017-06-03 & Gemini N/NIRI & 32.0 & -2.7 & 4.87\
2017-06-15 & Gemini N/NIRI & 288.3 & -2.6 & 5.03\
2017-06-16 & Keck/NIRC2 & 130.1 & -2.6 & 5.05\
2017-06-22 & Gemini N/NIRI & 273.4 & -2.6 & 5.14\
2017-06-23 & Gemini N/NIRI & 115.5 & -2.6 & 5.15\
2017-06-24 & Gemini N/NIRI & 320.1 & -2.6 & 5.17\
2017-06-27 & Gemini N/NIRI & 210.5 & -2.6 & 5.21\
2017-06-28 & Gemini N/NIRI & 53.6 & -2.6 & 5.23\
2017-06-29 & Gemini N/NIRI & 257.8 & -2.6 & 5.24\
2017-06-30 & Gemini N/NIRI & 100.7 & -2.6 & 5.26\
2017-07-01 & Gemini N/NIRI & 305.7 & -2.6 & 5.28\
2017-07-04 & Gemini N/NIRI & 202.3 & -2.6 & 5.32\
2017-07-05 & Gemini N/NIRI & 37.0 & -2.6 & 5.34\
2017-07-06 & Gemini N/NIRI & 243.6 & -2.6 & 5.35\
2017-07-07 & Gemini N/NIRI & 84.2 & -2.6 & 5.37\
2017-07-08 & Gemini N/NIRI & 287.9 & -2.6 & 5.39\
2017-07-09 & Gemini N/NIRI & 130.4 & -2.6 & 5.40\
2017-07-10 & Gemini N/NIRI & 336.0 & -2.6 & 5.42\
2017-07-21 & Keck/NIRC2 & 50.1 & -2.5 & 5.59\
2017-07-23 & Keck/NIRC2 & 95.5 & -2.5 & 5.61\
2017-07-31 & Keck/NIRC2 & 282.9 & -2.5 & 5.73\
2017-12-11 & Keck/NIRC2 & 50.1 & -3.0 & 6.19\
2017-12-12 & Keck/NIRC2 & 252.4 & -3.0 & 6.18\
2017-12-13 & Keck/NIRC2 & 94.6 & -3.0 & 6.17\
2017-12-31 & Keck/NIRC2 & 157.0 & -3.1 & 5.95\
2018-01-02 & Keck/NIRC2 & 201.5 & -3.1 & 5.93\
2018-01-10 & Keck/NIRC2 & 30.3 & -3.1 & 5.82\
2018-01-11 & Keck/NIRC2 & 223.5 & -3.1 & 5.80\
2018-01-12 & Keck/NIRC2 & 283.1 & -3.1 & 5.78\
2018-01-14 & Keck/NIRC2 & 125.0 & -3.2 & 5.76\
2018-01-17 & Keck/NIRC2 & 17.7 & -3.2 & 5.72\
2018-01-19 & Keck/NIRC2 & 62.2 & -3.2 & 5.69\
2018-03-02 & Gemini N/NIRI & 319.0 & -3.3 & 5.02\
2018-03-15 & Gemini N/NIRI & 84.4 & -3.4 & 4.82\
2018-03-17 & Gemini N/NIRI & 132.6 & -3.4 & 4.78\
2018-04-24 & Gemini N/NIRI & 283.7 & -3.4 & 4.43\
2018-04-25 & Gemini N/NIRI & 133.9 & -3.4 & 4.43\
2018-05-07 & Gemini N/NIRI & 51.7 & -3.3 & 4.40\
2018-05-10 & Gemini N/NIRI & 289.4 & -3.3 & 4.40\
2018-05-27 & Gemini N/NIRI & 150.3 & -3.3 & 4.44\
2018-05-31 & Gemini N/NIRI & 240.3 & -3.3 & 4.46\
2018-06-02 & Gemini N/NIRI & 283.8 & -3.2 & 4.47\
2018-06-06 & Gemini N/NIRI & 18.7 & -3.2 & 4.50\
2018-06-15 & Gemini N/NIRI & 18.9 & -3.2 & 4.58\
2018-06-16 & Gemini N/NIRI & 214.9 & -3.2 & 4.58\
2018-06-17 & Gemini N/NIRI & 61.1 & -3.2 & 4.60\
2018-06-18 & Gemini N/NIRI & 264.8 & -3.2 & 4.61\
2018-06-22 & Gemini N/NIRI & 21.7 & -3.1 & 4.65\
2018-06-23 & Gemini N/NIRI & 200.5 & -3.1 & 4.66\
2018-06-25 & Gemini N/NIRI & 249.0 & -3.1 & 4.68\
2018-06-30 & Gemini N/NIRI & 203.9 & -3.1 & 4.74\
2018-07-01 & Gemini N/NIRI & 29.6 & -3.1 & 4.76\
2018-07-12 & Gemini N/NIRI & 106.6 & -3.1 & 4.90

[l | l | ll | llllllll]{} Nusku Patera & 2017-May-27 & -65.0$\pm$1.1 & 6.2$\pm$1.9 & & & & & & 4.0$\pm$1.0 & &\ & **Average** & **-65.0** & **6.2** &\ Uta & 2013-Dec-03 & -33.8$\pm$2.0 & 20.1$\pm$2.1 & & & & 2.7$\pm$1.3 & & & &\ & 2014-Jan-20 & -34.9$\pm$0.8 & 20.9$\pm$0.8 & & & 1.6$\pm$0.2 & 3.1$\pm$0.5 & & & 5.4$\pm$0.8 &\ & 2014-Feb-10 & -34.1$\pm$0.5 & 23.4$\pm$2.1 & & & & 5.1$\pm$2.6 & & & 4.6$\pm$0.9
&\ & 2014-Mar-07 & -34.9$\pm$2.0 & 25.9$\pm$2.2 & & & & 2.2$\pm$1.1 & & & &\ & 2014-Mar-14 & -34.7$\pm$2.0 & 24.8$\pm$2.1 & & & & 3.3$\pm$0.5 & & & &\ & 2014-Oct-30 & -33.7$\pm$0.8 & 21.2$\pm$0.8 & & & & 3.0$\pm$0.4 & & & 5.8$\pm$0.9 &\ & 2014-Nov-29 & -35.5$\pm$2.1 & 20.6$\pm$2.1 & & & & 3.2$\pm$0.6 & & & &\ & 2014-Dec-08 & -34.1$\pm$1.6 & 19.1$\pm$2.5 & & & & 3.4$\pm$1.7 & & & &\ & 2014-Dec-15 & -34.2$\pm$1.8 & 19.6$\pm$2.3 & & & & 2.9$\pm$1.4 & & & &\ & 2015-Jan-12 & -35.4$\pm$1.0 & 22.6$\pm$1.8 & & & & 3.2$\pm$1.6 & & & 4.9$\pm$0.7 &\ & 2015-Jan-16 & -34.1$\pm$1.0 & 21.0$\pm$1.8 & & & & 3.4$\pm$1.7 & & & 4.6$\pm$0.7 &\ & 2015-Mar-26 & -37.0$\pm$2.0 & 22.1$\pm$2.2 & & & & 3.4$\pm$0.5 & & & &\ & 2015-Mar-28 & -34.6$\pm$1.8 & 25.3$\pm$2.3 & & & & 3.2$\pm$0.5 & & & &\ & 2015-Apr-02 & -33.5$\pm$0.6 & 23.9$\pm$1.3 & & & & 3.5$\pm$0.8 & & & 4.7$\pm$0.7 &\ & 2015-Apr-04 & -33.9$\pm$0.5 & 24.8$\pm$0.5 & & & & 3.9$\pm$0.6 & & & 6.1$\pm$0.9 &\ & 2015-Apr-20 & -34.6$\pm$1.9 & 27.2$\pm$2.2 & & & & 3.7$\pm$0.6 & & & &\ & 2015-Apr-27 & -35.8$\pm$2.0 & 24.4$\pm$2.1 & & & & 4.0$\pm$0.6 & & & &\ & 2015-Apr-29 & -34.7$\pm$0.7 & 23.4$\pm$1.2 & & & & 4.1$\pm$0.9 & & & 5.4$\pm$0.8 &\ & 2015-Jun-05 & -32.2$\pm$0.7 & 23.5$\pm$0.9 & & & 3.3$\pm$0.5 & 4.8$\pm$0.7 & & & 7.1$\pm$1.1 &\ & 2015-Nov-23 & -35.5$\pm$0.7 & 21.3$\pm$1.9 & & & & 3.3$\pm$1.7 & & & 5.2$\pm$0.8 &\ & 2015-Dec-25 & -34.6$\pm$0.6 & 22.7$\pm$1.9 & & & & 4.0$\pm$1.4 & & & 6.1$\pm$0.9 &\ & 2016-Feb-11 & -32.3$\pm$1.3 & 18.1$\pm$1.8 & & & & 2.1$\pm$0.4 & & & &\ & 2016-Feb-18 & -31.9$\pm$1.2 & 18.2$\pm$1.3 & & & & 3.8$\pm$0.7 & & & &\ & 2016-Feb-20 & -32.3$\pm$1.3 & 22.2$\pm$1.9 & & & & 2.8$\pm$0.5 & & & &\ & 2016-Mar-12 & -35.9$\pm$1.3 & 19.6$\pm$2.8 & & & & 4.0$\pm$0.8 & & & &\ & 2016-Mar-14 & -32.7$\pm$1.2 & 20.9$\pm$1.3 & & & & 2.0$\pm$0.4 & & & &\ & 2016-May-01 & -32.0$\pm$1.4 & 19.5$\pm$1.9 & & & & 2.7$\pm$0.5 & & & &\ & 2016-May-08 & -32.8$\pm$1.4 & 24.1$\pm$1.5 & & & & 2.5$\pm$0.5 & & & &\ & 2016-May-13 & 
-34.7$\pm$1.4 & 22.2$\pm$2.8 & & & & 5.1$\pm$1.0 & & & &\ & 2016-May-15 & -35.4$\pm$0.5 & 23.5$\pm$1.7 & & & 2.6$\pm$0.4 & 3.2$\pm$0.5 & & & 5.4$\pm$0.8 &\ & 2016-May-17 & -33.3$\pm$1.4 & 21.0$\pm$2.6 & & & & 4.6$\pm$0.9 & & & &\ & 2016-May-24 & -31.5$\pm$1.4 & 22.2$\pm$2.0 & & & & 3.1$\pm$0.6 & & & &\ & 2016-May-31 & -32.0$\pm$1.5 & 21.9$\pm$1.7 & & & & 3.7$\pm$0.7 & & & &\ & 2016-Jun-07 & -34.7$\pm$1.5 & 24.0$\pm$1.6 & & & & 3.4$\pm$0.6 & & & &\ & 2016-Jun-16 & -34.9$\pm$1.6 & 18.5$\pm$2.3 & & & & 3.7$\pm$0.7 & & & &\ & 2016-Dec-22 & -35.3$\pm$1.1 & 20.1$\pm$1.2 & & & 3.3$\pm$0.5 & 6.6$\pm$1.0 & 7.8$\pm$1.2 & 9.7$\pm$1.5 & 7.5$\pm$1.1 &\ & 2017-Jan-03 & -34.4$\pm$0.9 & 17.9$\pm$1.5 & & & & & & 8.0$\pm$1.4 & 8.9$\pm$1.3 &\ & 2017-Jan-07 & -35.6$\pm$1.2 & 20.4$\pm$1.5 & & & & 3.7$\pm$0.6 & 5.3$\pm$0.8 & 6.6$\pm$1.5 & 7.0$\pm$1.0 &\ & 2017-Jan-14 & -33.6$\pm$1.5 & 19.6$\pm$1.9 & & & & 3.1$\pm$0.5 & & & &\ & 2017-Jan-23 & -34.7$\pm$0.7 & 20.6$\pm$1.5 & & & & 5.0$\pm$0.7 & & & &\ & 2017-Jan-30 & -36.5$\pm$1.5 & 19.1$\pm$2.3 & & & & 3.2$\pm$0.6 & & & &\ & 2017-Feb-06 & -35.6$\pm$1.0 & 19.2$\pm$0.6 & & & & 3.5$\pm$0.5 & 4.5$\pm$0.7 & 4.7$\pm$0.7 & 7.3$\pm$1.1 &\ & 2017-Apr-02 & -31.5$\pm$1.2 & 19.7$\pm$1.3 & & & & 2.1$\pm$0.4 & & & &\ & 2017-Apr-04 & -34.4$\pm$1.3 & 21.0$\pm$2.6 & & & & 4.1$\pm$0.7 & & & &\ & 2017-May-11 & -33.1$\pm$1.3 & 20.3$\pm$1.3 & & & & 3.2$\pm$0.5 & & & &\ & 2017-May-27 & -34.4$\pm$0.8 & 24.7$\pm$0.8 & & 1.1$\pm$0.2 & 2.1$\pm$0.3 & 3.5$\pm$0.5 & 7.0$\pm$1.2 & 9.8$\pm$2.4 & 8.3$\pm$1.2 &\ & 2017-Jun-03 & -34.0$\pm$1.4 & 22.6$\pm$1.6 & & & & 2.5$\pm$0.4 & & & &\ & 2017-Jul-05 & -34.7$\pm$1.5 & 21.3$\pm$1.9 & & & & 3.5$\pm$0.6 & & & &\ & 2017-Jul-21 & -35.8$\pm$0.5 & 25.7$\pm$0.7 & & & & 6.6$\pm$1.2 & 7.9$\pm$1.2 & & 8.3$\pm$1.2 &\ & 2017-Dec-11 & -33.9$\pm$0.4 & 20.6$\pm$0.3 & & & & 4.5$\pm$0.7 & & & &\ & 2018-Jan-17 & -33.8$\pm$0.7 & 19.6$\pm$0.8 & & & & 4.7$\pm$0.7 & & & &\ & 2018-Jan-19 & -36.0$\pm$0.3 & 16.8$\pm$0.0 & & & & 5.7$\pm$0.9 & & & 
9.1$\pm$1.4 &\\
& 2018-May-07 & -35.8$\pm$1.3 & 22.0$\pm$2.0 & & & & 3.0$\pm$0.5 & & & &\\
& 2018-Jun-06 & -31.0$\pm$1.2 & 19.7$\pm$1.3 & & & & 2.4$\pm$0.4 & & & &\\
& 2018-Jun-15 & -32.5$\pm$1.3 & 20.0$\pm$1.3 & & & & 2.9$\pm$0.4 & & & &\\
& 2018-Jun-22 & -30.1$\pm$1.2 & 17.9$\pm$1.3 & & & & 3.0$\pm$0.5 & & & &\\
& 2018-Jul-01 & -33.1$\pm$1.3 & 21.3$\pm$1.5 & & & & 5.3$\pm$0.8 & & & &\\
& **Average** & **-34.4** & **21.0** &\\
Kanehekili Fluctus & 2014-Jan-20 & -17.3$\pm$0.7 & 32.3$\pm$1.1 & & & & 1.5$\pm$0.4 & & & 3.3$\pm$0.5 &\\
& 2014-Oct-30 & -19.1$\pm$1.2 & 30.9$\pm$0.7 & & & & 1.0$\pm$0.4 & & & 1.5$\pm$0.2 &\\
& 2015-Jan-16 & -16.6$\pm$1.8 & 31.3$\pm$2.3 & & & & & & & 1.4$\pm$0.7 &\\
& 2015-Apr-02 & -15.7$\pm$0.8 & 34.4$\pm$1.9 & & & & & & & 1.4$\pm$0.5 &\\
& 2015-Apr-04 & -17.6$\pm$0.9 & 35.2$\pm$0.5 & & & & 1.0$\pm$0.5 & & & 2.0$\pm$0.3 &\\
& 2015-Apr-29 & -15.2$\pm$1.0 & 36.5$\pm$1.2 & & & & & & & 1.4$\pm$0.5 &\\
& 2015-Jun-05 & -16.1$\pm$1.5 & 34.5$\pm$0.6 & & & & & & & 1.3$\pm$0.4 &\\
& 2017-May-27 & -17.7$\pm$0.8 & 34.5$\pm$0.3 & & & & & 1.7$\pm$0.3 & 2.4$\pm$0.6 & &\\
& **Average** & **-17.0** & **34.5** &\\
Janus Patera & 2013-Sep-02 & -4.5$\pm$1.4 & 37.0$\pm$1.4 & & & & 5.0$\pm$0.7 & & & &\\
& 2013-Sep-04 & -4.4$\pm$1.7 & 36.2$\pm$2.4 & & & & 4.6$\pm$0.8 & & & &\\
& 2013-Sep-07 & -4.6$\pm$1.4 & 37.3$\pm$3.5 & & & & 11$\pm$2 & & & &\\
& 2013-Sep-09 & -5.0$\pm$1.3 & 36.5$\pm$1.4 & & & & 5.2$\pm$0.8 & & & &\\
& 2013-Nov-26 & -4.8$\pm$1.1 & 37.9$\pm$1.1 & & & & 3.5$\pm$0.5 & & & &\\
& 2013-Dec-03 & -4.0$\pm$1.0 & 37.7$\pm$1.1 & & & & 6.1$\pm$1.1 & & & &\\
& 2013-Dec-05 & -4.4$\pm$1.7 & 38.0$\pm$2.4 & & & & 2.5$\pm$1.2 & & & &\\
& 2013-Dec-12 & -5.3$\pm$1.8 & 34.1$\pm$2.3 & & & & 3.9$\pm$0.6 & & & &\\
& 2014-Jan-20 & -4.8$\pm$0.5 & 38.0$\pm$0.7 & & 3.5$\pm$0.5 & 4.2$\pm$0.6 & 4.4$\pm$0.7 & & & 4.7$\pm$0.7 &\\
& 2014-Mar-07 & -3.7$\pm$1.1 & 39.7$\pm$1.1 & & & & 3.0$\pm$0.5 & & & &\\
& 2014-Mar-14 & -3.9$\pm$1.2 & 39.0$\pm$1.2 & & & & 3.6$\pm$0.5 & & & &\\
& 2014-Oct-09 & -3.8$\pm$1.6 & 39.6$\pm$2.4 & & & & 5.3$\pm$1.0 & & & &\\
& 2014-Oct-23 & -3.2$\pm$1.3 & 40.5$\pm$1.4 & & & & 4.1$\pm$0.6 & & & &\\
& 2014-Oct-30 & -4.8$\pm$0.4 & 36.6$\pm$0.4 & & & & 5.1$\pm$0.8 & & & 4.8$\pm$0.7 &\\
& 2014-Nov-27 & -4.3$\pm$1.2 & 39.2$\pm$3.4 & & & & 8.4$\pm$4.2 & & & &\\
& 2014-Nov-29 & -4.3$\pm$1.2 & 37.4$\pm$1.3 & & & & 4.6$\pm$0.7 & & & &\\
& 2014-Dec-01 & -4.8$\pm$1.8 & 37.8$\pm$2.3 & & & & 3.7$\pm$0.7 & & & &\\
& 2014-Dec-08 & -5.0$\pm$1.2 & 35.6$\pm$1.2 & & & & 5.1$\pm$0.8 & & & &\\
& 2014-Dec-15 & -5.4$\pm$1.1 & 36.9$\pm$1.1 & & & & 4.7$\pm$0.7 & & & &\\
& 2015-Jan-12 & -4.8$\pm$0.6 & 34.6$\pm$1.5 & & & 5.0$\pm$0.7 & 6.0$\pm$0.9 & & & 5.7$\pm$0.9 &\\
& 2015-Jan-16 & -4.5$\pm$0.4 & 37.0$\pm$0.8 & & & & 3.9$\pm$0.6 & & & 4.5$\pm$0.7 &\\
& 2015-Mar-26 & -3.7$\pm$1.1 & 38.5$\pm$1.2 & & & & 4.5$\pm$0.7 & & & &\\
& 2015-Mar-28 & -4.9$\pm$1.1 & 41.3$\pm$1.1 & & & & 3.9$\pm$0.6 & & & &\\
& 2015-Apr-02 & -4.4$\pm$0.6 & 40.8$\pm$1.3 & & & & 4.5$\pm$1.2 & & & 4.4$\pm$0.7 &\\
& 2015-Apr-04 & -4.1$\pm$0.4 & 40.9$\pm$0.5 & & & & 5.9$\pm$0.9 & & & 5.7$\pm$0.9 &\\
& 2015-Apr-06 & -3.6$\pm$1.9 & 40.3$\pm$2.2 & & & & 3.2$\pm$0.5 & & & &\\
& 2015-Apr-20 & -5.5$\pm$1.2 & 39.8$\pm$1.2 & & & & 4.3$\pm$0.6 & & & &\\
& 2015-Apr-27 & -4.6$\pm$1.2 & 41.8$\pm$1.3 & & & & 5.3$\pm$0.8 & & & &\\
& 2015-Apr-29 & -4.4$\pm$0.5 & 39.5$\pm$0.8 & & & & 2.2$\pm$0.3 & & & 2.2$\pm$0.3 &\\
& 2015-Jun-05 & -2.8$\pm$0.5 & 38.7$\pm$0.7 & & & 2.1$\pm$0.3 & 3.4$\pm$0.5 & & & 3.3$\pm$0.5 &\\
& 2015-Dec-22 & -3.5$\pm$0.7 & 36.5$\pm$2.0 & & & & 5.0$\pm$2.5 & & & 7.4$\pm$1.3 &\\
& 2015-Dec-25 & -3.6$\pm$0.9 & 37.3$\pm$2.0 & & & & 7.5$\pm$1.5 & & & 6.5$\pm$1.0 &\\
& 2016-Feb-04 & -5.8$\pm$1.1 & 38.0$\pm$1.3 & & & & 3.6$\pm$0.7 & & & &\\
& 2016-Feb-11 & -5.1$\pm$1.1 & 33.7$\pm$1.1 & & & & 3.9$\pm$0.7 & & & &\\
& 2016-Feb-18 & -1.6$\pm$1.1 & 34.8$\pm$1.1 & & & & 3.5$\pm$0.7 & & & &\\
& 2016-Feb-20 & -0.7$\pm$1.1 & 37.2$\pm$1.1 & & & & 4.1$\pm$0.8 & & & &\\
& 2016-Mar-12 & -2.5$\pm$1.1 & 36.1$\pm$2.3 & & & & 6.9$\pm$1.4 & & & &\\
& 2016-Mar-14 & -1.6$\pm$1.1 & 35.8$\pm$1.1 & & & & 3.2$\pm$0.6 & & & &\\
& 2016-May-01 & -1.3$\pm$1.2 & 37.1$\pm$1.2 & & & & 4.3$\pm$0.8 & & & &\\
& 2016-May-08 & -2.0$\pm$1.2 & 40.4$\pm$1.2 & & & & 6.4$\pm$1.2 & & & &\\
& 2016-May-10 & -1.4$\pm$1.2 & 38.1$\pm$1.5 & & & & 3.2$\pm$0.6 & & & &\\
& 2016-May-13 & -2.6$\pm$1.2 & 36.4$\pm$2.2 & & & & 6.4$\pm$1.2 & & & &\\
& 2016-May-15 & -6.1$\pm$0.7 & 39.5$\pm$0.8 & & & 5.0$\pm$0.7 & 4.9$\pm$0.7 & & & 5.1$\pm$0.8 &\\
& 2016-May-17 & -2.4$\pm$1.2 & 39.9$\pm$1.3 & & & & 3.5$\pm$0.7 & & & &\\
& 2016-May-24 & -1.0$\pm$1.2 & 38.7$\pm$1.3 & & & & 4.7$\pm$0.9 & & & &\\
& 2016-May-31 & -1.0$\pm$1.3 & 36.4$\pm$1.3 & & & & 4.4$\pm$0.8 & & & &\\
& 2016-Jun-02 & -1.0$\pm$1.3 & 33.3$\pm$1.8 & & & & 3.0$\pm$0.6 & & & &\\
& 2016-Jun-07 & -2.8$\pm$1.3 & 39.3$\pm$1.3 & & & & 4.5$\pm$0.8 & & & &\\
& 2016-Jun-09 & -3.0$\pm$1.3 & 39.8$\pm$1.4 & & & & 2.8$\pm$0.5 & & & &\\
& 2016-Jun-16 & -4.2$\pm$1.3 & 36.4$\pm$1.4 & & & & 3.2$\pm$0.6 & & & &\\
& 2016-Nov-22 & -2.9$\pm$1.5 & 35.5$\pm$1.6 & & & & 5.0$\pm$0.8 & & & &\\
& 2016-Nov-29 & -3.7$\pm$1.5 & 34.5$\pm$1.5 & & & & 5.2$\pm$0.9 & & & &\\
& 2016-Dec-22 & -5.7$\pm$0.3 & 36.9$\pm$0.6 & & & 6.2$\pm$0.9 & 8.3$\pm$1.2 & 8.9$\pm$1.3 & 11$\pm$2 & 6.9$\pm$1.0 &\\
& 2016-Dec-24 & -3.0$\pm$1.4 & 39.6$\pm$2.0 & & & & 8.7$\pm$1.5 & & & &\\
& 2017-Jan-07 & -5.1$\pm$0.6 & 37.0$\pm$0.6 & & 4.2$\pm$0.6 & 5.9$\pm$0.9 & 7.1$\pm$1.1 & 7.5$\pm$1.1 & 9.0$\pm$1.9 & 7.1$\pm$1.1 &\\
& 2017-Jan-14 & -0.5$\pm$1.3 & 34.4$\pm$1.3 & & & & 5.3$\pm$0.9 & & & &\\
& 2017-Jan-23 & -5.4$\pm$0.8 & 38.3$\pm$0.3 & & 3.0$\pm$0.5 & 4.5$\pm$0.7 & 5.7$\pm$0.9 & & & &\\
& 2017-Jan-30 & -3.1$\pm$1.2 & 35.7$\pm$1.3 & & & & 5.4$\pm$0.9 & & & &\\
& 2017-Feb-06 & -5.0$\pm$0.6 & 35.7$\pm$1.1 & & 3.3$\pm$0.5 & & 5.1$\pm$0.8 & 6.3$\pm$0.9 & 6.1$\pm$0.9 & 6.6$\pm$1.0 &\\
& 2017-Feb-24 & -1.2$\pm$1.2 & 34.2$\pm$1.3 & & & & 3.1$\pm$0.5 & & & &\\
& 2017-Apr-02 & -2.3$\pm$1.1 & 36.3$\pm$1.1 & & & & 3.2$\pm$0.5 & & & &\\
& 2017-Apr-04 & -1.9$\pm$1.1 & 36.5$\pm$1.2 & & & & 3.5$\pm$0.6 & & & &\\
& 2017-May-04 & -1.5$\pm$1.1 & 39.2$\pm$1.2 & & & & 3.0$\pm$0.5 & & & &\\
& 2017-May-06 & -1.3$\pm$1.1 & 37.9$\pm$1.4 & & & & 5.8$\pm$1.0 & & & &\\
& 2017-May-11 & -0.7$\pm$1.1 & 35.9$\pm$1.1 & & & & 3.3$\pm$0.6 & & & &\\
& 2017-May-22 & -1.4$\pm$1.2 & 38.6$\pm$1.6 & & & & 4.9$\pm$0.8 & & & &\\
& 2017-May-27 & -4.8$\pm$0.5 & 40.6$\pm$0.8 & & 2.6$\pm$0.4 & 3.9$\pm$0.7 & 3.9$\pm$0.6 & 7.3$\pm$1.2 & 9.6$\pm$2.3 & 5.9$\pm$0.9 &\\
& 2017-May-29 & -2.5$\pm$1.2 & 37.1$\pm$1.4 & & & & 5.5$\pm$0.9 & & & &\\
& 2017-Jun-03 & -1.0$\pm$1.2 & 38.8$\pm$1.2 & & & & 3.7$\pm$0.6 & & & &\\
& 2017-Jun-28 & -4.8$\pm$1.3 & 39.9$\pm$1.3 & & & & 4.4$\pm$0.7 & & & &\\
& 2017-Jul-05 & -2.3$\pm$1.3 & 39.4$\pm$1.3 & & & & 5.4$\pm$0.9 & & & &\\
& 2017-Jul-21 & -6.6$\pm$0.6 & 40.0$\pm$0.6 & & 2.9$\pm$0.4 & 5.0$\pm$0.8 & 8.6$\pm$1.5 & 8.3$\pm$1.3 & & 7.6$\pm$1.1 &\\
& 2017-Jul-23 & -5.1$\pm$0.6 & 41.5$\pm$1.2 & & & & & & 6.9$\pm$1.0 & &\\
& 2017-Dec-11 & -5.5$\pm$0.3 & 37.4$\pm$0.3 & & & & 3.6$\pm$0.5 & & & &\\
& 2017-Dec-13 & -5.5$\pm$1.0 & 38.0$\pm$0.2 & & & & 6.4$\pm$1.0 & & & 8.0$\pm$1.2 &\\
& 2018-Jan-17 & -4.3$\pm$0.6 & 36.2$\pm$0.7 & & & & 3.3$\pm$0.5 & & & &\\
& 2018-Jan-19 & -5.3$\pm$0.2 & 36.0$\pm$0.8 & & & 6.1$\pm$0.9 & 6.3$\pm$1.0 & & & 7.1$\pm$1.1 &\\
& 2018-Mar-15 & -1.2$\pm$1.2 & 34.7$\pm$1.8 & & & & 5.3$\pm$0.9 & & & &\\
& 2018-May-07 & -3.9$\pm$1.1 & 37.9$\pm$1.1 & & & & 3.2$\pm$0.5 & & & &\\
& 2018-Jun-06 & -4.2$\pm$1.1 & 37.4$\pm$1.2 & & & & 4.3$\pm$0.6 & & & &\\
& 2018-Jun-15 & -0.5$\pm$1.1 & 36.5$\pm$1.2 & & & & 4.7$\pm$0.7 & & & &\\
& 2018-Jun-17 & -2.6$\pm$1.1 & 36.5$\pm$1.2 & & & & 3.5$\pm$0.5 & & & &\\
& 2018-Jun-22 & -0.9$\pm$1.1 & 37.4$\pm$1.2 & & & & 3.6$\pm$0.5 & & & &\\
& 2018-Jul-01 & -3.3$\pm$1.1 & 37.0$\pm$1.2 & & & & 4.0$\pm$0.6 & & & &\\
& **Average** & **-3.9** & **37.4** &\\
UP 38W & 2018-Jan-19 & -25.3$\pm$0.7 & 37.7$\pm$0.9 & & & & & & & 1.9$\pm$0.3 &\\
& **Average** & **-25.3** & **37.7** &\\
Pfu374 & 2014-Jan-20 & -23.7$\pm$1.0 & 47.7$\pm$0.5 & & & & & & & 1.3$\pm$0.4 &\\
& 2015-Jan-16 & -25.7$\pm$0.7 & 49.7$\pm$0.7 & & & & & & & 0.94$\pm$0.47 &\\
& 2015-Apr-29 & -24.3$\pm$0.5 & 52.4$\pm$1.5 & & & & & & & 1.0$\pm$0.4 &\\
& **Average** & **-24.3** & **49.7** &\\
Masubi & 2014-Jan-20 & -42.9$\pm$0.7 & 52.5$\pm$1.3 & & & & 2.4$\pm$0.4 & & & 3.4$\pm$0.5 &\\
& 2015-Jan-16 & -42.5$\pm$1.5 & 54.9$\pm$1.4 & & & & 2.9$\pm$0.4 & & & 3.1$\pm$0.5 &\\
& 2015-Apr-04 & -41.4$\pm$1.6 & 56.5$\pm$2.4 & & & & & & & 2.5$\pm$0.4 &\\
& 2015-Apr-29 & -41.6$\pm$1.2 & 56.8$\pm$0.9 & & & & 2.7$\pm$0.4 & & & 2.9$\pm$0.4 &\\
& 2015-Jun-05 & -42.7$\pm$1.3 & 54.8$\pm$1.6 & & & & & & & 3.0$\pm$0.5 &\\
& 2016-Dec-22 & -44.1$\pm$0.8 & 50.8$\pm$1.0 & & & & & & & 2.5$\pm$0.4 &\\
& 2017-Jan-07 & -45.2$\pm$0.9 & 51.6$\pm$0.4 & & & & & & 3.4$\pm$0.5 & 3.0$\pm$0.5 &\\
& 2017-Feb-06 & -45.4$\pm$0.9 & 52.0$\pm$0.8 & & & & 1.7$\pm$0.3 & 2.2$\pm$0.3 & 1.6$\pm$0.2 & 2.8$\pm$0.4 &\\
& 2018-Jan-19 & -43.7$\pm$0.8 & 53.7$\pm$1.0 & & & & & & & 3.0$\pm$0.4 &\\
& **Average** & **-42.9** & **53.7** &\\
PFd1691 & 2016-May-01 & 12.2$\pm$1.2 & 57.2$\pm$1.3 & & & & 3.9$\pm$0.7 & & & &\\
& 2016-May-03 & 15.0$\pm$1.2 & 60.7$\pm$1.6 & & & & 4.3$\pm$0.8 & & & &\\
& 2016-May-08 & 11.9$\pm$1.2 & 58.8$\pm$1.6 & & & & 3.6$\pm$0.7 & & & &\\
& 2016-May-10 & 14.7$\pm$1.3 & 56.8$\pm$1.4 & & & & 3.7$\pm$0.7 & & & &\\
& 2016-May-15 & 8.6$\pm$0.7 & 61.0$\pm$0.3 & & & & 2.8$\pm$0.5 & & & 4.4$\pm$0.7 &\\
& 2016-May-17 & 10.6$\pm$1.2 & 58.5$\pm$1.2 & & & & 3.3$\pm$0.6 & & & &\\
& 2016-May-24 & 13.5$\pm$1.3 & 59.2$\pm$1.4 & & & & 2.6$\pm$0.5 & & & &\\
& 2016-May-31 & 12.8$\pm$1.3 & 55.1$\pm$1.5 & & & & 2.5$\pm$0.5 & & & &\\
& 2016-Jun-02 & 7.6$\pm$1.3 & 54.6$\pm$1.5 & & & & 2.5$\pm$0.5 & & & &\\
& 2016-Jun-07 & 10.2$\pm$1.3 & 58.8$\pm$1.7 & & & & 2.6$\pm$0.5 & & & &\\
& 2016-Jun-09 & 10.4$\pm$1.3 & 59.2$\pm$1.3 & & & & 2.3$\pm$0.4 & & & &\\
& 2016-Jun-16 & 6.7$\pm$1.3 & 54.5$\pm$1.4 & & & & 1.6$\pm$0.3 & & & &\\
& 2016-Dec-22 & 7.1$\pm$0.6 & 56.9$\pm$0.3 & & & & 2.2$\pm$0.3 & 2.5$\pm$0.4 & 2.7$\pm$0.4 & 2.6$\pm$0.4 &\\
& 2017-Jan-07 & 8.4$\pm$0.7 & 57.2$\pm$1.4 & & & & 2.0$\pm$0.3 & 2.0$\pm$0.3 & 2.1$\pm$0.5 & 2.6$\pm$0.4 &\\
& 2017-Jan-23 & 10.5$\pm$0.6 & 60.5$\pm$0.6 & & & & 1.9$\pm$0.3 & & & &\\
& 2017-Feb-06 & 8.8$\pm$0.3 & 57.4$\pm$0.7 & & & & 1.3$\pm$0.2 & 1.2$\pm$0.2 & 1.2$\pm$0.2 & 1.9$\pm$0.3 &\\
& 2017-May-27 & 9.9$\pm$0.8 & 61.7$\pm$0.1 & & & & 1.5$\pm$0.2 & 2.7$\pm$0.5 & 3.0$\pm$0.7 & 2.6$\pm$0.4 &\\
& 2017-Jul-21 & 7.6$\pm$0.6 & 61.6$\pm$0.6 & & & & 1.8$\pm$0.3 & & & &\\
& 2017-Jul-23 & 8.7$\pm$0.6 & 60.1$\pm$0.1 & & & & 1.9$\pm$0.3 & & 2.1$\pm$0.3 & &\\
& 2017-Dec-11 & 8.3$\pm$1.3 & 57.2$\pm$0.0 & & & & 2.4$\pm$0.4 & & & &\\
& 2018-Jan-17 & 8.3$\pm$0.6 & 54.4$\pm$0.9 & & & & 3.0$\pm$0.5 & & & &\\
& 2018-Jan-19 & 8.5$\pm$0.6 & 58.0$\pm$0.9 & & & 2.8$\pm$0.4 & 2.4$\pm$0.4 & & & 2.3$\pm$0.3 &\\
& **Average** & **9.4** & **58.3** &\\
Laki-Oi Patera & 2016-May-15 & -46.4$\pm$0.8 & 60.7$\pm$1.5 & & & & & & & 4.0$\pm$0.6 &\\
& 2017-May-27 & -43.9$\pm$0.6 & 57.5$\pm$1.1 & & & & & 3.8$\pm$0.6 & 4.8$\pm$1.2 & 5.1$\pm$0.8 &\\
& 2017-Jul-21 & -45.2$\pm$2.3 & 58.6$\pm$0.9 & & & & 2.7$\pm$0.5 & 3.3$\pm$0.5 & & 4.0$\pm$0.6 &\\
& 2017-Jul-23 & -42.0$\pm$0.8 & 62.5$\pm$1.5 & & & & & & 2.8$\pm$0.4 & &\\
& **Average** & **-44.6** & **59.7** &\\
Shamshu Patera & 2017-May-27 & -8.3$\pm$0.2 & 61.5$\pm$2.0 & & & & & & 2.2$\pm$0.5 & 1.2$\pm$0.2 &\\
& **Average** & **-8.3** & **61.5** &\\
Tejeto Patera & 2016-May-01 & -44.0$\pm$1.6 & 65.9$\pm$2.4 & & & & 4.4$\pm$0.9 & & & &\\
& 2016-May-03 & -42.6$\pm$1.5 & 71.5$\pm$2.2 & & & & 5.5$\pm$1.1 & & & &\\
& 2016-May-10 & -42.1$\pm$1.6 & 68.4$\pm$1.8 & & & & 3.6$\pm$0.7 & & & &\\
& 2016-May-17 & -43.1$\pm$1.6 & 68.9$\pm$2.0 & & & & 4.2$\pm$0.8 & & & &\\
& **Average** & **-42.9** & **68.7** &\\
Chalybes Regio & 2013-Sep-02 & 52.0$\pm$2.2 & 63.4$\pm$4.3 & & & & 8.1$\pm$1.7 & & & &\\
& 2013-Nov-26 & 54.5$\pm$1.8 & 65.8$\pm$2.6 & & & & 4.9$\pm$2.5 & & & &\\
& 2013-Dec-03 & 56.6$\pm$1.9 & 69.0$\pm$6.0 & & & & 8.7$\pm$4.4 & & & &\\
& 2013-Dec-05 & 57.3$\pm$1.9 & 66.8$\pm$2.6 & & & & 6.5$\pm$3.2 & & & &\\
& 2013-Dec-12 & 56.3$\pm$2.0 & 66.5$\pm$2.2 & & & & 4.8$\pm$2.4 & & & &\\
& 2014-Jan-20 & 55.7$\pm$2.2 & 66.6$\pm$3.0 & & 4.3$\pm$0.6 & 6.4$\pm$1.0 & 7.4$\pm$1.1 & & & 9.7$\pm$1.4 &\\
& 2014-Mar-07 & 56.6$\pm$2.1 & 75.8$\pm$5.9 & & & & 9.7$\pm$4.8 & & & &\\
& 2014-Mar-11 & 58.4$\pm$1.7 & 71.8$\pm$4.7 & & & & 9.0$\pm$1.9 & & & 8.2$\pm$2.2 &\\
& 2014-Mar-14 & 51.9$\pm$1.9 & 68.8$\pm$5.6 & & & & 8.1$\pm$4.0 & & & &\\
& 2014-Oct-30 & 57.9$\pm$1.0 & 67.2$\pm$1.7 & & & & 8.0$\pm$1.2 & & & 11$\pm$2 &\\
& 2014-Dec-15 & 54.8$\pm$2.1 & 65.4$\pm$3.9 & & & & 6.1$\pm$3.0 & & & &\\
& 2015-Jan-16 & 55.6$\pm$1.1 & 64.8$\pm$1.3 & & & & 4.5$\pm$0.7 & & & 7.7$\pm$1.2 &\\
& 2015-Mar-26 & 57.2$\pm$2.2 & 72.3$\pm$7.0 & & & & 9.4$\pm$4.7 & & & &\\
& 2015-Mar-28 & 55.1$\pm$2.1 & 75.7$\pm$4.9 & & & & 7.0$\pm$3.5 & & & &\\
& 2015-Apr-01 & 56.1$\pm$0.8 & 77.5$\pm$2.8 & & & & & & & 6.2$\pm$0.9 &\\
& 2015-Apr-04 & 54.9$\pm$0.9 & 68.0$\pm$1.9 & & & & 7.2$\pm$1.1 & & & 11$\pm$2 &\\
& 2015-Apr-06 & 56.9$\pm$2.2 & 75.1$\pm$2.6 & & & & 5.8$\pm$2.9 & & & &\\
& 2015-Apr-20 & 54.0$\pm$2.1 & 73.2$\pm$4.9 & & & & 9.9$\pm$1.8 & & & &\\
& 2015-Apr-27 & 51.0$\pm$2.0 & 67.5$\pm$5.8 & & & & 7.5$\pm$3.8 & & & &\\
& 2015-Apr-29 & 55.4$\pm$1.5 & 70.1$\pm$1.5 & & & 4.0$\pm$0.6 & 5.0$\pm$0.8 & & & 7.9$\pm$1.2 &\\
& 2015-Jun-05 & 55.0$\pm$1.0 & 68.5$\pm$1.9 & & & 9.5$\pm$1.4 & 8.7$\pm$1.3 & & & 11$\pm$2 &\\
& 2015-Dec-22 & 59.3$\pm$1.4 & 57.6$\pm$2.8 & & & & 6.0$\pm$3.0 & & & 14$\pm$2 &\\
& 2016-Feb-04 & 54.9$\pm$2.1 & 67.2$\pm$2.2 & & & & 6.9$\pm$1.4 & & & &\\
& 2016-Feb-11 & 54.4$\pm$2.1 & 61.4$\pm$3.1 & & & & 6.4$\pm$1.3 & & & &\\
& 2016-Feb-20 & 61.2$\pm$2.6 & 71.8$\pm$4.7 & & & & 7.3$\pm$1.6 & & & &\\
& 2016-Mar-14 & 57.3$\pm$2.3 & 62.1$\pm$5.5 & & & & 7.9$\pm$1.8 & & & &\\
& 2016-May-01 & 59.9$\pm$2.7 & 71.4$\pm$5.6 & & & & 7.8$\pm$1.8 & & & &\\
& 2016-May-03 & 63.0$\pm$3.0 & 68.2$\pm$4.9 & & & & 14$\pm$3 & & & &\\
& 2016-May-08 & 59.7$\pm$2.8 & 72.5$\pm$9.1 & & & & 11$\pm$3 & & & &\\
& 2016-May-10 & 59.8$\pm$2.7 & 69.2$\pm$2.8 & & & & 8.0$\pm$1.7 & & & &\\
& 2016-May-15 & 50.7$\pm$0.8 & 65.9$\pm$1.4 & & & 7.8$\pm$1.2 & 6.0$\pm$1.2 & & & 9.6$\pm$1.4 &\\
& 2016-May-17 & 62.1$\pm$3.0 & 70.5$\pm$4.1 & & & & 10$\pm$2 & & & &\\
& 2016-May-24 & 61.2$\pm$3.0 & 71.4$\pm$6.1 & & & & 11$\pm$3 & & & &\\
& 2016-May-31 & 56.1$\pm$2.6 & 63.3$\pm$5.2 & & & & 8.3$\pm$1.8 & & & &\\
& 2016-Jun-02 & 59.6$\pm$2.9 & 65.9$\pm$3.3 & & & & 10$\pm$2 & & & &\\
& 2016-Jun-07 & 57.8$\pm$2.9 & 71.2$\pm$9.3 & & & & 9.9$\pm$3.0 & & & &\\
& 2016-Jun-09 & 58.4$\pm$2.8 & 74.7$\pm$4.0 & & & & 9.6$\pm$2.0 & & & &\\
& 2016-Jun-16 & 53.2$\pm$2.4 & 63.0$\pm$3.7 & & & & 8.1$\pm$1.7 & & & &\\
& 2016-Jun-25 & 56.2$\pm$2.7 & 73.9$\pm$2.5 & & & & 11$\pm$2 & & & &\\
& 2016-Nov-22 & 57.2$\pm$3.2 & 64.7$\pm$4.0 & & & & 17$\pm$3 & & & &\\
& 2016-Nov-24 & 50.1$\pm$2.6 & 70.9$\pm$4.3 & & & & 12$\pm$3 & & & &\\
& 2016-Nov-29 & 54.9$\pm$3.0 & 63.7$\pm$5.4 & & & & 14$\pm$3 & & & &\\
& 2016-Dec-22 & 49.0$\pm$1.4 & 63.4$\pm$1.6 & & & 10$\pm$2 & 9.3$\pm$1.4 & 10.0$\pm$1.5 & 10$\pm$2 & 11$\pm$2 &\\
& 2016-Dec-24 & 54.5$\pm$2.7 & 70.3$\pm$3.2 & & & & 11$\pm$2 & & & &\\
& 2017-Jan-04 & 50.8$\pm$1.2 & 80.0$\pm$4.2 & & & & & & & 6.9$\pm$1.5 &\\
& 2017-Jan-07 & 50.9$\pm$1.9 & 64.2$\pm$3.8 & & & 9.2$\pm$1.4 & 6.9$\pm$1.0 & 8.2$\pm$1.2 & 8.0$\pm$1.7 & 9.8$\pm$1.6 &\\
& 2017-Jan-09 & 56.8$\pm$2.9 & 66.0$\pm$5.1 & & & & 13$\pm$3 & & & &\\
& 2017-Jan-14 & 61.2$\pm$3.4 & 66.5$\pm$7.9 & & & & 14$\pm$4 & & & &\\
& 2017-Jan-23 & 50.8$\pm$1.7 & 74.3$\pm$0.4 & & 7.4$\pm$1.1 & 7.9$\pm$1.2 & 9.9$\pm$1.5 & & & &\\
& 2017-Jan-25 & 57.8$\pm$3.2 & 63.3$\pm$7.8 & & & & 12$\pm$5 & & & &\\
& 2017-Jan-30 & 55.3$\pm$2.5 & 66.4$\pm$3.7 & & & & 12$\pm$2 & & & &\\
& 2017-Feb-06 & 50.6$\pm$1.3 & 64.6$\pm$2.8 & & 3.6$\pm$0.5 & & 6.0$\pm$0.9 & 7.0$\pm$1.1 & 7.2$\pm$1.1 & 11$\pm$2 &\\
& 2017-Feb-24 & 58.4$\pm$2.5 & 62.4$\pm$2.4 & & & & 10$\pm$2 & & & &\\
& 2017-Mar-05 & 54.8$\pm$2.3 & 70.1$\pm$3.5 & & & & 10$\pm$2 & & & &\\
& 2017-Apr-02 & 54.9$\pm$2.3 & 66.4$\pm$6.0 & & & & 8.1$\pm$1.8 & & & &\\
& 2017-Apr-04 & 56.6$\pm$2.2 & 70.5$\pm$2.3 & & & & 11$\pm$2 & & & &\\
& 2017-May-04 & 54.4$\pm$2.1 & 76.1$\pm$2.8 & & & & 8.7$\pm$1.6 & & & &\\
& 2017-May-06 & 57.3$\pm$2.3 & 73.5$\pm$2.1 & & & & 12$\pm$2 & & & &\\
& 2017-May-11 & 59.5$\pm$3.0 & 74.4$\pm$13.0 & & & & 13$\pm$5 & & & &\\
& 2017-May-22 & 54.4$\pm$2.2 & 76.9$\pm$2.2 & & & & 13$\pm$2 & & & &\\
& 2017-May-27 & 51.3$\pm$1.2 & 72.1$\pm$3.6 & & 8.9$\pm$1.3 & 7.8$\pm$1.4 & 11$\pm$2 & 14$\pm$3 & 21$\pm$6 & 16$\pm$2 &\\
& 2017-May-29 & 53.3$\pm$2.2 & 73.0$\pm$2.3 & & & & 13$\pm$2 & & & &\\
& 2017-May-31 & 56.7$\pm$2.9 & 62.3$\pm$7.9 & & & & 15$\pm$7 & & & &\\
& 2017-Jun-03 & 59.1$\pm$2.9 & 75.3$\pm$9.6 & & & & 13$\pm$4 & & & &\\
& 2017-Jun-16 & 50.9$\pm$1.0 & 84.8$\pm$2.4 & & & & 10$\pm$2 & & & &\\
& 2017-Jun-28 & 52.8$\pm$2.4 & 70.2$\pm$3.4 & & & & 8.3$\pm$1.5 & & & &\\
& 2017-Jun-30 & 57.4$\pm$2.8 & 72.3$\pm$4.5 & & & & 14$\pm$3 & & & &\\
& 2017-Jul-05 & 57.0$\pm$2.9 & 71.9$\pm$7.0 & & & & 10$\pm$2 & & & &\\
& 2017-Jul-07 & 59.3$\pm$3.0 & 66.1$\pm$3.9 & & & & 15$\pm$3 & & & &\\
& 2017-Jul-21 & 46.8$\pm$1.7 & 75.6$\pm$1.1 & & & 6.7$\pm$1.0 & 7.5$\pm$1.3 & & & 13$\pm$2 &\\
& 2017-Jul-23 & 48.3$\pm$0.7 & 78.8$\pm$1.4 & & & 7.9$\pm$1.2 & 7.6$\pm$1.1 & 9.6$\pm$1.4 & 14$\pm$2 & 16$\pm$2 &\\
& 2017-Dec-11 & 46.4$\pm$0.7 & 72.7$\pm$0.2 & & & & 10$\pm$2 & & & &\\
& 2017-Dec-13 & 49.0$\pm$0.3 & 77.0$\pm$0.8 & & & & 10$\pm$2 & & & 17$\pm$3 &\\
& 2018-Jan-14 & 49.3$\pm$1.5 & 82.1$\pm$3.6 & & & & 12$\pm$2 & & 11$\pm$2 & 25$\pm$4 &\\
& 2018-Jan-17 & 48.7$\pm$1.1 & 59.6$\pm$2.6 & & & & 8.3$\pm$1.3 & & & &\\
& 2018-Jan-19 & 48.9$\pm$0.8 & 73.7$\pm$1.6 & & & 13$\pm$2 & 11$\pm$2 & & & 22$\pm$3 &\\
& 2018-Mar-15 & 54.0$\pm$2.3 & 71.6$\pm$2.6 & & & & 10$\pm$2 & & & &\\
& 2018-May-07 & 55.5$\pm$2.3 & 79.9$\pm$4.3 & & & & 9.5$\pm$1.9 & & & &\\
& 2018-Jun-17 & 50.3$\pm$2.0 & 77.8$\pm$2.7 & & & & 12$\pm$2 & & & &\\
& 2018-Jul-12 & 60.5$\pm$3.1 & 69.8$\pm$5.9 & & & & 18$\pm$5 & & & &\\
& **Average** & **55.4** & **70.2** &\\
Zal Patera & 2013-Nov-26 & 38.8$\pm$1.7 & 71.6$\pm$2.4 & & & & 1.5$\pm$0.7 & & & &\\
& 2013-Nov-28 & 37.8$\pm$1.6 & 72.6$\pm$2.4 & & & & 4.0$\pm$2.0 & & & &\\
& 2013-Dec-05 & 41.2$\pm$1.9 & 72.3$\pm$2.2 & & & & 1.8$\pm$0.9 & & & &\\
& 2013-Dec-12 & 39.3$\pm$2.0 & 72.3$\pm$2.1 & & & & 1.4$\pm$0.7 & & & &\\
& 2014-Jan-20 & 37.6$\pm$0.8 & 73.6$\pm$1.0 & & & & 2.7$\pm$0.6 & & & 5.0$\pm$0.7 &\\
& 2014-Oct-30 & 37.0$\pm$1.8 & 76.6$\pm$2.3 & & & & & & & 4.2$\pm$0.6 &\\
& 2014-Dec-01 & 38.6$\pm$2.0 & 73.5$\pm$2.1 & & & & 2.8$\pm$1.4 & & & &\\
& 2014-Dec-08 & 37.3$\pm$1.7 & 69.8$\pm$2.4 & & & & 4.6$\pm$0.7 & & & &\\
& 2014-Dec-10 & 41.3$\pm$1.6 & 71.1$\pm$2.4 & & & & 3.7$\pm$1.8 & & & &\\
& 2014-Dec-15 & 40.7$\pm$1.5 & 71.1$\pm$2.7 & & & & 3.2$\pm$1.6 & & & &\\
& 2015-Jan-11 & 37.6$\pm$1.0 & 78.6$\pm$2.8 & & & & & & & 6.2$\pm$0.9 &\\
& 2015-Jan-16 & 38.4$\pm$1.1 & 73.8$\pm$1.1 & & & & 2.2$\pm$0.3 & & & 5.0$\pm$0.7 &\\
& 2015-Mar-28 & 36.5$\pm$1.4 & 78.7$\pm$2.7 & & & & 2.6$\pm$1.3 & & & &\\
& 2015-Apr-01 & 38.5$\pm$0.7 & 77.3$\pm$1.9 & & & & & & & 4.7$\pm$0.7 &\\
& 2015-Apr-04 & 36.7$\pm$1.3 & 78.5$\pm$2.6 & & & & & & & 5.5$\pm$0.8 &\\
& 2015-Apr-20 & 38.3$\pm$1.6 & 73.4$\pm$2.8 & & & & 4.5$\pm$2.3 & & & &\\
& 2015-Apr-22 & 35.4$\pm$2.0 & 78.7$\pm$2.2 & & & & 3.2$\pm$0.5 & & & &\\
& 2015-Apr-29 & 39.5$\pm$0.8 & 75.6$\pm$0.8 & & & & 2.8$\pm$0.6 & & & 5.0$\pm$0.8 &\\
& 2015-Jun-05 & 37.9$\pm$1.3 & 74.9$\pm$1.6 & & & & & & & 3.2$\pm$0.5 &\\
& 2016-Dec-22 & 33.4$\pm$0.8 & 74.3$\pm$1.3 & & & & & & & 2.4$\pm$0.4 &\\
& 2017-Jan-07 & 34.2$\pm$0.7 & 76.4$\pm$1.0 & & & & & & & 1.7$\pm$0.2 &\\
& 2017-May-27 & 36.6$\pm$1.7 & 76.2$\pm$1.6 & & & & & & 4.9$\pm$1.2 & 3.9$\pm$0.6 &\\
& 2017-Jul-21 & 38.0$\pm$1.1 & 76.8$\pm$0.4 & & & & 4.7$\pm$0.8 & 7.4$\pm$1.1 & & &\\
& 2017-Jul-23 & 34.2$\pm$0.8 & 79.1$\pm$0.9 & & & & 1.9$\pm$0.3 & & & &\\
& **Average** & **37.9** & **74.6** &\\
Tawhaki Patera & 2013-Aug-21 & 3.6$\pm$0.6 & 78.1$\pm$0.8 & & & & & & & 2.0$\pm$0.3 &\\
& 2013-Dec-05 & 2.5$\pm$1.0 & 73.4$\pm$1.0 & & & & 1.0$\pm$0.5 & & & &\\
& 2013-Dec-12 & 2.5$\pm$1.0 & 74.4$\pm$1.0 & & & & 1.2$\pm$0.6 & & & &\\
& 2013-Dec-14 & 2.8$\pm$1.7 & 75.1$\pm$2.4 & & & & 1.9$\pm$1.0 & & & &\\
& 2014-Jan-20 & 2.1$\pm$0.6 & 74.1$\pm$0.9 & & & & 1.6$\pm$0.2 & & & 2.0$\pm$0.5 &\\
& 2014-Mar-11 & 3.7$\pm$0.9 & 75.1$\pm$1.8 & & & & 4.5$\pm$0.7 & & & 3.5$\pm$0.5 &\\
& 2015-Jan-11 & 2.7$\pm$0.8 & 74.8$\pm$1.9 & & & & 4.6$\pm$2.3 & & & 3.9$\pm$0.6 &\\
& 2015-Jan-16 & 4.2$\pm$0.6 & 76.3$\pm$0.6 & & & & & & & 1.4$\pm$0.7 &\\
& 2015-Apr-01 & 3.7$\pm$0.6 & 76.5$\pm$1.3 & & & & 3.5$\pm$1.2 & & & 3.4$\pm$0.5 &\\
& 2015-Apr-29 & 3.1$\pm$0.5 & 76.8$\pm$0.6 & & & 1.5$\pm$0.2 & 1.7$\pm$0.4 & & & 2.0$\pm$0.3 &\\
& 2015-Jun-05 & 4.3$\pm$1.3 & 77.2$\pm$1.6 & & & & 1.9$\pm$0.3 & & & &\\
& 2016-May-15 & 1.5$\pm$0.5 & 75.6$\pm$0.6 & & & & 1.1$\pm$0.2 & & & &\\
& 2017-Jan-07 & 1.6$\pm$0.8 & 73.8$\pm$1.6 & & & & 1.3$\pm$0.2 & & 2.0$\pm$0.3 & 1.3$\pm$0.2 &\\
& 2017-Jan-23 & 1.6$\pm$1.0 & 75.6$\pm$0.2 & & 1.9$\pm$0.3 & 2.8$\pm$0.4 & 3.2$\pm$0.5 & & & &\\
& 2017-Feb-06 & 1.5$\pm$1.7 & 72.3$\pm$0.8 & & & & & & 1.0$\pm$0.2 & 1.0$\pm$0.15 &\\
& 2017-Jul-21 & 1.1$\pm$0.3 & 76.7$\pm$0.9 & & & & 1.4$\pm$0.3 & 1.8$\pm$0.3 & & &\\
& 2017-Jul-23 & 1.7$\pm$0.3 & 77.4$\pm$0.8 & & & & 1.5$\pm$0.2 & 1.7$\pm$0.3 & & &\\
& 2018-Jan-14 & 1.0$\pm$0.6 & 79.6$\pm$0.9 & & & & & & 2.2$\pm$0.3 & &\\
& 2018-Jan-19 & 0.7$\pm$0.6 & 73.7$\pm$0.1 & & & & 1.4$\pm$0.2 & & & 1.3$\pm$0.2 &\\
& **Average** & **2.5** & **75.6** &\\
Ekhi Patera & 2017-Jul-23 & -28.4$\pm$1.1 & 86.7$\pm$1.4 & & 3.2$\pm$0.5 & 3.8$\pm$0.6 & 3.9$\pm$0.6 & 3.8$\pm$0.6 & 4.5$\pm$0.7 & 3.8$\pm$0.6 &\\
& **Average** & **-28.4** & **86.7** &\\
Gish Bar & 2013-Nov-26 & 14.5$\pm$1.7 & 87.9$\pm$2.4 & & & & 2.9$\pm$1.4 & & & &\\
& 2013-Nov-28 & 15.7$\pm$1.1 & 87.0$\pm$1.2 & & & & 2.2$\pm$1.1 & & & &\\
& 2013-Dec-03 & 15.4$\pm$1.1 & 90.2$\pm$3.5 & & & & 7.4$\pm$3.7 & & & &\\
& 2013-Dec-05 & 15.8$\pm$1.1 & 87.3$\pm$1.1 & & & & 2.4$\pm$1.2 & & & &\\
& 2013-Dec-12 & 14.5$\pm$1.1 & 86.1$\pm$1.2 & & & & 1.8$\pm$0.9 & & & &\\
& 2014-Jan-20 & 15.2$\pm$0.5 & 89.5$\pm$0.9 & & & & 3.3$\pm$1.3 & & & 5.5$\pm$0.9 &\\
& 2014-Mar-11 & 16.1$\pm$1.1 & 90.5$\pm$1.7 & & & & 1.9$\pm$0.3 & & & 2.0$\pm$0.3 &\\
& 2015-Jan-11 & 14.5$\pm$1.0 & 89.3$\pm$1.8 & & & & 5.4$\pm$0.8 & & & 4.9$\pm$0.7 &\\
& 2015-Jan-13 & 14.2$\pm$1.5 & 89.6$\pm$2.5 & & & & 3.8$\pm$1.9 & & & &\\
& 2015-Jan-16 & 14.7$\pm$0.6 & 88.7$\pm$0.4 & & & & 5.4$\pm$0.8 & & & 7.9$\pm$1.2 &\\
& 2015-Apr-01 & 16.1$\pm$1.2 & 91.9$\pm$1.2 & & & & 1.3$\pm$0.5 & & & 2.0$\pm$0.3 &\\
& 2015-Apr-04 & 15.9$\pm$1.3 & 90.7$\pm$2.6 & & & & 4.9$\pm$2.4 & & & &\\
& 2015-Apr-29 & 15.2$\pm$0.3 & 89.8$\pm$0.7 & & & & 0.85$\pm$0.32 & & & 1.4$\pm$0.5 &\\
& 2016-Feb-20 & 20.1$\pm$1.2 & 89.0$\pm$1.9 & & & & 5.8$\pm$1.1 & & & &\\
& 2017-Jan-07 & 15.5$\pm$0.6 & 82.6$\pm$0.8 & & & & & & 1.9$\pm$0.3 & &\\
& 2017-Feb-06 & 16.2$\pm$0.1 & 86.4$\pm$1.7 & & & & & & 1.5$\pm$0.2 & 2.8$\pm$0.4 &\\
& 2017-Jul-23 & 16.1$\pm$0.6 & 89.6$\pm$1.1 & & & & & & 2.2$\pm$0.3 & &\\
& 2018-Mar-15 & 16.9$\pm$1.2 & 88.2$\pm$1.2 & & & & 1.9$\pm$0.3 & & & &\\
& **Average** & **15.6** & **89.1** &\\
Aluna Patera & 2015-Jan-11 & 41.5$\pm$1.3 & 91.6$\pm$2.6 & & & & & & & 4.6$\pm$0.7 &\\
& 2015-Jan-16 & 42.0$\pm$2.2 & 88.7$\pm$2.0 & & & & & & & 1.8$\pm$0.3 &\\
& **Average** & **41.7** & **90.1** &\\
P207 & 2013-Aug-21 & -36.5$\pm$1.2 & 91.1$\pm$0.8 & & & & 5.0$\pm$1.2 & & & 8.0$\pm$1.2 &\\
& **Average** & **-36.5** & **91.1** &\\
Shango Patera & 2013-Aug-21 & 33.5$\pm$0.7 & 100.0$\pm$0.9 & & & & & & & 2.2$\pm$0.3 &\\
& 2014-Jan-20 & 32.0$\pm$1.1 & 92.9$\pm$1.7 & & & & & & & 2.6$\pm$0.5 &\\
& 2015-Apr-29 & 34.2$\pm$1.5 & 95.6$\pm$1.4 & & & & & & & 0.85$\pm$0.39 &\\
& **Average** & **33.5** & **95.6** &\\
Itzamna Patera & 2015-Jan-11 & -13.2$\pm$1.0 & 98.7$\pm$1.2 & & & & 2.7$\pm$0.4 & & & 2.7$\pm$0.4 &\\
& 2015-Jan-16 & -14.4$\pm$0.4 & 98.0$\pm$0.5 & & & & 1.3$\pm$0.6 & & & 1.4$\pm$0.7 &\\
& 2015-Apr-01 & -14.4$\pm$0.8 & 100.6$\pm$0.9 & & & & 1.1$\pm$0.4 & & & 1.0$\pm$0.4 &\\
& 2015-Apr-29 & -15.0$\pm$0.5 & 99.0$\pm$1.0 & & & & 1.5$\pm$0.3 & & & 1.3$\pm$0.5 &\\
& 2017-Jul-21 & -17.6$\pm$0.6 & 104.3$\pm$1.3 & & & & & 3.7$\pm$0.6 & & &\\
& 2017-Jul-23 & -14.0$\pm$0.9 & 99.2$\pm$1.1 & & & & 1.7$\pm$0.3 & 1.9$\pm$0.3 & 2.6$\pm$0.4 & 1.7$\pm$0.3 &\\
& 2017-Dec-31 & -18.4$\pm$0.7 & 94.7$\pm$1.9 & & & & & & 3.0$\pm$0.4 & &\\
& 2018-Jan-14 & -15.5$\pm$0.6 & 100.1$\pm$0.8 & & & & & & 1.9$\pm$0.3 & &\\
& 2018-Jan-19 & -16.8$\pm$0.6 & 94.8$\pm$0.9 & & & & 1.6$\pm$0.2 & & & &\\
& **Average** & **-15.0** & **99.0** &\\
Arusha Patera & 2017-Dec-13 & -39.2$\pm$2.1 & 99.6$\pm$0.6 & & & & 3.4$\pm$0.5 & & & 7.3$\pm$1.1 &\\
& 2017-Dec-31 & -40.5$\pm$1.2 & 98.4$\pm$0.7 & & & & & & 3.9$\pm$0.6 & 9.9$\pm$1.5 &\\
& 2018-Jan-14 & -39.1$\pm$0.3 & 102.6$\pm$1.4 & & & & 3.3$\pm$0.5 & & 4.5$\pm$0.7 & 9.0$\pm$1.3 &\\
& 2018-Jan-19 & -39.9$\pm$1.5 & 96.6$\pm$1.3 & & & & 3.9$\pm$0.6 & & & 6.3$\pm$0.9 &\\
& **Average** & **-39.6** & **99.0** &\\
Sigurd Patera & 2014-Mar-07 & -5.1$\pm$1.2 & 98.9$\pm$3.0 & & & & 5.5$\pm$2.8 & & & &\\
& 2014-Mar-11 & -5.0$\pm$0.6 & 99.6$\pm$0.5 & & & & 6.0$\pm$1.2 & & & 11$\pm$2 &\\
& 2014-Mar-27 & -7.3$\pm$1.2 & 100.6$\pm$1.5 & & & & 4.8$\pm$0.7 & & & &\\
& 2014-Apr-03 & -5.0$\pm$1.2 & 100.9$\pm$1.2 & & & & 3.7$\pm$0.6 & & & &\\
& 2015-Jan-11 & -5.7$\pm$0.6 & 97.5$\pm$0.8 & & & & & & & 2.3$\pm$0.3 &\\
& 2015-Jan-16 & -5.2$\pm$0.6 & 95.5$\pm$0.7 & & & & & & & 0.89$\pm$0.45 &\\
& 2015-Apr-01 & -3.5$\pm$0.4 & 101.5$\pm$1.1 & & & & & & & 1.1$\pm$0.4 &\\
& 2015-Apr-29 & -4.3$\pm$0.5 & 96.0$\pm$0.5 & & & & & & & 1.0$\pm$0.4 &\\
& **Average** & **-5.1** & **99.2** &\\
P197 & 2014-Mar-11 & -44.2$\pm$1.2 & 110.6$\pm$1.7 & & & & 3.5$\pm$0.5 & & & 1.5$\pm$0.2 &\\
& 2014-Oct-09 & -44.4$\pm$1.9 & 107.7$\pm$3.0 & & & & 7.9$\pm$1.5 & & & &\\
& 2014-Oct-25 & -47.8$\pm$2.0 & 105.6$\pm$2.6 & & & & 5.5$\pm$1.1 & & & &\\
& 2014-Oct-27 & -45.1$\pm$1.9 & 109.4$\pm$3.5 & & & & 7.0$\pm$1.5 & & & &\\
& 2014-Dec-01 & -46.6$\pm$1.7 & 104.4$\pm$3.6 & & & & 6.5$\pm$3.3 & & & &\\
& 2014-Dec-10 & -48.5$\pm$1.8 & 105.9$\pm$2.3 & & & & 6.6$\pm$1.3 & & & &\\
& 2015-Jan-11 & -46.9$\pm$0.7 & 107.3$\pm$0.7 & & & & 12$\pm$2 & & & 13$\pm$2 &\\
& 2015-Jan-13 & -49.2$\pm$1.6 & 107.4$\pm$2.7 & & & & 4.6$\pm$2.3 & & & &\\
& 2015-Jan-16 & -46.9$\pm$0.9 & 106.8$\pm$1.4 & & & & 4.9$\pm$0.7 & & & 6.9$\pm$1.0 &\\
& 2015-Apr-01 & -48.3$\pm$1.0 & 109.7$\pm$1.1 & & & & 3.7$\pm$0.6 & & & 4.2$\pm$0.6 &\\
& 2015-Apr-29 & -44.1$\pm$1.2 & 103.2$\pm$1.7 & & & & & & & 2.5$\pm$0.4 &\\
& **Average** & **-46.9** & **107.3** &\\
Amirani & 2013-Aug-21 & 21.0$\pm$1.5 & 114.6$\pm$1.2 & & & & 2.4$\pm$0.6 & & & 3.9$\pm$0.6 &\\
& 2013-Aug-23 & 22.4$\pm$1.4 & 115.9$\pm$2.6 & & & & & & & 3.9$\pm$0.6 &\\
& 2013-Dec-14 & 19.5$\pm$1.1 & 112.3$\pm$1.1 & & & & 2.1$\pm$1.0 & & & &\\
& 2014-Mar-11 & 21.5$\pm$0.4 & 116.2$\pm$0.5 & & & & 3.1$\pm$0.5 & & & 3.5$\pm$0.5 &\\
& 2014-Dec-01 & 18.8$\pm$1.5 & 109.8$\pm$2.5 & & & & 3.3$\pm$1.6 & & & &\\
& 2014-Dec-10 & 20.8$\pm$1.2 & 112.6$\pm$1.4 & & & & 2.9$\pm$1.4 & & & &\\
& 2015-Jan-11 & 20.5$\pm$1.5 & 113.3$\pm$0.5 & & & & 2.4$\pm$0.4 & & & 4.7$\pm$0.7 &\\
& 2015-Jan-16 & 19.5$\pm$1.0 & 107.5$\pm$1.6 & & & & 1.3$\pm$0.6 & & & 3.3$\pm$0.5 &\\
& 2015-Apr-01 & 21.3$\pm$1.3 & 117.8$\pm$1.1 & & & & 1.7$\pm$0.3 & & & 2.8$\pm$0.4 &\\
& 2015-Apr-22 & 22.4$\pm$1.7 & 116.5$\pm$2.4 & & & & 4.2$\pm$0.6 & & & &\\
& 2015-Apr-29 & 20.7$\pm$0.7 & 113.2$\pm$1.0 & & & & 2.3$\pm$0.6 & & & 4.1$\pm$0.6 &\\
& 2015-Dec-22 & 21.6$\pm$0.6 & 112.5$\pm$0.5 & & & & 1.9$\pm$0.3 & & & 4.8$\pm$0.7 &\\
& 2016-May-03 & 20.5$\pm$1.3 & 112.0$\pm$1.5 & & & & 1.5$\pm$0.3 & & & &\\
& 2016-May-10 & 23.9$\pm$1.4 & 112.8$\pm$2.1 & & & & 3.1$\pm$0.6 & & & &\\
& 2017-Jan-04 & 19.7$\pm$0.6 & 113.3$\pm$1.9 & & & & 2.6$\pm$0.4 & 4.1$\pm$0.6 & 4.0$\pm$0.6 & 4.0$\pm$0.6 &\\
& 2017-Jan-07 & 18.4$\pm$1.2 & 109.2$\pm$0.1 & & & & & & 4.6$\pm$0.7 & 6.7$\pm$1.0 &\\
& 2017-Feb-05 & 18.3$\pm$0.9 & 117.4$\pm$3.8 & & & & & & 6.3$\pm$4.4 & &\\
& 2017-May-29 & 22.6$\pm$1.4 & 112.4$\pm$2.4 & & & & 5.4$\pm$1.0 & & & &\\
& 2017-Jun-16 & 19.6$\pm$0.6 & 118.3$\pm$0.6 & & & & 2.1$\pm$0.3 & & & &\\
& 2017-Jun-30 & 21.4$\pm$1.4 & 113.2$\pm$1.5 & & & & 2.9$\pm$0.5 & & & &\\
& 2017-Jul-21 & 15.6$\pm$0.2 & 110.8$\pm$0.0 & & & & & & & 6.6$\pm$1.0 &\\
& 2017-Jul-23 & 18.9$\pm$1.0 & 113.3$\pm$0.9 & & & & 1.8$\pm$0.3 & 2.8$\pm$0.4 & 3.3$\pm$0.5 & 3.3$\pm$0.5 &\\
& 2017-Dec-13 & 17.5$\pm$0.7 & 109.8$\pm$0.8 & & & & & & & 3.0$\pm$0.4 &\\
& 2017-Dec-31 & 19.0$\pm$0.0 & 116.3$\pm$1.8 & & & & & & 2.6$\pm$0.4 & 4.8$\pm$0.7 &\\
& 2018-Jan-14 & 18.1$\pm$1.3 & 114.8$\pm$0.7 & & & & 1.9$\pm$0.3 & & 3.2$\pm$0.5 & 5.0$\pm$0.7 &\\
& 2018-Jan-19 & 20.2$\pm$0.7 & 110.5$\pm$1.3 & & & & & & & 6.5$\pm$1.0 &\\
& 2018-Jul-12 & 20.7$\pm$1.3 & 109.0$\pm$1.3 & & & & 2.8$\pm$0.4 & & & &\\
& **Average** & **20.5** & **113.2** &\\
Dusura Patera & 2014-Mar-11 & 36.4$\pm$0.6 & 121.1$\pm$0.9 & & & & 7.5$\pm$1.3 & & & 6.4$\pm$1.0 &\\
& 2014-Mar-27 & 35.4$\pm$1.5 & 121.0$\pm$1.6 & & & & 8.1$\pm$1.2 & & & &\\
& 2014-Apr-03 & 36.8$\pm$1.7 & 124.0$\pm$2.4 & & & & 6.9$\pm$1.3 & & & &\\
& **Average** & **36.4** & **121.1** &\\
Maui Patera & 2017-Jan-20 & 18.6$\pm$1.4 & 124.9$\pm$1.6 & & & & 3.1$\pm$0.5 & & & &\\
& 2017-Jan-27 & 17.7$\pm$1.4 & 126.6$\pm$1.6 & & & & 3.5$\pm$0.6 & & & &\\
& **Average** & **18.2** & **125.8** &\\
P95 & 2016-May-17 & -9.7$\pm$0.4 & 126.2$\pm$1.4 & 56$\pm$11 & & & 58$\pm$13 & & & &\\
& 2016-May-19 & -10.2$\pm$1.2 & 129.4$\pm$1.3 & & & & 15$\pm$3 & & & &\\
& **Average** & **-10.0** & **127.8** &\\
Malik Patera & 2013-Aug-21 & -32.2$\pm$0.5 & 127.6$\pm$1.4 & & & & 3.3$\pm$0.8 & & & 3.9$\pm$0.6 &\\
& 2013-Aug-23 & -31.0$\pm$0.9 & 130.2$\pm$1.3 & & & & 3.0$\pm$0.7 & & & 2.7$\pm$0.4 &\\
& 2014-Mar-11 & -33.6$\pm$0.6 & 129.9$\pm$0.6 & & & & & & & 1.3$\pm$0.6 &\\
& 2015-Jan-11 & -32.9$\pm$1.6 & 127.3$\pm$1.1 & & & & 1.7$\pm$0.3 & & & 1.7$\pm$0.3 &\\
& 2015-Jan-16 & -33.5$\pm$0.9 & 125.9$\pm$1.8 & & & & 6.9$\pm$1.0 & & & 6.6$\pm$1.0 &\\
& 2015-Jan-22 & -34.7$\pm$1.7 & 129.6$\pm$2.4 & & & & 2.4$\pm$1.2 & & & &\\
& 2015-Apr-01 & -32.4$\pm$1.0 & 132.0$\pm$0.8 & & & & 1.5$\pm$0.3 & & & 1.9$\pm$0.3 &\\
& 2015-Apr-29 & -35.1$\pm$1.2 & 130.3$\pm$2.7 & & & & & & & 3.2$\pm$0.5 &\\
& 2018-Jan-14 & -30.9$\pm$0.7 & 128.0$\pm$0.7 & & & & & & 1.0$\pm$0.2 & &\\
& **Average** & **-32.9** & **129.6** &\\
UP 132W & 2017-Jan-04 & 18.8$\pm$0.9 & 129.8$\pm$1.1 & & 3.3$\pm$0.5 & 3.1$\pm$0.5 & 3.8$\pm$0.6 & 4.3$\pm$0.6 & 3.7$\pm$0.6 & 3.9$\pm$0.6 &\\
& 2017-Feb-05 & 18.4$\pm$0.9 & 131.6$\pm$0.8 & & & & 5.4$\pm$0.8 & & & 5.3$\pm$0.8 &\\
& 2017-Jun-16 & 17.7$\pm$0.6 & 133.7$\pm$0.6 & & & & 1.7$\pm$0.3 & & & &\\
& 2017-Jul-23 & 19.9$\pm$0.7 & 131.0$\pm$0.9 & & & & & & 1.7$\pm$0.2 & &\\
& 2018-Jan-14 & 17.4$\pm$0.7 & 134.7$\pm$0.7 & & & & & & 0.83$\pm$0.12 & &\\
& **Average** & **18.4** & **131.6** &\\
Thor & 2015-Jan-11 & 41.6$\pm$2.0 & 133.0$\pm$2.1 & & & & & & & 1.5$\pm$0.7 &\\
& 2015-Apr-01 & 39.5$\pm$1.5 & 136.4$\pm$1.4 & & & & & & & 1.2$\pm$0.4 &\\
& **Average** & **40.6** & **134.7** &\\
P123 & 2015-Jan-11 & -42.0$\pm$1.2 & 138.6$\pm$1.4 & & & & 2.9$\pm$0.4 & & & 2.6$\pm$0.4 &\\
& 2015-Apr-01 & -41.2$\pm$0.7 & 143.4$\pm$0.9 & & & 2.5$\pm$0.4 & 2.7$\pm$0.4 & & & 2.7$\pm$0.4 &\\
& 2015-Dec-22 & -41.8$\pm$1.1 & 135.4$\pm$1.3 & & & & 4.6$\pm$0.7 & & & 7.8$\pm$1.2 &\\
& 2016-Jan-30 & -42.1$\pm$1.5 & 135.2$\pm$1.9 & & & & 2.8$\pm$0.5 & & & &\\
& 2016-Feb-15 & -42.4$\pm$1.4 & 136.8$\pm$1.9 & & & & 2.9$\pm$0.6 & & & &\\
& 2016-Feb-17 & -41.1$\pm$1.4 & 139.4$\pm$1.5 & & & & 1.9$\pm$0.4 & & & &\\
& 2016-Mar-11 & -41.2$\pm$1.4 & 139.7$\pm$1.8 & & & & 2.7$\pm$0.5 & & & &\\
& 2017-Jan-02 & -39.7$\pm$1.7 & 134.2$\pm$2.5 & & & & 7.1$\pm$1.2 & & & &\\
& 2017-Jan-04 & -42.7$\pm$1.1 & 142.0$\pm$1.2 & & 3.9$\pm$0.6 & 3.8$\pm$0.6 & 6.2$\pm$0.9 & 7.2$\pm$1.1 & 5.9$\pm$0.9 & 8.2$\pm$1.2 &\\
& 2017-Jan-18 & -39.8$\pm$1.6 & 136.5$\pm$2.4 & & & & 5.1$\pm$0.9 & & & &\\
& 2017-Jan-20 & -39.4$\pm$1.6 & 138.9$\pm$1.8 & & & & 5.8$\pm$1.0 & & & &\\
& 2017-Jan-25 & -42.8$\pm$1.6 & 134.7$\pm$2.5 & & & & 8.5$\pm$1.5 & & & &\\
& 2017-Jan-27 & -40.4$\pm$1.6 & 139.0$\pm$1.8 & & & & 5.5$\pm$0.9 & & & &\\
& 2017-Feb-05 & -43.4$\pm$0.6 & 141.3$\pm$0.9 & & & & 8.0$\pm$1.2 & 7.1$\pm$1.1 & 5.4$\pm$0.8 & 10$\pm$2 &\\
& 2017-Jun-16 & -42.4$\pm$0.2 & 145.2$\pm$0.1 & & & 2.9$\pm$0.4 & 4.8$\pm$0.7 & & & &\\
& 2017-Jul-23 & -41.4$\pm$0.3 & 139.5$\pm$0.0 & & & & & & 5.3$\pm$0.8 & 6.7$\pm$1.0 &\\
& 2017-Dec-13 & -44.7$\pm$1.3 & 138.2$\pm$1.0 & & & & 6.5$\pm$1.0 & & & 9.0$\pm$1.3 &\\
& 2017-Dec-31 & -44.6$\pm$1.9 & 142.0$\pm$1.1 & & & & 3.4$\pm$0.5 & & 2.9$\pm$0.4 & 6.9$\pm$1.0 &\\
& 2018-Jan-02 & -42.2$\pm$0.7 & 143.9$\pm$2.5 & & & & & 5.3$\pm$0.8 & 3.2$\pm$0.5 & 7.8$\pm$1.5 &\\
& 2018-Jan-14 & -41.7$\pm$1.6 & 142.2$\pm$0.9 & & & & 2.4$\pm$0.4 & & 2.9$\pm$0.4 & 4.7$\pm$0.7 &\\
& **Average** & **-41.9** & **139.2** &\\
Tupan Patera & 2015-Jan-11 & -18.2$\pm$0.8 & 139.2$\pm$0.7 & & & & 1.3$\pm$0.6 & & & 1.6$\pm$0.2 &\\
& 2015-Apr-01 & -17.7$\pm$1.0 & 143.7$\pm$1.0 & & & & 0.85$\pm$0.3 & & & 1.3$\pm$0.4 &\\
& 2015-Dec-22 & -17.4$\pm$1.0 & 136.9$\pm$1.2 & & & & 1.7$\pm$0.2 & & & 3.5$\pm$0.5 &\\
& 2016-Mar-11 & -15.1$\pm$1.1 & 139.9$\pm$1.2 & & & & 0.96$\pm$0.18 & & & &\\
& 2017-Jan-04 & -18.3$\pm$0.6 & 140.4$\pm$1.1 & & & & 2.1$\pm$0.3 & 2.4$\pm$0.4 & 2.6$\pm$0.4 & 2.9$\pm$0.4 &\\
& 2017-Jun-16 & -17.8$\pm$0.5 & 144.0$\pm$0.6 & & & & 1.9$\pm$0.3 & & & &\\
& 2017-Jul-23 & -19.4$\pm$0.6 & 141.4$\pm$1.1 & & & & & & 2.4$\pm$0.4 & &\\
& 2017-Dec-31 & -20.0$\pm$0.8 & 140.4$\pm$1.9 & & & & 2.3$\pm$0.3 & & 2.7$\pm$0.4 & 3.3$\pm$0.5 &\\
& 2018-Jan-02 & -17.1$\pm$0.7 & 142.2$\pm$1.7 & & & & & & 2.6$\pm$0.4 & &\\
& 2018-Jan-14 & -18.4$\pm$1.3 & 140.7$\pm$1.0 & & & & 2.1$\pm$0.3 & & 3.7$\pm$0.6 & 3.3$\pm$0.5 &\\
& **Average** & **-18.0** & **140.5** &\\
Surya Patera & 2015-Mar-25 & 21.5$\pm$1.2 & 148.7$\pm$1.2 & & & & 2.7$\pm$1.3 & & & &\\
& 2015-Apr-01 & 21.9$\pm$0.9 & 148.8$\pm$0.7 & & & & 3.6$\pm$0.5 & & & 6.6$\pm$1.0 &\\
& 2015-Apr-17 & 21.0$\pm$1.3 & 150.0$\pm$1.3 & & & & 2.0$\pm$1.0 & & & &\\
& 2015-May-05 & 20.6$\pm$1.0 & 151.1$\pm$2.8 & & & & & & & 5.0$\pm$0.8 &\\
& **Average** & **21.2** & **149.4** &\\
Shamash Patera & 2016-Jun-20 & -34.6$\pm$0.4 & 151.9$\pm$0.5 & 51$\pm$8 & & & 53$\pm$9 & & & &\\
& 2016-Jun-27 & -31.9$\pm$2.4 & 149.1$\pm$3.9 & 18$\pm$3 & & & 30$\pm$6 & & & &\\
& **Average** & **-33.2** & **150.5** &\\
Sobo Fluctus & 2018-Jan-14 & 12.9$\pm$0.6 & 152.8$\pm$0.8 & & & & & & 1.0$\pm$0.2 & &\\
& **Average** & **12.9** & **152.8** &\\
Prometheus & 2013-Aug-21 & -0.0$\pm$0.6 & 150.3$\pm$0.5 & & & & 2.4$\pm$0.6 & & & 3.5$\pm$0.5 &\\
& 2013-Aug-23 & 0.0$\pm$0.9 & 152.9$\pm$0.4 & & & & 2.7$\pm$0.6 & & & 3.8$\pm$0.6 &\\
& 2013-Nov-18 & -1.5$\pm$1.2 & 154.2$\pm$2.7 & & & & 5.3$\pm$0.8 & & & &\\
& 2013-Dec-02 & -1.1$\pm$1.0 & 152.8$\pm$1.0 & & & & 2.4$\pm$1.2 & & & &\\
& 2013-Dec-04 & -1.2$\pm$1.7 & 156.5$\pm$2.4 & & & & 2.2$\pm$1.1 & & & &\\
& 2013-Dec-14 & -2.2$\pm$1.8 & 154.3$\pm$2.3 & & & & 1.9$\pm$0.9 & & & &\\
& 2014-Mar-11 & -1.3$\pm$0.4 & 155.2$\pm$1.3 & & & & 3.4$\pm$0.5 & & & 3.9$\pm$0.6 &\\
& 2014-Mar-27 & -1.8$\pm$1.2 & 156.8$\pm$1.4 & & & & 4.0$\pm$0.6 & & & &\\
& 2015-Jan-11 & -2.3$\pm$0.7 & 152.4$\pm$0.4 & & & & 2.8$\pm$0.4 & & & 3.9$\pm$0.6 &\\
& 2015-Apr-01 & -1.1$\pm$0.5 & 155.9$\pm$0.5 & & & & 1.5$\pm$0.2 & & & 2.2$\pm$0.3 &\\
& 2015-Apr-26 & -1.5$\pm$1.6 & 156.4$\pm$2.5 & & & & 4.3$\pm$2.2 & & & &\\
& 2015-May-05 & -0.0$\pm$0.6 & 155.8$\pm$1.3 & & & & 4.1$\pm$0.6 & & & 4.4$\pm$0.7 &\\
& 2015-Dec-22 & -0.8$\pm$1.3 & 151.3$\pm$1.3 & & & & 1.6$\pm$0.2 & & & 3.7$\pm$0.6 &\\
& 2016-Mar-11 & 0.0$\pm$1.1 & 150.1$\pm$1.1 & & & & 1.4$\pm$0.3 & & & &\\
& 2017-Jan-04 & -2.1$\pm$0.6 & 153.1$\pm$0.8 & & & & 3.1$\pm$0.5 & 3.8$\pm$0.6 & 3.3$\pm$0.5 & 3.9$\pm$0.6 &\\
& 2017-Jan-27 & 0.3$\pm$1.2 & 149.1$\pm$1.2 & & & & 1.9$\pm$0.3 & & & &\\
& 2017-Feb-05 & -2.1$\pm$0.9 & 151.8$\pm$1.6 & & & & 2.3$\pm$0.4 & 2.8$\pm$0.4 & 2.3$\pm$0.3 & 4.1$\pm$0.6 &\\
& 2017-Jun-16 & -1.7$\pm$0.5 & 155.7$\pm$0.6 & & & & 3.0$\pm$0.4 & & & &\\
& 2017-Jul-23 & -2.8$\pm$0.0 & 153.0$\pm$1.3 & & & & & 2.8$\pm$0.4 & 5.0$\pm$0.7 & &\\
& 2017-Dec-31 & -2.9$\pm$0.8 & 153.6$\pm$1.5 & & & & 3.7$\pm$0.5 & & 3.7$\pm$0.6 & 5.7$\pm$0.9 &\\
& 2018-Jan-02 & -1.5$\pm$0.7 & 152.3$\pm$0.9 & & & & 3.7$\pm$0.6 & 4.4$\pm$0.7 & 3.5$\pm$0.5 & 5.5$\pm$0.8 &\\
& 2018-Jan-14 & -2.4$\pm$1.5 & 153.6$\pm$1.1 & & & & & & 2.9$\pm$0.4 & 4.4$\pm$0.7 &\\
& **Average** & **-1.5** & **153.3** &\\
Culann & 2013-Aug-23 & -16.5$\pm$0.4 & 161.6$\pm$0.9 & & & & 1.4$\pm$0.7 & & & 2.1$\pm$0.3 &\\
& 2014-Mar-11 & -16.0$\pm$0.4 & 162.0$\pm$0.5 & & & & 2.2$\pm$0.3 & & & 1.9$\pm$0.3 &\\
& 2015-Jan-11 & -17.2$\pm$0.4 & 159.8$\pm$0.5 & & & & 1.9$\pm$0.3 & & & 2.3$\pm$0.3 &\\
& 2015-Apr-01 & -16.0$\pm$0.5 & 164.6$\pm$1.1 & & & & 1.1$\pm$0.4 & & & 1.7$\pm$0.2 &\\
& 2015-May-05 & -14.7$\pm$1.0 & 169.4$\pm$1.8 & & & & & & & 2.1$\pm$0.3 &\\
& 2017-Jan-04 & -17.2$\pm$0.6 & 161.0$\pm$0.7 & & & & 2.1$\pm$0.3 & 2.2$\pm$0.3 & 1.9$\pm$0.3 & 2.4$\pm$0.4 &\\
& 2017-Feb-05 & -17.4$\pm$0.9 & 160.5$\pm$0.7 & & & & 2.4$\pm$0.4 & 2.5$\pm$0.4 & 2.1$\pm$0.3 & 3.1$\pm$0.5 &\\
& 2017-Jun-16 & -17.1$\pm$0.6 & 163.7$\pm$0.3 & & & 2.8$\pm$0.4 & 2.7$\pm$0.4 & & & &\\
& 2017-Dec-31 & -18.8$\pm$1.1 & 160.5$\pm$0.8 & & & & 2.4$\pm$0.4 & & 2.3$\pm$0.4 & 3.2$\pm$0.5 &\\
& 2018-Jan-02 & -17.8$\pm$0.7 & 161.8$\pm$2.1 & & & & 2.6$\pm$0.4 & 2.5$\pm$0.4 & 2.1$\pm$0.3 & 3.2$\pm$0.5 &\\
& 2018-Jan-14 & -19.1$\pm$0.8 & 162.1$\pm$2.1 & & & & 2.0$\pm$0.3 & & 2.4$\pm$0.4 & 2.8$\pm$0.4 &\\
& **Average** & **-17.2** & **161.8** &\\
Zamama & 2017-Jan-04 & 18.5$\pm$1.1 & 173.0$\pm$1.0 & & & & 1.3$\pm$0.2 & 1.8$\pm$0.3 & 1.2$\pm$0.2 & 1.7$\pm$0.3 &\\
& 2017-Feb-05 & 19.4$\pm$1.3 & 173.8$\pm$1.0 & & & & 1.2$\pm$0.2 & 1.4$\pm$0.2 & 1.1$\pm$0.2 & 1.9$\pm$0.3 &\\
& 2017-Dec-31 & 16.8$\pm$0.7 & 173.2$\pm$0.7 & & & & & & & 1.4$\pm$0.2 &\\
& **Average** & **18.5** & **173.2** &\\
Illyrikon Regio & 2016-Jun-17 & -73.2$\pm$0.8 & 192.8$\pm$3.3 & 200$\pm$190 & & & 100$\pm$70 & & & &\\
& 2016-Jun-20 & -68.8$\pm$0.3 & 173.0$\pm$2.6 & 180$\pm$70 & & & 110$\pm$40 & & & &\\
& 2016-Jun-24 & -72.7$\pm$1.1 & 186.8$\pm$10.7 & 310$\pm$260 & & & 100$\pm$50 & & & &\\
& 2016-Jun-27 & -67.2$\pm$2.9 & 164.9$\pm$11.3 & 94$\pm$21 & & & 130$\pm$70 & & & &\\
& **Average** & **-70.8** & **179.9** &\\
& 2015-Mar-27 & -50.0$\pm$1.7 & 195.5$\pm$2.4 & & & & 8.1$\pm$1.5 & & & &\\
& 2015-Mar-29 & -51.4$\pm$1.8 & 197.9$\pm$4.7 & & & & 11$\pm$3 & & & &\\
& 2015-Apr-01 & -50.2$\pm$0.9 & 198.0$\pm$2.4 & & 27$\pm$5 & 22$\pm$4 & 33$\pm$5 & & & 28$\pm$4 &\\
& 2015-Apr-19 & -50.0$\pm$2.0 & 198.1$\pm$2.2 & & & & 7.2$\pm$1.3 & & & &\\
& 2015-Apr-26 & -49.6$\pm$1.9 & 199.7$\pm$2.2 & & & & 4.8$\pm$2.4 & & & &\\
& 2015-May-05 & -48.5$\pm$1.6 & 198.8$\pm$1.0 & & 6.0$\pm$0.9 & 3.9$\pm$0.6 & 5.8$\pm$0.9 & & & 9.3$\pm$1.4 &\\
& **Average** & **-50.0** & **198.1** &\\
Isum Patera & 2015-May-05 & 31.9$\pm$0.4 & 209.2$\pm$0.5 & & & & & & & 1.9$\pm$0.3 &\\
& 2016-Dec-23 & 31.0$\pm$0.8 & 203.5$\pm$1.2 & & & & & & & 2.7$\pm$0.4 &\\
& 2017-Jan-04 & 29.3$\pm$1.4 & 201.2$\pm$2.0 & & & & & 3.9$\pm$0.6 & 3.1$\pm$0.5 & 2.9$\pm$0.4 &\\
& 2017-Jan-08 & 30.4$\pm$1.0 & 204.5$\pm$1.8 & & & & & & 1.9$\pm$0.3 & 2.1$\pm$0.3 &\\
& 2017-Feb-05 & 27.2$\pm$0.6 & 204.3$\pm$0.6 & & & & & & 2.0$\pm$0.3 & &\\
& 2017-May-28 & 31.1$\pm$0.9 & 209.6$\pm$1.1 & & & & & 2.0$\pm$0.3 & 2.3$\pm$0.3 & 2.0$\pm$0.3 &\\
& 2017-Dec-31 & 27.4$\pm$0.6 & 202.7$\pm$0.4 & & & & & & 3.1$\pm$0.5 & 4.6$\pm$0.7 &\\
& 2018-Jan-02 & 30.1$\pm$1.0 & 205.2$\pm$1.7 & & & & 1.9$\pm$0.3 & 2.8$\pm$0.4 & 2.9$\pm$0.4 & 3.2$\pm$0.5 &\\
& 2018-May-27 & 32.5$\pm$0.6 & 208.7$\pm$2.2 & 86$\pm$13 & & & 64$\pm$16 & & & &\\
& 2018-May-31 & 30.3$\pm$1.8 & 205.6$\pm$3.0 & 82$\pm$12 & & & 59$\pm$9 & & & &\\
& 2018-Jun-02 & 30.5$\pm$1.1 & 209.0$\pm$6.9 & & & & 51$\pm$25 & & & &\\
& 2018-Jun-16 & 32.2$\pm$1.2 & 204.3$\pm$0.7 & 48$\pm$7 & & & 39$\pm$6 & & & &\\
& 2018-Jun-18 & 31.5$\pm$1.0 & 206.9$\pm$2.0 & 71$\pm$14 & & & 35$\pm$7 & & & &\\
& 2018-Jun-23 & 31.4$\pm$0.6 & 205.9$\pm$0.7 & 30$\pm$4 & & & 32$\pm$5 & & & &\\
& 2018-Jun-25 & 31.6$\pm$2.0 & 206.0$\pm$0.5 & 45$\pm$7 & & & 29$\pm$5 & & & &\\
& 2018-Jun-30 & 33.5$\pm$0.0 & 204.3$\pm$1.3 & 28$\pm$4 & & & 28$\pm$4 & & & &\\
& **Average** & **31.1** & **205.4** &\\
Marduk Fluctus & 2013-Aug-20 & -24.8$\pm$0.8 & 211.1$\pm$2.8 & & & & & & & 5.7$\pm$0.9 &\\
& 2013-Aug-23 & -22.7$\pm$0.8 & 207.4$\pm$1.4 & & & & 6.0$\pm$1.4 & & & 7.5$\pm$1.1 &\\
& 2013-Sep-01 & -26.5$\pm$1.6 & 208.0$\pm$1.8 & & & & 9.2$\pm$1.4 & & & &\\
& 2013-Sep-03 & -21.0$\pm$1.7 & 207.9$\pm$2.4 & & & & 5.6$\pm$1.1 & & & &\\
& 2013-Sep-10 & -22.8$\pm$1.5 & 209.1$\pm$1.7 & & & & 7.1$\pm$1.3 & & & &\\
& 2013-Nov-18 & -24.4$\pm$0.6 & 208.5$\pm$0.6 & & 3.4$\pm$0.5 & 4.8$\pm$0.7 & 6.7$\pm$1.0 & & & 9.2$\pm$1.4 &\\
& 2013-Nov-27 & -27.9$\pm$1.3 & 203.3$\pm$2.9 & & & & 5.4$\pm$2.7 & & & &\\
& 2013-Dec-02 & -22.5$\pm$1.2 & 209.1$\pm$2.9 & & & & 5.8$\pm$2.9 & & & &\\
& 2013-Dec-04 & -23.4$\pm$1.1 & 210.0$\pm$1.2 & & & & 4.6$\pm$0.7 & & & &\\
& 2013-Dec-13 & -23.7$\pm$1.9 & 210.7$\pm$2.2 & & & & 3.7$\pm$0.6 & & & &\\
& 2014-Feb-08 & -23.8$\pm$1.1 & 213.3$\pm$2.0 & & & & 8.3$\pm$1.2 & & & 6.5$\pm$1.0 &\\
& 2014-Mar-10 & -24.0$\pm$1.5 & 215.1$\pm$2.5 & & & & 5.4$\pm$1.0 & & & &\\
& 2014-Oct-22 & -24.7$\pm$1.5 & 208.6$\pm$1.5 & & & & 5.7$\pm$0.9 & & & &\\
& 2014-Oct-31 & -23.2$\pm$0.8 & 210.0$\pm$0.5 & & & & 6.5$\pm$1.0 & & & 9.0$\pm$1.4 &\\
& 2014-Nov-30 & -25.0$\pm$1.3 & 208.0$\pm$1.6 & & & & 5.5$\pm$0.8 & & & &\\
& 2014-Dec-02 & -23.5$\pm$0.9 & 213.2$\pm$1.5 & & & & 5.4$\pm$1.9 & & & 8.4$\pm$1.3 &\\
& 2014-Dec-09 & -23.5$\pm$1.5 & 207.9$\pm$2.5 & & & & 3.8$\pm$1.9 & & & &\\
& 2014-Dec-16 & -25.4$\pm$1.5 & 206.8$\pm$2.5 & & & & 4.0$\pm$2.0 & & & &\\
& 2015-Jan-11 & -23.7$\pm$0.8 & 205.5$\pm$2.8 & & & & & & & 7.6$\pm$1.1 &\\
& 2015-Jan-15 & -23.5$\pm$1.1 & 212.2$\pm$1.4 & & & & 4.0$\pm$0.6 & & & &\\
& 2015-Jan-26 & -23.8$\pm$1.5 & 211.0$\pm$2.5 & & & & 2.9$\pm$1.5 & & & &\\
& 2015-Mar-27 & -24.8$\pm$1.2 & 211.1$\pm$1.2 & & & & 4.6$\pm$0.7 & & & &\\
& 2015-Mar-29 & -25.7$\pm$1.6 & 211.1$\pm$2.4 & & & & 5.6$\pm$1.1 & & & &\\
& 2015-Apr-01 & -22.2$\pm$0.7 & 217.4$\pm$6.1 & & & & & & & 8.5$\pm$6.1 &\\
&
2015-Apr-05 & -23.4$\pm$1.6 & 215.9$\pm$2.4 & & & & 3.6$\pm$0.7 & & & &\ & 2015-Apr-19 & -24.7$\pm$1.3 & 215.2$\pm$1.6 & & & & 5.3$\pm$1.0 & & & &\ & 2015-Apr-21 & -24.3$\pm$1.7 & 215.4$\pm$2.4 & & & & 4.4$\pm$0.8 & & & &\ & 2015-Apr-26 & -23.9$\pm$1.3 & 215.7$\pm$1.5 & & & & 4.1$\pm$0.6 & & & &\ & 2015-May-05 & -23.7$\pm$0.5 & 214.4$\pm$1.0 & & 4.3$\pm$0.6 & 3.6$\pm$0.5 & 5.7$\pm$0.8 & & & 11$\pm$2 &\ & 2016-Feb-03 & -22.5$\pm$1.2 & 210.6$\pm$1.3 & & & & 7.4$\pm$1.4 & & & &\ & 2016-Feb-17 & -22.7$\pm$1.2 & 206.1$\pm$3.6 & & & & 9.1$\pm$2.1 & & & &\ & 2016-Feb-19 & -23.6$\pm$1.1 & 205.4$\pm$1.2 & & & & 12$\pm$2 & & & &\ & 2016-Feb-21 & -23.2$\pm$1.2 & 211.0$\pm$2.1 & & & & 8.6$\pm$1.7 & & & &\ & 2016-Mar-11 & -23.7$\pm$1.2 & 209.1$\pm$2.7 & & & & 13$\pm$3 & & & &\ & 2016-Mar-13 & -24.2$\pm$1.1 & 209.8$\pm$1.3 & & & & 12$\pm$2 & & & &\ & 2016-Apr-30 & -20.9$\pm$1.2 & 209.2$\pm$1.3 & & & & 13$\pm$2 & & & &\ & 2016-May-02 & -22.0$\pm$1.3 & 215.5$\pm$1.9 & & & & 12$\pm$2 & & & &\ & 2016-May-09 & -21.2$\pm$1.3 & 211.4$\pm$1.6 & & & & 13$\pm$2 & & & &\ & 2016-May-11 & -22.3$\pm$1.3 & 212.0$\pm$4.4 & & & & 9.9$\pm$3.3 & & & &\ & 2016-May-14 & -20.6$\pm$1.3 & 214.2$\pm$1.5 & & & & 11$\pm$2 & & & &\ & 2016-May-16 & -23.4$\pm$1.3 & 210.5$\pm$1.4 & & & & 9.7$\pm$1.8 & & & &\ & 2016-May-18 & -21.4$\pm$1.3 & 215.1$\pm$2.5 & & & & 18$\pm$4 & & & &\ & 2016-May-23 & -22.5$\pm$1.3 & 213.9$\pm$1.5 & & & & 16$\pm$3 & & & &\ & 2016-May-25 & -24.4$\pm$1.4 & 215.2$\pm$2.5 & & & & 14$\pm$3 & & & &\ & 2016-Jun-01 & -20.2$\pm$1.3 & 214.5$\pm$1.6 & & & & 13$\pm$2 & & & &\ & 2016-Jun-03 & -21.1$\pm$1.4 & 214.2$\pm$4.4 & & & & 8.8$\pm$2.7 & & & &\ & 2016-Jun-08 & -20.1$\pm$1.4 & 212.7$\pm$1.4 & & & & 10$\pm$2 & & & &\ & 2016-Jun-10 & -20.2$\pm$1.4 & 214.4$\pm$2.7 & & & & 13$\pm$3 & & & &\ & 2016-Jun-17 & -25.3$\pm$1.5 & 212.3$\pm$2.3 & & & & 22$\pm$4 & & & &\ & 2016-Jun-24 & -25.7$\pm$1.5 & 212.0$\pm$1.9 & & & & 15$\pm$3 & & & &\ & 2016-Nov-23 & -20.9$\pm$1.6 & 211.9$\pm$2.7 & & & & 22$\pm$4 
& & & &\ & 2016-Dec-23 & -24.7$\pm$0.3 & 210.8$\pm$0.6 & & & 12$\pm$2 & 15$\pm$2 & 15$\pm$2 & 13$\pm$2 & 27$\pm$4 &\ & 2017-Jan-04 & -26.3$\pm$0.9 & 206.7$\pm$1.8 & & 7.6$\pm$1.1 & 9.4$\pm$1.4 & 16$\pm$2 & 15$\pm$2 & 12$\pm$2 & 20$\pm$3 &\ & 2017-Jan-08 & -25.9$\pm$0.9 & 208.7$\pm$1.0 & & 10$\pm$2 & 18$\pm$3 & 18$\pm$3 & 16$\pm$2 & 13$\pm$2 & 27$\pm$5 &\ & 2017-Jan-15 & -20.4$\pm$1.4 & 205.1$\pm$1.9 & & & & 12$\pm$2 & & & &\ & 2017-Jan-20 & -24.6$\pm$1.4 & 205.1$\pm$3.7 & & & & 21$\pm$4 & & & &\ & 2017-Jan-22 & -23.1$\pm$1.3 & 206.0$\pm$1.5 & & & & 10$\pm$2 & & & &\ & 2017-Jan-24 & -26.4$\pm$0.5 & 208.2$\pm$0.5 & & & 15$\pm$2 & 18$\pm$3 & & & 25$\pm$4 &\ & 2017-Jan-27 & -24.4$\pm$1.4 & 205.1$\pm$3.5 & & & & 20$\pm$4 & & & &\ & 2017-Jan-31 & -25.4$\pm$1.4 & 206.7$\pm$2.7 & & & & 18$\pm$3 & & & &\ & 2017-Feb-05 & -27.3$\pm$0.6 & 208.2$\pm$0.9 & & 15$\pm$2 & 22$\pm$3 & 27$\pm$4 & 25$\pm$4 & 22$\pm$3 & 33$\pm$5 &\ & 2017-Feb-23 & -24.6$\pm$1.2 & 211.5$\pm$1.5 & & & & 8.1$\pm$1.4 & & & &\ & 2017-Mar-04 & -22.3$\pm$1.2 & 211.9$\pm$1.8 & & & & 7.1$\pm$1.2 & & & &\ & 2017-Apr-03 & -21.0$\pm$1.1 & 209.4$\pm$1.4 & & & & 7.9$\pm$1.3 & & & &\ & 2017-May-05 & -22.3$\pm$1.2 & 212.8$\pm$1.9 & & & & 6.9$\pm$1.2 & & & &\ & 2017-May-07 & -21.5$\pm$1.2 & 215.3$\pm$2.7 & & & & 11$\pm$2 & & & &\ & 2017-May-10 & -22.9$\pm$1.2 & 210.5$\pm$2.4 & & & & 12$\pm$2 & & & &\ & 2017-May-12 & -23.9$\pm$1.2 & 213.2$\pm$1.3 & & & & 9.7$\pm$1.6 & & & &\ & 2017-May-14 & -25.0$\pm$1.2 & 213.4$\pm$2.1 & & & & 8.0$\pm$1.4 & & & &\ & 2017-May-28 & -24.9$\pm$0.7 & 215.5$\pm$0.4 & & 5.6$\pm$0.8 & 8.5$\pm$1.3 & 12$\pm$2 & 18$\pm$3 & 15$\pm$2 & 20$\pm$3 &\ & 2017-May-30 & -21.4$\pm$1.3 & 211.5$\pm$3.2 & & & & 15$\pm$3 & & & &\ & 2017-Jun-22 & -23.6$\pm$1.4 & 214.4$\pm$3.4 & & & & 12$\pm$3 & & & &\ & 2017-Jun-27 & -24.1$\pm$1.3 & 212.2$\pm$1.4 & & & & 12$\pm$2 & & & &\ & 2017-Jun-29 & -25.6$\pm$1.4 & 209.2$\pm$2.7 & & & & 12$\pm$2 & & & &\ & 2017-Jul-04 & -20.6$\pm$1.3 & 208.7$\pm$1.5 & & & & 7.1$\pm$1.2 & & 
& &\ & 2017-Jul-06 & -20.9$\pm$1.4 & 214.4$\pm$1.8 & & & & 8.4$\pm$1.4 & & & &\ & 2017-Jul-31 & -27.9$\pm$1.3 & 218.2$\pm$2.9 & & & & 9.2$\pm$1.8 & & 6.3$\pm$1.0 & 18$\pm$3 &\ & 2017-Dec-12 & -24.6$\pm$0.8 & 212.6$\pm$0.9 & & & 8.5$\pm$1.3 & 12$\pm$2 & & & &\ & 2017-Dec-31 & -26.5$\pm$0.9 & 208.5$\pm$2.0 & & 11$\pm$2 & 13$\pm$2 & 18$\pm$3 & & 17$\pm$3 & 27$\pm$4 &\ & 2018-Jan-02 & -26.8$\pm$1.2 & 209.7$\pm$0.3 & & 10$\pm$2 & 16$\pm$2 & 20$\pm$3 & 23$\pm$3 & 18$\pm$3 & 28$\pm$4 &\ & 2018-Jan-12 & -25.3$\pm$0.7 & 214.3$\pm$1.3 & & & & 19$\pm$3 & & & 27$\pm$4 &\ & 2018-May-31 & -26.2$\pm$1.2 & 212.8$\pm$1.6 & & & & 8.7$\pm$1.3 & & & &\ & 2018-Jun-16 & -24.9$\pm$1.2 & 210.2$\pm$1.2 & & & & 12$\pm$2 & & & &\ & 2018-Jun-18 & -23.2$\pm$1.2 & 214.2$\pm$2.4 & & & & 13$\pm$2 & & & &\ & 2018-Jun-23 & -25.4$\pm$1.2 & 212.6$\pm$1.4 & & & & 11$\pm$2 & & & &\ & 2018-Jun-25 & -24.6$\pm$1.2 & 214.0$\pm$1.8 & & & & 9.8$\pm$1.5 & & & &\ & 2018-Jun-30 & -22.1$\pm$1.2 & 210.1$\pm$1.3 & & & & 12$\pm$2 & & & &\ & **Average** & **-23.7** & **211.1** &\ Kurdalagon & 2013-Nov-18 & -45.3$\pm$1.1 & 214.9$\pm$1.0 & & & & 2.2$\pm$0.8 & & & 2.2$\pm$0.3 &\ & 2015-Jan-26 & -48.6$\pm$1.1 & 219.4$\pm$2.0 & 81$\pm$12 & & & 56$\pm$9 & & & &\ & 2015-Mar-27 & -50.0$\pm$1.9 & 219.4$\pm$2.3 & & & & 15$\pm$3 & & & &\ & 2015-Mar-29 & -49.6$\pm$1.7 & 223.8$\pm$2.5 & & & & 14$\pm$3 & & & &\ & 2015-Mar-31 & -50.0$\pm$0.9 & 224.5$\pm$2.3 & & 11$\pm$3 & 9.6$\pm$1.8 & 6.7$\pm$2.2 & & & 12$\pm$6 &\ & 2015-Apr-05 & -47.7$\pm$1.2 & 223.7$\pm$2.0 & 120$\pm$20 & & & 68$\pm$11 & & & &\ & 2015-Apr-17 & -50.4$\pm$2.1 & 224.2$\pm$2.0 & & & & 52$\pm$23 & & & &\ & 2015-Apr-19 & -50.0$\pm$1.9 & 224.4$\pm$3.6 & & & & 36$\pm$7 & & & &\ & 2015-Apr-21 & -49.7$\pm$1.9 & 223.3$\pm$2.7 & & & & 22$\pm$4 & & & &\ & 2015-Apr-26 & -49.6$\pm$1.9 & 223.7$\pm$2.9 & & & & 19$\pm$4 & & & &\ & 2015-May-05 & -48.4$\pm$2.1 & 222.3$\pm$1.1 & & 12$\pm$2 & 12$\pm$2 & 17$\pm$3 & & & 32$\pm$5 &\ & 2016-Feb-03 & -49.3$\pm$1.6 & 216.6$\pm$1.8 & & & & 
2.5$\pm$0.5 & & & &\ & 2016-Feb-19 & -48.6$\pm$1.6 & 210.3$\pm$1.8 & & & & 2.9$\pm$0.6 & & & &\ & 2016-Mar-13 & -48.5$\pm$1.5 & 213.9$\pm$2.3 & & & & 3.8$\pm$0.7 & & & &\ & 2016-Apr-30 & -45.2$\pm$1.6 & 214.1$\pm$2.2 & & & & 3.5$\pm$0.7 & & & &\ & 2016-May-14 & -47.0$\pm$1.7 & 217.3$\pm$2.5 & & & & 5.6$\pm$1.1 & & & &\ & 2016-May-16 & -49.3$\pm$1.8 & 215.8$\pm$1.9 & & & & 4.6$\pm$0.9 & & & &\ & 2016-May-18 & -48.5$\pm$1.8 & 218.0$\pm$4.4 & & & & 11$\pm$3 & & & &\ & 2016-May-23 & -45.7$\pm$1.7 & 218.1$\pm$2.4 & & & & 8.9$\pm$1.7 & & & &\ & 2016-May-25 & -48.4$\pm$1.8 & 220.8$\pm$4.0 & & & & 14$\pm$3 & & & &\ & 2016-Jun-01 & -45.8$\pm$1.8 & 220.2$\pm$2.3 & & & & 8.8$\pm$1.7 & & & &\ & 2016-Jun-08 & -45.1$\pm$1.8 & 217.7$\pm$1.9 & & & & 5.8$\pm$1.1 & & & &\ & 2016-Dec-23 & -48.7$\pm$0.8 & 216.8$\pm$1.5 & & & & 5.3$\pm$0.8 & 5.7$\pm$0.8 & 4.5$\pm$0.7 & 8.8$\pm$1.3 &\ & 2017-Jan-04 & -50.0$\pm$0.7 & 208.3$\pm$2.0 & & & & 7.7$\pm$1.2 & 7.2$\pm$1.1 & 4.8$\pm$0.7 & 7.7$\pm$1.2 &\ & 2017-Jan-08 & -50.5$\pm$1.5 & 214.7$\pm$2.0 & & & 3.5$\pm$0.5 & 5.4$\pm$1.0 & 5.6$\pm$0.8 & 5.1$\pm$0.8 & 9.5$\pm$1.4 &\ & 2017-Jan-22 & -45.5$\pm$1.7 & 214.2$\pm$1.9 & & & & 2.4$\pm$0.4 & & & &\ & 2017-Jan-24 & -53.0$\pm$0.9 & 212.8$\pm$3.6 & & & & 6.1$\pm$1.1 & & & &\ & 2017-Feb-05 & -49.9$\pm$1.1 & 213.3$\pm$1.1 & & & & 5.3$\pm$0.8 & 5.8$\pm$0.9 & 5.7$\pm$0.9 & 9.6$\pm$1.4 &\ & 2017-Dec-12 & -51.3$\pm$1.0 & 209.8$\pm$0.2 & & & 13$\pm$2 & 12$\pm$2 & & & &\ & 2017-Dec-31 & -51.7$\pm$2.0 & 210.3$\pm$2.1 & & & & 11$\pm$2 & & 9.1$\pm$1.4 & 17$\pm$2 &\ & 2018-Jan-02 & -51.5$\pm$2.2 & 213.6$\pm$0.5 & & 4.7$\pm$0.7 & 8.1$\pm$1.2 & 9.9$\pm$1.5 & 12$\pm$2 & 9.1$\pm$1.4 & 15$\pm$2 &\ & 2018-May-31 & -50.4$\pm$1.6 & 214.7$\pm$2.7 & & & & 4.7$\pm$0.7 & & & &\ & 2018-Jun-16 & -46.7$\pm$1.5 & 211.9$\pm$1.7 & & & & 5.7$\pm$0.9 & & & &\ & 2018-Jun-23 & -50.8$\pm$1.7 & 215.3$\pm$2.5 & & & & 7.0$\pm$1.1 & & & &\ & 2018-Jun-25 & -50.1$\pm$1.7 & 218.0$\pm$3.1 & & & & 6.9$\pm$1.1 & & & &\ & 2018-Jun-30 & 
-45.3$\pm$1.5 & 214.8$\pm$2.0 & & & & 4.8$\pm$0.7 & & & &\ & **Average** & **-49.3** & **216.7** &\ Unknown & 2018-Jan-02 & 53.6$\pm$0.6 & 217.8$\pm$1.2 & & & & & 2.9$\pm$0.4 & 2.3$\pm$0.3 & &\ & **Average** & **53.6** & **217.8** &\ & 2013-Aug-20 & 17.0$\pm$0.7 & 226.0$\pm$2.4 & & & & 2.9$\pm$1.4 & & & 6.1$\pm$0.9 &\ & 2013-Aug-23 & 18.2$\pm$1.1 & 221.8$\pm$2.7 & & & & & & & 5.9$\pm$0.9 &\ & 2013-Nov-18 & 21.5$\pm$1.1 & 218.2$\pm$1.0 & & 0.62$\pm$0.09 & 1.8$\pm$0.3 & 3.8$\pm$0.7 & & & 7.8$\pm$1.2 &\ & 2013-Dec-04 & 20.0$\pm$1.9 & 221.2$\pm$2.2 & & & & 1.3$\pm$0.7 & & & &\ & 2013-Dec-13 & 18.9$\pm$2.0 & 220.8$\pm$2.1 & & & & 1.1$\pm$0.6 & & & &\ & 2014-Feb-08 & 18.4$\pm$1.2 & 225.7$\pm$1.3 & & & & 1.6$\pm$0.6 & & & 1.8$\pm$0.3 &\ & 2014-Oct-31 & 20.6$\pm$0.8 & 220.6$\pm$1.0 & & & & & & & 1.1$\pm$0.5 &\ & 2015-May-05 & 16.6$\pm$1.0 & 223.3$\pm$1.4 & & & & 0.49$\pm$0.17 & & & 0.93$\pm$0.33 &\ & 2018-Jan-02 & 14.5$\pm$0.7 & 220.5$\pm$0.7 & & & & & 1.6$\pm$0.2 & & &\ & 2018-Jan-12 & 21.4$\pm$1.4 & 218.1$\pm$4.0 & & & 21$\pm$3 & 20$\pm$3 & & & 19$\pm$3 &\ & **Average** & **18.6** & **221.0** &\ 201308C & 2013-Aug-29 & 29.6$\pm$1.7 & 225.7$\pm$9.2 & & & & $>$500 & & & &\ & 2013-Aug-30 & 29.1$\pm$1.7 & 227.2$\pm$6.1 & & & & 360$^{+820}_{-200}$ & & & &\ & 2013-Sep-01 & 29.1$\pm$1.6 & 226.9$\pm$2.6 & & & & 73$\pm$11 & & & &\ & 2013-Sep-03 & 28.6$\pm$1.5 & 227.5$\pm$1.8 & & & & 27$\pm$4 & & & &\ & 2013-Sep-05 & 29.5$\pm$1.6 & 227.6$\pm$3.9 & & & & 20$\pm$5 & & & &\ & 2013-Sep-10 & 28.4$\pm$1.5 & 228.0$\pm$1.6 & & & & 12$\pm$2 & & & &\ & 2013-Nov-18 & 29.4$\pm$0.9 & 228.2$\pm$0.5 & & 1.2$\pm$0.2 & 2.4$\pm$0.4 & 3.0$\pm$0.5 & & & 4.4$\pm$0.7 &\ & 2014-Feb-08 & 29.2$\pm$1.0 & 234.5$\pm$1.2 & & & & 1.6$\pm$0.4 & & & 1.8$\pm$0.3 &\ & 2014-Oct-31 & 31.5$\pm$0.9 & 231.4$\pm$1.0 & & & & & & & 1.8$\pm$0.3 &\ & 2014-Dec-02 & 27.4$\pm$0.8 & 230.3$\pm$1.4 & & & & 3.0$\pm$0.5 & & & 2.5$\pm$0.4 &\ & 2015-May-05 & 28.1$\pm$0.7 & 232.4$\pm$1.1 & & & & 1.0$\pm$0.4 & & & 1.5$\pm$0.3 &\ & 
**Average** & **29.1** & **228.0** &\ P17 & 2017-Jan-08 & -3.5$\pm$0.9 & 228.8$\pm$0.4 & & & & 1.8$\pm$0.3 & 1.6$\pm$0.2 & 1.3$\pm$0.2 & &\ & **Average** & **-3.5** & **228.8** &\ P13 & 2017-Feb-05 & 13.6$\pm$0.6 & 228.4$\pm$0.8 & & 9.0$\pm$1.3 & 15$\pm$2 & 23$\pm$4 & 23$\pm$3 & 21$\pm$3 & 34$\pm$5 &\ & 2017-Feb-23 & 13.6$\pm$1.2 & 228.7$\pm$1.2 & & & & 2.2$\pm$0.4 & & & &\ & 2017-Mar-04 & 16.8$\pm$1.2 & 229.3$\pm$1.3 & & & & 2.2$\pm$0.4 & & & &\ & 2017-May-28 & 14.1$\pm$1.1 & 234.6$\pm$0.1 & & & & & 1.1$\pm$0.2 & 1.0$\pm$0.2 & 1.9$\pm$0.3 &\ & **Average** & **13.9** & **229.0** &\ East Girru & 2013-Dec-04 & 21.3$\pm$1.7 & 232.1$\pm$2.4 & & & & 5.0$\pm$0.8 & & & &\ & 2013-Dec-06 & 20.2$\pm$1.6 & 234.7$\pm$2.4 & & & & 5.4$\pm$1.0 & & & &\ & 2013-Dec-13 & 22.3$\pm$1.1 & 233.5$\pm$1.1 & & & & 4.6$\pm$0.7 & & & &\ & **Average** & **21.3** & **233.5** &\ Reiden Patera & 2017-Dec-12 & -17.0$\pm$0.7 & 234.9$\pm$1.2 & & & 6.0$\pm$0.9 & 3.5$\pm$0.5 & & & &\ & 2018-Jan-02 & -18.9$\pm$2.1 & 234.0$\pm$1.2 & & 2.7$\pm$0.4 & 2.6$\pm$0.4 & & & & 4.1$\pm$0.6 &\ & **Average** & **-18.0** & **234.4** &\ Pyerun Patera & 2013-Nov-18 & -57.7$\pm$1.9 & 237.1$\pm$2.3 & & & & & & & 4.0$\pm$0.6 &\ & **Average** & **-57.7** & **237.1** &\ SE of Pele & 2016-Dec-23 & -34.0$\pm$1.8 & 240.0$\pm$1.6 & & & 3.0$\pm$0.4 & 3.3$\pm$0.5 & 2.6$\pm$0.4 & 2.5$\pm$0.4 & 5.8$\pm$0.9 &\ & 2017-Jan-08 & -35.1$\pm$1.0 & 238.9$\pm$0.8 & & 3.6$\pm$0.5 & 4.5$\pm$0.7 & 4.2$\pm$0.6 & 3.9$\pm$0.6 & 3.0$\pm$0.5 & 5.9$\pm$1.0 &\ & 2017-Jan-15 & -30.6$\pm$1.4 & 232.5$\pm$1.5 & & & & 3.4$\pm$0.6 & & & &\ & 2017-Jan-22 & -34.0$\pm$1.5 & 236.6$\pm$1.9 & & & & 2.6$\pm$0.4 & & & &\ & 2017-Jan-24 & -36.4$\pm$1.0 & 239.8$\pm$1.8 & & & 3.9$\pm$0.6 & 3.2$\pm$0.5 & & & 5.2$\pm$0.8 &\ & 2017-Jan-31 & -36.4$\pm$1.5 & 240.0$\pm$1.9 & & & & 3.3$\pm$0.6 & & & &\ & 2017-Feb-05 & -36.9$\pm$1.5 & 235.6$\pm$1.1 & & & & 3.4$\pm$0.5 & 3.9$\pm$0.6 & 3.1$\pm$0.5 & 5.8$\pm$0.9 &\ & 2017-Feb-23 & -36.7$\pm$1.4 & 238.8$\pm$1.6 & & & & 
2.0$\pm$0.3 & & & &\ & 2017-Mar-04 & -41.9$\pm$1.4 & 235.2$\pm$1.7 & & & & 6.0$\pm$1.0 & & & &\ & 2017-Apr-03 & -30.9$\pm$1.2 & 236.4$\pm$1.3 & & & & 2.2$\pm$0.4 & & & &\ & 2017-May-05 & -34.2$\pm$1.3 & 239.0$\pm$1.6 & & & & 2.4$\pm$0.4 & & & &\ & 2017-May-07 & -31.2$\pm$1.3 & 244.2$\pm$1.8 & & & & 2.9$\pm$0.5 & & & &\ & 2017-May-12 & -34.5$\pm$1.3 & 240.0$\pm$2.3 & & & & 4.6$\pm$0.8 & & & &\ & 2017-May-14 & -33.4$\pm$1.3 & 242.8$\pm$1.5 & & & & 2.3$\pm$0.4 & & & &\ & 2017-May-28 & -35.1$\pm$1.0 & 242.2$\pm$0.8 & & 2.6$\pm$0.4 & 3.0$\pm$0.4 & 4.0$\pm$0.6 & 5.8$\pm$0.9 & 4.6$\pm$0.7 & 6.6$\pm$1.0 &\ & 2017-May-30 & -31.6$\pm$1.3 & 239.8$\pm$2.1 & & & & 4.1$\pm$0.7 & & & &\ & 2017-Jun-27 & -34.3$\pm$1.5 & 237.9$\pm$2.3 & & & & 3.3$\pm$0.6 & & & &\ & 2017-Jun-29 & -38.4$\pm$1.6 & 238.0$\pm$2.1 & & & & 3.8$\pm$0.7 & & & &\ & 2017-Jul-06 & -34.2$\pm$1.5 & 238.4$\pm$1.6 & & & & 2.8$\pm$0.5 & & & &\ & 2017-Jul-31 & -38.7$\pm$0.6 & 241.0$\pm$1.1 & & & & 3.8$\pm$0.6 & & 3.1$\pm$0.5 & 8.0$\pm$1.2 &\ & 2017-Dec-12 & -35.5$\pm$0.6 & 239.3$\pm$1.7 & & & 5.6$\pm$0.8 & 5.0$\pm$0.7 & & & &\ & 2018-Jan-02 & -35.1$\pm$1.7 & 237.2$\pm$1.2 & & & 4.7$\pm$0.7 & 6.7$\pm$1.0 & 7.4$\pm$1.1 & 6.6$\pm$1.0 & 14$\pm$2 &\ & 2018-Jan-12 & -36.3$\pm$3.0 & 239.8$\pm$2.4 & & & & 6.7$\pm$1.0 & & & 13$\pm$2 &\ & 2018-Apr-24 & -32.9$\pm$1.3 & 236.8$\pm$2.6 & & & & 4.1$\pm$0.8 & & & &\ & 2018-May-31 & -34.5$\pm$1.2 & 243.3$\pm$1.4 & & & & 4.2$\pm$0.6 & & & &\ & 2018-Jun-16 & -36.2$\pm$1.3 & 240.1$\pm$2.0 & & & & 5.9$\pm$0.9 & & & &\ & 2018-Jun-18 & -31.4$\pm$1.3 & 244.4$\pm$1.6 & & & & 4.6$\pm$0.7 & & & &\ & 2018-Jun-23 & -32.3$\pm$1.3 & 240.7$\pm$2.5 & & & & 5.4$\pm$0.8 & & & &\ & 2018-Jun-25 & -35.3$\pm$1.3 & 240.1$\pm$1.5 & & & & 4.3$\pm$0.6 & & & &\ & 2018-Jun-30 & -33.1$\pm$1.3 & 237.4$\pm$2.3 & & & & 4.5$\pm$0.7 & & & &\ & **Average** & **-34.5** & **239.5** &\ Pillan Patera & 2015-Mar-27 & -11.8$\pm$1.1 & 243.2$\pm$1.5 & & & & 7.6$\pm$1.1 & & & &\ & 2015-Mar-29 & -12.1$\pm$1.1 & 245.4$\pm$1.2 & 
& & & 6.5$\pm$1.0 & & & &\ & 2015-Mar-31 & -11.4$\pm$0.4 & 248.7$\pm$1.3 & & & 3.1$\pm$0.5 & 6.2$\pm$0.9 & & & 15$\pm$2 &\ & 2015-Apr-05 & -11.3$\pm$1.2 & 245.4$\pm$1.2 & & & & 5.3$\pm$0.8 & & & &\ & 2015-Apr-21 & -12.1$\pm$1.2 & 244.6$\pm$1.2 & & & & 4.1$\pm$0.6 & & & &\ & 2015-Apr-26 & -11.8$\pm$1.6 & 244.4$\pm$2.4 & & & & 3.3$\pm$0.6 & & & &\ & 2015-May-05 & -11.1$\pm$0.6 & 245.3$\pm$0.8 & & & & 3.7$\pm$0.5 & & & 11$\pm$2 &\ & 2017-Feb-05 & -12.2$\pm$0.7 & 240.1$\pm$0.8 & & 4.4$\pm$0.7 & 8.9$\pm$1.3 & 9.9$\pm$1.5 & 9.3$\pm$1.4 & 8.8$\pm$1.3 & 11$\pm$2 &\ & 2017-Feb-23 & -11.5$\pm$1.2 & 242.6$\pm$1.2 & & & & 27$\pm$5 & & & &\ & 2017-Mar-04 & -9.5$\pm$1.1 & 240.8$\pm$1.1 & & & & 24$\pm$4 & & & &\ & 2017-Mar-06 & -9.3$\pm$1.2 & 243.7$\pm$2.1 & & & & 24$\pm$4 & & & &\ & 2017-Apr-03 & -8.1$\pm$1.1 & 239.6$\pm$1.1 & & & & 4.3$\pm$0.7 & & & &\ & 2017-May-05 & -10.1$\pm$1.1 & 243.7$\pm$1.1 & & & & 2.1$\pm$0.4 & & & &\ & 2017-May-07 & -9.6$\pm$1.1 & 245.6$\pm$1.3 & & & & 2.3$\pm$0.4 & & & &\ & 2017-May-14 & -11.3$\pm$1.1 & 243.6$\pm$1.2 & & & & 1.6$\pm$0.3 & & & &\ & 2017-May-28 & -10.8$\pm$0.7 & 245.2$\pm$0.5 & & & & 3.3$\pm$0.5 & 5.7$\pm$0.9 & 5.1$\pm$0.8 & 9.1$\pm$1.4 &\ & 2017-May-30 & -7.5$\pm$1.2 & 239.2$\pm$1.5 & & & & 2.1$\pm$0.4 & & & &\ & 2017-Jun-29 & -12.7$\pm$1.3 & 242.0$\pm$1.4 & & & & 1.8$\pm$0.3 & & & &\ & 2017-Jul-06 & -9.2$\pm$1.3 & 237.6$\pm$1.3 & & & & 1.5$\pm$0.3 & & & &\ & 2017-Jul-31 & -14.1$\pm$2.9 & 245.8$\pm$0.8 & & & & 2.3$\pm$0.3 & & & 7.1$\pm$1.1 &\ & 2018-Jan-12 & -10.9$\pm$0.6 & 243.9$\pm$0.9 & & & & & & & 3.8$\pm$0.6 &\ & **Average** & **-11.3** & **243.7** &\ Chors Patera & 2014-Oct-22 & 65.1$\pm$3.5 & 245.6$\pm$12.0 & & & & 57$\pm$19 & & & &\ & 2014-Oct-24 & 65.1$\pm$3.4 & 245.3$\pm$4.3 & & & & 35$\pm$8 & & & &\ & 2014-Oct-31 & 66.4$\pm$2.6 & 243.8$\pm$1.2 & & & & 23$\pm$3 & & & 53$\pm$8 &\ & 2014-Dec-02 & 63.0$\pm$1.0 & 248.5$\pm$1.5 & & & & 7.4$\pm$1.1 & & & 14$\pm$2 &\ & 2015-Mar-31 & 66.3$\pm$1.6 & 254.5$\pm$4.7 & & & & & & & 
3.0$\pm$1.5 &\ & **Average** & **65.1** & **245.6** &\ UP 254W & 2018-May-10 & -36.8$\pm$1.2 & 251.9$\pm$0.3 & 120$\pm$20 & & & 130$\pm$20 & & & &\ & 2018-May-31 & -37.3$\pm$1.3 & 257.0$\pm$1.8 & & & & 2.0$\pm$0.3 & & & &\ & **Average** & **-37.1** & **254.5** &\ Pele & 2013-Aug-20 & -17.4$\pm$0.4 & 255.1$\pm$1.6 & & & & 2.1$\pm$0.5 & & & 3.3$\pm$0.5 &\ & 2013-Nov-18 & -18.1$\pm$0.9 & 251.6$\pm$0.8 & & 0.72$\pm$0.11 & 1.0$\pm$0.2 & 2.7$\pm$0.8 & & & 3.7$\pm$0.6 &\ & 2014-Feb-08 & -17.6$\pm$0.6 & 257.3$\pm$0.4 & & & & 2.2$\pm$0.3 & & & 2.5$\pm$0.4 &\ & 2014-Oct-31 & -18.1$\pm$0.7 & 253.1$\pm$0.4 & & & & 2.7$\pm$0.4 & & & 3.3$\pm$0.5 &\ & 2014-Dec-02 & -20.3$\pm$1.6 & 253.6$\pm$0.5 & & 1.1$\pm$0.2 & & 3.4$\pm$0.5 & & & 4.6$\pm$0.7 &\ & 2015-Jan-26 & -19.2$\pm$2.0 & 255.2$\pm$2.1 & & & & 0.34$\pm$0.17 & & & &\ & 2015-Mar-31 & -19.4$\pm$0.7 & 260.2$\pm$2.5 & & 0.8$\pm$0.12 & 1.1$\pm$0.2 & 1.1$\pm$0.4 & & & 1.7$\pm$0.3 &\ & 2015-Apr-19 & -16.7$\pm$1.2 & 258.2$\pm$3.6 & & & & 7.0$\pm$3.5 & & & &\ & 2015-May-05 & -19.2$\pm$1.0 & 257.9$\pm$1.8 & & & & 1.5$\pm$0.6 & & & &\ & 2015-Nov-23 & -17.3$\pm$0.7 & 256.7$\pm$2.9 & & & & & & & 4.1$\pm$0.7 &\ & 2016-Feb-21 & -18.2$\pm$1.1 & 253.2$\pm$1.1 & & & & 1.3$\pm$0.2 & & & &\ & 2016-Dec-23 & -17.8$\pm$1.1 & 253.7$\pm$0.6 & & & & & & 2.5$\pm$0.4 & 2.5$\pm$0.4 &\ & 2017-Jan-03 & -17.8$\pm$0.6 & 255.4$\pm$1.8 & & & & 3.5$\pm$0.5 & & 7.2$\pm$1.1 & 6.4$\pm$1.0 &\ & 2017-Jan-08 & -18.2$\pm$0.7 & 253.4$\pm$1.1 & & & & 1.6$\pm$0.4 & 1.6$\pm$0.2 & 2.3$\pm$0.4 & 2.9$\pm$0.7 &\ & 2017-Jan-24 & -19.4$\pm$0.4 & 254.2$\pm$0.7 & & & & 2.6$\pm$0.4 & & & 3.2$\pm$0.5 &\ & 2017-May-28 & -19.2$\pm$1.2 & 259.0$\pm$0.6 & & & 0.92$\pm$0.14 & 2.0$\pm$0.3 & 3.4$\pm$0.5 & 3.6$\pm$0.5 & 3.2$\pm$0.5 &\ & 2017-Jul-31 & -19.9$\pm$0.8 & 256.7$\pm$0.9 & & & & & 2.0$\pm$0.3 & 2.3$\pm$0.3 & &\ & 2017-Dec-12 & -18.6$\pm$0.7 & 254.6$\pm$0.7 & & & & 1.1$\pm$0.2 & & & &\ & 2018-Jan-12 & -19.3$\pm$0.0 & 256.0$\pm$1.0 & & & & 1.6$\pm$0.2 & & & 2.4$\pm$0.4 &\ & 
**Average** & **-18.2** & **255.2** &\ Shakuru Patera & 2018-Apr-24 & 27.9$\pm$1.3 & 260.3$\pm$1.6 & & & & 3.9$\pm$0.7 & & & &\ & 2018-May-31 & 21.7$\pm$1.2 & 263.2$\pm$1.5 & & & & 1.5$\pm$0.2 & & & &\ & **Average** & **24.8** & **261.7** &\ Mithra Patera & 2015-Jan-10 & -59.8$\pm$2.1 & 264.3$\pm$4.4 & & & & 55$\pm$12 & & & &\ & 2015-Jan-12 & -57.7$\pm$0.8 & 269.6$\pm$6.0 & & & & 13$\pm$8 & & & 20$\pm$8 &\ & 2015-Jan-15 & -57.3$\pm$1.9 & 263.5$\pm$4.9 & & & & 8.5$\pm$4.2 & & & &\ & 2015-Mar-31 & -58.3$\pm$0.9 & 266.9$\pm$1.8 & & & & & & & 3.1$\pm$0.5 &\ & **Average** & **-58.0** & **265.6** &\ Svarog Patera & 2016-Dec-23 & -52.9$\pm$1.0 & 269.3$\pm$1.9 & & & & & & & 5.1$\pm$0.8 &\ & 2017-Jan-08 & -51.0$\pm$0.0 & 266.6$\pm$0.3 & & & & & & & 2.5$\pm$0.4 &\ & 2017-May-28 & -51.6$\pm$0.8 & 275.5$\pm$2.7 & & & & & & & 4.9$\pm$0.7 &\ & **Average** & **-51.6** & **269.3** &\ Daedalus Patera & 2016-May-11 & 19.1$\pm$1.3 & 274.7$\pm$1.3 & & & & 2.8$\pm$0.5 & & & &\ & 2017-Jan-24 & 18.7$\pm$0.1 & 274.9$\pm$1.3 & & & & 0.97$\pm$0.15 & & & 2.1$\pm$0.3 &\ & 2017-Jul-01 & 18.7$\pm$1.4 & 273.9$\pm$1.8 & & & & 5.0$\pm$0.9 & & & &\ & 2017-Jul-08 & 16.8$\pm$1.4 & 272.3$\pm$1.5 & & & & 2.0$\pm$0.3 & & & &\ & 2018-Jan-12 & 18.1$\pm$0.0 & 273.3$\pm$0.3 & & & & 2.2$\pm$0.3 & & & 4.3$\pm$0.6 &\ & **Average** & **18.7** & **273.9** &\ PV59 & 2014-Feb-08 & -38.6$\pm$0.3 & 289.5$\pm$0.4 & & & & 16$\pm$2 & & & 17$\pm$3 &\ & 2014-Feb-10 & -38.3$\pm$0.3 & 290.6$\pm$0.4 & & & 13$\pm$2 & 18$\pm$3 & & & 21$\pm$3 &\ & 2014-Mar-12 & -42.2$\pm$1.9 & 292.4$\pm$2.2 & & & & 7.9$\pm$1.5 & & & &\ & 2014-Oct-31 & -37.9$\pm$1.0 & 286.5$\pm$1.8 & & & & 4.7$\pm$0.7 & & & 3.2$\pm$0.5 &\ & 2014-Dec-02 & -37.0$\pm$0.9 & 289.1$\pm$1.0 & & & & & & & 1.9$\pm$0.3 &\ & 2015-Mar-31 & -37.0$\pm$1.0 & 293.6$\pm$0.8 & & & 1.1$\pm$0.2 & 1.5$\pm$0.3 & & & 1.7$\pm$0.3 &\ & 2015-Apr-02 & -38.4$\pm$1.1 & 290.6$\pm$2.7 & & & & & & & 2.5$\pm$0.5 &\ & 2015-Dec-25 & -38.8$\pm$0.5 & 285.9$\pm$1.6 & & 4.3$\pm$0.6 & 5.3$\pm$0.8 & 
7.1$\pm$1.1 & & & 12$\pm$2 &\ & 2016-Dec-23 & -39.0$\pm$0.6 & 288.8$\pm$1.7 & & & & & 4.5$\pm$0.7 & 3.2$\pm$0.5 & 7.7$\pm$1.2 &\ & 2017-Jan-03 & -38.4$\pm$0.7 & 288.9$\pm$1.4 & & & & 2.9$\pm$0.4 & 4.0$\pm$0.6 & 4.6$\pm$0.7 & 6.3$\pm$0.9 &\ & 2017-Jan-08 & -38.4$\pm$1.2 & 285.1$\pm$1.3 & & & & 3.0$\pm$0.5 & 3.1$\pm$0.8 & 2.2$\pm$0.6 & 5.6$\pm$0.8 &\ & 2017-Jan-24 & -39.5$\pm$1.4 & 287.0$\pm$0.4 & & & & 2.4$\pm$0.4 & & & 4.6$\pm$0.7 &\ & 2017-May-07 & -35.4$\pm$1.3 & 291.9$\pm$1.8 & & & & 3.4$\pm$0.6 & & & &\ & 2017-May-14 & -35.8$\pm$1.3 & 288.3$\pm$2.3 & & & & 3.9$\pm$0.7 & & & &\ & 2017-May-23 & -33.6$\pm$1.3 & 294.0$\pm$1.5 & & & & 6.8$\pm$1.2 & & & &\ & 2017-May-25 & -37.4$\pm$1.4 & 293.3$\pm$2.7 & & & & 13$\pm$2 & & & &\ & 2017-May-28 & -37.2$\pm$0.4 & 293.7$\pm$2.5 & & & 13$\pm$4 & 13$\pm$2 & 19$\pm$4 & 15$\pm$2 & 18$\pm$3 &\ & 2017-May-30 & -34.7$\pm$1.4 & 287.2$\pm$1.8 & & & & 8.3$\pm$1.4 & & & &\ & 2017-Jun-15 & -37.6$\pm$1.5 & 292.6$\pm$1.7 & & & & 3.3$\pm$0.6 & & & &\ & 2017-Jul-31 & -38.5$\pm$0.4 & 293.1$\pm$0.8 & & & & & 2.0$\pm$0.3 & 1.9$\pm$0.3 & 5.9$\pm$0.9 &\ & 2018-Jan-12 & -38.2$\pm$2.0 & 288.9$\pm$0.6 & & & & 2.5$\pm$0.4 & & & 4.4$\pm$0.7 &\ & 2018-Jun-18 & -38.4$\pm$1.4 & 289.8$\pm$2.1 & & & & 5.7$\pm$0.9 & & & &\ & **Average** & **-38.2** & **289.7** &\ N Lerna Regio & 2013-Nov-18 & -56.8$\pm$1.4 & 279.7$\pm$4.7 & & & & & & & 2.7$\pm$1.4 &\ & 2013-Dec-06 & -58.0$\pm$2.0 & 285.7$\pm$3.1 & & & & 7.1$\pm$3.6 & & & &\ & 2014-Feb-10 & -56.1$\pm$1.0 & 294.0$\pm$1.5 & & & 4.0$\pm$0.6 & & & & &\ & 2014-Oct-31 & -56.0$\pm$1.4 & 281.1$\pm$1.9 & & & & 5.6$\pm$1.4 & & & 5.5$\pm$0.8 &\ & 2014-Dec-02 & -60.0$\pm$1.1 & 284.1$\pm$1.3 & & & & 5.4$\pm$1.2 & & & 4.3$\pm$0.6 &\ & 2015-Mar-31 & -57.5$\pm$2.9 & 295.2$\pm$0.8 & & 5.3$\pm$0.8 & 3.6$\pm$0.5 & 3.2$\pm$1.1 & & & 3.3$\pm$0.5 &\ & 2015-Nov-23 & -58.3$\pm$1.7 & 285.3$\pm$2.5 & & & & 4.5$\pm$2.3 & & & 7.0$\pm$1.1 &\ & 2015-Dec-25 & -56.6$\pm$1.0 & 290.6$\pm$2.4 & & & & 3.1$\pm$1.1 & & & 4.3$\pm$0.6 &\ & 
2016-May-04 & -51.9$\pm$1.8 & 294.7$\pm$2.0 & & & & 5.3$\pm$1.0 & & & &\ & 2016-May-20 & -50.8$\pm$1.9 & 295.2$\pm$2.7 & & & & 7.4$\pm$1.5 & & & &\ & 2016-May-25 & -54.6$\pm$2.1 & 291.3$\pm$4.6 & & & & 7.0$\pm$1.5 & & & &\ & 2016-May-27 & -53.1$\pm$2.0 & 291.6$\pm$2.4 & & & & 6.7$\pm$1.3 & & & &\ & 2016-Jun-12 & -53.9$\pm$2.1 & 293.2$\pm$3.3 & & & & 8.0$\pm$1.6 & & & &\ & 2016-Jun-19 & -50.5$\pm$2.0 & 289.9$\pm$2.3 & & & & 5.8$\pm$1.1 & & & &\ & 2017-Jan-03 & -55.1$\pm$1.7 & 292.9$\pm$1.8 & & & & 3.3$\pm$0.5 & & & 4.5$\pm$0.7 &\ & 2017-Jan-08 & -54.8$\pm$1.0 & 283.2$\pm$1.8 & & & & 4.0$\pm$0.6 & & & &\ & 2017-Jan-24 & -59.4$\pm$1.0 & 288.8$\pm$0.6 & & & & 3.5$\pm$0.5 & & & 3.5$\pm$0.5 &\ & 2017-Jul-31 & -54.8$\pm$1.0 & 295.9$\pm$1.5 & & & & & & & 5.3$\pm$0.8 &\ & 2018-Jan-12 & -57.7$\pm$0.5 & 290.0$\pm$2.0 & & & & 4.1$\pm$0.6 & & & 5.8$\pm$0.9 &\ & **Average** & **-56.0** & **290.6** &\ Kibero Patera & 2016-Jun-24 & -14.8$\pm$1.5 & 296.6$\pm$4.4 & & & & 15$\pm$4 & & & &\ & 2016-Jun-28 & -10.2$\pm$1.4 & 297.7$\pm$1.7 & & & & 9.0$\pm$1.5 & & & &\ & **Average** & **-12.5** & **297.1** &\ Amaterasu Patera & 2015-Dec-25 & 39.4$\pm$0.8 & 304.3$\pm$1.1 & & 16$\pm$2 & 28$\pm$4 & 43$\pm$6 & & & 110$\pm$20 &\ & 2016-Feb-09 & 42.0$\pm$1.6 & 299.8$\pm$2.2 & & & & 6.8$\pm$1.3 & & & &\ & 2016-Feb-16 & 40.9$\pm$1.5 & 305.1$\pm$1.6 & & & & 5.9$\pm$1.1 & & & &\ & 2016-Feb-23 & 40.8$\pm$1.5 & 306.8$\pm$1.7 & & & & 5.9$\pm$1.1 & & & &\ & 2016-Mar-12 & 38.8$\pm$1.5 & 301.3$\pm$2.4 & & & & 3.8$\pm$0.8 & & & &\ & 2016-May-04 & 40.3$\pm$1.6 & 308.4$\pm$1.8 & & & & 3.5$\pm$0.7 & & & &\ & 2016-May-11 & 38.5$\pm$1.6 & 305.4$\pm$2.4 & & & & 3.5$\pm$0.7 & & & &\ & 2016-May-20 & 39.8$\pm$1.7 & 305.5$\pm$1.8 & & & & 2.5$\pm$0.5 & & & &\ & 2016-May-27 & 38.4$\pm$1.7 & 303.3$\pm$1.8 & & & & 3.3$\pm$0.6 & & & &\ & 2016-Jun-05 & 37.5$\pm$1.7 & 301.2$\pm$2.5 & & & & 3.8$\pm$0.7 & & & &\ & 2017-Jan-03 & 35.0$\pm$1.9 & 305.2$\pm$0.3 & & & & 1.7$\pm$0.2 & & & 6.9$\pm$1.0 &\ & 2017-Jan-08 & 33.7$\pm$1.8 
& 302.0$\pm$1.2 & & & & & & 1.7$\pm$0.3 & 5.3$\pm$0.8 &\ & 2017-Jan-24 & 31.6$\pm$0.7 & 301.2$\pm$1.0 & & & & & & & 3.1$\pm$0.5 &\ & **Average** & **38.8** & **304.3** &\ Sengen Patera & 2016-Jun-19 & -32.4$\pm$1.6 & 304.9$\pm$1.8 & & & & 2.9$\pm$0.6 & & & &\ & 2017-Jan-03 & -30.0$\pm$1.0 & 310.9$\pm$0.6 & & & & & 1.7$\pm$0.3 & 2.0$\pm$0.3 & 2.4$\pm$0.4 &\ & 2017-Mar-04 & -29.6$\pm$1.3 & 303.0$\pm$3.6 & & & & 9.6$\pm$1.9 & & & &\ & 2017-Mar-06 & -26.4$\pm$1.2 & 305.4$\pm$1.3 & & & & 6.0$\pm$1.0 & & & &\ & **Average** & **-29.8** & **305.1** &\ Rarog Patera & 2013-Aug-15 & -40.5$\pm$2.2 & 305.5$\pm$2.6 & 500$\pm$80 & & & 330$\pm$80 & & & &\ & 2013-Aug-20 & -40.1$\pm$1.4 & 302.1$\pm$1.4 & 4.0$\pm$1.9 & & & 23$\pm$4 & & & &\ & 2013-Aug-22 & -39.3$\pm$1.8 & 305.4$\pm$1.8 & & & & 21$\pm$3 & & & 35$\pm$5 &\ & 2013-Sep-07 & -41.0$\pm$1.9 & 304.8$\pm$2.9 & & & & 7.7$\pm$1.5 & & & &\ & 2013-Nov-29 & -42.1$\pm$2.1 & 304.9$\pm$2.1 & & & & 4.3$\pm$0.6 & & & &\ & 2013-Dec-15 & -39.7$\pm$1.8 & 303.0$\pm$2.3 & & & & 2.9$\pm$1.4 & & & &\ & 2014-Feb-10 & -37.8$\pm$0.4 & 307.9$\pm$1.1 & & & 3.5$\pm$0.5 & 3.3$\pm$0.5 & & & 2.6$\pm$0.4 &\ & 2014-Mar-10 & -41.9$\pm$1.6 & 300.2$\pm$3.8 & & & & 5.2$\pm$2.6 & & & &\ & 2014-Mar-12 & -39.0$\pm$1.9 & 309.9$\pm$2.2 & & & & 2.7$\pm$1.4 & & & &\ & 2014-Mar-28 & -37.2$\pm$1.6 & 306.0$\pm$2.5 & & & & 4.9$\pm$1.0 & & & &\ & 2015-Jan-12 & -33.7$\pm$1.4 & 304.9$\pm$1.5 & & & & 1.9$\pm$0.3 & & & 2.0$\pm$0.3 &\ & 2015-Mar-31 & -37.4$\pm$1.1 & 307.2$\pm$1.0 & & 2.8$\pm$0.4 & 1.9$\pm$0.3 & 1.6$\pm$0.4 & & & 0.99$\pm$0.35 &\ & 2015-Apr-02 & -39.0$\pm$2.9 & 308.3$\pm$1.2 & & & & 2.2$\pm$0.8 & & & 1.3$\pm$0.5 &\ & 2015-Apr-09 & -39.1$\pm$1.9 & 309.0$\pm$2.2 & & & & 5.5$\pm$0.8 & & & &\ & **Average** & **-39.2** & **305.4** &\ Heno Patera & 2013-Aug-15 & -55.2$\pm$2.1 & 307.5$\pm$2.1 & 92$\pm$15 & & & 270$\pm$70 & & & &\ & 2013-Aug-20 & -53.0$\pm$2.1 & 304.0$\pm$3.0 & 4.9$\pm$2.4 & & & 40$\pm$6 & & & &\ & 2013-Aug-22 & -55.6$\pm$1.7 & 309.5$\pm$1.7 & 
9.3$\pm$2.6 & & & 50$\pm$8 & & & 69$\pm$11 &\ & 2013-Aug-29 & -55.7$\pm$2.5 & 309.2$\pm$3.0 & & & & 31$\pm$6 & & & &\ & 2013-Sep-05 & -55.0$\pm$2.5 & 307.1$\pm$4.6 & & & & 15$\pm$3 & & & &\ & 2013-Sep-07 & -56.2$\pm$2.5 & 307.5$\pm$4.3 & & & & 17$\pm$4 & & & &\ & 2015-Apr-02 & -57.9$\pm$0.8 & 305.0$\pm$1.9 & & & & & & & 2.8$\pm$0.4 &\ & **Average** & **-55.6** & **307.5** &\ Loki Patera & 2013-Aug-15 & 12.0$\pm$1.4 & 309.0$\pm$1.4 & 7.7$\pm$1.4 & & & 60$\pm$10 & & & &\ & 2013-Aug-20 & 11.9$\pm$0.8 & 308.7$\pm$1.2 & 13$\pm$3 & & & 130$\pm$30 & & & 200$\pm$30 &\ & 2013-Aug-22 & 11.7$\pm$1.2 & 308.4$\pm$1.0 & 7.5$\pm$1.3 & & & 140$\pm$20 & & & 210$\pm$30 &\ & 2013-Aug-29 & 11.3$\pm$1.4 & 307.1$\pm$1.4 & & & & 130$\pm$20 & & & &\ & 2013-Sep-03 & 10.9$\pm$1.5 & 308.4$\pm$4.7 & & & & 50$\pm$14 & & & &\ & 2013-Sep-05 & 11.3$\pm$1.4 & 307.4$\pm$1.6 & & & & 73$\pm$11 & & & &\ & 2013-Sep-07 & 12.0$\pm$1.4 & 307.0$\pm$1.7 & & & & 84$\pm$13 & & & &\ & 2013-Sep-09 & 10.2$\pm$1.5 & 310.3$\pm$5.0 & & & & 37$\pm$19 & & & &\ & 2013-Sep-10 & 12.0$\pm$1.4 & 295.1$\pm$5.3 & & & & 15$\pm$5 & & & &\ & 2013-Nov-18 & 14.2$\pm$0.5 & 309.5$\pm$3.7 & & & & & & & 28$\pm$10 &\ & 2013-Nov-27 & 11.9$\pm$1.4 & 307.0$\pm$2.6 & & & & 12$\pm$2 & & & &\ & 2013-Nov-29 & 12.4$\pm$1.1 & 306.5$\pm$1.1 & & & & 13$\pm$2 & & & &\ & 2013-Dec-06 & 12.0$\pm$1.1 & 307.5$\pm$1.5 & & & & 11$\pm$2 & & & &\ & 2013-Dec-15 & 12.5$\pm$1.0 & 307.3$\pm$1.1 & & & & 11$\pm$2 & & & &\ & 2014-Jan-20 & 12.2$\pm$0.6 & 312.6$\pm$2.9 & & & & & & & 17$\pm$4 &\ & 2014-Feb-08 & 12.6$\pm$0.9 & 310.7$\pm$0.3 & & & & 8.5$\pm$1.3 & & & 25$\pm$4 &\ & 2014-Feb-10 & 12.6$\pm$0.3 & 310.0$\pm$0.3 & & & 2.1$\pm$0.3 & 11$\pm$2 & & & 32$\pm$5 &\ & 2014-Mar-10 & 12.1$\pm$1.2 & 314.0$\pm$2.7 & & & & 6.4$\pm$3.2 & & & &\ & 2014-Mar-12 & 11.9$\pm$1.2 & 311.3$\pm$1.2 & & & & 7.7$\pm$1.1 & & & &\ & 2014-Mar-28 & 13.1$\pm$1.3 & 309.5$\pm$1.5 & & & & 7.6$\pm$1.1 & & & &\ & 2014-Oct-03 & 12.2$\pm$1.4 & 306.9$\pm$1.4 & & & & 130$\pm$20 & & & &\ & 
2014-Oct-10 & 11.6$\pm$1.4 & 306.6$\pm$1.6 & & & & 130$\pm$20 & & & &\ & 2014-Oct-24 & 10.8$\pm$1.4 & 304.7$\pm$2.3 & & & & 66$\pm$10 & & & &\ & 2014-Oct-30 & 11.1$\pm$0.5 & 309.9$\pm$4.8 & & & & 40$\pm$28 & & & 43$\pm$25 &\ & 2014-Oct-31 & 12.2$\pm$0.5 & 306.3$\pm$1.7 & & & & 78$\pm$12 & & & 160$\pm$20 &\ & 2014-Nov-25 & 10.6$\pm$1.2 & 307.0$\pm$1.5 & & & & 55$\pm$8 & & & &\ & 2014-Nov-27 & 11.1$\pm$1.2 & 305.6$\pm$1.5 & & & & 54$\pm$8 & & & &\ & 2014-Nov-29 & 9.4$\pm$1.2 & 298.9$\pm$7.8 & & & & 59$\pm$31 & & & &\ & 2014-Nov-30 & 9.8$\pm$1.2 & 293.1$\pm$3.7 & & & & 10$\pm$2 & & & &\ & 2014-Dec-02 & 10.2$\pm$1.1 & 305.3$\pm$0.6 & & 12$\pm$2 & & 47$\pm$7 & & & 120$\pm$20 &\ & 2014-Dec-06 & 11.7$\pm$1.2 & 306.0$\pm$1.4 & & & & 29$\pm$4 & & & &\ & 2014-Dec-09 & 10.7$\pm$1.3 & 303.3$\pm$2.6 & & & & 24$\pm$4 & & & &\ & 2014-Dec-16 & 11.2$\pm$1.2 & 305.1$\pm$2.7 & & & & 14$\pm$3 & & & &\ & 2014-Dec-18 & 12.0$\pm$1.2 & 306.5$\pm$1.3 & & & & 24$\pm$4 & & & &\ & 2015-Jan-10 & 12.9$\pm$1.1 & 307.2$\pm$1.2 & & & & 12$\pm$2 & & & &\ & 2015-Jan-12 & 11.2$\pm$0.6 & 306.3$\pm$0.4 & & & 7.7$\pm$1.2 & 23$\pm$3 & & & 66$\pm$10 &\ & 2015-Jan-14 & 12.4$\pm$1.1 & 307.4$\pm$1.2 & & & & 19$\pm$3 & & & &\ & 2015-Jan-26 & 11.6$\pm$1.1 & 307.6$\pm$2.7 & & & & 7.7$\pm$3.9 & & & &\ & 2015-Mar-29 & 12.7$\pm$1.2 & 311.7$\pm$4.2 & & & & 11$\pm$5 & & & &\ & 2015-Mar-31 & 12.3$\pm$0.4 & 312.0$\pm$0.8 & & & 2.4$\pm$0.4 & 9.8$\pm$1.5 & & & 30$\pm$4 &\ & 2015-Apr-02 & 12.7$\pm$1.0 & 311.5$\pm$0.8 & & & & 9.3$\pm$1.4 & & & 28$\pm$4 &\ & 2015-Apr-04 & 12.7$\pm$0.7 & 318.9$\pm$1.8 & & & & & & & 15$\pm$2 &\ & 2015-Apr-05 & 12.0$\pm$1.2 & 307.0$\pm$2.7 & & & & 6.0$\pm$3.0 & & & &\ & 2015-Apr-09 & 12.4$\pm$1.2 & 311.7$\pm$1.2 & & & & 8.7$\pm$1.3 & & & &\ & 2015-Nov-23 & 11.7$\pm$0.6 & 308.1$\pm$1.1 & & & & 2.9$\pm$0.4 & & & 13$\pm$2 &\ & 2015-Dec-25 & 15.5$\pm$0.6 & 306.4$\pm$0.9 & & 17$\pm$3 & 28$\pm$4 & 38$\pm$6 & & & 85$\pm$13 &\ & 2016-Feb-09 & 17.5$\pm$1.2 & 303.3$\pm$1.3 & & & & 100$\pm$20 & & & &\ & 
2016-Feb-16 & 16.9$\pm$1.1 & 306.7$\pm$1.2 & & & & 75$\pm$13 & & & &\ & 2016-Feb-18 & 14.8$\pm$1.0 & 296.6$\pm$7.2 & & & & 120$\pm$110 & & & &\ & 2016-Feb-21 & 12.6$\pm$1.2 & 304.7$\pm$2.0 & & & & 44$\pm$9 & & & &\ & 2016-Feb-23 & 16.1$\pm$1.1 & 308.1$\pm$1.2 & & & & 67$\pm$13 & & & &\ & 2016-Mar-12 & 12.4$\pm$1.1 & 306.6$\pm$1.4 & & & & 69$\pm$13 & & & &\ & 2016-Mar-14 & 12.4$\pm$0.9 & 305.5$\pm$7.1 & & & & 63$\pm$60 & & & &\ & 2016-May-02 & 13.5$\pm$1.3 & 311.7$\pm$3.3 & & & & 22$\pm$5 & & & &\ & 2016-May-04 & 16.1$\pm$1.2 & 312.2$\pm$1.3 & & & & 26$\pm$5 & & & &\ & 2016-May-09 & 14.9$\pm$1.4 & 310.1$\pm$6.5 & & & & 11$\pm$5 & & & &\ & 2016-May-11 & 13.6$\pm$1.3 & 309.6$\pm$1.6 & & & & 20$\pm$4 & & & &\ & 2016-May-13 & 13.0$\pm$1.3 & 308.4$\pm$1.7 & & & & 21$\pm$4 & & & &\ & 2016-May-15 & 10.4$\pm$0.6 & 314.0$\pm$2.5 & & & & & & & 18$\pm$4 &\ & 2016-May-18 & 13.4$\pm$1.3 & 310.1$\pm$2.2 & & & & 17$\pm$3 & & & &\ & 2016-May-20 & 16.2$\pm$1.3 & 307.3$\pm$1.3 & & & & 21$\pm$4 & & & &\ & 2016-May-25 & 11.7$\pm$1.3 & 310.4$\pm$2.4 & & & & 15$\pm$3 & & & &\ & 2016-May-27 & 13.1$\pm$1.3 & 307.2$\pm$1.4 & & & & 20$\pm$4 & & & &\ & 2016-Jun-03 & 18.3$\pm$1.4 & 309.3$\pm$1.8 & & & & 14$\pm$3 & & & &\ & 2016-Jun-05 & 12.5$\pm$1.3 & 305.0$\pm$1.6 & & & & 16$\pm$3 & & & &\ & 2016-Jun-10 & 16.7$\pm$1.4 & 311.0$\pm$2.6 & & & & 10$\pm$2 & & & &\ & 2016-Jun-12 & 13.5$\pm$1.4 & 308.3$\pm$1.4 & & & & 15$\pm$3 & & & &\ & 2016-Jun-17 & 13.3$\pm$1.5 & 306.7$\pm$3.4 & & & & 11$\pm$2 & & & &\ & 2016-Jun-19 & 14.7$\pm$1.4 & 306.2$\pm$1.5 & & & & 13$\pm$2 & & & &\ & 2016-Jun-28 & 15.5$\pm$1.5 & 309.9$\pm$1.6 & & & & 13$\pm$2 & & & &\ & 2016-Nov-18 & 18.2$\pm$1.6 & 305.8$\pm$1.8 & & & & 8.8$\pm$1.5 & & & &\ & 2016-Dec-23 & 12.4$\pm$0.5 & 306.5$\pm$1.4 & & & & & 7.2$\pm$1.1 & 5.4$\pm$0.8 & 17$\pm$3 &\ & 2017-Jan-03 & 11.9$\pm$0.9 & 308.5$\pm$0.7 & & & & 6.9$\pm$1.0 & 12$\pm$2 & 14$\pm$2 & 28$\pm$4 &\ & 2017-Jan-08 & 11.2$\pm$0.9 & 305.2$\pm$0.9 & & & & 4.5$\pm$0.7 & 6.2$\pm$0.9 & 
5.2$\pm$0.8 & 18$\pm$4 &\ & 2017-Jan-12 & 13.8$\pm$1.4 & 308.1$\pm$1.6 & & & & 4.4$\pm$0.8 & & & &\ & 2017-Jan-24 & 10.6$\pm$0.0 & 306.2$\pm$0.7 & & & & 4.4$\pm$0.7 & & & 17$\pm$3 &\ & 2017-Jan-26 & 15.0$\pm$1.3 & 302.7$\pm$1.4 & & & & 4.6$\pm$0.8 & & & &\ & 2017-Mar-04 & 14.7$\pm$1.3 & 309.0$\pm$3.5 & & & & 22$\pm$5 & & & &\ & 2017-Mar-06 & 17.3$\pm$1.2 & 307.9$\pm$1.2 & & & & 31$\pm$5 & & & &\ & 2017-Mar-29 & 17.8$\pm$1.2 & 305.4$\pm$1.2 & & & & 56$\pm$9 & & & &\ & 2017-Apr-02 & 16.4$\pm$0.9 & 305.6$\pm$6.9 & & & & 38$\pm$28 & & & &\ & 2017-Apr-03 & 16.7$\pm$1.4 & 303.4$\pm$5.5 & & & & 30$\pm$10 & & & &\ & 2017-May-05 & 17.2$\pm$1.3 & 307.6$\pm$2.7 & & & & 52$\pm$10 & & & &\ & 2017-May-07 & 17.0$\pm$1.2 & 311.1$\pm$1.8 & & & & 65$\pm$11 & & & &\ & 2017-May-09 & 16.1$\pm$1.2 & 306.0$\pm$1.4 & & & & 67$\pm$11 & & & &\ & 2017-May-11 & 16.7$\pm$1.0 & 304.3$\pm$6.6 & & & & 81$\pm$52 & & & &\ & 2017-May-14 & 14.3$\pm$1.3 & 308.0$\pm$2.4 & & & & 61$\pm$11 & & & &\ & 2017-May-23 & 16.4$\pm$1.2 & 311.5$\pm$1.5 & & & & 79$\pm$13 & & & &\ & 2017-May-25 & 16.0$\pm$1.2 & 309.6$\pm$1.4 & & & & 82$\pm$14 & & & &\ & 2017-May-27 & 11.9$\pm$0.6 & 314.6$\pm$1.8 & & 27$\pm$4 & 45$\pm$8 & 61$\pm$9 & 100$\pm$20 & 140$\pm$40 & 170$\pm$30 &\ & 2017-May-28 & 11.6$\pm$1.1 & 304.8$\pm$1.7 & & 9.0$\pm$2.8 & 13$\pm$7 & 27$\pm$15 & 57$\pm$23 & 57$\pm$45 & 49$\pm$20 &\ & 2017-May-30 & 15.7$\pm$1.3 & 306.3$\pm$1.7 & & & & 89$\pm$15 & & & &\ & 2017-Jun-03 & 12.7$\pm$1.0 & 312.1$\pm$6.6 & & & & 39$\pm$26 & & & &\ & 2017-Jun-15 & 12.2$\pm$1.3 & 311.1$\pm$1.5 & & & & 73$\pm$12 & & & &\ & 2017-Jun-22 & 13.6$\pm$1.3 & 308.8$\pm$1.8 & & & & 72$\pm$12 & & & &\ & 2017-Jun-24 & 14.4$\pm$1.3 & 308.7$\pm$1.4 & & & & 95$\pm$16 & & & &\ & 2017-Jun-29 & 11.7$\pm$1.4 & 306.7$\pm$2.4 & & & & 47$\pm$8 & & & &\ & 2017-Jul-01 & 12.5$\pm$1.3 & 310.5$\pm$1.3 & & & & 67$\pm$11 & & & &\ & 2017-Jul-06 & 14.2$\pm$1.5 & 308.6$\pm$4.5 & & & & 29$\pm$7 & & & &\ & 2017-Jul-08 & 14.7$\pm$1.4 & 308.6$\pm$1.6 & & & & 40$\pm$7 
& & & &\ & 2017-Jul-31 & 9.3$\pm$1.1 & 310.4$\pm$1.1 & & 4.8$\pm$0.7 & 12$\pm$2 & 27$\pm$4 & 38$\pm$6 & 34$\pm$5 & 120$\pm$20 &\ & 2017-Dec-12 & 10.8$\pm$0.7 & 306.1$\pm$1.4 & & & & 8.2$\pm$1.2 & & & &\ & 2018-Jan-12 & 11.0$\pm$0.4 & 307.8$\pm$0.9 & & & & 10$\pm$2 & & & 40$\pm$6 &\ & 2018-Mar-02 & 14.4$\pm$1.3 & 307.0$\pm$1.3 & & & & 5.3$\pm$0.9 & & & &\ & 2018-Apr-24 & 15.1$\pm$1.1 & 303.1$\pm$1.3 & & & & 2.4$\pm$0.4 & & & &\ & 2018-May-10 & 14.5$\pm$1.1 & 305.7$\pm$1.2 & & & & 3.3$\pm$0.6 & & & &\ & 2018-May-31 & 16.8$\pm$1.4 & 308.4$\pm$4.7 & & & & 21$\pm$5 & & & &\ & 2018-Jun-02 & 17.4$\pm$1.2 & 307.5$\pm$1.4 & & & & 26$\pm$4 & & & &\ & 2018-Jun-06 & 14.9$\pm$1.4 & 309.4$\pm$3.8 & & & & 24$\pm$7 & & & &\ & 2018-Jun-15 & 15.7$\pm$1.5 & 306.2$\pm$4.6 & & & & 43$\pm$18 & & & &\ & 2018-Jun-18 & 16.1$\pm$1.3 & 309.9$\pm$2.1 & & & & 40$\pm$6 & & & &\ & 2018-Jun-22 & 15.5$\pm$1.0 & 303.8$\pm$6.0 & & & & 37$\pm$19 & & & &\ & 2018-Jun-25 & 14.2$\pm$1.3 & 304.3$\pm$2.7 & & & & 37$\pm$6 & & & &\ & **Average** & **12.6** & **307.5** &\ Shoshu Patera & 2017-Jun-15 & -17.6$\pm$1.3 & 322.9$\pm$1.9 & & & & 2.7$\pm$0.5 & & & &\ & **Average** & **-17.6** & **322.9** &\ Tol-Ava Patera & 2013-Sep-05 & 0.9$\pm$1.8 & 325.9$\pm$2.3 & & & & 7.7$\pm$1.2 & & & &\ & 2013-Sep-07 & 0.5$\pm$1.3 & 324.1$\pm$1.4 & & & & 8.4$\pm$1.5 & & & &\ & 2014-Dec-06 & 1.0$\pm$1.2 & 327.0$\pm$1.2 & & & & 0.87$\pm$0.44 & & & &\ & 2015-Jan-14 & 0.6$\pm$1.1 & 327.3$\pm$1.1 & & & & 0.99$\pm$0.49 & & & &\ & **Average** & **0.7** & **326.5** &\ PV170 & 2014-Dec-02 & -50.0$\pm$1.1 & 324.9$\pm$3.2 & & 19$\pm$3 & & 12$\pm$2 & & & 6.8$\pm$1.0 &\ & 2014-Dec-06 & -47.9$\pm$2.0 & 327.8$\pm$2.1 & & & & 7.2$\pm$1.1 & & & &\ & 2015-Dec-25 & -46.0$\pm$0.9 & 329.3$\pm$1.4 & & & 2.1$\pm$0.4 & 2.5$\pm$0.4 & & & 4.0$\pm$0.6 &\ & **Average** & **-47.9** & **327.8** &\ Fuchi Patera & 2014-Feb-10 & 28.3$\pm$1.1 & 328.7$\pm$1.1 & & & & 1.1$\pm$0.4 & & & &\ & **Average** & **28.3** & **328.7** &\ Surt & 2014-Feb-10 & 44.2$\pm$1.7 
& 336.4$\pm$2.4 & & & & & & & 0.99$\pm$0.5 &\ & 2015-Jan-12 & 44.6$\pm$2.0 & 331.7$\pm$2.1 & & & & & & & 1.5$\pm$0.2 &\ & **Average** & **44.4** & **334.1** &\ Pfu1063 & 2014-Oct-30 & 38.1$\pm$0.9 & 358.5$\pm$2.4 & & & & 2.4$\pm$0.7 & & & 4.4$\pm$0.7 &\ & 2015-Jan-12 & 41.8$\pm$0.8 & 352.9$\pm$0.8 & & & & & & & 3.1$\pm$0.5 &\ & 2015-Apr-02 & 41.7$\pm$1.2 & 357.7$\pm$1.7 & & & & & & & 1.2$\pm$0.4 &\ & **Average** & **41.7** & **357.7** &\ Paive Patera & 2015-Apr-02 & -42.1$\pm$1.5 & 359.6$\pm$1.6 & & & & & & & 0.99$\pm$0.35 &\ & 2017-May-27 & -43.7$\pm$0.7 & 357.0$\pm$1.1 & & & & & & 2.1$\pm$0.5 & &\ & **Average** & **-42.9** & **358.3** &\ [^1]: https://www2.keck.hawaii.edu/inst/tda/TwilightZone.html
---
abstract: 'Canonical commutation relations for Bateman-Hillion type nondispersive wave packets are constructed.'
author:
- 'M.V.Altaisky[^1], N.E.Kaputkina[^2]\'
date: 'Aug 21, 2012'
title: On quantization of nondispersive wave packets
---

Introduction
============

The packet-like asymptotic solutions of the wave equation $$(\d^2_{x^2} + \d^2_{y^2} +\d^2_{z^2} -\d^2_{t^2})u(x,y,z,t)=0 \label{W:eq}$$ trace their origin to Bateman's work on conformal symmetry [@Bateman1909; @Bateman1955]. Historically, such solutions were first derived approximately, in terms of the parabolic equation of diffraction theory, and are related to paraxial optical beams; see [@Kiselev2007eng] for a review. Interest in the localized solutions of the wave equation is encouraged by the progress in the generation of ultra-short laser pulses [@BK2000], in view of the fact that a large variety of optical problems can be solved in terms of a scalar equation. The packet-like localized solutions of the equation have the general form [@Hillion1992w]: $$u=\frac{f(\theta)}{\sqrt{2}\xi_+ -\imath\varepsilon}, \quad \theta = \sqrt{2}\xi_- + \frac{x^2+y^2}{\sqrt{2}\xi_+ - \imath\varepsilon}, \label{bts:eq}$$ where $f$ is an arbitrary localized function and $$\xi_\pm = \frac{z\pm t}{\sqrt{2}} \label{LC:eq}$$ are the light-cone coordinates, describing a particle moving at the speed of light ($c=1$) along the $z$ axis, with $\varepsilon>0$. Their particular form with $f(\theta)=e^{\imath q \theta}$ was called 'quasiphotons' by Babich and Ulin [@BU1982] or, more generally, 'focus wave modes' [@Brittingham1983] – waves that remain focused in the $\xi_-$ coordinate. Although the plane-wave solutions $\exp(\imath(\vk\vx-\omega t))$ of the wave equation have laid the basis of quantum electrodynamics [@Feynman1950; @BS1959], the packet-like solutions of this type were considered only as a special case of classical electrodynamics.
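As a quick consistency check (ours, not part of the paper), the Bateman-Hillion ansatz can be verified symbolically. The sketch below takes the quasiphoton profile $f(\theta)=e^{\imath q\theta}$ and confirms with SymPy that the resulting $u$ solves the wave equation:

```python
import sympy as sp

# Symbolic check of the Bateman-Hillion ansatz for the quasiphoton
# profile f(theta) = exp(i*q*theta).  All variable names are ours.
x, y, z, t, q = sp.symbols('x y z t q', real=True)
eps = sp.Symbol('varepsilon', positive=True)

xi_p = (z + t) / sp.sqrt(2)          # light-cone coordinates
xi_m = (z - t) / sp.sqrt(2)
D = sp.sqrt(2) * xi_p - sp.I * eps
theta = sp.sqrt(2) * xi_m + (x**2 + y**2) / D

u = sp.exp(sp.I * q * theta) / D     # the ansatz with f = exp(i q theta)

# d'Alembert operator with signature (+,+,+,-)
box_u = (sp.diff(u, x, 2) + sp.diff(u, y, 2)
         + sp.diff(u, z, 2) - sp.diff(u, t, 2))
assert sp.simplify(box_u / u) == 0   # u solves the wave equation
```

The same cancellation goes through for any twice-differentiable $f$: the transverse contributions cancel the light-cone ones term by term.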
Their importance rose after the seminal paper of Brittingham [@Brittingham1983], who proved that solutions of this type – the focus wave modes – ([*i*]{}) satisfy the homogeneous Maxwell's equations, ([*ii*]{}) are continuous and non-singular, ([*iii*]{}) have a three-dimensional pulse structure, ([*iv*]{}) are non-dispersive for all time, and ([*v*]{}) move at the velocity of light in straight lines. This stimulated a series of works intended for practical applications of localized electromagnetic pulses: long-range energy transfer without dispersion, electromagnetic and acoustic bullets, Gaussian wave beams in optics and geophysics, and optical information processing in microcavities [@Kiselev1983; @MP1990; @Overfelt1991; @KPC2012]. The application of Bateman-Hillion-like solutions, or weakly localized wave packets, to quantum field theory was restricted to formal studies of the localized solutions of the Klein-Gordon and Dirac equations [@SBZ1990; @PF2001eng]. In contrast to solitons – solutions localized due to nonlinearity, which have long been studied in quantum field theory, see e.g. [@Neveu1977] – neither the Bateman-Hillion solutions of the wave equation nor the Moses-Prosser wave bullets have ever been considered as operator-valued functions or subjected to canonical commutation relations. In the present paper we consider the nondispersive solutions of the wave equation as particle-like solutions of the field equations, describing quantum particles subject to canonical commutation relations. This quantization condition results in certain restrictions on the amplitude and the width of the pulse wave, which turn it into a quantum particle. In the next sections we consider this problem for free scalar field theory models in $1+1$ and $3+1$ dimensions.
$\bm{d=2}$
==========

The nondispersive localized wave solutions in three plus one dimensions are generalizations of the traveling wave solutions of the wave equation in one plus one dimension: $$(\d^2_{z^2}-\d^2_{t^2})u(z,t)=0 \label{w11:eq},$$ which is equivalent to the equation $ \d^2_{\xi_+\xi_-} u(\xi_+,\xi_-)=0$. The solution of this equation is a superposition of two independent solutions traveling right and left along the $z$ axis: $$u = f(\xi_-) + g(\xi_+),$$ where $f$ and $g$ are arbitrary functions. In quantum field theory the field $\phi$, which satisfies the massless field equation, is considered as an operator-valued function $\phi = \phi(z,t)$. Using the Fourier transform, the field $\phi$ can be cast as a sum of the positive and the negative frequency components $$\phi^{\pm}(x) = \int_{k_0>0} e^{\pm \imath k x} \delta(k^2-m^2) \phi(\pm k) dk .$$ In the case of a massless field in two dimensions, $x=(z,t)$, this gives $$\phi(z,t) = \int \frac{dk}{2\pi 2 \omega_k} \left[ e^{-\imath \omega_k t + \imath k z} \hat u(k) + e^{\imath \omega_k t - \imath k z} \hat{u}^\dagger (k) \right], \label{lf:eq}$$ where the integration over $\frac{dkd\omega}{(2\pi)^2}$ reduces to the one-dimensional integration $\frac{dk}{2\pi 2 \omega_k}$ by means of the mass-shell delta-function $\delta(\omega^2-k^2)$, which results in $\omega_k=|k|$. The operators $\hat u(k)$ and $\hat{u}^\dagger(k)$ are referred to as the annihilation and creation operators for the quanta with momentum $k$. They satisfy the commutation relations $$[\hat u(k), \hat{u}^\dagger(k') ] = 2\pi 2\omega_k \delta(k-k') \label{cr1}.$$ This equation is the basis of field quantization. However, its physical interpretation leads to a counterintuitive result: if a photon is described by a plane wave, then the absorption of a photon by a photographic plate should expose the whole plate, for the plane wave is present everywhere. In reality the exposure is very local.
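The d'Alembert form of the general solution can be confirmed directly. A minimal SymPy sketch (ours, not from the paper) checks that $f(\xi_-)+g(\xi_+)$ annihilates the $1+1$ wave operator for arbitrary $f$ and $g$:

```python
import sympy as sp

# Check that u = f(xi_-) + g(xi_+) solves the 1+1 wave equation
# for arbitrary smooth f and g.
z, t = sp.symbols('z t', real=True)
f, g = sp.Function('f'), sp.Function('g')

xi_m = (z - t) / sp.sqrt(2)   # right-moving argument
xi_p = (z + t) / sp.sqrt(2)   # left-moving argument

u = f(xi_m) + g(xi_p)
wave = sp.diff(u, z, 2) - sp.diff(u, t, 2)
assert sp.simplify(wave) == 0
```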
This prompts us to use some localized function, traveling at the speed of light, instead of plane waves. As for classical $c$-valued fields, the localization is achieved by replacing the plane wave by the Fourier image of some localized function. This leads to the operator-valued solution of the massless field equation $$\phi(z,t) = \int \frac{dk}{2\pi 2 \omega_k} \Bigl[ e^{-\imath \omega_k t + \imath k z} c(k)\hat u(k) + e^{\imath \omega_k t - \imath k z} c^*(k) \hat{u}^\dagger (k) \Bigr], \label{llf:eq}$$ where we assume $c^*(k)=c(-k)$. The canonical momentum, conjugate to the field density $\phi(z,t)$, is $\pi(z,t) = \frac{\d\phi}{\d t}$: $$\pi(z,t) = -\frac{\imath}{2} \int \frac{dk}{2\pi} \Bigl[ e^{-\imath \omega_k t + \imath k z} c(k)\hat u(k) - e^{\imath \omega_k t - \imath k z} c^*(k) \hat{u}^\dagger (k) \Bigr]. \label{md:eq}$$ Since we consider the wave packet $\phi$ as a quantum particle, we can introduce the operator of the “mean field coordinate” $\hat{Q}(t)$ and the total momentum $\hat{P}(t)$ of the wave packet: $$\hat{Q}(t) = \frac{1}{V}\int \phi(z,t) dz, \quad \hat{P}(t) = \int \pi(z',t)dz',$$ where $V$ is the volume occupied by the field. Using the commutation relations for the Fourier modes, we get the commutator $$[\hat Q,\hat P] = \frac{\imath}{V} \int dzdz' e^{\imath k(z-z')} |c(k)|^2 \frac{dk}{2\pi}.$$ Thus, to ensure the canonical commutation relation for the wave packet, $$[\hat Q,\hat P]=\imath \label{ccr}$$ we need to fulfil the constraint $$\int \Lambda(z-z') \frac{dzdz'}{V} = 1,$$ where $$\Lambda(z) = \int e^{\imath k z} |c(k)|^2 \frac{dk}{2\pi}.$$ Using the symmetry $|c(k)|^2= |c(-k)|^2$, the constraint that ensures canonical commutation relations for the operator-valued wave packet can be written in the form $$2\int_0^\infty dz \int_{-\infty}^\infty \cos(kz) |c(k)|^2\frac{dk}{2\pi} = 1.$$ For the Gaussian wave packet with $c(k) = A e^{-k^2\sigma^2/2}$ this leads to the constraint $A^2=1$, independent of $\sigma$.
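The $\sigma$-independence of the Gaussian constraint can be illustrated numerically. The sketch below (ours; the grid sizes are arbitrary choices) builds $\Lambda(z)$ for $c(k)=Ae^{-k^2\sigma^2/2}$ and checks that $\int\Lambda(z)\,dz = |c(0)|^2 = A^2$ for several widths:

```python
import numpy as np

# Numerical check (ours, not from the paper) that the d=2 quantization
# constraint reduces to A^2 = 1, independent of the width sigma.
A = 1.0
k, dk = np.linspace(-10.0, 10.0, 2001, retstep=True)
z, dz = np.linspace(-10.0, 10.0, 2001, retstep=True)
cos_zk = np.cos(np.outer(z, k))

for sigma in (0.5, 0.7, 1.3):
    ck2 = (A * np.exp(-k**2 * sigma**2 / 2))**2      # |c(k)|^2
    # Lambda(z) = int e^{ikz} |c(k)|^2 dk/(2 pi); |c|^2 is even -> cosine
    Lam = (cos_zk @ ck2) * dk / (2 * np.pi)
    total = Lam.sum() * dz                           # int Lambda(z) dz
    assert abs(total - A**2) < 1e-6                  # equals |c(0)|^2 = A^2
```

The Gaussian decays fast enough that truncating both grids at $\pm 10$ introduces negligible error; the asserted value is the same for every $\sigma$, as the analytic result requires.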
$\bm{d=4}$
==========

Without loss of generality we can consider the Klein-Gordon equation in $3+1$ dimensions $$(\d^2_{x^2} + \d^2_{y^2} +\d^2_{z^2} -\d^2_{t^2}-m^2)u(x,y,z,t)=0 \label {KG:eq}$$ with $m$ set to zero for the massless field. The Fourier image of the localized solution of the Klein-Gordon equation [@PF2001eng] can be written as $$g(\sqrt{2}k_+) = \frac{B}{k_+^\delta} \tilde f \left(\frac{k_+}{\sqrt{2}} \right) \delta(k^2-m^2) , \quad k_+ = \frac{k_z+\omega_k}{\sqrt{2}},$$ where $B$ is a constant, and $\delta=0$ for the massless field and $\delta=\frac{1}{2}$ for the massive field; see Appendices \[ml:sec\] and \[apm:sec\]. This leads to the operator-valued solution of the field equation $$\begin{aligned} \nonumber \phi(\vx,t) &=& \int \frac{d^3\vk}{(2\pi)^3 2 \omega_k} \Bigl[ e^{-\imath \omega_k t + \imath \vk \vx} g(k_z+\omega_k) \hat{u}(\vk) \\ &+& e^{\imath \omega_k t - \imath \vk \vx} g(-k_z-\omega_k ) \hat{u}^\dagger (\vk) \Bigr], \label{llf4:eq}\end{aligned}$$ where the annihilation and creation operators satisfy the commutation relations $$[\hat u(\vk), \hat{u}^\dagger(\vk') ] = (2\pi)^3 \cdot 2\omega_k \cdot \delta(\vk-\vk'), \quad \omega_k = \sqrt{\vk^2+m^2} \label{cr4}.$$ The canonical momentum, conjugate to the field density $\phi(\vx,t)$, is $\pi(\vx,t) = \frac{\d\phi}{\d t}$: $$\begin{aligned} \nonumber \pi(\vx,t) &=& -\frac{\imath}{2} \int \frac{d^3k}{(2\pi)^3} \Bigl[ e^{-\imath \omega_k t + \imath \vk \vx} g(k_z+\omega_k)\hat u(\vk) \\ &-& e^{\imath \omega_k t - \imath \vk \vx} g(-k_z-\omega_k) \hat{u}^\dagger (\vk) \Bigr]. \label{md3:eq}\end{aligned}$$ The “mean field coordinate” $\hat{Q}(t)$ and the total momentum $\hat{P}(t)$ of the wave packet are $$\hat{Q}(t) = \frac{1}{V}\int \phi(\vx,t) d^3\vx, \quad \hat{P}(t) = \int \pi(\vx',t)d^3\vx'.$$ The constraint results in $$\begin{aligned} \nonumber 1 &=& \int d^3\vx e^{\imath \vk\vx} g(k_z+\omega_k)g(-k_z-\omega_k) \frac{d^3\vk}{(2\pi)^3} \\ &=& \left.
g(k_z+\omega_k)g(-k_z-\omega_k)\right|_{\vk=0}.\label{cc4}\end{aligned}$$ For the Gaussian packet $g(k)=Ae^{-\frac{k^2\sigma^2}{2}}$ this gives the normalization constraint $$A^2 e^{-\sigma^2 m^2} = 1.$$

Conclusions
===========

The idea of constructing nondispersive wave packets, which follow the trajectory of a classical particle, from a coherent superposition of harmonic oscillators can be traced back to Schrödinger [@Schrodinger1926]. On classical scales such packets of various shapes can be created by combining electromagnetic pulses of equispaced frequencies. In quantum physics, where the energy levels of the atoms used for laser pulse generation are not equispaced, the known way to suppress wave packet dispersion is to use external electromagnetic fields [@KE1996; @BDZ2002]. The most intensive experimental studies have been performed on the creation of Trojan wave packets – nondispersive wave packets of electron density on circular orbits [@Wyker2011]. The creation of a nondispersive packet on a circular orbit is achieved using circularly polarized laser beams and is technically sophisticated. However, the simpler problem of creating a nondispersive wave packet moving linearly, which has been studied in classical electrodynamics since Brittingham [@Brittingham1983] and implemented experimentally, appears not to have been addressed in the quantum case, despite the evident advantage that no external field is needed to suppress dispersion. In the present paper, using a simple scalar model, we constructed the quantization condition for such packets in the purely relativistic case. If such a mechanism exists in quantum electrodynamics, it may be relevant to coherent electromagnetic energy transfer at mesoscopic scales, along with excitons and other nonrelativistic mechanisms, see e.g. [@ETC2007].
In the future we hope to apply the localized wave packets, subjected to the quantization constraints described in this paper, to the scattering problems of localized quantum particles described by quantum field theory methods [@AltaiskyPRD10]. In particular, the existence of the localized focus wave modes [@Brittingham1983] in classical electrodynamics may be of interest for the quantum theory of gauge fields [@FGL2007].

Acknowledgement {#acknowledgement .unnumbered}
===============

The research was supported in part by the Program of Creation and Development of the National University of Science and Technology “MISiS” and by the RFBR project 11-02-00604a. The authors have benefited from discussions of these results with Drs. A.L.Kataev, A.P.Kiselev, M.V.Perel and O.V.Teryaev.

H. Bateman, The conformal transformations of a space of four dimensions and their applications to geometrical optics, Proc. Lond. Math. Soc. 7 (1909) 70–89.

H. Bateman, The mathematical analysis of the electrical and optical wave-motion on the basis of Maxwell's equations, Dover, 1955.

A. Kiselev, Localized light waves: paraxial and exact solutions of the wave equation (a review), Opt. Spectrosc. 102 (2007) 661.

T. Brabec, F. Krausz, Intense few-cycle laser fields: Frontiers of nonlinear optics, Rev. Mod. Phys. 72 (2) (2000) 545–591.

P. Hillion, Nonhomogeneous nondispersive waves: classification, Wave Motion 16 (1992) 81–87.

V. Babich, V. Ulin, Complex space-time ray method and 'quasiphotons', J. Sov. Math. 20 (1) (1982) 1749–1753.

J. Brittingham, Focus waves modes in homogeneous Maxwell's equations: Transverse electric mode, J. Appl. Phys. 54 (3) (1983) 1179–1189.

R. Feynman, Mathematical formulation of the quantum theory of electromagnetic interaction, Phys. Rev. 80 (3) (1950) 440–457.

N. Bogoliubov, D. Shirkov, Introduction to the theory of quantized fields, Interscience, 1959.

A.
Kiselev, Modulated Gaussian beams, Radiophys. Quant. Electron. 26 (5) (1983) 755–761.

H. Moses, R. Prosser, Acoustic and electromagnetic bullets: Derivation of new exact solutions of the acoustic and Maxwell's equations, SIAM J. Appl. Math. 50 (5) (1990) 1325–1340.

P. Overfelt, Bessel-Gauss pulses, Phys. Rev. A 44 (4) (1991) 3941–3947.

A. Kiselev, A. Plachenov, P. Chamorro-Posada, Nonparaxial wave beams and packets with general astigmatism, Phys. Rev. A 85 (2012) 043835.

A. Shaarawi, M. Besieris, R. W. Ziolkowski, A novel approach to the synthesis of nondispersive wave packet solutions to the Klein-Gordon and Dirac equations, J. Math. Phys. 31 (10) (1990) 2511–2519.

M. Perel, I. Fialkovsky, Exponentially localized solutions of the Klein-Gordon equation, J. Math. Sci. 117 (2) (2003) 3994–4000.

A. Neveu, Quantization of non-linear systems, Rep. Progr. Phys. 40 (1977) 709.

E. Schrödinger, The continuous transition from micro- to macro-mechanics, Naturwissenschaften 14 (1926) 664–666.

M. Kalinski, J. H. Eberly, Trojan wave packets: Mathieu theory and generation from circular states, Phys. Rev. A 53 (1996) 1715–1724.

A. Buchleitner, D. Delande, J. Zakrzewski, Non-dispersive wave packets in periodically driven quantum systems, Physics Reports 368 (5) (2002) 409–547.

B. Wyker, S. Ye, F. B. Dunning, S. Yoshida, C. O. Reinhold, J. Burgdörfer, Creating and transporting Trojan wave packets, Phys. Rev. Lett. 108 (2012) 043001.

G. Engel, T. Calhoun, E. Read, T.-K. Ahn, T. Mančal, Y.-C. Cheng, R. Blankenship, G. Fleming, Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems, Nature 446 (05678) (2007) 782–786.

M. Altaisky, Quantum field theory without divergences, Phys. Rev. D 81 (2010) 125003.

K. Fukushima, F. Gelis, L. McLerran, Initial singularity of the little bang, Nucl. Phys. A 786 (2007) 107–130.
The Bateman-Hillion solution of the wave equation \[bat:sec\]
=============================================================

The Bateman-Hillion solution of the wave equation can be obtained by the change of variables (§4.2.1 of [@Kiselev2007eng]): $$u = \int_{-\infty}^\infty \hat{u}e^{\imath \alpha a} da, \label{bsub}$$ where $$\hat{u} = \hat{u}(x,y,\alpha,\beta), \quad \alpha = z-t, \quad \beta = z+t.$$ The new function $\hat{u}$ satisfies the parabolic equation $$\frac{\d^2 \hat{u}}{\d x^2} + \frac{\d^2 \hat{u}}{\d y^2} + 4\imath a \frac{\d \hat{u}}{\d \beta} = 0,$$ which has the solution $$\hat{u} = \beta^{-1} \exp(\imath a (x^2+y^2)/\beta) \hat{f}(a) \label{bsol},$$ where $\hat{f}(a)$ is an arbitrary function. The Bateman-Hillion solution is obtained by substituting into .

Fourier image of the massless field solution \[ml:sec\]
=======================================================

To evaluate the Fourier image of the solution of the wave equation we use the light-cone variables $$\xi_{\pm} = \frac{z\pm t}{\sqrt{2}}, \quad k_\pm = \frac{k_z\pm\omega}{\sqrt{2}}.$$ In these variables $$\begin{aligned} \nonumber \tilde{\phi}(k_x,k_y,k_+,k_-) &=& \int dx dy d\xi_- d\xi_+ \frac{f(\theta)}{\sqrt{2}\xi_+} e^{-\imath (k_x x + k_y y + k_+ \xi_- + k_- \xi_+)} \\ \nonumber &=& \frac{1}{2} \int d\theta d\xi_+ dx dy \frac{f(\theta)}{\xi_+} \exp\Bigl[ -\imath \bigl(k_x x + k_y y + k_- \xi_+ \\ && + \frac{k_+}{\sqrt{2}} \left[\theta - \frac{x^2+y^2}{\sqrt{2}\xi_+} \right] \bigr) \Bigr] .\end{aligned}$$ We have the product of two identical integrals in the $x$ and $y$ transverse coordinates $$\int_{-\infty}^\infty e^{\imath k x^2 - \imath k_x x} dx \int_{-\infty}^\infty e^{\imath k y^2 - \imath k_y y} dy = \frac{\imath\pi}{k} e^{-\imath \frac{k_\perp^2}{4k}} \label{fi2}$$ with $k=\frac{k_+}{2\xi_+}$. The remaining integration in $\xi_+$ is performed by the change of variable $r=\xi_+/k_+$.
This gives $$\tilde{\phi}(k_x,k_y,k_+,k_-) = 2\imath \pi^2 \tilde{f} \left( \frac{k_+}{\sqrt{2}}\right) \delta \left(\frac{k_\perp^2}{2} + k_+ k_- \right) \label{mlf:eq}.$$

Fourier image of the massive solution \[apm:sec\]
=================================================

To evaluate the Fourier image of the localized solution of the Klein-Gordon equation, we integrate the massless solution in 5d over the extra variable $z'$ [@PF2001eng]: $$\begin{aligned} \tilde{\phi}_m(k_x,k_y,k_+,k_-) = \int e^{-\imath (k_x x + k_y y + m z' + k_+ \xi_- + k_- \xi_+)} \times \\ \nonumber \times \frac{f(\theta)}{(\sqrt{2}\xi_+)^{3/2}} dx dy dz' d\xi_- d\xi_+ = \int e^{-\imath (k_x x + k_y y + m z' + k_- \xi_+)} \times \\ \times e^{\imath\frac{k_+}{\sqrt{2}} \left(\theta - \frac{x^2+y^2+{z'}^2}{\sqrt{2}\xi_+} \right) } \frac{f(\theta)}{(\sqrt{2}\xi_+)^{3/2}} \frac{d\theta}{\sqrt{2}} d\xi_+ dx dy dz'. \end{aligned}$$ The product of three identical integrals in $x,y,z'$, cf. Eq. \[fi2\], is equal to $$\frac{(2\pi)^{3/2}}{\sqrt{\imath}}\left(\frac{\xi_+}{k_+} \right)^{3/2} e^{-\frac{\imath}{2}(k_\perp^2+m^2) \frac{\xi_+}{k_+}}.$$ Introducing the new variable $r= \frac{\xi_+}{k_+}$ and integrating over it, we get $$\tilde{\phi}_m(k_x,k_y,k_+,k_-) = \frac{ \tilde{f}\left( \frac{k_+}{\sqrt{2}} \right) (2\pi)^{5/2} }{ \sqrt{2\imath k_+} 2^{3/4} } \delta \left(\frac{k_\perp^2+m^2}{2}+k_+ k_-\right).$$

[^1]: Space Research Institute RAS, Profsoyuznaya 84/32, Moscow, 117997, Russia; and Joint Institute for Nuclear Research, Joliot-Curie 6, Dubna, 141980, Russia; e-mail: altaisky@mx.iki.rssi.ru

[^2]: National University of Science and Technology “MISiS”, Leninsky prospect 4, Moscow, 119049, Russia; e-mail: nataly@misis.ru
---
abstract: 'We report on a strictly differential line-by-line analysis of high quality UVES spectra of bright giants in the metal-poor globular cluster NGC 6752. We achieved high precision differential chemical abundance measurements for Fe, Na, Si, Ca, Ti, Cr, Ni, Zn, Y, Zr, Ba, La, Ce, Pr, Nd, Sm, Eu and Dy with uncertainties as low as $\sim$0.01 dex ($\sim$2%). We obtained the following main results. (1) The observed abundance dispersions are a factor of $\sim$2 larger than the average measurement uncertainty. (2) There are positive correlations, of high statistical significance, between all elements and Na. (3) For any pair of elements, there are positive correlations of high statistical significance, although the amplitudes of the abundance variations are small. Removing abundance trends with effective temperature and/or using a different pair of reference stars does not alter these results. These abundance variations and correlations may reflect a combination of ($a$) He abundance variations and ($b$) inhomogeneous chemical evolution in the pre- or proto-cluster environment. Regarding the former, the current constraints on $\Delta Y$ from photometry likely preclude He as being the sole explanation. Regarding the latter, the nucleosynthetic source(s) must have synthesised Na, $\alpha$, Fe-peak and neutron-capture elements, in constant amounts for species heavier than Si; no individual object can achieve such nucleosynthesis. We speculate that other, if not all, globular clusters may exhibit comparable abundance variations and correlations to NGC 6752 if subjected to a similarly precise analysis.'
author:
- |
David Yong$^{1}$[^1], Jorge Meléndez$^2$, Frank Grundahl$^3$, Ian U. Roederer$^4$, John E. Norris$^1$, A. P. Milone$^1$, A. F. Marino$^1$, P. Coelho$^5$, Barbara E. McArthur$^6$, K. Lind$^7$, R.
Collet$^1$ and Martin Asplund$^1$.\
$^{1}$Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia\
$^{2}$Departamento de Astronomia do IAG/USP, Universidade de Sao Paulo, Rua do Matao 1226, Sao Paulo, 05508-900, SP, Brasil\
$^{3}$Stellar Astrophysics Centre, Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C, Denmark\
$^{4}$Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101, USA\
$^{5}$Núcleo de Astrofísica Teórica, Universidade Cruzeiro do Sul, R. Galvão Bueno 868, Liberdade 01506-000, São Paulo, Brazil\
$^{6}$McDonald Observatory, University of Texas, Austin, TX 78712, USA\
$^{7}$University of Cambridge, Madingley Road, Cambridge, CB3 0HA, UK
title: 'High Precision Differential Abundance Measurements in Globular Clusters: Chemical Inhomogeneities in NGC 6752[^2]'
---

\[firstpage\]

Stars: abundances – Galaxy: abundances – globular clusters: individual: NGC 6752

Introduction
============

Understanding the origin of the star-to-star abundance variations of the light elements in globular clusters is one of the major challenges confronting stellar evolution, stellar nucleosynthesis and chemical evolution. Arguably the first evidence for chemical abundance inhomogeneity in a globular cluster was the discovery of a CN strong star in M13 by @popper47. A large number of subsequent studies have confirmed the star-to-star variation in the strength of the CN molecular bands in a given globular cluster, and these results have been extended to star-to-star abundance variations for the light elements – Li, C, N, O, F, Na, Mg and Al (e.g., see reviews by @smith87, @kraft94 and @gratton04 [@gratton12]).
In light of the discovery of abundance variations in unevolved stars (e.g., @cannon98 [@gratton01; @ramirez02; @ramirez03]), the consensus view is that these light element abundance variations are attributed to a proto-cluster environment in which the gas was of an inhomogeneous composition. The interstellar medium from which some of the stars formed included material processed through hydrogen-burning at high temperatures. The source of that material and the nature of the nucleosynthesis, however, remain highly contentious with intermediate-mass asymptotic giant branch stars, fast rotating massive stars and massive binaries being the leading candidates (e.g., @fenner04, @ventura05, @decressin07, @demink09, @marcolini09). Recent discoveries of complex structure in colour-magnitude diagrams reveal that most, if not all, globular clusters host multiple populations; the evidence consists of multiple main sequences, subgiant branches, red giant branches and/or horizontal branches in Galactic (e.g., see @piotto09 for a review) and also extragalactic globular clusters (e.g., @mackey07 [@milone09]). When using appropriate photometric filters, all globular clusters show well-defined sequences with distinct chemical abundance patterns [@milone12]. These multiple populations can be best explained by different ages and/or chemical compositions. The sequence of events leading to the formation of multiple population globular clusters is not well understood (e.g., @dercole08 [@bekki11; @conroy11]). 
Although the census and characterization of the Galactic globular clusters remains incomplete, they may be placed into three general categories[^3]: ($i$) those that exhibit only light element abundance variations, which include NGC 6397, NGC 6752 and 47 Tuc (e.g., @gratton01 [@yong05; @dorazi10; @lind11; @campbell13]), ($ii$) those that exhibit light element abundance variations and neutron-capture element abundance dispersions such as M15 (e.g., @sneden97 [@sneden00; @sobeck11]) and ($iii$) those that exhibit light element abundance variations as well as significant abundance dispersions for Fe-peak elements[^4] such as $\omega$ Cen, M22, M54, NGC 1851, NGC 3201 and Terzan 5 (e.g., @norris95 [@yong081851; @marino09; @marino11; @carretta10; @johnson10; @villanova10; @carretta11; @origlia11; @roederer11; @alvesbrito12; @simmerer13]). At this stage, we do not attempt to classify a particularly unusual system like NGC 2419 [@cohen10; @cohen11; @cohen12; @mucciarelli12]. Given the surprisingly large star-to-star variations in element abundance ratios in a given cluster, how chemically homogeneous are the “well-behaved” elements in the “normal” globular clusters (i.e., clusters in category ($i$) above)? The answer to this question has important consequences for testing model predictions, setting constraints on the polluters and understanding the origin and evolution of globular clusters. @sneden05 considered the issue of cluster abundance accuracy limits and selected the \[Ni/Fe\] ratio as an example. This pair of elements was chosen as they present numerous spectral lines in the “uncomplicated yellow-red region” of the spectrum and share “common nucleosynthetic origins in supernovae”. @sneden05 noted that the dispersion in the \[Ni/Fe\] ratio in a cluster was $\sim$0.06 dex and appeared to show “little apparent trend as a function of the number of stars observed in a survey or of year of publication”. 
There are two possible reasons for the apparent limit in the $\sigma$\[Ni/Fe\] ratio. Perhaps clusters possess a single \[Ni/Fe\] ratio and the dispersion reflects the measurement uncertainties. Alternatively, globular clusters are chemically homogeneous in the \[Ni/Fe\] ratio at the $\sim$0.06 dex level. Bearing in mind this apparent limit in the \[Ni/Fe\] dispersion, in order to answer the question posed above, we require the highest possible precision when measuring chemical abundances. A number of recent studies have achieved precision in chemical abundance measurements as low as 0.01 dex (e.g., @melendez09 [@melendez12], @alvesbrito10, @nissen10 [@nissen11], @ramirez10 [@ramirez12]). These results were obtained by using ($i$) high quality spectra (R $\ge$ 60,000 and signal-to-noise ratios S/N $\ge$ 200 per pixel), ($ii$) a strictly differential line-by-line analysis and ($iii$) a well-chosen sample of stars covering a small range in stellar parameters (effective temperature, surface gravity, metallicity). Application of similar analysis techniques to high quality spectra of stars in globular clusters offers the hope that high precision chemical abundance measurements (at the $\sim$0.01 dex level) can also be obtained. To our knowledge, the highest precision chemical abundance measurements in globular clusters to date, at the $\sim$0.04 dex level, include @yong05, @gratton05, @carretta09 and @melendez09m71. The aim of the present paper is to achieve high precision abundance measurements in the globular cluster NGC 6752 and to use these data to study the chemical enrichment history of this cluster.

OBSERVATIONS AND ANALYSIS {#sec:obs}
=========================

Target Selection and Spectroscopic Observations
-----------------------------------------------

The targets for this study were taken from the $uvby$ photometry by @grundahl99. 
The sample consists of 17 stars located near the tip of the red giant branch (hereafter RGB tip stars) and 21 stars located at the bump in the luminosity function along the RGB (hereafter RGB bump stars). The list of targets can be found in Table \[tab:param\]. Observations were performed using the Ultraviolet and Visual Echelle Spectrograph (UVES; @dekker00) on the 8.2m Kueyen (VLT/UT2) telescope at Cerro Paranal, Chile. The RGB tip stars were observed at a resolving power of R = 110,000 and S/N $\ge$ 150 per pixel near 5140Å while the RGB bump stars were observed at R = 60,000 and S/N $\ge$ 100 per pixel near 5140Å. Analyses of these spectra have been reported in @grundahl02 and @yong03 [@yong05; @yong08nh]. The location of the program stars in a colour-magnitude diagram can be found in Figure 1 in @yong03.

----------- -------------- ---------- ------------- ------- ----------------------- ------------------ ------------------- --------------
Name1[^5]   Name2          RA2000     DE2000        $V$     [$T_{\rm eff}$]{}[^6]   [$\log g$]{}$^b$   [$\xi_t$]{}$^b$     \[Fe/H\]$^b$
                                                            (K)                     (cm s$^{-2}$)      ([km s$^{-1}$]{})
(1)         (2)            (3)        (4)           (5)     (6)                     (7)                (8)                 (9)
PD1         NGC6752-mg0    19:10:58   $-$59:58:07   10.70   3928                    0.26               2.20                $-$1.67
B1630       NGC6752-mg1    19:11:11   $-$59:59:51   10.73   3900                    0.24               2.25                $-$1.70
B3589       NGC6752-mg2    19:10:32   $-$59:57:01   10.94   3894                    0.33               2.07                $-$1.66
B1416       NGC6752-mg3    19:11:17   $-$60:03:10   10.99   4050                    0.50               1.88                $-$1.66
…           NGC6752-mg4    19:10:43   $-$59:59:54   11.02   4065                    0.53               1.86                $-$1.65
PD2         NGC6752-mg5    19:10:49   $-$59:59:34   11.03   4100                    0.56               1.90                $-$1.65
B2113       NGC6752-mg6    19:11:03   $-$60:01:43   11.22   4154                    0.68               1.85                $-$1.62
…           NGC6752-mg8    19:10:38   $-$60:04:10   11.47   4250                    0.80               1.71                $-$1.69
B3169       NGC6752-mg9    19:10:40   $-$59:58:14   11.52   4288                    0.91               1.72                $-$1.66
B2575       NGC6752-mg10   19:10:54   $-$59:57:14   11.54   4264                    0.90               1.66                $-$1.67
…           NGC6752-mg12   19:10:58   $-$59:57:04   11.59   4286                    0.94               1.73                $-$1.68
B2196       NGC6752-mg15   19:11:01   $-$59:57:18   11.68   4354                    1.02               1.74                $-$1.64
B1518       NGC6752-mg18   19:11:15   $-$60:00:29   11.83   4398                    1.11               1.68                $-$1.64
B3805       NGC6752-mg21   19:10:28   $-$59:59:49   11.99   4429                    1.20               1.68                $-$1.65
B2580       NGC6752-mg22   19:10:54   $-$60:02:05   11.99   4436                    1.20               1.71                $-$1.65
B1285       NGC6752-mg24   19:11:19   $-$60:00:31   12.15   4511                    1.31               1.69                $-$1.67
B2892       NGC6752-mg25   19:10:46   $-$59:56:22   12.23   4489                    1.33               1.70                $-$1.67
…           NGC6752-0      19:11:03   $-$59:59:32   13.03   4699                    1.83               1.43                $-$1.66
B2882       NGC6752-1      19:10:47   $-$60:00:43   13.27   4749                    1.95               1.37                $-$1.63
B1635       NGC6752-2      19:11:11   $-$60:00:17   13.30   4779                    1.98               1.37                $-$1.63
B2271       NGC6752-3      19:11:00   $-$59:56:40   13.41   4796                    2.03               1.38                $-$1.69
B611        NGC6752-4      19:11:33   $-$60:00:02   13.42   4806                    2.04               1.38                $-$1.65
B3490       NGC6752-6      19:10:34   $-$59:59:55   13.47   4804                    2.06               1.33                $-$1.64
B2438       NGC6752-7      19:10:57   $-$60:00:41   13.53   4829                    2.10               1.32                $-$1.86[^7]
B3103       NGC6752-8      19:10:45   $-$59:58:18   13.56   4910                    2.15               1.33                $-$1.69
B3880       NGC6752-9      19:10:26   $-$59:59:05   13.57   4824                    2.11               1.41                $-$1.70
B1330       NGC6752-10     19:11:18   $-$59:59:42   13.60   4836                    2.13               1.37                $-$1.65
B2728       NGC6752-11     19:10:50   $-$60:02:25   13.62   4829                    2.13               1.34                $-$1.68
B4216       NGC6752-12     19:10:20   $-$60:00:30   13.64   4841                    2.15               1.35                $-$1.66
B2782       NGC6752-15     19:10:49   $-$60:01:55   13.73   4850                    2.19               1.36                $-$1.63
B4446       NGC6752-16     19:10:15   $-$59:59:14   13.78   4906                    2.24               1.33                $-$1.63
B1113       NGC6752-19     19:11:23   $-$59:59:40   13.96   4928                    2.32               1.33                $-$1.68
…           NGC6752-20     19:10:36   $-$59:56:08   13.98   4929                    2.33               1.32                $-$1.63
…           NGC6752-21     19:11:13   $-$60:02:30   14.02   4904                    2.33               1.31                $-$1.67
B1668       NGC6752-23     19:11:12   $-$59:58:29   14.06   4916                    2.35               1.25                $-$1.66
…           NGC6752-24     19:10:44   $-$59:59:41   14.06   4948                    2.37               1.16                $-$1.71
…           NGC6752-29     19:10:17   $-$60:01:00   14.18   4950                    2.42               1.31                $-$1.69
…           NGC6752-30     19:10:39   $-$59:59:47   14.19   4943                    2.42               1.26                $-$1.64
----------- -------------- ---------- ------------- ------- ----------------------- ------------------ ------------------- --------------

Based on multi-band *Hubble Space Telescope* (*HST*) and ground-based Str["o]{}mgren photometry, @milone13 have identified three populations on the main sequence, subgiant branch and red giant branch 
of NGC 6752. These populations, which we refer to as $a$, $b$ and $c$, exhibit distinct chemical abundance patterns: population $a$ has a chemical composition similar to that of field halo stars (e.g., high O and low Na); population $c$ is enhanced in N, Na and He ($\Delta Y \sim 0.03$) and depleted in C and O; population $b$ has a chemical composition intermediate between populations $a$ and $c$, with slightly enhanced He ($\Delta Y \sim 0.01$). Using the data from @milone13, we can classify all program stars according to their populations. In the relevant figures, stars of populations $a$, $b$ and $c$ are coloured green, magenta and blue, respectively.

Line List and Equivalent Width Measurements
-------------------------------------------

The first step in our analysis was to measure equivalent widths (EWs) for a large set of lines. The line list was taken primarily from @gratton03 and supplemented with laboratory measurements for [Fe[i]{}]{} from the Oxford group [@blackwell79feb; @blackwell79fea; @blackwell80fea; @blackwell86fea; @blackwell95fea], laboratory measurements for [Fe[ii]{}]{} from @biemont91 and, for various elements, values taken from the references listed in @yong05 (which are also listed in Tables \[tab:ewtip\] and \[tab:ewbump\]). We used the DAOSPEC [@stetson08] software package to measure EWs in our program stars. For the subset of lines we had previously measured using routines in IRAF[^8], we compared those values with the DAOSPEC measurements and found excellent agreement between the two sets of EW measurements for lines having strengths less than $\sim$100mÅ (see Figure \[fig:ewcomp\]). For the 1,542 lines with EW $<$ 100 mÅ, we find a mean difference EW(DY) $-$ EW(DAOSPEC) = 1.14 $\pm$ 0.05 mÅ ($\sigma$ = 1.92 mÅ). For our analysis, we adopted only lines with 5 mÅ $<$ EW $<$ 100 mÅ as measured by DAOSPEC. A further requirement was that a given line must be measured in every RGB tip star or every RGB bump star. 
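The line-selection rule described above (keep a line only if it falls in the adopted EW range in every star of a sample) amounts to a set intersection. The following is a minimal sketch; the function name, data layout and example measurements are illustrative, not taken from the actual analysis code.

```python
# Sketch of the line-selection rule: a line survives only if DAOSPEC measured
# it in every star of the sample with 5 mA < EW < 100 mA.

def select_common_lines(ew_by_star, ew_min=5.0, ew_max=100.0):
    """ew_by_star: {star_name: {wavelength: EW in mA}}.
    Returns the sorted wavelengths usable in *every* star of the sample."""
    common = None
    for measurements in ew_by_star.values():
        good = {wl for wl, ew in measurements.items() if ew_min < ew < ew_max}
        common = good if common is None else common & good
    return sorted(common)

# Toy sample of two stars; 9999.0 is measured in one star only and is too strong.
ews = {
    "mg0": {6154.23: 48.2, 6160.75: 74.7, 5645.61: 16.0, 9999.0: 120.0},
    "mg1": {6154.23: 32.2, 6160.75: 53.1, 5645.61: 16.3},
}
print(select_common_lines(ews))  # -> [5645.61, 6154.23, 6160.75]
```

The same function, with `ew_min=10.0`, would implement the stricter cut applied to the RGB bump sample.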
That is, the line list for the RGB tip sample was different from the line list for the RGB bump sample, but for either sample of stars, each line was measured in every star within a particular sample. Due to the lower quality spectra for the RGB bump sample, we required lines to have EW $\ge$ 10 mÅ. The line list and EW measurements for the RGB tip sample and for the RGB bump sample are presented in Tables \[tab:ewtip\] and \[tab:ewbump\], respectively.

![Comparison of EWs measured using IRAF (DY) and DAOSPEC. The upper panel shows all lines (N = 1,795). The lower panel shows the distribution of the EW differences for the 1,542 lines with EW$_{\rm DY}$ $<$ 100 mÅ (i.e., measured using IRAF). We superimpose the Gaussian fit to the distribution and write the relevant parameters associated with the fit as well as the mean and dispersion.[]{data-label="fig:ewcomp"}](fig1.ps){width=".80\hsize"}

------------ ------------- ------- ----------- ---------- ------ ------ ------ ------ -------------
Wavelength   Species[^9]   L.E.P   $\log gf$   mg0[^10]   mg1    mg2    mg3    mg4    Source[^11]
Å                          eV                  mÅ         mÅ     mÅ     mÅ     mÅ
(1)          (2)           (3)     (4)         (5)        (6)    (7)    (8)    (9)    (10)
6154.23      11.0          2.10    $-$1.56     48.2       32.2   23.9   18.5   20.3   A
6160.75      11.0          2.10    $-$1.26     74.7       53.1   42.1   34.2   37.7   A
5645.61      14.0          4.93    $-$2.14     16.0       16.3   15.8   15.6   15.9   A
5665.56      14.0          4.92    $-$2.04     20.3       20.4   20.4   19.4   19.4   B
5684.49      14.0          4.95    $-$1.65     35.0       36.1   34.2   34.1   33.3   B
------------ ------------- ------- ----------- ---------- ------ ------ ------ ------ -------------

\
This table is published in its entirety in the electronic edition of the MNRAS. A portion is shown here for guidance regarding its form and content. 
------------ -------------- ------- ----------- -------- ------ ------ ------ ------ -------------
Wavelength   Species[^12]   L.E.P   $\log gf$   0[^13]   1      2      3      4      Source[^14]
Å                           eV                  mÅ       mÅ     mÅ     mÅ     mÅ
(1)          (2)            (3)     (4)         (5)      (6)    (7)    (8)    (9)    (10)
5682.65      11.0           2.10    $-$0.71     52.1     18.6   56.1   15.3   50.2   A
5688.22      11.0           2.10    $-$0.40     77.0     31.9   75.5   27.3   73.8   A
5684.49      14.0           4.95    $-$1.65     24.4     22.9   22.5   20.8   23.6   B
5708.40      14.0           4.95    $-$1.47     38.3     28.7   33.9   28.4   30.4   B
5948.55      14.0           5.08    $-$1.23     43.5     36.9   39.4   31.8   37.5   A
------------ -------------- ------- ----------- -------- ------ ------ ------ ------ -------------

\
This table is published in its entirety in the electronic edition of the MNRAS. A portion is shown here for guidance regarding its form and content.

Establishing Parameters for Reference Stars
-------------------------------------------

In order to conduct the line-by-line strictly differential analysis, we needed to adopt a reference star. The reference star parameters were determined in the following manner. Note that since we did not know which reference stars would be adopted, the procedure was applied to all stars. Following our previous analyses of these spectra, effective temperatures, [$T_{\rm eff}$]{}, were derived from the @grundahl99 $uvby$ photometry using the @alonso99 [$T_{\rm eff}$]{}:colour:\[Fe/H\] relations. Surface gravities, [$\log g$]{}, were estimated using [$T_{\rm eff}$]{} and the stellar luminosity. The latter value was determined by assuming a mass of 0.84 M$_\odot$, a reddening $E(B-V)$ = 0.04 [@harris96] and bolometric corrections taken from a 14 Gyr isochrone with \[Fe/H\] = $-$1.54 from @vandenberg00. The model atmospheres used in the analysis were the one-dimensional, plane-parallel, local thermodynamic equilibrium (LTE), $\alpha$-enhanced, \[$\alpha$/Fe\] = +0.4, NEWODF grid of ATLAS9 models by @castelli03. We used linear interpolation software (written by Dr Carlos Allende Prieto and tested in @allende04) to produce a particular model. 
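The gravity estimate from [$T_{\rm eff}$]{}, mass and luminosity follows the standard relation $g \propto M\,T_{\rm eff}^4/L$. A minimal sketch is below; the solar constants and the example luminosity are assumptions for illustration, not values from the paper.

```python
import math

# Hedged sketch of a photometric surface gravity: log g follows from Teff,
# an assumed stellar mass and the luminosity via g ~ M * Teff^4 / L.
# Solar reference values below are conventional assumptions.
LOGG_SUN = 4.44    # log g of the Sun (cgs)
TEFF_SUN = 5772.0  # solar effective temperature (K)

def photometric_logg(teff, mass_msun, lum_lsun):
    """log g (cgs) from Teff (K), mass (Msun) and luminosity (Lsun)."""
    return (LOGG_SUN + math.log10(mass_msun)
            + 4.0 * math.log10(teff / TEFF_SUN)
            - math.log10(lum_lsun))

# An RGB-tip-like star: 0.84 Msun, Teff ~ 3930 K, L ~ 1600 Lsun (L is illustrative)
print(round(photometric_logg(3930.0, 0.84, 1600.0), 2))  # -> 0.49
```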
(See @meszaros13 for a discussion of interpolation of model atmospheres.) Using the 2011 version of the stellar line analysis program MOOG [@moog; @sobeck11], we computed the abundance for a given line. The microturbulent velocity, [$\xi_t$]{}, was set, in the usual way, by forcing the abundances from [Fe[i]{}]{} lines to have zero slope against the reduced equivalent width, EW$_r$ = $\log (W_\lambda/\lambda)$. The metallicity was inferred from [Fe[i]{}]{} lines. We iterated this process until the inferred metallicity matched the value adopted to generate the model atmosphere (this process usually converged within three iterations). (We exclude the RGB bump star NGC 6752-7 (B2438) due to its discrepant iron abundance, most likely resulting from a photometric blend which affected the [$T_{\rm eff}$]{} and [$\log g$]{} values.)

Line-by-line Strictly Differential Stellar Parameters
-----------------------------------------------------

Following @melendez12, we determined the stellar parameters using a strictly differential line-by-line analysis between the program stars and a reference star. Given the difference in [$T_{\rm eff}$]{} between the RGB tip and RGB bump samples, we treated each sample separately. For the RGB tip stars, we selected NGC 6752-mg9 to be the reference star since it had a [$T_{\rm eff}$]{} value close to the median for the RGB tip stars and the O/Na/Mg/Al abundances were also close to the median values. These decisions were motivated by the expectation that the errors in the derived stellar parameters, and therefore errors in the chemical abundances, would increase if there was a large difference in [$T_{\rm eff}$]{} between the program star and the reference star. Thus, we selected a star with [$T_{\rm eff}$]{} close to the median value to minimise the difference in [$T_{\rm eff}$]{} between the program stars and the reference star. 
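The zero-slope condition used to set [$\xi_t$]{} can be sketched as a one-dimensional root-find: adjust $\xi_t$ until the least-squares slope of the Fe [i]{} abundances against reduced EW vanishes. The bisection and the toy abundance model below are illustrative only (real line abundances would come from MOOG), but the slope does decrease monotonically with microturbulence for real curves of growth, which is what the bisection relies on.

```python
# Illustrative sketch of the zero-slope microturbulence criterion.

def ls_slope(x, y):
    """Least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

def solve_xi(abundance_of, red_ew, lo=0.5, hi=3.0, tol=1e-4):
    """Bisect on xi_t until the slope of abundance vs. reduced EW vanishes.
    abundance_of(xi) returns per-line abundances; its slope against red_ew
    must decrease monotonically with xi."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ls_slope(red_ew, abundance_of(mid)) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Toy model whose slope vanishes at xi_t = 2.0 km/s
red_ew = [-5.6, -5.2, -4.9, -4.7]
toy = lambda xi: [(2.0 - xi) * r for r in red_ew]
print(round(solve_xi(toy, red_ew), 2))  # -> 2.0
```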
Similarly, we were concerned that large differences in the abundances of O/Na/Mg/Al between the program star and the reference star could increase the errors in the derived stellar parameters and chemical abundances. Again, selecting the reference star to have O/Na/Mg/Al abundances close to the median value minimises the abundance differences between the program stars and the reference star. Application of a similar approach to the RGB bump sample resulted in the selection of NGC 6752-11 as the reference star. To determine the stellar parameters for a program star, we generated a model atmosphere with a particular combination of effective temperature ([$T_{\rm eff}$]{}), surface gravity ([$\log g$]{}), microturbulent velocity ([$\xi_t$]{}) and metallicity, \[m/H\]. The initial guesses for these parameters came from the values in Section 2.3. Using MOOG, we computed the abundances for [Fe[i]{}]{} and [Fe[ii]{}]{} lines. We then examined the [*line-by-line Fe abundance differences*]{}. Adopting the notation from @melendez12, the abundance difference (program star $-$ reference star) for a line is $$\delta A_i = A_i^{\rm program~star} - A_i^{\rm reference~star}.$$ We examined the abundance differences for [Fe[i]{}]{} as a function of lower excitation potential. 
We forced excitation equilibrium by imposing the following constraint $$\label{eq:teff} \frac{\partial(\delta A_i^{\rm FeI})}{\partial(\chi_{\rm exc})} = 0.$$ Next, we considered the abundance differences for [Fe[i]{}]{} as a function of reduced equivalent width, EW$_r$, and imposed the following constraint $$\label{eq:vt} \frac{\partial(\delta A_i^{\rm FeI})}{\partial({\rm EW}_r)} = 0.$$ For any species, [Fe[i]{}]{} in this example, we then defined the average abundance difference as $$\Delta^{\rm FeI} = \langle \delta A_i^{\rm FeI} \rangle = \frac{1}{N} \sum\limits_{i=1}^N \delta A_i^{\rm FeI}$$ Similarly, we defined the average [Fe[ii]{}]{} abundance as $\Delta^{\rm FeII}$ = $\langle \delta A_i^{\rm FeII} \rangle$, and the relative ionization equilibrium as $$\label{eq:logg} \Delta^{\rm FeI - FeII} = \Delta^{\rm FeI} - \Delta^{\rm FeII} = \langle \delta A_i^{\rm FeI} \rangle - \langle \delta A_i^{\rm FeII} \rangle = 0.$$ Unlike @melendez12, we did not take into account the relative ionization equilibria for Cr and Ti, nor did we consider non-LTE effects for any species. We note that while departures from LTE are expected for [Fe[i]{}]{} for metal-poor giants [@lind12], the relative non-LTE effects across our range of stellar parameters are vanishingly small. The final stellar parameters for a program star were obtained when equations (\[eq:teff\]), (\[eq:vt\]) and (\[eq:logg\]) were simultaneously satisfied and the derived metallicity was identical to that used in generating the model atmosphere. Regarding the latter criterion, we provide the following example. The metallicity of the reference star NGC 6752-mg9 was \[Fe/H\] = $-$1.66 when adopting the @asplund09 solar abundances and the photometric stellar parameters described in Section 2.3 (see Table \[tab:param\]). For star NGC 6752-mg8, the average abundance difference for [Fe[i]{}]{}, and also [Fe[ii]{}]{} given equation (\[eq:logg\]), was $\langle \delta A_i^{\rm FeI} \rangle$ = +0.01 dex. 
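The bookkeeping of equations (1)-(5) is simple to state in code: subtract the reference-star abundance from the program-star abundance line by line, then average per species. The sketch below uses invented line identifiers and abundances purely for illustration.

```python
# Sketch of the line-by-line differential abundances of equations (1) and (4):
# delta A_i = A_i(program) - A_i(reference), averaged per species.

def differential_abundances(program, reference):
    """program/reference: {line_id: abundance}. Uses only lines common to both,
    matching the requirement that each line be measured in every star."""
    return {lid: program[lid] - reference[lid]
            for lid in program if lid in reference}

def mean_delta(deltas):
    """Equation (4): the average abundance difference for a species."""
    return sum(deltas.values()) / len(deltas)

# Invented Fe I line abundances for a program star and the reference star
prog = {"FeI_5445": 5.88, "FeI_5522": 5.84, "FeI_5560": 5.87}
ref  = {"FeI_5445": 5.86, "FeI_5522": 5.83, "FeI_5560": 5.85}
d = differential_abundances(prog, ref)
print(round(mean_delta(d), 3))  # -> 0.017
```

The ionization-equilibrium constraint of equation (5) is then just `mean_delta(d_FeI) - mean_delta(d_FeII) = 0`.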
Thus, the stellar parameters can only be regarded as final if equations (\[eq:teff\]), (\[eq:vt\]) and (\[eq:logg\]) are satisfied and the model atmosphere is generated assuming a global metallicity of \[m/H\] = \[Fe/H\]$_{\rm NGC 6752-mg9}$ + $\langle \delta A_i^{\rm FeI} \rangle$ = $-$1.65. While equations (\[eq:teff\]), (\[eq:vt\]) and (\[eq:logg\]) are primarily sensitive to [$T_{\rm eff}$]{}, [$\xi_t$]{} and [$\log g$]{}, respectively, in practice, all three equations are affected by small changes in any stellar parameter. Derivation of these strictly differential stellar parameters required multiple iterations (up to 20) where each iteration selected a single value for \[m/H\] and five values for each parameter, [$T_{\rm eff}$]{}, [$\log g$]{} and [$\xi_t$]{}, in steps of 5 K, 0.05 dex and 0.05 [km s$^{-1}$]{}, respectively, i.e., 125 models per iteration. We then examined the output from the 125 models to see whether equations (2), (3) and (5) were simultaneously satisfied and whether the derived metallicity matched that of the model atmosphere. If not, the best model was identified and we repeated the process. If so, we conducted a final iteration in which we selected a single value for \[m/H\] and tested 11 values for each parameter, [$T_{\rm eff}$]{}, [$\log g$]{} and [$\xi_t$]{}, in steps of 1 K, 0.01 dex and 0.01 [km s$^{-1}$]{}, respectively, i.e., 1,331 models in the final iteration using a smaller step size for each parameter, and the best model was selected. As noted, this process was performed separately for the RGB tip sample and for the RGB bump sample. The strictly differential stellar parameters obtained using this pair of reference stars (RGB tip = NGC 6752-mg9, RGB bump = NGC 6752-11) are presented in Table \[tab:param1\]. (We exclude the RGB tip star NGC 6752-mg1 because the stellar parameters did not converge. Specifically, the best solution required a value for [$\log g$]{} beyond the boundary of the @castelli03 grid of model atmospheres.) 
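The coarse-to-fine search described above (5 values per parameter, 125 models per iteration; then 11 values per parameter with a tenfold smaller step, 1,331 models) can be sketched as a symmetric grid around the current best parameters. The helper name and centre values are illustrative; the step sizes follow the text.

```python
import itertools

# Illustrative coarse-to-fine parameter grid: each iteration tests a symmetric
# grid of n values per parameter (Teff, log g, xi_t) around the current best.

def build_grid(center, steps, n):
    """Cartesian grid of n values per parameter centred on `center`."""
    half = n // 2
    axes = [[c + (i - half) * s for i in range(n)]
            for c, s in zip(center, steps)]
    return list(itertools.product(*axes))

# Coarse pass: steps of 5 K, 0.05 dex, 0.05 km/s -> 5^3 = 125 models
coarse = build_grid((4288.0, 0.91, 1.72), (5.0, 0.05, 0.05), 5)
# Final pass: steps of 1 K, 0.01 dex, 0.01 km/s -> 11^3 = 1331 models
fine = build_grid((4288.0, 0.91, 1.72), (1.0, 0.01, 0.01), 11)
print(len(coarse), len(fine))  # -> 125 1331
```

Each grid point would be fed to the model-atmosphere/MOOG machinery, and the point best satisfying equations (2), (3) and (5) becomes the centre of the next iteration.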
Figures \[fig:paramtip\] and \[fig:parambum\] provide examples of $\delta A_i$, for [Fe[i]{}]{} and [Fe[ii]{}]{}, versus lower excitation potential and reduced EW for the strictly differential stellar parameters for a representative RGB tip star and a representative RGB bump star, respectively. That is, these figures show the results when equations (2), (3) and (5) are simultaneously satisfied and the derived metallicity is the same as that used to generate the model atmosphere.

![image](fig2.ps){width=".60\hsize"}

![image](fig3.ps){width=".60\hsize"}

-------------- ------------------- ---------- --------------- --------------- ------------------- ------------------- ----------
Name           [$T_{\rm eff}$]{}   $\sigma$   [$\log g$]{}    $\sigma$        [$\xi_t$]{}         $\sigma$            \[Fe/H\]
               (K)                 (K)        (cm s$^{-2}$)   (cm s$^{-2}$)   ([km s$^{-1}$]{})   ([km s$^{-1}$]{})
(1)            (2)                 (3)        (4)             (5)             (6)                 (7)                 (8)
NGC6752-mg0    3919                20         0.16            0.01            2.24                0.05                $-$1.69
NGC6752-mg2    3938                22         0.23            0.01            2.13                0.05                $-$1.67
NGC6752-mg3    4066                19         0.53            0.01            1.93                0.04                $-$1.65
NGC6752-mg4    4081                18         0.54            0.01            1.90                0.04                $-$1.65
NGC6752-mg5    4100                17         0.56            0.01            1.93                0.04                $-$1.66
NGC6752-mg6    4151                19         0.65            0.01            1.88                0.04                $-$1.63
NGC6752-mg8    4284                14         0.93            0.01            1.73                0.04                $-$1.65
NGC6752-mg10   4291                12         0.92            0.01            1.70                0.03                $-$1.66
NGC6752-mg12   4315                13         0.96            0.01            1.76                0.04                $-$1.66
NGC6752-mg15   4339                13         1.01            0.01            1.76                0.04                $-$1.66
NGC6752-mg18   4380                15         1.07            0.01            1.71                0.04                $-$1.66
NGC6752-mg21   4437                13         1.16            0.01            1.69                0.05                $-$1.65
NGC6752-mg22   4444                14         1.19            0.01            1.71                0.04                $-$1.64
NGC6752-mg24   4505                17         1.30            0.01            1.72                0.07                $-$1.68
NGC6752-mg25   4471                15         1.24            0.01            1.74                0.07                $-$1.69
NGC6752-0      4706                12         1.85            0.01            1.44                0.02                $-$1.65
NGC6752-1      4719                11         1.94            0.01            1.37                0.02                $-$1.65
NGC6752-2      4739                12         1.95            0.01            1.35                0.02                $-$1.66
NGC6752-3      4749                13         2.00            0.01            1.34                0.02                $-$1.73
NGC6752-4      4794                13         2.08            0.01            1.37                0.02                $-$1.66
NGC6752-6      4795                11         2.10            0.01            1.32                0.02                $-$1.64
NGC6752-8      4930                15         2.29            0.01            1.31                0.03                $-$1.67
NGC6752-9      4795                21         2.09            0.01            1.40                0.04                $-$1.73
NGC6752-10     4811                10         2.11            0.01            1.35                0.02                $-$1.67
NGC6752-12     4822                13         2.15            0.01            1.34                0.02                $-$1.68
NGC6752-15     4830                12         2.23            0.01            1.34                0.02                $-$1.65
NGC6752-16     4875                15         2.24            0.01            1.31                0.03                $-$1.66
NGC6752-19     4892                12         2.32            0.01            1.31                0.02                $-$1.71
NGC6752-20     4899                12         2.32            0.01            1.30                0.02                $-$1.65
NGC6752-21     4884                14         2.32            0.01            1.30                0.03                $-$1.69
NGC6752-23     4912                12         2.33            0.01            1.25                0.02                $-$1.67
NGC6752-24     4911                17         2.39            0.01            1.14                0.03                $-$1.74
NGC6752-29     4923                13         2.40            0.01            1.30                0.02                $-$1.71
NGC6752-30     4919                12         2.47            0.01            1.24                0.02                $-$1.66
-------------- ------------------- ---------- --------------- --------------- ------------------- ------------------- ----------

In Figures \[fig:paramcomptip\] and \[fig:paramcompbum\] we compare the “reference star” stellar parameters (described in Sec 2.3) and the “strictly differential” stellar parameters (described above) for the RGB tip and RGB bump samples, respectively, using the reference stars noted above. For the RGB tip sample, the average differences between the “reference star” and “strictly differential” values for [$T_{\rm eff}$]{}, [$\log g$]{}, [$\xi_t$]{} and \[Fe/H\] are very small: 7.53 K $\pm$ 5.09 K, $-$0.015 dex (cgs) $\pm$ 0.015 dex (cgs), 0.031 [km s$^{-1}$]{} $\pm$ 0.004 [km s$^{-1}$]{} and $-$0.002 dex $\pm$ 0.004 dex, respectively. Comparably small differences in stellar parameters are obtained for the RGB bump sample. Therefore, an essential point we make here is that [*the strictly differential stellar parameters do not involve any substantial change for any parameter, relative to the “reference star” stellar parameters*]{}. For [$T_{\rm eff}$]{}, the changes are within the uncertainties of the photometry.

![Differences in [$T_{\rm eff}$]{}, [$\log g$]{}, [$\xi_t$]{} and \[Fe/H\] between the “reference star” (old) and the “strictly differential” (new) stellar parameters for the RGB tip sample (reference star is NGC 6752-mg9). The mean difference is written in each panel. 
The green, magenta and blue colours represent populations $a$, $b$ and $c$ from @milone13 (see Section 2.1 for details).[]{data-label="fig:paramcomptip"}](fig4.ps){width=".65\hsize"}

![Same as Figure \[fig:paramcomptip\] but for the RGB bump sample (reference star is NGC 6752-11).[]{data-label="fig:paramcompbum"}](fig5.ps){width=".65\hsize"}

Chemical Abundances
-------------------

Having obtained the strictly differential stellar parameters, we computed the abundances for the following species in every program star: Na, Si, Ca, [Ti[i]{}]{}, [Ti[ii]{}]{}, [Cr[i]{}]{}, [Cr[ii]{}]{}, Ni, Y, La, Nd and Eu. For the elements La and Eu, we used spectrum synthesis and $\chi^2$ analysis of the 5380Å and 6645Å lines, respectively, rather than an EW analysis since these lines are affected by hyperfine splitting (HFS) and/or isotope shifts. We treated these lines appropriately using the data from @kurucz95 and for Eu, we adopted the @lodders03 solar isotope ratios. The $\log gf$ values for the La and Eu lines were taken from @lawler01la and @lawler01eu, respectively. We used equation (1) to obtain the abundance difference (between the program star and the reference star) for any line. For a particular species, X, the average abundance difference is $\langle \delta A_i^{\rm X} \rangle$ which we write as $\Delta^{\rm X}$, i.e., as defined in equation (4) above. In Tables \[tab:abuna\] and \[tab:abunb\], we present the abundance differences for each element in all program stars. In order to put these abundance differences onto an absolute scale, in these tables we also provide the A(X) abundances for the reference stars when using the stellar parameters in Table \[tab:param\]. The new \[X/Fe\] values are in very good agreement with our previously published values [@grundahl02; @yong03; @yong05], although we have not attempted to reconcile the two sets of abundances. 
-------------- ------------------- ---------- ------------------- ---------- ------------------- ---------- ------------------- ---------- -------------------- ---------- --------------------- ----------
Star           $\Delta^{\rm Fe}$   $\sigma$   $\Delta^{\rm Na}$   $\sigma$   $\Delta^{\rm Si}$   $\sigma$   $\Delta^{\rm Ca}$   $\sigma$   $\Delta^{\rm TiI}$   $\sigma$   $\Delta^{\rm TiII}$   $\sigma$
(1)            (2)                 (3)        (4)                 (5)        (6)                 (7)        (8)                 (9)        (10)                 (11)       (12)                  (13)
NGC6752-mg0    $-$0.029            0.010      0.387               0.016      0.038               0.015      $-$0.023            0.033      0.021                0.020      $-$0.024              0.035
NGC6752-mg2    $-$0.011            0.011      $-$0.014            0.008      0.039               0.010      $-$0.021            0.049      0.050                0.024      0.036                 0.039
NGC6752-mg3    0.007               0.015      $-$0.027            0.005      0.007               0.009      $-$0.003            0.045      0.020                0.017      0.043                 0.041
NGC6752-mg4    0.010               0.014      0.041               0.010      0.030               0.010      0.008               0.038      0.023                0.012      0.043                 0.045
NGC6752-mg5    0.005               0.008      0.052               0.008      0.015               0.008      0.001               0.015      0.006                0.012      0.038                 0.035
NGC6752-mg6    0.032               0.009      $-$0.123            0.002      0.042               0.009      0.049               0.040      0.052                0.011      0.095                 0.065
NGC6752-mg8    0.007               0.010      0.036               0.015      $-$0.001            0.017      0.004               0.024      0.008                0.023      0.029                 0.019
NGC6752-mg10   0.007               0.010      0.013               0.004      $-$0.004            0.007      $-$0.005            0.017      $-$0.023             0.010      0.027                 0.033
NGC6752-mg12   0.002               0.010      $-$0.342            0.004      $-$0.021            0.007      $-$0.017            0.016      0.006                0.007      $-$0.007              0.030
NGC6752-mg15   $-$0.001            0.009      0.044               0.009      $-$0.008            0.010      $-$0.008            0.011      $-$0.009             0.007      0.009                 0.023
NGC6752-mg18   $-$0.002            0.010      $-$0.094            0.004      $-$0.006            0.009      $-$0.016            0.017      $-$0.018             0.007      0.044                 0.033
NGC6752-mg21   0.018               0.009      0.282               0.009      0.043               0.009      0.032               0.013      $-$0.012             0.009      0.057                 0.031
NGC6752-mg22   0.014               0.009      0.323               0.008      0.030               0.010      0.017               0.011      $-$0.012             0.010      0.012                 0.031
NGC6752-mg24   $-$0.023            0.016      $-$0.345            0.035      $-$0.049            0.009      $-$0.040            0.012      $-$0.034             0.009      0.047                 0.059
NGC6752-mg25   $-$0.027            0.010      $-$0.139            0.025      $-$0.008            0.010      $-$0.026            0.023      $-$0.045             0.009      $-$0.023              0.039
NGC6752-0      0.030               0.010      0.335               0.033      0.096               0.019      0.050               0.010      0.023                0.011      0.052                 0.012
NGC6752-1      0.025               0.009      $-$0.366            0.020      $-$0.008            0.013      0.031               0.010      0.003                0.011      0.034                 0.012
NGC6752-2      0.020               0.008      0.384               0.015      0.055               0.012      0.038               0.008      $-$0.001             0.008      0.031                 0.014
NGC6752-3      $-$0.049            0.012      $-$0.444            0.016      $-$0.044            0.007      $-$0.044            0.009      $-$0.052             0.013      $-$0.036              0.017
NGC6752-4      0.017               0.015      0.352               0.021      0.026               0.021      0.065               0.011      0.007                0.013      0.034                 0.017
NGC6752-6      0.036               0.014      0.262               0.017      0.032               0.008      0.060               0.011      0.027                0.013      0.042                 0.014
NGC6752-8      0.010               0.014      $-$0.323            0.012      $-$0.045            0.017      0.027               0.010      0.030                0.012      0.018                 0.013
NGC6752-9      $-$0.048            0.025      $-$0.396            0.056      $-$0.049            0.011      $-$0.038            0.013      $-$0.062             0.016      $-$0.045              0.018
NGC6752-10     0.013               0.011      0.357               0.020      0.016               0.012      0.039               0.014      0.007                0.019      0.032                 0.014
NGC6752-12     0.000               0.013      $-$0.065            0.009      $-$0.012            0.016      0.003               0.010      $-$0.023             0.013      0.027                 0.016
NGC6752-15     0.033               0.012      $-$0.355            0.075      $-$0.002            0.012      0.022               0.011      $-$0.006             0.015      0.042                 0.015
NGC6752-16     0.021               0.016      0.091               0.014      $-$0.005            0.018      0.008               0.011      0.001                0.015      0.007                 0.016
NGC6752-19     $-$0.029            0.012      $-$0.190            0.008      $-$0.048            0.010      $-$0.029            0.008      $-$0.046             0.011      $-$0.024              0.012
NGC6752-20     0.029               0.012      0.454               0.015      0.031               0.015      0.051               0.009      0.020                0.013      0.037                 0.012
NGC6752-21     $-$0.007            0.013      $-$0.063            0.003      $-$0.019            0.018      0.010               0.011      $-$0.010             0.014      0.011                 0.013
NGC6752-23     0.016               0.012      0.272               0.019      0.032               0.012      0.033               0.009      $-$0.002             0.013      0.024                 0.015
NGC6752-24     $-$0.058            0.016      $-$0.408            0.010      $-$0.107            0.020      $-$0.048            0.015      $-$0.078             0.011      $-$0.081              0.018
NGC6752-29     $-$0.026            0.012      $-$0.421            0.032      $-$0.101            0.020      $-$0.025            0.009      $-$0.064             0.021      $-$0.043              0.012
NGC6752-30     0.025               0.011      $-$0.161            0.010      $-$0.007            0.013      0.056               0.012      0.003                0.015      0.051                 0.014
-------------- ------------------- ---------- ------------------- ---------- ------------------- ---------- ------------------- ---------- -------------------- ---------- --------------------- ----------

\
In order to place the above values onto an absolute scale, the absolute abundances we obtain for the reference stars are given below. 
We caution, however, that the absolute scale has not been critically evaluated (see Section 2.5 for more details).\
NGC6752-mg9: A(Fe) = 5.85, A(Na) = 4.86, A(Si) = 6.23, A(Ca) = 4.99, A([Ti[i]{}]{}) = 3.54, A([Ti[ii]{}]{}) = 3.59.\
NGC6752-11: A(Fe) = 5.84, A(Na) = 4.84, A(Si) = 6.24, A(Ca) = 4.97, A([Ti[i]{}]{}) = 3.50, A([Ti[ii]{}]{}) = 3.72.

--------------------------------------------------------------------------------------------------------------------------------
Star  $\Delta^{\rm CrI}$  $\sigma$  $\Delta^{\rm CrII}$  $\sigma$  $\Delta^{\rm Ni}$  $\sigma$  $\Delta^{\rm Y}$  $\sigma$  $\Delta^{\rm La}$  $\sigma$  $\Delta^{\rm Nd}$  $\sigma$  $\Delta^{\rm Eu}$  $\sigma$
(1)  (2)  (3)  (4)  (5)  (6)  (7)  (8)  (9)  (10)  (11)  (12)  (13)  (14)  (15)
--------------------------------------------------------------------------------------------------------------------------------
NGC6752-mg0  0.013  0.059  0.018  0.077  $-$0.030  0.023  0.022  0.037  0.028  0.013  $-$0.011  0.042  $-$0.002  0.012
NGC6752-mg2  0.053  0.087  0.068  0.074  $-$0.000  0.021  0.087  0.045  0.081  0.017  0.051  0.058  $-$0.012  0.013
NGC6752-mg3  0.042  0.046  0.042  0.035  0.005  0.023  0.074  0.036  0.106  0.016  0.046  0.061  0.063  0.013
NGC6752-mg4  0.050  0.046  0.055  0.033  0.013  0.019  0.075  0.024  0.073  0.015  0.058  0.042  0.056  0.014
NGC6752-mg5  0.034  0.037  0.023  0.029  $-$0.001  0.011  0.006  0.035  0.140  0.016  0.029  0.026  0.027  0.014
NGC6752-mg6  0.028  0.042  0.044  0.024  0.038  0.023  0.098  0.028  0.109  0.017  0.067  0.053  0.060  0.014
NGC6752-mg8  $-$0.029  0.035  $-$0.095  0.085  0.007  0.014  0.015  0.013  0.087  0.016  0.026  0.016  0.053  0.016
NGC6752-mg10  $-$0.009  0.022  $-$0.055  0.074  $-$0.001  0.012  0.079  0.020  0.020  0.017  0.019  0.025  $-$0.032  0.016
NGC6752-mg12  $-$0.005  0.013  $-$0.014  0.006  0.003  0.008  $-$0.006  0.020  $-$0.036  0.016  0.000  0.021  0.013  0.016
NGC6752-mg15  $-$0.027  0.011  $-$0.019  0.014  $-$0.006  0.007  $-$0.001  0.004  0.042  0.016  0.015  0.013  $-$0.013  0.014
NGC6752-mg18  $-$0.026  0.016  $-$0.032  0.014  $-$0.007  0.010  0.014  0.026  0.005  0.018  $-$0.011  0.028  0.007  0.017
NGC6752-mg21  $-$0.003  0.023  $-$0.021  0.012  $-$0.002  0.008  0.068  0.023  0.059  0.017  0.010  0.022  $-$0.037  0.017
NGC6752-mg22  $-$0.017  0.042  0.007  0.039  0.009  0.009  0.047  0.018  0.049  0.017  0.013  0.016  0.008  0.018
NGC6752-mg24  $-$0.033  0.013  $-$0.060  0.013  $-$0.024  0.008  $-$0.062  0.015  $-$0.005  0.016  $-$0.032  0.018  0.018  0.018
NGC6752-mg25  $-$0.023  0.021  $-$0.046  0.014  $-$0.043  0.010  $-$0.038  0.018  0.108  0.015  $-$0.051  0.026  0.003  0.018
NGC6752-0  0.058  0.012  0.112  0.053  0.020  0.009  0.044  0.018  0.018  0.012  0.018  0.015  0.123  0.024
NGC6752-1  0.037  0.014  0.077  0.060  0.010  0.014  0.026  0.027  $-$0.060  0.012  $-$0.009  0.025  $-$0.068  0.026
NGC6752-2  0.009  0.012  0.038  0.005  $-$0.003  0.008  $-$0.017  0.023  0.032  0.011  $-$0.009  0.029  0.180  0.023
NGC6752-3  $-$0.053  0.023  $-$0.053  0.029  $-$0.057  0.013  $-$0.143  0.009  $-$0.039  0.012  $-$0.110  0.025  0.089  0.025
NGC6752-4  0.014  0.023  0.062  0.046  0.003  0.012  0.018  0.022  0.009  0.010  $-$0.014  0.027  0.328  0.025
NGC6752-6  0.038  0.027  0.068  0.052  0.004  0.012  0.005  0.025  0.027  0.013  0.041  0.035  0.208  0.025
NGC6752-8  0.019  0.016  0.061  0.055  $-$0.004  0.008  $-$0.026  0.026  0.064  0.010  0.033  0.014  0.179  0.029
NGC6752-9  $-$0.039  0.026  0.028  0.044  $-$0.054  0.016  $-$0.089  0.012  $-$0.014  0.011  $-$0.064  0.023  0.149  0.025
NGC6752-10  0.029  0.022  0.016  0.022  $-$0.016  0.014  0.016  0.013  0.076  0.012  $-$0.013  0.025  0.185  0.029
NGC6752-12  0.004  0.021  0.075  0.065  $-$0.016  0.010  $-$0.097  0.021  $-$0.006  0.011  $-$0.020  0.032  0.008  0.028
NGC6752-15  0.024  0.021  0.070  0.021  0.005  0.013  $-$0.046  0.026  $-$0.005  0.011  $-$0.010  0.025  $-$0.082  0.034
NGC6752-16  0.016  0.019  0.012  0.024  0.007  0.013  $-$0.048  0.015  0.031  0.013  0.045  0.031  $-$0.001  0.039
NGC6752-19  $-$0.036  0.021  0.016  0.048  $-$0.052  0.010  $-$0.107  0.013  0.018  0.011  $-$0.049  0.024  0.004  0.042
NGC6752-20  0.024  0.018  0.038  0.019  0.007  0.007  0.012  0.014  0.054  0.012  0.011  0.026  0.057  0.042
NGC6752-21  $-$0.014  0.018  0.052  0.025  $-$0.032  0.009  $-$0.013  0.015  0.087  0.011  $-$0.023  0.019  $-$0.032  0.039
NGC6752-23  0.006  0.025  0.102  0.036  $-$0.026  0.010  0.016  0.010  $-$0.028  0.011  $-$0.004  0.011  $-$0.033  0.040
NGC6752-24  $-$0.056  0.019  $-$0.031  0.020  $-$0.089  0.010  $-$0.135  0.018  $-$0.050  0.012  $-$0.075  0.016  0.141  0.050
NGC6752-29  $-$0.036  0.020  0.051  0.042  $-$0.056  0.011  $-$0.082  0.022  $-$0.094  0.012  $-$0.054  0.021  0.062  0.033
NGC6752-30  0.029  0.016  0.048  0.037  $-$0.007  0.010  0.000  0.032  0.047  0.011  0.025  0.017  0.235  0.031
--------------------------------------------------------------------------------------------------------------------------------

In order to place the above values onto an absolute scale, the absolute abundances we obtain for the reference stars are given below. We caution, however, that the absolute scale has not been critically evaluated (see Section 2.5 for more details).\
NGC6752-mg9: A([Cr[i]{}]{}) = 3.99, A([Cr[ii]{}]{}) = 4.10, A(Ni) = 4.56, A(Y) = 0.67, A(La) = $-$0.39, A(Nd) = 0.06, A(Eu) = $-$0.75.\
NGC6752-11: A([Cr[i]{}]{}) = 3.84, A([Cr[ii]{}]{}) = 4.12, A(Ni) = 4.54, A(Y) = 0.66, A(La) = $-$0.29, A(Nd) = 0.06, A(Eu) = $-$0.80.

For Na, the range in abundance is 0.90 dex, in good agreement with our previously published values. We did not attempt to re-measure the abundances of the other light elements, O, Mg and Al, as multiple lines could not be measured in all stars. Additionally, given the well-established correlations between the abundances of these elements, we believe that Na provides a reliable picture of the light-element abundance variations in this cluster. The interested reader can find our abundances for N, O, Mg and Al in @grundahl02 and @yong03 [@yong08nh]. (C measurements in the RGB bump sample are ongoing and will be presented in a future work.)
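Placing the strictly differential measurements onto the absolute scale quoted above is a simple offset, $A({\rm X})_{\rm star} = A({\rm X})_{\rm ref} + \Delta^{\rm X}$. A minimal sketch (the dictionary holds the NGC 6752-11 reference abundances listed above; the helper function and its name are ours, for illustration only):

```python
# Place strictly differential abundances, Delta^X, onto an absolute scale:
# A(X)_star = A(X)_ref + Delta^X(star).
# Reference values for NGC 6752-11 as quoted in the text.
A_REF_NGC6752_11 = {
    "CrI": 3.84, "CrII": 4.12, "Ni": 4.54, "Y": 0.66,
    "La": -0.29, "Nd": 0.06, "Eu": -0.80,
}

def absolute_abundance(delta_x, species, ref=A_REF_NGC6752_11):
    """Absolute abundance A(X) from a differential measurement delta_x (dex)."""
    return ref[species] + delta_x

# Example: a star with Delta^Ni = +0.020 dex relative to NGC 6752-11
# has an absolute Ni abundance of A(Ni) = 4.54 + 0.020 = 4.56.
print(round(absolute_abundance(0.020, "Ni"), 2))  # 4.56
```

The caveat above still applies: the zero point of this absolute scale has not been critically evaluated.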
As mentioned, @melendez12 considered the relative ionization equilibria for Ti and Cr when establishing the strictly differential stellar parameters. Having measured the Ti and Cr abundances from neutral and ionised lines, we are now in a position to examine $\Delta^{\rm TiI - TiII} = \langle \delta A_i^{\rm TiI} \rangle - \langle \delta A_i^{\rm TiII} \rangle$ and $\Delta^{\rm CrI - CrII} = \langle \delta A_i^{\rm CrI} \rangle - \langle \delta A_i^{\rm CrII} \rangle$. In Figure \[fig:ticr\], we plot $\Delta^{\rm TiI - TiII}$ and $\Delta^{\rm CrI - CrII}$ versus [$\log g$]{} for both samples of stars. In this figure, it is clear that ionization equilibrium is not obtained for Ti or Cr and that there are trends between $\Delta^{\rm TiI - TiII}$ vs. [$\log g$]{} and $\Delta^{\rm CrI - CrII}$ vs. [$\log g$]{}. Nevertheless, we are satisfied with our approach, which used only Fe lines to establish the differential stellar parameters. We expect that inclusion of Ti and Cr ionization equilibrium would have resulted in very small adjustments to the stellar parameters and to the differential chemical abundances. Finally, as will be shown later, Ti and Cr have considerably higher uncertainties, such that it may be better to rely only upon Fe for ionization balance.

![$\Delta^{\rm TiI - TiII}$ (upper panels) and $\Delta^{\rm CrI - CrII}$ (lower panels) for the RGB tip star sample (left panels) and the RGB bump star sample (right panels). (These results are obtained when using the reference stars RGB tip = NGC 6752-mg9 and RGB bump = NGC 6752-11.) The colours are the same as in Figure \[fig:paramcomptip\].[]{data-label="fig:ticr"}](fig6.ps){width="1.0\hsize"}

Error Analysis
--------------

To determine the errors in the stellar parameters, we adopted the following approach. For [$T_{\rm eff}$]{}, we determined the formal uncertainty in the slope between $\delta A_i^{\rm FeI}$ and the lower excitation potential.
We then adjusted [$T_{\rm eff}$]{} until the formal slope matched the error. The difference between the new [$T_{\rm eff}$]{} and the original value is $\sigma$[$T_{\rm eff}$]{}. For the RGB tip and RGB bump stars, the average values of $\sigma$[$T_{\rm eff}$]{} were 7.53 K and 21.74 K, respectively. For [$\log g$]{}, we added the standard error of the mean for $\Delta^{\rm FeI}$ and $\Delta^{\rm FeII}$ in quadrature and then adjusted [$\log g$]{} until the quantity $\Delta^{\rm FeI - FeII}$, from equation (\[eq:logg\]) above, was equal to this value. The difference between the new [$\log g$]{} and the original value is $\sigma$[$\log g$]{}. For the RGB tip and RGB bump stars, the average values of $\sigma$[$\log g$]{} were 0.015 dex and 0.009 dex, respectively. For [$\xi_t$]{}, we measured the formal uncertainty in the slope between $\delta A_i^{\rm FeI}$ and the reduced equivalent width. We adjusted [$\xi_t$]{} until the formal slope was equal to this value. The difference between the new and old values is $\sigma$[$\xi_t$]{}. Average values for $\sigma$[$\xi_t$]{} for the RGB tip and RGB bump samples were 0.031 [km s$^{-1}$]{} and 0.018 [km s$^{-1}$]{}, respectively. Uncertainties in the element abundance measurements were obtained following the formalism given in @johnson02, which we repeat here for convenience, and we note that this approach is very similar to that of @mcwilliam95 and @barklem05. 
$$\begin{aligned}
\sigma^2_{\rm{log}\epsilon}= \sigma^2_{\rm rand} + \left({\partial \rm{log}\epsilon\over\partial T}\right)^2 \sigma^2_T + \left({\partial \rm{log}\epsilon\over\partial {\rm log g}}\right)^2 \sigma^2_{{\rm log g}} + \nonumber\\
\left({\partial \rm{log}\epsilon\over\partial \xi}\right)^2 \sigma^2_{\xi} + 2\biggl[\left({\partial \rm{log}\epsilon\over\partial T}\right) \left({\partial \rm{log}\epsilon \over\partial {\rm log g}}\right)\sigma_{T{\rm log g}} + \nonumber\\
\left({\partial \rm{log}\epsilon\over\partial \xi}\right) \left({\partial \rm{log}\epsilon\over\partial {\rm log g}}\right) \sigma_{{\rm log g}\xi} + \left({\partial \rm{log}\epsilon\over\partial \xi}\right)\left({\partial \rm{log}\epsilon\over\partial T}\right) \sigma_{\xi T} \biggr]
\end{aligned}$$

The covariance terms, $\sigma_{T{\rm log g}}$, $\sigma_{{\rm log g}\xi}$ and $\sigma_{\xi T}$, were computed using the approach of @johnson02. These abundance uncertainties are included in Tables \[tab:abuna\], \[tab:abunb\], \[tab:abun2a\] and \[tab:abun2b\]. For La and Eu, the abundances were obtained from a single line. For these lines, we adopt the 1$\sigma$ fitting error from the $\chi^2$ analysis in place of the random error term, $\sigma_{\rm rand}$ (standard error of the mean). We note that these formal uncertainties, which take into account all covariance error terms, are below 0.02 dex for many elements in many stars, reaching values as low as $\sim$0.01 dex for a number of elements including Si, [Ti[i]{}]{}, Ni and Fe. Note that in Figure \[fig:ticr\], we regard $\Delta^{\rm TiI - TiII}$ as an abundance ratio between [Ti[i]{}]{} and [Ti[ii]{}]{}, and thus, we compute the error terms according to the relevant equations in @johnson02, which we again repeat here for convenience.
$$\sigma^2(A/B)=\sigma^2(A)+\sigma^2(B)-2 \sigma_{A,B}$$

The covariance between two abundances is given by

$$\begin{aligned}
\sigma_{A,B} = \left({{\partial}{\rm{log}\epsilon}_A\over{\partial}T}\right)\left({{\partial}{\rm{log}\epsilon}_{B}\over{\partial}T}\right)\sigma^2_T + \hspace{0.8in} \nonumber\\
\left({{\partial}{\rm{log}\epsilon}_A\over{\partial}{\rm log g}}\right) \left({{\partial}{\rm{log}\epsilon}_B\over{\partial}{\rm log g}}\right) \sigma^2_{{\rm log g}} + \left({{\partial}{\rm{log}\epsilon}_A\over{\partial}\xi}\right) \left({{\partial}{\rm{log}\epsilon}_B\over{\partial}\xi}\right) \sigma^2_{\xi} \nonumber\\
+ \biggl[\left({{\partial}{\rm{log}\epsilon}_{A}\over{\partial}T}\right)\left({{\partial}{\rm{log}\epsilon}_{B}\over{\partial}{\rm log g}}\right) + \left({{\partial}{\rm{log}\epsilon}_{A}\over{\partial}{\rm log g}}\right) \left({{\partial}{\rm{log}\epsilon}_{B}\over{\partial}T}\right)\biggr] \sigma_{T {\rm log g}} \nonumber\\
+ \biggl[\left({{\partial}{\rm{log}\epsilon}_{A}\over{\partial}\xi}\right)\left({{\partial}{\rm{log}\epsilon}_{B}\over{\partial}{\rm log g}}\right) + \left({{\partial}{\rm{log}\epsilon}_{A}\over{\partial}{\rm log g}}\right) \left({{\partial}{\rm{log}\epsilon}_{B}\over{\partial}\xi}\right)\biggr]\sigma_{\xi {\rm log g}}
\end{aligned}$$

RESULTS AND DISCUSSION {#sec:abund}
======================

Trends vs. [$T_{\rm eff}$]{}
----------------------------

In Figures \[fig:feteff\], \[fig:criiteff\] and \[fig:niteff\], we plot $\Delta^{\rm Fe}$, $\Delta^{\rm CrII}$ and $\Delta^{\rm Ni}$ versus [$T_{\rm eff}$]{}, respectively. In these figures, the RGB tip sample and the RGB bump sample are in the upper and lower panels, respectively. In each panel, we show the mean and the abundance dispersion for $\Delta^{\rm X}$ ($\sigma_{\rm A}$ in these figures). We also determine the linear least squares fit to the data and write the slope, uncertainty and abundance dispersion about the fit ($\sigma_{\rm B}$ in these figures).
For the subset of RGB tip stars within 100 K and 200 K of the reference star, we compute and write the mean abundance and abundance dispersions ($\sigma_{\rm A}$ and $\sigma_{\rm B}$). Similarly, for the subset of RGB bump stars within 50 K and 100 K of the reference star, we write the same quantities. Finally, we also write the average abundance error, $<\sigma\Delta^{\rm X}>$, for a particular element for the RGB tip and RGB bump samples. ![$\Delta^{\rm Fe}$ vs. [$T_{\rm eff}$]{} for the RGB tip star sample (upper panel) and the RGB bump star sample (lower panel). In both panels, we show the location of the “reference star” as a black cross. We write the mean abundance and standard deviation ($\sigma_{\rm A}$) for stars within 100K and 200K of the reference star as well as for the full sample. The red dashed line is the linear least squares fit to the data. The slope, uncertainty and dispersion ($\sigma_{\rm B}$) about the linear fit are written. We also write the average abundance error, $<\sigma\Delta^{\rm Fe}>$, for each sample. (These results are obtained when using the reference stars RGB tip = NGC 6752-mg9 and RGB bump = NGC 6752-11.) The colours are the same as in Figure \[fig:paramcomptip\][]{data-label="fig:feteff"}](fig7.ps){width="0.9\hsize"} ![Same as Figure \[fig:feteff\] but for $\Delta^{\rm CrII}$ vs.[$T_{\rm eff}$]{}.[]{data-label="fig:criiteff"}](fig8.ps){width="0.9\hsize"} ![Same as Figure \[fig:feteff\] but for $\Delta^{\rm Ni}$ vs. [$T_{\rm eff}$]{}.[]{data-label="fig:niteff"}](fig9.ps){width="0.9\hsize"} Fe and Ni (Figures \[fig:feteff\] and \[fig:niteff\]) are examples where the average abundance errors are very small, $\sim$0.01 dex. [Cr[ii]{}]{} (Figure \[fig:criiteff\]) is the element that shows the highest average abundance error, $\sim$0.04 dex. 
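The formal abundance errors quoted here follow the propagation formula of @johnson02 given in Section 2.6. That propagation can be sketched as follows; the partial derivatives, parameter errors and covariances below are illustrative placeholders, not values from our analysis:

```python
import math

def abundance_variance(sigma_rand, dA_dT, dA_dg, dA_dxi,
                       sig_T, sig_g, sig_xi,
                       cov_Tg, cov_gxi, cov_xiT):
    """Variance of log epsilon from the random error plus stellar-parameter
    errors and their covariances (Johnson 2002 formalism)."""
    return (sigma_rand**2
            + dA_dT**2 * sig_T**2
            + dA_dg**2 * sig_g**2
            + dA_dxi**2 * sig_xi**2
            + 2.0 * (dA_dT * dA_dg * cov_Tg
                     + dA_dxi * dA_dg * cov_gxi
                     + dA_dxi * dA_dT * cov_xiT))

# Illustrative (made-up) sensitivities: dex per K, per dex, per km/s,
# with parameter errors comparable to those quoted in the text.
var = abundance_variance(sigma_rand=0.005,
                         dA_dT=1.0e-4, dA_dg=0.02, dA_dxi=-0.05,
                         sig_T=8.0, sig_g=0.015, sig_xi=0.03,
                         cov_Tg=0.0, cov_gxi=0.0, cov_xiT=0.0)
print(round(math.sqrt(var), 4))  # total abundance error in dex
```

With small sensitivities and parameter errors at the level achieved here, the propagated error lands near the $\sim$0.01 dex floor set by the random term, consistent with the values quoted above.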
Rather than showing similar figures for every element, in Figure \[fig:abunerr\] we plot ($i$) the average abundance error ($<\sigma\Delta^{\rm X}>$), ($ii$) the abundance dispersion ($\sigma_{\rm A}$) and ($iii$) the abundance dispersion about the linear fit to $\Delta^{\rm X}$ versus [$T_{\rm eff}$]{} ($\sigma_{\rm B}$), for all elements in the RGB tip sample (upper) and the RGB bump sample (lower). The main point to take from this figure is that we have achieved very high precision chemical abundance measurements from our strictly differential analysis for this sample of giant stars in the globular cluster NGC 6752. For the RGB tip sample, the lowest average abundance error is for Fe ($<\sigma\Delta^{\rm Fe}>$ = 0.011 dex) and the highest value is for CrII ($<\sigma\Delta^{\rm CrII}>$ = 0.052 dex). For the RGB bump sample the lowest average abundance errors are for Fe and La ($<\sigma\Delta^{\rm Fe,La}>$ = 0.013 dex) while the highest value is for CrII ($<\sigma\Delta^{\rm CrII}>$ = 0.041 dex). Another aspect to note in Figure \[fig:abunerr\] is that the measured dispersions ($\sigma_{\rm A}$ and $\sigma_{\rm B}$) for many elements appear to be considerably larger than the average abundance error. We interpret such a result as evidence for a genuine abundance dispersion in this cluster, although another possible explanation is that we have systematically underestimated the errors. ![Average abundance errors ($<\sigma\Delta^{X}>$, filled black circles), abundance dispersions ($\sigma_{\rm A}$, red crosses) and abundance dispersions about the linear fits as seen in Figures \[fig:feteff\] to \[fig:niteff\] ($\sigma_{\rm B}$, blue triangles) for all species in the RGB tip sample (upper panel) and RGB bump sample (lower panel). (These results are obtained when using the reference stars RGB tip = NGC 6752-mg9 and RGB bump = NGC 6752-11.)[]{data-label="fig:abunerr"}](fig10.ps){width="0.99\hsize"} $\Delta^{\rm X}$ vs. 
$\Delta^{\rm Na}$ -------------------------------------- In Figures \[fig:fena\] and \[fig:sina\], we plot $\Delta^{\rm Fe}$ vs.$\Delta^{\rm Na}$ and $\Delta^{\rm Si}$ vs. $\Delta^{\rm Na}$, respectively. In both figures, the RGB tip sample and the RGB bump sample are found in the upper and lower panels, respectively. (Here one readily sees that the populations $a$ (green), $b$ (magenta) and $c$ (blue) identified by @milone13 from colour-magnitude diagrams have distinct Na abundances.) We measure the linear least squares fit to the data and in each panel we write ($i$) the slope and uncertainty, ($ii$) the abundance dispersion ($\sigma_{\rm A}$), ($iii$) the abundance dispersion about the linear fit to $\Delta^{\rm X}$ versus $\Delta^{\rm Na}$ ($\sigma_{\rm B}$) and (iv) the average abundance error ($<\sigma\Delta^{\rm X}>$). Consideration of the slope and uncertainty of the linear fits reveals that while the amplitude may be small, there are statistically significant correlations between $\Delta^{\rm Fe}$ and $\Delta^{\rm Na}$ for the RGB bump sample and between $\Delta^{\rm Si}$ and $\Delta^{\rm Na}$ for the RGB tip and RGB bump samples. The results for Si confirm and expand on the correlations found between Si and Al [@yong05] and between Si and N [@yong08nh]. ![$\Delta^{\rm Fe}$ vs. $\Delta^{\rm Na}$ for the RGB tip star sample (upper) and the RGB bump star sample (lower). The red dashed line is the linear least squares fit to the data (slope and error are written). We write the dispersion in the $y$-direction ($\sigma_{\rm A}$), the dispersion about the linear fit ($\sigma_{\rm B}$) and the average abundance error, $<\sigma\Delta^{\rm Fe}>$, for each sample. (These results are obtained when using the reference stars RGB tip = NGC 6752-mg9 and RGB bump = NGC 6752-11.) 
The colours are the same as in Figure \[fig:paramcomptip\].[]{data-label="fig:fena"}](fig11.ps){width="0.9\hsize"} ![Same as Figure \[fig:fena\] but for $\Delta^{\rm Si}$ vs.$\Delta^{\rm Na}$.[]{data-label="fig:sina"}](fig12.ps){width="0.9\hsize"} In Figure \[fig:allna\], we plot the slope of the linear fit to $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$ for all elements in the RGB tip sample (upper) and the RGB bump sample (lower). With the exception of La and Eu (for the RGB tip sample), all the gradients are positive. For La and Eu in the RGB tip sample, the negative gradients are not statistically significant, $<$1$\sigma$. Assuming an equal likelihood of obtaining a positive or negative gradient, the probability of obtaining 22 positive values in a sample of 24 is $\sim$10$^{-5}$. Based on the slope and uncertainty, we obtain the significance of the correlations; eight of the 24 elements exhibit correlations that are significant at the 5$\sigma$ level or higher[^15]. Therefore, [*the first main conclusion we draw is that there are an unusually large number of elements that show positive correlations for $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$, and that an unusually large fraction of these correlations are of high statistical significance*]{}. We interpret this result as further evidence for a genuine abundance dispersion in this cluster. On this occasion, it is highly unlikely that such correlations could arise from underestimating the errors. NLTE corrections for Na, using improved atomic data, have been published by @lind11na. The corrections are negative and strongly dependent on line strength; for a given [$T_{\rm eff}$]{}:[$\log g$]{}:\[Fe/H\], stronger lines have larger amplitude (negative) NLTE corrections. Had we included these corrections, the $\Delta^{\rm X}$ vs. $\Delta^{\rm Na~(NLTE)}$ gradients would be even steeper. ![Slope of the fit to $\Delta^{\rm X}$ vs. 
$\Delta^{\rm Na}$, for X = Si to Eu, for the RGB tip sample (upper) and the RGB bump sample (lower). The colours represent the significance of the slope, i.e., the magnitude of the gradient divided by the uncertainty. (These results are obtained when using the reference stars RGB tip = NGC 6752-mg9 and RGB bump = NGC 6752-11.)[]{data-label="fig:allna"}](fig13.ps){width="0.9\hsize"} We also note that the gradients are, on average, of larger amplitude and of higher statistical significance for the RGB bump sample compared to the RGB tip sample. Other than spanning a different range in stellar parameters, one notable difference between the two samples is that the RGB bump sample exhibits a larger range in $\Delta^{\rm Na}$ than does the RGB tip sample. In particular, the numbers of RGB tip and RGB bump stars with $|\Delta^{\rm Na}|$ $\ge$ 0.20 dex are five and 14, respectively. (Equivalently, the numbers of stars in the @milone13 $b$ and $c$ populations are considerably larger in the RGB bump sample compared to the RGB tip sample.) Thus, we speculate that the RGB bump stars are the more reliable sample (based on the sample size and abundance distribution) from which to infer the presence of any trend between $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$. We conducted the following test in order to check whether differences in gradients for $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$ between the RGB tip and RGB bump samples can be attributed to differences in the Na distributions between the two samples. We start by assuming that the RGB bump sample provides the “correct” slope. For a given element, we consider the gradient and uncertainty for $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$ and draw a random number from a normal distribution (centered at zero) whose width corresponds to the uncertainty. We add that random number to the gradient to obtain a “new RGB bump gradient” for $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$. 
For each RGB tip star, we infer the corresponding $\Delta^{\rm X}$ using this “new RGB bump gradient”. We then draw another random number from a normal distribution (centered at zero) of width corresponding to the measurement uncertainty, $\sigma\Delta^{\rm X}$, and add that number to the $\Delta^{\rm X}$ value inferred. For a given element, we measure the gradient and uncertainty for this new set of $\Delta^{\rm X}$ values. We repeated the process for 1,000,000 realisations. Our expectation is that these Monte Carlo simulations predict the gradient for $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$ that would be obtained when combining ($i$) the RGB bump sample gradient with ($ii$) the RGB tip sample Na distribution, and this approach accounts for the uncertainties in the RGB bump sample gradients and measurement errors appropriate for the RGB tip sample. For all elements except Fe (61123) and Eu (543)[^16], the gradients measured from the RGB tip sample are consistent with those from the simulations. We thus conclude that for most, but not all, elements the differences in the $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$ gradients for the two samples can be attributed to the differences in the Na distribution. $\Delta^{\rm X}$ vs. $\Delta^{\rm Y}$ ------------------------------------- We now consider $\Delta^{\rm X}$ vs. $\Delta^{\rm Y}$, for every possible combination of elements. In Figures \[fig:sica\] and \[fig:nind\] we plot $\Delta^{\rm Ca}$ vs. $\Delta^{\rm Si}$ and $\Delta^{\rm Nd}$ vs.$\Delta^{\rm Ni}$, respectively. Once again we plot the linear least squares fit to the data and write the slope and uncertainty. Consideration of those quantities reveals that these pairs of elements show a statistically significant correlation, although the amplitudes of the abundance variations are small. In these figures, we write the abundance dispersions and average abundance errors in the $x$-direction and the $y$-direction. 
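The linear least squares fits quoted throughout this section (the slope, its formal uncertainty, and the dispersion about the fit, $\sigma_{\rm B}$) can be sketched as follows; this is an illustrative implementation on synthetic data, not our analysis code:

```python
import numpy as np

def linear_fit_with_uncertainty(x, y):
    """Ordinary least squares y = a + b*x; returns the slope b, its formal
    uncertainty, and the dispersion about the fit (sigma_B in the text)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b, a = np.polyfit(x, y, 1)                # slope, intercept
    resid = y - (a + b * x)
    sigma_B = resid.std(ddof=2)               # dispersion about the linear fit
    sigma_b = sigma_B / np.sqrt(((x - x.mean())**2).sum())  # slope error
    return b, sigma_b, sigma_B

# Synthetic example: a weak positive trend plus scatter,
# e.g. Delta^Fe vs. Delta^Na.
rng = np.random.default_rng(0)
x = rng.uniform(-0.3, 0.3, 40)
y = 0.05 * x + rng.normal(0.0, 0.01, 40)
slope, slope_err, disp = linear_fit_with_uncertainty(x, y)
print(f"slope = {slope:.3f} +/- {slope_err:.3f}, sigma_B = {disp:.3f}")
```

The significance values quoted in the figures correspond to the magnitude of the slope divided by this formal slope uncertainty.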
As seen in Figure \[fig:abunerr\], the abundance dispersions are almost always equal to, and in some cases substantially larger than, the average measurement uncertainty.

![$\Delta^{\rm Ca}$ vs. $\Delta^{\rm Si}$ for the RGB tip sample (upper) and the RGB bump sample (lower). The red dashed line is the linear least squares fit to the data (slope and error are written). We write the abundance dispersions in the $x-$direction ($\sigma_{\rm X}$) and $y$-direction ($\sigma_{\rm Y}$) and the average abundance errors, $<\sigma\Delta^{\rm Ca,Si}>$. (These results are obtained when using the reference stars RGB tip = NGC 6752-mg9 and RGB bump = NGC 6752-11.) The colours are the same as in Figure \[fig:paramcomptip\].[]{data-label="fig:sica"}](fig14.ps){width="0.9\hsize"}

![Same as Figure \[fig:sica\] but for $\Delta^{\rm Nd}$ vs. $\Delta^{\rm Ni}$.[]{data-label="fig:nind"}](fig15.ps){width="0.9\hsize"}

In Figure \[fig:summarytip1\], we show the linear fit to $\Delta^{\rm X}$ vs. $\Delta^{\rm Y}$ for all combinations of elements for the RGB tip sample. The significance for a pair of elements, which is based on the slope and the uncertainty, is shown in this figure. The gradients are always positive, with the exception of one pair of elements, Si and Eu (consideration of the uncertainty suggests that this gradient is not significant). That is, 65 out of 66 pairs of elements exhibit a positive correlation[^17]. The average gradient is 2.14 $\pm$ 0.29 ($\sigma$ = 2.37).

![Linear fit to $\Delta^{\rm X}$ vs. $\Delta^{\rm Y}$, for all combinations of elements, for the RGB tip sample. The dimensions of the x-axis and y-axis are unity, such that a gradient of 1.0 would be represented by a straight line from the lower left corner to the upper right corner and a gradient of 0.0 would be a horizontal line. The significance of the gradients is indicated by the colour bar.
(These results are obtained when using the reference stars RGB tip = NGC 6752-mg9 and RGB bump = NGC 6752-11.)[]{data-label="fig:summarytip1"}](fig16.ps){width="0.95\hsize"} Figure \[fig:summarybum1\] is the same as Figure \[fig:summarytip1\], but for the RGB bump sample. The gradients are always positive with an average value of 2.52 $\pm$ 0.40 ($\sigma$ = 3.29). Interestingly, the gradients are, in general, of considerably higher statistical significance than in the RGB tip sample. The average significance of the correlations is 2.0$\sigma$ for the RGB tip sample and 4.5$\sigma$ for the RGB bump sample. For the RGB bump sample, 25 pairs of elements (out of a total of 66) exhibit correlations that are significant at the 5$\sigma$ level or higher[^18]. Thus, [*the second main conclusion we draw is that there are an unusually large number of elements that show positive correlations for $\Delta^{\rm X}$ vs. $\Delta^{\rm Y}$ and that many of these pairs of elements exhibit correlations that are of high statistical significance*]{}. Again, we speculate that the higher statistical significance for the correlations between pairs of elements in the RGB bump sample, compared to the RGB tip sample, is due to the sample size and abundance distribution (i.e., the RGB bump sample includes many more stars at the extremes of the $\Delta^{\rm Na}$, and therefore $\Delta^{\rm X}$, distributions). Monte Carlo simulations indicate that the gradients for the RGB bump and RGB tip samples are consistent when taking into account the different distributions in $\Delta^{\rm X}$ between the two samples. We interpret the significant correlations between $\Delta^{\rm X}$ and $\Delta^{\rm Y}$ as further indication of a genuine abundance dispersion in this globular cluster.
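The Monte Carlo test referred to here (described in Section 3.2 for the $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$ gradients) can be sketched as follows; this is a simplified illustration with made-up inputs and far fewer realisations than the 1,000,000 used in our analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulated_tip_gradients(bump_slope, bump_slope_err, tip_na, sigma_x,
                            n_real=10_000):
    """Distribution of RGB tip gradients expected if the RGB bump sample gives
    the 'correct' slope, given the tip sample's Na distribution and per-star
    measurement errors sigma_x."""
    slopes = np.empty(n_real)
    for i in range(n_real):
        # Perturb the bump gradient by its uncertainty ("new RGB bump gradient").
        new_slope = bump_slope + rng.normal(0.0, bump_slope_err)
        # Infer Delta^X for each tip star and add measurement noise.
        y = new_slope * tip_na + rng.normal(0.0, sigma_x, tip_na.size)
        # Re-measure the gradient from this synthetic tip sample.
        slopes[i] = np.polyfit(tip_na, y, 1)[0]
    return slopes

# Illustrative (made-up) inputs: bump gradient 0.10 +/- 0.02, 15 tip stars.
tip_na = rng.uniform(-0.2, 0.4, 15)        # tip-sample Delta^Na distribution
slopes = simulated_tip_gradients(0.10, 0.02, tip_na, sigma_x=0.02)
# Compare an observed tip gradient against the simulated distribution.
observed = 0.08
frac_below = (slopes < observed).mean()
print(f"simulated slope = {slopes.mean():.3f} +/- {slopes.std():.3f}; "
      f"fraction below observed = {frac_below:.2f}")
```

An observed tip gradient is judged consistent with the bump sample when it falls within the bulk of this simulated distribution.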
![Same as Figure \[fig:summarytip1\] but for the RGB bump sample.[]{data-label="fig:summarybum1"}](fig17.ps){width="0.95\hsize"} Removing Trends With [$T_{\rm eff}$]{} -------------------------------------- Inspection of Figures \[fig:feteff\], \[fig:criiteff\] and \[fig:niteff\] suggests that there are statistically significant trends between $\Delta^{\rm X}$ and [$T_{\rm eff}$]{}. We tentatively attribute those abundance trends with [$T_{\rm eff}$]{} to differential non-LTE effects and/or 3D effects (e.g., @asplund05). In this subsection, we explore whether or not our results change if we remove the abundance trends with [$T_{\rm eff}$]{}. That is, do the abundance trends between ($i$) $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$ and ($ii$) $\Delta^{\rm X}$ vs.$\Delta^{\rm Y}$ persist, or disappear, if we remove the abundance trends with [$T_{\rm eff}$]{}? We remove those abundance trends with [$T_{\rm eff}$]{} in the following manner. We define a new quantity, $\Delta^{\rm X}_{\rm T}$, as the difference between $\Delta^{\rm X}$ and the value of the linear fit to the data at the [$T_{\rm eff}$]{} of the program star. In Figure \[fig:allnat\], we plot the slope of $\Delta^{\rm X}_{\rm T}$ vs. $\Delta^{\rm Na}_{\rm T}$. This figure is similar to Figure \[fig:allna\], but we have removed the abundance trends with [$T_{\rm eff}$]{}. With the exception of Y in the RGB bump sample, our results are unchanged at the $<$1.0$\sigma$ level. For Y, the slope and error changed from 0.174 $\pm$ 0.011 to 0.131 $\pm$ 0.010, a difference of 3$\sigma$; in both cases the correlation is of high statistical significance. ![Same as Figure \[fig:allna\] but for $\Delta^{\rm X}_{\rm T}$ vs.$\Delta^{\rm Na}_{\rm T}$, i.e., the abundance trends with [$T_{\rm eff}$]{} have been removed as described in Section 3.4. 
(These results are obtained when using the reference stars RGB tip = NGC 6752-mg9 and RGB bump = NGC 6752-11.)[]{data-label="fig:allnat"}](fig18.ps){width="0.9\hsize"} Next, we examine the trends between $\Delta^{\rm X}_{\rm T}$ vs. $\Delta^{\rm Y}_{\rm T}$ (see Figures \[fig:summarytip2\] and \[fig:summarybum2\]). These figures are the same as Figures \[fig:summarytip1\] and \[fig:summarybum1\] but we have removed the abundance trends with [$T_{\rm eff}$]{}. On comparing the RGB tip samples (Figures \[fig:summarytip1\] vs.\[fig:summarytip2\]) and the RGB bump samples (Figures \[fig:summarybum1\] vs. \[fig:summarybum2\]), the results are unchanged, at the $<$2$\sigma$ level, for all pairs of elements. Therefore, we find positive correlations of high statistical significance between pairs of elements regardless of whether or not we remove any abundance trends with [$T_{\rm eff}$]{}. Such a result increases our confidence that the abundance trends we identify are real and not an artefact of systematic errors in the analysis. ![Same as Figure \[fig:summarytip1\] but for $\Delta^{\rm X}_{\rm T}$ vs. $\Delta^{\rm Y}_{\rm T}$ in the RGB tip sample, i.e., the abundance trends with [$T_{\rm eff}$]{} have been removed as described in Section 3.4. (These results are obtained when using the reference stars RGB tip = NGC 6752-mg9 and RGB bump = NGC 6752-11.)[]{data-label="fig:summarytip2"}](fig19.ps){width="0.95\hsize"} ![Same as Figure \[fig:summarytip2\] but for the RGB bump sample.[]{data-label="fig:summarybum2"}](fig20.ps){width="0.95\hsize"} Confirmation of Results When Using a Different Reference Star ------------------------------------------------------------- An important consideration is whether or not the results change for a different choice of reference stars. In this subsection, we repeat the entire analysis but using a new pair of reference stars. For the RGB tip sample and RGB bump sample, we use NGC 6752-mg6 and NGC 6752-1 as the reference stars, respectively. 
These stars were arbitrarily chosen to have higher S/N (and therefore lower [$T_{\rm eff}$]{}) than the previous pair of reference stars. Starting with the reference star parameters as described in Section 2.3, we obtained, for each star in each sample, strictly differential stellar parameters using the line-by-line analysis described in Section 2.4. The new strictly differential stellar parameters are presented in Table \[tab:param2\]. As before, the strictly differential stellar parameters are very close to the “reference star” stellar parameters.

----------------------------------------------------------------------------------------------------
Name  [$T_{\rm eff}$]{} (K)  $\sigma$ (K)  [$\log g$]{} (cm s$^{-2}$)  $\sigma$ (cm s$^{-2}$)  [$\xi_t$]{} ([km s$^{-1}$]{})  $\sigma$ ([km s$^{-1}$]{})  \[Fe/H\]
(1)  (2)  (3)  (4)  (5)  (6)  (7)  (8)
----------------------------------------------------------------------------------------------------
NGC6752-mg0  3922  20  0.19  0.01  2.24  0.04  $-$1.68
NGC6752-mg2  3940  16  0.25  0.01  2.11  0.04  $-$1.66
NGC6752-mg3  4070  14  0.55  0.01  1.92  0.03  $-$1.64
NGC6752-mg4  4087  14  0.57  0.01  1.90  0.03  $-$1.64
NGC6752-mg5  4105  16  0.59  0.01  1.93  0.04  $-$1.64
NGC6752-mg8  4288  17  0.98  0.01  1.71  0.04  $-$1.64
NGC6752-mg9  4292  20  0.96  0.01  1.73  0.05  $-$1.65
NGC6752-mg10  4295  14  0.96  0.01  1.69  0.04  $-$1.64
NGC6752-mg12  4315  17  1.00  0.01  1.73  0.05  $-$1.65
NGC6752-mg15  4347  17  1.04  0.01  1.77  0.05  $-$1.65
NGC6752-mg18  4387  13  1.10  0.01  1.70  0.04  $-$1.65
NGC6752-mg21  4443  16  1.19  0.01  1.69  0.06  $-$1.63
NGC6752-mg22  4451  18  1.23  0.01  1.71  0.07  $-$1.64
NGC6752-mg24  4511  16  1.33  0.01  1.70  0.06  $-$1.67
NGC6752-mg25  4479  15  1.28  0.01  1.72  0.06  $-$1.67
NGC6752-0  4737  11  1.86  0.01  1.44  0.02  $-$1.62
NGC6752-2  4770  10  1.95  0.01  1.36  0.02  $-$1.63
NGC6752-3  4781  11  1.98  0.01  1.36  0.02  $-$1.70
NGC6752-4  4827  12  2.07  0.01  1.39  0.02  $-$1.63
NGC6752-6  4830  13  2.10  0.01  1.34  0.02  $-$1.61
NGC6752-8  4966  16  2.29  0.01  1.33  0.03  $-$1.64
NGC6752-9  4829  18  2.08  0.01  1.42  0.03  $-$1.69
NGC6752-10  4846  12  2.10  0.01  1.38  0.02  $-$1.63
NGC6752-11  4866  6  2.13  0.01  1.37  0.02  $-$1.64
NGC6752-12  4855  13  2.14  0.01  1.35  0.02  $-$1.64
NGC6752-15  4866  15  2.23  0.01  1.37  0.02  $-$1.61
NGC6752-16  4911  15  2.24  0.01  1.33  0.03  $-$1.62
NGC6752-19  4928  12  2.32  0.01  1.33  0.02  $-$1.67
NGC6752-20  4935  13  2.33  0.01  1.32  0.02  $-$1.62
NGC6752-21  4921  14  2.32  0.01  1.32  0.03  $-$1.65
NGC6752-23  4945  12  2.32  0.01  1.26  0.02  $-$1.63
NGC6752-24  4945  14  2.39  0.01  1.15  0.03  $-$1.70
NGC6752-29  4959  12  2.40  0.01  1.32  0.02  $-$1.67
NGC6752-30  4954  13  2.47  0.01  1.25  0.02  $-$1.62
----------------------------------------------------------------------------------------------------

With these revised stellar parameters, we computed chemical abundances and conducted a full error analysis following the procedures outlined in Sections 2.5 and 2.6, respectively. In Tables \[tab:abun2a\] and \[tab:abun2b\] we present the abundance differences for each element in all program stars when using this new pair of reference stars. (We did not, however, recompute abundances based on spectrum synthesis analysis for La and Eu, and thus those elements will not be considered in this subsection.) Again, we achieve high precision chemical abundance measurements and the measured dispersions ($\sigma_{\rm A}$, $\sigma_{\rm B}$) are, in general, larger than the average abundance error (particularly for the RGB bump sample).
-------------- ------------------- ---------- ------------------- ---------- ------------------- ---------- ------------------- ---------- -------------------- ---------- --------------------- ----------
Star   $\Delta^{\rm Fe}$ $\sigma$   $\Delta^{\rm Na}$ $\sigma$   $\Delta^{\rm Si}$ $\sigma$   $\Delta^{\rm Ca}$ $\sigma$   $\Delta^{\rm TiI}$ $\sigma$   $\Delta^{\rm TiII}$ $\sigma$
(1)   (2) (3)   (4) (5)   (6) (7)   (8) (9)   (10) (11)   (12) (13)
NGC6752-mg0    $-$0.065 0.009   0.387 0.030      0.043 0.032      $-$0.022 0.040   0.023 0.050      $-$0.013 0.049
NGC6752-mg2    $-$0.047 0.011   $-$0.014 0.017   0.044 0.019      $-$0.017 0.028   0.054 0.035      0.050 0.038
NGC6752-mg3    $-$0.027 0.014   $-$0.026 0.026   0.012 0.025      0.001 0.031      0.024 0.044      0.056 0.046
NGC6752-mg4    $-$0.024 0.010   0.043 0.024      0.037 0.020      0.012 0.025      0.029 0.034      0.058 0.037
NGC6752-mg5    $-$0.029 0.008   0.052 0.021      0.023 0.020      0.002 0.030      0.010 0.039      0.054 0.046
NGC6752-mg8    $-$0.036 0.016   0.038 0.002      0.007 0.006      0.008 0.011      0.015 0.010      0.052 0.066
NGC6752-mg9    $-$0.036 0.016   0.002 0.018      0.006 0.019      $-$0.001 0.025   0.005 0.027      0.013 0.023
NGC6752-mg10   $-$0.028 0.011   0.015 0.014      0.003 0.016      $-$0.002 0.022   $-$0.018 0.024   0.043 0.017
NGC6752-mg12   $-$0.038 0.013   $-$0.347 0.024   $-$0.020 0.017   $-$0.020 0.028   0.004 0.043      0.004 0.026
NGC6752-mg15   $-$0.036 0.013   0.048 0.024      $-$0.002 0.018   $-$0.005 0.031   $-$0.002 0.040   0.022 0.033
NGC6752-mg18   $-$0.036 0.009   $-$0.093 0.017   $-$0.000 0.014   $-$0.011 0.020   $-$0.012 0.027   0.061 0.032
NGC6752-mg21   $-$0.018 0.011   0.283 0.019      0.048 0.015      0.034 0.025      $-$0.007 0.031   0.072 0.033
NGC6752-mg22   $-$0.021 0.013   0.326 0.015      0.035 0.016      0.020 0.022      $-$0.005 0.033   0.025 0.033
NGC6752-mg24   $-$0.056 0.015   $-$0.341 0.038   $-$0.043 0.016   $-$0.033 0.023   $-$0.026 0.027   0.066 0.063
NGC6752-mg25   $-$0.059 0.009   $-$0.135 0.026   $-$0.002 0.014   $-$0.022 0.019   $-$0.037 0.024   $-$0.001 0.038
NGC6752-0      0.006 0.009      0.699 0.051      0.105 0.024      0.021 0.014      0.020 0.021      0.019 0.016
NGC6752-2      $-$0.004 0.012   0.750 0.016      0.065 0.020      0.010 0.016      $-$0.005 0.020   $-$0.000 0.012
NGC6752-3      $-$0.071 0.012   $-$0.075 0.010   $-$0.032 0.018   $-$0.069 0.015   $-$0.054 0.020   $-$0.070 0.011
NGC6752-4      $-$0.002 0.015   0.726 0.044      0.041 0.015      0.043 0.018      0.008 0.027      0.002 0.012
NGC6752-6      0.019 0.016      0.636 0.014      0.048 0.014      0.039 0.019      0.030 0.027      0.015 0.013
NGC6752-8      $-$0.012 0.015   0.045 0.014      $-$0.032 0.014   $-$0.002 0.015   0.026 0.024      $-$0.019 0.017
NGC6752-9      $-$0.065 0.022   $-$0.024 0.038   $-$0.034 0.017   $-$0.060 0.024   $-$0.061 0.037   $-$0.074 0.012
NGC6752-10     $-$0.006 0.014   0.730 0.014      0.032 0.014      0.015 0.017      0.007 0.025      $-$0.002 0.013
NGC6752-11     $-$0.019 0.006   0.373 0.021      0.016 0.013      $-$0.024 0.011   0.001 0.015      $-$0.030 0.012
NGC6752-12     $-$0.018 0.014   0.306 0.016      0.003 0.017      $-$0.020 0.016   $-$0.022 0.026   $-$0.001 0.016
NGC6752-15     0.013 0.015      0.018 0.056      0.013 0.019      $-$0.003 0.019   $-$0.005 0.027   0.010 0.013
NGC6752-16     0.001 0.014      0.461 0.035      0.009 0.024      $-$0.017 0.017   0.001 0.025      $-$0.023 0.016
NGC6752-19     $-$0.050 0.013   0.179 0.014      $-$0.034 0.022   $-$0.056 0.014   $-$0.048 0.020   $-$0.055 0.013
NGC6752-20     0.009 0.012      0.822 0.036      0.045 0.018      0.024 0.016      0.019 0.023      0.006 0.013
NGC6752-21     $-$0.027 0.013   0.308 0.023      $-$0.005 0.019   $-$0.015 0.017   $-$0.009 0.022   $-$0.021 0.012
NGC6752-23     $-$0.006 0.012   0.641 0.009      0.045 0.018      0.005 0.015      $-$0.006 0.023   $-$0.012 0.017
NGC6752-24     $-$0.079 0.012   $-$0.041 0.013   $-$0.094 0.014   $-$0.076 0.018   $-$0.082 0.022   $-$0.110 0.013
NGC6752-29     $-$0.048 0.012   $-$0.052 0.013   $-$0.088 0.018   $-$0.054 0.014   $-$0.066 0.029   $-$0.077 0.011
NGC6752-30     0.004 0.012      0.207 0.013      0.005 0.013      0.030 0.016      0.001 0.024      0.020 0.015
-------------- ------------------- ---------- ------------------- ---------- ------------------- ---------- ------------------- ---------- -------------------- ---------- --------------------- ----------

-------------- -------------------- ---------- --------------------- ---------- ------------------- ---------- ------------------ ---------- ------------------- ----------
Star   $\Delta^{\rm CrI}$ $\sigma$   $\Delta^{\rm CrII}$ $\sigma$   $\Delta^{\rm Ni}$ $\sigma$   $\Delta^{\rm Y}$ $\sigma$   $\Delta^{\rm Nd}$ $\sigma$
(1)   (2) (3)   (4) (5)   (6) (7)   (8) (9)   (10) (11)
NGC6752-mg0    0.011 0.067      0.028 0.093      $-$0.023 0.030   0.034 0.038      0.003 0.033
NGC6752-mg2    0.053 0.090      0.076 0.077      0.007 0.027      0.101 0.049      0.067 0.030
NGC6752-mg3    0.044 0.055      0.053 0.059      0.013 0.018      0.092 0.025      0.062 0.021
NGC6752-mg4    0.053 0.052      0.068 0.049      0.021 0.018      0.090 0.023      0.075 0.021
NGC6752-mg5    0.034 0.045      0.038 0.049      0.008 0.018      0.025 0.039      0.049 0.020
NGC6752-mg8    $-$0.025 0.044   $-$0.079 0.024   0.018 0.012      0.039 0.031      0.052 0.014
NGC6752-mg9    0.002 0.036      0.014 0.086      0.008 0.017      0.019 0.016      0.020 0.019
NGC6752-mg10   $-$0.006 0.028   $-$0.040 0.076   0.008 0.015      0.100 0.024      0.042 0.016
NGC6752-mg12   $-$0.008 0.032   $-$0.010 0.034   0.006 0.017      0.005 0.017      0.019 0.019
NGC6752-mg15   $-$0.024 0.032   $-$0.006 0.036   0.002 0.017      0.014 0.016      0.032 0.018
NGC6752-mg18   $-$0.022 0.025   $-$0.021 0.030   0.002 0.012      0.032 0.018      0.009 0.012
NGC6752-mg21   $-$0.000 0.031   $-$0.009 0.038   0.005 0.014      0.085 0.021      0.029 0.014
NGC6752-mg22   $-$0.014 0.048   0.019 0.053      0.016 0.016      0.062 0.029      0.030 0.016
NGC6752-mg24   $-$0.028 0.022   $-$0.047 0.019   $-$0.015 0.016   $-$0.045 0.015   $-$0.012 0.016
NGC6752-mg25   $-$0.019 0.027   $-$0.033 0.029   $-$0.033 0.013   $-$0.016 0.026   $-$0.026 0.014
NGC6752-0      0.021 0.021      0.034 0.021      0.012 0.016      0.020 0.012      0.030 0.021
NGC6752-2      $-$0.028 0.025   $-$0.038 0.063   $-$0.012 0.016   $-$0.041 0.044   0.002 0.012
NGC6752-3      $-$0.088 0.023   $-$0.127 0.085   $-$0.064 0.021   $-$0.171 0.030   $-$0.104 0.015
NGC6752-4      $-$0.018 0.029   $-$0.011 0.018   $-$0.001 0.020   $-$0.008 0.045   $-$0.004 0.017
NGC6752-6      0.008 0.039      $-$0.001 0.014   0.002 0.020      $-$0.016 0.031   0.056 0.028
NGC6752-8      $-$0.019 0.029   $-$0.016 0.024   $-$0.008 0.021   $-$0.051 0.011   0.043 0.027
NGC6752-9      $-$0.071 0.040   $-$0.042 0.023   $-$0.056 0.030   $-$0.109 0.017   $-$0.051 0.011
NGC6752-10     $-$0.007 0.029   $-$0.057 0.078   $-$0.019 0.018   $-$0.008 0.022   $-$0.001 0.015
NGC6752-11     $-$0.034 0.015   $-$0.072 0.061   $-$0.002 0.015   $-$0.019 0.028   0.015 0.025
NGC6752-12     $-$0.027 0.030   0.002 0.013      $-$0.020 0.020   $-$0.118 0.036   $-$0.008 0.010
NGC6752-15     $-$0.012 0.033   $-$0.000 0.049   0.002 0.020      $-$0.066 0.009   0.005 0.017
NGC6752-16     $-$0.020 0.026   $-$0.060 0.066   0.004 0.024      $-$0.068 0.031   0.060 0.034
NGC6752-19     $-$0.073 0.026   $-$0.056 0.018   $-$0.055 0.015   $-$0.127 0.028   $-$0.034 0.012
NGC6752-20     $-$0.013 0.028   $-$0.034 0.073   0.003 0.020      $-$0.008 0.019   0.026 0.012
NGC6752-21     $-$0.049 0.024   $-$0.021 0.057   $-$0.035 0.019   $-$0.033 0.014   $-$0.007 0.015
NGC6752-23     $-$0.031 0.033   0.025 0.031      $-$0.033 0.018   $-$0.010 0.029   0.005 0.026
NGC6752-24     $-$0.093 0.023   $-$0.103 0.077   $-$0.095 0.023   $-$0.156 0.015   $-$0.061 0.024
NGC6752-29     $-$0.075 0.024   $-$0.023 0.021   $-$0.061 0.019   $-$0.104 0.007   $-$0.041 0.025
NGC6752-30     $-$0.006 0.026   $-$0.026 0.026   $-$0.012 0.017   $-$0.022 0.043   0.038 0.016
-------------- -------------------- ---------- --------------------- ---------- ------------------- ---------- ------------------ ---------- ------------------- ----------

We examine the abundance trends $\Delta^{\rm X}$ versus $\Delta^{\rm Na}$ and $\Delta^{\rm X}_{\rm T}$ versus $\Delta^{\rm Na}_{\rm T}$ in Figures \[fig:allna2\] and \[fig:allnat2\], respectively. As in Sections 3.2 and 3.4, we find that the abundance trends with Na are always positive and that a large number of elements exhibit statistically significant correlations, albeit of small amplitude. These results remain even after removing the abundance trends as a function of [$T_{\rm eff}$]{}.

![Slope of the fit to $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$, for X = Si to Eu, for the RGB tip sample (upper) and the RGB bump sample (lower). The colours represent the significance of the slope.
(This shows the same results as Figure \[fig:allna\] but for a different pair of reference stars, RGB tip = NGC 6752-mg6 and RGB bump = NGC 6752-1.)[]{data-label="fig:allna2"}](fig21.ps){width="0.9\hsize"}

![Same as Figure \[fig:allna2\] but for $\Delta^{\rm X}_{\rm T}$ vs. $\Delta^{\rm Na}_{\rm T}$, i.e., the abundance trends with [$T_{\rm eff}$]{} have been removed as described in Section 3.4. (These results are obtained when using the reference stars RGB tip = NGC 6752-mg6 and RGB bump = NGC 6752-1.)[]{data-label="fig:allnat2"}](fig22.ps){width="0.9\hsize"}

Finally, we consider the abundance trends $\Delta^{\rm X}$ versus $\Delta^{\rm Y}$. Our results are essentially identical to those in Sections 3.3 and 3.4, namely, that for many pairs of elements, there are positive correlations of high statistical significance for $\Delta^{\rm X}$ versus $\Delta^{\rm Y}$. Again, these results remain even after removing the abundance trends with [$T_{\rm eff}$]{}. The essential point to take from this subsection is that our results are not sensitive to the choice of reference star, at least for the two cases we investigated.

Consequences for Globular Cluster Chemical Evolution
----------------------------------------------------

We begin with a summary of our analysis and results.

1. From a strictly differential line-by-line analysis of a sample of RGB tip stars and RGB bump stars in the globular cluster NGC 6752, we have obtained revised stellar parameters which we refer to as “strictly differential” stellar parameters.

2. Using those “strictly differential” stellar parameters, we have computed differential chemical abundances, $\Delta^{\rm X}$ (for X = Fe, Na, Si, Ca, Ti, Cr, Ni, Y, La, Nd and Eu), and conducted a detailed error analysis.

3. We have achieved very high precision measurements; for a given element, our average relative abundance errors range from 0.01 dex to 0.05 dex.

4. When plotting our abundance ratios against Na, e.g., $\Delta^{\rm X}$ versus $\Delta^{\rm Na}$, an unusually large number of elements show positive correlations, often of high statistical significance, although the amplitudes of the abundance variations in $\Delta^{\rm X}$ are small.

5. When plotting the abundance ratios for any pair of elements, $\Delta^{\rm X}$ versus $\Delta^{\rm Y}$, the majority exhibit positive correlations, often of high statistical significance.

6. Points (iv) and (v) persist even after ($a$) removing abundance trends with [$T_{\rm eff}$]{} and/or ($b$) conducting a re-analysis using a different pair of reference stars, thereby increasing our confidence in these results.

We now explore the consequences for globular cluster chemical evolution. At face value, our results would suggest that the globular cluster NGC 6752 is not chemically homogeneous at the $\sim$0.03 dex level for the elements studied here. Chemical inhomogeneity at this level can only be revealed when the measurement uncertainties are $<$0.03 dex, as in this study. By extension, we speculate that other globular clusters with no obvious dispersion in Fe-peak elements but large Na variations (e.g., 47 Tuc, NGC 6397) may also display similar behavior to NGC 6752 if subjected to a strictly differential chemical abundance analysis of comparably high-quality spectra to that of this study.

The abundance variations and positive correlations between $\Delta^{\rm X}$ and $\Delta^{\rm Na}$ and between $\Delta^{\rm X}$ and $\Delta^{\rm Y}$ could be due to a number of possibilities. Here we discuss four potential scenarios, which are not mutually exclusive: (1) systematic errors in the stellar parameters; (2) star-to-star CNO abundance variations; (3) star-to-star helium abundance variations; (4) inhomogeneous chemical evolution in the early stages of globular cluster formation.
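The strictly differential measurement summarized above reduces, per element, to averaging line-by-line abundance differences between a program star and the reference star over the lines measured in both. A minimal sketch; the function and the line lists are hypothetical illustrations, and a real analysis would also propagate stellar-parameter errors as in Section 2.6:

```python
from statistics import mean, stdev

def differential_abundance(star_lines, ref_lines):
    """Strictly differential abundance Delta_X: the mean over lines measured
    in BOTH stars of A_line(star) - A_line(ref), plus its standard error.
    Each argument maps wavelength -> line abundance A(X)."""
    common = star_lines.keys() & ref_lines.keys()
    deltas = [star_lines[w] - ref_lines[w] for w in sorted(common)]
    return mean(deltas), stdev(deltas) / len(deltas) ** 0.5

# Hypothetical line-by-line abundances keyed by wavelength (not real data):
star = {4855.0: 6.02, 4911.5: 6.05, 5044.2: 6.00, 5198.7: 6.04}
ref  = {4855.0: 6.00, 4911.5: 6.01, 5044.2: 5.99, 5198.7: 6.02}
delta, err = differential_abundance(star, ref)
```

Because each line difference is taken against the same transition in the reference star, $\log gf$ values and much of the modelling systematics cancel, which is what drives the errors down to the 0.01 dex level.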
### Systematic errors in the stellar parameters

In the first scenario, we assume that the abundance variations are due to systematic errors in the stellar parameters. As noted in Section 3.1, the abundance dispersions often exceed the average abundance error. Attributing the abundance variations to systematic errors in the stellar parameters would require a substantial underestimate of the stellar parameter uncertainties. Such an explanation may be plausible. However, the abundance variations are highly correlated and are seen for all elements, which cover a variety of ionization potentials and ionization states. There is no single change in [$T_{\rm eff}$]{}, [$\log g$]{} or [$\xi_t$]{} that would remove the abundance correlations for all elements in any given star. Thus, we regard this hypothesis as unlikely.

### Star-to-star CNO abundance variations

In the second scenario, we assume that the abundance variations and correlations are due to neglect of the appropriate C, N and O abundances in the model atmospheres. The structure of the model atmosphere depends upon the adopted C, N and O abundances [@gustafsson75]. @drake93 studied the effect of CNO abundances on the atmospheric structure in giant stars with metallicities similar to that of NGC 6752. For the outer layers of the atmosphere, the “CN-weak” models (i.e., appropriate for Na-poor objects) were cooler than the “CN-strong” models (i.e., appropriate for Na-rich objects) and the maximum difference was $\sim$ 150K. The differences in abundances derived using the “CN-strong” models minus those from the “CN-weak” models for the [$T_{\rm eff}$]{} = 4400K, [$\log g$]{} = 1.3 and \[Fe/H\] = $-$1.5 case are almost all positive and range from $\sim$ 0.00 dex to $\sim$ 0.10 dex. While the magnitudes of the predicted abundance differences are similar to those of this study, these differences have the incorrect sign.
That is, if we had analysed the most Na-rich stars using the “CN-strong” models, according to the @drake93 predictions the inferred abundances would be higher and the slope of the correlations between $\Delta^{\rm X}$ and $\Delta^{\rm Na}$ would be even steeper. We note, however, that the vast majority of our lines are weak ($\log (W_\lambda/\lambda)$ $\le$ $-$5.0) such that the predicted abundance differences are essentially zero, and thus application of “CN-strong” models with appropriate CNO abundances to the Na-rich stars would not change the trends we find. In the @drake93 models, the C+N+O abundance sum was constant to within 0.12 dex between the “CN-weak” and “CN-strong” models. This assumption of almost constant C+N+O abundance is appropriate for NGC 6752 on two grounds. First, the presence of a substantial C+N+O abundance variation would manifest as a spread in the luminosity of subgiant branch stars [@rood85] and such a feature has not been detected in this cluster [@milone13]. Second, within their measurement uncertainties, @carretta05 found no evidence for a dispersion in the C+N+O abundance sum in NGC 6752, and preliminary work we are conducting also indicates a nearly constant C+N+O abundance sum.

### Star-to-star helium abundance variations

In the third scenario, we assume that the abundance variations and correlations are due to star-to-star He abundance variations. A detailed analysis of the highest quality colour-magnitude diagrams available shows that NGC 6752 harbours an internal He spread of up to $\Delta Y$ $\sim$ 0.03 [@milone13]. The most Na-rich objects are assumed to be more He-rich relative to the Na-poor objects. Spectroscopic analysis by @villanova09 showed that He measurements are possible in the cooler blue horizontal branch stars of NGC 6752; they found a uniform He content, a result not unexpected given the O-Na abundances of their targets. He abundance variations would affect our analysis in two distinct ways.
First, the structure of the model atmosphere depends upon the adopted He abundance [@stromgren82]. Second, for a fixed mass fraction of metals ($Z$), a change in the helium mass fraction ($Y$) will directly affect the hydrogen mass fraction ($X$) such that the metal-to-hydrogen ratio, $Z$/$X$ will change with helium mass fraction since $X+Y+Z=1$. We now consider both cases. Regarding the effect of He on the structure of a model atmosphere, @stromgren82 demonstrated that for F type dwarfs, changes in the He/H ratio “affect the mean molecular weight of the gas and have an impact on the gas pressure” and that “a helium-enriched atmosphere is similar to a helium-normal atmosphere with a higher surface gravity, in terms of temperature structure and electron pressure structure” [@lind11]. Equation 12 in @stromgren82 quantifies the change in [$\log g$]{} due to a change in He/H ratio; @lind11 showed that metal-poor giants behave similarly. From this equation, a change in He abundance from $Y$ = 0.25 to $Y$ = 0.28 would result in a shift in [$\log g$]{} of 0.012. Inclusion of He abundance variations in the model atmospheres would naively be expected to result in different stellar parameters than those derived in this work, for both a regular analysis (as used to define the reference star stellar parameters) and a strictly differential analysis. Using a revised set of stellar parameters would, of course, result in an updated set of chemical abundances (and line-by-line chemical abundance differences). We might therefore expect to find a correlation between the Na abundance (which is assumed to trace the He abundance) and the stellar parameters (or difference between the strictly differential stellar parameters and the reference star stellar parameters). In Figure \[fig:nadeltalogg\], we plot $\Delta^{\rm Na}$ against $\Delta$[$\log g$]{} (“reference star” values minus “strictly differential analysis” values). 
There are no significant correlations for either the RGB tip sample or the RGB bump sample. In light of the statistically significant correlation between Si and Na, we also include in Figure \[fig:nadeltalogg\] panels showing $\Delta^{\rm Si}$ against $\Delta$[$\log g$]{}. Again, there are no significant correlations. (Similar plots using $\Delta$[$T_{\rm eff}$]{} rather than $\Delta$[$\log g$]{} also reveal no significant correlations.) Given the magnitude of the change in [$\log g$]{} resulting from the difference in helium abundance, it is not surprising that we do not detect any significant trend between $\Delta^{\rm Na}$ and $\Delta$[$\log g$]{}. Indeed, @lind11 find that changes in helium of $\Delta Y$ = 0.03, as is the case for NGC 6752, would be expected to result in negligible changes in [$T_{\rm eff}$]{} and [$\log g$]{}. ![$\Delta^{\rm Na}$ (upper) and $\Delta^{\rm Si}$ (lower) vs.$\Delta$[$\log g$]{} (old = “reference star” values, new = “strictly differential” values) for the RGB tip sample (left) and the RGB bump sample (right). The red dashed line is the linear fit to the data. (These results are obtained when using the reference stars RGB tip = NGC 6752-mg9 and RGB bump = NGC 6752-11.) As in Figure \[fig:paramcomptip\], the green, magenta and blue colours represent populations $a$, $b$, and $c$, respectively, from @milone13 (see Section 2.1 for details).[]{data-label="fig:nadeltalogg"}](fig23.ps){width="1.0\hsize"} On the other hand, for a fixed mass fraction of metals ($Z$), a change in the helium mass fraction ($Y$) will change the hydrogen mass fraction ($X$) and the metal-to-hydrogen ratio, $Z$/$X$, since $X+Y+Z=1$, as we have already noted. If stars in a globular cluster have a constant mass fraction of metals, a He-rich star will appear to be more metal-rich than a He-normal star. 
The positive correlations we find between $\Delta^{\rm X}$ and $\Delta^{\rm Na}$ are consistent with a He abundance variation since a Na-rich star is expected to be He-rich relative to a Na-poor star. @bragaglia10 examined a large sample of RGB stars in globular clusters and argued that in addition to differences in metallicity, He-rich stars will have subtly different temperatures and RGB bump luminosities. They found evidence for all three effects in their sample. For their primordial (P) and extreme (E) populations[^19], they found \[Fe/H\]$_{\rm E}$ $-$ \[Fe/H\]$_{\rm P}$ = 0.027 $\pm$ 0.010. The @milone13 populations $a$ and $c$ may be regarded as being equivalent to the @carretta09pie P and E populations, respectively, and for the RGB bump sample we find $<\Delta^{\rm Fe}_{c}>$ $-$ $<\Delta^{\rm Fe}_{a}>$ = 0.039 $\pm$ 0.015, a value comparable to that of @bragaglia10. If we consider all elements, the mean value $<\Delta^{\rm X}_{c}>$ $-$ $<\Delta^{\rm X}_{a}>$ is 0.052 $\pm$ 0.005 ($\sigma$ = 0.019); the smallest difference is for [Cr[ii]{}]{} (0.031 $\pm$ 0.023) and the largest difference is for Si (0.092 $\pm$ 0.018). For a fixed value of $Z$, a change in helium abundance from $Y$=0.25 to $Y$=0.28 would produce a change in \[X/H\] of +0.018 dex. By combining our measurement errors with the expected 0.018 dex abundance variation due to He, we can predict the abundance variations in \[X/H\]. If we compare these values for each element to the observed variations, we find that the abundance dispersions are, on average, 60% $\pm$ 20% larger than those expected from a change in helium abundance of $\Delta Y$ = 0.03 combined with the measurement uncertainties. Therefore, we tentatively conclude that while the observed abundance variations are qualitatively consistent with a He variation, the magnitudes of the observed variations are unlikely to be explained solely by a He change of $\Delta Y$ = 0.03[^20].
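The two helium numbers used in this subsection (the $\sim$0.012 shift in [$\log g$]{} and the +0.018 dex shift in \[X/H\] for $\Delta Y$ = 0.03 at fixed $Z$) follow from short calculations. In the sketch below the gravity correction is written in the mean-molecular-weight form $\log[(1+4y)/(1+y)]$ with $y = N_{\rm He}/N_{\rm H}$; this is our reading of the Strömgren-style correction, not a quotation of their Equation 12, and the metal contribution to the mass budget is neglected:

```python
import math

def he_effects(y_frac_1, y_frac_2, z_frac=0.0):
    """Two first-order effects of raising the helium mass fraction Y at fixed Z.
    Returns (delta_logg, delta_xh):
      delta_logg -- gravity offset of the equivalent He-normal atmosphere,
                    assuming the correction log[(1+4y)/(1+y)], y = N_He/N_H;
      delta_xh   -- apparent [X/H] shift from the change in Z/X alone,
                    since X = 1 - Y - Z shrinks as Y grows."""
    x1 = 1.0 - y_frac_1 - z_frac
    x2 = 1.0 - y_frac_2 - z_frac
    y1, y2 = (y_frac_1 / 4.0) / x1, (y_frac_2 / 4.0) / x2  # He/H by number
    f = lambda y: math.log10((1.0 + 4.0 * y) / (1.0 + y))
    delta_logg = f(y2) - f(y1)
    delta_xh = math.log10(x1 / x2)  # fixed Z, smaller X => larger Z/X
    return delta_logg, delta_xh

dlogg, dxh = he_effects(0.25, 0.28)  # ~0.012 in log g, ~+0.018 dex in [X/H]
```

Both outputs reproduce the values quoted in the text for $Y$ = 0.25 $\rightarrow$ 0.28, which is a useful consistency check on the two effects discussed separately above.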
To attribute the observed abundance variations entirely to He would require $\Delta Y$ $\simeq$ 0.065, although inclusion of 3D and/or NLTE effects could produce changes in the derived differential abundances. Given the constraints on $\Delta Y$ from photometry [@milone13], some process in addition to the He variation may be required to explain the abundance variations that we find. Before we consider another possibility, we briefly examine the data using ATLAS12 model atmospheres [@castelli05; @kurucz05; @sbordone05]. We constructed model atmospheres with [$T_{\rm eff}$]{} = 4800K, [$\log g$]{} = 2.0, [$\xi_t$]{} = 2.00 but with two different helium abundances, $Y$ = 0.25 and $Y$ = 0.28. We also ensured that the two models had the same mass fraction of metals, $Z$, and thus they have slightly different metallicities, $\Delta$\[m/H\] $\simeq$ 0.015. Using these two model atmospheres, we computed abundances for all elements in three RGB bump stars (9, 10 and 11). These three stars have very similar stellar parameters to the ATLAS12 models, but they span a substantial range in Na abundance. For a given element in a given star, we measured the abundance difference when using the $Y$ = 0.28 vs. $Y$ = 0.25 models. The differences are very small and essentially identical for all three stars; the average abundance difference ($Y$ = 0.28 minus $Y$ = 0.25) is 0.001 dex $\pm$ 0.001 dex ($\sigma$ = 0.005 dex). That is, we infer identical $Z$/$X$ ratios even though the two models have different compositions. Such a result is expected given that the line strength depends only on the ratio of the line opacity to the continuous opacity (H$^-$ for the program stars), i.e., the $Z$/$X$ ratio.

### Inhomogeneous chemical evolution

In the fourth scenario, we assume that the abundance variations are due to chemical inhomogeneities in the pre- or proto-cluster environment.
We concentrate on the high statistical significance of the correlations between ($i$) Si and Na, ($ii$) Y and Na, and ($iii$) Ca and Na seen in Figures \[fig:allna\], \[fig:allnat\], \[fig:allna2\] and \[fig:allnat2\]. Such correlations potentially provide great new insight into the origin of the Na abundance variations in NGC 6752, and perhaps in all globular clusters[^21]. The correlation between Si and Na could be attributed to leakage from the Mg-Al chain into $^{28}$Si via $^{27}$Al(p,$\gamma$)$^{28}$Si during hydrogen burning at high temperature [@ventura11]. As noted already, similar conclusions were drawn based on the correlations between Si and Al [@yong05] and Si and N [@yong08nh]. To our knowledge, such correlations could arise in both the asymptotic giant branch (AGB) and the fast rotating massive star (FRMS) scenarios. The correlation between Y and Na would suggest that the nucleosynthetic site that produced Na also operated neutron-capture nucleosynthesis. To further explore this issue, we derived chemical abundances for a larger suite of elements expected to participate in neutron-capture reactions (Zn, Y, Zr, Ba, La, Ce, Pr, Nd, Sm, Eu, and Dy). We used only a subset of 10 RGB tip stars with favorable stellar parameters (4250 $\leq T_{\rm eff} \leq$ 4520 K; stars mg8 to mg25) and followed the same procedure described in Sections 2.5 and 2.6, using spectrum synthesis for all lines. (The reference star was NGC 6752-mg9.) For elements with only one measured line (Zn, Zr, Ba, Eu, and Dy), we adopted 0.02 dex as the “fitting error” and used this value as $\sigma_{\rm rand}$ in the error analysis. For comparison, in our analysis of the 5380Å La line in Section 2.5, the average fitting error for the same 10 stars (mg8 to mg25) was 0.016 dex ($\sigma$ = 0.001 dex), and the minimum and maximum values were 0.015 and 0.017 dex, respectively.
For the 6645Å Eu line, the average fitting error and minimum and maximum values were 0.017 dex ($\sigma$ = 0.001), 0.014 dex, and 0.018 dex, respectively. Therefore, we regard our choice of 0.02 dex as a somewhat conservative estimate of the fitting error. The line list and abundance differences are presented in Tables \[tab:ncapline\] and \[tab:abunncapb\]. With the exception of Sm, the average errors are comparable to, or smaller than, the measured abundance dispersions. As before, we take this as evidence for a genuine abundance dispersion, of small amplitude, for these elements.

------------ -------------- ------- ----------- -------------
Wavelength   Species[^22]   L.E.P   $\log gf$   Source[^23]
Å                           eV
(1)          (2)            (3)     (4)         (5)
4810.53      30.0           4.08    $-$0.15     12
4883.68      39.1           1.08    0.19        1
4900.12      39.1           1.03    0.03        1
4982.13      39.1           1.03    $-$1.32     1
5087.42      39.1           1.08    $-$0.16     1
5119.11      39.1           0.99    $-$1.33     1
5205.72      39.1           1.03    $-$0.28     1
5289.82      39.1           1.03    $-$1.68     1
5402.77      39.1           1.84    $-$0.31     1
5473.38      39.1           1.74    $-$0.78     1
5544.61      39.1           1.74    $-$0.83     1
5728.89      39.1           1.84    $-$1.15     1
5112.27      40.1           1.66    $-$0.85     10
6496.90      56.1           0.60    $-$0.41     11
5114.56      57.1           0.24    $-$1.03     5
5122.99      57.1           0.32    $-$0.91     5
5290.82      57.1           0.00    $-$1.65     4
5301.97      57.1           0.40    $-$0.94     5
5303.53      57.1           0.32    $-$1.35     5
5482.27      57.1           0.00    $-$2.23     5
6262.29      57.1           0.40    $-$1.22     5
6390.48      57.1           0.32    $-$1.41     5
5274.23      58.1           1.04    0.13        8
5330.56      58.1           0.87    $-$0.40     8
6043.37      58.1           1.20    $-$0.48     8
5259.73      59.1           0.63    0.11        3
5322.77      59.1           0.48    $-$0.12     9
4797.15      60.1           0.56    $-$0.69     2
4825.48      60.1           0.18    $-$0.42     2
4914.38      60.1           0.38    $-$0.70     2
4959.12      60.1           0.06    $-$0.80     2
4987.16      60.1           0.74    $-$0.79     2
5063.72      60.1           0.98    $-$0.62     2
5092.79      60.1           0.38    $-$0.61     2
5130.59      60.1           1.30    0.45        2
5132.33      60.1           0.56    $-$0.71     2
5234.19      60.1           0.55    $-$0.51     2
5249.58      60.1           0.98    0.20        2
5293.16      60.1           0.82    0.10        2
5306.46      60.1           0.86    $-$0.97     2
5311.45      60.1           0.98    $-$0.42     2
5319.81      60.1           0.55    $-$0.14     2
5356.97      60.1           1.26    $-$0.28     2
5485.70      60.1           1.26    $-$0.12     2
4815.81      62.1           0.18    $-$0.82     7
4844.21      62.1           0.28    $-$0.89     7
4854.37      62.1           0.38    $-$1.25     7
4913.26      62.1           0.66    $-$0.93     7
6645.10      63.1           1.38    0.12        6
5169.69      66.1           0.10    $-$1.95     13
------------ -------------- ------- ----------- -------------

-------------- ------------------- ---------- ------------------ ---------- ------------------- ---------- ------------------- ---------- ------------------- ---------- ------------------- ----------
Star   $\Delta^{\rm Zn}$ $\sigma$   $\Delta^{\rm Y}$ $\sigma$   $\Delta^{\rm Zr}$ $\sigma$   $\Delta^{\rm Ba}$ $\sigma$   $\Delta^{\rm La}$ $\sigma$   $\Delta^{\rm Ce}$ $\sigma$
NGC6752-mg8    $-$0.080 0.025   0.020 0.017      0.040 0.021      $-$0.130 0.035   0.025 0.016      0.007 0.024
NGC6752-mg10   $-$0.080 0.025   0.075 0.016      0.070 0.024      0.010 0.039      0.022 0.016      0.017 0.030
NGC6752-mg12   $-$0.060 0.029   $-$0.042 0.020   0.020 0.023      $-$0.030 0.036   $-$0.008 0.026   0.043 0.033
NGC6752-mg15   $-$0.050 0.033   $-$0.014 0.023   0.000 0.024      $-$0.090 0.045   0.003 0.011      0.037 0.035
NGC6752-mg18   $-$0.050 0.033   0.021 0.023      0.020 0.024      $-$0.070 0.044   0.029 0.017      0.047 0.033
NGC6752-mg21   0.010 0.031      0.060 0.020      0.040 0.024      $-$0.020 0.043   0.045 0.019      0.007 0.038
NGC6752-mg22   $-$0.020 0.026   0.045 0.019      0.080 0.021      $-$0.020 0.062   0.039 0.013      0.077 0.020
NGC6752-mg24   $-$0.050 0.037   $-$0.082 0.031   $-$0.020 0.025   $-$0.150 0.065   $-$0.004 0.012   $-$0.053 0.030
NGC6752-mg25   $-$0.130 0.025   $-$0.016 0.024   0.040 0.020      $-$0.150 0.038   $-$0.025 0.010   $-$0.010 0.025
-------------- ------------------- ---------- ------------------ ---------- ------------------- ---------- ------------------- ---------- ------------------- ---------- ------------------- ----------

In order to place the above values onto an absolute scale, the absolute abundances we obtain for the reference stars are given below. We caution, however, that the absolute scale has not been critically evaluated (see Section 2.5 for more details).

NGC6752-mg9: A(Zn) = 3.02, A(Y) = 0.49, A(Zr) = 1.34, A(Ba) = 1.02, A(La) = $-$0.33, A(Ce) = 0.00.

-------------- ------------------- ---------- ------------------- ---------- ------------------- ---------- ------------------- ---------- ------------------- ----------
Star   $\Delta^{\rm Pr}$ $\sigma$   $\Delta^{\rm Nd}$ $\sigma$   $\Delta^{\rm Sm}$ $\sigma$   $\Delta^{\rm Eu}$ $\sigma$   $\Delta^{\rm Dy}$ $\sigma$
NGC6752-mg8    0.010 0.011      0.038 0.016      $-$0.037 0.029   0.070 0.021   0.040 0.021
NGC6752-mg10   0.005 0.021      0.029 0.016      $-$0.048 0.029   0.080 0.023   0.110 0.021
NGC6752-mg12   0.010 0.040      0.025 0.012      $-$0.022 0.016   0.030 0.022   0.040 0.021
NGC6752-mg15   $-$0.040 0.016   0.034 0.011      $-$0.010 0.027   0.040 0.025   0.130 0.021
NGC6752-mg18   $-$0.005 0.025   0.019 0.017      $-$0.030 0.025   0.020 0.024   0.070 0.021
NGC6752-mg21   0.005 0.035      0.057 0.014      0.005 0.047      0.050 0.024   0.100 0.021
NGC6752-mg22   0.005 0.022      0.036 0.017      0.010 0.024      0.030 0.021   0.140 0.022
NGC6752-mg24   $-$0.030 0.010   $-$0.012 0.017   $-$0.030 0.026   0.000 0.027   0.090 0.021
NGC6752-mg25   $-$0.045 0.031   $-$0.008 0.017   $-$0.065 0.045   0.000 0.021   0.020 0.021
-------------- ------------------- ---------- ------------------- ---------- ------------------- ---------- ------------------- ---------- ------------------- ----------

In order to place the above values onto an absolute scale, the absolute abundances we obtain for the reference stars are given below. We caution, however, that the absolute scale has not been critically evaluated (see Section 2.5 for more details).

NGC6752-mg9: A(Pr) = $-$0.75, A(Nd) = $-$0.02, A(Sm) = $-$0.38, A(Eu) = $-$0.69, A(Dy) = $-$0.25.

For these new measurements, we fit the slope to $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$ as in Section 3.2. We find that the slope is positive for all elements. If we remove the abundance trends with [$T_{\rm eff}$]{} as described in Section 3.4, these results remain unchanged. For Y, La, Nd, and Eu, the results from this new analysis are in agreement with the previous results (at the $<$3$\sigma$ level).
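The slope fits and significance levels quoted throughout this section (e.g., $<$3$\sigma$, $<$2$\sigma$) can be illustrated with an ordinary least-squares fit in which the significance is the slope divided by its standard error. This is a sketch of the statistic, not the exact fitting code used in Section 3.2, and the input data below are invented:

```python
import math

def slope_and_significance(x, y):
    """OLS slope of y on x and its significance in sigma
    (slope / standard error of the slope); requires len(x) > 2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid2 = sum((yi - intercept - slope * xi) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(resid2 / (n - 2) / sxx)  # standard error of the slope
    return slope, (slope / se if se > 0 else float("inf"))

# Invented Delta_Na and Delta_X values for four stars:
slope, sig = slope_and_significance([0.0, 1.0, 2.0, 3.0],
                                    [0.0, 0.11, 0.19, 0.31])
```

A weighted fit that uses the per-star abundance errors would be the natural refinement when, as here, the errors vary from star to star.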
In Figure \[fig:ncaprat\], we plot the slope of the fit to $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$ against the percentage attributed to the $s$-process in the solar system, adopting the solar $s$-process percentages calculated by @bisterzo11. In this figure, we also show the slopes when fitting $\Delta^{\rm X}_{\rm T}$ vs. $\Delta^{\rm Na}_{\rm T}$, i.e., after removing the abundance trends with [$T_{\rm eff}$]{}. In both cases, the slopes are not of high statistical significance ($<$2$\sigma$ level). If we exclude Y, a possible outlier, the slopes are of even lower statistical significance ($<$1$\sigma$). (The neighboring elements Y and Zr are both members of the first $s$-process peak, so we would not expect their nucleosynthesis histories to be substantially different.) The absence of a significant trend in Figure \[fig:ncaprat\] suggests that the abundance variations are not the result of preferentially introducing more $s$-process material than $r$-process material.

![Slope of the fit to $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$ vs. percentage attributed to the $s$-process in solar system material (using the @bisterzo11 values). The red squares are from the “regular” analysis while the blue open circles are fits to the data when abundance trends with [$T_{\rm eff}$]{} have been removed, i.e., slopes of the fits to $\Delta^{\rm X}_{\rm T}$ vs. $\Delta^{\rm Na}_{\rm T}$. Small horizontal offsets ($\pm$ 0.5%) have been applied to aid visibility. Neither slope is significant at the 2$\sigma$ level.[]{data-label="fig:ncaprat"}](fig24.eps){width="0.9\hsize"}

The correlation between Ca and Na requires massive stars to have played a role in the pre- or proto-cluster environment since the synthesis of Ca is believed to occur primarily during O-burning and Si-burning in those objects [@clayton03]. That said, the abundances for all elements are positively correlated with the Na abundance, and for any pair of elements heavier than Si, the abundances are positively correlated.
Furthermore, the ratios for any pair of elements (e.g., $\Delta^{\rm Ni}$ $-$ $\Delta^{\rm Ca}$ using our terminology) are constant at the 0.036 dex $\pm$ 0.001 dex ($\sigma$ = 0.012) level for the RGB tip sample (excluding Eu, which has considerably larger measurement errors), and essentially identical results are found for the RGB bump sample. Thus, the origin of such correlations demands a source (or sources) capable of synthesizing Na, $\alpha$, Fe-peak and neutron-capture elements, and this diverse suite of elements must be synthesized in essentially equal amounts. No individual star can achieve such nucleosynthesis, and therefore a variety of sources is required. The underlying assumption in this work, and in other studies, is that the star-to-star light element abundance variations in mono-metallic globular clusters are produced by some source (AGB, FRMS and/or massive binaries) within the duration of star formation in the globular cluster. Such an assumption appears reasonable, although unresolved issues related to nucleosynthesis and enrichment timescales remain (e.g., @fenner04 [@decressin07; @prantzos07; @pumo08; @demink09; @dercole12]). Regarding the heavy elements, one might also assume that the star-to-star abundance variations and correlations with Na are produced by some source within the duration of star formation in this globular cluster, provided the heavy elements are produced in the same ratios as those already found in the first generation stars. Another possibility is that the heavy element abundance variations and correlations with Na arise because the ejecta from the source that produced Na were diluted into gas with slightly higher \[X/H\] ratios that entered the cluster while the later generations of stars formed. In this scenario, production of the light elements, including Na, is completely decoupled from production of all elements heavier than Si.
Unfortunately, there are no obvious observational tests to distinguish between these two scenarios. We thus regard the “production during cluster formation” and “dilution with pristine material” scenarios as equally valid possibilities for the abundance variations. The penultimate issue we raise concerns whether the distribution of the heavy element abundances is discrete or continuous. As noted in Section 2.1, @milone13 have identified three stellar populations in NGC 6752 based on *HST* and ground-based Strömgren photometry. The three populations can be found at all evolutionary stages (main sequence, subgiant branch and red giant branch). Additionally, each population exhibits distinct chemical abundance patterns for the light elements (e.g., N, O, Na, Mg and Al). In Figure \[fig:fena\], populations $a$ (green), $b$ (magenta) and $c$ (blue) have distinct $\Delta^{\rm Na}$ abundances. In Figures \[fig:sica\] and \[fig:nind\] (and other figures), we use the same colour scheme to denote the three populations. In general, population $c$ (blue) exhibits a larger (i.e., more positive) value for $\Delta^{\rm X}$ than population $a$ (green), while population $b$ (magenta) lies between populations $a$ and $c$. Such a result is expected given ($i$) the Na abundances of each population and ($ii$) the correlation between $\Delta^{\rm X}$ and $\Delta^{\rm Na}$. Although we have achieved very high precision relative abundance measurements, it is not clear whether the abundance distributions seen in Figures \[fig:sica\] and \[fig:nind\] are consistent with three discrete values in the $\Delta^{\rm X}$ vs. $\Delta^{\rm Y}$ plane, corresponding to the @milone13 populations $a$, $b$ and $c$. (That said, it is not obvious whether the @milone13 data show three discrete photometric sequences.) Additional studies may be necessary to clarify whether the heavy element abundance distribution is discrete or continuous in this globular cluster.
Finally, we mentioned in the introduction that @sneden05 examined the \[Ni/Fe\] ratio in the context of cluster abundance accuracy limits. There was an apparent limit in $\sigma$\[Ni/Fe\] at the $\sim$0.06 dex level. For the RGB tip and RGB bump samples, we find $\sigma$($\Delta^{\rm Ni}$ $-$ $\Delta^{\rm Fe}$) = 0.009 and 0.010, respectively, thereby highlighting the great improvement in abundance precision that can be obtained when conducting a strictly differential analysis of high quality spectra. SUMMARY ======= We have obtained very high precision chemical abundance measurements, $\Delta^{\rm X}$, through a strictly differential analysis of high quality UVES spectra of giant stars in the globular cluster NGC 6752. The measurement uncertainties and average uncertainties for a given element, $<\sigma\Delta^{\rm X}>$, are as low as $\sim$0.01 dex. The observed abundance dispersions, and abundance dispersions about various linear fits (e.g., $\Delta^{\rm X}$ vs. [$T_{\rm eff}$]{} or $\Delta^{\rm X}$ vs. $\Delta^{\rm Y}$), are often considerably larger than the average abundance uncertainty. We find positive correlations between any given element and Na, i.e., $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$, and indeed for any combination of elements, e.g., $\Delta^{\rm X}$ vs. $\Delta^{\rm Y}$. These correlations are often of high statistical significance ($>$ 5$\sigma$), although we note that the amplitudes of the abundance variations are small. These results are unchanged even after removing abundance trends with [$T_{\rm eff}$]{} and/or when using a different pair of reference stars. Indeed, the likelihood of these results being due to random error is exceedingly small. Therefore, we argue that there is a genuine abundance dispersion in this cluster, at the $\sim$0.03 dex level. In order to explain these results, we consider four possibilities. 
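The comparison above between observed dispersions and measurement uncertainties amounts to a quadrature subtraction: the genuine spread is what remains after the measurement error is removed from the observed scatter. A minimal sketch, using hypothetical numbers of the order quoted in the text:

```python
import math

def intrinsic_dispersion(sigma_obs, sigma_err):
    """Genuine abundance spread, assuming the observed dispersion is the
    quadrature sum of a real spread and the measurement uncertainty."""
    if sigma_obs <= sigma_err:
        return 0.0  # no evidence for a spread beyond the measurement errors
    return math.sqrt(sigma_obs**2 - sigma_err**2)

# illustrative values only: observed dispersion ~0.03 dex,
# average measurement uncertainty ~0.01 dex
sigma_int = intrinsic_dispersion(0.03, 0.01)  # ~0.028 dex
```

Because the uncertainties here are a factor of $\sim$3 smaller than the observed scatter, the inferred intrinsic spread is nearly identical to the raw dispersion.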
The abundance variations and correlations may reflect ($i$) systematic errors in the stellar parameters, ($ii$) star-to-star CNO abundance variations, ($iii$) star-to-star He abundance variations and/or ($iv$) inhomogeneous chemical evolution. Regarding point ($i$), implausibly large systematic errors in the stellar parameters would be required: our results are seen for all elements (covering a range of ionization potentials and ionization states), and no single change in [$T_{\rm eff}$]{}, [$\log g$]{} or [$\xi_t$]{} would remove the abundance correlations for all elements. Regarding point ($ii$), predictions by @drake93 suggest that for weak lines such as those in this study, using model atmospheres with appropriate CNO abundances will not change our results. Regarding point ($iii$), for a fixed mass fraction of metals ($Z$), an increase in helium abundance ($Y$) would result in a lower hydrogen abundance ($X$) and therefore a higher metal-to-hydrogen ratio, $Z$/$X$. Since Na and He abundances are expected to be correlated, the positive correlations we find between $\Delta^{\rm X}$ and $\Delta^{\rm Na}$ are consistent with a He abundance variation (for constant $Z$). Given the current constraints on $\Delta Y$ from photometry [@milone13], it is likely that the abundance variations cannot be attributed solely to He. Nevertheless, He abundance variations probably play an important role in producing the abundance variations that we find. Concerning point ($iv$), the correlation between Si and Na could arise from leakage from the Mg-Al chain into Si in either AGB stars or FRMS. For the neutron-capture elements, there is no significant trend in the slope of the fit to $\Delta^{\rm X}$ vs. $\Delta^{\rm Na}$ when plotted against the percentage attributed to the $s$-process in solar system material. Thus, their abundance variations are probably not related to $s$-process production by whatever source produced the light element variations.
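The helium effect in point ($iii$) is simple bookkeeping with mass fractions ($X + Y + Z = 1$): at fixed $Z$, raising $Y$ lowers $X$, so every \[X/H\]-type ratio rises by $\log_{10}(X_{1}/X_{2})$. The sketch below, with hypothetical values of $Y$ and $Z$, shows that a helium enhancement of a few hundredths produces shifts of the order of the dispersion we measure.

```python
import math

def delta_xh_from_helium(y1, y2, z):
    """Shift (in dex) of metal-to-hydrogen ratios when the helium mass
    fraction rises from y1 to y2 at fixed metal mass fraction z.
    With X = 1 - Y - Z, the ratio Z/X scales as 1/X."""
    x1 = 1.0 - y1 - z
    x2 = 1.0 - y2 - z
    return math.log10(x1 / x2)

# hypothetical values: Y from 0.25 to 0.30 at Z = 0.002
shift = delta_xh_from_helium(0.25, 0.30, 0.002)  # ~0.03 dex
```

This back-of-the-envelope shift is comparable to the $\sim$0.03 dex dispersion found in this work, consistent with He variations playing an important, though probably not exclusive, role.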
The fact that all elements are correlated requires one or more nucleosynthetic sources capable of synthesizing Na, $\alpha$, Fe-peak and neutron-capture elements. Additionally, element-to-element ratios (e.g., $\Delta^{\rm Ni}$ $-$ $\Delta^{\rm Ca}$ using our terminology) are constant at the $\sim$0.03 dex level. No individual object can achieve the required nucleosynthesis. We cannot ascertain whether the heavy elements were produced ($a$) within the duration of star formation in this globular cluster or ($b$) by dilution of Na-rich material into gas with slightly higher \[X/H\] ratios that entered the cluster while the second (and later) generations of stars formed. In summary, our results may be explained by some combination of He abundance variations and inhomogeneous chemical evolution (i.e., metallicity variations). There may be other explanations for the observed abundance variations and correlations. Nevertheless, we encourage similar studies of other globular clusters with no obvious dispersion in Fe-peak elements. Acknowledgments {#acknowledgments .unnumbered} =============== We warmly thank the referee, Raffaele Gratton, for helpful comments that improved and clarified this work. We thank J. A. Johnson and A. I. Karakas for helpful discussions. D. Y., J. E. N., A. P. M., A. F. M., R. C. and M. A. gratefully acknowledge support from the Australian Research Council (grants DP0984924, FL110100012, DP120100475, DP120100991 and DE120102940). J.M. would like to acknowledge support from FAPESP (2010/17510-3; 2012/24392-2) and CNPq (Bolsa de Produtividade). Funding for the Stellar Astrophysics Centre is provided by The Danish National Research Foundation. The research is supported by the ASTERISK project (ASTERoseismic Investigations with SONG and Kepler) funded by the European Research Council (Grant agreement no.: 267864). I. U. R. is grateful for support from the Carnegie Institution for Science through the Barbara McClintock Fellowship. P. C.
acknowledges support from FAPESP Project 2008/58406-4. [128]{} natexlab\#1[\#1]{} , C., [Barklem]{}, P. S., [Lambert]{}, D. L., & [Cunha]{}, K. 2004, , 420, 183 , A., [Arribas]{}, S., & [Mart[í]{}nez-Roger]{}, C. 1999, , 140, 261 , A., [Mel[é]{}ndez]{}, J., [Asplund]{}, M., [Ram[í]{}rez]{}, I., & [Yong]{}, D. 2010, , 513, A35 , A., [Yong]{}, D., [Mel[é]{}ndez]{}, J., [V[á]{}squez]{}, S., & [Karakas]{}, A. I. 2012, , 540, A3 , M. 2005, , 43, 481 , M., [Grevesse]{}, N., [Sauval]{}, A. J., & [Scott]{}, P. 2009, , 47, 481 , P. S., [Christlieb]{}, N., [Beers]{}, T. C., [Hill]{}, V., [Bessell]{}, M. S., [Holmberg]{}, J., [Marsteller]{}, B., [Rossi]{}, S., [Zickgraf]{}, F., & [Reimers]{}, D. 2005, , 439, 129 , K. 2011, , 412, 2241 , E., [Baudoux]{}, M., [Kurucz]{}, R. L., [Ansbacher]{}, W., & [Pinnington]{}, E. H. 1991, , 249, 539 , [É]{}., [Blagoev]{}, K., [Engstr[ö]{}m]{}, L., [Hartman]{}, H., [Lundberg]{}, H., [Malcheva]{}, G., [Nilsson]{}, H., [Whitehead]{}, R. B., [Palmeri]{}, P., & [Quinet]{}, P. 2011, , 414, 3350 , S., [Gallino]{}, R., [Straniero]{}, O., [Cristallo]{}, S., & [K[ä]{}ppeler]{}, F. 2011, , 418, 284 , D. E., [Booth]{}, A. J., [Haddock]{}, D. J., [Petford]{}, A. D., & [Leggett]{}, S. K. 1986, , 220, 549 , D. E., [Ibbetson]{}, P. A., [Petford]{}, A. D., & [Shallis]{}, M. J. 1979, , 186, 633 , D. E., [Lynas-Gray]{}, A. E., & [Smith]{}, G. 1995, , 296, 217 , D. E., [Petford]{}, A. D., & [Shallis]{}, M. J. 1979, , 186, 657 , D. E., [Petford]{}, A. D., [Shallis]{}, M. J., & [Simmons]{}, G. J. 1980, , 191, 445 , A., [Carretta]{}, E., [Gratton]{}, R., [D’Orazi]{}, V., [Cassisi]{}, S., & [Lucatello]{}, S. 2010, , 519, A60 , R., [Caloi]{}, V., [Castellani]{}, V., [Corsi]{}, C., [Fusi Pecci]{}, F., & [Gratton]{}, R. 1986, , 66, 79 , S. W., [D’Orazi]{}, V., [Yong]{}, D., [Constantino]{}, T. N., [Lattanzio]{}, J. C., [Stancliffe]{}, R. J., [Angelou]{}, G. C., [Wylie-de Boer]{}, E. C., & [Grundahl]{}, F. 2013, , 498, 198 , R. D., [Croke]{}, B. F. W., [Bell]{}, R. 
A., [Hesser]{}, J. E., & [Stathakis]{}, R. A. 1998, , 298, 601 , E., [Bragaglia]{}, A., [Gratton]{}, R., [D’Orazi]{}, V., & [Lucatello]{}, S. 2009, , 508, 695 , E., [Bragaglia]{}, A., [Gratton]{}, R. G., [Lucatello]{}, S., [Bellazzini]{}, M., [Catanzaro]{}, G., [Leone]{}, F., [Momany]{}, Y., [Piotto]{}, G., & [D’Orazi]{}, V. 2010, , 714, L7 , E., [Bragaglia]{}, A., [Gratton]{}, R. G., [Lucatello]{}, S., [Catanzaro]{}, G., [Leone]{}, F., [Bellazzini]{}, M., [Claudi]{}, R., [D’Orazi]{}, V., [Momany]{}, Y., [Ortolani]{}, S., [Pancino]{}, E., [Piotto]{}, G., [Recio-Blanco]{}, A., & [Sabbi]{}, E. 2009, , 505, 117 , E., [Gratton]{}, R. G., [Lucatello]{}, S., [Bragaglia]{}, A., & [Bonifacio]{}, P. 2005, , 433, 597 , E., [Lucatello]{}, S., [Gratton]{}, R. G., [Bragaglia]{}, A., & [D’Orazi]{}, V. 2011, , 533, A69 , F. 2005, Memorie della Societa Astronomica Italiana Supplementi, 8, 25 , F. & [Kurucz]{}, R. L. 2003, in IAU Symp. 210, Modelling of Stellar Atmospheres, ed. N. Piskunov, W. W. Weiss, & D. F. Gray (San Francisco, CA: ASP), A20 , D. 2003, [Handbook of Isotopes in the Cosmos]{} , J. G., [Huang]{}, W., & [Kirby]{}, E. N. 2011, , 740, 60 , J. G. & [Kirby]{}, E. N. 2012, , 760, 86 , J. G., [Kirby]{}, E. N., [Simon]{}, J. D., & [Geha]{}, M. 2010, , 725, 288 , C. & [Spergel]{}, D. N. 2011, , 726, 36 , S. E., [Pols]{}, O. R., [Langer]{}, N., & [Izzard]{}, R. G. 2009, , 507, L1 , T., [Meynet]{}, G., [Charbonnel]{}, C., [Prantzos]{}, N., & [Ekstr[ö]{}m]{}, S. 2007, , 464, 1029 , H., [D’Odorico]{}, S., [Kaufer]{}, A., [Delabre]{}, B., & [Kotzlowski]{}, H. 2000, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4008, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. M. [Iye]{} & A. F. [Moorwood]{}, 534–545 , E. A., [Lawler]{}, J. E., [Sneden]{}, C., & [Cowan]{}, J. J. 2003, , 148, 543 , A., [D’Antona]{}, F., [Carini]{}, R., [Vesperini]{}, E., & [Ventura]{}, P. 
2012, , 423, 1521 , A., [Vesperini]{}, E., [D’Antona]{}, F., [McMillan]{}, S. L. W., & [Recchi]{}, S. 2008, , 391, 825 , V., [Campbell]{}, S. W., [Lugaro]{}, M., [Lattanzio]{}, J. C., [Pignatari]{}, M., & [Carretta]{}, E. 2013,  in press (arXiv:1304.7009) , V., [Lucatello]{}, S., [Gratton]{}, R., [Bragaglia]{}, A., [Carretta]{}, E., [Shen]{}, Z., & [Zaggia]{}, S. 2010, , 713, L1 , J. J., [Plez]{}, B., & [Smith]{}, V. V. 1993, , 412, 612 , Y., [Campbell]{}, S., [Karakas]{}, A. I., [Lattanzio]{}, J. C., & [Gibson]{}, B. K. 2004, , 353, 789 Fuhr, J. R. & Wiese, W. L. 2009, Atomic Transition Probabilities, published in the CRC Handbook of Chemistry and Physics, 90th Edition, ed. Lide, D. R., CRC Press, Inc., Boca Raton, FL, 10 , R., [Sneden]{}, C., & [Carretta]{}, E. 2004, , 42, 385 , R. G., [Bonifacio]{}, P., [Bragaglia]{}, A., [Carretta]{}, E., [Castellani]{}, V., [Centurion]{}, M., [Chieffi]{}, A., [Claudi]{}, R., [Clementini]{}, G., [D’Antona]{}, F., [Desidera]{}, S., [Fran[ç]{}ois]{}, P., [Grundahl]{}, F., [Lucatello]{}, S., [Molaro]{}, P., [Pasquini]{}, L., [Sneden]{}, C., [Spite]{}, F., & [Straniero]{}, O. 2001, , 369, 87 , R. G., [Bragaglia]{}, A., [Carretta]{}, E., [de Angeli]{}, F., [Lucatello]{}, S., [Piotto]{}, G., & [Recio Blanco]{}, A. 2005, , 440, 901 , R. G., [Carretta]{}, E., & [Bragaglia]{}, A. 2012, , 20, 50 , R. G., [Carretta]{}, E., [Bragaglia]{}, A., [Lucatello]{}, S., & [D’Orazi]{}, V. 2010, , 517, A81 , R. G., [Carretta]{}, E., [Claudi]{}, R., [Lucatello]{}, S., & [Barbieri]{}, M. 2003, , 404, 187 , F., [Briley]{}, M., [Nissen]{}, P. E., & [Feltzing]{}, S. 2002, , 385, L14 , F., [Catelan]{}, M., [Landsman]{}, W. B., [Stetson]{}, P. B., & [Andersen]{}, M. I. 1999, , 524, 242 , B., [Bell]{}, R. A., [Eriksson]{}, K., & [Nordlund]{}, A. 1975, , 42, 407 , B., [Edvardsson]{}, B., [Eriksson]{}, K., [J[ø]{}rgensen]{}, U. G., [Nordlund]{}, [Å]{}., & [Plez]{}, B. 2008, , 486, 951 , W. E. 1996, , 112, 1487 , I. I., [Kraft]{}, R. 
P., [Sneden]{}, C., [Smith]{}, G. H., [Rich]{}, R. M., & [Shetrone]{}, M. 2001, , 122, 1438 , I. I., [Simmerer]{}, J., [Sneden]{}, C., [Lawler]{}, J. E., [Cowan]{}, J. J., [Gallino]{}, R., & [Bisterzo]{}, S. 2006, , 645, 613 , S., [Litz[é]{}n]{}, U., & [Wahlgren]{}, G. M. 2001, , 64, 455 , W. H., [Fitzpatrick]{}, M. J., & [McArthur]{}, B. E. 1988, Celestial Mechanics, 41, 39 , C. I. & [Pilachowski]{}, C. A. 2010, , 722, 1373 , J. A. 2002, , 139, 219 , R. P. 1994, , 106, 553 , R. & [Bell]{}, B. 1995, Atomic Line Data (R.L. Kurucz and B. Bell) Kurucz CD-ROM No. 23. Cambridge, Mass.: Smithsonian Astrophysical Observatory, 1995., 23 , R. L. 2005, Memorie della Societa Astronomica Italiana Supplementi, 8, 14 , J. E., [Bonvallet]{}, G., & [Sneden]{}, C. 2001, , 556, 452 , J. E., [Den Hartog]{}, E. A., [Sneden]{}, C., & [Cowan]{}, J. J. 2006, , 162, 227 , J. E., [Sneden]{}, C., [Cowan]{}, J. J., [Ivans]{}, I. I., & [Den Hartog]{}, E. A. 2009, , 182, 51 , J. E., [Wickliffe]{}, M. E., [den Hartog]{}, E. A., & [Sneden]{}, C. 2001, , 563, 1075 , R., [Chatelain]{}, R., [Holt]{}, R. A., [Rehse]{}, S. J., [Rosner]{}, S. D., & [Scholl]{}, T. J. 2007, , 76, 577 , K., [Asplund]{}, M., [Barklem]{}, P. S., & [Belyaev]{}, A. K. 2011, , 528, A103 , K., [Bergemann]{}, M., & [Asplund]{}, M. 2012, , 427, 50 , K., [Charbonnel]{}, C., [Decressin]{}, T., [Primas]{}, F., [Grundahl]{}, F., & [Asplund]{}, M. 2011, , 527, A148 , G., [Nilsson]{}, H., [Asplund]{}, M., & [Johansson]{}, S. 2006, , 456, 1181 , K. 2003, , 591, 1220 , A. D. & [Broby Nielsen]{}, P. 2007, , 379, 151 , A., [Gibson]{}, B. K., [Karakas]{}, A. I., & [S[á]{}nchez-Bl[á]{}zquez]{}, P. 2009, , 395, 719 , A. F., [Milone]{}, A. P., [Piotto]{}, G., [Villanova]{}, S., [Bedin]{}, L. R., [Bellini]{}, A., & [Renzini]{}, A. 2009, , 505, 1099 , A. F., [Sneden]{}, C., [Kraft]{}, R. P., [Wallerstein]{}, G., [Norris]{}, J. E., [da Costa]{}, G., [Milone]{}, A. P., [Ivans]{}, I. I., [Gonzalez]{}, G., [Fulbright]{}, J. 
P., [Hilker]{}, M., [Piotto]{}, G., [Zoccali]{}, M., & [Stetson]{}, P. B. 2011, , 532, A8 , A., [Preston]{}, G. W., [Sneden]{}, C., & [Searle]{}, L. 1995, , 109, 2757 , J., [Asplund]{}, M., [Gustafsson]{}, B., & [Yong]{}, D. 2009, , 704, L66 , J., [Bergemann]{}, M., [Cohen]{}, J. G., [Endl]{}, M., [Karakas]{}, A. I., [Ram[í]{}rez]{}, I., [Cochran]{}, W. D., [Yong]{}, D., [MacQueen]{}, P. J., [Kobayashi]{}, C., & [Asplund]{}, M. 2012, , 543, A29 , J. & [Cohen]{}, J. G. 2009, , 699, 2017 , S. & [Allende Prieto]{}, C. 2013, , A. P., [Bedin]{}, L. R., [Piotto]{}, G., & [Anderson]{}, J. 2009, , 497, 755 , A. P., [Marino]{}, A. F., [Piotto]{}, G., [Bedin]{}, L. R., [Anderson]{}, J., [Aparicio]{}, A., [Bellini]{}, A., [Cassisi]{}, S., [D’Antona]{}, F., [Grundahl]{}, F., [Monelli]{}, M., & [Yong]{}, D. 2013,  in press (arXiv:1301.7044) , A. P., [Piotto]{}, G., [Bedin]{}, L. R., [King]{}, I. R., [Anderson]{}, J., [Marino]{}, A. F., [Bellini]{}, A., [Gratton]{}, R., [Renzini]{}, A., [Stetson]{}, P. B., [Cassisi]{}, S., [Aparicio]{}, A., [Bragaglia]{}, A., [Carretta]{}, E., [D’Antona]{}, F., [Di Criscienzo]{}, M., [Lucatello]{}, S., [Monelli]{}, M., & [Pietrinferni]{}, A. 2012, , 744, 58 , A., [Bellazzini]{}, M., [Ibata]{}, R., [Merle]{}, T., [Chapman]{}, S. C., [Dalessandro]{}, E., & [Sollima]{}, A. 2012, , 426, 2889 , P. E. & [Schuster]{}, W. J. 2010, , 511, L10 —. 2011, , 530, A15 , J. E. & [Da Costa]{}, G. S. 1995, , 447, 680 , L., [Rich]{}, R. M., [Ferraro]{}, F. R., [Lanzoni]{}, B., [Bellazzini]{}, M., [Dalessandro]{}, E., [Mucciarelli]{}, A., [Valenti]{}, E., & [Beccari]{}, G. 2011, , 726, L20 , A. J. & [Dickens]{}, R. J. 1986, , 220, 845 , G. 2009, in IAU Symposium, Vol. 258, IAU Symposium, ed. E. E. [Mamajek]{}, D. R. [Soderblom]{}, & R. F. G. [Wyse]{}, 233–244 , G., [Villanova]{}, S., [Bedin]{}, L. R., [Gratton]{}, R., [Cassisi]{}, S., [Momany]{}, Y., [Recio-Blanco]{}, A., [Lucatello]{}, S., [Anderson]{}, J., [King]{}, I. R., [Pietrinferni]{}, A., & [Carraro]{}, G. 
2005, , 621, 777 , D. M. 1947, , 105, 204 , N., [Charbonnel]{}, C., & [Iliadis]{}, C. 2007, , 470, 179 , J. X., [Naumov]{}, S. O., [Carney]{}, B. W., [McWilliam]{}, A., & [Wolfe]{}, A. M. 2000, , 120, 2513 , M. L., [D’Antona]{}, F., & [Ventura]{}, P. 2008, , 672, L25 , I., [Asplund]{}, M., [Baumann]{}, P., [Mel[é]{}ndez]{}, J., & [Bensby]{}, T. 2010, , 521, A33 , I., [Mel[é]{}ndez]{}, J., & [Chanam[é]{}]{}, J. 2012, , 757, 164 , S. V. & [Cohen]{}, J. G. 2002, , 123, 3277 —. 2003, , 125, 224 , I. U. & [Lawler]{}, J. E. 2012, , 750, 76 , I. U., [Marino]{}, A. F., & [Sneden]{}, C. 2011, , 742, 37 , R. T. & [Crocker]{}, D. A. 1985, in European Southern Observatory Conference and Workshop Proceedings, Vol. 21, European Southern Observatory Conference and Workshop Proceedings, ed. I. J. [Danziger]{}, F. [Matteucci]{}, & K. [Kjar]{}, 61–69 , I., [Da Costa]{}, G. S., [Held]{}, E. V., [Sommariva]{}, V., [Gullieuszik]{}, M., [Barbuy]{}, B., & [Ortolani]{}, S. 2012,  in press (arXiv:1202.1304) , L. 2005, Memorie della Societa Astronomica Italiana Supplementi, 8, 61 , L., [Salaris]{}, M., [Weiss]{}, A., & [Cassisi]{}, S. 2011, , 534, A9 , J., [Ivans]{}, I. I., [Filler]{}, D., [Francois]{}, P., [Charbonnel]{}, C., [Monier]{}, R., & [James]{}, G. 2013, , 764, L7 , G. H. 1987, , 99, 67 , C. 1973, , 184, 839 , C. 2005, in IAU Symposium, Vol. 228, From Lithium to Uranium: Elemental Tracers of Early Cosmic Evolution, ed. V. [Hill]{}, P. [François]{}, & F. [Primas]{}, 337–344 , C., [Johnson]{}, J., [Kraft]{}, R. P., [Smith]{}, G. H., [Cowan]{}, J. J., & [Bolte]{}, M. S. 2000, , 536, L85 , C., [Kraft]{}, R. P., [Shetrone]{}, M. D., [Smith]{}, G. H., [Langer]{}, G. E., & [Prosser]{}, C. F. 1997, , 114, 1964 , C., [Lawler]{}, J. E., [Cowan]{}, J. J., [Ivans]{}, I. I., & [Den Hartog]{}, E. A. 2009, , 182, 80 , J. S., [Kraft]{}, R. P., [Sneden]{}, C., [Preston]{}, G. W., [Cowan]{}, J. J., [Smith]{}, G. H., [Thompson]{}, I. B., [Shectman]{}, S. A., & [Burley]{}, G. S.
2011, , 141, 175 , P. B. & [Pancino]{}, E. 2008, , 120, 1332 , B., [Gustafsson]{}, B., & [Olsen]{}, E. H. 1982, , 94, 5 , D. A., [Swenson]{}, F. J., [Rogers]{}, F. J., [Iglesias]{}, C. A., & [Alexander]{}, D. R. 2000, , 532, 430 , P., [Carini]{}, R., & [D’Antona]{}, F. 2011, , 415, 3865 , P. & [D’Antona]{}, F. 2005, , 635, L149 , S. & [Geisler]{}, D. 2011, , 535, A31 , S., [Geisler]{}, D., & [Piotto]{}, G. 2010, , 722, L18 , S., [Piotto]{}, G., & [Gratton]{}, R. G. 2009, , 499, 755 , M. E., [Lawler]{}, J. E., & [Nave]{}, G. 2000, , 66, 363 , D. & [Grundahl]{}, F. 2008, , 672, L29 , D., [Grundahl]{}, F., [Johnson]{}, J. A., & [Asplund]{}, M. 2008, , 684, 1159 , D., [Grundahl]{}, F., [Lambert]{}, D. L., [Nissen]{}, P. E., & [Shetrone]{}, M. D. 2003, , 402, 985 , D., [Grundahl]{}, F., [Nissen]{}, P. E., [Jensen]{}, H. R., & [Lambert]{}, D. L. 2005, , 438, 875 \[lastpage\] [^1]: E-mail: yong@mso.anu.edu.au [^2]: Based on observations collected at the European Southern Observatory, Chile (ESO Programmes 67.D-0145 and 65.L-0165A). [^3]: There are subtle, and not so subtle, differences within a given category. [^4]: @saviane12 have identified a metallicity dispersion in NGC 5824. To our knowledge, there are no published studies of the light element abundances based on high-resolution spectroscopy, so we cannot yet place this globular cluster in category ($iii$). [^5]: PD1 and PD2 are from @penny86 and BXXXX names are from @buonanno86. [^6]: These stellar parameters are for the so-called “reference star” values (see Section 2.3 for details). [^7]: We exclude this star from the subsequent differential analysis due to its discrepant metallicity. [^8]: IRAF (Image Reduction and Analysis Facility) is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. 
[^9]: The digits to the left of the decimal point are the atomic number. The digit to the right of the decimal point is the ionization state (“0” = neutral, “1” = singly ionised). [^10]: Star names are abbreviated. See Table \[tab:param\] for the full names. [^11]: A = $\log gf$ values taken from @yong05 where the references include @denhartog03, @ivans01, @kurucz95, @prochaska00, @ramirez02; B = @gratton03; C = Oxford group including @blackwell79feb [@blackwell79fea; @blackwell80fea; @blackwell86fea; @blackwell95fea]; D = @biemont91 [^12]: The digits to the left of the decimal point are the atomic number. The digit to the right of the decimal point is the ionization state (“0” = neutral, “1” = singly ionised). [^13]: Star names are abbreviated. See Table \[tab:param\] for the full names. [^14]: A = $\log gf$ values taken from @yong05 where the references include @denhartog03, @ivans01, @kurucz95, @prochaska00, @ramirez02; B = @gratton03; C = Oxford group including @blackwell79feb [@blackwell79fea; @blackwell80fea; @blackwell86fea; @blackwell95fea]; D = @biemont91 [^15]: We also performed linear fits to these data using the GaussFit program for robust estimation [@jefferys88]. While we again find positive gradients for 22 of the 24 elements, on average the significance of these correlations decreases from 3.9$\sigma$ (least squares fitting) to 2.6$\sigma$ (robust fitting). When using the GaussFit robust fitting routines, three of the 24 elements exhibit correlations that are significant at the 5$\sigma$ level or higher. [^16]: The values in parentheses refer to the numbers of realisations in which the gradient in the simulations was consistent with the measured gradient. Fe is a $\sim$2$\sigma$ outlier. While Eu is clearly an outlier, we note that the abundances are derived from a single line that is rather weak in the RGB bump stars. 
[^17]: When using the GaussFit robust estimation for the RGB tip sample, 64 out of 66 pairs of elements exhibit a positive correlation. On average, the correlations for the robust fitting (3.6$\sigma$) are of higher statistical significance than for the least squares fitting (2.0$\sigma$) and 15 pairs of elements exhibit correlations at the 5$\sigma$ level or higher. The average gradient is 2.06 $\pm$ 0.26 ($\sigma$ = 2.11) which is similar to the linear least squares fitting. [^18]: When using the GaussFit robust estimation for the RGB bump sample, all pairs of elements exhibit positive gradients. On average, the correlations for the robust fitting (5.9$\sigma$) are of higher statistical significance than for the least squares fitting (4.0$\sigma$) and 36 pairs of elements exhibit correlations at the 5$\sigma$ level or higher. The average gradient is 3.04 $\pm$ 0.65 ($\sigma$ = 5.30) and is only slightly higher than for the linear least squares fitting. [^19]: A given star is assigned to a particular population based on location in the \[O/Fe\] vs. \[Na/Fe\] plane according to @carretta09pie. [^20]: The referee has pointed out that an analysis of the colours and magnitudes of HB stars suggest a value of $\Delta Y$ = 0.059 for NGC 6752 [@gratton10]. For such a value, He alone could explain the abundance variations we find. [^21]: In NGC 6397, @lind11 found evidence for a possible spread in yttrium abundance, 0.04 dex. In M4, @villanova11 also found evidence for a spread in yttrium abundance at the $\sim$0.1 dex level, although @dorazi13 do not confirm that result. [^22]: The digits to the left of the decimal point are the atomic number. The digit to the right of the decimal point is the ionization state (“0” = neutral, “1” = singly ionised). 
[^23]: 1 = @biemont11; 2 = @denhartog03; 3 = @ivarsson01, using HFS from @sneden09; 4 = @lawler01la; 5 = @lawler01la, using HFS from @ivans06; 6 = @lawler01eu, using HFS and isotope shifts from @ivans06; 7 = @lawler06; 8 = @lawler09; 9 = @li07, using HFS from @sneden09; 10 = @ljung06; 11 = @fuhr09; 12 = @roederer12; 13 = @wickliffe00
--- abstract: 'We consider surfaces with parallel mean curvature vector (pmc surfaces) in $\mathbb{C}P^n\times\mathbb{R}$ and $\mathbb{C}H^n\times\mathbb{R}$, and, more generally, in cosymplectic space forms. We introduce a holomorphic quadratic differential on such surfaces. This is then used in order to show that the anti-invariant pmc $2$-spheres of a $5$-dimensional non-flat cosymplectic space form of product type are actually the embedded rotational spheres $S_H^2\subset\bar M^2\times\mathbb{R}$ of Hsiang and Pedrosa, where $\bar M^2$ is a complete simply-connected surface with constant curvature. When the ambient space is a cosymplectic space form of product type and its dimension is greater than $5$, we prove that an immersed non-minimal non-pseudo-umbilical anti-invariant $2$-sphere lies in a product space $\bar M^4\times\mathbb{R}$, where $\bar M^4$ is a space form. We also provide a reduction of codimension theorem for the pmc surfaces of a non-flat cosymplectic space form.' address: - | Department of Mathematics\ “Gh. Asachi” Technical University of Iasi\ Bd. Carol I no. 11\ 700506 Iasi, Romania - | IMPA\ Estrada Dona Castorina\ 110, 22460-320 Rio de Janeiro, Brasil author: - Dorel Fetcu - Harold Rosenberg title: 'Surfaces with parallel mean curvature in $\mathbb{C}P^n\times\mathbb{R}$ and $\mathbb{C}H^n\times\mathbb{R}$' --- Introduction ============ Surfaces with constant mean curvature (cmc surfaces) in $3$-dimensional ambient spaces have been intensively studied in the last six decades and a very useful tool proved to be the holomorphic quadratic forms defined on such surfaces. In 1951, H. Hopf used for the first time a holomorphic quadratic form in order to show that any cmc surface in a Euclidean space, homeomorphic to a sphere, is actually a round sphere (see [@HH]) and then his result was extended to cmc surfaces in $3$-dimensional space forms by S.-S. Chern, in [@C]. 
When the codimension is greater than $1$, a natural generalization of cmc surfaces is given by surfaces with parallel mean curvature vector (pmc surfaces). These surfaces have been studied since the early seventies, among the first papers to treat this subject being [@DF] by D. Ferus, [@CL] by B.-Y. Chen and G. D. Ludden, [@DH] by D. A. Hoffman and [@Y] by S.-T. Yau. In this last paper it is proved that a pmc surface immersed in a space form either lies in a totally geodesic $3$-dimensional space or is a minimal surface of an umbilical hypersurface. The next natural step was taken by U. Abresch and H. Rosenberg, who studied cmc surfaces in [@AR; @AR2] and obtained Hopf-type results in product spaces of type $M^2(\rho)\times\mathbb{R}$, where $M^2(\rho)$ is a complete simply-connected surface with constant curvature $\rho$, as well as in the homogeneous $3$-manifolds $Nil(3)$, $\widetilde{PSL(2,\mathbb{R})}$ and the Berger spheres. The study of pmc surfaces in product spaces of type $M^n(\rho)\times\mathbb{R}$, where $M^n(\rho)$ is a space form with constant sectional curvature $\rho$, is the subject of the papers [@AdCT1] and [@AdCT] by H. Alencar, M. do Carmo and R. Tribuzy. The principal tool they use is a holomorphic quadratic form, which in the $3$-dimensional case is just the Abresch-Rosenberg differential introduced in [@AR]. In [@AdCT] the authors proved, among other results, a reduction-of-codimension theorem, showing that a pmc surface immersed in $M^n(\rho)\times\mathbb{R}$ is either a minimal surface in a totally umbilical hypersurface of $M^n(\rho)$; a cmc surface in a $3$-dimensional totally umbilical submanifold, or in a totally geodesic submanifold of $M^n(\rho)$; or it lies in $M^4(\rho)\times\mathbb{R}$. In the recent paper [@F] a similar result is proved for pmc surfaces immersed in a complex space form, i.e. a Kähler manifold with constant holomorphic sectional curvature.
There it is shown that a non-minimal pmc surface immersed in a non-flat complex space form $N^n(\rho)$, where $\rho$ is the (constant) holomorphic sectional curvature and $n\geq 3$, either is a pseudo-umbilical totally real surface or lies in a complex space form $N^r(\rho)$, with $r\leq 5$. The products of a complex space form and a one-dimensional manifold are the main examples of cosymplectic space forms, which are often seen as the odd-dimensional analogues of complex space forms. Therefore, working in such spaces seems to be the natural continuation of [@F]. The other option for odd-dimensional ambient spaces with nice curvature properties is represented by the Sasakian space forms, among them the odd-dimensional spheres and the generalized Heisenberg group. Although the present paper is devoted to the study of pmc surfaces in cosymplectic space forms, interesting results could likely also be obtained by considering this second option. The paper is organized as follows. In Section \[sintro\] we briefly recall some general facts about cosymplectic space forms, as they are presented in [@ABC; @B; @BG; @CdLM]. In Section \[sqf\] we introduce a quadratic form $Q$ defined on surfaces immersed in such a space and prove that its $(2,0)$-part is holomorphic when the mean curvature vector of the surface is parallel. In Section \[scyl\] we characterize the pmc surfaces of type $\Sigma^2=\pi^{-1}(\gamma)$ in a product space $M^n(\rho)\times\mathbb{R}$, where $M^n(\rho)$ is a complex space form, $\pi:M^n(\rho)\times\mathbb{R}\rightarrow M^n(\rho)$ is the projection map and $\gamma:I\rightarrow M^n(\rho)$ is a Frenet curve of osculating order $r$ in $M^n(\rho)$. We also prove that such surfaces with vanishing $(2,0)$-part of $Q$ exist if and only if $\rho<0$.
The main result of Section \[sred\] is a reduction theorem, which states that a non-minimal pmc surface $\Sigma^2$ in a non-flat cosymplectic space form $N^{2n+1}(\rho)$ either is pseudo-umbilical and then the characteristic vector field is orthogonal to $\Sigma^2$ and the surface is anti-invariant, or it is not pseudo-umbilical and lies in a totally geodesic invariant submanifold of $N^{2n+1}(\rho)$ with dimension less than or equal to $11$. The last Section is devoted to the study of anti-invariant pmc surfaces. We prove that any non-minimal anti-invariant pmc $2$-sphere in $M^2(\rho)\times\mathbb{R}$ is an embedded rotationally invariant cmc sphere $S_H^2\subset\bar M^2(\frac{\rho}{4})\times\mathbb{R}$, where $\bar M^2(\frac{\rho}{4})$ is a complete simply-connected surface with constant curvature $\frac{\rho}{4}$, immersed as a totally-geodesic Lagrangian submanifold in the complex space form $M^2(\rho)$. When the dimension of the ambient space is greater than $5$, we show that a non-minimal non-pseudo-umbilical anti-invariant $2$-sphere immersed in $M^n(\rho)\times\mathbb{R}$ lies in a product space $\bar M^4(\frac{\rho}{4})\times\mathbb{R}$, where $\bar M^4(\frac{\rho}{4})$ is a space form immersed as a totally geodesic totally real submanifold in $M^n(\rho)$. Preliminaries {#sintro} ============= Let $M^n(\rho)$ be a complex space form with the complex structure $(J,\langle,\rangle_M)$, consider the product manifold $N^{2n+1}=M^n(\rho)\times\mathbb{R}$ and define the following tensors on $N^{2n+1}$: $$\varphi=J\circ d\pi,\quad\xi=\frac{\partial}{\partial t},\quad\eta=dt\quad\textnormal{and}\quad \langle,\rangle_N=\langle,\rangle_M+dt\otimes dt,$$ where $\pi:M^n(\rho)\times\mathbb{R}\rightarrow M^n(\rho)$ is the projection map and $t$ is the standard coordinate function on the real axis. Then $(N^{2n+1},\varphi,\xi,\eta,\langle,\rangle_N)$ is a cosymplectic space form with constant $\varphi$-sectional curvature equal to $\rho$ (see [@ABC; @BG]). 
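As a quick check that the tensors $\varphi=J\circ d\pi$, $\xi=\frac{\partial}{\partial t}$, $\eta=dt$ defined above do give an almost contact structure on $N^{2n+1}=M^n(\rho)\times\mathbb{R}$ (a routine verification, recorded here for the reader's convenience), write a tangent vector $U$ as $U=d\pi(U)+\eta(U)\xi$; since $J^2=-\mathrm{id}$ on $TM^n(\rho)$ and $d\pi(\xi)=0$, one obtains

```latex
\varphi^{2}U = J\,d\pi\big(J\,d\pi(U)\big) = J^{2}\,d\pi(U)
             = -\,d\pi(U) = -U + \eta(U)\,\xi = -U + \langle U,\xi\rangle_{N}\,\xi ,
```

which is precisely the first defining identity of an almost contact metric structure; the compatibility of $\varphi$ with $\langle,\rangle_N$ follows in the same way from the Hermitian property of $(J,\langle,\rangle_M)$.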
We shall explain what this means in the following. An *almost contact metric structure* on an odd-dimensional manifold $N^{2n+1}$ is given by $(\varphi,\xi,\eta,\langle,\rangle)$, where $\varphi$ is a tensor field of type $(1,1)$ on $N$, $\xi$ is a vector field, $\eta$ is its dual $1$-form and $\langle,\rangle$ is a Riemannian metric such that $$\varphi^{2}U=-U+\langle U,\xi\rangle\xi\quad\textnormal{and}\quad \langle\varphi U,\varphi V\rangle=\langle U,V\rangle-\eta(U)\eta(V),$$ for all tangent vector fields $U$ and $V$. An almost contact metric structure $(\varphi,\xi,\eta,\langle,\rangle)$ is called [*normal*]{} if $$N_{\varphi}(U,V)+2d\eta(U,V)\xi=0,$$ where $$N_{\varphi}(U,V)=[\varphi U,\varphi V]-\varphi \lbrack \varphi U,V]-\varphi \lbrack U,\varphi V]+\varphi^{2}[U,V],$$ is the Nijenhuis tensor field of $\varphi$. An almost contact metric manifold $(N,\varphi,\xi,\eta,\langle,\rangle)$ is a *cosymplectic manifold* if it is normal and both the $1$-form $\eta$ and the fundamental $2$-form $\Omega$, defined by $\Omega(U,V)=\langle U,\varphi V\rangle$, are closed. Equivalently, an almost contact metric manifold is cosymplectic if and only if $\varphi$ is parallel, i.e. $\nabla^N\varphi=0$, where $\nabla^N$ is the Levi-Civita connection. This implies that the vector field $\xi$ and the $1$-form $\eta$ are also parallel. We note that a cosymplectic manifold has a natural local product structure as a product of a Kähler manifold and a $1$-dimensional manifold, but there exist compact cosymplectic manifolds which are not global products (see [@B; @CdLM]). We also recall that a submanifold $M$ of a cosymplectic manifold is called *invariant* when $\varphi(TM)\subset TM$ and *anti-invariant* when $\varphi(TM)\subset NM$, where $NM$ is the normal bundle of $M$. Let $(N,\varphi,\xi,\eta,\langle,\rangle)$ be a cosymplectic manifold. 
The sectional curvature of a $2$-plane generated by $U$ and $\varphi U$, where $U$ is a unit vector orthogonal to $\xi$, is called the *$\varphi$-sectional curvature* determined by $U$. A cosymplectic manifold with constant $\varphi$-sectional curvature $\rho$ is called a *cosymplectic space form* and is denoted by $N(\rho)$. The curvature tensor field of a cosymplectic space form $N(\rho)$ is given by $$\label{eq:curv} \begin{array}{lcl} R^N(U,V)W&=&\frac{\rho}{4}\{\langle V,W\rangle U-\langle U,W\rangle V+\langle U,\varphi W\rangle \varphi V-\langle V,\varphi W\rangle\varphi U\\ \\&&+2\langle U,\varphi V\rangle\varphi W+\eta(U)\eta(W)V-\eta(V)\eta(W)U\\ \\&&+\langle U,W\rangle\eta(V)\xi-\langle V,W\rangle\eta(U)\xi\}. \end{array}$$ A quadratic form with holomorphic $(2,0)$-part {#sqf} ============================================== Although our main interest is to study immersed pmc surfaces in product spaces of type $M^n(\rho)\times\mathbb{R}$, where $M^n(\rho)$ is a complex space form, it is more convenient to treat the more general case where the surfaces are immersed in an arbitrary cosymplectic space form. Let $\Sigma^2$ be an immersed surface in a cosymplectic space form $N^{2n+1}(\rho)$, endowed with the cosymplectic structure $(\varphi,\xi,\eta,\langle,\rangle)$ and having constant $\varphi$-sectional curvature $\rho$. If the mean curvature vector $H$ of the surface $\Sigma^2$ is parallel in the normal bundle, i.e. $\nabla^{\perp}H=0$, then $\Sigma^2$ is called a *pmc surface*. Here the normal connection $\nabla^{\perp}$ is defined by the equation of Weingarten $$\nabla^{N}_XV=-A_VX+\nabla^{\perp}_XV,$$ for any vector field $X$ tangent to $\Sigma^2$ and any vector field $V$ normal to the surface, where $\nabla^{N}$ is the Levi-Civita connection on $N$ and $A$ is the shape operator. 
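Before introducing the quadratic form, we remark that the algebraic structure of the curvature expression \eqref{eq:curv} can be sanity-checked numerically. The sketch below is ours, not part of the paper: it realizes $\varphi$, $\xi$ and $\eta$ in the flat product model $\mathbb{C}^n\times\mathbb{R}$ (with sample values $n=2$, $\rho=-4$) and uses \eqref{eq:curv} purely as an algebraic expression, verifying its antisymmetry in the first two slots, that $R^N(U,\xi)\xi=0$ (the $\mathbb{R}$-direction is flat), and that the $\varphi$-sectional curvature is indeed $\rho$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 2, -4.0                       # sample values, not taken from the paper
dim = 2 * n + 1

# Product model C^n x R: phi acts as the complex structure J on the first
# 2n coordinates, xi is the last coordinate vector, eta(U) = <U, xi>.
J = np.zeros((dim, dim))
for k in range(n):
    J[2 * k, 2 * k + 1], J[2 * k + 1, 2 * k] = -1.0, 1.0
xi = np.zeros(dim)
xi[-1] = 1.0
phi = lambda U: J @ U
eta = lambda U: U @ xi

def R(U, V, W):
    """Right-hand side of the curvature formula of a cosymplectic space form."""
    return (rho / 4) * ((V @ W) * U - (U @ W) * V
                        + (U @ phi(W)) * phi(V) - (V @ phi(W)) * phi(U)
                        + 2 * (U @ phi(V)) * phi(W)
                        + eta(U) * eta(W) * V - eta(V) * eta(W) * U
                        + (U @ W) * eta(V) * xi - (V @ W) * eta(U) * xi)

U, V, W = rng.standard_normal((3, dim))
assert np.allclose(R(U, xi, xi), 0)           # the R-direction is flat
assert np.allclose(R(U, V, W), -R(V, U, W))   # antisymmetry in the first two slots

u = rng.standard_normal(dim)
u[-1] = 0.0                                   # u orthogonal to xi
u /= np.linalg.norm(u)
assert np.isclose(R(u, phi(u), phi(u)) @ u, rho)  # phi-sectional curvature is rho
```

The first assertion reflects the computation $R^N(U,\xi)\xi=\frac{\rho}{4}\{U-\eta(U)\xi+\eta(U)\xi-U\}=0$, which uses only $\varphi\xi=0$, $\eta(\xi)=1$ and $\langle\xi,\xi\rangle=1$.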
We define a quadratic form $Q$ on $\Sigma^2$ by $$Q(X,Y)=8|H|^2\langle\sigma(X,Y),H\rangle-\rho|H|^2\eta(X)\eta(Y)+3\rho\langle \varphi X, H\rangle\langle \varphi Y, H\rangle,$$ where $\sigma$ is the second fundamental form of the surface, and claim that the $(2,0)$-part of $Q$ is holomorphic. In order to prove this, we first consider local isothermal coordinates $(u,v)$ on $\Sigma^2$. Then $ds^2=\lambda^2(du^2+dv^2)$, and we define $z=u+iv$, $\widehat z=u-iv$, $dz=\frac{1}{\sqrt{2}}(du+idv)$, $d\widehat z=\frac{1}{\sqrt{2}}(du-idv)$ and $$Z=\frac{1}{\sqrt{2}}\Big(\frac{\partial}{\partial u}-i\frac{\partial}{\partial v}\Big),\quad \widehat Z=\frac{1}{\sqrt{2}}\Big(\frac{\partial}{\partial u}+i\frac{\partial}{\partial v}\Big).$$ We get $\langle Z,\widehat Z\rangle=\langle\frac{\partial}{\partial u},\frac{\partial}{\partial u}\rangle=\langle\frac{\partial}{\partial v},\frac{\partial}{\partial v}\rangle=\lambda^2$. We mention that this rather unusual notation for the conjugation is used only for the reader’s convenience. Now, we shall compute $$\widehat Z(Q(Z,Z))=\widehat Z(8|H|^2\langle\sigma(Z,Z),H\rangle-\rho|H|^2(\eta(Z))^2+3\rho\langle \varphi Z, H\rangle^2).$$ We have $$\begin{array}{ll} \widehat Z(\langle\sigma(Z,Z),H\rangle)&=\langle\nabla^N_{\widehat Z}\sigma(Z,Z),H\rangle+\langle\sigma(Z,Z),\nabla^N_{\widehat Z}H\rangle\\ \\&=\langle\nabla^{\perp}_{\widehat Z}\sigma(Z,Z),H\rangle+\langle\sigma(Z,Z),\nabla^{\perp}_{\widehat Z}H\rangle\\ \\&=\langle(\nabla^{\perp}_{\widehat Z}\sigma)(Z,Z),H\rangle+\langle\sigma(Z,Z),\nabla^{\perp}_{\widehat Z}H\rangle, \end{array}$$ since $$(\nabla^{\perp}_{\widehat Z}\sigma)(Z,Z)=\nabla^{\perp}_{\widehat Z}\sigma(Z,Z)-2\sigma(\nabla_{\widehat Z}Z,Z)=\nabla^{\perp}_{\widehat Z}\sigma(Z,Z)$$ and $\nabla_{\widehat Z}Z=0$, by the definition of the connection $\nabla$ on the surface. 
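For completeness, the vanishing of $\nabla_{\widehat Z}Z$ is the standard fact about conformal metrics: writing the metric as $ds^2=2\lambda^2\,dz\,d\widehat z$, the only non-vanishing Christoffel symbols in the frame $\partial_z$, $\partial_{\widehat z}$ are

```latex
\Gamma^{z}_{zz}=\partial_z\log\lambda^2
\quad\textnormal{and}\quad
\Gamma^{\widehat z}_{\widehat z\widehat z}=\partial_{\widehat z}\log\lambda^2,
```

so $\nabla_{\widehat Z}Z=0$, while $\nabla_ZZ=(\partial_z\log\lambda^2)Z$ is proportional to $Z$.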
Next, using the Codazzi equation, we get $$\label{eq:1} \begin{array}{lll} \widehat Z(\langle \sigma(Z,Z),H\rangle)&=&\langle(\nabla^{\perp}_{Z}\sigma)(\widehat Z,Z),H\rangle+\langle (R^N(\widehat Z,Z)Z)^{\perp},H\rangle\\ \\&&+\langle\sigma(Z,Z),\nabla^{\perp}_{\widehat Z}H\rangle\\ \\&=&\langle(\nabla^{\perp}_{Z}\sigma)(\widehat Z,Z),H\rangle+\langle R^N(\widehat Z,Z)Z,H\rangle+\langle\sigma(Z,Z),\nabla^{\perp}_{\widehat Z}H\rangle. \end{array}$$ From the expression \eqref{eq:curv} of the curvature tensor field of $N$, it follows that $$\label{eq:2} \langle R^N(\widehat Z,Z)Z,H\rangle=\frac{\rho}{4}\{\langle Z,\widehat Z\rangle\eta(Z)\eta(H)+3\langle\widehat Z,\varphi Z\rangle\langle H,\varphi Z\rangle\}.$$ Working just as in [@AdCT] (or in [@F]), we can prove that $$\label{eq:3} \langle(\nabla^{\perp}_{Z}\sigma)(\widehat Z,Z),H\rangle=\langle\widehat Z,Z\rangle\langle\nabla^{\perp}_ZH,H\rangle.$$ Indeed, if we consider the unit vector fields $e_1$ and $e_2$ corresponding to $\frac{\partial}{\partial u}$ and $\frac{\partial}{\partial v}$, respectively, then we get $Z=\frac{\lambda}{\sqrt{2}}(e_1-ie_2)$ and $$\sigma(\widehat Z,Z)=\frac{\lambda^2}{2}\sigma(e_1-ie_2,e_1+ie_2)=\frac{\lambda^2}{2}(\sigma(e_1,e_1)+\sigma(e_2,e_2))=\langle\widehat Z,Z\rangle H.$$ Since we also have $\nabla_ZZ=\frac{1}{\lambda^2}\langle\nabla_ZZ,\widehat Z\rangle Z$, it follows that $$\begin{array}{lll} \langle(\nabla^{\perp}_{Z}\sigma)(\widehat Z,Z),H\rangle&=&\langle\nabla^N_Z\sigma(\widehat Z,Z),H\rangle-\langle\sigma(\nabla_Z\widehat Z, Z), H\rangle-\langle\sigma(\widehat Z,\nabla_ZZ), H\rangle\\ \\ &=&\langle\nabla^N_Z(\langle\widehat Z,Z\rangle H),H\rangle-\frac{1}{\lambda^2}\langle\nabla_ZZ,\widehat Z\rangle \langle\sigma(\widehat Z,Z), H\rangle \\ \\&=&\langle\nabla^N_Z(\langle\widehat Z,Z\rangle H),H\rangle-\langle\nabla_ZZ,\widehat Z\rangle\langle H,H\rangle\\ \\&=&\langle\nabla_Z\widehat Z,Z\rangle\langle H,H\rangle+\langle\nabla_ZZ,\widehat Z\rangle\langle H,H\rangle\\ \\&&+\langle\widehat 
Z,Z\rangle\langle\nabla^{\perp}_ZH,H\rangle-\langle\nabla_ZZ,\widehat Z\rangle\langle H,H\rangle\\ \\&=&\langle\widehat Z,Z\rangle\langle\nabla^{\perp}_ZH,H\rangle. \end{array}$$ Replacing \eqref{eq:2} and \eqref{eq:3} in \eqref{eq:1}, and using the fact that $H$ is parallel, it follows that $$\label{eq:term1} \widehat Z(\langle \sigma(Z,Z),H\rangle)=\frac{\rho}{4}\{\langle Z,\widehat Z\rangle\eta(Z)\eta(H)+3\langle\widehat Z,\varphi Z\rangle\langle H,\varphi Z\rangle\}.$$ As the characteristic vector field $\xi$ is parallel, the identity $\nabla^N_{\widehat Z}Z=\langle\widehat Z,Z\rangle H$ also implies that $$\label{eq:term2} \widehat Z((\eta(Z))^2)=2\langle Z,\widehat Z\rangle\eta(Z)\eta(H).$$ Finally, since $\nabla^N\varphi=0$ and $H$ is parallel, using $\nabla^N_{\widehat Z}Z=\sigma(\widehat Z,Z)=\langle\widehat Z,Z\rangle H$ and $(\varphi Z)^{\top}=\frac{1}{\lambda^2}\langle \varphi Z,\widehat Z\rangle Z$, that can be easily checked, one obtains $$\label{eq:term3} \begin{array}{lll} \widehat Z(\langle\varphi Z,H\rangle^2)&=&2\langle \varphi Z,H\rangle\{\langle\nabla^N_{\widehat Z}\varphi Z,H\rangle+\langle\varphi Z,\nabla^N_{\widehat Z}H\rangle\}\\ \\ &=&2\langle \varphi Z,H\rangle\{\langle\varphi \nabla^N_{\widehat Z}Z,H\rangle+\langle\varphi Z,\nabla^N_{\widehat Z}H\rangle\}\\ \\&=&2\langle\varphi Z,H\rangle\{\langle\widehat Z,Z\rangle\langle\varphi H,H\rangle-\langle(\varphi Z)^{\top},A_H\widehat Z\rangle+\langle(\varphi Z)^{\perp},\nabla^{\perp}_{\widehat Z}H\rangle\}\\ \\ &=&-2\langle\varphi Z,H\rangle\langle\sigma((\varphi Z)^{\top},\widehat Z),H\rangle\\ \\&=&-2\langle \varphi Z,H\rangle\langle\varphi Z,\widehat Z\rangle|H|^2. \end{array}$$ From \eqref{eq:term1}, \eqref{eq:term2} and \eqref{eq:term3} we see that $\widehat Z(Q(Z,Z))=0$, and we can state the following. If $\Sigma^2$ is an immersed pmc surface in a cosymplectic space form $N^{2n+1}(\rho)$, then the $(2,0)$-part of the quadratic form $Q$, defined on $\Sigma^2$ by $$Q(X,Y)=8|H|^2\langle\sigma(X,Y),H\rangle-\rho|H|^2\eta(X)\eta(Y)+3\rho\langle \varphi X, H\rangle\langle \varphi Y, H\rangle,$$ is holomorphic. 
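For the reader's convenience, the final cancellation can be spelled out. Since $|H|^2$ is constant ($H$ being parallel), the three derivatives above enter $\widehat Z(Q(Z,Z))$ with the coefficients $8|H|^2$, $-\rho|H|^2$ and $3\rho$ from the definition of $Q$, so that

```latex
\begin{array}{lll}
\widehat Z(Q(Z,Z))&=&2\rho|H|^2\langle Z,\widehat Z\rangle\eta(Z)\eta(H)
  +6\rho|H|^2\langle\widehat Z,\varphi Z\rangle\langle H,\varphi Z\rangle\\
\\&&-2\rho|H|^2\langle Z,\widehat Z\rangle\eta(Z)\eta(H)
  -6\rho|H|^2\langle\varphi Z,H\rangle\langle\varphi Z,\widehat Z\rangle=0,
\end{array}
```

the $\eta$-terms and the $\varphi$-terms cancelling in pairs.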
Vertical cylinders with parallel mean curvature vector in product spaces {#scyl} ======================================================================== Let $\gamma:I\subset\mathbb{R}\rightarrow M^n(\rho)$ be a curve parametrized by arc-length in a complex space form with complex dimension $n$ and constant holomorphic sectional curvature $\rho$, i.e. $\mathbb{C}P^n(\rho)$, $\mathbb{C}^n$ or $\mathbb{C}H^n(\rho)$ according as $\rho>0$, $\rho=0$ or $\rho<0$. The curve $\gamma$ is called a [*Frenet curve of osculating order*]{} $r$, $1\leq r\leq 2n$, if there exist $r$ orthonormal vector fields $\{E_1=\gamma',\ldots,E_{r}\}$ along $\gamma$ such that $$\begin{cases} \nabla^M_{E_{1}}E_{1}=\kappa_{1}E_{2} \\ \nabla^M_{E_{1}}E_{i}=-\kappa_{i-1}E_{i-1} + \kappa_{i}E_{i+1}, \quad \forall i=2,\dots,r-1, \\ \nabla^M_{E_{1}}E_{r}=-\kappa_{r-1}E_{r-1} \end{cases}$$ where $\{\kappa_{1},\kappa_{2},\kappa_{3},\ldots,\kappa_{r-1}\}$ are positive functions on $I$ called the [*curvatures*]{} of $\gamma$ and $\nabla^M$ denotes the Levi-Civita connection on $M^n(\rho)$. A Frenet curve of osculating order $r$ is called a [*helix of order $r$*]{} if $\kappa_i=\operatorname{constant}>0$ for $1\leq i\leq r-1$. A helix of order $2$ is called a [*circle*]{}, and a helix of order $3$ is simply called a [*helix*]{}. S. Maeda and Y. Ohnita defined in [@MO] the [*complex torsions*]{} of the curve $\gamma$ by $\tau_{ij}=\langle E_i, J E_j \rangle$, $1\leq i<j\leq r$, where $(J,\langle,\rangle)$ is the complex structure on $M^n(\rho)$. A helix of order $r$ is called a [*holomorphic helix of order $r$*]{} if all the complex torsions are constant. It is easy to see that a circle is always a holomorphic circle. 
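As a concrete illustration (a sketch of ours in the flat model $\mathbb{C}\cong\mathbb{R}^2$ with $J(x,y)=(-y,x)$; the radius $r$ and the orientation are our choices, not data from the paper), one can check symbolically that a round circle of radius $r$ is a holomorphic circle with $\kappa_1=1/r$ and constant complex torsion $\tau_{12}=-1$:

```python
import sympy as sp

s, r = sp.symbols('s r', positive=True)

# Circle of radius r in C ~ R^2, parametrized by arc length;
# the complex structure acts as J(x, y) = (-y, x).
gamma = sp.Matrix([r * sp.cos(s / r), r * sp.sin(s / r)])
J = lambda v: sp.Matrix([-v[1], v[0]])

E1 = sp.diff(gamma, s)              # unit tangent E_1 = gamma'
dE1 = sp.diff(E1, s)
kappa1 = sp.simplify(dE1.norm())    # first curvature: kappa_1 = 1/r
E2 = sp.simplify(dE1 / kappa1)      # first normal E_2

tau12 = sp.simplify(E1.dot(J(E2)))  # complex torsion <E_1, J E_2>

assert sp.simplify(kappa1 - 1 / r) == 0
assert sp.simplify(tau12 + 1) == 0  # tau_12 = -1, constant
```

With the opposite orientation one gets $\tau_{12}=+1$; in both cases $\tau_{12}$ is constant, consistent with the fact that a circle is always a holomorphic circle.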
In order to find examples of pmc surfaces we will focus our attention on the *vertical cylinders* $\Sigma^2=\pi^{-1}(\gamma)$ in product spaces $M^n(\rho)\times\mathbb{R}$, where $\pi:M^n(\rho)\times\mathbb{R}\rightarrow M^n(\rho)$ is the projection map and $\gamma:I\rightarrow M^n(\rho)$ is a Frenet curve of osculating order $r$ in $M^n(\rho)$. For any vector field $X$ tangent to $M^n(\rho)$ we shall denote by $X^H$ its horizontal lift to $M^n(\rho)\times\mathbb{R}$. As for the Riemannian metrics on $M^n(\rho)$ and $M^n(\rho)\times\mathbb{R}$, we will use the same notation $\langle,\rangle$. Obviously, $\{E_1^H,\xi\}$ is a local orthonormal frame on $\Sigma^2$ and $E_i^H$, $1<i\leq r$, are normal vector fields. Then the mean curvature vector $H$ is given by $$H=\frac{1}{2}(\sigma(E_1^H,E_1^H)+\sigma(\xi,\xi))=\frac{1}{2}\kappa_1E_2^H,$$ where $\kappa_1=\kappa_1\circ\pi$ and we used the first Frenet equation for $\gamma$ and O’Neill’s equation [@O] in the case of cosymplectic space forms, i.e. $\nabla^N_{X^H}Y^H=(\nabla^M_XY)^H$, for any vector fields $X$ and $Y$ tangent to $M^n(\rho)$ (see also [@ABC]). Next, from the second Frenet equation, we have $$\label{eq:PMC1} \nabla^N_{E_1^H}H=\frac{1}{2}(\nabla^M_{E_1}(\kappa_1E_2))^H=\frac{1}{2}(\kappa_1'E_2-\kappa_1^2E_1+\kappa_1\kappa_2E_3)^H.$$ It is easy to verify that $\nabla_{\xi}E_1^H=\nabla_{E_1^H}\xi=0$, where $\nabla$ is the connection on the surface, and then we get that $[\xi,E_1^H]=0$, which means $\nabla^N_{\xi}E_1^H=\nabla^N_{E_1^H}\xi=0$. Now, since from \eqref{eq:curv} it follows that $R^N(\xi,E_1^H)E_1^H=0$, we obtain $$\label{eq:PMC2} \nabla^N_{\xi}H=\frac{1}{2}\nabla^N_{\xi}\nabla^N_{E_1^H}E_1^H=0.$$ From \eqref{eq:PMC1} and \eqref{eq:PMC2} we see that $H$ is parallel if and only if either

-   $\gamma$ is a geodesic in $M^n(\rho)$; or

-   $\gamma$ is a circle in $M^n(\rho)$ with the curvature $\kappa_1=2|H|=\operatorname{constant}>0$.

Obviously, in the first case, $\Sigma^2$ is a minimal surface. 
In the second case, the $(2,0)$-part of $Q$ vanishes if and only if $$16|H|^4+\rho|H|^2+3\rho\langle\varphi E_1^H,H\rangle^2=0,$$ which is equivalent to $$4\kappa_1^2+\rho(1+3\tau_{12}^2)=0.$$ Now, we can conclude. A vertical cylinder $\Sigma^2=\pi^{-1}(\gamma)$ in $M^n(\rho)\times\mathbb{R}$ has non-zero parallel mean curvature vector and the $(2,0)$-part of the quadratic form $Q$ vanishes on $\Sigma^2$ if and only if $\rho<0$ and the curve $\gamma$ is a circle in $M^n(\rho)$ with the curvature $\kappa=\frac{1}{2}\sqrt{-\rho(1+3\tau^2)}$, where $\tau$ is the complex torsion of $\gamma$. S. Maeda and T. Adachi proved in [@MA] that for any positive number $\kappa$ and for any number $\tau$, such that $|\tau|<1$, there exists a circle with curvature $\kappa$ and complex torsion $\tau$ in any complex space form. Therefore, for any $\rho<0$, we know that circles $\gamma$ as in the previous Proposition do exist. Since $0\leq\tau^2\leq 1$ we get that $\frac{1}{2}\sqrt{-\rho}\leq\kappa\leq\sqrt{-\rho}$, which means that the mean curvature of a non-minimal pmc cylinder $\Sigma^2=\pi^{-1}(\gamma)$, with vanishing $(2,0)$-part of $Q$, satisfies $\frac{\sqrt{-\rho}}{4}\leq|H|\leq\frac{\sqrt{-\rho}}{2}$. A reduction theorem {#sred} =================== Let $\Sigma^2$ be an immersed non-minimal pmc surface in a non-flat cosymplectic space form $N^{2n+1}(\rho)$, $n\geq 2$. \[lemma\_com\] For any vector $V$ normal to $\Sigma^2$, which is also orthogonal to $\varphi T\Sigma^2$ and to $\varphi H$, we have $[A_H,A_V]=0$, i.e. $A_H$ commutes with $A_V$. The conclusion follows easily from the Ricci equation $$\langle R^{\perp}(X,Y)H,V\rangle=\langle[A_H,A_V]X,Y\rangle+\langle R^N(X,Y)H,V\rangle,$$ since $$\begin{array}{lll} \langle R^N(X,Y)H,V\rangle&=&\frac{\rho}{4}\{\langle X,\varphi H\rangle\langle\varphi Y,V\rangle-\langle Y,\varphi H\rangle\langle\varphi X,V\rangle+2\langle X, \varphi Y\rangle\langle\varphi H,V\rangle\}\\ \\&=&0 \end{array}$$ and $R^{\perp}(X,Y)H=0$. 
\[lemma\_split\] Either $H$ is an umbilical direction or there exists a basis that simultaneously diagonalizes $A_H$ and $A_V$, for all normal vectors $V$ satisfying $V\perp \varphi T\Sigma^2$ and $V\perp \varphi H$. Now, assume that $H$ is an umbilical direction everywhere, which means that the surface is pseudo-umbilical, i.e. $A_H=|H|^2\operatorname{I}$. For such a surface, since $H$ is also parallel, we have $$\begin{array}{ll} R^N(X,Y)H&=\nabla_X\nabla_YH-\nabla_Y\nabla_XH-\nabla_{[X,Y]}H\\ \\&=-|H|^2(\nabla_XY-\nabla_YX-[X,Y])=0, \end{array}$$ for any tangent vector fields $X$ and $Y$. In the following we shall prove that, in this case, $\xi\perp T\Sigma^2$ and $\varphi(T\Sigma^2)\subset N\Sigma^2$, where $N\Sigma^2$ is the normal bundle of the surface. First, we have \[l1umb\] The following four relations are equivalent: 1. $\xi\perp T\Sigma^2$; 2. $H\perp\xi$; 3. $\varphi(T\Sigma^2)\subset N\Sigma^2$; 4. $\varphi H\perp T\Sigma^2$. As $H$ is umbilical, it follows that $\langle\sigma(Z,Z),H\rangle=0$ and, consequently, the $(2,0)$-part of $Q$ is, in this case, $$Q(Z,Z)=-\rho|H|^2(\eta(Z))^2+3\rho\langle \varphi Z, H\rangle^2,$$ where $Z$ and its conjugate $\widehat Z$ are the complex vectors on $\Sigma^2$, defined in Section \[sqf\]. Since $Q(Z,Z)$ is holomorphic and $H$ is umbilical and parallel, it follows that $$\langle Z,\widehat Z\rangle\eta(Z)\eta(H)+3\langle\varphi Z, H\rangle\langle\varphi Z,\widehat Z\rangle=0.$$ Now, it is easy to see that $\eta(Z)\eta(H)=0$ is equivalent to $\langle\varphi Z, H\rangle\langle\varphi Z,\widehat Z\rangle=0$, and then we only have to prove the equivalence between ([*i*]{}) and ([*ii*]{}) and between ([*iii*]{}) and ([*iv*]{}), respectively. First, if $\eta(Z)=0$ then $\eta(\nabla^N_{\widehat Z}Z)=\langle Z,\widehat Z\rangle\eta(H)=0$, as $N^{2n+1}(\rho)$ is a cosymplectic space form and $\nabla^N_{\widehat Z}Z=\langle Z,\widehat Z\rangle H$. 
Conversely, if $\eta(H)=0$, we have $$\eta(\nabla^N_ZH)=-\eta(A_HZ)=-|H|^2\eta(Z)=0.$$ Next, since $R^N(X,Y)H=0$, for any tangent vector fields $X$ and $Y$, we get $$\begin{array}{ll} 0=R^N(\widehat Z,Z)H=&\frac{\rho}{4}\{\langle\varphi H,\widehat Z\rangle\varphi Z-\langle\varphi H,Z\rangle\varphi\widehat Z+\langle\varphi Z,\widehat Z\rangle\varphi H\\ \\&+\eta(\widehat Z)\eta(H)Z-\eta(Z)\eta(H)\widehat Z\}. \end{array}$$ Assume that relation ([*iii*]{}) holds, i.e. that $\langle\varphi Z,\widehat Z\rangle=0$. As we have seen, this also implies $\eta(Z)=\eta(\widehat Z)=0$ and $\eta(H)=0$. Then, by using the definition of the cosymplectic structure on $N^{2n+1}(\rho)$, we have $$\langle R^N(\widehat Z,Z)H,\varphi Z\rangle=-\frac{\rho}{4}\langle Z,\widehat Z\rangle\langle\varphi H,Z\rangle=0.$$ Conversely, if ([*iv*]{}) holds, i.e. if $\langle\varphi H,Z\rangle=0$, we have $$\begin{array}{lcl} 0&=&\langle\nabla_{\widehat Z}\varphi H,Z\rangle=\langle\varphi\nabla_{\widehat Z}H,Z\rangle+\langle\varphi H,\nabla_{\widehat Z}Z\rangle=-\langle\varphi A_H\widehat Z,Z\rangle+\langle Z,\widehat Z\rangle\langle\varphi H,H\rangle\\ \\&=&|H|^2\langle\varphi Z,\widehat Z\rangle, \end{array}$$ and the conclusion follows. Now, let us assume that relations ([*i*]{})-([*iv*]{}) do not hold on our surface. We choose an orthonormal basis $\{e_1,e_2\}$ on $\Sigma^2$ such that $e_1\perp\xi$, i.e. $\eta(e_1)=0$. 
Then, from $\langle R^N(e_1,e_2)H,e_2\rangle=0$, we obtain $$\langle\varphi e_2,e_1\rangle\langle\varphi H,e_2\rangle=0,$$ which means that $\langle\varphi H,e_2\rangle=0$, and then $R^N(e_1,e_2)H=0$ can be written as $$\label{eq:R} 2\langle\varphi e_2,e_1\rangle\varphi H+\langle\varphi H,e_1\rangle\varphi e_2-\eta(e_2)\eta(H)e_1=0.$$ We take the product of this equation with $\varphi H$, $e_1$ and $\varphi e_2$, respectively, and obtain $$\label{eq:R1} \langle\varphi e_2,e_1\rangle\langle\varphi H,\varphi H\rangle=\eta(e_2)\eta(H)\langle\varphi H,e_1\rangle$$ $$\label{eq:R2} 3\langle\varphi e_2,e_1\rangle\langle\varphi H,e_1\rangle=\eta(e_2)\eta(H)$$ and $$\label{eq:R3} 3\langle\varphi e_2,e_1\rangle\eta(e_2)\eta(H)=\langle\varphi H,e_1\rangle\langle\varphi e_2,\varphi e_2\rangle.$$ Since $\langle\varphi e_2,e_1\rangle\neq 0$ and $\langle\varphi H,e_1\rangle\neq 0$, from the first two equations, we get $$\label{eq:R1.1} \langle\varphi H,\varphi H\rangle=|H|^2-(\eta(H))^2=3\langle\varphi H,e_1\rangle^2$$ and, from the last two, $$\label{eq:R1.2} \langle\varphi e_2,\varphi e_2\rangle=1-(\eta(e_2))^2=9\langle\varphi e_2,e_1\rangle^2.$$ \[lemma\_basis\] If the relations ([*i*]{})-([*iv*]{}) in Lemma \[l1umb\] do not hold on $\Sigma^2$ then we have 1. $2|H|^2\langle\varphi e_2,e_1\rangle=\langle\varphi H,\sigma(e_1,e_2)\rangle$; 2. $\langle\varphi H,\sigma(e_1,e_1)\rangle=\langle\varphi H,\sigma(e_2,e_2)\rangle=0$; 3. $\nabla_{e_2}e_2=\nabla_{e_2}e_1=0$; 4. $\eta(\sigma(e_1,e_2))=0$ and $\langle\varphi e_1,\sigma(e_1,e_2)\rangle=0$. From equation \eqref{eq:R1.1} it follows that $$2\langle\varphi H, \varphi\nabla^N_{e_2}H\rangle=6\langle\varphi H,e_1\rangle(\langle\varphi\nabla^N_{e_2}H,e_1\rangle+\langle\varphi H,\nabla^N_{e_2}e_1\rangle).$$ But we also know that $\nabla^N_{e_2}H=-|H|^2e_2$ and, since $\langle\varphi H, e_2\rangle=0$, that $\langle\varphi H,\nabla_{e_2}e_1\rangle=0$. 
Replacing these in the above equation and using equation \eqref{eq:R2}, one obtains $$2|H|^2\langle\varphi e_2,e_1\rangle=\langle\varphi H,\sigma(e_1,e_2)\rangle.$$ In the same manner, from equation \eqref{eq:R1.1}, we obtain $\langle\varphi H,\sigma(e_1,e_1)\rangle=0$ and then $\langle\varphi H,\sigma(e_2,e_2)\rangle=0$. As $\langle\varphi H,e_2\rangle=0$ we get $\langle\varphi H,\nabla^N_{e_2}e_2\rangle=0$, which implies $$\langle\varphi H,\nabla_{e_2}e_2\rangle=\langle\varphi H,e_1\rangle\langle\nabla_{e_2}e_2,e_1\rangle=0,$$ meaning that $\nabla_{e_2}e_2=0$. Since $e_1\perp e_2$, we also have $\nabla_{e_2}e_1=0$. Finally, $\eta(e_1)=0$ and $\nabla^N\xi=0$ imply $\eta(\nabla^N_{e_2}e_1)=0$. Since $\nabla_{e_2}e_1=0$ it follows that $\eta(\sigma(e_1,e_2))=0$. Then the last identity in our Lemma follows easily by taking the product of \eqref{eq:R} with $\varphi\sigma(e_1,e_2)$. From the expression of the curvature tensor $R^N$ it can be easily checked that $R^N$ is parallel, i.e. $\nabla^NR^N=0$. Therefore, we have $(\nabla^N_{e_1}R^N)(e_1,e_2,H)=0$ and then, as $R^N(X,Y)H=0$, for any tangent vectors $X$ and $Y$, one obtains $$|H|^2R^N(e_1,e_2,e_1)-R^N(\sigma(e_1,e_1),e_2,H)-R^N(e_1,\sigma(e_1,e_2),H)=0.$$ By using \eqref{eq:curv}, the relations above and Lemma \[lemma\_basis\], the above equation becomes, after a straightforward computation, $$\eta(\sigma(e_1,e_1))\eta(H)e_2-\eta(e_2)\eta(H)\sigma(e_1,e_1)+\langle\varphi H,e_1\rangle\varphi\sigma(e_1,e_2)-5|H|^2\langle\varphi e_2,e_1\rangle\varphi e_1=0,$$ and, by taking the product with $e_2$, we obtain that $$\label{eq:R_par} \eta(\sigma(e_1,e_1))\eta(H)+9|H|^2\langle\varphi e_2,e_1\rangle^2=0.$$ Next, from equations \eqref{eq:R1.1}, \eqref{eq:R1.2} and \eqref{eq:R2}, it follows that $$3|H|^2\langle\varphi e_2,e_1\rangle^2=(1-6\langle\varphi e_2,e_1\rangle^2)(\eta(H))^2.$$ Hence, replacing in \eqref{eq:R_par}, we get $\eta(\sigma(e_1,e_1))=3\eta(H)(6\langle\varphi e_2,e_1\rangle^2-1)$ and then $ \eta(\sigma(e_2,e_2))=\eta(H)(5-18\langle\varphi e_2,e_1\rangle^2)$, which means that $$\label{eq:eta} \eta(\nabla^N_{e_2}e_2)=\eta(H)(5-18\langle\varphi 
e_2,e_1\rangle^2),$$ since $\nabla_{e_2}e_2=0$. From equation \eqref{eq:R1.2}, we obtain $2\eta(e_2)\eta(\nabla^N_{e_2}e_2)=-18\langle\varphi e_2,e_1\rangle e_2(\langle\varphi e_2,e_1\rangle)$, and then, from \eqref{eq:eta} and \eqref{eq:R2}, it follows that $$\label{eq:e2} 3e_2(\langle\varphi e_2,e_1\rangle)=(18\langle\varphi e_2,e_1\rangle^2-5)\langle\varphi H, e_1\rangle.$$ Finally, we differentiate equation \eqref{eq:R2} and, using equations \eqref{eq:eta} and \eqref{eq:e2}, the fact that $H$ is umbilical and parallel, and Lemma \[lemma\_basis\], we obtain $$|H|^2+(5-18\langle\varphi e_2,e_1\rangle^2)(\eta(H))^2=0.$$ But, from equation \eqref{eq:R1.2}, we know that $9\langle\varphi e_2,e_1\rangle^2<1$. Therefore, the last equation gives a contradiction. Thus, it follows that $\xi\perp T\Sigma^2$, $\varphi(T\Sigma^2)\subset N\Sigma^2$, $H\perp\xi$ and $\varphi H\perp T\Sigma^2$. Now, it is easy to see that, if $\{e_1,e_2\}$ is an orthonormal frame on $\Sigma^2$, then, at any point on the surface, the system $\{e_1,e_2,\varphi e_1,\varphi e_2, H,\varphi H,\xi\}$ is linearly independent, which means that $n\geq 3$. Thus we can state the following \[p\_umb\] Let $\Sigma^2$ be an immersed non-minimal pmc surface in a non-flat cosymplectic space form $N^{2n+1}(\rho)$, $n\geq 2$. If the mean curvature vector $H$ is an umbilical direction everywhere, then $\xi\perp T\Sigma^2$, $\varphi(T\Sigma^2)\subset N\Sigma^2$ and $n\geq 3$. Moreover, $H\perp\xi$ and $\varphi H\perp T\Sigma^2$. Let $N^{2n+1}(\rho)$ be the product between a non-flat complex space form $M^n(\rho)$, with complex dimension $n$, and $\mathbb{R}$. If $\Sigma^2$ is an immersed surface in $N^{2n+1}(\rho)$ as in the previous Proposition, it follows that $\Sigma^2$ is a totally real surface in $M^n(\rho)$. Moreover, since $N^{2n+1}(\rho)$ is a product space, we have $\nabla^N_{\widehat Z}Z=\nabla^M_{\widehat Z}Z$, $\nabla^N_{Z}Z=\nabla^M_{Z}Z$ and $\nabla^N_{X}H=\nabla^M_{X}H$, for any vector field $X$ tangent to $\Sigma^2$, where we have used the fact that $H\perp\xi$. 
From these identities, we obtain that the surface is pseudo-umbilical with parallel mean curvature vector in $M^n(\rho)$. Hence, we have Let $\Sigma^2$ be an immersed non-minimal pmc surface in $M^{n}(\rho)\times\mathbb{R}$, $n\geq 2$, $\rho\neq 0$. If its mean curvature vector is an umbilical direction everywhere, then $\Sigma^2$ is a pseudo-umbilical non-minimal totally real pmc surface in $M^n(\rho)$, and $n\geq 3$. If the mean curvature vector of the surface $\Sigma^2$ is an umbilical direction everywhere, then the $(2,0)$-part of the quadratic form $Q$ defined on $\Sigma^2$ vanishes. The next step is to study the case when the mean curvature vector of the surface is nowhere umbilical. We shall prove that such a surface lies in a totally geodesic submanifold of $N^{2n+1}(\rho)$, with dimension less than or equal to $11$. \[lemma\_parallel\] Assume that $H$ is nowhere an umbilical direction. Then there exists a parallel subbundle of the normal bundle that contains the image of the second fundamental form $\sigma$ and has dimension less than or equal to $9$. We consider a subbundle $L$ of the normal bundle, given by $$L=\operatorname{span}\{\operatorname{Im}\sigma\cup(\varphi(\operatorname{Im}\sigma))^{\perp}\cup(\varphi(T\Sigma^2))^{\perp}\cup\xi^{\perp}\},$$ where $(\varphi(T\Sigma^2))^{\perp}=\{(\varphi X)^{\perp}:X\ \textnormal{tangent to}\ \Sigma^2\}$, $(\varphi(\operatorname{Im}\sigma))^{\perp}=\{(\varphi\sigma(X,Y))^{\perp}:X,Y\ \textnormal{tangent to}\ \Sigma^2\}$ and $\xi^{\perp}$ is the normal component of $\xi$ along the surface. We will show that $L$ is parallel. 
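Before doing so, note where the dimension bound comes from: since $\sigma$ is symmetric, $\operatorname{Im}\sigma$ is spanned by $\sigma(e_1,e_1)$, $\sigma(e_1,e_2)$ and $\sigma(e_2,e_2)$, so

```latex
\dim L\leq 3+3+2+1=9,
```

the four contributions coming from $\operatorname{Im}\sigma$, $(\varphi(\operatorname{Im}\sigma))^{\perp}$, $(\varphi(T\Sigma^2))^{\perp}$ and $\xi^{\perp}$, respectively. Together with the tangent plane, this yields the bound $2+9=11$ on the dimension of the totally geodesic submanifold.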
First, we have to prove that if $V$ is orthogonal to $L$, then $\nabla^{\perp}_{e_i}V$ is orthogonal to $\varphi(T\Sigma^2)$ and to $\varphi H$, where $\{e_1,e_2\}$ is a frame satisfying $$\langle\sigma(e_1,e_2),V\rangle=\langle\sigma(e_1,e_2),H\rangle=0.$$ Indeed, one gets $$\begin{array}{lll} \langle(\varphi H)^{\perp},\nabla^{\perp}_{e_i}V\rangle &=&\langle(\varphi H)^{\perp},\nabla^N_{e_i}V\rangle =-\langle\nabla^N_{e_i}(\varphi H)^{\perp},V\rangle\\ \\&=&-\langle\nabla^N_{e_i}\varphi H,V\rangle+\langle\nabla^N_{e_i}(\varphi H)^{\top},V\rangle\\ \\&=&\langle\varphi A_He_i,V\rangle+\langle\sigma(e_i,(\varphi H)^{\top}),V\rangle\\ \\&=&0 \end{array}$$ and $$\begin{array}{lll} \langle(\varphi e_j)^{\perp},\nabla^{\perp}_{e_i}V\rangle&=&-\langle\nabla^N_{e_i}(\varphi e_j)^{\perp},V\rangle\\ \\&=&-\langle\nabla^N_{e_i}\varphi e_j,V\rangle+\langle\nabla^N_{e_i}(\varphi e_j)^{\top},V\rangle\\ \\&=&-\langle\varphi\nabla_{e_i}e_j,V\rangle-\langle\varphi\sigma(e_i,e_j),V\rangle+\langle\sigma(e_i,(\varphi e_j)^{\top}),V\rangle\\ \\&=&0. \end{array}$$ Next, we shall prove that if a normal vector $V$ is orthogonal to $L$, then so is $\nabla^{\perp}V$, i.e. $$\langle\sigma(e_i,e_j),\nabla^{\perp}_{e_k}V\rangle=0,\quad \langle \varphi\sigma(e_i,e_j),\nabla^{\perp}_{e_k}V\rangle=0,$$ $$\langle\varphi e_i,\nabla^{\perp}_{e_k}V\rangle=0,\quad\langle\xi^{\perp},\nabla^{\perp}_{e_k}V\rangle=0.$$ We only have to prove the first two identities and the last one, since the third has been obtained above. Let us denote $A_{ijk}=\langle\nabla^{\perp}_{e_k}\sigma(e_i,e_j),V\rangle$. As $\sigma$ is symmetric, we have $A_{ijk}=A_{jik}$, and also $A_{ijk}=-\langle\sigma(e_i,e_j),\nabla^{\perp}_{e_k}V\rangle$, since $V$ is orthogonal to $L$. 
We get $$\begin{array}{lll} \langle(\nabla^{\perp}_{e_k}\sigma)(e_i,e_j),V\rangle&=& \langle\nabla^{\perp}_{e_k}\sigma(e_i,e_j),V\rangle-\langle\sigma(\nabla_{e_k}e_i,e_j),V\rangle -\langle\sigma(e_i,\nabla_{e_k}e_j),V\rangle\\ \\&=&\langle\nabla^{\perp}_{e_k}\sigma(e_i,e_j),V\rangle, \end{array}$$ and, from the Codazzi equation, again using $V\perp L$, $$\begin{array}{lll} \langle(\nabla^{\perp}_{e_k}\sigma)(e_i,e_j),V\rangle&=&\langle(\nabla^{\perp}_{e_i}\sigma)(e_k,e_j)+(R^N(e_k,e_i)e_j)^{\perp},V\rangle\\ \\&=&\langle(\nabla^{\perp}_{e_j}\sigma)(e_k,e_i)+(R^N(e_k,e_j)e_i)^{\perp},V\rangle\\ \\&=&\langle(\nabla^{\perp}_{e_i}\sigma)(e_k,e_j),V\rangle=\langle(\nabla^{\perp}_{e_j}\sigma)(e_k,e_i),V\rangle. \end{array}$$ We have just proved that $A_{ijk}=A_{kji}=A_{ikj}$. Next, since $\nabla^{\perp}_{e_k}V$ is orthogonal to $\varphi(T\Sigma^2)$ and to $\varphi H$, it follows that the frame field $\{e_1,e_2\}$ diagonalizes $A_{\nabla^{\perp}_{e_k}V}$ as well, and we get $$A_{ijk}=-\langle\sigma(e_i,e_j),\nabla^{\perp}_{e_k}V\rangle=-\langle e_i,A_{\nabla^{\perp}_{e_k}V}e_j\rangle=0$$ for any $i\neq j$. Hence, we have obtained that if two indices are different from each other then $A_{ijk}=0$. Next, we have $$\begin{array}{lll} A_{iii}&=&-\langle\sigma(e_i,e_i),\nabla^{\perp}_{e_i}V\rangle =-\langle 2H,\nabla^{\perp}_{e_i}V\rangle+\langle\sigma(e_j,e_j),\nabla^{\perp}_{e_i}V\rangle\\ \\&=&\langle 2\nabla^{\perp}_{e_i}H,V\rangle-A_{jji}=0, \end{array}$$ and, therefore, the first identity is proved. In order to obtain the second one, we observe first that if $V$ is orthogonal to $L$ then $\varphi V$ is also normal and orthogonal to $L$. 
It follows that $$\begin{array}{lll} \langle(\varphi\sigma(e_i,e_j))^{\perp},\nabla^{\perp}_{e_k} V\rangle &=&-\langle\nabla^N_{e_k}(\varphi\sigma(e_i,e_j))^{\perp},V\rangle\\ \\&=&-\langle\nabla^N_{e_k}\varphi\sigma(e_i,e_j),V\rangle+ \langle\nabla^N_{e_k}(\varphi\sigma(e_i,e_j))^{\top},V\rangle\\ \\ &=&\langle\varphi A_{\sigma(e_i,e_j)}e_k,V\rangle-\langle\varphi\nabla^{\perp}_{e_k}\sigma(e_i,e_j),V\rangle\\ \\&&+\langle\sigma(e_k,(\varphi\sigma(e_i,e_j))^{\top}),V\rangle\\ \\&=&\langle\nabla^{\perp}_{e_k}\sigma(e_i,e_j),\varphi V\rangle=-\langle\sigma(e_i,e_j),\nabla^{\perp}_{e_k}\varphi V\rangle \\ \\&=&0. \end{array}$$ Finally, we get $$\begin{array}{ll} \langle\xi^{\perp},\nabla^{\perp}_{e_k}V\rangle&=\langle\xi^{\perp},\nabla^N_{e_k}V\rangle =-\langle\nabla^N_{e_k}\xi^{\perp},V\rangle\\ \\&=-\langle\nabla^N_{e_k}\xi,V\rangle+\langle\nabla^N_{e_k}\xi^{\top},V\rangle=\langle \sigma(e_k,\xi^{\top}),V\rangle\\ \\&=0, \end{array}$$ which completes the proof. Since $\varphi(L\oplus T\Sigma^2)\subset L\oplus T\Sigma^2$ and $\xi\in L\oplus T\Sigma^2$ along the surface, it follows that $R^N(X,Y)Z\in L\oplus T\Sigma^2$ for any $X,Y,Z\in L\oplus T\Sigma^2$. Therefore, by using a result of J. H. Eschenburg and R. Tribuzy (Theorem 2 in [@ET]) and the result of H. Endo in [@E], we get \[p\_numb\] Let $\Sigma^2$ be an immersed non-minimal pmc surface in a non-flat cosymplectic space form $N^{2n+1}(\rho)$, $n\geq 2$. If its mean curvature vector is nowhere an umbilical direction, then the surface lies in a cosymplectic space form $N^{r}(\rho)$, where $r\leq 11$. If we consider the cosymplectic space form $N^{2n+1}(\rho)$ to be the product between a complex space form $M^{n}(\rho)$ and $\mathbb{R}$ and use again the facts that $\varphi(L\oplus T\Sigma^2)\subset L\oplus T\Sigma^2$ and $\xi\in L\oplus T\Sigma^2$, then we have the following Let $\Sigma^2$ be an immersed non-minimal pmc surface in $M^{n}(\rho)\times\mathbb{R}$, $n\geq 2$, $\rho\neq 0$. 
If its mean curvature vector is nowhere an umbilical direction, then the surface lies in $M^{r}(\rho)\times\mathbb{R}$, where $r\leq 5$. \[rem\_split\] Since the map $p\in\Sigma^2\rightarrow(A_H-\mu\operatorname{I})(p)$, where $\mu$ is a constant, is analytic, it follows that if $H$ is an umbilical direction, then this either holds on the whole of $\Sigma^2$ or only on a closed set without interior points. In this second case $H$ is not an umbilical direction in an open dense set, and then Proposition \[lemma\_parallel\] holds on this set. By continuity it holds on $\Sigma^2$. Consequently, only the two cases studied above can occur. Summarizing, we can state \[th\_red\] Let $\Sigma^2$ be an immersed non-minimal pmc surface in a non-flat cosymplectic space form $N^{2n+1}(\rho)$, $n\geq 2$. Then, one of the following holds: 1. $\Sigma^2$ is pseudo-umbilical and then $\xi\perp T\Sigma^2$, $\varphi(T\Sigma^2)\subset N\Sigma^2$, $H\perp\xi$, $\varphi H\perp T\Sigma^2$ and $n\geq 3$; or 2. $\Sigma^2$ is not pseudo-umbilical and lies in a cosymplectic space form $N^{r}(\rho)$, where $r\leq 11$. \[c\_red\] Let $\Sigma^2$ be an immersed non-minimal pmc surface in $N^{2n+1}(\rho)=M^{n}(\rho)\times\mathbb{R}$, where $M^n(\rho)$ is a non-flat complex space form, with complex dimension $n\geq 2$. Then one of the following holds: 1. $\Sigma^2$ is pseudo-umbilical in $N^{2n+1}(\rho)$ and then it is a pseudo-umbilical non-minimal totally real pmc surface in $M^n(\rho)$ and $n\geq 3$; or 2. $\Sigma^2$ is not pseudo-umbilical in $N^{2n+1}(\rho)$ and then it lies in $M^{r}(\rho)\times\mathbb{R}$, where $r\leq 5$. 
Anti-invariant pmc surfaces {#santi} =========================== Let $\Sigma^2$ be an immersed non-minimal anti-invariant pmc surface in a non-flat cosymplectic space form $N^{2n+1}(\rho)$ and define a new quadratic form $Q'$ on $\Sigma^2$ by $$Q'(X,Y)=8\langle\sigma(X,Y),H\rangle-\rho\eta(X)\eta(Y).$$ In the same way as in Section \[sqf\] it can be proved that the $(2,0)$-part of $Q'$ is holomorphic. In the following, we shall assume that the $(2,0)$-parts of $Q$ and $Q'$ vanish on the surface, i.e. the following equations hold on $\Sigma^2$: $$\label{eq:q'e} \begin{cases} 8|H|^2\langle\sigma(e_1,e_1)-\sigma(e_2,e_2),H\rangle-\rho|H|^2((\eta(e_1))^2-(\eta(e_2))^2)\\+3\rho(\langle\varphi e_1,H\rangle^2-\langle\varphi e_2,H\rangle^2)=0\\8|H|^2\langle\sigma(e_1,e_2),H\rangle-\rho|H|^2\eta(e_1)\eta(e_2)+3\rho\langle\varphi e_1,H\rangle\langle\varphi e_2,H\rangle=0 \end{cases}$$ and $$\label{eq:q''e} \begin{cases} 8\langle\sigma(e_1,e_1)-\sigma(e_2,e_2),H\rangle-\rho((\eta(e_1))^2-(\eta(e_2))^2)=0\\ 8\langle\sigma(e_1,e_2),H\rangle-\rho\eta(e_1)\eta(e_2)=0, \end{cases}$$ where $\{e_1,e_2\}$ is an orthonormal frame on the surface. From these equations it follows that $\xi$ is orthogonal to the surface at a point $p$ if and only if $H$ is an umbilical direction at $p$. Therefore, using Remark \[rem\_split\], we obtain that either $\xi$ is orthogonal to the surface at every point or this holds only on a closed set without interior points. From Theorem \[th\_red\] we know that the first case is possible only for $n\geq 3$. Next, if $\xi_p$ is tangent to the surface at every point $p$ of an open, connected subset of $\Sigma^2$, it follows that the Gaussian curvature $K$ of $\Sigma^2$ vanishes on this set, since $\xi$ is parallel. Therefore, $K$ vanishes on the whole surface, and this cannot occur for $2$-spheres. We studied this case, however, in Section \[scyl\], where $N^{2n+1}(\rho)$ is the product between a non-flat complex space form and the Euclidean line $\mathbb{R}$. 
In general, for a surface in an arbitrary cosymplectic space form $N^{2n+1}(\rho)$, we can choose an orthonormal frame $\{e_1,\xi\}$ on the surface, and easily see that $\sigma(\xi,\xi)=0$, $\sigma(e_1,\xi)=0$ and $\sigma(e_1,e_1)=2H$. Moreover, from and , we have that $H\perp\varphi e_1$ and $\rho=-16|H|^2$. We shall now use an argument from [@AdCT] in order to show that either $\xi$ is tangent to $\Sigma^2$ everywhere or this holds only on a closed set without interior points. Let $f:\Sigma^2\rightarrow\mathcal{L}(N\Sigma^2,\mathbb{R})$ be the map that takes any point $p\in\Sigma^2$ to the linear function $f_p$ on $N_p\Sigma^2$, given by $f_p(X_p)=\eta_p(X_p)$, for any normal vector $X_p$ at $p$. Obviously, $\xi$ is tangent to the surface at $p\in\Sigma^2$ if and only if $f_p$ vanishes identically. By analyticity, either $f$ is identically zero on the surface or the set of its zeroes is closed and without interior points. Now, in order to treat the case where $\xi$ has non-vanishing tangent and normal components in an open dense set $T\subset\Sigma^2$, we shall split our study into two cases, according as $n=2$ or $n\geq 3$. We will work in the open dense set $T$, and all results obtained below that hold on this set actually hold on $\Sigma^2$, by continuity. [**The case $n=2$.**]{} Let us consider the orthonormal basis $\{e_1,e_2\}$ in $T_p\Sigma^2$ for any $p\in T$, where $e_2=\frac{\xi^{\top}}{|\xi^{\top}|}$ is the unit vector in the direction of the projection of $\xi$ on the tangent space. Then, since $\eta(e_1)=0$ and, from and , we have $\varphi e_1\perp H$ and $\varphi e_2\perp H$, it follows that $\{e_1,e_2,e_3=\varphi e_1,e_4=\frac{\varphi e_2}{|\varphi e_2|},e_5=\frac{H}{|H|}\}$ is an orthonormal basis in $T_pN^5$. Observe that, at any point $p\in T$, the characteristic vector field $\xi$ can be written as $$\label{eq:xi} \xi=\mu e_2+\nu e_5,$$ where $\mu=\eta(e_2)$ and $\nu=\eta(e_5)=\frac{\eta(H)}{|H|}$; the function $\nu$ is called the *angle function*. 
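When $\xi$ is tangent to the surface, the two relations $H\perp\varphi e_1$ and $\rho=-16|H|^2$ quoted above can be checked directly. Writing the first equations of the two systems of Section \[santi\] in the frame $\{e_1,e_2=\xi\}$, where $\eta(e_1)=0$, $\eta(e_2)=1$, $\sigma(e_2,e_2)=0$, $\sigma(e_1,e_1)=2H$ and $\varphi e_2=\varphi\xi=0$, a short verification reads:

```latex
% first equation of (eq:q''e), with eta(e_1)=0, eta(e_2)=1, sigma(e_2,e_2)=0:
8\langle\sigma(e_1,e_1),H\rangle+\rho=16|H|^2+\rho=0
  \quad\Longrightarrow\quad \rho=-16|H|^2;
% first equation of (eq:q'e), using the value of rho just found:
16|H|^4+\rho|H|^2+3\rho\langle\varphi e_1,H\rangle^2
  =-48|H|^2\langle\varphi e_1,H\rangle^2=0
  \quad\Longrightarrow\quad H\perp\varphi e_1.
```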
Next, from the second equation of , we get that $\{e_1,e_2\}$ diagonalizes $A_H$. Moreover, using the Ricci equation, one obtains that $\{e_1,e_2\}$ also diagonalizes $A_{\varphi e_1}$ and $A_{\varphi e_2}$, since $\langle R^N(e_1,e_2)H,\varphi e_1\rangle=\langle R^N(e_1,e_2)H,\varphi e_2\rangle=0$. Finally, the first equation of leads to $$\label{eq:ah} A_{e_5}=\left(\begin{array}{cc}\lambda_1&0\\0&\lambda_2\end{array}\right) =\left(\begin{array}{cc}|H|(1-\frac{\rho}{16|H|^2}\mu^2)&0 \\0&|H|(1+\frac{\rho}{16|H|^2}\mu^2)\end{array}\right).$$ \[lemma:calc5\] The following identities hold: 1. $e_1(\mu)=e_1(\nu)=0$; 2. $e_2(\mu)=\lambda_2\nu$ and $e_2(\nu)=-\lambda_2\mu$; 3. $\nabla_{e_1}e_1=-\lambda_1\frac{\nu}{\mu}e_2$ and $\nabla_{e_2}e_2=0$; 4. $\sigma(e_i,e_i)=\frac{\lambda_i}{|H|}H$, $i\in\{1,2\}$. The fact that $\xi$ is parallel, and imply that $$\begin{array}{lll} 0&=&\nabla^N_{e_1}\xi=\nabla^N_{e_1}(\mu e_2+\nu e_5) \\ \\&=&e_1(\mu)e_2+\mu\nabla_{e_1}e_2-\nu A_{e_5}e_1+\mu\sigma(e_1,e_2)+e_1(\nu)e_5 \\ \\&=&e_1(\mu)e_2+\mu\nabla_{e_1}e_2-\lambda_1\nu e_1+e_1(\nu)e_5. \end{array}$$ The tangent and the normal part in the right hand side vanish and then, since $\nabla_{e_1}e_2\perp e_2$, it follows that $e_1(\mu)=e_1(\nu)=0$ and $\nabla_{e_1}e_2=\lambda_1\frac{\nu}{\mu}e_1$. As $\langle\nabla_{e_1}e_2,e_1\rangle+\langle\nabla_{e_1}e_1,e_2\rangle=0$ and $\nabla_{e_1}e_1\perp e_1$, the last identity is equivalent to $\nabla_{e_1}e_1=-\lambda_1\frac{\nu}{\mu}e_2$. In the same way, we get $$\begin{array}{lll} 0&=&\nabla^N_{e_2}\xi=\nabla^N_{e_2}(\mu e_2+\nu e_5) \\ \\&=&e_2(\mu)e_2+\mu\nabla_{e_2}e_2-\nu A_{e_5}e_2+\mu\sigma(e_2,e_2)+e_2(\nu)e_5 \\ \\&=&e_2(\mu)e_2+\mu\nabla_{e_2}e_2-\lambda_2\nu e_2+e_2(\nu)e_5+\mu\sigma(e_2,e_2) \end{array}$$ and then $\nabla_{e_2}e_2=0$, $e_2(\mu)=\lambda_2\nu$ and, since $\mu^2+\nu^2=1$, $e_2(\nu)=-\lambda_2\mu$. 
We also obtain that $\sigma(e_2,e_2)=-\frac{e_2(\nu)}{\mu}e_5=\frac{\lambda_2}{|H|}H$ and $\sigma(e_1,e_1)=2H-\sigma(e_2,e_2)=\frac{\lambda_1}{|H|}H$. A direct consequence of the previous Lemma is that $A_{\varphi e_1}$ and $A_{\varphi e_2}$ vanish and then the only non-zero component of $A$ is $A_{e_5}$. Now, assume that the characteristic vector field $\xi$ is either tangent to the surface or it has non vanishing tangent and normal components in an open dense set $T\subset\Sigma^2$, and consider the subbundle of the normal bundle $L=\operatorname{Im}\sigma$. It is easy to see that $L$ is parallel, $\dim L=1$, $\varphi X\perp Y$, for any $X,Y\in T\Sigma^2\oplus L$ and that $T\Sigma^2\oplus L$ is invariant by $R^N$, since $\xi\in T\Sigma^2\oplus L$ along the surface. On the other hand, any non-minimal cmc surface immersed in an anti-invariant totally geodesic $3$-dimensional submanifold of $N^5(\rho)$ is an immersed non-minimal anti-invariant pmc surface in $N^5(\rho)$. Moreover, if we assume that the $(2,0)$-part of $Q'$ vanishes on such a surface, it follows that also the $(2,0)$-part of $Q$ vanishes. Therefore, using Theorem $2$ in [@ET], we get A surface $\Sigma^2$ can be immersed as a non-minimal anti-invariant pmc surface in a non-flat cosymplectic space form $N^5(\rho)$, with vanishing $(2,0)$-parts of the quadratic forms $Q$ and $Q'$, if and only if $\Sigma^2$ is an immersed non-minimal cmc surface in a $3$-dimensional totally geodesic anti-invariant submanifold of $N^5(\rho)$, such that the $(2,0)$-part of $Q'$ vanishes. The $3$-dimensional totally geodesic anti-invariant submanifolds of $M^2(\rho)\times\mathbb{R}$, where $M^2$ is a non-flat complex space form, are $\bar M^2\times\mathbb{R}$, where $\bar M^2$ is a totally geodesic Lagrangian submanifold of $M^2(\rho)$. B.-Y. Chen and K. 
Ogiue proved in [@CO] (Proposition 3.2) that a totally geodesic totally real submanifold $\bar M^m$ of a non-flat complex space form $M^n(\rho)$ is necessarily a space form with constant curvature $\frac{\rho}{4}$. Moreover, it is known that $\mathbb{S}^2(\frac{\rho}{4})$ and $\mathbb{H}^2(\frac{\rho}{4})$ can be isometrically immersed as totally geodesic Lagrangian submanifolds in $\mathbb{C}P^2(\rho)$ and $\mathbb{C}H^2(\rho)$, respectively (see [@CMU]). Hence, an immersed non-minimal anti-invariant pmc surface in $M^2(\rho)\times\mathbb{R}$ on which the $(2,0)$-parts of $Q$ and $Q'$ vanish is a non-minimal cmc surface in $\bar M^2(\frac{\rho}{4})\times\mathbb{R}$ with vanishing $(2,0)$-part of $Q'$, which in this case is just the Abresch-Rosenberg differential introduced in [@AR], where $\bar M^2(\frac{\rho}{4})$ is a complete simply-connected surface with constant curvature $\frac{\rho}{4}$. U. Abresch and H. Rosenberg proved that there are four classes of such surfaces: the first three of them, namely the cmc spheres $S_H^2\subset\bar M^2(\frac{\rho}{4})\times\mathbb{R}$ of Hsiang and Pedrosa, their non-compact cousins $D_H^2$ and the surfaces of catenoidal type $C_H^2$, are embedded and rotationally invariant, while the fourth one consists of the parabolic surfaces $P_H^2$ (see [@AR] and [@AR2] for a detailed description of all these surfaces). \[cor:class\] Any immersed non-minimal anti-invariant pmc surface in $M^2(\rho)\times\mathbb{R}$ with vanishing $(2,0)$-parts of the quadratic forms $Q$ and $Q'$ is one of the surfaces $S_H^2$, $D_H^2$, $C_H^2$ and $P_H^2$ in the product space $\bar M^2(\frac{\rho}{4})\times\mathbb{R}$. Therefore, we have Any immersed non-minimal anti-invariant pmc $2$-sphere in a non-flat cosymplectic space form $M^2(\rho)\times\mathbb{R}$ is one of the embedded rotationally invariant cmc spheres $S_H^2\subset\bar M^2(\frac{\rho}{4})\times\mathbb{R}$. 
A surface $\Sigma^2$ immersed in a cosymplectic space form is called a *slant surface* if for all vectors $X$ tangent to $\Sigma^2$ and orthogonal to $\xi$ the angle $\theta$ between $\varphi X$ and $T_p\Sigma^2$ is constant, i.e. $\theta$ does not depend on $X$ or on the point $p$ on the surface. Obviously, the invariant and anti-invariant surfaces are slant surfaces. A slant surface which is neither invariant nor anti-invariant is called a proper slant surface. If $\Sigma^2$ is a proper slant surface then $\xi$ is orthogonal to the surface (see [@L]). It follows that, if $\Sigma^2$ is an immersed proper slant surface in $M^2(\rho)\times\mathbb{R}$, then it lies in $M^2(\rho)$. On the other hand, there are no non-minimal pmc $2$-spheres in a non-flat complex space form $M^2(\rho)$ (see [@H]). Therefore, $S_H^2\subset\bar M^2(\frac{\rho}{4})\times\mathbb{R}$ are the only non-minimal slant pmc $2$-spheres in $M^2(\rho)\times\mathbb{R}$. In the following, we shall see that Lemma \[lemma:calc5\] allows us to make some considerations about the admissible range of the angle function $\nu$. Let $\Sigma^2$ be a surface as in Corollary \[cor:class\] with parallel mean curvature vector $H$. From Lemma \[lemma:calc5\], it follows, after a straightforward computation, that $$\label{D1} \Delta\nu^2=2\lambda_2^2(1-3\nu^2)$$ and $$\label{D2} \Delta|A|^2=\frac{\rho^2}{32|H|^2}\lambda_2^2\mu^2(5\nu^2-1).$$ Assume now that the surface is complete and $K\geq 0$, so that $\Sigma^2$ is a parabolic space. If $\nu^2\geq\frac{1}{5}$ on an open dense subset of $\Sigma^2$, then, from , it follows that $|A|^2$ is a subharmonic function, and, since $|A|^2$ is bounded by , we get that either $\lambda_2^2=0$ or $\mu^2=0$ or $\nu^2=\frac{1}{5}$. J. M. Espinar and H. Rosenberg proved in [@ER] that if the angle function $\nu$ is constant, then $\nu^2=0$ or $\nu^2=1$, the second case being possible only when the surface is minimal. 
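The identity for $\Delta\nu^2$ above can be recovered directly from Lemma \[lemma:calc5\]; here is a sketch of the computation, using $e_1(\nu)=0$, $e_2(\mu)=\lambda_2\nu$, $e_2(\nu)=-\lambda_2\mu$, $\nabla_{e_2}e_2=0$ and $\nabla_{e_1}e_1=-\lambda_1\frac{\nu}{\mu}e_2$:

```latex
\Delta\nu^2=e_2(e_2(\nu^2))+\lambda_1\tfrac{\nu}{\mu}\,e_2(\nu^2)
           =e_2(-2\lambda_2\mu\nu)-2\lambda_1\lambda_2\nu^2 .
% since e_2(\lambda_2)=\tfrac{\rho}{8|H|}\lambda_2\mu\nu and
% \lambda_1+\lambda_2=2|H|, with 2|H|+\tfrac{\rho}{8|H|}\mu^2=2\lambda_2:
\Delta\nu^2=-2\lambda_2\nu^2\Big(\lambda_1+\lambda_2+\tfrac{\rho}{8|H|}\mu^2\Big)
            +2\lambda_2^2\mu^2
           =2\lambda_2^2(\mu^2-2\nu^2)=2\lambda_2^2(1-3\nu^2),
```

where the last equality uses $\mu^2+\nu^2=1$.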
Therefore, since we also know that $\mu^2$ cannot vanish on an open dense subset of $\Sigma^2$, one obtains that $\lambda_2^2=|H|^2(1+\frac{\rho}{16|H|^2}\mu^2)^2=0$, and then that $\mu$ and $\nu$ are constant, which means that $\nu^2=0$ and $\mu^2=1$. But this is a contradiction, since we assumed that $\nu^2\geq\frac{1}{5}$. If $\nu^2\leq\frac{1}{3}$ on an open dense subset of $\Sigma^2$, then, from , in the same way as above, we obtain that $\nu^2=0$, $K=0$ and $\rho=-16|H|^2$. In this case, $\Sigma^2$ is a vertical cylinder over a circle in $\mathbb{H}^2(-4|H|^2)$, with curvature $\kappa=2|H|$ and complex torsion equal to $0$ (see also [@ER]). Next, if $\Sigma^2$ is compact, from and the divergence theorem, we get that if $\nu^2\geq\frac{1}{5}$ then $\nu=0$, which is a contradiction. From , again using the divergence theorem, we obtain that, if $\nu^2\leq\frac{1}{3}$ on $\Sigma^2$, then the surface is a cylinder, which is also a contradiction, since we assumed that $\Sigma^2$ is compact. Summarizing, we proved the following Let $\Sigma^2$ be a complete non-minimal cmc surface in $\bar M^2(\frac{\rho}{4})\times\mathbb{R}$ with vanishing Abresch-Rosenberg differential and non-negative Gaussian curvature. Then we have that: 1. $\nu^2\geq\frac{1}{5}$ cannot occur on an open dense subset of $\Sigma^2$; 2. if $\nu^2\leq\frac{1}{3}$ on an open dense subset of $\Sigma^2$, then $\nu$ vanishes identically and the surface is a vertical cylinder over a circle in $\mathbb{H}^2(-4|H|^2)$, with curvature $\kappa=2|H|$ and complex torsion equal to $0$. There are no compact non-minimal cmc surfaces in $\bar M^2(\frac{\rho}{4})\times\mathbb{R}$ with vanishing Abresch-Rosenberg differential, such that one of the inequalities $\nu^2\geq\frac{1}{5}$ or $\nu^2\leq\frac{1}{3}$ holds on the surface. 
[**The case $n\geq 3$.**]{} We note first that, according to Theorem \[th\_red\], the surface cannot be pseudo-umbilical, since we have assumed that the tangent part of $\xi$ does not vanish on an open dense set. Now, let us consider again the orthonormal basis $\{e_1,e_2\}$ in $T_p\Sigma^2$, $p\in T$, where $e_2$ is the unit vector in the direction of the projection of $\xi$ on the tangent space. From and , we can see that $\{e_1,e_2\}$ diagonalizes $A_H$ in this case too. Since the surface is anti-invariant, from the Ricci equation, we get $[A_H,A_V]=0$, for any normal vector $V$ and, therefore, $\{e_1,e_2\}$ diagonalizes $A_V$, for any normal vector $V$. We define the subbundle $L=\operatorname{span}\{\operatorname{Im}\sigma\cup\xi^{\perp}\}$ of the normal bundle and, in the same way as in Lemma \[lemma\_parallel\], we can prove that, for any normal vector $V$ orthogonal to $L$, we have $\langle\sigma(e_i,e_j),\nabla^{\perp}_{e_k}V\rangle=0$ and $\langle\xi^{\perp},\nabla^{\perp}_{e_k}V\rangle=0$, $i,j,k\in\{1,2\}$, which means that $L$ is parallel. It is also easy to see that $T\Sigma^2\oplus L$ is invariant by $R^N$. We shall prove that $\varphi X\perp Y$, for any $X,Y\in T\Sigma^2\oplus L$. Since the surface is anti-invariant, we have $\varphi e_1\perp e_2$ and, moreover, $\langle\varphi e_1,\xi^{\perp}\rangle=\langle\varphi e_1,\xi-\xi^{\top}\rangle=0$. Next, we obtain $$\langle\varphi e_1,\sigma(e_2,e_2)\rangle=\langle\varphi e_1,\nabla^N_{e_2}e_2\rangle=-\langle\varphi\nabla^N_{e_2}e_1,e_2\rangle= -\langle\varphi\nabla_{e_2}e_1,e_2\rangle=0,$$ again using the fact that $\Sigma^2$ is anti-invariant and $\sigma(e_1,e_2)=0$. From the equations and it follows that $\varphi e_i\perp H$, $i\in\{1,2\}$, and then $$\langle\varphi e_1,\sigma(e_1,e_1)\rangle=\langle\varphi e_1,2H-\sigma(e_2,e_2)\rangle=0.$$ Since $T\Sigma^2\oplus L=\operatorname{span}\{e_1,e_2,\sigma(e_1,e_1),\sigma(e_2,e_2),\xi^{\perp}\}$, we have just proved that $\varphi e_1$ is orthogonal to $T\Sigma^2\oplus L$. 
In the same way we get that $\varphi e_2$ and $\varphi\xi^{\perp}=-|\xi^{\top}|\varphi e_2$ (since $\varphi\xi=0$) are orthogonal to $T\Sigma^2\oplus L$. Finally, since $\varphi H\perp e_i$, $i\in\{1,2\}$, it follows that $\varphi H$ is normal and one gets $$\begin{array}{ll} \langle\varphi\sigma(e_1,e_1),\sigma(e_2,e_2)\rangle&=\langle\varphi\sigma(e_1,e_1),2H-\sigma(e_1,e_1)\rangle =2\langle\varphi\sigma(e_1,e_1),H\rangle\\ \\&=-2\langle\nabla_{e_1}^N e_1,\varphi H\rangle=2\langle e_1,\varphi\nabla_{e_1}^NH\rangle=-2\langle e_1,\varphi A_He_1\rangle\\ \\&=0, \end{array}$$ which means $\varphi\sigma(e_i,e_i)\perp T\Sigma^2\oplus L$. Hence, $T\Sigma^2\oplus L$ is parallel, invariant by $R^N$, anti-invariant by $\varphi$ and its dimension is less than or equal to $5$. Now, again using Theorem $2$ in [@ET], we can state A non-minimal non-pseudo-umbilical anti-invariant pmc surface immersed in a non-flat cosymplectic space form $N^{2n+1}(\rho)$, $n\geq 3$, with vanishing $(2,0)$-parts of $Q$ and $Q'$, lies in a totally geodesic anti-invariant submanifold of $N^{2n+1}(\rho)$, with dimension less than or equal to $5$. If $N^{2n+1}(\rho)$ is of product type, we again use Proposition 3.2 in [@CO] to obtain A non-minimal non-pseudo-umbilical anti-invariant pmc surface immersed in $M^n(\rho)\times\mathbb{R}$, $n\geq 3$, $\rho\neq 0$, with vanishing $(2,0)$-parts of $Q$ and $Q'$, lies in a product space $\bar M^4(\frac{\rho}{4})\times\mathbb{R}$, where $\bar M^4(\frac{\rho}{4})$ is a space form immersed as a totally geodesic totally real submanifold in the complex space form $M^n(\rho)$. The non-minimal non-pseudo-umbilical pmc $2$-spheres immersed in $\bar M^4(\frac{\rho}{4})\times\mathbb{R}$ were characterized by H. Alencar, M. do Carmo and R. Tribuzy in [@AdCT] (Theorem 2(4)). In the same paper, they also described the non-minimal non-pseudo-umbilical complete pmc surfaces with non-negative Gaussian curvature and vanishing $(2,0)$-part of $Q'$ (Theorem 3(4)). 
As we have seen, a proper slant surface $\Sigma^2$ immersed in $M^n(\rho)\times\mathbb{R}$, $\rho\neq 0$, lies in $M^n(\rho)$. Moreover, as an immersed surface in this space, it has constant Kähler angle. In [@F] it is proved that there are no non-minimal non-pseudo-umbilical pmc $2$-spheres with constant Kähler angle in a non-flat complex space form. Therefore, there are no non-minimal non-pseudo-umbilical proper slant pmc $2$-spheres in $M^n(\rho)\times\mathbb{R}$. [99]{} U. Abresch and H. Rosenberg, *A Hopf differential for constant mean curvature surfaces in $\mathbb{S}^2\times\mathbb{R}$ and $\mathbb{H}^2\times\mathbb{R}$*, Acta Math. 193(2004), 141–174. U. Abresch and H. Rosenberg, *Generalized Hopf differentials*, Mat. Contemp. 28(2005), 1–28. P. Alegre, D. E. Blair and A. Carriazo, *Generalized Sasakian space forms*, Israel J. Math. 141(2004), 157–183. H. Alencar, M. do Carmo and R. Tribuzy, *A theorem of Hopf and the Cauchy-Riemann inequality*, Comm. Anal. Geom. 15(2007), 283–298. H. Alencar, M. do Carmo and R. Tribuzy, *A Hopf Theorem for ambient spaces of dimensions higher than three*, J. Differential Geometry 84(2010), 1–17. D. E. Blair, *Riemannian Geometry of Contact and Symplectic Manifolds*, Birkhäuser Boston, Progress in Mathematics, 203, 2002. D. E. Blair and S. I. Goldberg, *Topology of almost contact manifolds*, J. Differential Geometry 1(1967), 347–354. I. Castro, C. R. Montealegre and F. Urbano, *Minimal Lagrangian submanifolds in the complex hyperbolic space*, Illinois J. Math. 46(2002), 695–721. B.-Y. Chen and G. D. Ludden, *Surfaces with mean curvature vector parallel in the normal bundle*, Nagoya Math. J. 47(1972), 161–167. B.-Y. Chen and K. Ogiue, *On totally real submanifolds*, Trans. Am. Math. Soc. 193(1974), 257–266. S.-S. Chern, *On surfaces of constant mean curvature in a three-dimensional space of constant curvature*, Geometric dynamics (Rio de Janeiro, 1981), Lecture Notes in Math. 1007, Springer, Berlin, 1983, 104–108. D. 
Chinea, M. de León and J. C. Marrero, *Topology of cosymplectic manifolds*, J. Math. Pures Appl. 72(1993), 567–591. H. Endo, *A note on invariant submanifolds in an almost cosymplectic manifold*, Tensor (N.S.) 43(1986), 75–78. J. H. Eschenburg and R. Tribuzy, *Existence and uniqueness of maps into affine homogeneous spaces*, Rend. Sem. Mat. Univ. Padova 89(1993), 11–18. J. M. Espinar and H. Rosenberg, *Complete constant mean curvature surfaces in homogeneous spaces*, Comment. Math. Helv., to appear. D. Ferus, *The torsion form of submanifolds in $E^N$*, Math. Ann. 193(1971), 114–120. D. Fetcu, *Surfaces with parallel mean curvature vector in complex space forms*, preprint 2010. S. Hirakawa, *Constant Gaussian curvature surfaces with parallel mean curvature vector in two-dimensional complex space forms*, Geom. Dedicata 118(2006), 229–244. D. A. Hoffman, *Surfaces in constant curvature manifolds with parallel mean curvature vector field*, Bull. Amer. Math. Soc. 78(1972), 247–250. H. Hopf, *Differential Geometry in the Large*, Lecture Notes in Math. 1000, Springer-Verlag, 1983. A. Lotta, *Slant submanifolds in contact geometry*, Bull. Math. Soc. Sci. Math. Roumanie (N.S.) 39(1996), 183–198. S. Maeda and T. Adachi, *Holomorphic helices in a complex space form*, Proc. Amer. Math. Soc. 125(1997), 1197–1202. S. Maeda and Y. Ohnita, *Helical geodesic immersions into complex space forms*, Geom. Dedicata 30(1989), 93–114. B. O'Neill, *Semi-Riemannian Geometry with Applications to Relativity*, Pure and Applied Mathematics 103, Academic Press, New York, 1983. S.-T. Yau, *Submanifolds with constant mean curvature*, Amer. J. Math. 96(1974), 346–366.
--- abstract: 'We investigate the lepton flavor violating decays of vector mesons in the scenario of unparticle physics, taking into account the constraint from $\mu-e$ conversion. In unparticle physics, the predictions for the LFV decays of vector mesons depend strongly on the scale dimension $d_{\mathcal{U}}$. These predictions can reach the experimental detection sensitivity in the region $3\le d_{\mathcal{U}}\le 4$, while the predicted $\mu-e$ conversion rate still satisfies the experimental upper limit. Among experimental searches for lepton flavor violating processes in the charged lepton sector, the decay $\Upsilon\rightarrow e\mu$ may be a promising one to observe.' address: | $^{\dagger}$ Department of Physics, Dalian University of Technology, Dalian 116024, China\ $^{\ddagger}$ Department of Physics, Hebei University, Baoding 071002, China\ $^{\ast} $ sunkesheng@126.com author: - | KE-SHENG SUN $^{\dagger,\ddagger,\ast}$ , TAI-FU FENG $^{\ddagger,\dagger}$, LI-NA KOU $^{\dagger}$ , FEI SUN $^{\dagger}$ ,\ TIE-JUN GAO $^{\ddagger,\dagger}$, HAI-BIN ZHANG $^{\ddagger,\dagger}$ title: LEPTON FLAVOR VIOLATION DECAYS OF VECTOR MESONS IN UNPARTICLE PHYSICS --- Introduction {#intro} ============ During the last decades, the search for Lepton Flavor Violation (LFV) processes in the charged lepton sector, as evidence of new physics beyond the Standard Model (SM), has attracted a great deal of attention. Although the nonzero neutrino masses supported by the neutrino oscillation experiments [@oscillation1; @oscillation2; @oscillation3] imply the non-conservation of lepton flavor, the lepton flavor violating processes in the SM are highly suppressed due to the smallness of the neutrino masses. 
Nevertheless, the LFV decays could be enhanced by new sources of LFV in various extensions of the SM, such as grand unified models,[@GUT1; @GUT2; @GUT3] supersymmetric models with and without R-parity,[@SUSY1; @SUSY2; @SUSY3] left-right symmetric models, etc.[@LR1; @LR2; @LR3] These new sources mainly originate from the interactions between the SM particles and new particles beyond the SM. Alternatively, Georgi proposed a scenario in which there is a sector that is exactly scale invariant and interacts only very weakly with the SM sector.[@Georgi1; @Georgi2] Such a sector contains no particles, because no particle states with a nonzero mass can exist in it. In general, the scale invariant sector, the so-called unparticle, has a scale dimension that is a fractional rather than an integral number. The interactions between the unparticle and the SM particles in the low energy effective theory can lead to various interesting features in LFV processes and other phenomenology. In unparticle physics, the unparticle can interact with different flavors of SM leptons, which means that LFV processes can occur at tree level. There have been many studies of LFV processes in unparticle physics, such as $\mu\rightarrow 3e$,[@Aliev; @Choudhury] $\mu\rightarrow e\gamma$,[@Ding] $\mu-e$ conversion,[@Ding] $M^0\rightarrow ll'$,[@Lu] $e^+ e^-\rightarrow ll'$,[@Lu] $J/\Psi\rightarrow ll'$,[@Wei] $r\rightarrow ll'$,[@Iltan1; @Iltan2] $l\rightarrow l'\gamma\gamma$,[@Iltan1; @Iltan2] $\tau\rightarrow l(V_0,P_0)$,[@Li] etc. The study of LFV processes involving vector mesons may be an effective way to search for new physics beyond the SM, and the SND Collaboration at BINP (Novosibirsk) presents an upper limit on the $\phi\rightarrow e^+\mu^-$ branching fraction of BR$(\phi\rightarrow e^+\mu^-)\le 2\times 10^{-6}$.[@SND] Additionally, using a sample of $5.8\times10^7\;J/\Psi$ events collected with the BESII detector, Ref.  
obtains the upper limits BR$(J/\Psi\rightarrow\mu\tau)<2.0\times10^{-6}$ and BR$(\Upsilon\rightarrow\mu\tau)<8.3\times10^{-6}$ at the 90% confidence level (C.L.). Using the data collected with the CLEO III detector, the authors of Ref.  estimate the upper limits BR $(\Upsilon(1S)\rightarrow\mu\tau)<6.0\times10^{-6}$, BR $(\Upsilon(2S)\rightarrow\mu\tau)<1.4\times10^{-5}$ and BR $(\Upsilon(3S)\rightarrow\mu\tau)<2.0\times10^{-5}$, respectively, at the 95% C.L. In the literature, several stringent limits on LFV decays of vector mesons have been derived in a model independent way. Assuming that a vector boson $M_i$ couples to $\mu^{\mp}e^{\pm}$ and $e^{\mp}e^{\pm}$, the authors of Ref.  deduce upper bounds on the LFV decays of vector mesons from the experimental constraint on the process $\mu\rightarrow3e$. Under the similar assumption that a vector meson $M_i$ couples to $\mu^{\mp}e^{\pm}$ and to nucleons, Ref.  and Ref.  study the LFV decays of vector mesons by taking into account the experimental constraint on $\mu-e$ conversion. In this paper, we investigate the LFV decays of vector mesons in unparticle physics under the constraint from $\mu-e$ conversion. In Section \[sec:2\], we first provide a brief introduction to unparticle physics and the corresponding interaction Lagrangian in the effective field theory. Then we derive the analytic results for the amplitude in detail. The numerical analysis and discussion are presented in Section \[sec:3\], and the conclusion is drawn in Section \[sec:4\]. Formalism {#sec:2} ========= At very high energies, as proposed by Georgi,[@Georgi1; @Georgi2] the theory is composed of the SM fields and the fields of a theory with a nontrivial IR fixed point, the so-called Banks-Zaks ($\mathcal{BZ}$) fields.[@BZ] The two sectors can interact through the exchange of particles with a large mass scale $M_{\mathcal{U}}\gg1\,\mathrm{TeV}$. 
Below the scale $M_{\mathcal{U}}$, there are nonrenormalizable couplings involving both Standard Model fields and Banks-Zaks fields, suppressed by powers of $M_{\mathcal{U}}$. The interaction between the SM fields and the $\mathcal{BZ}$ fields has the form: $$\begin{aligned} \frac{1}{M_{\mathcal{U}}^{d_{SM}+d_{\mathcal{BZ}}-4}}O_{SM}O_{\mathcal{BZ}}, \label{SM-BZ}\end{aligned}$$ where $O_{SM}$ is an operator of mass dimension $d_{SM}$ built from SM fields and $O_{\mathcal{BZ}}$ is an operator of mass dimension $d_{\mathcal{BZ}}$ built from $\mathcal {BZ}$ fields. In the effective field theory below the scale $\Lambda_{\mathcal{U}}$, the $\mathcal {BZ}$ operators match onto the unparticle operators, and Eq.(\[SM-BZ\]) can be viewed as an interaction between the SM fields and the unparticle field: $$\begin{aligned} \frac{C_{\mathcal{U}}\Lambda^{d_{\mathcal{BZ}}-d_{\mathcal{U}}}_{\mathcal{U}}} {M_{\mathcal{U}}^{d_{SM}+d_{\mathcal{BZ}}-4}}O_{SM}O_{\mathcal{U}}, \label{SM-U}\end{aligned}$$ where $C_{\mathcal{U}}$ is a coefficient function and $d_{\mathcal{U}}$ denotes the scaling dimension of the unparticle operator $O_{\mathcal{U}}$. For simplicity, it is convenient to define: $$\begin{aligned} \lambda=\frac{C_{\mathcal{U}}\Lambda^{d_{\mathcal{BZ}}}_{\mathcal{U}}}{M_{\mathcal{U}}^{d_{SM}+d_{\mathcal{BZ}}-4}}. 
\label{Lam}\end{aligned}$$ Then, in the effective theory, the couplings of the scalar and vector unparticles to the SM fermions (leptons or quarks) are generally given by the following effective operators: $$\begin{aligned} &&\frac{\lambda^{SS}_{ff'}}{\Lambda^{d_{\mathcal{U}}-1}_{\mathcal{U}}}\bar{f}f'O_{\mathcal{U}}, \frac{\lambda^{SP}_{ff'}}{\Lambda^{d_{\mathcal{U}}-1}_{\mathcal{U}}}\bar{f}\gamma_{5}f'O_{\mathcal{U}}, \frac{\lambda^{SV}_{ff'}}{\Lambda^{d_{\mathcal{U}}}_{\mathcal{U}}}\bar{f}\gamma_{\mu}f'\partial^{\mu}{O_{\mathcal{U}}}, \frac{\lambda^{SA}_{ff'}}{\Lambda^{d_{\mathcal{U}}}_{\mathcal{U}}}\bar{f}\gamma_{\mu}\gamma_{5}f'\partial^{\mu}{O_{\mathcal{U}}}, \nonumber\\ &&\frac{\lambda^{VS}_{ff'}}{\Lambda^{d_{\mathcal{U}}}_{\mathcal{U}}}\bar{f}f'\partial_{\mu}{O^{\mu}_{\mathcal{U}}}, \frac{\lambda^{VP}_{ff'}}{\Lambda^{d_{\mathcal{U}}}_{\mathcal{U}}}\bar{f}\gamma_{5}f'\partial_{\mu}{O^{\mu}_{\mathcal{U}}}, \frac{\lambda^{VV}_{ff'}}{\Lambda^{d_{\mathcal{U}}-1}_{\mathcal{U}}}\bar{f}\gamma_{\mu}f'O^{\mu}_{\mathcal{U}}, \frac{\lambda^{VA}_{ff'}}{\Lambda^{d_{\mathcal{U}}-1}_{\mathcal{U}}}\bar{f}\gamma_{\mu}\gamma_{5}f'O^{\mu}_{\mathcal{U}}, \label{Eff-ope}\end{aligned}$$ where $\lambda^{S,P,V,A}_{ff'}$ are dimensionless coefficients; $S$, $P$, $V$ and $A$ stand for the scalar, pseudo-scalar, vector and axial-vector fields, respectively. $f$ and $f'$ denote SM fermions, and $O_{\mathcal{U}}$ and $O^{\mu}_{\mathcal{U}}$ denote the scalar and vector unparticle fields. 
The propagator of the scalar unparticle field has the form[@Georgi2; @Cheung]: $$\begin{aligned} \int&& e^{iP\cdot x}d^4 x \langle 0|T[O_{\mathcal{U}}(x)O_{\mathcal{U}}(0)]|0\rangle = i\frac{A_{d_{\mathcal{U}}}}{2 \sin(d_{\mathcal{U}}\pi)}\frac{1}{(-P^2-i\epsilon)^{2-d_{\mathcal{U}}}}.\end{aligned}$$ If the vector unparticle field is assumed to be transverse, the propagator can be written as: $$\begin{aligned} \int&& e^{iP\cdot x}d^4 x \langle 0|T[O^{\mu}_{\mathcal{U}}(x)O^{\nu}_{\mathcal{U}}(0)]|0\rangle = i\frac{A_{d_{\mathcal{U}}}}{2 \sin(d_{\mathcal{U}}\pi)}\frac{-g^{\mu\nu}+P^{\mu}P^{\nu}/P^2}{(-P^2-i\epsilon)^{2-d_{\mathcal{U}}}},\end{aligned}$$ where $A_{d_{\mathcal{U}}}$ is defined by: $$\begin{aligned} A_{d_{\mathcal{U}}}=\frac{16\pi^{5/2}}{(2\pi)^{2d_{\mathcal{U}}}} \frac{\Gamma(d_{\mathcal{U}}+1/2)}{\Gamma(d_{\mathcal{U}}-1)\Gamma(2d_{\mathcal{U}})}.\end{aligned}$$ Among these effective operators, only the vector current $\bar{f}\gamma_{\mu}f'$ couples to vector mesons. The tree level Feynman diagram is presented in Fig.\[fig1\], and the corresponding amplitude can be written as: $$\begin{aligned} {\cal M}_Q=&& \frac{\lambda^{VV}_{bb}\lambda^{VV}_{e \mu}}{\Lambda^{2d_{\mathcal{U}}-2}_{\mathcal{U}}} \frac{A_{d_{\mathcal{U}}}}{2\sin(d_{\mathcal{U}}\pi)}\bar{\upsilon}(p_1)\gamma_{\mu}u(p_2) \frac{g^{\mu\nu}-p^{\mu}p^{\nu}/p^2}{p^{2(2-d_{\mathcal{U}})}}\bar{u}(p_3)\gamma_{\nu}\upsilon(p_4). \label{Amp}\end{aligned}$$ In the quark picture, mesons are composed of a quark and an antiquark. We adopt a phenomenological model in which the amplitude of a hard process involving an s-wave meson is described by the matrix elements of gauge-invariant nonlocal operators, sandwiched between the vacuum and the meson states. 
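For orientation, the normalization factor $A_{d_{\mathcal{U}}}$ above is easy to evaluate numerically. The following short script (an illustration, not part of the original text) implements the formula using the standard-library gamma function; the closed-form values $A_{3/2}=1/\pi$ and $A_{2}=1/(8\pi)$, which follow from the formula, serve as checks:

```python
import math

def A_dU(d_U):
    """Normalization A_{d_U} = 16 pi^{5/2} / (2 pi)^{2 d_U}
    * Gamma(d_U + 1/2) / (Gamma(d_U - 1) * Gamma(2 d_U)).

    Valid for d_U > 1: Gamma(d_U - 1) diverges as d_U -> 1,
    where A_{d_U} -> 0 (math.gamma raises ValueError at d_U = 1).
    """
    return (16.0 * math.pi**2.5 / (2.0 * math.pi)**(2.0 * d_U)
            * math.gamma(d_U + 0.5)
            / (math.gamma(d_U - 1.0) * math.gamma(2.0 * d_U)))

# closed-form checks: A_dU(1.5) = 1/pi, A_dU(2.0) = 1/(8 pi)
```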
The distribution amplitude of the vector meson $\Upsilon$ at leading order is defined through the correlation function[@OZI1; @OZI2; @OZI3]: $$\begin{aligned} \langle 0| \bar{b}_{1\alpha }^{i}(y)b_{2\beta}^{j}(x)|\Upsilon(p)\rangle &=&\frac{\delta_{ij}}{4N_{c}}\int_{0}^{1}du~e^{-i[upy+(1-u)px]} \Big[ f_{\Upsilon }m_{\Upsilon }/\!\!\!\varepsilon \phi _{\parallel}(u) \nonumber\\ &&+\frac{i}{2}\sigma ^{{\mu }'{\nu }'}f_{\Upsilon }^{T} ( \varepsilon _{ {\mu }'}{p}_{{\nu }'}-\varepsilon _{{\nu }'}{p}_{{\mu }'} )\phi _{\perp } ( u )\Big]_{\beta \alpha}\;, \label{hadron}\end{aligned}$$ where $N_c$ is the number of colors, $\varepsilon$ is the polarization vector, $f_{\Upsilon }$ and $f_{\Upsilon}^{T}$ are the decay constants, and $\phi _{\parallel }$ and $\phi _{\perp }$ are the leading-twist distribution functions corresponding to the longitudinally and transversely polarized meson, respectively. Since the leading-twist light-cone distribution amplitudes of the meson are close to their asymptotic form,[@Beneke] we set $\phi _{\parallel }=\phi _{\perp }=\phi(u)=6u(1-u)$. 
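Note that this asymptotic form is consistent with the usual unit normalization of the leading-twist distribution amplitudes:

```latex
\int_0^1\phi(u)\,du=\int_0^1 6u(1-u)\,du
  =6\left(\frac{1}{2}-\frac{1}{3}\right)=1 .
```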
Then, at hadron level, using Eq.(\[hadron\]), the amplitude is rewritten as $$\begin{aligned} {\cal M}_H= \frac{\lambda^{VV}_{bb}\lambda^{VV}_{e \mu}}{\Lambda^{2d_{\mathcal{U}}-2}_{\mathcal{U}}} \frac{A_{d_{\mathcal{U}}}m_{\Upsilon}f_{\Upsilon}}{2N_c\sin(d_{\mathcal{U}}\pi)}\frac{\varepsilon^{\nu}} {p^{2(2-d_{\mathcal{U}})}}\bar{u}(p_3)\gamma_{\nu}\upsilon(p_4).\end{aligned}$$ In the center-of-mass frame, using the polarization summation formula $$\begin{aligned} \sum_{\lambda =\pm 1,0}\varepsilon ^{\mu }_{\lambda }(p)\varepsilon ^{\ast \nu }_{\lambda }(p) \equiv -g^{\mu \nu }+\frac{p^{\mu }p^{\nu }}{m_{\Upsilon}^{2}},\end{aligned}$$ we get $$\begin{aligned} |{\cal M}_H|^2 = \Big|\frac{\lambda^{VV}_{bb}\lambda^{VV}_{e \mu}}{\Lambda^{2d_{\mathcal{U}}-2}_{\mathcal{U}}} \frac{A_{d_{\mathcal{U}}}m_{\Upsilon}f_{\Upsilon}}{2N_c\sin(d_{\mathcal{U}}\pi)}\Big|^2 \frac{4(m^2_{\Upsilon}-m^2_{e}-m^2_{\mu})-16 m_e m_{\mu}}{m_{\Upsilon}^{4(2-d_{\mathcal{U}})}}. \label{MH}\end{aligned}$$ Finally, we express the branching ratio of the process $\Upsilon\rightarrow e\mu$ as $$\begin{aligned} Br(\Upsilon\rightarrow e\mu)=\frac{\sqrt{ [m_{\Upsilon}^{2}-(m_{e}+m_{\mu})^{2}] [m_{\Upsilon}^{2}-(m_{e}-m_{\mu})^{2}] }}{16 \pi m_{\Upsilon}^{3}\Gamma_{\Upsilon}}\times|{\cal M}_H|^2, \label{BR}\end{aligned}$$ where $\Gamma_{\Upsilon}$ is the total decay width. The branching ratios for the other LFV decays of vector mesons can be formulated in a similar way. Numerical Analysis {#sec:3} ================== Taking into account the constraints from the LFV process $\mu\rightarrow e\gamma$ and from $\mu-e$ conversion in nuclei, we first study the LFV decay $\Upsilon\rightarrow e\mu$ in unparticle physics.
Under the assumption that the unparticle couplings to the SM fermions in Eq.(\[Eff-ope\]) are universal: $$\label{Assume} \lambda^{KK}_{ff'} = \left\{ \begin{aligned} \lambda_{k}, & f=f' \\ \kappa\lambda_{k}, & f\neq f' \end{aligned} \right.$$ where $\kappa > 1$ and $K = S, P, V, A$ for scalar, pseudoscalar, vector and axial-vector couplings respectively, the authors of Ref.  investigated the LFV processes $\mu\rightarrow e\gamma$ and $\mu-e$ conversion in various nuclei in the region $1<d_{\mathcal{U}}<2$, $1~\mathrm{TeV}<\Lambda_{\mathcal{U}}<100~\mathrm{TeV}$. They found that the constraint on the dimension $d_{\mathcal{U}}$ deduced from the experimental bound on $\mu-e$ conversion is the more stringent one. Therefore, we study the LFV decays of vector mesons taking $\mu-e$ conversion in unparticle physics into account. The $\mu-e$ conversion rate with a pure vector coupling between the SM fermions and the unparticle is given by: $$\begin{aligned} CR(\mu-e, Nucleus)&=&\frac{m^5_{\mu} \alpha^3 Z^4_{eff} F^2_p}{2 \pi^2 Z } [\lambda^{VV}_{e\mu}\lambda^{VV}_{qq}\frac{A_{d_{\mathcal{U}}}}{2\sin(d_{\mathcal{U}}\pi)} \frac{1}{\Lambda^2_{\mathcal{U}}}(\frac{m^2_{\mu}}{\Lambda^2_{\mathcal{U}}})^{d_{\mathcal{U}}-2}]^2 \nonumber\\ &&\times\mid Z\sum_q G^{(q,p)}_V + N \sum_q G^{(q,n)}_V\mid^2\frac{1}{\Gamma_{capt}}, \label{CR}\end{aligned}$$ where $Z$ and $N$ denote the proton and neutron numbers of the nucleus, $F_p$ is the nuclear form factor, $Z_{eff}$ is the effective atomic charge, and $G^{(q,p)}_V$ and $G^{(q,n)}_V$ are the nuclear matrix elements for the proton and the neutron. Here, as in Ref. 
, taking $\Lambda_{\mathcal{U}}=10~\mathrm{TeV}$, $\lambda^{VV}_{bb}=0.001$ and $\lambda^{VV}_{e\mu}=0.003$, we display BR$(\Upsilon\rightarrow e\mu)$ and CR$(\mu-e, Ti)$ versus $d_{\mathcal{U}}$ in the region $1\le d_{\mathcal{U}}\le 4$ in Fig.\[fig2\], where the solid line denotes the prediction of CR$(\mu-e, Ti)$ and the dotted line denotes the prediction of BR$(\Upsilon\rightarrow e\mu)$. The horizontal lines correspond to $10^{-6}$ and $4.2\times10^{-12}$, which are the experimental sensitivity to LFV decays of vector mesons and the experimental bound on the $\mu-e$ conversion rate, respectively. The following numerical values are used [@Decay; @constant1; @Decay; @constant2; @Decay; @constant3]: $$\begin{aligned} &&m_{\Upsilon}=9.406~\mathrm{GeV},\quad f_{\Upsilon}=715~\mathrm{MeV},\quad \Gamma_{\Upsilon}=54~\mathrm{keV},\nonumber\\ &&F_p=0.54,\quad Z_{eff}=17.6,\quad \Gamma_{capt}=1.7\times10^{-18}.\end{aligned}$$ As we can see from Fig.\[fig2\], the predictions for both BR$(\Upsilon\rightarrow e\mu)$ and CR$(\mu-e, Ti)$ depend strongly on the scaling dimension $d_{\mathcal{U}}$. The dimension $d_{\mathcal{U}}$ is constrained to be near 2 or larger, and the corresponding prediction of BR$(\Upsilon\rightarrow e\mu)$ is too suppressed to reach the experimental sensitivity. An interesting feature of Fig.\[fig2\] is that the prediction of BR$(\Upsilon\rightarrow e\mu)$ is smaller than CR$(\mu-e, Ti)$ in the region $1\le d_{\mathcal{U}}\le 3$, whereas in the region $3\le d_{\mathcal{U}}\le 4$ it is larger than CR$(\mu-e, Ti)$. Given that the experimental bound on CR$(\mu-e, Ti)$ is $\mathcal{O} (10^{-12})$, it is impossible to make the prediction of BR$(\Upsilon\rightarrow e\mu)$ reach the experimental sensitivity by resetting the couplings $\mid\lambda^{VV}_{bb}\lambda^{VV}_{e\mu}\mid$ in the region $1\le d_{\mathcal{U}}\le 3$.
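For orientation, Eq.(\[MH\]) and Eq.(\[BR\]) can be combined numerically with the parameter values quoted above. The following is an illustrative Python sketch (GeV units throughout; the function names are ours):

```python
import math

# Inputs quoted in the text (GeV units); couplings and Lambda_U as in the scan.
m_U, f_U, Gamma_U = 9.406, 0.715, 54e-6      # Upsilon mass, decay constant, width
m_e, m_mu = 0.511e-3, 0.1057                 # lepton masses
lam_bb, lam_emu, Lam = 0.001, 0.003, 1.0e4   # Lambda_U = 10 TeV in GeV
Nc = 3

def A_dU(d):
    return (16 * math.pi**2.5 / (2 * math.pi)**(2 * d)
            * math.gamma(d + 0.5) / (math.gamma(d - 1) * math.gamma(2 * d)))

def BR_upsilon_emu(d):
    """BR(Upsilon -> e mu) from Eqs. (MH) and (BR); valid for noninteger d."""
    pref = (lam_bb * lam_emu / Lam**(2 * d - 2)
            * A_dU(d) * m_U * f_U / (2 * Nc * math.sin(d * math.pi)))
    M2 = (pref**2 * (4 * (m_U**2 - m_e**2 - m_mu**2) - 16 * m_e * m_mu)
          / m_U**(4 * (2 - d)))
    kallen = math.sqrt((m_U**2 - (m_e + m_mu)**2) * (m_U**2 - (m_e - m_mu)**2))
    return kallen / (16 * math.pi * m_U**3 * Gamma_U) * M2
```

This reproduces only the overall $d_{\mathcal{U}}$ dependence at fixed couplings; the curves in Fig.\[fig2\] correspond to scanning $d_{\mathcal{U}}$ in this way.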
Nevertheless, in the region $3\le d_{\mathcal{U}}\le 4$, by enlarging the couplings $\mid\lambda^{VV}_{bb}\lambda^{VV}_{e\mu}\mid$ it is possible to make the prediction of CR$(\mu-e, Ti)$ compatible with the experimental bound while the prediction of BR$(\Upsilon\rightarrow e\mu)$ remains large enough to be detected in present or near-future experiments. Therefore, we investigate the LFV decays of vector mesons in the region $3\le d_{\mathcal{U}}\le 4$. In addition, the constraint $d_{\mathcal{U}}\geq3$ is also supported in Ref.  and Ref.  by considerations of unitarity. Unitarity requires the gauge-invariant primary vector operator $\mathcal{U}^{\mu}$ to have $d_V\ge3$, with $d_V=3$ if and only if the operator is a conserved current, $\partial_{\mu}\mathcal{U}^{\mu}=0$. To raise the prediction of BR$(\Upsilon\rightarrow e\mu)$ to a detectable level, the couplings $\mid\lambda^{VV}_{bb}\lambda^{VV}_{e\mu}\mid$ would have to be very large. However, we can also investigate the LFV decays of vector mesons in a way that is independent of the couplings $\mid\lambda^{VV}_{bb}\lambda^{VV}_{e\mu}\mid$. Considering $\mu-e$ conversion in the $Ti$ nucleus, let us define the fraction $R(X)$ by: $$\begin{aligned} R(X)=\frac{BR(X\rightarrow e\mu)}{CR(\mu-e,Ti)}, \label{Rx}\end{aligned}$$ where $X$ is any of the vector mesons $\rho$, $\omega$, $\phi$, $J/\Psi$ or $\Upsilon$. From Eq.(\[MH\]), Eq.(\[BR\]) and Eq.(\[CR\]), we can see that the coupling $\lambda^{VV}_{e\mu}$ and the mass scale $\Lambda_{\mathcal{U}}$ cancel in $R(X)$.
If the unparticle couplings to the quarks are universal, i.e., $$\begin{aligned} \lambda^{VV}_{bb}=\lambda^{VV}_{ss}=\lambda^{VV}_{cc}\simeq \frac{\lambda^{VV}_{uu}+\lambda^{VV}_{dd}}{\sqrt{2}}\simeq \frac{\lambda^{VV}_{uu}-\lambda^{VV}_{dd}}{\sqrt{2}}, \label{assume}\end{aligned}$$ then the couplings listed in Eq.(\[assume\]) also cancel in $R(X)$, since they take the same value in the numerator and the denominator of Eq.(\[Rx\]). Therefore, $R(X)$ is a function of the scaling dimension $d_{\mathcal{U}}$ only. Using Eq.(\[MH\]), Eq.(\[BR\]) and Eq.(\[CR\]), the relation between $R(X)$ and $d_{\mathcal{U}}$ takes the simple form: $$\begin{aligned} R(X)\propto\Big(\frac{m_X}{m_{\mu}}\Big)^{4(d_{\mathcal{U}}-2)}, \label{Relation}\end{aligned}$$ where $\frac{m_X}{m_{\mu}}>1$ for all the mesons considered. It is noteworthy that even if the unparticle couplings to the SM fermions are not universal, i.e., Eq.(\[assume\]) does not hold, the relation between $R(X)$ and $d_{\mathcal{U}}$ in Eq.(\[Relation\]) still holds. In Fig.\[fig3\], we display the fraction $R(X)$ as a function of $d_{\mathcal{U}}$ in the region $3\le d_{\mathcal{U}}\le 4$, where, from the bottom up, the lines stand for $R(\rho)$, $R(\omega)$, $R(\phi)$, $R(J/\Psi)$ and $R(\Upsilon)$, respectively. The parameters for the different mesons are listed below [@Decay; @constant1; @Decay; @constant2; @Decay; @constant3]: $$\begin{aligned} m_{\rho}&=&775~\mathrm{MeV},\quad f_{\rho}=209~\mathrm{MeV},\quad \Gamma_{\rho}=149~\mathrm{MeV},\nonumber\\ m_{\omega}&=&782~\mathrm{MeV},\quad f_{\omega}=195~\mathrm{MeV},\quad \Gamma_{\omega}=8.49~\mathrm{MeV},\nonumber\\ m_{\phi}&=&1.019~\mathrm{GeV},\quad f_{\phi}=231~\mathrm{MeV},\quad \Gamma_{\phi}=4.2~\mathrm{MeV},\nonumber\\ m_{J/\Psi}&=&3.096~\mathrm{GeV},\quad f_{J/\Psi}=405~\mathrm{MeV},\quad \Gamma_{J/\Psi}=92.9~\mathrm{keV}.\nonumber\end{aligned}$$ Fig.\[fig3\] shows that $R(X)$ increases as $d_{\mathcal{U}}$ grows. For light-flavor mesons, the fraction $R(X)$ is very small.
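The mass scaling of Eq.(\[Relation\]) is simple enough to tabulate directly. The short Python sketch below (our own names; the overall $d_{\mathcal{U}}$-independent constant is dropped) makes the hierarchy between the mesons explicit:

```python
# Relative size of R(X) between mesons, Eq. (Relation): with the common
# couplings and Lambda_U canceled, only (m_X / m_mu)^{4(d_U - 2)} remains.
m_mu = 0.1057  # GeV
masses = {"rho": 0.775, "omega": 0.782, "phi": 1.019,
          "J/Psi": 3.096, "Upsilon": 9.406}  # GeV

def R_scaling(m_X, d_U):
    """d_U-dependent part of R(X); a pure ratio, normalization dropped."""
    return (m_X / m_mu)**(4 * (d_U - 2))

# For d_U > 2 the heavier mesons are strongly favored:
ordering = sorted(masses, key=lambda k: R_scaling(masses[k], 3.5))
```

The resulting ordering, from smallest to largest $R(X)$, is $\rho$, $\omega$, $\phi$, $J/\Psi$, $\Upsilon$, matching the bottom-to-top ordering of the lines in Fig.\[fig3\].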
However, for heavy-flavor mesons the fraction $R(X)$ is large, so BR$(X\rightarrow e\mu)$ can be large enough to be detectable in experiment while remaining compatible with the constraint on $\mu-e$ conversion. From Eq.(\[Rx\]) we can express the branching ratio of $X\rightarrow e\mu$ as: $$\begin{aligned} BR(X\rightarrow e\mu)=R(X)\times CR(\mu-e,Ti). \label{BRR}\end{aligned}$$ Using Eq.(\[BRR\]), we give the predictions for the branching ratios of LFV decays of vector mesons in Tab.\[tab1\] for $d_{\mathcal{U}}=3$, $d_{\mathcal{U}}=3.5$ and $d_{\mathcal{U}}=4$, where CR$(\mu-e,Ti)\le 4.2\times10^{-12}$ is used. For light-flavor mesons, the predicted BR$(X\rightarrow e\mu)$ is very small, and it is impossible to observe the LFV processes of these mesons in experiment. For heavy-flavor mesons, a large BR$(X\rightarrow e\mu)$ is attainable for $d_{\mathcal{U}}$ near 4. In particular, the predicted BR$(\Upsilon\rightarrow e\mu)$ is as large as $\mathcal{O}(10^{-4})$, which is very promising for observation in experiment. In the literature, several stringent limits on LFV decays of vector mesons have already been derived. A summary table of the experimental bounds and the corresponding theoretical predictions is presented in Tab.\[tab2\]. Assuming that a vector boson $M_i$ couples to $\mu^{\mp}e^{\pm}$ and $e^{\mp}e^{\pm}$, the authors of Ref.  deduced upper bounds on the LFV decays of mesons using the experimental constraint on the LFV process $\mu\rightarrow3e$. Under the similar assumption that a vector meson $M_i$ couples to $\mu^{\mp}e^{\pm}$ and to nucleon pairs, Ref.  and Ref.  studied the LFV decays of vector mesons by taking into account the experimental constraint on $\mu-e$ conversion. From Tab.\[tab1\] and Tab.\[tab2\], it is easy to see that our predictions are compatible with those in the literature. Finally, the predictions for the LFV decays of vector mesons in both our article and Ref.  
greatly depend on the experimental constraints on BR$(l_i\rightarrow l_j\gamma)$, BR$(l_i\rightarrow 3l_j)$ and CR$(\mu-e)$. Thus, more reliable predictions for the LFV decays of vector mesons await new experimental data. In the future, the expected sensitivity for BR$(\mu\rightarrow e\gamma)$ will be of order $10^{-13}$ [@MEG]; for BR$(\tau\rightarrow e\gamma)$ and BR$(\tau\rightarrow\mu\gamma)$ it will be $10^{-9}$ [@Bona]; and for CR$(\mu-e, Ti)$ it will be as low as $10^{-16}\sim10^{-17}$ [@AIP]. The predictions of BR$(X\rightarrow e\mu)$ for vector mesons will then become more stringent. Conclusions\[sec:4\] ==================== Taking into account the constraint from $\mu-e$ conversion, we have analyzed the LFV decays of vector mesons in the framework of unparticle physics. In this scenario, the predicted branching ratios of the LFV decays of vector mesons depend strongly on the scaling dimension $d_{\mathcal{U}}$. Supposing the unparticle couplings to the SM fermions are universal, the predicted branching ratios of the LFV decays of vector mesons can reach the experimental detection sensitivity in the region $d_{\mathcal{U}}\geq3$, while the predicted $\mu-e$ conversion rate in the Ti nucleus remains compatible with the experimental upper limit. Although the nonzero neutrino masses supported by the neutrino oscillation experiments imply the nonconservation of lepton flavor, it is very important to search directly for LFV processes in the charged-lepton sector at currently running colliders. The LFV process $\Upsilon\rightarrow e\mu$ is very promising for observation in experiment. Acknowledgements {#acknowledgements .unnumbered} ================ This work has been supported by the National Natural Science Foundation of China (NNSFC) under Grant No. 10975027. [0]{} Y. Fukuda et al. (Super-Kamiokande Collab.), Phys. Rev. Lett. **81**, 1562 (1998). Q. R. Ahmad et al. (SNO Collab.), Phys. Rev. Lett. **87**, 071301 (2001). K. Eguchi et al. 
(KamLAND Collab.), Phys. Rev. Lett. **90**, 021802 (2003). J. C. Pati and A. Salam, Phys. Rev. D **10**, 275 (1974). H. Georgi and S. L. Glashow, Phys. Rev. Lett. **32**, 438 (1974). P. Langacker, Phys. Rep. **72**, 185 (1981). H. E. Haber and G. L. Kane, Phys. Rep. **117**, 75 (1985). C.-H. Chang, T.-F. Feng, Eur. Phys. J. C **12**, 137 (2000). K.-S. Sun, T.-F. Feng, T.-J. Gao, S.-M. Zhao, Nucl. Phys. B, DOI:10.1016/j.nuclphysb.2012.08.005. R. N. Mohapatra and J. C. Pati, Phys. Rev. D **11**, 566 (1975). R. N. Mohapatra and J. C. Pati, Phys. Rev. D **11**, 2558 (1975). G. Senjanovic and R. N. Mohapatra, Phys. Rev. D **12**, 1502 (1975). H. Georgi, Phys. Rev. Lett. **98**, 221601 (2007). H. Georgi, Phys. Lett. B **650**, 275 (2007). T. M. Aliev, A. S. Cornell, N. Gaur, Phys. Lett. B **657**, 27 (2007). D. Choudhury, D. K. Ghosh, Mamta, Phys. Lett. B **658**, 148 (2008). G.-J. Ding, M.-L. Yan, Phys. Rev. D **77**, 014005 (2008). C.-D. Lu, W. Wang, Y.-M. Wang, Phys. Rev. D **76**, 077701 (2007). Z.-T. Wei, Y. Xu, X.-Q. Li, Eur. Phys. J. C **62**, 593 (2009). E. O. Iltan, Eur. Phys. J. C **56**, 105 (2008). E. O. Iltan, Mod. Phys. Lett. A **23**, 3331 (2008). Z.-H. Li, Y. Li, H.-X. Xu, Phys. Lett. B **677**, 150 (2009). M. N. Achasov et al. (SND Collaboration), Phys. Rev. D **81**, 057102 (2010). M. Ablikim et al. (BES Collaboration), Phys. Lett. B **598**, 172 (2004). W. Love et al. (CLEO Collaboration), Phys. Rev. Lett. **101**, 201601 (2008). S. Nussinov, R. D. Peccei, X. M. Zhang, Phys. Rev. D **63**, 016003 (2000). T. Gutsche, J. Helo, S. Kovalenko, V. E. Lyubovitskij, Phys. Rev. D **81**, 037702 (2010). T. Gutsche, J. Helo, S. Kovalenko, V. E. Lyubovitskij, Phys. Rev. D **83**, 115015 (2011). T. Banks and A. Zaks, Nucl. Phys. B **196**, 189 (1982). K. Cheung, W. Y. Keung and T. C. Yuan, Phys. Rev. Lett. **99**, 051803 (2007). T. Li, S.-M. Zhao, X.-Q. Li, Nucl. Phys. A **828**, 125 (2009). P. Ball, V. M. Braun, Phys. Rev. D **54**, 2182 (1996). T.-J. Gao, T.-F. Feng, X.-Q. Li, Z.-G. Si, S.-M. Zhao, Sci. China G **53**, 1988 (2010). M. Beneke, G. 
Buchalla, M. Neubert, C. T. Sachrajda, Nucl. Phys. B **591**, 313 (2000). S.-L. Chen, X.-G. He, X.-Q. Li, H.-C. Tsai, Z.-T. Wei, Eur. Phys. J. C **59**, 899 (2009). Z.-Q. Zhang, Phys. Rev. D **82**, 034036 (2010). Q. Wang, X.-H. Liu, Q. Zhao, arXiv:1103.1095 \[hep-ph\]. H.-W. Ke, X.-Q. Li, Z.-T. Wei, X. Liu, Phys. Rev. D **82**, 034023 (2010). G. Mack, Commun. Math. Phys. **55**, 1 (1977). B. Grinstein, K. A. Intriligator, I. Z. Rothstein, Phys. Lett. B **662**, 367 (2008). R. Kitano, M. Koike, and Y. Okada, Phys. Rev. D **66**, 096002 (2002). O. A. Kiselev \[MEG Collaboration\], Nucl. Instrum. Meth. A **604**, 304 (2009). M. Bona et al., arXiv:0709.0451 \[hep-ex\]. D. Glenzinski, AIP Conf. Proc. **1222**, 383 (2010); Y. G. Cui et al. \[COMET Collaboration\], KEK-2009-10.
Analytical Dynamical Models for Double-Power-Law Galactic Nuclei

HongSheng Zhao

Max-Planck-Institut für Astrophysik, 85740 Garching, Germany. Email: hsz@MPA-Garching.MPG.DE

ABSTRACT

Motivated by the finding that the observed surface brightness profiles of many galactic nuclei are well fit by double power laws, we explore a range of spherical self-consistent dynamical models with this light profile. We find that the corresponding deprojected volume density profile, phase space density and line-of-sight velocity distribution of these models are well fit by simple analytic approximations. We illustrate the application of these models by fitting a sample of about 25 galactic nuclei observed by the Hubble Space Telescope. We give the derived volume density, phase space density, velocity dispersion and line profile parameters in tables. The models are useful for predicting kinematic properties of these galaxies for comparison with future observations. They can also easily be used to seed N-body simulations of galactic nuclei with realistic density profiles for studying the evolution of these systems.

*Subject headings*: galaxies: kinematics and dynamics - line: profiles

Submitted to Monthly Notices of R.A.S.

Introduction ============ Recent high-resolution observations of galactic nuclei find that their surface brightness profiles are well fit by a three-parameter double power law. Assuming a constant mass-to-light ratio, this means that the projected mass density $\mu(R)$ satisfies (Lauer et al. 1995, Byun et al. 
1996) [^1] $$\label{mu} \mu(R) = \mu_0\left({R \over B}\right)^{-\gamma_1} \left(1 +\left({R \over B}\right)^{1 \over \alpha_1}\right)^{-(\beta_1-\gamma_1)\alpha_1},$$ where $\gamma_1$, $\beta_1$ and $\alpha_1$ specify the slope of the inner power law, the slope of the outer power law and the width of the transition region. Meaningful values of the three parameters satisfy $0\leq \gamma_1 \leq \beta_1$ and $\alpha_1>0$. The other parameters $B$, $\mu_0$ are scaling parameters, specifying a length scale (the break radius) and a scale for the projected density. For flattened nuclei, the parametrization applies on the major or minor axis or in a shell-averaged sense. Interpreting this class of density profiles requires a new class of dynamical models. Previous dynamical models mostly have a finite core. A recent set of cuspy models, including the $\gamma/\eta$ models discovered by Dehnen (1994) and Tremaine et al. (1995), and an even wider range of analytical models by Zhao (1996), often do not match the light of nuclei outside the break radius. Unlike these models, the observed light profiles are often much shallower at large radius, and in principle correspond to a divergent mass if extrapolated to infinity. Hence it is necessary to explore dynamical models consistent with a general double-power-law profile, including those with a divergent mass. The existence of a universal parametrization is a major advantage for the theoretical study of their dynamical properties. It makes it possible to study observed galactic nuclei as a class spanned by the three slope parameters $(\alpha_1, \beta_1, \gamma_1)$, and to constrain the models with steady-state dynamics without necessarily using data of individual systems explicitly. In this paper, we give some results which follow immediately from the above surface brightness profiles. We concentrate on the simple class of spherical isotropic models, with the main emphasis on their simple analytical results.
While spherical models with an $f(E)$ distribution function are well known for admitting mathematical solutions, and are often used as a compromise for more realistic but less tractable anisotropic/flattened/triaxial models, the actual implementation of the spherical models is in fact very tedious, and rarely admits simple analytical results. This significantly complicates the first-level interpretation of photometric and kinematic data. The standard inversion using the Eddington formula involves at least three integrations and two derivatives (analytical or numerical) to get $f(E)$ from a surface density profile. Predicting a line profile further requires computing a three-dimensional integral (see e.g., Dehnen 1994) at each grid point in the projected radius vs. velocity plane. For the current problem, one is interested in a class of models with a range of density profiles. The main challenge is to present the results in a manageable and easily interpretable way. Virtually no rigorous analytical solutions are known for the projected models given by Eq.(1). Even for some related analytical models (Zhao 1996, Dehnen 1994, Tremaine et al. 1995), the expressions for the distribution function and for the projected density and dispersion are generally very lengthy, typically involving more than half a dozen analytic terms with no clear physical meaning and with possibly large cancellations between terms. As a result, the relations between observable and model quantities are obscured. In this paper we build a set of spherical dynamical models with simple functional forms for the intrinsic volume density and $f(E)$. We fit these to the double-power-law projected density models. We set the scaling quantities $\mu_0$ and $B$ to unity, and vary the dimensionless parameters $(\alpha_1, \beta_1, \gamma_1)$ in a 3D parameter space to simulate a complete set of radial profiles of the surface light.
Because of the way the fitted models are tailored, the residuals of the fits are typically smaller than the uncertainties in the data, so that the models are practically consistent with the double-power-law surface brightness profile. The main results of the paper are summarized in several universal formulae for the volume density, the phase space density, and the line-of-sight velocity profiles. The model results can be presented directly with the fitting formulae and a few numerically fitted parameters. To demonstrate the applications, we compute the system parameters for about 25 observed galaxies and give them in tables. With these the intrinsic and projected quantities of the model are fully determined, and it involves virtually no further calculation to predict the line-of-sight velocity distributions. The paper is organized as follows. In Section 2 we fit the surface brightness profiles with analytical volume density models. Section 3 gives the analytical expression for the model potential. Section 4 gives the deprojected phase space density obtained by matching the volume density and the potential. In Section 5, the distribution functions are re-projected to yield line-of-sight velocity distributions as well as the dispersion and kurtosis of the profiles, all on a grid of projected radii. We illustrate the model applications in Section 6. We summarize in Section 7. Asymptotic relations for the model quantities are given in Appendix A. Some alternative analytical approximations with rigorous asymptotic solutions are given in Appendix B. A simple formula for the line profiles of the models is derived in Appendix C. Although similar models can be built for anisotropic spherical systems with a black hole, and the techniques are also generalizable to oblate $f(E,J_z)$ systems, we leave these generalized models for a later study (Zhao and Syer 1996).
The three-dimensional $(\alpha_1,\beta_1, \gamma_1)$ parameter space of the spherical models is already very large, and at least two more dimensions would be necessary to cover spherical models with a black hole and anisotropy. Also, while the isotropic models are most likely stable, many simulations are needed to examine the stability of anisotropic or black hole models before applying them to observations. Deprojected volume density profile ================================== One expects an asymptotic power law in surface brightness to correspond to a (steeper) power law in volume density. So the double power law of the observed surface brightness of galactic nuclei suggests that their volume density could be fit by a similar double power law, $$\label{nu} \nu(r) = \nu_0 \left({r \over b}\right)^{-\gamma} \left(1 +\left({r \over b}\right)^{1 \over \alpha}\right)^{-(\beta-\gamma)\alpha},$$ where the new parameters $(\alpha, \beta, \gamma)$ and $\nu_0$ and $b$ have the same meaning as the five parameters in Eq. \[mu\] except that they describe the volume density profile instead of the surface brightness profile. In Appendix A we give the relations between $(\alpha, \beta,\gamma, \nu_0, b)$ and $(\alpha_1, \beta_1,\gamma_1, \mu_0, B)$ based on matching the two densities at asymptotically large or small radii. But except for a few special cases[^2], the deprojected density of $\mu(R)$ in Eq. \[mu\] is generally not as simple as $\nu(r)$ in Eq. \[nu\]. Our approach is to match the two densities in the least-squares sense. We fix the outer slope $$\beta=\beta_1+1,$$ to enforce a good match at large radius.
Then we tune the other four parameters $\alpha, \gamma, \nu_0, b$ to fit the surface brightness to high accuracy by minimizing the following r.m.s. residual, $$\label{ressurf} \delta_\mu = \left({1 \over N_R} \sum_{i=0}^{N_R} [\log I(R_i) - \log \mu(R_i)]^2\right)^{1 \over 2}$$ where $$\label{IR} I(R) =2\int_0^{+\infty} \nu(\sqrt{R^2+z^2}) dz$$ is the projected density of $\nu(r)$, and the fitting positions $R_i$ and the number of fitting points $N_R$ are given in Table 1. The initial values for the fitting parameters are taken from the following approximate relations $$\begin{aligned} \gamma \approx \gamma_{e} &= & \gamma_1+1 \mbox{ {\rm}{if $\gamma_1 >0$} } \\ & = & [1-{1 \over \alpha_1},0]_{max} \mbox{ {\rm}{if $\gamma_1 =0$ } } \end{aligned}$$ to fit near the cusp, where $\gamma_{e}$ is the expected exact asymptotic power for the density, and $$\alpha \approx \alpha_1, \,\,\, b \sim B,$$ to fit near the break radius, and $$\nu_0 b^{-\beta} \sim {\Gamma({\beta \over 2}) \over 2\sqrt{\pi} \Gamma({\beta - 1 \over 2}) } \mu_0 B^{-\beta-1} ,$$ to match the density normalization far away. The first panels of Fig. 1, Fig. 3 and Fig. 5 show a few typical model fits, and the upper panel of Fig. 6 shows their residuals. The residual is small over 4 or more decades in the intensity scale. The typical residual $|\log I(R)- \log \mu(R)| \sim \delta_\mu$ is $10^{-4}-10^{-1}$ for the projected radius $0.01 \le R/B \le 100$. For galactic nuclei observed by the Hubble Space Telescope the light is measured to an accuracy of about $0.1$ magnitude ($\sim$ 4% difference in ten log) from the central $0.1\arcsec$ to $10\arcsec$ outward, with a typical break radius at $1\arcsec$. So the small internal residuals of the models here are negligible when fitting the photometric data. The above results are independent of the dynamics.
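The projection integral of Eq. \[IR\] is cheap to evaluate numerically, which is all the fitting loop above requires. Below is an illustrative Python sketch using `scipy` (function names are ours, and the scale parameters are set to unity as in the text):

```python
import math
from scipy.integrate import quad

def nu(r, alpha, beta, gamma):
    """Double-power-law volume density of Eq. [nu], with nu_0 = b = 1."""
    return r**(-gamma) * (1 + r**(1 / alpha))**(-(beta - gamma) * alpha)

def I_proj(R, alpha, beta, gamma):
    """Projected density I(R) = 2 * int_0^inf nu(sqrt(R^2 + z^2)) dz, Eq. [IR]."""
    val, _ = quad(lambda z: nu(math.hypot(R, z), alpha, beta, gamma),
                  0.0, math.inf)
    return 2.0 * val

# Sanity check against a known pair: (alpha, beta, gamma) = (1/2, 5, 0) gives
# the Plummer density (1 + r^2)^(-5/2), whose exact projection is
# I(R) = (4/3) * (1 + R^2)^(-2).
```

The Plummer case noted in the comment provides a closed-form check of both the parametrization and the quadrature.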
The potential ============= Zhao (1996) shows that the potential $\phi(r)=-\Phi(r)$ corresponding to a double-power-law volume density distribution $\nu(r)$ is simply a sum of two (analytical) incomplete Beta-functions. $$\label{phi} \Phi(r) = 4\pi \nu_0 b^2 \alpha [ {b \over r} B(\alpha ( 3-\gamma ), \alpha ( \beta -3), {({r \over b})^{1 \over \alpha} \over 1+ ({r \over b})^{1 \over \alpha} }) + B(\alpha ( \beta -2), \alpha (2 -\gamma), {1 \over 1+ ({r \over b})^{1 \over \alpha} }) ],$$ where the gravitational constant $G$ is set to unity. As the incomplete Beta-functions can be computed with a fast function call to standard routines in Numerical Recipes (Press et al. 1992), this greatly simplifies the numerics for the dynamics. According to the asymptotic expression for $\Phi(r)$ given in Appendix A, the zero point of the potential is at infinite radius. The depth of the potential well $\Phi_0=\Phi(0)>0$ is infinite for models with a strong cusp with $\gamma \geq 2$ (or if there is a central black hole), and is finite for $\gamma<2$. Deprojected distribution function ================================= In this section, we derive an approximation to the underlying intrinsic distribution function of the double-power-law spherical density models. For simplicity we consider only models with an isotropic velocity distribution, namely, models with distribution function $f=f(Q)$, where we define a positive energy $0 \leq Q \leq \Phi(0) $ with $$\label{Qdef} Q \equiv -E =\Phi(r) -{1 \over 2} v^2,$$ and a function $G(Q)$ that is the integral of $f(Q)$, $$\label{gqdef} G(Q) \equiv \int_0^Q f(Q) dQ.$$ To choose a functional form for $f$ or $G(Q)$, we note that the distribution function of asymptotic power-law systems is often a power law of energy at asymptotically large or small radius. For example, in the Hernquist model, $f(Q) \propto Q^{5/2}$ at large radius and $f(Q) \propto (\Phi_0-Q)^{5/2}$ at small radius.
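As an aside, the Beta-function form of Eq. \[phi\] is easy to evaluate with standard libraries. The sketch below is illustrative Python; note that `scipy.special.betainc` is the *regularized* incomplete Beta function, so it must be multiplied by the complete Beta function to recover $B(a,c;x)$:

```python
import math
from scipy.special import betainc, beta as beta_fn

def B_inc(a, c, x):
    """Unregularized incomplete Beta function B(a, c; x)."""
    return beta_fn(a, c) * betainc(a, c, x)

def Phi(r, alpha, beta, gamma, nu0=1.0, b=1.0):
    """Positive potential Phi(r) of Eq. [phi], with G = 1.
    The Beta-function arguments require gamma < 2 and beta > 3."""
    t = (r / b)**(1 / alpha)
    x = t / (1 + t)
    term1 = (b / r) * B_inc(alpha * (3 - gamma), alpha * (beta - 3), x)
    term2 = B_inc(alpha * (beta - 2), alpha * (2 - gamma), 1 - x)
    return 4 * math.pi * nu0 * b**2 * alpha * (term1 + term2)

# Check against the Hernquist model (alpha, beta, gamma) = (1, 4, 1), nu0 = b = 1:
# the density r^-1 (1 + r)^-3 has total mass 2*pi, so Phi(r) = 2*pi / (1 + r).
```

The Hernquist case in the comment gives a closed-form check: $\Phi(1)=\pi$ and $\Phi(2)=2\pi/3$ in these units.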
So a sensible universal formula for the distribution function of the double-power-law density models should be a smooth positive function of $Q$ which reduces to a power law at small $Q$ and large $Q$. The following contrived expression for $G(Q)$ has the desired property $$\label{intdf} G(Q) = f_0 Q_b q^{\beta_2} \left(1+q^{1 \over \alpha_2}\right)^{(\gamma_2-\beta_2)\alpha_2},$$ where $f_0$ and $Q_b$ are two scaling quantities with the dimensions of phase space density and energy, and $(\alpha_2, \beta_2, \gamma_2)$ are three further dimensionless parameters specifying the shape of the distribution. We define a dimensionless energy $q$, which is a rescaling of the energy $Q$ with $$\begin{aligned} \label{qdef} q & \equiv &{ {Q \over Q_b} \over 1 - {Q \over \Phi_0 } }, \mbox{ {\rm}{for finite potential well,}} \\ & \equiv & {Q \over Q_b}, \mbox{ {\rm}{for infinitely deep potential well,}} \end{aligned}$$ so that $q$ runs from $0$ to $\infty$ with decreasing radius for models with finite as well as infinitely deep potential wells. Note that smaller (larger) values of $Q$ or $q$ correspond to larger (smaller) radii, and $G(Q)$ reduces to a power law of $q$ at both large and small $q$. Taking the derivative of Eq. \[intdf\] we obtain the corresponding distribution function $f(Q)$ as $$\label{df} f = f(Q) = f_0 q^{\beta_2-1} (1+q^{1 \over \alpha_2})^{(\gamma_2-\beta_2)\alpha_2} \times W \times U,$$ where $$\label{wdef} W \equiv \gamma_2+ {\beta_2-\gamma_2 \over 1+q^{1 \over \alpha_2} },$$ and $$\begin{aligned} \label{udef} U & \equiv &{ 1 \over (1 - {Q \over \Phi_0})^2 } = (1+ q{Q_b \over \Phi_0 })^2 , \mbox{ {\rm}{for finite potential well,}} \\ & \equiv & 1, \mbox{ {\rm}{for infinitely deep potential well,}} \end{aligned}$$ are two dimensionless factors.
It then follows from the above equation that $f(Q)$ is positive definite given that $\gamma_2 \ge 0$ and $\beta_2 \ge 0$, and has the following asymptotic power-law dependence on $Q$, $$\begin{aligned} \label{fasy} f(Q) & \propto & Q^{\beta_2-1}, \mbox{ {\rm}{ $Q \rightarrow 0$}}\\ & \propto & Q^{\gamma_2-1}, \mbox{ {\rm}{ $ Q \rightarrow \Phi_0=\infty $}} \\ & \propto & (\Phi_0 - Q)^{-\gamma_2-1}, \mbox{ {\rm}{ $ Q \rightarrow $ a finite $ \Phi_0 $ and $\gamma_2>0$ }} \\ & \propto & 1, \mbox{ {\rm}{ $ Q \rightarrow $ a finite $ \Phi_0 $ and $\gamma_2=0$ }}.\end{aligned}$$ To briefly comment on more general models, the parametrization for $G(Q)$ or $f(Q)$ is also plausible for models with a central black hole, as $f(Q)$ is a power law for an infinitely deep potential well (Tremaine et al. 1994). One can also obtain its simple counterpart in Osipkov-Merritt type anisotropic models by replacing $Q$ with $Q_a=-E-{1 \over 2} \eta J^2$. For a given potential $-\Phi(r)$, the distribution function $f(Q)$ has five fitting parameters $(\alpha_2, \beta_2, \gamma_2, Q_b, f_0)$. These are determined by making $f(Q)$ and the volume density $\nu(r)$ consistent. In practice we minimize the following r.m.s. residual, $$\label{resden} \delta_\nu = \left({1 \over N_r} \sum_{i=0}^{N_r} [\log n(r_i) - \log \nu(r_i)]^2\right)^{1 \over 2}$$ where the fitting positions $r_i$ and the number of fitting points $N_r$ are given in Table 1, and $$\label{nr} n(r) = 4\pi \sqrt{2} \int_0^{\Phi(r)} f(Q) \sqrt{\Phi(r) -Q} dQ$$ is the volume density corresponding to $f(Q)$ (cf. 
Binney and Tremaine 1987); after an integration by parts it reduces to a more convenient expression for $n(r)$:[^3] $$\label{nr1} n(r) = 2\pi \sqrt{2} \int_0^{\Phi(r)} {G(Q) \over \sqrt{\Phi(r) -Q}} dQ.$$ There are several simple (approximate) relations between the parameters $(\alpha_2, \beta_2, \gamma_2, Q_b, f_0)$ and $(\alpha, \beta, \gamma, b, \nu_0)$, which follow from matching the densities $\nu(r)$ and $n(r)$ at large and small radii. If $\beta_{2e}$ and $\gamma_{2e}$ are the exact asymptotic powers for $G(Q)$ as given in Appendix A, we require $\beta_2 =\beta_{2e}$ in the fitting program. The true fitting parameters then reduce to only four. The somewhat rigid form of the five-parameter distribution function $f(Q)$ prevents fixing $\gamma_2$ to $\gamma_{2e}$: in order to fit everywhere about equally well, the fitted function does not follow the analytical asymptotic behavior exactly. Alternative solutions to cure this problem are discussed in Appendix B. Some fits are shown in the lower left panels of Fig. 1, Fig. 3 and Fig. 5, and their residuals are shown in the lower panel of Fig. 6. The reprojected volume density $n(r)$ can often fit $\nu(r)$ satisfactorily over several decades in density, with typical residuals $|\log n(r) - \log \nu(r)| \sim \delta_\nu$ from 0.1 to 0.001 for radius $ 0.01 \le r/B \le 100$. Table 5 gives the parameters of the models shown in Fig. 1, Fig. 3 and Fig. 5. Re-projected velocity profiles ============================== The projected velocity profiles are the main observable constraints on the dynamical mass distribution of a system. Since the profiles $L(R, v_z)$ are functions of both projected radius $R$ and line-of-sight velocity $v_z$, one often prefers to use moments of the profiles at selected projected radii as convenient comparisons with observation.
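Eq. \[nr1\] reduces the density integral to one dimension, so each evaluation inside the fitting loop is cheap. A minimal Python sketch (our own function names; infinitely deep well, so $q=Q/Q_b$):

```python
import math
from scipy.integrate import quad

def G_of_Q(Q, f0, Qb, a2, b2, g2):
    """Integrated DF G(Q) of Eq. [intdf] for an infinitely deep well."""
    q = Q / Qb
    return f0 * Qb * q**b2 * (1 + q**(1 / a2))**((g2 - b2) * a2)

def n_of_r(Phi_r, f0, Qb, a2, b2, g2):
    """n(r) = 2*pi*sqrt(2) * int_0^Phi(r) G(Q)/sqrt(Phi(r)-Q) dQ, Eq. [nr1].
    The inverse-square-root endpoint singularity is integrable."""
    val, _ = quad(lambda Q: G_of_Q(Q, f0, Qb, a2, b2, g2)
                  / math.sqrt(Phi_r - Q), 0.0, Phi_r)
    return 2.0 * math.pi * math.sqrt(2.0) * val

# With beta_2 = gamma_2 = 1 the shape factor is unity and G(Q) = f0 * Q,
# for which the integral is exact: n = 2*pi*sqrt(2) * (4/3) * Phi(r)^(3/2).
```

The linear-$G$ case in the comment is a useful closed-form check of the quadrature near the singular endpoint.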
For the spherical $f(E)$ models here, the odd moments such as rotation and skewness are all zero, and most of the information in the profiles is contained in the lowest order even moments, namely, the line intensity (presumably proportional to the projected density $\mu(R)$), the dispersion $\sigma(R)$ and the kurtosis. There are several ways to represent a profile with the lowest moments. While the Gauss-Hermite expansion (van der Marel and Franx 1993) is mathematically elegant, Zhao and Prada (1996) showed that the direct Gauss-Hermite expansion generically gives rise to profiles with negative wings and multiple peaks. To cure these problems without losing the elegance and many nice properties of the Gauss-Hermite expansion, they proposed the following fitting formula for line profiles, which we will use. $$\label{linefit} L (R, v_z) = {L_0(R) \over \sqrt{2 \pi} \sigma} e^{-{v_z^2 \over 2 \sigma(R)^2}} \{ 1 + \lambda e^{-{ v_z^2 \over 2 \sigma(R)^2}} c_4(R) H_4 (\lambda { v_z \over \sigma(R) } ) \},~~~ \lambda=\sqrt{3 \over 2},$$ where $H_4(y)$ is the fourth-order Hermite polynomial of $y$ $$H_4 (y)= {2 \over \sqrt{6}} (y^4 - 2y^2 + {3 \over 4}),$$ $\sigma(R)$ is the best-fit dispersion at radius $R$, and $c_4(R)$ is a parameter describing the kurtosis of the profile. This fitting formula differs from the usual Gauss-Hermite expansion mainly by the extra Gaussian damping term $\sqrt{3 \over 2} e^{-{ v_z^2 \over 2 \sigma(R)^2}}$ in front of the Hermite polynomial $H_4$, which helps to suppress oscillatory peaks far from the systemic velocity and to eliminate the unphysical negative wings; the coefficients $\sqrt{3 \over 2}$ in the damping term and in the $H_4$ preserve the orthogonality of the basis functions. The formula is robust for mildly double-peaked $-0.25 \le c_4 \le -0.15$ or mildly cuspy $0.2 \le c_4 \le 0.45$ profiles and for profiles close to Gaussian $-0.1 \le c_4 \le 0.1$.
The conventional $h_4$ parameter is approximately $c_4$ for nearly Gaussian profiles, but the former cannot fit mildly non-Gaussian profiles. For the dynamical models here, $\sigma(R)$ and $c_4(R)$ are determined by fitting $L(R, v_z)$ to the projected velocity distribution $P(R, v_z)$ at each projected radius $R$. We minimize the following r.m.s. residual at each radius $R$, $$\label{chiline} \left({1 \over N_v} \sum_{j=0}^{N_v} [L(R, v_j) - P(R, v_j)]^2\right)^{1 \over 2},$$ where the velocity grid $v_j$ and the number of points $N_v$ are given in Table 1; the velocities $v_j$ scale with the escape velocity at positions $R_i$. The projected velocity distribution $P(R, v_z)$ at projected radius $R$ of the $f(Q)$ models is simply $$\begin{aligned} \label{profile} P(R, v_z) & \equiv & \int^\infty_{-\infty} dz \int\int dv_x dv_y f(Q) \\ & = & 4\pi \int_0^\infty dz {G(Q=\Phi(\sqrt{R^2+z^2}) - {1 \over 2} v_z^2) },\end{aligned}$$ where $G(Q)$ is defined in Eq. \[intdf\]. So the profile $P(R, v_z)$ has been reduced from a 3D to a 1D line-of-sight integration. See Appendix C for a derivation of this equation and its generalized form for anisotropic systems. Figs. 2 and 4 show some typical line-of-sight velocity distribution fits at three different radii $R=0.1,1,10$. One can see that the fitting formula can recover the profiles $P(R, v_z)$ to good accuracy. The residual is typically between $0.01$ and $0.05$ of the peak intensity. The right panels of Fig. 1, Fig. 3, and Fig. 5 also show the radial run of the dispersion and kurtosis for a few models. As noted in Tremaine et al. (1994), depending on the strength of the central cusp, the dispersion can have a peak near the break radius (if $1 \le \gamma < 2$) or a steadily falling radial profile (if $\gamma \ge 2$ or $\gamma < 1$). An application to observed galaxies =================================== As an illustration of the models, we apply them to a sample of observed galactic nuclei given in Byun et al. (1996).
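The 1D reduction above makes evaluating $P(R, v_z)$ a single quadrature. Below is a hedged Python sketch with a toy potential and a toy $G(Q)$ (both illustrative stand-ins, not the paper's fitted model); note the argument $\Phi - v_z^2/2$:

```python
import numpy as np
from scipy.integrate import quad

def Phi(r):
    # illustrative Plummer-like relative potential, Phi(0) = 1, Phi(inf) = 0
    return 1.0 / np.sqrt(1.0 + r**2)

def G(Q):
    # toy integrated distribution function; G(Q) = 0 for unbound Q <= 0
    return np.maximum(Q, 0.0)**2

def P(R, vz, zmax=50.0):
    """P(R, v_z) = 4*pi * int_0^inf G(Phi(sqrt(R^2+z^2)) - v_z^2/2) dz."""
    integrand = lambda z: G(Phi(np.hypot(R, z)) - 0.5 * vz**2)
    val, _ = quad(integrand, 0.0, zmax)
    return 4.0 * np.pi * val

# the profile falls with |v_z| and with projected radius R
print(P(1.0, 0.0), P(1.0, 0.8), P(0.1, 0.0))
```

In practice the upper limit `zmax` truncates the line-of-sight integral once the toy potential has decayed; for a real model one would choose it from the model's outer profile.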
The intrinsic volume density and phase space density parameters of these systems are derived and listed in Table 4. Interestingly, most of the observed nuclei have a divergent total mass if the light profile is extrapolated to infinity, because their $\beta=\beta_1+1 \le 3$. Typical values of $\alpha_1$ are $\sim 0.5$. These properties would not be adequately accounted for by the narrower, previously known class of analytical models of Dehnen (1993) and Tremaine et al. (1994), which are characterized by $\beta=4$ and $\alpha=1$. The residuals of our proposed analytical models for the volume density and phase space density of galactic nuclei are about as small as (if not smaller than) the residual for the double-power-law used by Byun et al. to fit the photometric data. We conclude that the universal surface brightness profile also corresponds to a universal volume density and phase space density, and that the analytical models here are reliable for interpreting observations. Given this, we can make predictions for observable kinematics. The fitted values of $\sigma(R)$ and $c_4(R)$ on a radial grid are given in Table 5. To obtain values at other radii, one can simply interpolate between the tabulated values, as both quantities are smooth functions of the radius. As is clear from Table 5, the model predicts a very small kurtosis near or outside the break radius of these observed nuclei. We note that this is generic, as shown in the right bottom panels of Fig. 1, Fig. 3 and Fig. 5 for hypothetical systems. We find that for the whole class of double-power-law isotropic models with $0 \le \gamma_1 < 2$, $\beta_1 \ge [2,\gamma_1]_{max}$ and $0.5 \le \alpha_1 \le 2$, the profiles are always very close to Gaussian with $-0.05 <c_4(R) <0.2$. The amplitude of $c_4(R)$ generally increases towards the center, but is small at all radii. Outside the core, $R \ge 1$, the kurtosis is negligibly small with $|c_4(R)| \le 0.03$.
These results suggest that velocity profiles are always very close to Gaussian for the whole class of isotropic double-power-law models. They support the interpretation that strongly non-Gaussian profiles near or inside the break radius are indications of either anisotropy or a central black hole. Summary ======= In summary, a large number of galactic nuclei obey a parametrized double-power-law surface brightness radial profile (Byun et al. 1996). We find that their intrinsic volume density fits a similar universal double-power-law with a comparable residual. We further explore spherical isotropic models consistent with these profiles, and find a simple fitting formula for the distribution function $f(E)$ as well. These parametrizations are tailored so that their functional forms reduce to power-laws at large or small radii. These analytical models also simplify the procedures to interpret photometric and kinematic data of galactic nuclei. We demonstrate the models with a simple application to a group of observed galactic nuclei, and predict the radial runs of their velocity dispersion and kurtosis. Tables for the computed models as well as FORTRAN programs to run additional models are available at http://ftp.ibm-1.mpa-garching.mpg.de/pub/hsz. Galactic nuclei are generally flattened, with a possible central black hole and velocity anisotropy. For such models the distribution function is generally a function of two or three integrals, $f=f(E,J_z,I_3)$. Still, the simple spherical models here can provide insights which help to build these more complex models. We expect that an $f(E,J_z,I_3)$ with an energy dependence similar to the fitting formula for the isotropic models here will give a plausible fit to anisotropic flattened systems with double-power-law radial profiles. I thank Dave Syer for a critical reading of the manuscript and many helpful comments. Binney, J.J., & Tremaine, S., 1987, ‘Galactic Dynamics’ (Princeton University Press, New Jersey).
Byun et al. 1996, SISSA/astro-ph/9602117

Dehnen, W. 1993, , 265, 250

Lauer, T. R., Ajhar, E. A., Byun, Y. I., Dressler, A., Faber, S. M., Grillmair, C., Kormendy, J., Richstone, D., & Tremaine, S. 1995, AJ, 110, 2622

Press, W. et al. 1992, Numerical Recipes in FORTRAN, 2nd edition (Cambridge University Press: New York)

Tremaine, S. et al. 1994, AJ, 107, 634

van der Marel, R. P., & Franx, M., 1993, ApJ, 407, 525

Zhao, H.S. 1996, , 278, 488

Zhao, H.S., & Prada, F. 1996, , accepted

Zhao, H.S., & Syer, D. 1996, work in progress

Appendix ======== Asymptotic expressions of the double-power-law models ===================================================== Here we give the asymptotic expressions for the projected density, the phase space density and the potential for the double-power-law [*volume*]{} density model. For the volume density $\nu(r)$ $$\begin{aligned} \label{nuasm.1} \nu (r) & \rightarrow & \nu_0 ({r \over b})^{-\beta} \mbox{ {\rm}{if $r\rightarrow +\infty$}}, \\ \label{nuasm.2} & \rightarrow & \nu_0 ({r \over b})^{-\gamma} \mbox{ {\rm}{if $ r \rightarrow 0$}}.\end{aligned}$$ For the projected density $I(R)$ $$\begin{aligned} I(R) & = & 2\int_0^{+\infty} \nu(\sqrt{R^2+z^2}) dz \\ \label{irasm.1} &\rightarrow & c_{\beta-1} ({R \over b})^{1-\beta} \mbox{ {\rm}{if $R \rightarrow +\infty $ }} \\ \label{irasm.2} &\rightarrow & I_0 H(1-\gamma)+ c_{\gamma-1} ({R \over b})^{1-\gamma} \mbox{ {\rm}{ if $R \rightarrow 0 $ }},\end{aligned}$$ where $H(x)$ is a step function, which is unity for $x>0$ and zero otherwise, and $$\label{cn} c_n = {2 \nu_0 b \sqrt{\pi} \Gamma(1+{n \over 2}) \over n \Gamma({1+n \over 2}) },~~~ n >0,$$ and $$\label{i0} I_0 =2 \nu_0 b \alpha B(\alpha(\beta-1), \alpha(1-\gamma)).$$ For the potential $\Phi(r)$ $$\begin{aligned} \label{phiasm.1} {\Phi(r) \over 4\pi \nu_0 b^2 } & \rightarrow & \alpha {b \over r} B(\alpha (3-\gamma ), \alpha ( \beta -3)) H(\beta-3) + {1 \over (3-\beta)(\beta-2)} ({b \over r})^{\beta-2} \mbox{ {\rm}{if
$r\rightarrow +\infty$}}\\ \label{phiasm.2} & \rightarrow & \alpha B(\alpha (\beta -2), \alpha (2 -\gamma)) H(2-\gamma) + {1 \over (3-\gamma)(\gamma-2)} ({b \over r})^{\gamma-2} \mbox{ {\rm}{if $ r \rightarrow 0$}} \end{aligned}$$ where $\beta>[2, \gamma]_{min}$, $0\leq \gamma<3$. The zero point of the potential is at infinitely large radius. The depth of the potential well $\Phi_0=\Phi(0)>0$ is infinite for models with a strong cusp with $\gamma \geq 2$ (or if there is a central black hole), and is finite for $\gamma<2$. For the phase space density $f(Q)={d \over dQ} G(Q)$, we have $$\begin{aligned} G(Q) &=& {1 \over 2\pi^2 \sqrt{2}} \int_{Q \ge \Phi(r)} {d \nu(r) \over dr} {1 \over \sqrt{Q-\Phi(r)}} dr \\\label{gasm.1} & \propto & q^{\beta_{2e}} \mbox{ {\rm}{if $Q \rightarrow 0$}}\\ \label{gasm.2} & \propto & q^{\gamma_{2e}} \mbox{ {\rm}{if $ Q \rightarrow \Phi(0) $}} ,\end{aligned}$$ where $q$ is given in Eq. \[qdef\], and $$\label{beta2e} \beta_{2e} = [ {\beta \over \beta-2 }- {1 \over 2}, \beta - {1 \over 2} ]_{max},$$ and $$\begin{aligned} \label{gamma2e} \gamma_{2e} &= & {\gamma +2 \over 2|\gamma-2| } \mbox{ {\rm}{if $\gamma >0$ } } \\ & = & [{1 \over 2 } (1 -{1 \over \alpha}),0]_{max} \mbox{ {\rm}{if $\gamma=0$. } } \end{aligned}$$ Other Approximate Models ======================== A disadvantage of the fitting formulae proposed in the main text for the volume density and the phase space density is that they do not rigorously follow the analytical asymptotic expressions at small radius. Rather, the parameters $\gamma$ and $\gamma_2$ are free fitting parameters adjusted to fit all radii equally well. This may be acceptable for fitting observations with finite resolution at the center, but it is not satisfactory for theoretical modelling. On the other hand, it is possible to devise other expressions which have the expected analytical asymptotic behavior while still keeping the residual small near the transition region.
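The exact asymptotic powers above are easy to tabulate. Here is a direct Python transcription (the $[\cdot,\cdot]_{max}$ bracket is an ordinary maximum; the borderline case $\gamma = 2$, where $\Phi_0$ becomes infinite, is excluded):

```python
def beta2e(beta):
    """Outer asymptotic power of G(Q) as Q -> 0 (the [.,.]_max bracket)."""
    return max(beta / (beta - 2.0) - 0.5, beta - 0.5)

def gamma2e(alpha, gamma):
    """Inner asymptotic power of G(Q) as Q -> Phi(0).

    The case gamma = 2 is excluded since |gamma - 2| vanishes there.
    """
    if gamma > 0:
        return (gamma + 2.0) / (2.0 * abs(gamma - 2.0))
    return max(0.5 * (1.0 - 1.0 / alpha), 0.0)

print(beta2e(4.0), gamma2e(0.5, 1.0))   # 3.5 1.5
```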
The projected density $I(R)$ of the double-power-law density $\nu(r)$ (cf. Eq. \[nu\], and \[IR\]) satisfies the following approximation, $$\label{irapprox} I(R) \approx I_{in}(R) + I_{out}(R) ,$$ where $$\label{in} I_{in}(R) = c_{\gamma-1} ({R \over b})^{1-\gamma} (1+({R \over b})^{1 \over \alpha})^{-1-\alpha(\beta-\gamma)}, \mbox{ {\rm}{ if $\gamma>1 $ }},$$ and $$\label{out} I_{out}(R) = c_{\beta-1} ({R \over b})^{1-\gamma+ {1 \over \alpha} } (1+({R \over b})^{1 \over \alpha})^{-1-\alpha(\beta-\gamma)},$$ where $c_n$ is given in Eq. \[cn\]. The approximation is devised so that $I_{in}(R)+I_{out}(R)$ is rigorously $I(R)$ at asymptotically large or small radii: $$I(R) \rightarrow I_{in}(R) \rightarrow c_{\gamma-1} ({R \over b})^{1-\gamma} \gg I_{out}(R) \mbox{ {\rm}{ if $R \rightarrow 0 $ }},$$ and $$I(R) \rightarrow I_{out}(R) \rightarrow c_{\beta-1} ({R \over b})^{1-\beta} \gg I_{in}(R) \mbox{ {\rm}{if $R \rightarrow +\infty $ }}.$$ The approximation is also found to be typically accurate to within $10\%$ in the transition region. This is qualitatively understandable if one notes the equality $$\nu_{\alpha,\beta,\gamma} (r) = \nu_{\alpha,\beta+{1\over \alpha},\gamma} (r) + \nu_{\alpha,\beta,\gamma-{1\over \alpha}} (r) ,$$ where the subscripts specify the double-power-law slopes, and that the projected densities of $\nu_{\alpha,\beta+{1\over \alpha},\gamma} (r)$ and $\nu_{\alpha,\beta,\gamma-{1\over \alpha}} (r)$ are roughly $I_{in}(R)$ and $I_{out}(R)$, respectively. The above suggests that it is worthwhile to fit real photometric data with $I_{in}(R)+I_{out}(R)$ instead of $\mu(R)$, because one obtains simple accurate expressions for the projected density and the volume density simultaneously. But note that if $\gamma \le 1$, namely, if the projected profile has a finite core, the expression for $I_{in}(R)$ is undefined (because $c_{\gamma-1}$ is undefined, see Eq. \[cn\]).
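The two components of the approximation transcribe directly into code. The Python sketch below (helper names ours; it requires $\gamma > 1$ for the $c_{\gamma-1}$ coefficient to exist) shows the expected asymptotic dominance of each term:

```python
import math

def c_n(n, nu0=1.0, b=1.0):
    """Coefficient c_n; only defined for n > 0."""
    return (2.0 * nu0 * b * math.sqrt(math.pi) * math.gamma(1.0 + n / 2.0)
            / (n * math.gamma((1.0 + n) / 2.0)))

def I_in(R, alpha, beta, gamma, b=1.0):
    # inner component; requires gamma > 1
    x = (R / b) ** (1.0 / alpha)
    return c_n(gamma - 1.0) * (R / b) ** (1.0 - gamma) * (1.0 + x) ** (-1.0 - alpha * (beta - gamma))

def I_out(R, alpha, beta, gamma, b=1.0):
    # outer component
    x = (R / b) ** (1.0 / alpha)
    return c_n(beta - 1.0) * (R / b) ** (1.0 - gamma + 1.0 / alpha) * (1.0 + x) ** (-1.0 - alpha * (beta - gamma))

# inner term dominates at small R, outer term at large R (alpha, beta, gamma = 1, 4, 1.5)
print(I_in(1e-4, 1.0, 4.0, 1.5) / I_out(1e-4, 1.0, 4.0, 1.5))
print(I_out(1e4, 1.0, 4.0, 1.5) / I_in(1e4, 1.0, 4.0, 1.5))
```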
With similar techniques, one can also work out a simple approximation to the phase space density corresponding to the double-power-law. The idea is to write $$f(Q) = f_{in}(Q) + f_{out} (Q),$$ so that $f_{in}(Q)$ is approximately consistent with $\nu_{\alpha,\beta+{1\over \alpha},\gamma} (r)$ and $f_{out}(Q)$ with $\nu_{\alpha,\beta,\gamma-{1\over \alpha}} (r)$. The results (not given here) are somewhat tedious, depending on the range of $\gamma$ and $\beta$. Line profile expressed as a 1D integral ======================================= Here we show that the line profile can be reduced to a 1D integral for the models. Without loss of generality, we will derive the equations in the slightly more general context of an Osipkov-Merritt type anisotropic model, where the distribution function $$f(E,J)=f(Q_a), \,\, \mbox{ {\rm}{ $Q_a \equiv -E-{1 \over 2} \eta J^2$}}$$ where $\eta \equiv {1 \over r_a^2}$ has the dimension of inverse distance squared, and $r_a$ is an anisotropy radius. If $\eta >0$, then beyond $r_a$ most orbits are radial. The model reduces to the isotropic case, with $f=f(E)$, when $\eta=0$.
Generally $$Q_a=\Phi(r)-{1 \over 2}(v_x^2+v_y^2+v_z^2+\eta J^2),$$ and $$\begin{aligned} J^2 & = & (x v_y -y v_x)^2 +(x v_z- z v_x)^2 +(z v_y - y v_z)^2,\\ & = & v_x^2 (y^2+z^2) + v_y^2 (x^2+z^2) + v_z^2 (x^2+y^2) -2(xyv_xv_y + yzv_yv_z + zxv_zv_x).\end{aligned}$$ Without loss of generality, we can set $$x=R, \,\, y=0$$ It then follows that $$Q_a= \Phi(r)-{1 \over 2}(a_R v_z^2 + a_z v^2_x - 2a_{Rz} v_zv_x + a_r v^2_y)$$ where $$a_R= (1+\eta R^2), \,\, a_z= (1+\eta z^2), \,\, a_r= (1+\eta z^2+ \eta R^2), \,\, a_{Rz}= \eta Rz.$$ With a change of variables $$v_y = v'_y,\,\, v_x = v'_x + u,\,\, u={a_{Rz}v_z \over a_z},$$ we have $$dv_x dv_y = dv'_x dv'_y.$$ and $$Q_a= \Phi(r)-{1 \over 2}(v_z^2 a_{rz} +v'^2_x a_z + v'^2_y a_r ),$$ where $$a_{rz}= (a_R - {a^2_{Rz} \over a_z} ) = {1+\eta (R^2+z^2) \over 1+ \eta z^2}.$$ With a further transformation of coordinates, one finds that $$\int\int dv_x dv_y f(Q_a) = \int_0^{Q_a} \int_0^{2 \pi} d\theta d Q_a f(Q_a) {1 \over \sqrt{a_r a_z} }.$$ If one can devise a function $G_a(Q_a)$ as an elementary function of $Q_a$ and specify the phase space density $f(Q_a)$ by $$f(Q_a) = {d \over dQ_a} G_a(Q_a),$$ then the 3D integral for the line profile is reduced to a 1D integral, $$\begin{aligned} P(R, v_z) & = & \int^\infty_{-\infty} dz \int\int dv_x dv_y f(Q_a) \\\label{profileani} & = & 4\pi \int_0^\infty dz {G_a(Q_a) \over (1+ \eta (R^2+z^2) )^{1/2} (1+\eta z^2)^{1/2} },\end{aligned}$$ where $$Q_a=\Phi(\sqrt{R^2+z^2})- {v_z^2 \over 2} {1+\eta (R^2+z^2) \over 1+ \eta z^2} .$$ When $\eta=0$ and $G_a(Q_a)=G(Q)$, the above reduces to Equation \[profile\] of isotropic models. 
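The anisotropic 1D integral just derived is as easy to evaluate numerically as the isotropic one. The sketch below (toy `Phi` and `G_a`, both assumptions for illustration, not the paper's models) implements it; setting `eta=0` recovers the isotropic formula:

```python
import numpy as np
from scipy.integrate import quad

def Phi(r):
    return 1.0 / np.sqrt(1.0 + r**2)       # toy relative potential

def G_a(Qa):
    return np.maximum(Qa, 0.0)**2          # toy integrated distribution function

def P_aniso(R, vz, eta=0.0, zmax=50.0):
    """1D line-of-sight integral for an Osipkov-Merritt model (eta = 1/r_a^2)."""
    def integrand(z):
        r2 = R**2 + z**2
        Qa = Phi(np.sqrt(r2)) - 0.5 * vz**2 * (1.0 + eta * r2) / (1.0 + eta * z**2)
        return G_a(Qa) / np.sqrt((1.0 + eta * r2) * (1.0 + eta * z**2))
    val, _ = quad(integrand, 0.0, zmax)
    return 4.0 * np.pi * val

# radial anisotropy (eta > 0) suppresses the profile at fixed (R, v_z)
print(P_aniso(1.0, 0.3, eta=0.0), P_aniso(1.0, 0.3, eta=1.0))
```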
[rrrrrrrrrrrrr]{}
0.01 & 0.02 & 0.05 & 0.1 & 0.2 & 0.5 & 1 & 2 & 5 & 10 & 20 & 50 & 100

[rrrrrrrrrr]{}
0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9

[rrrrrrrrrrrrr]{}
$0.5\_1.8\_0.0$&0.50&2.80&0.00&1.00&0.47&1.E-03&0.64&3.50&0.16&27.80&-7.40&7.E-02
$0.5\_1.8\_0.5$&0.50&2.80&1.36&1.36&0.20&3.E-02&0.25&3.50&2.34&18.90&-7.99&2.E-01
$0.5\_1.8\_1.0$&0.50&2.80&1.97&1.40&0.19&1.E-02&0.67&3.50&9.99&14.47&-8.66&2.E-01
$0.5\_1.8\_1.5$&0.50&2.80&2.49&1.37&0.20&3.E-03&1.87&3.50&5.10&30.13&-8.30&6.E-03
$0.5\_1.2\_0.5$&0.63&2.20&1.36&1.26&0.22&3.E-02&1.12&11.00&2.27&13.79&-12.04&1.E-01
$0.5\_1.8\_0.5$&0.50&2.80&1.36&1.36&0.20&3.E-02&0.25&3.50&2.34&18.90&-7.99&2.E-01
$0.5\_2.4\_0.5$&0.50&3.40&1.33&1.29&0.24&3.E-02&0.25&3.40&2.17&9.49&-7.31&2.E-01
$0.5\_3.0\_0.5$&0.50&4.00&1.31&1.23&0.28&4.E-02&0.37&4.00&2.06&3.00&-7.73&7.E-02
$0.5\_1.4\_0.0$&0.51&2.40&0.00&1.00&0.40&4.E-03&1.29&6.00&0.00&23.56&-6.54&2.E-02
$1.0\_1.4\_0.0$&1.05&2.40&0.26&0.85&0.60&3.E-03&1.27&6.00&0.55&18.72&-7.67&6.E-03
$1.5\_1.4\_0.0$&1.57&2.40&0.44&0.79&0.73&2.E-03&1.31&6.00&0.83&17.18&-7.98&2.E-02
$2.0\_1.4\_0.0$&2.00&2.40&0.64&0.94&0.47&5.E-03&1.29&6.00&1.11&14.93&-8.40&2.E-02

[rrrrrrrrrrrrl]{}
N 596&$1.3\_2.0\_0.6$&1.29&2.97&1.47&1.32&0.22&0.79&2.57&3.25&2E-3&-25.6&3:0:2
N 720&$0.4\_1.7\_0.1$&0.37&2.66&0.44&1.22&0.26&0.75&3.53&0.64&17.6&-8.17&1:0:0
N1172&$0.7\_1.6\_1.0$&0.64&2.64&1.98&1.40&0.18&0.47&3.63&9.99&13.0&-8.54&3:0:4
N1399&$0.7\_1.7\_0.1$&0.63&2.68&0.52&1.15&0.31&0.70&3.43&0.77&18.3&-8.07&1:0:0
N1400&$0.7\_1.3\_0.0$&0.74&2.32&0.13&0.92&0.47&1.31&6.77&0.32&20.0&-7.45&3:0:0
N1600&$0.8\_2.2\_0.0$&0.81&3.18&0.20&0.94&0.64&0.40&2.68&0.54&16.8&-6.90&2:0:4
N1700&$1.1\_1.3\_0.0$&1.14&2.30&0.35&0.88&0.52&1.34&7.10&0.64&17.7&-8.27&2:0:0
N2832&$0.5\_1.4\_0.0$&0.54&2.40&0.23&1.07&0.34&1.13&5.51&0.40&19.2&-7.80&1:0:0
N3115&$0.7\_1.4\_0.8$&0.67&2.43&1.75&1.46&0.16&0.35&5.16&4.78&0.92&-20.4&2:0:4
N3377&$0.5\_1.3\_0.3$&0.41&2.33&1.12&1.55&0.14&0.81&6.52&1.63&13.6&-10.7&3:0:0
N3379&$0.6\_1.4\_0.2$&0.55&2.43&0.89&1.40&0.18&0.90&5.13&1.20&13.9&-9.35&2:0:0
N3608&$1.0\_1.3\_0.0$&0.98&2.33&0.27&0.89&0.51&1.31&6.56&0.53&18.3&-7.94&2:0:0
N4168&$1.1\_1.5\_0.1$&1.04&2.50&0.80&1.17&0.28&1.00&4.50&1.14&15.9&-8.35&2:0:0
N4365&$0.5\_1.3\_0.1$&0.40&2.27&0.81&1.44&0.16&0.93&7.91&1.06&17.8&-10.2&2:0:1
N4464&$0.6\_1.7\_0.9$&0.55&2.68&1.85&1.45&0.17&0.86&3.43&9.99&44.9&-5.75&2:0:1
N4551&$0.3\_1.2\_0.8$&0.30&2.23&1.77&1.52&0.14&0.16&9.08&5.77&11.1&-16.4&3:0:2
N4552&$0.7\_1.3\_0.0$&0.70&2.30&0.10&0.93&0.45&1.33&7.23&0.27&20.5&-7.34&4:0:0
N4621&$5.3\_1.7\_0.5$&5.44&2.71&1.43&1.04&0.44&0.59&3.32&5.52&8E-3&-24.6&3:0:0
N4636&$0.6\_1.3\_0.1$&0.55&2.33&0.75&1.34&0.19&0.98&6.58&1.00&16.2&-9.52&1:0:0
N4649&$0.5\_1.3\_0.2$&0.41&2.30&0.82&1.44&0.17&0.92&7.17&1.08&16.5&-10.0&3:0:1
N4874&$0.4\_1.4\_0.1$&0.33&2.37&0.76&1.42&0.17&0.93&5.96&0.98&14.9&-9.45&1:0:1
N4881&$0.6\_1.4\_0.8$&0.53&2.36&1.71&1.50&0.15&0.58&6.09&4.71&1.91&-19.1&3:0:2
N4889&$0.4\_1.3\_0.0$&0.33&2.35&0.36&1.23&0.24&1.07&6.25&0.51&17.9&-8.42&1:0:1
N5813&$0.5\_1.3\_0.1$&0.41&2.33&0.54&1.29&0.22&1.01&6.64&0.73&17.4&-9.06&2:0:1
N5845&$0.8\_2.7\_0.5$&0.74&3.74&1.36&1.24&0.26&0.27&3.24&2.32&3.21&-7.40&8:0:0

[rrrrrrrrrrrrl]{}
N 596&0.67&0.72&0.74&0.72&0.67&0.57&0.49&0.23&0.02&0.01&0.00&0.02
N 720&0.77&0.84&0.93&0.97&0.95&0.85&0.75&0.42&0.18&0.11&0.04&0.02
N1172&1.41&1.36&1.28&1.20&1.11&0.96&0.82&0.45&-0.01&-0.01&-0.01&0.02
N1399&0.67&0.74&0.83&0.86&0.85&0.78&0.70&0.40&0.17&0.09&0.03&0.02
N1400&0.74&0.78&0.86&0.92&0.97&0.98&0.95&0.75&0.13&0.10&0.05&0.02
N1600&0.61&0.64&0.65&0.63&0.62&0.55&0.46&0.20&0.08&0.03&-0.01&0.02
N1700&0.55&0.62&0.72&0.79&0.85&0.89&0.89&0.75&0.14&0.10&0.05&0.03
N2832&0.79&0.84&0.92&0.97&1.00&0.97&0.91&0.65&0.15&0.10&0.04&0.02
N3115&1.10&1.13&1.14&1.13&1.09&1.01&0.92&0.62&0.01&0.01&0.00&0.02
N3377&0.69&0.81&0.96&1.05&1.09&1.06&0.99&0.72&0.11&0.08&0.05&0.02
N3379&0.66&0.76&0.89&0.97&1.00&0.96&0.88&0.60&0.15&0.10&0.05&0.02
N3608&0.61&0.67&0.77&0.83&0.89&0.91&0.90&0.72&0.14&0.10&0.05&0.03
N4168&0.55&0.63&0.73&0.78&0.81&0.79&0.75&0.52&0.12&0.08&0.03&0.02
N4365&0.68&0.81&0.95&1.05&1.11&1.10&1.05&0.81&0.17&0.12&0.06&0.03
N4464&1.20&1.21&1.19&1.15&1.07&0.91&0.78&0.42&0.00&0.00&-0.00&0.02
N4551&1.11&1.16&1.20&1.23&1.23&1.20&1.15&0.87&0.01&0.01&0.01&0.02
N4552&0.78&0.81&0.88&0.94&0.99&1.01&0.98&0.79&0.12&0.09&0.05&0.02
N4621&0.15&0.15&0.15&0.15&0.15&0.14&0.13&0.10&0.00&0.00&-0.00&0.02
N4636&0.67&0.77&0.91&0.99&1.04&1.02&0.97&0.73&0.16&0.12&0.06&0.02
N4649&0.68&0.80&0.95&1.04&1.09&1.07&1.01&0.77&0.16&0.12&0.06&0.02
N4874&0.71&0.81&0.95&1.04&1.08&1.04&0.97&0.69&0.19&0.13&0.06&0.02
N4881&1.05&1.09&1.14&1.15&1.14&1.07&1.00&0.72&0.01&0.01&0.00&0.02
N4889&0.82&0.88&0.98&1.06&1.09&1.05&0.98&0.72&0.19&0.13&0.06&0.02
N5813&0.75&0.83&0.95&1.04&1.08&1.05&0.99&0.74&0.20&0.14&0.06&0.02
N5845&0.74&0.79&0.81&0.75&0.63&0.46&0.35&0.12&0.03&0.02&-0.00&0.02
[^1]: The parameters $\alpha_1,\beta_1,\gamma_1$ in this paper correspond to ${1 \over \alpha}, \beta, \gamma$ respectively in the convention used by these authors. Their models are restricted to surface brightness studies to establish the central cusp in light. [^2]: for finite core models with $\alpha=\alpha_1={1 \over 2}$ and for single-power-law models [^3]: A useful generalization to Osipkov-Merritt models can be obtained by specifying the distribution function $f=f(Q_a=-E-{1 \over 2}\eta J^2)={d \over dQ_a}G_a(Q_a)$, where $G_a(Q_a)=G(Q_a)+\eta g(Q_a)$ is a linear function of $\eta$, and it reduces to $G(Q)$ for the isotropic model. In this case, one can show (with Eq. 4-148 of Binney and Tremaine) that $n(r) (1+\eta r^2) = 2\pi \sqrt{2} \int_0^{\Phi(r)} {G(Q_a)+ \eta g(Q_a) \over \sqrt{\Phi(r) -Q_a}} dQ_a$, and $G(Q_a)$ and $g(Q_a)$ depend on $\eta$ only through $Q_a$.
--- abstract: 'We construct smooth finite element de Rham complexes in two space dimensions. This leads to three families of curl-curl conforming finite elements, two of which contain two existing families. The simplest triangular and rectangular finite elements have only 6 and 8 degrees of freedom, respectively. Numerical experiments for each family demonstrate the convergence and efficiency of the elements for solving the quad-curl problem.' address: - 'School of Mathematics, University of Minnesota, Minneapolis, MN 55455, USA.' - 'Department of Mathematics, Wayne State University, Detroit, MI 48202, USA. ' - 'Beijing Computational Science Research Center, Beijing, China; Department of Mathematics, Wayne State University, Detroit, MI 48202, USA' author: - Kaibo Hu - 'Qian Zhang' - Zhimin Zhang bibliography: - 'quadcurl-2d-reduction.bib' title: 'Simple curl-curl-conforming finite elements in two dimensions' --- [^1] Introduction ============ In this paper, we construct and analyze three families of curl-curl conforming ($H({\operatorname{curl}}^2)$-conforming) finite elements in two space dimensions (2D) and use these elements to solve the quad-curl problem. The quad-curl equation appears in various models, such as the inverse electromagnetic scattering theory [@Cakoni2017A; @Monk2012Finite; @Sun2016A] and magnetohydrodynamics [@Zheng2011A]. The corresponding quad-curl eigenvalue problem plays a fundamental role in the analysis and computation of the electromagnetic interior transmission eigenvalues [@sun2011iterative]. Some methods have been developed for the source problem and the eigenvalue problem in, e.g., [@WZZelement; @Zheng2011A; @Sun2016A; @Qingguo2012A; @Brenner2017Hodge; @quadcurlWG; @Zhang2018M2NA; @Chen2018Analysis164; @Zhang2018Regular162; @SunZ2018Multigrid102; @WangC2019Anew101; @BrennerSC2019Multigrid100; @quad-curl-eig-posterior].
Two of the authors and their collaborator have recently developed, for the first time, a family of curl-curl conforming finite elements [@WZZelement]. To reduce the number of degrees of freedom (DOFs), they used incomplete polynomials. The polynomial degree $k$ starts from 4 for triangular elements and from 3 for rectangular elements, and the lowest order elements of both shapes have 24 DOFs. Moreover, in [@quad-curl-eig-posterior], they collaborated with J. Sun and constructed another family of curl-curl conforming triangular elements with complete polynomials. The polynomial degree $k$ starts from 4 and hence the lowest order element has 30 DOFs. In this paper, in addition to the construction of new $H({\operatorname{curl}}^2)$-conforming elements, we will also fit the two existing families into complexes and extend them to lower-order cases. The discrete de Rham complex is now an important tool for the construction of finite elements and the analysis of numerical schemes, cf. [@arnold2018finite; @arnold2010finite; @arnold2006finite; @hiptmair1999canonical; @neilan2015discrete; @christiansen2018nodal]. In this direction, the finite element periodic table [@arnold2014periodic] includes various successful finite elements for computational electromagnetism and diffusion problems. Motivated by problems in fluid and solid mechanics, there is an increased interest in constructing finite element de Rham complexes with enhanced smoothness, sometimes referred to as Stokes complexes [@falk2013stokes; @christiansen2016generalized].
In this paper, for the discretization of the quad-curl problem, we will consider another variant of the de Rham complex, i.e., $$\begin{aligned} \label{2D:quad-curl} \begin{diagram} 0 & \rTo^{} & \mathbb{R} & \rTo^{\subset} & H^{1}(\Omega) & \rTo^{\nabla} & H({\operatorname{curl}}^2; \Omega) & \rTo^{\nabla\times} & H^{1}(\Omega) & \rTo^{} & 0, \end{diagram}\end{aligned}$$ where $\Omega$ is a bounded Lipschitz domain in $\mathbb{R}^{2}$ and $$H({\operatorname{curl}}^{2}; \Omega):=\{\bm u \in {\bm L}^2(\Omega):\; \nabla \times \bm u \in L^2(\Omega),\;\bm{\nabla \times}\nabla \times \bm u \in \bm L^2(\Omega)\}.$$ For simplicity of presentation, throughout this paper we will assume that $\Omega$ is contractible. Then the exactness of this complex follows from standard results in, e.g., [@arnold2018finite]. This complex point of view makes it possible to achieve the goal of this paper, i.e., constructing simple curl-curl conforming elements with fewer degrees of freedom compared to, e.g., those in [@WZZelement] and [@quad-curl-eig-posterior]. From this complex perspective, we also fit the quad-curl problem and its finite element approximations into the framework of the finite element exterior calculus (FEEC) [@arnold2018finite; @arnold2006finite]. Thus a number of tools from FEEC can be used for the numerical analysis. For example, we construct interpolation operators that commute with the differential operators. Then the convergence result follows from a standard argument. Specifically, the new finite elements fit into a discrete subcomplex of the above de Rham complex: $$\begin{aligned} \label{discrete-complex} \begin{diagram} 0 & \rTo^{} & \mathbb{R} & \rTo^{\subset} & \Sigma_h & \rTo^{\nabla} & V_{h} & \rTo^{\nabla\times} & W_h & \rTo^{} & 0. \end{diagram}\end{aligned}$$ In this discrete complex, we choose Lagrange finite element spaces for $\Sigma_h$ and Lagrange elements enriched with bubbles for $W_h$.
The space $V_h\subset H({\operatorname{curl}}^2;\Omega)$ is thus obtained as the gradient of $\Sigma_h$ plus a complementary part, mapped onto $W_h$ by ${\operatorname{curl}}$. We will use $V_h$ as a conforming finite element for solving the quad-curl problem below. Among the three versions of $V_h$ which we will construct in this paper, the simplest elements have only 6 DOFs for a triangle and 8 DOFs for a rectangle. To the best of our knowledge, these elements have the smallest number of DOFs among all the existing curl-curl conforming finite elements. The significance of this new development is threefold: 1) it provides new families of curl-curl conforming elements; 2) it relates the curl-curl conforming elements to the FEEC via the de Rham complex and thus allows further systematic development of new elements; 3) it reduces the number of DOFs of the existing lowest-order curl-curl conforming elements from 24 to 6 and 8 for triangular and rectangular elements, respectively, which makes commercial adoption of the elements feasible. The remaining part of the paper is organized as follows. In Section 2, we present notations and preliminaries. In Section 3, we define shape functions and local exact sequences by the Poincaré operators and prove their properties. In Section 4, we construct a new family of curl-curl conforming finite elements, and in Section 5 we extend two existing families to lower order cases by fitting them into complexes. In Section 6, we provide numerical examples to verify the correctness and efficiency of our method. Finally, concluding remarks and future work are given in Section 7. Preliminaries ============= Let $\Omega\subset\mathbb{R}^2$ be a contractible Lipschitz domain. We adopt standard notations for Sobolev spaces such as $H^m(D)$ or $H_0^m(D)$ on a simply-connected sub-domain $D\subset\Omega$, equipped with the norm $\left\|\cdot\right\|_{m,D}$ and the semi-norm $\left|\cdot\right|_{m,D}$.
If $m=0$, the space $H^0 (D)$ coincides with $ L^2(D)$ equipped with the norm $\|\cdot\|_{D}$, and when $D=\Omega$, we drop the subscript $D$. We use $\bm H^m(D)$ and ${\bm L}^2(D)$ to denote the vector-valued Sobolev spaces $\left[H^m(D)\right]^2$ and $\left[L^2(D)\right]^2$. Let ${\bm u}=(u_1, u_2)^T$ and ${\bm w}=(w_1, w_2)^T$, where the superscript $T$ denotes the transpose. Then ${\bm u} \times {\bm w} = u_1 w_2 - u_2 w_1$ and $\nabla \times {\bm u} = \partial_{x_1} u_2 - \partial_{x_2} u_1 $. For a scalar function $v$, $\bm{\nabla \times} v = (\partial_{x_2} v , - \partial_{x_1} v)^T$. We denote $(\nabla\times)^2\bm u=\bm\nabla\times\nabla\times\bm u$. We define $$\begin{aligned} &H(\text{curl};D):=\{\bm u \in {\bm L}^2(D):\; \nabla \times \bm u \in L^2(D)\},\\ H(\text{curl}^2;D)&:=\{\bm u \in {\bm L}^2(D):\; \nabla \times \bm u \in L^2(D),\;\bm{\nabla \times}\nabla \times \bm u \in \bm L^2(D)\},\end{aligned}$$ with the scalar products and norms $$(\bm u,\bm v)_{H({\operatorname{curl}}^s;D)}=(\bm u,\bm v)+\sum_{j=1}^s((\nabla\times)^j\bm u,(\nabla\times)^j \bm v),$$ and $$\left\|\bm u\right\|_{H({\operatorname{curl}}^s;D)}=\sqrt{(\bm u,\bm u)_{H({\operatorname{curl}}^s;D)}},$$ with $s=1,2$. We use $Q_{i,j}(D)$ to denote the space of polynomials in the two variables $(x_1, x_2)$ of maximal degree $i$ in $x_1$ and $j$ in $x_2$. For simplicity, we drop the subscript $i$ when $i=j$. We use $P_i(D)$ to represent the space of polynomials on $D$ of degree no larger than $i$, and $\bm P_i(D)=\left[P_i(D)\right]^2$. We denote by $\widetilde P_i(D)$ the space of homogeneous polynomials of degree $i$. Let $\mathcal{T}_h\,$ be a partition of the domain $\Omega$ consisting of rectangles or triangles. We denote by $h_K$ the diameter of an element $K \in \mathcal{T}_h$ and by $h$ the mesh size of $\mathcal {T}_h$. We use $C$ to denote a generic positive $h$-independent constant.
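The two curl operators just defined can be checked symbolically. The short sympy sketch below (helper names are ours) verifies the sign conventions and the identity $\nabla\times\nabla v = 0$ that underlies the complexes of the following sections:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def grad(v):
    return (sp.diff(v, x1), sp.diff(v, x2))

def curl_vec(u):
    # scalar curl of a vector field: d u2 / d x1 - d u1 / d x2
    return sp.diff(u[1], x1) - sp.diff(u[0], x2)

def curl_scal(v):
    # vector curl of a scalar: (d v / d x2, -d v / d x1)
    return (sp.diff(v, x2), -sp.diff(v, x1))

v = sp.sin(x1) * x2**3
assert sp.simplify(curl_vec(grad(v))) == 0      # curl grad = 0

u = (x1 * x2, sp.cos(x2) + x1**2)
print(curl_scal(curl_vec(u)))                   # (curl)^2 u, as in H(curl^2)
```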
Let $\mathfrak{p}: C^{\infty}(\mathbb{R}^{2})\mapsto \left [C^{\infty}(\mathbb{R}^{2})\right ]^{2}$ be an operator which maps a scalar function to a vector field: $$\mathfrak{p} u:=\int_{0}^{1}t \bm{x}^{\perp}u(t\bm x)\, dt,$$ where $$\bm{x}:=(x_{1}, x_{2}), \text{ and } \bm{x}^{\perp}:=(-x_{2}, x_{1}).$$ As a special case of the Poincaré operators (cf. [@hiptmair1999canonical; @christiansen2016generalized]), $\mathfrak{p}$ has the following properties: - the null-homotopy identity $$\label{null-homotopy} \nabla\times \mathfrak{p}u=u,~ \forall u\in C^{\infty}(\mathbb R^2);$$ - the polynomial preserving property: if $u\in {P}_{r}(\mathbb{R}^{2})$, then $\mathfrak{p}u\in \bm {P}_{r+1}(\mathbb{R}^{2})$. We review some basic facts from homological algebra; further details can be found, for instance, in [@arnold2006finite]. A differential complex is a sequence of spaces $V^{i}$ and operators $d^{i}$ such that $$\begin{aligned} \label{general-complex} \begin{diagram} 0 & \rTo & V^{1} & \rTo^{d^{1}} &V^{2}&\rTo^{d^{2}} & \cdots & \rTo^{d^{n-1}} &V^{n} & \rTo^{d^n} & 0, \end{diagram}\end{aligned}$$ satisfying the complex property $d^{i+1}d^{i}=0$ for $i=1, 2, \cdots, n-1$. Let $\ker(d^{i})$ be the kernel space of the operator $d^{i}$ in $V^{i}$, and ${\operatorname{ran}}(d^{i})$ be the image of the operator $d^{i}$ in $V^{i+1}$. Due to the complex property, we have ${\operatorname{ran}}(d^{i-1})\subset \ker(d^{i})$ for each $i\geq 2$. Furthermore, if $\ker(d^{i})= {\operatorname{ran}}(d^{i-1})$, we say that the complex is exact at $V^{i}$. At the two ends of the sequence, the complex is exact at $V^{1}$ if $d^{1}$ is injective (with trivial kernel), and is exact at $V^{n}$ if $d^{n}$ is surjective (with trivial cokernel). The complex is called exact if it is exact at all the spaces $V^{i}$.
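The null-homotopy identity can also be verified symbolically. The following sympy sketch (helper names ours) implements $\mathfrak{p}$ and checks $\nabla\times\mathfrak{p}u=u$ for a sample polynomial, illustrating the polynomial-preserving property as well:

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')

def poincare_p(u):
    """p u = int_0^1 t * x_perp * u(t x) dt, with x_perp = (-x2, x1)."""
    ut = u.subs({x1: t * x1, x2: t * x2}, simultaneous=True)
    return (sp.integrate(-t * x2 * ut, (t, 0, 1)),
            sp.integrate(t * x1 * ut, (t, 0, 1)))

def curl2d(w):
    # scalar curl in 2D: d w2 / d x1 - d w1 / d x2
    return sp.diff(w[1], x1) - sp.diff(w[0], x2)

u = x1**2 * x2 + 3 * x2**2                  # u in P_3
pu = poincare_p(u)                          # p u lands in (P_4)^2
assert sp.simplify(curl2d(pu) - u) == 0     # null-homotopy: curl(p u) = u
print(pu)
```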
If each space in is finite-dimensional, then a necessary (but not sufficient) condition for the exactness of is the following dimension condition: $$\sum_{i=1}^{n} (-1)^{i}\dim (V^{i})=0.$$ Local spaces and polynomial complexes ===================================== To define a finite element space, we must supply, for each element $K\in\mathcal{T}_h$, the space of shape functions and the DOFs. We will use the following complex as the local function spaces on each $K\in \mathcal{T}_h$: $$\begin{aligned} \label{local-complex} \begin{diagram} 0 & \rTo^{} & \mathbb{R} & \rTo^{\subset} & \Sigma^{r}_h(K) & \rTo^{\nabla} &V^{r-1, k}_{h}(K) & \rTo^{\nabla\times} & W_h^{k-1}(K) & \rTo^{} & 0. \end{diagram}\end{aligned}$$ Let $\Sigma^{r}_h(K)$ be $P_{r}(K)$ for a triangular element or $Q_{r}(K)$ for a rectangular element. For a triangular element $K$, we set $$W_h^{k-1}(K)= \begin{cases} P_{k-1}(K),& k\geq 4,\\ P_{k-1}(K)\oplus {\operatorname{span}}\{B_t\},& k=2,3, \end{cases}$$ where $B_t=\lambda_1\lambda_2\lambda_3$ with the barycentric coordinates $\lambda_i$. For a rectangular element $K$, we set $$W_h^{k-1}(K)= \begin{cases} Q_{k-1}(K),& k\geq 3,\\ Q_{k-1}(K)\oplus {\operatorname{span}}\{B_r\},& k=2, \end{cases}$$ where $B_r=h_x^{-2}h_y^{-2}\left(x-x_l\right)\left(x-x_r\right)\left(y-y_d\right)\left(y-y_u\right)$ with the element $K=(x_l,x_r)\times(y_d,y_u)$ and $h_x=x_r-x_l$, $h_y=y_u-y_d$. We define $$\begin{aligned} \label{Vh2} V_h^{r-1, k}(K)=\nabla \Sigma^{r}_h(K)\oplus \mathfrak{p}W^{k-1}_h(K).\end{aligned}$$ By the null-homotopy identity , the right-hand side of is indeed a direct sum. The local sequence is an exact complex. Since $V_h^{r-1, k}(K)=\nabla \Sigma_h^{r}(K)+\mathfrak p W_h^{k-1}(K)$ and by the null-homotopy identity , we have $\nabla \Sigma^{r}_h(K)\subseteq V^{r-1, k}_h(K)$ and $\nabla\times V^{r-1, k}_h(K)= W_h^{k-1}(K)$. This shows that the local sequence is a complex. It remains to show the exactness.
We first show that, for any $\bm v_h\in V^{r-1, k}_h(K)$ with $\nabla\times\bm v_h=0$, there exists a $p_h\in \Sigma^{r}_h(K)$ such that $\bm v_h=\nabla p_h.$ Since $\bm v_h\in V^{r-1, k}_h(K)$, we can write $\bm v_h=\nabla p_h+\mathfrak p w_h$ with $p_h\in \Sigma^{r}_h(K)$ and $w_h\in W^{k-1}_h(K)$. By the null-homotopy identity again, $0=\nabla\times\bm v_h=w_h$. Therefore, $\bm v_h=\nabla p_h.$ Moreover, the curl operator $\nabla\times: V^{r-1, k}_h(K) \to W^{k-1}_h(K)$ is surjective since $\nabla\times V^{r-1, k}_h(K)=W^{k-1}_h(K)$. In the following lemma, we show that $V^{r-1, k}_h(K)$ contains a full polynomial subspace. \[Vh\] Suppose that $r\leq k+1$. Then $\bm P_{r-1}(K)\subseteq V^{r-1, k}_h(K)$. We claim that $$\begin{aligned} \label{dcmp_Pr} \bm P_{r-1}(K)=\nabla P_{r}(K)\oplus \mathfrak{p} P_{r-2}(K). \end{aligned}$$ Indeed, by the polynomial-preserving property of $\mathfrak p$, $\nabla P_{r}(K)\oplus\mathfrak{p} P_{r-2}(K)\subseteq \bm P_{r-1}(K)$. To show , it therefore suffices to show $$\dim \nabla P_{r}(K)\oplus\mathfrak{p} P_{r-2}(K)=\dim \bm P_{r-1}(K).$$ By the null-homotopy identity , the right-hand side of is a direct sum. Therefore, $$\dim \nabla P_{r}(K)\oplus\mathfrak{p} P_{r-2}(K)=\dim \nabla P_{r}(K)+\dim P_{r-2}(K),$$ which is exactly the dimension of $\bm P_{r-1}(K)$. Combining with the fact that $P_{r-2}(K)\subseteq W^{k-1}_h(K)$ (which holds since $r\leq k+1$), we get $\bm P_{r-1}(K)\subseteq \nabla P_r(K)\oplus \mathfrak{p}W^{k-1}_h(K)=V^{r-1, k}_h(K)$. In the following sections, we will take different values of $r$ to obtain various families of curl-curl conforming finite elements and complexes. In Section \[sec:new\], we take $r=k-1$, which leads to a new family of simple elements. In Section \[sec:existing\], we introduce the other two families of elements by taking $r=k$ and $r=k+1$, respectively.
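The dimension count underlying the proof of the lemma is elementary: $\dim \nabla P_r = \dim P_r - 1$ (gradients annihilate constants), and the direct sum makes the dimensions add. A quick sketch (ours, not from the paper) confirms the identity for a range of degrees.

```python
def dim_P(r):
    """Dimension of P_r, scalar polynomials of total degree <= r in 2D."""
    return (r + 1) * (r + 2) // 2 if r >= 0 else 0

for r in range(1, 20):
    dim_grad = dim_P(r) - 1          # gradients kill constants
    lhs = dim_grad + dim_P(r - 2)    # direct sum: dimensions add
    rhs = 2 * dim_P(r - 1)           # dim of [P_{r-1}]^2
    assert lhs == rhs
print("dim(grad P_r) + dim(P_{r-2}) = dim([P_{r-1}]^2) for r = 1..19")
```

Algebraically, $\left(\tfrac{(r+1)(r+2)}{2}-1\right)+\tfrac{(r-1)r}{2}=r(r+1)=2\cdot\tfrac{r(r+1)}{2}$, matching the claim in the proof.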
A new family of curl-curl conforming elements ($r=k-1$) {#sec:new} ========================================================== In this section, we construct a new family of curl-curl conforming elements $V_{h}^{k-2, k}$ by specifying $r=k-1$ in , i.e., $$\begin{aligned} \label{discrete-complex-k-1} \begin{diagram} 0 & \rTo^{} & \mathbb{R} & \rTo^{\subset} & \Sigma_h^{k-1} & \rTo^{\nabla} & V_{h}^{k-2, k} & \rTo^{\nabla\times} & W_h^{k-1} & \rTo^{} & 0. \end{diagram}\end{aligned}$$ For simplicity of presentation, we focus on the triangular elements and only mention the rectangular elements in Remark \[rmk:rectangular\] below. Degrees of freedom and global finite element spaces {#sec:dofs} --------------------------------------------------- We define DOFs for the spaces in . The DOFs for the Lagrange element $\Sigma^{r}_{h}$ can be given as follows. - Vertex DOFs $ M_{v}({u})$ at all the vertices $v_{i}$ of $K$: $$M_{v}(u)=\left\{u\left({v}_{i}\right) \text{ for all vertices $v_i$}\right\}.$$ - Edge DOFs $M_{e}(u)$ on all the edges $e_{i}$ of $K$: $$\begin{aligned} M_{e}(u)=\left\{\int_{e_i} u v \mathrm{d} s\text { for all } v \in P_{r-2}(e_i) \text { and for all edges }e_i\right\}.\end{aligned}$$ - Interior DOFs $M_{K}(u)$: $$M_{K}(u)=\left\{ \int_K u v \mathrm{d} A \text{ for all } v \in P_{r-3}(K) \text{ (triangles) or } v\in Q_{r-2}(K) \text{ (rectangles)} \right\}.$$ For $ u \in H^{1+\delta}(\Omega)$ with $\delta >0$, we can define an $H^1$ interpolation operator $\pi_h$ by the above DOFs. The restriction of $\pi_h$ to $K$ is denoted by $\pi_K$ and defined by $$\begin{aligned} \label{def-inte-H1} M_v( u-\pi_K u)=\{0\},\ M_e(u-\pi_Ku)=\{0\},\ \text{and}\ M_K( u-\pi_K u)=\{0\}.\end{aligned}$$ The DOFs for $W_{h}^{k-1}$ can be given similarly, with one additional interior integration DOF on $K$ accounting for the interior bubble when it is present. We denote by $\tilde\pi_h$ the $H^1$ interpolation operator onto $W_h^{k-1}$ defined by these DOFs.
For the shape function space $V_h^{k-2, k}(K):=\nabla P_{k-1}(K)\oplus \mathfrak p W^{k-1}_h(K)$ of the triangular elements, we define the following DOFs: - Vertex DOFs $\bm M_{ {v}}({\bm u})$ at all the vertices $ {v}_{i}$ of $K$: $$\label{tridef1-1} \bm M_{ {v}}({\bm u})=\left\{( \nabla\times {\bm u})( v_{i}),\; i=1,\;2,\;3\right\}.$$ - Edge DOFs $\bm M_{ {e}}( {\bm u})$ at all the edges $ {e}_i$ of $ {K}$ (with the unit tangential vector $ {\bm \tau}_i$): $$\begin{aligned} \bm M_{ {e}}( {\bm u})=&\left\{\int_{e_i} {\bm u}\cdot {\bm \tau}_i {q}\d {s},\ \forall {q}\in P_{k-2}( {e}_i), i=1,2,3\right\}\nonumber\\ \cup& \left\{\int_{e_i}\nabla\times{\bm u}q\d s,\ \forall {q}\in P_{k-3}( {e}_i), i=1,2,3\right\}.\label{tridef1-2} \end{aligned}$$ - Interior DOFs $\bm M_{ {K}}( {\bm u})$: $$\begin{aligned} \label{tridef1-3} &\bm M_{ {K}}( {\bm u})=\left\{\int_{ {K}} {\bm u}\cdot {\bm q}\,\d A,\ \forall{{\bm q}}\in \mathcal{D} \right\}, \end{aligned}$$ where $\mathcal{D}=\bm P_{k-5}( K)\oplus\widetilde{P}_{k-5} {{\bm x}}\oplus\widetilde{P}_{k-4} {{\bm x}}$ with $ {{\bm x}}=( x_1,\; x_2)^T$ when $k\geq 5$; $\mathcal{D}={P}_{0} {{\bm x}}$ when $k=4$; $\mathcal{D}=\emptyset$ when $k=2,3$.
[Figure: DOF diagrams of the lowest-order complex $\Sigma_h^1 \stackrel{\nabla}{\longrightarrow} V_h^{0,2} \stackrel{\nabla\times}{\longrightarrow} W_h^1$ on a triangle and on a rectangle; the original picture environment is not reproducible in this extraction.]

\[well-defined-conditions\] The DOFs - are well-defined for any ${\bm u}\in \bm H^{1/2+\delta}({K})$ with ${\nabla}\times{\bm u}\in H^{1+\delta}({K})$, $\delta>0$. The proof of this lemma is the same as that of Lemma 3.4 in [@WZZelement]; we omit it here. The DOFs for $V^{k-2, k}_{h}(K)$ are unisolvent. The decomposition is a direct sum.
Therefore $\dim V^{k-2, k}_{h}(K) =\dim \nabla \Sigma^{k-1}_h(K)+\dim W^{k-1}_h(K)={k(k+1)-1}$ when $k\geq 4$ and $\dim V^{k-2, k}_{h}(K)=k(k+1)$ when $k=2,3$. A direct count shows that the number of DOFs - equals this dimension. It then suffices to show that if all the DOFs of a function $\bm{u}$ vanish, then $\bm{u}=0$. To see this, we first observe that $\nabla\times \bm{u}=0$ by the unisolvence of the DOFs of $W^{k-1}_h(K)$. Then $\bm{u}=\nabla\phi\in \bm P_{k-2}(K)$ for some $\phi\in \Sigma^{k-1}_h(K)$. By the edge DOFs of $V^{k-2, k}_{h}(K)$, $\bm u\cdot \bm {\tau}=0$ on all edges. Then there exists some $\psi \in P_{k-4}(K)$ such that $\phi=\lambda_{1}\lambda_{2}\lambda_{3}\psi$. Choosing $\bm{q}\in P_{k-4}(K)\bm x$ with $\nabla\cdot\bm q=\psi$ (possible since $\nabla\cdot$ maps $\widetilde P_{m}(K)\bm x$ onto $\widetilde P_{m}(K)$ for each $m$), we have by the interior DOFs (the boundary term vanishes since $\phi|_{\partial K}=0$): $$0=\left(\bm u,\bm q\right)=\left(\nabla \phi,\bm q\right)=-\left(\phi,\nabla\cdot\bm q\right)=-\left(\lambda_{1}\lambda_{2}\lambda_{3}{\psi},{\psi}\right).$$ This implies that $\psi=0$ and hence $\phi=0$ and $\bm u=0$. Provided $\bm u \in \bm H^{1/2+\delta}(\Omega)$ and $ \nabla \times \bm u \in H^{1+\delta}(\Omega)$ with $\delta >0$ (see Lemma \[well-defined-conditions\]), we can define an $H({\operatorname{curl}}^2)$ interpolation operator $\Pi_h$ whose restriction to $K$ is denoted by $\Pi_K$ and defined by $$\begin{aligned} \label{def-inte-01} \bm M_v(\bm u-\Pi_K\bm u)=\{0\},\ \bm M_e(\bm u-\Pi_K\bm u)=\{0\},\ \text{and}\ \bm M_K(\bm u-\Pi_K\bm u)=\{0\},\end{aligned}$$ where $\bm M_v,\ \bm M_e$ and $\bm M_K$ are the sets of DOFs in -. Gluing the local spaces by the above DOFs, we obtain the global finite element spaces $\Sigma_h^{k-1}$, $V_h^{k-2, k}$ and $W_h^{k-1}$. The conformity holds: $$V^{k-2, k}_{h}\subset H({\operatorname{curl}}^2; \Omega).$$ This follows immediately since the edge DOFs enforce tangential continuity and $\nabla\times V^{k-2, k}_{h}\subseteq W^{k-1}_h\subset H^{1}(\Omega)$.
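The matching of DOF counts against $\dim V_h^{k-2,k}(K)$ used in the unisolvence argument can be tabulated explicitly. The following sketch (our own bookkeeping, not code from the paper) checks, for the triangular elements, that the vertex, edge, and interior DOFs add up to $k(k+1)-1$ for $k\geq 4$ and $k(k+1)$ for $k=2,3$.

```python
def dim_P(r):
    """Dimension of P_r (total degree <= r) in 2D."""
    return (r + 1) * (r + 2) // 2 if r >= 0 else 0

def dim_homog(r):
    """Dimension of homogeneous polynomials of degree exactly r in 2D."""
    return r + 1 if r >= 0 else 0

def dim_edge_poly(r):
    """Dimension of P_r on an edge (univariate)."""
    return r + 1 if r >= 0 else 0

def dim_V(k):
    """dim V_h^{k-2,k}(K) = dim(grad P_{k-1}) + dim(W_h^{k-1}) on a triangle."""
    dim_W = dim_P(k - 1) + (1 if k in (2, 3) else 0)  # bubble for k = 2, 3
    return (dim_P(k - 1) - 1) + dim_W

def num_dofs(k):
    vertex = 3                                   # curl u at the 3 vertices
    edge = 3 * (dim_edge_poly(k - 2) + dim_edge_poly(k - 3))
    if k >= 5:   # D = P_{k-5}^2 + tilde-P_{k-5} x + tilde-P_{k-4} x
        interior = 2 * dim_P(k - 5) + dim_homog(k - 5) + dim_homog(k - 4)
    elif k == 4:
        interior = 1                             # D = P_0 * x
    else:
        interior = 0                             # D empty for k = 2, 3
    return vertex + edge + interior

for k in range(2, 12):
    assert num_dofs(k) == dim_V(k)
    assert dim_V(k) == (k * (k + 1) if k in (2, 3) else k * (k + 1) - 1)
print("DOF count matches dim V_h^{k-2,k}(K) on triangles for k = 2..11")
```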
Global finite element complexes for the quad-curl problem --------------------------------------------------------- The global finite element spaces form a discrete complex, which we now show to be exact on contractible domains. The complex is exact on contractible domains. We first show the exactness at $V^{k-2, k}_h$. To this end, we show that for any $\bm v_h\in V^{k-2, k}_h\subset H({\operatorname{curl}}^2;\Omega)$ satisfying $\nabla\times\bm v_h=0$, there exists $p\in \Sigma^{k-1}_h$ such that $\bm v_h=\nabla p$. This follows from the exactness of the standard finite element differential forms (e.g., [@arnold2018finite]) and the fact that the curl-free part of $V^{k-2, k}_h$ is a subspace of the Nédélec space of the second kind of degree $k-2$. To prove the exactness at $W^{k-1}_h$, that is, that the operator $\nabla\times: V^{k-2, k}_h\to W^{k-1}_h$ is surjective, we count dimensions. The dimension count of the Lagrange elements reads: $$\dim \Sigma^{k-1}_h=\mathcal V+(k-2)\mathcal E+\frac{1}{2}(k-3)(k-2)\mathcal F,$$ where $\mathcal V$, $\mathcal E$, and $\mathcal F$ denote the number of vertices, edges, and 2D cells, respectively. Moreover, $\dim W^{k-1}_h= \dim \Sigma^{k-1}_h$ for $k \geq 4$ and $\dim W^{k-1}_h= \dim \Sigma^{k-1}_h+\mathcal F$ for $k=2,3$. From the DOFs -, $$\begin{aligned} \dim V^{k-2, k}_h&=\mathcal V+(2k-3)\mathcal E+(k^2-5k+5)\mathcal F &&\text{ for } k\geq 4,\\ \dim V^{k-2, k}_h&=\mathcal V+(2k-3)\mathcal E &&\text{ for } k =2,3.\end{aligned}$$ From the above dimension count, we have $$\dim V^{k-2, k}_h=\dim W^{k-1}_h+\dim \Sigma^{k-1}_h-1,$$ where we have used Euler’s formula $\mathcal V-\mathcal E+\mathcal F=1$. Since the kernel of $\nabla\times$ on $V^{k-2, k}_h$ is $\nabla\Sigma^{k-1}_h$, of dimension $\dim\Sigma^{k-1}_h-1$, the image of $\nabla\times$ has dimension $\dim V^{k-2, k}_h-\dim\Sigma^{k-1}_h+1=\dim W^{k-1}_h$, which proves the surjectivity. This completes the proof. We summarize the interpolations defined in Section \[sec:dofs\] in the following diagram.
$$\label{2complex} \begin{tikzcd} 0 \arrow[r] & \mathbb{R} \arrow[r,"\subset"] & H^1(\Omega) \arrow[d ]\arrow[r,"\nabla"] & H({\operatorname{curl}}^2;\Omega)\arrow[r,"\nabla\times"]\arrow[d ] & H^1(\Omega)\arrow[r]\arrow[d ]& 0\\ 0 \arrow[r] & \mathbb{R} \arrow[r,"\subset"] & W \arrow[d, "\pi_h" ]\arrow[r,"\nabla"] & V\arrow[r,"\nabla\times"]\arrow[d, "\Pi_h" ] &W\arrow[r]\arrow[d, "\tilde{\pi}_h" ]& 0\\ 0 \arrow[r] & \mathbb{R} \arrow[r,"\subset"] & \Sigma^{k-1}_h\arrow[r,"\nabla"] & V^{k-2, k}_h\arrow[r,"\nabla\times"] & W^{k-1}_h\arrow[r]& 0, \end{tikzcd}$$ where $W$ and $V$ denote the subspaces of $H^1(\Omega)$ and $H({\operatorname{curl}}^2;\Omega)$ on which $\pi_h$ (resp. $\tilde\pi_h$) and $\Pi_h$ are well-defined. Now we show that the interpolations in commute with the differential operators. This result plays a key role in the error analysis below for discretizing the quad-curl problem. \[commute\] The last two rows of the complex form a commuting diagram, i.e., $$\begin{aligned} \nabla\pi_h u&=\Pi_h\nabla u \text{ for all } u\in W,\label{Pih_and_pih}\\ \nabla\times\Pi_h \bm u&=\tilde\pi_h\nabla\times\bm u \text{ for all } \bm u\in V.\label{Pih_and_tildepih}\end{aligned}$$ We only prove ; a similar argument proves . From the diagram , we know that both $\Pi_h\nabla u$ and $\nabla\pi_h u$ are in the space $V_h^{k-2,k}$. It suffices to prove that the DOFs - for $\Pi_h\nabla u$ and $\nabla\pi_h u$ agree element by element.
For a given element $K$ with a vertex $v_i$, we first have $$\nabla\times\big(\Pi_h\nabla u-\nabla \pi_h u\big)(v_i)=\nabla\times\big(\nabla u-\nabla \pi_h u\big)(v_i)=0.$$ On an edge $e_i$ with a tangent vector $\bm \tau_i$ and two vertices $v_1$ and $v_2$, for any $q\in P_{k-2}(e_i)$, we derive $$\begin{aligned} &\int_{e_i}\big(\Pi_h\nabla u-\nabla\pi_h u\big)\cdot\bm \tau_i q\d s =\int_{e_i}\big(\nabla u-\nabla\pi_h u\big)\cdot\bm \tau_i q\d s\\ =q(v_2)&(u-\pi_h u)(v_2)-q(v_1)(u-\pi_h u)(v_1)-\int_{e_i}\big( u-\pi_h u\big)\frac{\partial q}{\partial \bm \tau_i} \d s = 0. \end{aligned}$$ Here we used integration by parts and the definition of the interpolations. By the definition of $\Pi_h$, we have $$\begin{aligned} \int_{e_i}\nabla\times\big(\Pi_h\nabla u-\nabla\pi_h u\big) q\d s = 0. \end{aligned}$$ For the interior DOFs, we see that for any $\bm q\in \mathcal D\subseteq\bm P_{k-3}(K)$, $$\begin{aligned} &\int_K \big(\Pi_h\nabla u-\nabla\pi_h u\big)\cdot\bm q\d A=\int_K \big(\nabla u-\nabla\pi_h u\big)\cdot\bm q\d A\\ =&-\int_K \big( u-\pi_h u\big)\nabla\cdot\bm q\d A+\int_{\partial K} \big( u-\pi_h u\big)\bm q\cdot \bm n\d s=0. \end{aligned}$$ This completes the proof. If $\bm u\in \bm H^{s-1}(\Omega)$ and $\nabla\times\bm u\in H^s(\Omega)$, $ 1+\delta\leq s \leq k$ with $\delta>0$, then we have the following error estimates for the interpolation $\Pi_h$, $$\begin{aligned} &\left\|\bm u-\Pi_h\bm u\right\|\leq Ch^{s-1}(\left\|\bm u\right\|_{s-1}+\left\|\nabla\times\bm u\right\|_{s}),\label{inter-u}\\ &\left\|\nabla\times(\bm u-\Pi_h\bm u)\right\|\leq Ch^s\left\|\nabla\times\bm u\right\|_{s},\label{inter-curlu}\\ & \left\|(\nabla\times)^2(\bm u-\Pi_h\bm u)\right\|\leq Ch^{s-1}\left\|\nabla\times\bm u\right\|_{s}. \end{aligned}$$ From Lemma \[Vh\], $\bm P_{k-2}(K)\subseteq V^{k-2, k}_h(K)$ and $\bm P_{k-1}(K)\subseteq W^{k-1}_h(K)$. By an argument similar to the proof of Theorem 3.11 in [@WZZelement], together with Lemma , we complete the proof.
Here, we only provide the approximation property for the interpolation $\Pi_h\bm u$. Since $V_h^{k-2,k}$ is a conforming finite element space, the approximation property of the numerical solution $\bm u_h$ follows immediately from Céa’s lemma; the same holds for the other two families. \[rmk:rectangular\] Similarly, we can obtain a family of rectangular elements. The DOFs for $\bm{u}\in V^{k-2, k}_{h}(K)=\nabla Q_{k-1}(K)\oplus\mathfrak p W^{k-1}_h(K)$ are given by the following. - Vertex DOFs $\bm M_{ {v}}({\bm u})$ at all the vertices $ {v}_{i}$ of $K$: $$\bm M_{ {v}}({\bm u})=\left\{( \nabla\times {\bm u})( v_{i}),\; i=1,\;2,\cdots,4\right\}.$$ - Edge DOFs $\bm M_{ {e}}( {\bm u})$ at all the edges $ {e}_i$ of $ {K}$ (with the unit tangential vector $ {\bm \tau}_i$): $$\begin{aligned} \bm M_{ {e}}( {\bm u})=&\left\{\int_{e_i} {\bm u}\cdot {\bm \tau}_i {q}\d {s},\ \forall {q}\in P_{k-2}( {e}_i), i=1,2,\cdots,4\right\}\\ \cup& \left\{\int_{e_i}\nabla\times{\bm u}q\d s,\ \forall {q}\in P_{k-3}( {e}_i), i=1,2,\cdots,4\right\}. \end{aligned}$$ - Interior DOFs $\bm M_{ {K}}( {\bm u})$: $$\begin{aligned} &\bm M_{ {K}}( {\bm u})=\left\{\int_{ {K}} {\bm u}\cdot {\bm q}\,\d A,\ \forall {\bm q} \in \mathcal{S}_1\oplus \mathcal{S}_2 \right\}, \end{aligned}$$ where $\mathcal{S}_1=\big\{ {\bm q}\ |\ {\bm q}= \psi {{\bm x}},\ \forall \psi\in Q_{k-3}(K)\big\}\ \text{and}\ \mathcal{S}_2=\big\{ {\bm q}\ |\ {\bm q}= \bm{\nabla}\times {\varphi},\ \forall {\varphi}\in {Q}_{k-3}(K)\slash{\mathbb{R}}\big\}$ when $k\geq 3$; $\mathcal{S}_1=\mathcal{S}_2=\emptyset$ when $k=2$. The same theoretical results as for the triangular elements can be obtained by a similar argument.
Two families of curl-curl conforming elements with $r=k$ and $r=k+1$ {#sec:existing} ======================================================================== The curl-curl conforming elements introduced in [@WZZelement; @quad-curl-eig-posterior] are restricted to high-order cases, i.e., $k\geq 4$ for triangular elements and $k\geq 3$ for rectangular elements in [@WZZelement], and $k\geq 4$ for the triangular elements in [@quad-curl-eig-posterior]. Rectangular elements are missing in [@quad-curl-eig-posterior]. In this section, we construct two families of curl-curl conforming elements by setting $r=k$ and $r=k+1$ with $k\geq 2$. The two families contain the elements in [@WZZelement; @quad-curl-eig-posterior] as special cases. Similar properties to those in [@WZZelement; @quad-curl-eig-posterior] hold for the generalizations below. For brevity, we only present the definitions and the approximation properties of the $V_h$ spaces. A family of curl-curl conforming elements with $r=k$ ------------------------------------------------------------ By taking $r=k$, we obtain another family of finite element complexes, i.e., $$\begin{aligned} \label{discrete-complex-k} \begin{diagram} 0 & \rTo^{} & \mathbb{R} & \rTo^{\subset} & \Sigma_h^{k} & \rTo^{\nabla} & V_{h}^{k-1, k} & \rTo^{\nabla\times} & W_h^{k-1} & \rTo^{} & 0. \end{diagram}\end{aligned}$$ Recall that $\Sigma_h^{k}$ is the Lagrange finite element space of order $k$, and $V^{k-1, k}_h(K)=\nabla P_k(K)\oplus \mathfrak p W^{k-1}_h(K)$ or $V^{k-1, k}_h(K)=\nabla Q_k(K)\oplus \mathfrak p W^{k-1}_h(K)$ with $k\geq 2$. By Lemma \[Vh\], $V^{k-1, k}_h(K)$ contains $\bm P_{k-1}(K)$.
More precisely, $$\begin{aligned} V^{k-1, k}_h(K)&=\mathcal{R}_k\triangleq\bm P_{k-1}(K)\oplus \big\{\bm u\in\widetilde {\bm P}_k(K)\ \big|\ \bm u\cdot \bm x =0\big\} &&\text{when } k\geq 4 \text{ and } K \text{ is a triangle},\\ V^{k-1, k}_h(K)&=Q_{k-1,k}(K)\times Q_{k,k-1}(K) &&\text{when } k\geq 3 \text{ and } K \text{ is a rectangle}, \end{aligned}$$ which can be proved by an argument similar to that for Lemma \[Vh\]. For the triangular elements with $k\geq 4$ (rectangular elements with $k\geq 3$), $V^{k-1, k}_h$ coincides with the curl-curl conforming elements in [@WZZelement]. Here we extend these finite elements to lower order by allowing $k=2$ or $3$. The sequence of the lowest-order case is shown in Fig. \[fig:firstfamily\]. These elements have 9 DOFs on a triangle and 13 DOFs on a rectangle. ### Triangular elements We define the following DOFs for $V^{k-1, k}_h(K)=\nabla P_k(K)\oplus \mathfrak p W^{k-1}_h(K)$. - Vertex DOFs $\bm M_{ {v}}({\bm u})$ at all the vertices $ {v}_{i}$ of $K$: $$\bm M_{ {v}}({\bm u})=\left\{( \nabla\times {\bm u})( v_{i}),\; i=1,\;2,\;3\right\}.$$ - Edge DOFs $\bm M_{ {e}}( {\bm u})$ at all the edges $ {e}_i$ of $ {K}$ (with the unit tangential vector $ {\bm \tau}_i$): $$\begin{aligned} \bm M_{ {e}}( {\bm u})=&\left\{\int_{e_i} {\bm u}\cdot {\bm \tau}_i {q}\d {s},\ \forall {q}\in P_{k-1}( {e}_i), i=1,2,3\right\}\\ \cup& \left\{\int_{e_i}\nabla\times{\bm u}q\d s,\ \forall {q}\in P_{k-3}( {e}_i), i=1,2,3\right\}. \end{aligned}$$ - Interior DOFs $\bm M_{ {K}}( {\bm u})$: $$\begin{aligned} &\bm M_{ {K}}( {\bm u})=\left\{\int_{ {K}} {\bm u}\cdot {\bm q}\,\d A,\ \forall \bm q \in \mathcal{D} \right\}, \end{aligned}$$ where $\mathcal{D}=\bm P_{k-5}( K)\oplus\widetilde{P}_{k-5} {\bm x}\oplus\widetilde{P}_{k-4} {\bm x}\oplus \widetilde{P}_{k-3} {{\bm x}}$ when $k\geq 5$; $\mathcal{D}={P}_{k-3} {\bm x}$ when $k=3,4$; $\mathcal{D}=\emptyset$ when $k=2$.
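As for the new family, the DOF count of the $r=k$ triangular elements matches $\dim V_h^{k-1,k}(K)=k^2+2k$ for $k\geq 4$, and the lowest-order case indeed has 9 DOFs on a triangle. The following bookkeeping sketch (ours, not from the paper) verifies this.

```python
def dim_P(r):
    """Dimension of P_r (total degree <= r) in 2D."""
    return (r + 1) * (r + 2) // 2 if r >= 0 else 0

def dim_1d(r):
    """Dimension of P_r on an edge (univariate)."""
    return r + 1 if r >= 0 else 0

def dim_V(k):
    """dim V_h^{k-1,k}(K) = dim(grad P_k) + dim(W_h^{k-1}) on a triangle."""
    dim_W = dim_P(k - 1) + (1 if k in (2, 3) else 0)  # bubble for k = 2, 3
    return (dim_P(k) - 1) + dim_W

def num_dofs(k):
    vertex = 3                                   # curl u at the 3 vertices
    edge = 3 * (dim_1d(k - 1) + dim_1d(k - 3))
    if k >= 5:   # D = P_{k-5}^2 + (tilde-P_{k-5} + tilde-P_{k-4} + tilde-P_{k-3}) x
        interior = 2 * dim_P(k - 5) + (k - 4) + (k - 3) + (k - 2)
    elif k in (3, 4):
        interior = dim_P(k - 3)                  # D = P_{k-3} * x
    else:
        interior = 0                             # D empty for k = 2
    return vertex + edge + interior

for k in range(2, 12):
    assert num_dofs(k) == dim_V(k)
assert dim_V(2) == 9   # lowest-order triangle: 9 DOFs, as stated in the text
print("DOF count matches dim V_h^{k-1,k}(K) on triangles for k = 2..11")
```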
[Figure \[fig:firstfamily\]: DOF diagrams of the lowest-order complex $\Sigma_h^2 \stackrel{\nabla}{\longrightarrow} V_h^{1,2} \stackrel{\nabla\times}{\longrightarrow} W_h^1$ on a triangle and on a rectangle; the original picture environment is not reproducible in this extraction.]

### Rectangular elements

Similarly, we can extend the rectangular elements to the case of $k=2$.
The DOFs for $\bm{u}\in V^{k-1, k}_h(K)=\nabla Q_k(K)\oplus \mathfrak p W^{k-1}_h(K)$ are given by the following. - Vertex DOFs $\bm M_{ {v}}({\bm u})$ at all the vertices $ {v}_{i}$ of $K$: $$\bm M_{ {v}}({\bm u})=\left\{( \nabla\times {\bm u})( v_{i}),\; i=1,\;2,\cdots,4\right\}.$$ - Edge DOFs $\bm M_{ {e}}( {\bm u})$ at all the edges $ {e}_i$ of $ {K}$, each with the unit tangential vector $ {\bm \tau}_i$: $$\begin{aligned} \bm M_{ {e}}( {\bm u})=&\left\{\int_{e_i} {\bm u}\cdot {\bm \tau}_i {q}\d {s},\ \forall {q}\in P_{k-1}( {e}_i), i=1,2,\cdots,4\right\}\\ \cup& \left\{\int_{e_i}\nabla\times{\bm u}q\d s,\ \forall {q}\in P_{k-3}( {e}_i), i=1,2,\cdots,4\right\}. \end{aligned}$$ - Interior DOFs $\bm M_{ {K}}( {\bm u})$: $$\begin{aligned} &\bm M_{ {K}}( {\bm u})=\left\{\int_{ {K}} {\bm u}\cdot {\bm q}\,\d A,\ \forall \bm q \in \mathcal{S}_1\oplus \mathcal{S}_2 \right\}, \end{aligned}$$ where $\mathcal{S}_1=\big\{ {\bm q}\ |\ {\bm q}= \psi {\bm x},\ \forall \psi\in Q_{k-2}( K)\big\}\ \text{and}\ \mathcal{S}_2=\big\{ {\bm q}\ |\ {\bm q}= \bm\nabla\times {\varphi},\ \forall {\varphi}\in {Q}_{k-3}( K)\slash{\mathbb{R}}\big\}$ when $k\geq 3$; $\mathcal{S}_1=\{{\bm x}\}$ and $\mathcal{S}_2=\emptyset$ when $k=2$. If $\bm u\in \bm H^s(\Omega)$ and $\nabla\times\bm u\in H^s(\Omega)$, $1+\delta\leq s\leq k$ with $\delta>0$, then we have the following error estimates for the interpolation $\Pi_h$, $$\begin{aligned} &\left\|\bm u-\Pi_h\bm u\right\|\leq Ch^{s}(\left\|\bm u\right\|_{s}+\left\|\nabla\times\bm u\right\|_{s}),\label{inter-u}\\ &\left\|\nabla\times(\bm u-\Pi_h\bm u)\right\|\leq Ch^s\left\|\nabla\times\bm u\right\|_{s},\label{inter-curlu}\\ & \left\|(\nabla\times)^2(\bm u-\Pi_h\bm u)\right\|\leq Ch^{s-1}\left\|\nabla\times\bm u\right\|_{s}. \end{aligned}$$ From Lemma \[Vh\], $\bm P_{k-1}(K)\subseteq V^{k-1, k}_h(K)$ and $\bm P_{k-1}(K)\subseteq W^{k-1}_h(K)$.
A family of curl-curl conforming elements with $r=k+1$ {#sec:thirdfamily} -------------------------------------------------------------- We take $r=k+1$ in for $k\geq 2$ to get the following complex: $$\begin{aligned} \label{discrete-complex-k1} \begin{diagram} 0 & \rTo^{} & \mathbb{R} & \rTo^{\subset} & \Sigma_h^{k+1} & \rTo^{\nabla} & V_{h}^{k, k} & \rTo^{\nabla\times} & W_h^{k-1} & \rTo^{} & 0. \end{diagram}\end{aligned}$$ We note that $V^{k, k}_h(K)=\bm P_k(K)$ when $k\geq 4$ and $K$ is a triangle, and thus $V_h^{k, k}(K)$ on triangles coincides with the finite elements constructed in [@quad-curl-eig-posterior] for $k\geq 4$. The lower-order triangular elements and the entire family of rectangular elements fill the gap in [@quad-curl-eig-posterior]. The lowest-order cases are shown in Fig. \[fig:secondfamily\]. The number of DOFs of the lowest-order element is 13 for a triangle and 20 for a rectangle.

[Figure \[fig:secondfamily\]: DOF diagrams of the lowest-order complex $\Sigma_h^3 \stackrel{\nabla}{\longrightarrow} V_h^{2,2} \stackrel{\nabla\times}{\longrightarrow} W_h^1$ on a triangle and on a rectangle; the original picture environment is not reproducible in this extraction.]

### Triangular elements

The DOFs for $\bm{u}\in V^{k, k}_h(K)=\nabla P_{k+1}(K)\oplus \mathfrak p W^{k-1}_h(K)$ are given as follows. - Vertex DOFs $\bm M_{ {v}}({\bm u})$ at all the vertices $ {v}_{i}$ of $K$: $$\bm M_{ {v}}({\bm u})=\left\{( \nabla\times {\bm u})( v_{i}),\; i=1,\;2,\;3\right\}.$$ - Edge DOFs $\bm M_{ {e}}( {\bm u})$ at all the edges $ {e}_i$ of $ {K}$, each with the unit tangential vector $ {\bm \tau}_i$: $$\begin{aligned} \bm M_{ {e}}( {\bm u})=&\left\{\int_{e_i} {\bm u}\cdot {\bm \tau}_i {q}\d {s},\ \forall {q}\in P_{k}( {e}_i), i=1,2,3\right\}\\ \cup& \left\{\int_{e_i}\nabla\times{\bm u}q\d s,\ \forall {q}\in P_{k-3}( {e}_i), i=1,2,3\right\}.
\end{aligned}$$ - Interior DOFs $\bm M_{ {K}}( {\bm u})$: $$\begin{aligned} &\bm M_{ {K}}( {\bm u})=\left\{\int_{ {K}} {\bm u}\cdot {\bm q}\,\d A,\ \forall \bm q \in \mathcal{D} \right\}, \end{aligned}$$ where $\mathcal{D}=\bm P_{k-5}( K)\oplus\widetilde{P}_{k-5} {\bm x}\oplus\widetilde{P}_{k-4} {\bm x}\oplus \widetilde{P}_{k-3} {\bm x}\oplus \widetilde{P}_{k-2} {\bm x}$ when $k\geq 5$; $\mathcal{D}={P}_{k-2} {\bm x}$ when $k=2,3,4$. ### Rectangular elements We extend the construction in [@quad-curl-eig-posterior] to the rectangular case. The DOFs for $\bm{u}\in V^{k, k}_{h}(K)=\nabla Q_{k+1}(K)\oplus \mathfrak p W^{k-1}_h(K)$ are given by the following. - Vertex DOFs $\bm M_{ {v}}({\bm u})$ at all the vertices $ {v}_{i}$ of $K$: $$\bm M_{ {v}}({\bm u})=\left\{( \nabla\times {\bm u})( v_{i}),\; i=1,\;2,\cdots,4\right\}.$$ - Edge DOFs $\bm M_{ {e}}( {\bm u})$ at all the edges $ {e}_i$ of $ {K}$, each with the unit tangential vector $ {\bm \tau}_i$: $$\begin{aligned} \bm M_{ {e}}( {\bm u})=&\left\{\int_{e_i} {\bm u}\cdot {\bm \tau}_i {q}\d {s},\ \forall {q}\in P_{k}( {e}_i), i=1,2,\cdots,4\right\}\\ \cup& \left\{\int_{e_i}\nabla\times{\bm u}q\d s,\ \forall {q}\in P_{k-3}( {e}_i), i=1,2,\cdots,4\right\}. \end{aligned}$$ - Interior DOFs $\bm M_{ {K}}( {\bm u})$: $$\begin{aligned} &\bm M_{ {K}}( {\bm u})=\left\{\int_{ {K}} {\bm u}\cdot {\bm q}\,\d A,\ \forall \bm q \in \mathcal{S}_1\oplus \mathcal{S}_2 \right\}, \end{aligned}$$ where $\mathcal{S}_1=\big\{ {\bm q}\ |\ {\bm q}= \psi{\bm x},\ \forall \psi\in Q_{k-1}( K)\big\}\ \text{and}\ \mathcal{S}_2=\big\{ {\bm q}\ |\ {\bm q}= \bm{\nabla}\times {\varphi},\ \forall {\varphi}\in {Q}_{k-3}( K)\slash{\mathbb{R}}\big\}$ when $k\geq 3$; $\mathcal{S}_2=\emptyset$ when $k=2$. This family of elements leads to one-order-higher accuracy in the $L^2$-norm.
\[interp-f2\] If $\bm u\in \bm H^{s+1}(\Omega)$, $ 1+\delta\leq s \leq k$ with $\delta>0$, then we have the following error estimates for the interpolation $\Pi_h$, $$\begin{aligned} &\left\|\bm u-\Pi_h\bm u\right\|\leq Ch^{s+1}\left\|\bm u\right\|_{s+1},\label{inter-u}\\ &\left\|\nabla\times(\bm u-\Pi_h\bm u)\right\|\leq Ch^s\left\|\bm u\right\|_{s+1},\label{inter-curlu}\\ & \left\|(\nabla\times)^2(\bm u-\Pi_h\bm u)\right\|\leq Ch^{s-1}\left\|\bm u\right\|_{s+1}. \end{aligned}$$ From Lemma \[Vh\], $\bm P_{k}(K)\subseteq V^{k, k}_h(K)$ and $\bm P_{k-1}(K)\subseteq W^{k-1}_h(K)$. By a duality argument, the numerical solution $\bm u_h$ converges to the exact solution $\bm u$ in the $L^2$-norm with order $\min\{s+1,2(s-1)\}$; hence when $s<3$ the convergence order is $2(s-1)$. Numerical Experiments ===================== In this section, we use the three families of the $H(\text{curl}^2)$-conforming finite elements to solve the quad-curl problem: for $\bm f\in H({\operatorname{div}}^0;\Omega)$, find $\bm u$, such that $$\label{prob1} \begin{split} (\nabla\times)^4\bm u+\bm u&=\bm f\ \ \text{in}\;\Omega,\\ \nabla \cdot \bm u &= 0\ \ \text{in}\;\Omega,\\ \bm u\times\bm n&=0\ \ \text{on}\;\partial \Omega,\\ \nabla \times \bm u&=0\ \ \text{on}\;\partial \Omega. \end{split}$$ Here $H({\operatorname{div}}^0;D)$ is the space of $\bm L^2(D)$ functions with vanishing divergence, i.e., $$H(\text{div}^0;D) :=\{\bm u\in {\bm L}^2(D):\; \nabla\cdot \bm u=0\},$$ and $\bm n$ is the unit outward normal vector to $\partial \Omega$. Taking the divergence on both sides of the first equation of , we see that the divergence-free condition $\nabla\cdot\bm u=0$ holds automatically, since $\nabla\cdot\bm f=0$ and the divergence of $(\nabla\times)^4\bm u$ vanishes identically.
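The identity used here, $\nabla\cdot\big((\nabla\times)^2\bm v\big)=0$ (and hence the same for $(\nabla\times)^4$), follows from the commuting of mixed partials: $\nabla\cdot\bm{\nabla\times}s=\partial_{x_1}\partial_{x_2}s-\partial_{x_2}\partial_{x_1}s=0$ for any scalar $s$. A small sketch (our own polynomial encoding, not part of the paper) checks this exactly.

```python
# Polynomials as {(a, b): coeff}; verify div((curl)^2 v) = 0 identically,
# which is why the divergence-free condition in the quad-curl problem
# follows from the first equation when div f = 0.

def dx(u, i):
    """Partial derivative of a polynomial dict with respect to x_i (i = 1 or 2)."""
    out = {}
    for (a, b), c in u.items():
        e = (a, b)[i - 1]
        if e > 0:
            key = (a - 1, b) if i == 1 else (a, b - 1)
            out[key] = out.get(key, 0) + e * c
    return out

def sub(u, v):
    out = dict(u)
    for k, c in v.items():
        out[k] = out.get(k, 0) - c
    return {k: c for k, c in out.items() if c != 0}

def curl_vec(v1, v2):
    """Scalar curl of a vector field: d(v2)/dx1 - d(v1)/dx2."""
    return sub(dx(v2, 1), dx(v1, 2))

def curl_scal(s):
    """Vector curl of a scalar field: (d s/dx2, -d s/dx1)."""
    return dx(s, 2), sub({}, dx(s, 1))

def div(v1, v2):
    out = dict(dx(v1, 1))
    for k, c in dx(v2, 2).items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c != 0}

# An arbitrary polynomial vector field.
v1 = {(3, 2): 5, (1, 4): -2, (0, 1): 7}
v2 = {(2, 3): 4, (4, 0): 1, (1, 1): -3}
assert div(*curl_scal(curl_vec(v1, v2))) == {}
print("div((curl)^2 v) = 0 verified for a sample polynomial field")
```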
We define $H_0(\text{curl}^2;D)$ with vanishing boundary conditions: $$\begin{aligned} &H_0(\text{curl}^2;D):=\{\bm u \in H(\text{curl}^2;D):\;{\bm n}\times\bm u=0\; \text{and}\; \nabla\times \bm u=0\;\; \text{on}\ \partial D\}.\end{aligned}$$ The variational formulation reads: find $\bm u\in H_0({\operatorname{curl}}^2;\Omega)$, such that $$\label{prob22} \begin{split} a(\bm u,\bm v)&=(\bm f, \bm v)\quad \forall \bm v\in H_0({\operatorname{curl}}^2;\Omega), \end{split}$$ with $a(\bm u,\bm v):=(\nabla\times\nabla\times\bm u,\nabla\times\nabla\times\bm v) + (\bm u,\bm v)$. We define the finite element space with vanishing boundary conditions $$\begin{aligned} V^0_h=\{\bm{v}_h\in V^{r}_h,\ \bm{n} \times \bm{v}_h=0\ \text{and}\ \nabla\times \bm{v}_h = 0 \ \text {on} \ \partial\Omega\}.\end{aligned}$$ The $H(\text{curl}^2)$-conforming finite element method reads: seek $\bm u_h\in V^0_h$, such that $$\label{prob3} \begin{split} a(\bm u_h,\bm v_h)&=(\bm f, \bm v_h)\quad \forall \bm v_h\in V^0_h. \end{split}$$ We now turn to a concrete example. We consider the problem on a unit square $\Omega=(0,1)\times(0,1)$ with an exact solution $$\bm u=\left( \begin{array}{c} 3\pi\sin^3(\pi x)\sin^2(\pi y)\cos(\pi y) \\ -3\pi \sin^3(\pi y)\sin^2(\pi x)\cos(\pi x)\\ \end{array} \right).$$ Then the source term $\bm f$ can be obtained by a simple calculation. The finite element solution is denoted as $\bm u_h$. To measure the error between the exact solution and the finite element solution, we denote $$\bm e_h=\bm u-\bm u_h.$$ The new family of elements with $r=k-1$ --------------------------------------- We first use the lowest-order element in the new family with $r=k-1$ to solve the problem . In this test, we use uniform triangular meshes and uniform rectangular meshes with the mesh size $h$ varying from ${1}/{20}$ to ${1}/{320}$ with the bisection strategy. 
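The remark that $\bm f$ "can be obtained by a simple calculation" can be made concrete symbolically. A sketch with SymPy follows, where the 2D operators are the scalar curl of a vector field and the vector curl of a scalar field; the helper names are ours, and we leave `f1`, `f2` unsimplified.

```python
import sympy as sp

x, y = sp.symbols('x y')
pi = sp.pi

# exact solution from the text
u1 = 3*pi*sp.sin(pi*x)**3 * sp.sin(pi*y)**2 * sp.cos(pi*y)
u2 = -3*pi*sp.sin(pi*y)**3 * sp.sin(pi*x)**2 * sp.cos(pi*x)

scurl = lambda v1, v2: sp.diff(v2, x) - sp.diff(v1, y)   # 2D scalar curl of a vector field
vcurl = lambda w: (sp.diff(w, y), -sp.diff(w, x))        # 2D vector curl of a scalar field

c2 = vcurl(scurl(u1, u2))        # (curl)^2 u
c4 = vcurl(scurl(*c2))           # (curl)^4 u
f1, f2 = c4[0] + u1, c4[1] + u2  # source term f = (curl)^4 u + u

# the divergence-free condition holds for this u
div_u = sp.simplify(sp.diff(u1, x) + sp.diff(u2, y))
```

One can also check directly that $\bm u\cdot\bm\tau$ and $\nabla\times\bm u$ vanish on $\partial\Omega$, consistent with the boundary conditions of the problem.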
For $\bm u=(u_1,u_2)^T$, we define two discrete norms: $$\begin{aligned} \3bar\bm u\3bar_{V}^2=& \sum_{K\in \mathcal{T}_h}2h_x^K\int_{x_c^K-h_x^K}^{x_c^K+h_x^K}u_1^2(x_c^K,y)\d y+\sum_{K\in \mathcal{T}_h}2h_y^K\int_{x_c^K-h_x^K}^{x_c^K+h_x^K}u_2^2(x,y_c^K)\d x,\label{u_discrete_norm}\\ &\3bar \bm u\3bar^2_W=\sum_{K\in \mathcal{T}_h}4h_x^Kh_y^K\left[u_1^2(x_c^K,y_c^K)+u_2^2(x_c^K,y_c^K)\right],\label{curlcurlu_discrete_norm}\end{aligned}$$ where $K=(x_c^K-h_x^K,x_c^K+h_x^K)\times(y_c^K-h_y^K,y_c^K+h_y^K)$ and [$x_c^K,y_c^K,h_x^K,h_y^K$ are defined in Fig. \[rect\]]{}. ![A rectangular element[]{data-label="rect"}](rec){width="7cm"} Table \[tab1\] illustrates various errors and convergence rates for triangular elements. Table \[tab2\] shows errors measured in various norms for rectangular elements. We also depict error curves for rectangular elements with a log-log scale in Fig. \[fig1\]. We observe that the numerical solution converges to the exact solution with a convergence order 1 in the $L^2$-norm, 2 in the $H({\operatorname{curl}})$-norm, and 1 in the $H({\operatorname{curl}}^2)$-norm, respectively. From Fig. \[fig1\], we also observe some superconvergence phenomena of $\bm e_h$ and $(\nabla\times)^2\bm e_h$ measured in the discrete norms $\3bar\cdot\3bar_V$ and $\3bar\cdot\3bar_W$ defined above, respectively. Using these superconvergent results, together with some recovery techniques, we can construct a solution with higher accuracy if needed.
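The rate columns in the tables that follow are obtained from errors on successively bisected meshes, $\text{rate}=\log_2(e_{2h}/e_h)$. A minimal sketch, using as sample data the $L^2$ errors of the lowest-order triangular element reported below:

```python
import math

# L2 errors of the lowest-order (k = 2) triangular element on meshes h = 1/20, ..., 1/320
errors = [2.0993673726e-01, 9.7360384609e-02, 4.7593813590e-02,
          2.3656260145e-02, 1.1838722855e-02]

# observed order between consecutive bisected meshes: rate = log2(e_{2h} / e_h)
rates = [math.log2(e_coarse / e_fine) for e_coarse, e_fine in zip(errors, errors[1:])]
```

The computed rates approach 1, matching the first-order $L^2$ convergence observed for this element.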
![Error curves in different norms[]{data-label="fig1"}](errorcurve.pdf){width="12cm"} $h$ $\left\|\bm e_h\right\|$ rates $\left\|\nabla\times\bm e_h\right\|$ rates $\left\|(\nabla\times)^2\bm e_h\right\|$ rates -------------- -------------------------- -------- -------------------------------------- -------- ------------------------------------------ -------- -- $1\slash 20$ 2.0993673726e-01 7.5983899295e-01 2.5104874626e+01 $1\slash40$ 9.7360384609e-02 1.1085 1.9608684185e-01 1.9542 1.2588225735e+01 0.9959 $1\slash80$ 4.7593813590e-02 1.0326 4.9417569900e-02 1.9884 6.2979085415e+00 0.9991 $1\slash160$ 2.3656260145e-02 1.0086 1.2379341740e-02 1.9971 3.1494062841e+00 0.9998 $1\slash320$ 1.1838722855e-02 0.9987 3.0965306022e-03 1.9992 1.5747589432e+00 0.9999 : [Numerical results by the lowest-order $(k=2)$ triangular element in the new family $(r=k-1)$ of $H(\text{curl}^2)$-conforming elements]{}[]{data-label="tab1"} $h$ $\left\|\bm e_h\right\|$ $\left\|\bm e_h\right\|_V$ $\left\|\nabla\times\bm e_h\right\|$ $\left\|(\nabla\times)^2\bm e_h\right\|$ $\left\|(\nabla\times)^2\bm e_h\right\|_W$ -------------- -------------------------- ---------------------------- -------------------------------------- ------------------------------------------ -------------------------------------------- -- -- $1\slash 20$ 1.569533e-01 3.836506e-02 2.630653e-01 1.484308e+01 3.233318e+00 $1\slash40$ 5.767988e-02 9.629132e-03 6.588465e-02 7.411773e+00 8.087790e-01 $1\slash80$ 2.841556e-02 2.409625e-03 1.647883e-02 3.704842e+00 2.022239e-01 $1\slash160$ 1.417562e-02 6.026113e-04 4.120188e-03 1.852296e+00 5.055782e-02 $1\slash320$ 7.085066e-03 2.087836e-04 1.030090e-03 9.261324e-01 1.263971e-02 : [Numerical results by the lowest-order $(k=2)$ rectangular element in the new family $(r=k-1)$ of $H(\text{curl}^2)$-conforming elements]{}[]{data-label="tab2"} [The family of elements with $r=k$]{} ------------------------------------- We then use the lowest-order element $V^{k-1, k}_{h}$ in the 
family with $r=k$. Again, we use the uniform mesh. Table \[tab3\] and Table \[tab4\] demonstrate the numerical results with $h$ varying from ${1}/{10}$ to ${1}/{160}$. We observe a second order convergence in the $L^2$-norm, second order in the $H({\operatorname{curl}})$-norm, and first order in the $H({\operatorname{curl}}^2)$-norm respectively. $h$ $\left\|\bm e_h\right\|$ rates $\left\|\nabla\times\bm e_h\right\|$ rates $\left\|(\nabla\times)^2\bm e_h\right\|$ rates -------------- -------------------------- -------- -------------------------------------- -------- ------------------------------------------ -------- -- $1\slash 10$ 1.9240008243e-01 1.8367480009e+00 4.8220431951e+01 $1\slash20$ 5.0377609092e-02 1.9333 4.9247009630e-01 1.8990 2.4914116835e+01 0.9527 $1\slash40$ 1.2750885542e-02 1.9822 1.2537561647e-01 1.9738 1.2562579616e+01 0.9878 $1\slash80$ 3.1977487715e-03 1.9955 3.1488015795e-02 1.9934 6.2946438036e+00 0.9969 $1\slash160$ 8.0169537073e-04 1.9959 7.8810588738e-03 1.9983 3.1489963271e+00 0.9992 : [Numerical results by the lowest-order $(k=2)$ triangular element in the family of $H(\text{curl}^2)$-conforming elements with $ r=k$]{}[]{data-label="tab3"} $h$ $\left\|\bm e_h\right\|$ rates $\left\|\nabla\times\bm e_h\right\|$ rates $\left\|(\nabla\times)^2\bm e_h\right\|$ rates -------------- -------------------------- -------- -------------------------------------- -------- ------------------------------------------ -------- -- $1\slash 10$ 8.5451929281e-02 7.7422997132e-01 3.1165128865e+01 $1\slash20$ 2.1175471745e-02 2.0127 1.9245034017e-01 2.0083 1.5568441694e+01 1.0013 $1\slash40$ 5.2832495527e-03 2.0029 4.8047264551e-02 2.0020 7.7828759533e+00 1.0002 $1\slash80$ 1.3201647809e-03 2.0007 1.2007795245e-02 2.0005 3.8912825860e+00 1.0001 $1\slash160$ 3.3018178425e-04 1.9994 3.0016984719e-03 2.0001 1.94562226310e+00 1.0000 : [Numerical results by the lowest-order $(k=2)$ rectangular element in the family of $H(\text{curl}^2)$-conforming elements with 
$ r=k$]{}[]{data-label="tab4"} [The family of elements with $r=k+1$]{} --------------------------------------- We now test elements in the family with $r=k+1$. We apply the same mesh as before. Table \[tab5\], Table \[tab6\], and Table \[tab7\] show the numerical results for various mesh sizes and elements. We observe the same convergence behavior as in Theorem \[interp-f2\]. $h$ $\left\|\bm e_h\right\|$ rates $\left\|\nabla\times\bm e_h\right\|$ rates $\left\|(\nabla\times)^2\bm e_h\right\|$ rates -------------- -------------------------- -------- -------------------------------------- -------- ------------------------------------------ -------- -- $1\slash 10$ 1.9162039079e-01 1.8313769687e+00 4.8217734800e+01 $1\slash20$ 4.9535363173e-02 1.9517 4.9211205978e-01 1.8959 2.4914028390e+01 0.9526 $1\slash40$ 1.2542331571e-02 1.9817 1.2535288020e-01 1.9730 1.2562576819e+01 0.9878 $1\slash80$ 3.1457630521e-03 1.9953 3.1486589833e-02 1.9932 6.2946437160e+00 0.9969 $1\slash160$ 7.8970028290e-04 1.9940 7.8809577490e-03 1.9983 3.1489963244e+00 0.9992 : [Numerical results by the lowest-order $(k=2)$ triangular element in the family of $H(\text{curl}^2)$-conforming elements with $r=k+1$]{}[]{data-label="tab5"} $h$ $\left\|\bm e_h\right\|$ rates $\left\|\nabla\times\bm e_h\right\|$ rates $\left\|(\nabla\times)^2\bm e_h\right\|$ rates -------------- -------------------------- -------- -------------------------------------- -------- ------------------------------------------ -------- -- $1\slash 10$ 8.3992408207e-02 7.7364067749e-01 3.1176022684e+01 $1\slash20$ 2.0556712167e-02 2.0306 1.9241217769e-01 2.0075 1.5569873825e+01 1.0017 $1\slash40$ 5.1255229677e-03 2.0038 4.8044859565e-02 2.0017 7.7830572413e+00 1.0003 $1\slash80$ 1.2805559780e-03 2.0009 1.20076446880e-02 2.0004 3.8913053186e+00 1.0001 $1\slash160$ 3.2031720408e-04 1.9992 3.0016889644e-03 2.0001 1.9456251069e+00 1.0000 : [Numerical results by the lowest-order $(k=2)$ rectangular element in the family of 
$H(\text{curl}^2)$-conforming elements with $r=k+1$]{}[]{data-label="tab6"} $h$ $\left\|\bm e_h\right\|$ rates $\left\|\nabla\times\bm e_h\right\|$ rates $\left\|(\nabla\times)^2\bm e_h\right\|$ rates -------------- -------------------------- -------- -------------------------------------- -------- ------------------------------------------ -------- -- $1\slash 4$ 6.4824700078e-02 9.9555045503e-01 2.7962159296e+01 $1\slash 8$ 4.5803979724e-03 3.8230 1.3888086952e-01 2.8416 7.3371187528e+00 1.9302 $1\slash 16$ 2.9272256581e-04 3.9679 1.7804274852e-02 2.9636 1.8544759176e+00 1.9842 $1\slash 32$ 1.8384636804e-05 3.9930 2.2390375000e-03 2.9913 4.6485519640e-01 1.9962 $1\slash 64$ 1.1662844410e-06 3.9785 2.8029810923e-04 2.9978 1.1629065139e-01 1.9990 : [Numerical results by the third-order ($k=3$) rectangular element in the family of $H(\text{curl}^2)$-conforming elements with $r=k+1$]{}[]{data-label="tab7"} We conclude this section by pointing out that each of the three families of elements has its own advantages. The new family ($r=k-1$) can be the best choice if we pursue a low computational cost, while the family with $r=k+1$ stands out for its higher accuracy in the $L^2$-norm. Conclusion ========== In this paper, we constructed finite element de Rham complexes with enhanced smoothness in 2D. The new construction yields several curl-curl conforming elements. The two existing families of elements fit into our complexes, and with the idea in this paper we extend them to lower-order cases. The low-order elements (e.g., with 6 DOFs and 8 DOFs for the lowest-order cases on triangles and rectangles, respectively) are thus easy to implement. In the future, we will construct discrete Stokes-type complexes and curl-curl conforming elements in 3D and further investigate the superconvergence phenomena. [^1]: This work is supported in part by the National Natural Science Foundation of China grants NSFC 11871092 and NSAF U1930402.
--- abstract: | The effect of one-gluon-exchange (OGE) pair-currents on the ratio $\mu_p G_E^p/G_M^p$ for the proton is investigated within a nonrelativistic constituent quark model (CQM) starting from $SU(6) \times O(3)$ nucleon wave functions, but with relativistic corrections. We find that the OGE pair-currents are important for a good reproduction of the ratio $\mu_p G_E^p/G_M^p$. With the assumption that the OGE pair-currents are the driving mechanism for the violation of the scaling law, we give a prediction for the ratio $\mu_n G_E^n/G_M^n$ of the neutron. author: - 'Murat M. Kaskulov' - Peter Grabmayr title: 'Effect of gluon-exchange pair-currents on the ratio $\mu_p G_E^p/G_M^p$' --- [*Introduction:*]{}    Recently the ratio $\mu_p G_E^p/G_M^p$ between the electric $G^p_E(Q^2)$ and magnetic $G^p_M(Q^2)$ form factors of the proton has been extracted from experimental data on the recoil proton polarization in elastic electron scattering with polarized electrons up to $Q^2\sim$5 GeV$^2$ [@Milbrath1999; @Jones2000; @Gayou2001; @Gayou2002]. These experiments are of importance because they are direct measurements of the form factor ratio, and the present results are in contradiction to previous analyses [@Milbrath1999; @Jones2000; @Gayou2001; @Gayou2002; @Brash2002]. Historically, the determination of the electric and magnetic form factors up to several GeV was based on the Rosenbluth separation, and the results were found compatible with the scaling laws: $$\label{scaling} G_E^p(Q^2) = G_M^p(Q^2)/{\mu_p} = G_{D}(Q^2)\ \ ,$$ where $G_{D}(Q^2)$ represents the dipole form factor. The form factors and particularly the ratio give insight into the main features of the dynamical processes and are very useful for a test of the nucleon models [@Thomas_book].
The remarkable feature of the new experimental data is that they show a decrease of the ratio $\mu_{p} G_E^p/G_M^p$ from unity, indicating a significant deviation from this simple scaling law, but also from the simple constituent quark model. Within different hadronic models, calculations for the proton ratio $\mu_p G_E^p/G_M^p$ have become available, with Ref. [@Frank2] presenting one of the earliest. We will restrict this discussion to the most recent calculations which agree reasonably well with the trend of the experimental data and which allow predictions at higher $Q^2$ than the present data. In the cloudy bag model (CBM) [@Thomas], the pion field required by chiral symmetry is quantized and coupled to the MIT bag [@MIT1]. Addition of the pion cloud improves the MIT bag model results [@Lu1], in which the decrease of $\mu_p G_E^p/G_M^p$ is an inherent property. It was shown for a CBM formulated on the light cone [@Miller] that the combination of Poincaré invariance and pion effects is sufficient to describe $\mu_p G_E^p/G_M^p$. Several groups have studied different effects within CQMs. In the Goldstone boson exchange CQM [@GlozmanRiska] the baryon is considered as a system of three constituent quarks with an effective $qq$ hyperfine interaction mediated by the octet of pseudoscalar mesons. This model, together with the point-form spectator approximation [@GlozmanBoffi], which provides a covariant framework, leads to a rather close description of the nucleon form factors and the available $\mu_p G_E^p/G_M^p$ data. Calculations of Ref. [@Cardarelli], performed within the CQM and light-front formalism, showed that a suppression of the ratio can be expected in the CQM, if the relativistic effects generated by kinematical $SU(6)$ breaking due to the Melosh rotation of the constituent spins are taken into account. Finally, the most recent calculations based on relativistic quark models are from Ref.
[@MillerFrank], where the hadron helicity nonconservation induced by the Melosh transformation was recognised to affect the ratio. The implementation of relativity is a common feature of all these works, and all emphasize the necessity of both kinematical and dynamical relativistic corrections for the interpretation of the decrease of the ratio $\mu_p G_E^p/G_M^p$. In the non-relativistic constituent quark model (NRCQM) [@Isgur], the effective degrees of freedom are the massive quarks moving in a self-consistent potential whose specific form is dictated by considerations of QCD. Other degrees of freedom like Goldstone bosons or gluons are not considered in the original version and are effectively absorbed into the constituent quarks. Theoretically, the explicit introduction of the additional degrees of freedom in the nucleon structure will change its properties compared to expectations based on simple quark models in which the baryon is described as a three-quark state only. Among different improvements to the naive CQM which could be essential for dynamical properties of the nucleons, the most important ones are relativistic kinematical corrections, the introduction of a mesonic cloud via pion-loop corrections, and dynamical corrections due to the interaction currents and to the creation of quark-antiquark ($q \bar q$) pairs. For low momentum transfer, $q \bar q$ pairs (sea-quarks) are dominant and the mesonic degrees of freedom become increasingly important. However, in a recent study [@Geiger] on “un-quenching” the quark model, strong cancellations between the hadronic components of the $q \bar q$ sea were found which tend to make the nucleon transparent to photons. These studies provide a natural way of understanding the success of the valence quark model even though the $q\bar q$ sea is very strong. At higher momentum transfer and in the presence of a residual $qq$ interaction, the e.m. operators must be supplemented by the two-body exchange currents.
The inclusion of two-body terms leads beyond the single-quark impulse approximation, and, depending on the model for the $qq$ interaction, effectively represents the gluonic or mesonic exchange degrees of freedom in the e.m. current operator. In this sense the physical picture should be similar to nuclear physics, where at low momentum transfer the nucleons are reasonable degrees of freedom, but at higher momentum transfer the meson-exchange currents play a prominent role [@Kaskulov:2002mc]. In this work we continue our studies [@Grabmayr1] of the possible role of interaction currents, in particular OGE pair-currents, for the e.m. properties of the nucleon. We use the NRCQM with relativistic corrections, coming from the Lorentz boost of the nucleon wave function, together with gluonic corrections for the calculation of the proton ratio $\mu_{p} G_E^p/G_M^p$ at momentum transfers beyond 1 GeV$^2$, where effects of the soft pionic cloud should be less important. We show that gluonic corrections to the CQM are important, and that the ratio $\mu_{p} G_E^p/G_M^p$ is well reproduced by the $SU(6) \times O(3)$ wave function of the nonrelativistic quark model. [*The nucleon in the NRCQM:*]{}    In the quark model, baryons are considered as three-quark configurations. The ground state has positive parity with all three quarks in their lowest state, and the total angular momentum (isospin) of baryons is obtained by appropriately combining the quark spins (isospins).
In the NRCQM [@Isgur] a baryon is treated as a non-relativistic three-quark system, and in the simplest case of equal quark masses $m_q$ it is described by the Hamiltonian: $$\begin{aligned} \label{H3q} \mathcal{H}_{3q} &=&~~{\displaystyle}\sum_{i=1}^{3} \Big(m_q + \frac{{\displaystyle}{\bf p}_i^2}{{\displaystyle}2 m_q} \Big) - \ \frac{{\displaystyle}{\bf P}^2}{{\displaystyle}6 m_q} \nonumber \\ && + {\displaystyle}\sum_{i<j}^{3} V^{(conf)}({\bf r}_i,{\bf r}_j) + {\displaystyle}\sum_{i<j}^{3} V^{(res)}({\bf r}_i,{\bf r}_j) \end{aligned}$$ where ${\bf r}_i$, ${\bf p}_i$ are the spatial and momentum coordinates of the $i$-th quark, respectively, and [**P**]{} is the centre-of-mass momentum. The Hamiltonian ${\cal H}_{3q}$ consists of the nonrelativistic kinetic energy, a confinement potential $V^{(conf)}$, and a residual interaction $V^{(res)}$. Here, we take a two-body harmonic oscillator (h.o.) confinement potential: $V^{(conf)}({\bf r}_i,{\bf r}_j) ~\sim~ {\bf \lambda}_i \cdot {\bf \lambda}_j ( {\bf r}_i - {\bf r}_j)^2,~ $where ${\bf \lambda}_i$ are the Gell-Mann colour matrices of the $i$-th quark, with $\left<{\bf \lambda}_i \cdot {\bf \lambda}_j\right> = -8/3$ for a $qq$ pair in a baryon. The phenomenological residual interaction $V^{(res)}$ can be based on various $qq$ potentials [@GlozmanRiska; @Isgur], which reflect the symmetries and properties of QCD. Up to now, its dynamical origin is rather uncertain. We use a standard OGE interaction, the strength of which is determined by the strong coupling constant $\alpha_s$. However, unlike perturbative QCD, where the strong coupling constant $\alpha_s$ goes to zero at large inter-quark momenta, we take $\alpha_s$ of the NRCQM as an effective momentum independent constant. We start from the simplest form of the NRCQM, i.e. without configuration mixing, in which the nucleon $| N \rangle$ is described by the lowest h.o. 
three quark configurations $(0s)^3[3]_X$ in the translationally-invariant shell model (TISM): $$| N \rangle = \Big|(0s)^3 [3]_X L=0, ST = \frac{1}{2} \frac{1}{2} [3]_{ST},~ J^P = \frac{1}{2}^{+} \Big\rangle$$ where the colour part is omitted. After having removed the centre-of-mass coordinate ${\bf R}$ from the TISM configuration, the ground state eigenfunction depends only on the Jacobi relative coordinates ${\bf \rho}_1$ and ${\bf \rho}_2$ of the quarks: $$| (0s)^3 ({\bf \rho}_1,{\bf \rho}_2) \rangle \sim \exp \left( - \frac{1}{4 b^2} {\bf \rho}^2_1 - \frac{1}{3 b^2} {\bf \rho}^2_2 \right)$$ where the constant $b$ determines the average hadronic size of the baryon. Note that the elimination of ${\bf R}$ is crucial for correctly counting the baryonic states. This is one reason why the nonrelativistic approach is so successful in spectroscopy. [*The nucleon e.m. Sachs form factors:*]{}    The nucleon e.m. form factors are functions of the square of the momentum transfer in the scattering process ${Q}^2 = -q^{\mu} q_{\mu}$. The Sachs form factors, $G_{E(M)}$, fully characterize the charge and current distributions inside the nucleon [@Sachs1] and can be written in terms of Dirac and Pauli form factors $\mathcal{F}_1$ and $\mathcal{F}_2$, respectively. The most general form of the nucleon e.m. operator $J^{\mu}_{em}(x)$, which defines $\mathcal{F}_1$ and $\mathcal{F}_2$, satisfies the requirements of relativistic covariance and the condition of gauge invariance; it is of the form $$\begin{aligned} \langle N(p',s') | J^{\mu}_{em}(0) | N(p,s) \rangle &=& \\ \nonumber \bar{u}({\bf p}',s') \Big[ \gamma^{\mu} \mathcal{F}_{1}(Q^2) &+& i\frac{\sigma^{\mu\nu}q_{\nu}}{2 M_N}\mathcal{F}_{2}(Q^2) \Big] u({\bf p}, s) ,\end{aligned}$$ with $q^{\nu} = p'^{\nu} - p^{\nu}$. The Breit frame, where the incoming momentum ${\bf p} = - {\bf q}/2$ is scattered to the momentum ${\bf p}' = {\bf q}/2$, is characterized by ${Q}^2 = {\bf q}^2$. 
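The color factor $\left<{\bf \lambda}_i \cdot {\bf \lambda}_j\right> = -8/3$ quoted above for a $qq$ pair in a baryon (a color-antitriplet, i.e. antisymmetric, pair) can be verified numerically from the Gell-Mann matrices; a small sketch:

```python
import numpy as np

s3 = 1 / np.sqrt(3)
# the eight Gell-Mann matrices
gm = [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]],
]]

# lambda_i . lambda_j acting on the two-quark color space C^3 (x) C^3
M = sum(np.kron(lam, lam) for lam in gm)

# an antisymmetric (color-antitriplet) pair state, e.g. |rg> - |gr>
e = np.eye(3)
v = np.kron(e[0], e[1]) - np.kron(e[1], e[0])
expectation = (v @ M @ v) / (v @ v)   # should equal -8/3
```

The same value follows from the quadratic Casimir: $\bm\lambda_1\cdot\bm\lambda_2 = 2\,[C_2(\bar 3) - 2C_2(3)] = 2\,(4/3 - 8/3) = -8/3$.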
In this frame the nucleon electric $G_E$ and magnetic $G_M$ form factors can be interpreted as Fourier transforms of the distributions of charge and magnetization, respectively: $$\begin{aligned} \Big< N_{s'}(\frac{{\bf q}}{2})\Big|~{\bf J}_{em}(0)~ \Big| N_s(-\frac{{\bf q}}{2}) \Big> &=&\chi^{\dagger}_{s'}\frac{i{\bf\sigma}\times{\bf q}}{2 M_N} \chi_s G_M(q^2) ~~~~\\ \Big< N_{s'}(\frac{{\bf q}}{2}) \Big| ~J^{0}_{em}(0)~ \Big| N_s(-\frac{{\bf q}}{2}) \Big> &=& \chi^{\dagger}_{s'} \chi_s G_E(q^2) \end{aligned}$$ where $\chi^{\dagger}_{s'}$ and $\chi_s$ are Pauli spinors for the final and initial nucleons, respectively. Starting from the rest frame, the spherical nucleon is expected to undergo a Lorentz contraction along the direction of motion. Results of previous studies suggest that the consistent treatment of the form factors should be supplemented by the relativistic boost [@Wagenbrunn]. But a complete solution of a covariant many-body problem is difficult; the use of the light-cone dynamics [@Chung] for constituent quarks leads to the introduction of additional parameters. Thus, a semiclassical prescription proposed in Ref. [@Licht] and successfully applied in a CBM [@Lu1] is used here. Thereby, the relativistic form factors can be derived in the Breit frame from the corresponding nonrelativistic ones by a simple substitution: $$\label{Lboost} G_{E(M)}(Q^2) \to \eta G_{E(M)}(\eta Q^2),$$ where $ {\displaystyle}\eta = {M^2_N}/{E^2_N} $ and $ E_N^2 = M_N^2 + {{\bf q}^2}/{4}$. The scaling factor $\eta$ in the argument of $G_{E(M)}$ arises from the coordinate transformation of the struck quark, and the pre-factor in Eq.(\[Lboost\]) comes from the reduction of the integral measure of the two spectator quarks in the Breit frame. This simple boost together with the NRCQM nucleon wave function does not admix configurations with nonzero orbital angular momentum; it leads to the hadron helicity conserving solution.
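The boost prescription of Eq.(\[Lboost\]) is straightforward to apply numerically. A minimal sketch, with illustrative inputs (nucleon mass, the Gaussian quark-core form factor used later in the text, and $b=0.5$ fm); the helper names are ours:

```python
import math

M_N = 0.939      # nucleon mass in GeV (illustrative)
HBARC = 0.1973   # hbar*c in GeV.fm, to convert b to GeV^-1

def eta(Q2):
    """Boost factor eta = M_N^2 / E_N^2 with E_N^2 = M_N^2 + q^2/4 (Breit frame)."""
    return M_N**2 / (M_N**2 + Q2 / 4.0)

def boost(G):
    """Return the boosted form factor Q2 -> eta * G(eta * Q2) of Eq. (Lboost)."""
    return lambda Q2: eta(Q2) * G(eta(Q2) * Q2)

# example: Gaussian one-body form factor with quark core radius b = 0.5 fm
b = 0.5 / HBARC                               # b in GeV^-1
G3q = lambda Q2: math.exp(-Q2 * b**2 / 6.0)   # cf. Eqs. (OB_E), (OB_M) below
G_rel = boost(G3q)
```

At $Q^2=0$ the boost is the identity ($\eta=1$), while at large $Q^2$ the shrinking argument $\eta Q^2$ softens the Gaussian fall-off.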
Note that imposing Poincaré invariance in a relativistic CQM causes substantial violation of the helicity conservation rule [@MillerFrank], and results in an asymptotic behaviour of form factors which differs from that expected in pQCD [@Lepage]. We first consider the nucleon single-quark current ${j}_{q_i}^{\mu}(x)$ contribution: $J^{\mu}_{em}(x) = \sum_{i=1}^{3} {j}_{q_i}^{\mu}(x).$ In the CQM the e.m. vertex of the internal quarks should be assumed to have a spatially extended structure that may be described by a form factor $F_{q}({\bf q}^2)$. The most general form for the covariant e.m. current operator of the constituent quarks is written as [@Gross]: $$\label{J_3q_Modif} {j}_{q_i}^{\mu}(x) = \mathcal{Q}_i \bar{q}_i(x) \Big\{ \gamma^{\mu} + \Big(F_q({\bf q}^2) - 1 \Big) \Big[\gamma^{\mu} - \frac{\gamma \cdot q q^{\mu}}{q^2} \Big] \Big\} q_i(x),$$ where $q_i(x)$ is the quark field operator, $\mathcal{Q}_i$ is its charge in units of $e$:  $\mathcal{Q}_i = 1/2 \left[ 1/3 + \tau^3_i \right].$ This vertex, in which the first term corresponds to pointlike quarks, maintains the requirement of current conservation, as the form factor modification appears only in a purely transverse term. The nonrelativistic reduction of Eq.(\[J\_3q\_Modif\]) for pointlike quarks, $F_{q}({\bf q}^2)=1$, leads to the standard one-body e.m. current operators: $\hat{\rho}_{3q}({\bf q}) = \sum_{i=1}^{3} \mathcal{Q}_i e^{i \bf{q} \cdot {\bf r}_i}$ and $\hat{\bf j}_{3q}({\bf q}) = \frac{1}{2 m_q} \sum_{i=1}^{3} \mathcal{Q}_i e^{i \bf{q} \cdot {\bf r}_i} \Bigl( {\bf p}'_i + {\bf p}_i + i {\bf \sigma}_i \times {\bf q} \Bigr), ~ $ where we have retained only the lowest order contributions. This is in the spirit of the NRCQM, where the main contribution to the e.m. moments is expected to come from the non-relativistic single quark currents, which, by the choice of the effective quark mass, already incorporate substantial relativistic corrections [@Buchmann].
It follows that one should not use next-to-leading order relativistic corrections proportional to $\sim {\bf q}^2/8 m_q^2 $ in the charge operator $\hat{\rho}_{3q}({\bf q})$, for example the Darwin-Foldy term, if one ignores them in the kinetic energy. The naive CQM results in the following nucleon e.m. form factors $G^{(3q)}_E$ and $G^{(3q)}_M$: $$\begin{aligned} \label{OB_E} G^{(3q)}_E ({\bf q}^2) &=& e_N \exp \left(-{\bf q}^2 b^2/6 \right) \\ \label{OB_M} G^{(3q)}_M ({\bf q}^2) &=& \frac{M_N}{m_q} \ \mu_N \exp \left(-{\bf q}^2 b^2/6\right) \end{aligned}$$ where $e_N$ and $\mu_N$ are the charge and CQM magnetic moment of the nucleon: $e_N = \frac{1}{2} \langle N |(1 + \tau_3) | N \rangle, ~\mu_N = \frac{1}{6} \langle N | ( 1 + 5 \tau_3 ) | N \rangle .$ Due to the same momentum dependence, Eqs.(\[OB\_E\]) and (\[OB\_M\]) lead to the scaling law noted in Eq.(\[scaling\]); a ratio of unity is obtained as presented by the long dashed line in Fig. \[fig:gegmprot4\]. Clearly, the scaling law is in contradiction with the recent proton experiments [@Jones2000; @Milbrath1999; @Gayou2001; @Gayou2002]. [*The OGE pair-current:*]{}   In the presence of residual OGE interactions between the quarks, the total current operator of the hadron cannot simply be a sum of free quark currents, but must be supplemented by two-body currents. These two-body currents are closely related to the $qq$ potential from which they can be derived by minimal substitution. Since the effect of the residual $qq$ potential is clearly seen in the excited spectra of hadrons, one expects the corresponding two-body currents to play an important role in various e.m. properties of hadrons. Both the photon and the gluons interacting with quarks can produce $q\bar q$ pairs leading to pair-current contributions to the e.m. quark current as provided by OGE. The two-body terms we consider are depicted in Fig. \[OGE\]. The nonrelativistic reduction of these diagrams leads to the following configuration space e.m.
current operators [@Grabmayr1; @Sanctis]: $$\begin{aligned} \label{rhooge} \rho_{3q}^{(OGE)} = - i \frac{ \alpha_s}{16 m_q^3} \sum_{i < j} {\bf \lambda}_i \cdot {\bf \lambda}_j \frac{ \mathcal{Q}_i}{r^3_{ij}} \left[ e^{i {\bf q}\cdot {\bf r}_i} \Big({\bf q} \cdot ({\bf r}_{i} - {\bf r}_{j}) \right. \hspace{0.cm} \nonumber \\ + \left. \Big[{\bf \sigma}_i \times {\bf q}\Big] \Big[{\bf \sigma}_j \times ({\bf r}_{i}-{\bf r}_j)\Big] \Big) + (i \leftrightarrow j) \right] \hspace{0.8cm} \\ \label{joge} {\bf j}_{3q}^{(OGE)} = - \frac{\alpha_s}{8 m_q^2} \sum_{i < j} {\bf \lambda}_i \cdot {\bf \lambda}_j \frac{ \mathcal{Q}_i}{r^3_{ij}} \hspace{3.3cm} \nonumber \\ \times ~ \Big[ e^{i {\bf q} \cdot {\bf r}_i} \Big[({\bf \sigma}_i + {\bf \sigma}_j ) \times ({\bf r}_{i}-{\bf r}_j)\Big] + (i \leftrightarrow j) \Big] \hspace{0.2cm}\end{aligned}$$ These OGE pair-currents describe a $q\bar q$ pair creation process induced by the external photon with subsequent annihilation of the $q\bar q$ pair into a gluon, which is then absorbed by another quark. These currents are of relativistic origin as reflected in the higher powers of $1/m_{q}$ as compared to the one-body e.m. current operators. Because the gluon does not carry any isospin, the OGE pair-current has the same isospin structure as the one-body currents.
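As an aside, the one-body formulas of Eqs.(\[OB\_E\]) and (\[OB\_M\]) above already fix the classic $SU(6)$ magnetic moments: with $\langle\tau_3\rangle=+1$ for the proton and $-1$ for the neutron, $\mu_N$ gives $\mu_p/\mu_n=-3/2$ (vs. $-1.46$ experimentally). A one-line numerical check, using the illustrative $m_q=400$ MeV quoted later in the text:

```python
M_N, m_q = 0.939, 0.400   # nucleon and constituent quark masses in GeV (illustrative)

def moment(tau3):
    """CQM magnetic moment G_M(0) = (M_N/m_q) * (1 + 5*tau3)/6, Eq. (OB_M) at q = 0."""
    return (M_N / m_q) * (1 + 5 * tau3) / 6

mu_p, mu_n = moment(+1), moment(-1)
ratio = mu_p / mu_n   # SU(6) prediction -3/2
```

This also illustrates why the overall scale $M_N/m_q$ drops out of the ratio, leaving the pure $SU(6)$ spin-flavor factor.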
Eqs.(\[rhooge\]) and (\[joge\]) result in the following electric $G^{(OGE)}_{E}$ and magnetic $G^{(OGE)}_{M}$ form factors: $$\begin{aligned} \label{OGE_E} \left\{ \begin{array}{r} G^{(OGE)}_{E_p} \\ G^{(OGE)}_{E_n} \end{array} \right\} &=& -\frac{\alpha_s}{m_q^3} ~ q ~ e^{-q^2 b^2 /24} \left\{ \begin{array}{r} 1/3 \\ - 2/9 \end{array} \right\} \mathcal{K}(q) ~~~~~~\\ \label{OGE_M} \left\{ \begin{array}{r} G^{(OGE)}_{M_p} \\ G^{(OGE)}_{M_n} \end{array} \right\} &=& \frac{\alpha_s}{m_q^2} ~ \frac{M_N}{q} ~ e^{-q^2 b^2 /24} \left\{ \begin{array}{r} 2/3 \\ - 2/9 \end{array} \right\} \mathcal{K}(q) ~~~~~~\end{aligned}$$ The function $\mathcal{K}$ in the above expressions is: $$\mathcal{K}(q) = 4 \pi \Big( \frac{1}{2 \pi b^2} \Big)^{3/2} \int_{0}^{\infty} d r ~ e^{-r^2/(2 b^2)} j_1({q r}/{2})$$ where $j_1({q r}/{2})$ is the spherical Bessel function. The interaction of the incoming photon with a $q\bar q$ pair can be considered as a point-like interaction or as being dominated by intermediate vector mesons. The latter leads to an additional dipole form factor, $ {\displaystyle}F_{\gamma q \bar q}({\bf q}^2) = \Lambda_{\gamma q \bar q}^2/ \left(\Lambda_{\gamma q \bar q}^2 + {\bf q}^2\right)$, reflecting the extended structure of the $\gamma q \bar q$ vertex. $\Lambda_{\gamma q \bar q}$ can be considered as a free parameter or simply can be taken equal to the $\rho$-meson mass. [*Results:*]{}   In this work we consider the effect of the OGE pair-current corrections to the NRCQM nucleon e.m. form factors, particularly for the ratio $\mu_{p} G^p_{E}/G^p_{M}$. The ratio is calculated for a quark mass of $m_q$=400 MeV and the respective quark core radius of $b$=0.5 fm. In Fig. \[fig:gegmprot4\] calculations with different $\alpha_s$ are shown to indicate the sensitivity. In the insert of Fig. 
\[fig:gegmprot4\] we show results towards higher values of $Q^2$ for the best description of the present data by $\alpha_s$ = 0.4 with (solid curve) and without Lorentz boost (dashed curve). Our results indicate that the ratio $\mu_{p} G^p_{E}/G^p_{M}$ continues to decrease and that it will cross zero at $Q^2\sim$8.1 GeV$^2$. From this, a negative value of the ratio must be expected for the planned measurements at JLab at $Q^2\sim$9 GeV$^2$. Deviations could be explained by an extended $\gamma q \bar q$ vertex as demonstrated by using a $\Lambda_{\gamma q \bar q}$=770 MeV (dotted line). The introduction of such states does not affect our results very much up to $\sim$10 GeV$^2$, but strongly influences the behaviour of $\mu_{p} G^p_{E}/G^p_{M}$ for higher $Q^2$. For quark masses in the range $m_q\sim313\div400$ MeV and bag radii of $b\sim0.4\div0.6$ fm one can also find a good description of the data with reasonable values for $\alpha_s\sim0.2\div0.6$ [@Sanctis]. However, these are not able to reproduce the $N-\Delta$ mass splitting in the case of pure OGE. It seems likely that the observed mass splitting is the result of a linear combination of the pion-loop contributions and OGE [@Thomas_book]. In this sense pionic contributions could produce the desirable effect of reducing the size of the strong coupling constant $\alpha_s$, needed for the reproduction of $\mu_{p} G^p_{E}/G^p_{M}$. The ratio $\mathcal{F}^p_2/\mathcal{F}^p_{1}$ can be directly derived from $G^p_{E}/G^p_{M}$. It is predicted in Ref. [@Miller] to be constant for values of $Q^2$ up to 20 GeV$^2$ and it is understood as a result of the Melosh transformation, which reflects relativistic effects. Our results are shown in Fig. \[fig:qf2f1-highQ\].
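For completeness, the radial integral $\mathcal{K}(q)$ defined above, which carries the $q$-dependence of the OGE form factors entering these results, is easy to evaluate numerically. A sketch with SciPy quadrature (the value of $b$ is illustrative); for small $q$, $j_1(x)\approx x/3$ makes $\mathcal{K}$ linear in $q$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

def K(q, b):
    """K(q) = 4*pi*(2*pi*b^2)^(-3/2) * int_0^inf exp(-r^2/(2 b^2)) j_1(q r / 2) dr."""
    integrand = lambda r: np.exp(-r**2 / (2 * b**2)) * spherical_jn(1, q * r / 2)
    val, _ = quad(integrand, 0, np.inf)
    return 4 * np.pi * (2 * np.pi * b**2) ** (-1.5) * val

# illustrative quark core radius b = 0.5 fm in GeV^-1 (hbar*c = 0.1973 GeV.fm)
b = 0.5 / 0.1973
```

The linear small-$q$ behaviour of $\mathcal{K}$ compensates the explicit $1/q$ in $G^{(OGE)}_{M}$, so the pair-current magnetic form factor stays finite at $q\to 0$.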
The “kinematical” background formed by the naive CQM result (dot-dashed curve), $\mathcal{F}^p_2/\mathcal{F}^p_{1} = 1/(1+ \kappa_p Q^2/4M^2_N)$, underestimates the data and is not affected by the Lorentz boost, a failure which is overcompensated when the OGE pair-currents are added (dashed curve). The Lorentz boost (solid curve), acting on the OGE currents, reproduces the flattening in $Q\mathcal{F}^p_2/\mathcal{F}^p_{1}$. Following Ref. [@MillerFrank], we also study the high-$Q^2$ behavior. The ratio falls for asymptotic values of $Q^2$ as $Q\mathcal{F}^p_2/\mathcal{F}^p_{1} \sim 1/Q$, allowing a smooth transition to the scaling behavior expected from pQCD [@Lepage]. In Ref. [@MillerFrank] the ratio $Q \mathcal{F}^p_2/\mathcal{F}^p_{1}$ falls less quickly than in our case and in pQCD, both of which reflect the notion of hadron helicity conservation. We also confirm the statement of Ref. [@MillerFrank] that the plateau seen in Fig. \[fig:qf2f1-highQ\] is the result of a broad maximum occurring near $Q^2\sim10$ GeV$^2$. Recent experimental progress in using polarized nuclear targets will allow one to obtain the neutron ratio $G_E^n/G_M^n$. As is well known, in the $SU(6)$ limit $G_E^n(Q^2)$ is zero [@IsgurNeutron]. We can treat $G_E^n(Q^2)$ as a result of the residual OGE force in the form of gluonic currents; with the assumption that the OGE pair-currents are the driving mechanism of the scaling-law violation, we can calculate the neutron ratio $G_E^n/G_M^n$ using the best results for the proton. The results are shown in Fig. \[fig:gegmneut\]. Combining Eqs. (\[OB\_E\]), (\[OB\_M\]), (\[OGE\_E\]) and (\[OGE\_M\]) leads to a simple approximate analytic relation between $G_E^n/G_M^n$ and the corresponding proton ratio $G_E^p/G_M^p$: $$\label{eq:approx} \mu_n G_E^n/G_M^n \simeq \frac{2}{3} ~(1 - \mu_p G_E^p/G_M^p),$$ which works remarkably well from low up to very high $Q^2$ and is actually insensitive to the choice of the parameters (insert of Fig. \[fig:gegmneut\]).
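The approximate relation above can be sanity-checked at its two anchor points. A small sketch (ours; the function name is our own):

```python
def neutron_ratio(proton_ratio):
    """Approximate relation: mu_n G_E^n/G_M^n ~ (2/3) (1 - mu_p G_E^p/G_M^p)."""
    return (2.0 / 3.0) * (1.0 - proton_ratio)

# At Q^2 = 0 the proton ratio mu_p G_E^p/G_M^p equals 1 (since G_E^p(0) = 1
# and G_M^p(0) = mu_p), and the relation correctly gives a vanishing neutron
# ratio, consistent with G_E^n(0) = 0.
assert neutron_ratio(1.0) == 0.0

# At the predicted zero crossing of the proton ratio (Q^2 ~ 8.1 GeV^2) the
# relation puts mu_n G_E^n/G_M^n near 2/3.
assert abs(neutron_ratio(0.0) - 2.0 / 3.0) < 1e-12
```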
In conclusion, we would like to mention that the internal dynamics of the nucleon are much more complex than we have presented in this work. First of all, it is interesting to examine the effect of nonvalence Fock states [@KG]: $$\Psi_N = \left( \begin{array}{c} \Psi(3q) \\ \Psi(3q + q \bar q) \end{array} \right)$$ reflecting $q \bar q$ fluctuations of the constituent quarks. This question is closely related to the possible role of the mesonic cloud. Very useful discussions with V.I. Kukulin are gratefully acknowledged. This work was supported by the Deutsche Forschungsgemeinschaft under contracts Gr1084/3, He2171/3 and GRK683. [99]{} B.D. Milbrath [*et al.*]{}, Phys. Rev. Lett. [**82**]{}, 2221 (1999). M.K. Jones [*et al.*]{}, Phys. Rev. Lett. [**84**]{}, 1398 (2000). O. Gayou [*et al.*]{}, Phys. Rev. [**C64**]{}, 038202 (2001). O. Gayou [*et al.*]{}, Phys. Rev. Lett. [**88**]{}, 092301 (2002). E.J. Brash [*et al.*]{}, Phys. Rev. [**C65**]{}, 051001(R) (2001). A.W. Thomas and W. Weise, [*The structure of the nucleon*]{}, WILEY-VCH Verlag Berlin GmbH, Berlin, 2001. M.R. Frank, B.K. Jennings and G.A. Miller, Phys. Rev. [**C54**]{}, 920 (1998). A.W. Thomas, S. Théberge and G.A. Miller, Phys. Rev. [**D24**]{}, 216 (1981); S. Théberge and A.W. Thomas, Nucl. Phys. [**A393**]{}, 252 (1983). A. Chodos [*et al.*]{}, Phys. Rev. [**D9**]{}, 341 (1974); [*ibid.*]{} [**D10**]{}, 2599 (1974); T.A. DeGrand [*et al.*]{}, [*ibid.*]{} [**D12**]{}, 2060 (1975). D.H. Lu, A.W. Thomas and A.G. Williams, Phys. Rev. [**C57**]{}, 2628 (1998); D.H. Lu, S.N. Yang and A.W. Thomas, Nucl. Phys. [**A684**]{}, 296 (2001); J. Phys. [**G26**]{}, L75 (2000). G.A. Miller, Phys. Rev. [**C66**]{}, 032201(R) (2002). L.Ya. Glozman and D.O. Riska, Phys. Rep. [**268**]{}, 263 (1996). S. Boffi [*et al.*]{}, Eur. Phys. J. [**A14**]{}, 17 (2002); L.Ya. Glozman [*et al.*]{}, Phys. Lett. [**B516**]{}, 183 (2001). F. Cardarelli and S. Simula, Phys. Rev. [**C62**]{}, 065201 (2000). G.A. Miller and M.R. Frank, Phys.
Rev. [**C65**]{}, 065205 (2002). N. Isgur and G. Karl, Phys. Rev. [**D18**]{}, 4187 (1978); [**D19**]{}, 2653 (1979); [**D20**]{}, 1191 (1979); [**D21**]{}, 3175 (1980). P. Geiger and N. Isgur, Phys. Rev. Lett. [**67**]{}, 1066 (1991); Phys. Rev. [**D41**]{}, 1595 (1990); [*ibid.*]{} [**D44**]{}, 799 (1991). M.M. Kaskulov, V.I. Kukulin and P. Grabmayr, nucl-th/0212097. P. Grabmayr and A.J. Buchmann, Phys. Rev. Lett. [**86**]{}, 2237 (2001). F.J. Ernst, R.G. Sachs and K.C. Wali, Phys. Rev. [**119**]{}, 1105 (1960); R.G. Sachs, Phys. Rev. [**126**]{}, 2256 (1962). R.F. Wagenbrunn [*et al.*]{}, Phys. Lett. [**B511**]{}, 33 (2001). F. Cardarelli [*et al.*]{}, Nucl. Phys. [**A623**]{}, 362 (1997). A.L. Licht and A. Pagnamenta, Phys. Rev. [**D2**]{}, 1156 (1970); [*ibid.*]{} [**D2**]{}, 1150 (1970). F. Gross and D.O. Riska, Phys. Rev. [**C36**]{}, 1928 (1987). A.J. Buchmann, Z. Naturforsch. A [**52**]{}, 877 (1997). M. De Sanctis [*et al.*]{}, Phys. Rev. [**C62**]{}, 025208 (2000). G.P. Lepage and S.J. Brodsky, Phys. Rev. [**D22**]{}, 2157 (1980). N. Isgur, Phys. Rev. Lett. [**83**]{}, 272 (1999). M.M. Kaskulov and P. Grabmayr, in preparation.
--- abstract: 'This paper establishes a new combinatorial framework for the study of coarse median spaces, bridging the worlds of asymptotic geometry, algebra and combinatorics. We introduce a simple and entirely algebraic notion of coarse median algebra, which simultaneously generalises the concepts of bounded geometry coarse median spaces and classical discrete median algebras. In particular we prove that the metric on a quasi-geodesic coarse median space of bounded geometry can be constructed up to quasi-isometry using only the coarse median operator. We study the coarse median universe from the perspective of intervals, with a particular focus on cardinality as a proxy for distance. We develop a concept of rank for coarse median algebras in terms of the geometry of intervals and show that both geometric and algebraic notions of rank naturally provide higher analogues of Gromov’s concept of $\delta$-hyperbolicity.' address: 'School of Mathematical Sciences, University of Southampton, Highfield, SO17 1BJ, United Kingdom.' author: - Graham Niblo - Nick Wright - Jiawen Zhang bibliography: - 'bibfileCMA.bib' title: 'Coarse median algebras: The intrinsic geometry of coarse median spaces and their intervals' --- [^1] Introduction ============ Gromov’s notion of a CAT(0) cubical complex has played a significant role in major results in topology, geometry and group theory. Its power stems from the beautiful interplay between the non-positively curved geometry of the space and the median algebra structure supported on the vertices, as outlined by Roller, [@roller1998poc]. Coarse median spaces as introduced by Bowditch [@bowditch2013coarse] provide a geometric coarsening of CAT(0) cube complexes which additionally includes $\delta$-hyperbolic spaces, mapping class groups, and hierarchically hyperbolic groups [@behrstock2017hierarchically; @behrstock2015hierarchically]. 
The interaction between the geometry and combinatorics of a CAT(0) cube complex is mediated by the fact that the edge metric can be computed entirely in terms of the median. In contrast, for a coarse median space the metric is an essential part of the data, as evidenced by the fact that almost any ternary algebra can be made into a coarse median space by equipping it with a bounded metric. This prompts the question of the extent to which there could be a combinatorial characterisation of coarse medians mirroring the notion of a median algebra. We will provide the missing combinatorial framework by defining coarse median algebras. First we recall the definition of a coarse median space given by Bowditch: \[Bowditch original def\] A *coarse median space* is a triple $(X,d,\mu)$, where $(X,d)$ is a metric space and $\mu$ is a ternary operator on $X$ satisfying the following: - For all $a,b\in X$, $\mu(a,a,b)=a$; - For all $a,b,c\in X$, $\mu(a,b,c)=\mu(a,c,b)=\mu(b,a,c)$; - There are constants $k, h(0)$, such that for all $a,b,c,a',b',c'\in X$ we have $$d(\mu(a,b,c),\mu(a',b',c'))\leq k\left(d(a,a')+d(b,b')+d(c,c')\right) + h(0).$$ - There is a function $h:\mathbb N\rightarrow \Rp$ with the following property. Suppose that $A\subseteq X$ with $1\leq |A| \leq p < \infty$; then there is a finite median algebra $(\Pi, \mu_\Pi)$ and maps $\pi:A\rightarrow \Pi$ and $\lambda:\Pi\rightarrow X$, such that for all $x,y,z\in \Pi$ we have $$d\big(\lambda(\langle x,y,z\rangle_\Pi), \langle{\lambda(x)}, {\lambda(y)}, {\lambda(z)}\rangle\big) \leq h(p),$$ and for all $a \in A$ we have $$d(a, \lambda\pi(a))\leq h(p).$$ The metric plays the crucial role of measuring and controlling the extent to which the ternary operator (the coarse median) approximates a classical median operator.
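The exact case underlying this definition is easy to probe computationally. The following sketch (ours, not from the paper) verifies the median algebra axioms (M1), (M2) and the $5$-point identity (M3) for the coordinatewise "majority vote" median on $\mathbb Z^3$; with the $\ell^1$ metric one can check that this operator satisfies Bowditch's estimates with $k=1$ and $h$ identically zero:

```python
import random

def mu(a, b, c):
    """Coordinatewise majority vote: the median of three integers in each slot."""
    return tuple(sorted(t)[1] for t in zip(a, b, c))

random.seed(0)
pts = [tuple(random.randint(-5, 5) for _ in range(3)) for _ in range(100)]

for _ in range(1000):
    a, b, c, d, e = (random.choice(pts) for _ in range(5))
    assert mu(a, a, b) == a                                        # (M1)
    assert mu(a, b, c) == mu(b, a, c) == mu(a, c, b)               # (M2)
    # (M3), the 5-point identity -- exact here, only coarse in general:
    assert mu(a, b, mu(c, d, e)) == mu(mu(a, b, c), mu(a, b, d), e)
```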
Our observation is that the additional metric data can be replaced by the structure of the intervals in the space which are intrinsic to the median operator: the cardinality of intervals serves as a proxy for distance.[^2] \[interval\] Let $(X,\mu)$ be a ternary algebra. For any $a,b\in X$, *the interval $[a,b]$* is the set $\{\mu(a,x,b)\mid x\in X\}$. We say that $(X,\mu)$ has *finite intervals* if for every $a,b\in X$ the interval $[a,b]$ is a finite set. \[cmadef\] A *coarse median algebra* is a ternary algebra $(X,\mu)$ with finite intervals such that: - For all $a,b\in X$, $\mu(a,a,b)=a$; - For all $a,b,c\in X$, $\mu(a,b,c)=\mu(a,c,b)=\mu(b,a,c)$; - There exists a constant $K\geq 0$ such that for all $a,b,c,d,e\in X$ the cardinality of the interval $\big[\mu(a,b,\mu(c,d,e)),\, \mu(\mu(a,b,c),\mu(a,b,d),e)\big]$ is at most $K$. Putting $K=1$ in the definition reduces (M3)’ to the classical 5-point condition $\mu(a,b,\mu(c,d,e))= \mu(\mu(a,b,c),\mu(a,b,d),e)$ defining a median operator, so Definition \[cmadef\] generalises the notion of discrete median algebra. Moreover, as we will see, any bounded geometry coarse median space is a coarse median algebra. Indeed we have the following equivalence: \[unique metric prop\] [ Let $(X,\mu)$ be a bounded valency ternary algebra. Then $(X, \mu)$ admits a metric $d$ such that $(X,d, \mu)$ is a bounded geometry coarse median space *if and only if* $(X,\mu)$ is a coarse median algebra. ]{} (Bounded valency is a combinatorial condition that mimics bounded geometry, and generalises the notion of bounded valency for a graph, see Definition \[bounded valency def\].) As an application of these ideas we show that for any bounded geometry quasi-geodesic coarse median space the metric is uniquely determined by the median operator up to quasi-isometry. \[uniquemetricthm\]\[bi-lip equi\] For a bounded geometry quasi-geodesic coarse median space $(X,d,\mu)$, the metric $d$ is unique up to quasi-isometry. 
Moreover, within this equivalence class of metrics there is a canonical representative $d_\mu$ defined purely in terms of the coarse median operator $\mu$. As well as providing a relatively simple characterisation of a coarse median operator, our combinatorial approach provides a new perspective on the notion of rank in the coarse median world. We provide three new ways to characterise rank, each of which is a higher rank analogue of one of the classical characterisations of $\delta$-hyperbolicity:

\begin{tabular}{l|l}
\textbf{Hyperbolic spaces} & \textbf{Coarse median spaces/algebras of rank $n$}\\
\hline
approximating finite subsets by trees & approximating finite subsets by CAT(0) cube complexes of dimension $n$ [@bowditch2013coarse]\\
Gromov’s inner product (“thin squares”) condition & thin $(n+1)$-cubes condition: Theorem \[hyper rank\] (3) and Lemma \[cma rank lemma\]\\
slim triangle condition & $(n+1)$-multi-median condition: Theorem \[hyper rank\] (2)\\
pencils of quasi-geodesics grow linearly & interval growth is $o(n+1)$: Theorem \[growth rank\]
\end{tabular}

The paper is organized as follows. In Section \[preliminaries\], we recall background definitions including coarse median spaces, their ranks and Špakula & Wright’s notion of iterated coarse median operators. In Section \[coarseintervalstructures\], by analogy with Sholander’s results for median algebras and interval structures [@sholander1954medians], we give a characterisation of coarse median spaces entirely in terms of their intervals. In Section \[growth section\] we introduce and study characterisations of rank in the context of coarse interval structures and show that for coarse median spaces, the correspondences from Section \[coarseintervalstructures\] preserve rank. In Section \[coarsemedianalgebras\] we study the intrinsic metric on a ternary algebra and show that it is unique up to quasi-isometry for any quasi-geodesic coarse median space of bounded geometry.
Motivated by this, in Section \[coarsemedianalg\] we study the geometry of coarse median algebras. We establish that these simultaneously generalise the notions of: 1. Classical discrete median algebras, 2. Quasi-geodesic hyperbolic spaces of bounded geometry, 3. Bounded geometry coarse median spaces. The correspondences established in this paper can also be couched as correspondences between, or equivalences of, suitable categories, and in the Appendix we examine the notion of morphism and the definitions of the functors required by that approach. Preliminaries ============= We follow the conventions established in [@niblo2017four]. Metrics and geodesics --------------------- Let $(X,d)$ be a metric space. 1. A subset $A \subseteq X$ is *bounded* if its diameter $\diam (A):=\sup\{d(x,y):x,y\in A\}$ is finite; $A$ is a *net* in $X$ if there exists some constant $C>0$ such that for any $x\in X$, there exists some $a\in A$ such that $d(a,x) \leqslant C$. 2. The metric space $(X,d)$ is said to be *uniformly discrete* if there exists a constant $C>0$ such that for any $x \neq y \in X$, $d(x,y)>C$. 3. The metric space $(X,d)$ is said to have *bounded geometry* if, for any $r>0$, there exists some constant $n \in \mathbb N$ such that $\card~B(x,r) \leqslant n$ for any $x\in X$. 4. Points $x,y\in X$ are said to be *$s$-close* (with respect to the metric $d$) if $d(x,y)\leqslant s$. If $x$ is $s$-close to $y$, we write $x\thicksim_s y$. Maps $f,g:X\to Y$ are said to be *$s$-close*, written $f\thicksim_s g$, if for all $x\in X$, $f(x)\thicksim_s g(x)$. Let $(X,d), (Y,d')$ be metric spaces and $L,C>0$ be constants. 1. An *$(L,C)$-large scale Lipschitz map* from $(X,d)$ to $(Y,d')$ is a map $f:X\rightarrow Y$ such that for any $x,x'\in X$, $d'(f(x),f(x'))\leqslant Ld(x,x')+C$. 2.
An *$(L,C)$-quasi-isometry* from $(X,d)$ to $(Y, d')$ is an $(L,C)$-large scale Lipschitz map $f:X\rightarrow Y$ such that there exists another $(L,C)$-large scale Lipschitz map $g: Y \to X$ with $f\circ g\thicksim_C \id_Y$, $g\circ f\thicksim_C \id_X$. 3. $(X,d)$ is said to be *$(L,C)$-quasi-geodesic*, if for any two points $x,y\in X$, there exists a map $\gamma \colon [0,d(x,y)] \rightarrow X$ with $\gamma(0)=x$, $\gamma(d(x,y))=y$, satisfying: for any $s,t\in [0,d(x,y)]$, $$L^{-1}|s-t|-C \leqslant d(\gamma(s),\gamma(t)) \leqslant L|s-t|+C.$$ If we do not care about the constant $C$ we say that $(X,d)$ is *$L$-quasi-geodesic*. If $(X,d)$ is $(1,0)$-quasi-geodesic then we say that $X$ is *geodesic*. When considering integer-valued metrics we make the same definitions restricting the intervals to intervals in $\mathbb Z$. We will take the liberty of omitting the parameters $L,C$ where their values are not germane to the discussion. Let $(X,d), (Y,d')$ be metric spaces, $\rho: \Rp \to \Rp$ a proper function and $C>0$ a constant. 1. A *$\rho$-bornologous map* from $(X,d)$ to $(Y,d')$ is a function $f:X\rightarrow Y$ such that for all $x,x'\in X$, $d'(f(x), f(x')) \leqslant \rho(d(x,x'))$. 2. $f$ is *proper* if given any bounded subset $B \subseteq Y$, $f^{-1}(B)$ is bounded. 3. A *$\rho$-coarse map* from $(X,d)$ to $(Y,d')$ is a proper $\rho$-bornologous map. 4. A *$(\rho,C)$-coarse equivalence* from $(X,d)$ to $(Y, d')$ is a $\rho$-coarse map $f:X\rightarrow Y$ such that there exists another $\rho$-coarse map $g: Y \to X$ with $f\circ g\thicksim_C \id_Y$, $g\circ f\thicksim_C \id_X$. In this case, $g$ is called a $(\rho,C)$-*coarse inverse* of $f$. When the parameters $\rho,C$ are not germane to the discussion we omit them. Median Algebras {#medalg} --------------- As discussed in [@bandelt1983median] there are a number of equivalent formulations of the axioms for median algebras.
We will use the following formulation from [@birkhoff1947ternary]: \[median algebra defn\] Let $X$ be a set and $\mu$ a ternary operation on $X$. Then $\mu$ is a *median operator* and the pair $(X,\mu)$ is a *median algebra* if the following are satisfied: - $\mu(a,a,b)=a$; - $\mu(a_1,a_2,a_3)=\langle a_{\sigma(1)},a_{\sigma(2)},a_{\sigma(3)}\rangle$, where $\sigma$ is any permutation of $\{1,2,3\}$; - $\mu(a,b,\mu(c,d,e))= \mu(\mu(a,b,c),\mu(a,b,d),e)$. Axiom (M3) is equivalent to the $4$-point condition given in [@kolibiar1974question], see also [@bandelt2008metric]: $$\label{four point} \mu(\mu(a,b,c),b,d)=\mu(a,b, \mu(c,b,d)).$$ This can be viewed as an associativity axiom: For each $b\in X$ the binary operator $$(a,c)\mapsto a*_b c:=\mu(a,b,c)$$ is associative. It is also commutative by (M2). \[mediancube\] An important example is furnished by the *median $n$-cube*, denoted by $I^n$, which is the $n$-dimensional vector space over $\mathbb Z_2$ with the median operator $\mu_n$ given by majority vote on each coordinate. Coarse median spaces {#def of cms} -------------------- In [@niblo2017four] we showed how to replace Bowditch’s original definition of a coarse median space (Definition \[Bowditch original def\]) in terms of a $4$-point condition mirroring the classical $4$-point condition for median algebras. This may also be viewed as an analogue of Gromov’s $4$-point condition for hyperbolicity, and the other approximations then follow for free. \[coarse median operator\] A *coarse median* on a metric space $(X,d)$ is a ternary operator $\mu$ on $X$ satisfying the following: - There is a constant $\kappao >0$ such that for all points $a_1,a_2,a_3$ in $X$, $\mu(a_1,a_1,a_2)\thicksim_{\kappao} a_1$, and $\langle a_{\sigma(1)},a_{\sigma(2)},a_{\sigma(3)}\rangle \thicksim_{\kappao} \mu(a_1,a_2,a_3)$ for any permutation $\sigma$ of $\{1,2,3\}$. 
- For $b,c\in X$ the map $$a\mapsto \mu(a,b,c)$$ is *bornologous uniformly* in $b,c$, that is, there exists a function $\rho:\Rp\to \Rp$ such that for all $a,a',b,c\in X$, $$d(\mu(a,b,c), \mu(a',b,c)) \leqslant \rho(d(a,a'));$$ - There exists a constant $\kappaiv>0$ such that for any $a,b,c,d\in X$, we have $$\mu(\mu(a,b,c),b,d) \thicksim_{\kappaiv} \mu(a,b, \mu(c,b,d)).$$ It follows directly from (C0) and (C1) that for any $a,b,c,a',b',c' \in X$, we have $$d(\mu(a,b,c), \mu(a',b',c')) \leqslant \rho(d(a,a'))+\rho(d(b,b'))+\rho(d(c,c'))+4\kappa_0.$$ Without loss of generality, $\rho$ can be taken to be increasing, in which case it follows that $$\label{C1 est} d(\mu(a,b,c), \mu(a',b',c')) \leqslant \rho'(d(a,a')+d(b,b')+d(c,c'))$$ where $\rho'=3\rho+4\kappa_0$. From now on, enlarging $\rho$, we can assume inequality (\[C1 est\]) holds in place of the one in axiom (C1). As in [@niblo2017four] we have replaced the large-scale Lipschitz condition in Bowditch’s original definition of coarse medians by a bornologous condition. In the most common applications, where the space is quasi-geodesic, these conditions coincide, and since many of the desired outcomes are essentially coarse geometric it is natural to make this generalisation. \[M1M2 remark\] As remarked by Bowditch [@bowditch2013coarse], any coarse median $\mu$ is *uniformly close* to a coarse median $\mu'$ satisfying the localisation and symmetry conditions (M1), (M2) of Definition \[median algebra defn\]. A triple $(X,d,\mu)$ is a *coarse median space* if the pair $(X,d)$ is a metric space and $\mu$ satisfies axioms (M1), (M2), (C1) and (C2).
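To see how axiom (C2) tolerates bounded error, consider the following toy operator on $\mathbb Z$ (our illustration, not from the paper): the exact median perturbed by a bounded, symmetric term. The exact $4$-point identity fails, but only by a uniformly bounded amount, since the exact median is $1$-Lipschitz in each variable:

```python
import random

def med(a, b, c):
    """Exact median of three integers."""
    return sorted((a, b, c))[1]

def mu(a, b, c):
    """Median plus a bounded perturbation, symmetric in its arguments.

    This is only a *coarse* median: e.g. mu(0, 0, 1) = 1, so (M1) holds
    only up to error 1.
    """
    return med(a, b, c) + (a + b + c) % 2

random.seed(1)
kappa4 = 0
for _ in range(2000):
    a, b, c, d = (random.randint(-50, 50) for _ in range(4))
    lhs = mu(mu(a, b, c), b, d)
    rhs = mu(a, b, mu(c, b, d))
    kappa4 = max(kappa4, abs(lhs - rhs))

# each perturbation is at most 1 and the exact identity is 1-Lipschitz in
# each slot, so the defect is bounded by 4
assert kappa4 <= 4
# the identity genuinely fails: a concrete defect of size 2
assert abs(mu(mu(0, 0, 1), 0, 2) - mu(0, 0, mu(1, 0, 2))) == 2
```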
In the same way that axiom (M3) for a median algebra is equivalent to the $4$-point condition (\[four point\]), in a coarse median space, there exists a constant $\kappav>0$ such that for any five points $a,b,c,d,e\in X$, $$\label{five point estimate} \mu(a,b,\mu(c,d,e)) \thicksim_\kappav \mu(\mu(a,b,c),\mu(a,b,d),e).$$ The constant $\kappav$ depends only on the parameters $\rho, \kappaiv$; however, it is convenient to carry it with us in calculations. With this in mind we make the following definition. We define the *parameters* for a coarse median space $(X,d,\mu)$ to be any 3-tuple $(\rho, \kappaiv, \kappav)$ of constants satisfying the axioms in Definition \[coarse median operator\] together with estimate (\[five point estimate\]). In the (quasi-)geodesic case, $\rho$ in (C1) can be chosen as: $\rho(t)=Kt+H_0$ for some constants $K,H_0>0$; hence in this case, we also refer to the 4-tuple $(K,H_0,\kappaiv, \kappav)$ as parameters of $(X,d,\mu)$. Rank for a coarse median space ------------------------------ As in the case of median algebras, there is a notion of *rank* for a coarse median space. In terms of Bowditch’s original definition of coarse medians, the rank is simply the least upper bound on the ranks of the required approximating median algebras, and generalising the large scale Lipschitz condition to (C1), one can retain this definition of rank in our context. First recall that for coarse median spaces $(X,d_X,\mu_X)$ and $(Y,d_Y,\mu_Y)$, a map $f:X \to Y$ is a *$C$-quasi-morphism* for some $C>0$ if for all $a,b,c\in X$, $\langle f(a),f(b),f(c)\rangle_Y\thicksim_C f(\mu_X(a,b,c))$. Using the formulation of coarse median given in Definition \[coarse median operator\] (which only indirectly implies the existence of approximations for all finite subsets by median algebras) the following characterisation of rank is more useful. \[char for high rank-final\] Let $(X,d,\mu)$ be a coarse median space and $n\in \mathbb{N}$.
Then the following conditions are equivalent. 1. $\rank X \leqslant n$; 2. For any $\lambda>0$, there exists a constant $C=C(\lambda)$ such that for any $a,b\in X$, any $e_1,\ldots,e_{n+1}\in[a,b]$ with $\langle e_i,a,e_j\rangle\thicksim_\lambda a$ ($i\neq j$), one of the points $e_i$ is $C$-close to $a$; 3. For any $L>0$, there exists a constant $C=C(L)$ such that for any $L$-quasi-morphism $\sigma$ from the median $(n+1)$-cube $I^{n+1}$ to $X$, the image $\sigma(\bar{e}_i)$ of one of the cube vertices $\bar{e}_i$ adjacent to the origin $\bar \0$ is $C$-close to the image $\sigma(\bar \0)$. While this theorem was proved in the context of Bowditch’s more restrictive notion of coarse median, the proof still applies in the current generality. We also need the following notion of coarse median isomorphisms when we characterise rank via interval growth in Section \[growth section\]. \[cms isom\] Let $(X,d_X)$, $(Y,d_Y)$ be metric spaces and $\mu_X$, $\mu_Y$ be coarse medians on them, respectively. A map $f:X \to Y$ is called a $(\rho,C)$-*coarse median isomorphism* for some proper function $\rho: \mathbb{R}^+ \to \mathbb{R}^+$ and constant $C>0$, if $f$ is a $(\rho,C)$-coarse equivalence as well as a $C$-quasi-morphism. There is a nice categorical explanation of this terminology given in Appendix \[The coarse median (space) category\]. As shown in Remark \[dep of para for coarse median iso\], for a $(\rho_+,C)$-coarse median isomorphism $f$, any $(\rho_+,C)$-coarse inverse $g$ is a $C'$-quasi-morphism with the constant $C'$ depending only on $\rho_+,C$ and the parameters of $X,Y$. In this case, we will also refer to $g$ as an *inverse* of $f$. Iterated coarse medians ----------------------- We recall the following definition from [@vspakula2017coarse]: \[coarseiteratedmediandefn\] Let $(X,d,\mu)$ be a coarse median space and $b\in X$.
For $x_1\in X$ define $$\mu(x_1;b):=x_1,$$ and for $k \geqslant 1$ and $x_1,\ldots,x_{k+1} \in X$, define the *coarse iterated median* $$\mu(x_1,\ldots,x_{k+1};b):=\mu(\mu(x_1,\ldots,x_k;b),x_{k+1},b).$$ Note that this definition “agrees” with the original coarse median operator $\mu$ in the sense that for any $a,b,c$ in $X$, $\mu(a,b,c)=\mu(a,b;c)$. In [@niblo2017four] we established the following estimates: \[coarse iterated estimate\] Let $(X,d,\mu)$ be a coarse median space with parameters $(\rho, \kappaiv, \kappav)$. Then for any $a_0,a_1,\ldots,a_n;b_0,b_1,\ldots,b_n \in X$, there exist functions $\rho_n, H_n:\Rp\to \Rp$ and constants $C_n, D_n$ depending only on $\rho, \kappaiv, \kappav$, and satisfying: 1. $d(\mu(a_1,\ldots,a_n;a_0),\mu(b_1,\ldots,b_n;b_0)) \leqslant \rho_n (\sum_{k=0}^n d(a_k,b_k))$. 2. Let $(\Pi,\mu_\Pi)$ be a median algebra, and $\sigma \colon \Pi \rightarrow X$ an $L$-quasi-morphism (to recall the notion, see Definition \[coarse median category\] below). For any $x_1,\ldots,x_n,b\in\Pi$, $$\sigma(\langle x_1,\ldots,x_n;b\rangle_\Pi)\thicksim_{H_n(L)}\langle\sigma(x_1),\ldots,\sigma(x_n);\sigma(b)\rangle.$$ 3. $\mu(a,b,\mu(a_1,\ldots,a_{n-1};a_n))\thicksim_{C_n}\mu(\mu(a,b,a_1),\ldots,\mu(a,b,a_{n-1});a_n)$. 4. $\mu(a,b,\mu(a_1,\ldots,a_{n-1};a_n))\thicksim_{D_n}\mu(\mu(a,b,a_1),\ldots,\mu(a,b,a_{n-1});\mu(a,b,a_n))$. Here we provide additional estimates that will give us the control we need later to analyse the structure of coarse cubes in Section \[growth\]. \[coarse iterated estimate new1\] Let $(X,d,\mu)$ be a coarse median space with parameters $(\rho, \kappaiv, \kappav)$. Then for any $n\in \mathbb N$, there exists a constant $G_n$ depending only on $\rho, \kappaiv, \kappav$ such that for any $a_1,\ldots,a_n,b\in X$ and any permutation $\sigma\in S_n$, $$\langle a_{\sigma(1)},\ldots,a_{\sigma(n)};b\rangle \thicksim_{G_n} \mu(a_1,\ldots,a_n;b).$$ We proceed by induction on $n$. 
When $n=1$ or $2$, we may take $G_1=G_2=0$ by definition and axiom (M2). Now assume that the result holds for $1,2,\dots,n-1$, and consider the case of $n$. As usual it is sufficient to prove the lemma when $\sigma$ is a transposition of the form $(1j)$. If $j<n$ then by definition, we have $$\mu( a_1,\ldots, a_n; b )=\langle \langle a_1,\ldots,a_j;b\rangle,a_{j+1},\ldots,a_n;b \rangle.$$ Inductively $\langle a_1,\ldots,a_j;b\rangle \thicksim_{G_j} \langle a_j,a_2,\ldots,a_{j-1},a_1;b\rangle$ and the result follows by Lemma \[coarse iterated estimate\] (1). It remains to check the case $\sigma=(1n)$. By the inductive step, we have $$\begin{aligned} &&\mu(a_n,a_2,\ldots, a_{n-1},a_1;b)=\mu(\mu(a_n,a_2,\ldots,a_{n-1};b), a_1, b )\\ &\thicksim_{\rho(G_{n-1})}& \mu(\mu(a_2,\ldots,a_{n-1},a_n;b), a_1, b )= \mu( \mu(\mu(a_2,\ldots,a_{n-1};b), a_n , b), a_1, b )\\ &\thicksim_{\kappaiv}&\mu( \mu(\mu(a_2,\ldots,a_{n-1};b),a_1,b),a_n,b )=\mu( \mu(a_2,\ldots,a_{n-1},a_1;b),a_n,b )\\ &\thicksim_{\rho(G_{n-1})}& \mu( \mu(a_1,a_2,\ldots,a_{n-1};b),a_n,b )= \mu(a_1,a_2,\ldots,a_n;b).\end{aligned}$$ Hence for the transposition $(1n)$, we have $$\mu(a_n,a_2,\ldots, a_{n-1},a_1;b) \thicksim_{2\rho(G_{n-1})+\kappaiv}\mu(a_1,a_2,\ldots,a_n;b).$$ This completes the proof. \[coarse iterated estimate new2\] Let $(X,d,\mu)$ be a coarse median space with parameters $(\rho, \kappaiv, \kappav)$. Then for any $1\leqslant k\leqslant n$, there exists a constant $E(k,n)$ depending only on $\rho, \kappaiv, \kappav$ such that for any $a_1,\ldots,a_n,b \in X$, $$\mu(a_1,\ldots,a_k;\mu(a_1,\ldots,a_n;b))\thicksim_{E(k,n)}\mu(a_1,\ldots,a_k;b).$$ In particular, when we take $k=n$ and $E_n=E(n,n)$, we have $$\mu(a_1,\ldots,a_n;\mu(a_1,\ldots,a_n;b))\thicksim_{E_n}\mu(a_1,\ldots,a_n;b).$$ We proceed by induction on $k$. When $k=1$, by definition, we have $$\mu(a_1;\mu(a_1,\ldots,a_n;b))=a_1=\mu(a_1;b).$$ Hence we may take $E(1,n)=0$ for all $n\geqslant1$. Now take $k=2$. 
For $n=2$ we have $$\mu(a_1,a_2;\mu(a_1,a_2;b))\thicksim_{\kappaiv}\mu(a_1,a_2;b),$$ hence we may take $E(2,2)=\kappaiv$. Now for $n\geqslant 3$, by Lemma \[coarse iterated estimate\](3), there exists a constant $C_n$ depending only on $\rho, \kappaiv, \kappav$ such that $$\label{eqn1} \mu(a_1,a_2;\mu(a_1,\ldots,a_n;b))\thicksim_{C_n} \mu(a_1,a_2,\mu(a_1,a_2,a_3),\ldots,\mu(a_1,a_2,a_n);b).$$ We now prove, by induction on $n$, that there exists a constant $F_n$ depending only on $\rho, \kappaiv, \kappav$ such that for any $a_1,\ldots,a_n,b \in X$, $$\label{eqn2} \mu(a_1,a_2,\mu(a_1,a_2,a_3),\ldots,\mu(a_1,a_2,a_n);b) \thicksim_{F_n} \mu(a_1,a_2,b).$$ When $n=3$, we have $$\mu(a_1,a_2,\mu(a_1,a_2,a_3);b)=\mu( \mu(a_1,a_2,b),\mu(a_1,a_2,a_3),b )\thicksim_{\kappaiv}\mu(a_1,a_2,\mu(b,a_3,b))=\mu(a_1,a_2,b).$$ Hence we may take $F_3=\kappaiv$. Now assume we have found a constant $F_{n-1}$ depending only on $\rho, \kappaiv, \kappav$ such that $$\mu(a_1,a_2,\mu(a_1,a_2,a_3),\ldots,\mu(a_1,a_2,a_{n-1});b) \thicksim_{F_{n-1}} \mu(a_1,a_2,b).$$ Then we have $$\begin{aligned} &&\mu(a_1,a_2,\mu(a_1,a_2,a_3),\ldots,\mu(a_1,a_2,a_n);b)\\ &=&\mu( \mu(a_1,a_2,\mu(a_1,a_2,a_3),\ldots,\mu(a_1,a_2,a_{n-1});b),\mu(a_1,a_2,a_n),b )\\ &\thicksim_{\rho(F_{n-1})}& \mu( \mu(a_1,a_2,b), \mu(a_1,a_2,a_n), b ) \thicksim_{\kappaiv} \mu(a_1,a_2,b).\end{aligned}$$ Hence we may take $F_n=\rho(F_{n-1})+\kappaiv$. Now combining estimates (\[eqn1\]) and (\[eqn2\]): $$\mu(a_1,a_2;\mu(a_1,\ldots,a_n;b))\thicksim_{C_n+F_n}\mu(a_1,a_2,b).$$ Hence we may take $E(2,n)=C_n+F_n$ for $n\geqslant 3$. 
This completes the case $k=2$ and we proceed to the induction step: Assume that for $k-1$ and for each $n\geqslant k-1$, there exists a constant $E(k-1,n)$ satisfying $$\mu(a_1,\ldots,a_{k-1};\mu(a_1,\ldots,a_n;b))\thicksim_{E(k-1,n)}\mu(a_1,\ldots,a_{k-1};b).$$ Then for $k$ and $n \geqslant k$, we have $$\begin{aligned} &&\mu(a_1,\ldots,a_k;\mu(a_1,\ldots,a_n;b))\\ &=&\mu( \mu(a_1,\ldots,a_{k-1};\mu(a_1,\ldots,a_n;b)) , a_k, \mu(a_1,\ldots,a_n;b) )\\ &\thicksim_{\rho(E(k-1,n))}& \mu( \mu(a_1,\ldots,a_{k-1};b), a_k, \mu(a_1,\ldots,a_n;b) )\\ &\thicksim_{C_{k-1}}& \mu( \mu(a_1,a_k,\mu(a_1,\ldots,a_n;b)), \ldots, \mu(a_{k-1},a_k,\mu(a_1,\ldots,a_n;b));b ),\end{aligned}$$ by Lemma \[coarse iterated estimate\](3). Now by Lemma \[coarse iterated estimate new1\] and the case of $k=2$, for each $i=1,\ldots,k-1$ we have $$\begin{aligned} \mu(a_i,a_k,\mu(a_1,\ldots,a_n;b)) &\thicksim_{\rho(G_n)}&\mu(a_i,a_k,\mu(a_i,a_k,a_1,\ldots,a_{i-1},a_{i+1},\ldots,a_{k-1},a_{k+1},\ldots,a_n;b))\\ &\thicksim_{E(2,n)}&\mu(a_i,a_k,b).\end{aligned}$$ Hence by Lemma \[coarse iterated estimate\](1), taking $\alpha(k,n)=\rho_{k-1}((k-1)(\rho(G_n)+E(2,n)))$, we have $$\begin{aligned} &&\mu( \mu(a_1,a_k,\mu(a_1,\ldots,a_n;b)), \ldots, \mu(a_{k-1},a_k,\mu(a_1,\ldots,a_n;b));b )\\ &\thicksim_{\alpha(k,n)}& \mu( \mu(a_1,a_k,b),\ldots,\mu(a_{k-1},a_k,b);b )\\ &\thicksim_{C_{k-1}}& \mu(\mu(a_1,\ldots,a_{k-1};b),a_k,b)=\mu(a_1,\ldots,a_k;b).\end{aligned}$$ Hence we may take $$\begin{aligned} E(k,n)&=&\rho(E(k-1,n))+C_{k-1}+\alpha(k,n)+C_{k-1}\\ &=&\rho(E(k-1,n))+\rho_{k-1}((k-1)(\rho(G_n)+E(2,n)))+2C_{k-1}\end{aligned}$$ and the lemma holds. Coarse interval structures {#coarseintervalstructures} ========================== Sholander studied the relation between intervals and median operators, and we will generalise this approach to the coarse context. 
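Before passing to the coarse setting it is worth recording the classical picture concretely (our illustration, not from the paper). In $\mathbb Z^2$ with the coordinatewise median, the interval $[a,b]$ of Definition \[interval\] is the discrete box spanned by $a$ and $b$: the map $x\mapsto \mu(a,x,b)$ clamps each coordinate into the corresponding range, so a finite search box containing the span already produces the whole interval. Its cardinality then dominates the $\ell^1$ distance, the simplest instance of "cardinality as a proxy for distance":

```python
import itertools

def mu(a, b, c):
    """Coordinatewise median on Z^2."""
    return tuple(sorted(t)[1] for t in zip(a, b, c))

def interval(a, b, candidates):
    """The set { mu(a, x, b) : x in candidates }."""
    return {mu(a, x, b) for x in candidates}

a, b = (0, 0), (2, 3)
# a search region strictly larger than the box spanned by a and b; the median
# clamps every candidate into that box, so this recovers the full interval
search = itertools.product(range(-2, 5), repeat=2)
I = interval(a, b, search)

assert I == {(i, j) for i in range(3) for j in range(4)}
assert len(I) == (abs(a[0] - b[0]) + 1) * (abs(a[1] - b[1]) + 1)  # 12 points
l1 = sum(abs(p - q) for p, q in zip(a, b))                         # distance 5
assert len(I) >= l1 + 1
```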
Classically Sholander defined the interval between $a,b$ in a median algebra $(X,\mu)$ to be the set $\{c:\mu(a,c,b)=c\}$, which, in the context of median algebras, agrees with our definition of interval (Definition \[interval\]) since, for any $c=\mu(a,x,b)\in [a,b]$, we have $$\mu(a,c,b)=\mu(c,a,b)=\mu(\mu(x,a,b),a,b)= \mu(x,\mu(a,b,a),b)=\mu(x,a,b)=c.$$ Of course the two definitions of interval do not necessarily coincide in the coarse context. \[sholander\] For every median algebra $(X,\mu)$, the binary operation $[\ ,\ ]: X\times X\rightarrow \mathcal P(X)$ defined by $(a,b)\mapsto [a,b]$ has the following properties: - $[a,a] =\{a\}$, - if $c\in [a,b]$ then $[a,c]\subseteq [b,a]$, - $[a,b]\cap [b,c]\cap [c,a]$ has cardinality $1$. Conversely, every operation $X^2 \rightarrow \mathcal P(X)$ with the preceding properties induces a ternary operator $\mu'$, where $\mu'(a,b,c)$ is the unique point in $[a,b]\cap [b,c]\cap [c,a]$, and $(X,\mu')$ is a median algebra. In this section we will provide a coarse analogue of Sholander’s theorem. We start by introducing the notion of a coarse interval space. \[coarse interval\] Let $(X,d,\mu)$ be a coarse median space with parameters $\rho,\kappaiv,\kappav$. Then the map $[\cdot,\cdot]: X^2 \rightarrow \mathcal{P}(X)$ defined by $(a,b) \mapsto [a,b]=\{\mu(a,x,b)\mid x\in X\}$ satisfies the following: - For all $a,b\in X$, $[a,a]=\{a\}$, $[a,b]=[b,a]$; - There exists a non-decreasing function $\f: \Rp \rightarrow \Rp$ such that for any $a,b\in X$ and $c\in \mathcal{N}_R([a,b])$, we have $[a,c] \subseteq \mathcal{N}_{\f(R)}([a,b])$; - There exists a non-decreasing function $\g: \Rp \rightarrow \Rp$ such that for any $a,b,c\in X$, we have $[a,b] \cap [b,c] \cap [c,a] \neq \emptyset$, and $$\diam( \mathcal{N}_R([a,b]) \cap \mathcal{N}_R([b,c]) \cap \mathcal{N}_R([c,a]) ) \leqslant \g(R).$$ Property (I1) follows directly from axioms (M1) and (M2) for a coarse median space.
For (I2), since $c\in\mathcal{N}_R([a,b])$, there exists $x\in X$ such that $c \thicksim_R \mu(a,b,x)$. Now for any $y\in X$, by axioms (C1) and (C2), we have $$\langle a,c,y\rangle \thicksim_{\rho(R)}\langle a,\mu(a,b,x),y\rangle \thicksim_{\kappaiv} \langle a,b,\langle a,x,y\rangle\rangle,$$ which implies $\langle a,c,y\rangle \in \mathcal{N}_{\rho(R)+\kappaiv}([a,b])$. So we can take $\f(R)=\rho(R)+\kappaiv$, and (I2) holds. For (I3), we know that $\mu(a,b,c) \in [a,b] \cap [b,c] \cap [c,a]$ so the intersection is non-empty. Furthermore, given a point $z\in \mathcal{N}_R([a,b]) \cap \mathcal{N}_R([b,c]) \cap \mathcal{N}_R([c,a])$, there exists $w\in X$ such that $z \thicksim_R \mu(a,b,w)$. So by (C1) and (C2), we have $$\mu(a,b,z) \thicksim_{\rho(R)} \mu(a,b,\mu(a,b,w)) \thicksim_\kappaiv \mu(\mu(a,b,a),b,w)=\mu(a,b,w) \thicksim_R z.$$ The same argument applies to the pairs $b,c$ and $c,a$. In summary, we obtain that $$\mu(a,b,z) \thicksim_{\kappa'} z,\quad \mu(b,c,z) \thicksim_{\kappa'} z,\quad \mu(c,a,z) \thicksim_{\kappa'} z,$$ where $\kappa':=\rho(R)+R+\kappaiv =\f(R)+R$. Combining with (C1) and (\[five point estimate\]), we obtain $$\begin{aligned} z & \thicksim_{\kappa'} & \mu(c,a,z) \thicksim_{\rho(\kappa')} \mu(c,a,\mu(b,c,z)) \thicksim_{\kappaiv} \mu(\mu(c,a,b),c,z)\\ & = & \mu(\mu(a,b,c),c,z) \thicksim_{\rho(\kappa')} \mu(\mu(a,b,c),c,\mu(a,b,z)) \thicksim_{\kappaiv} \mu(a,b,\mu(c,c,\mu(a,b,z)))\\ & = & \mu(a,b,c).\end{aligned}$$ The above estimate implies that the diameter of $\mathcal{N}_R([a,b]) \cap \mathcal{N}_R([b,c]) \cap \mathcal{N}_R([c,a])$ is bounded by $$\g(R)=4\rho(\kappa')+2\kappa'+4\kappaiv=4\rho(\rho(R)+R+\kappaiv)+2\rho(R)+2R+6\kappaiv.$$ This completes the proof. With this in mind, we define the concept of coarse interval spaces as follows. Let $(X,d)$ be a metric space, and $\I=[\cdot,\cdot]: X^2 \rightarrow \mathcal{P}(X)$ be a map satisfying (I1)$\sim$(I3) in Proposition \[coarse interval\].
Then $(X,d,\I)$ is called a *coarse interval space*. The functions $\f,\g$ in the conditions are called *parameters* for $\I$. As with the notion of a coarse median space, the parameters are not uniquely defined and are not part of the data. It is only their existence that is required. \[induced ternary operator\] Given a coarse median space $(X,d,\mu)$, we define a map $\mathcal I:X^2\rightarrow \mathcal P(X)$ by $\mathcal I(a,b)=[a,b]$. By Proposition \[coarse interval\], the triple $(X,d,\mathcal I)$ is a coarse interval space. We say that this is the *coarse interval space induced by $(X,d,\mu)$*. On the other hand, suppose we are given a coarse interval space $(X,d,\I)$. By axiom (I3), for any $a,b,c\in X$, we can always choose a point in $[a,b] \cap [b,c] \cap [c,a]$, denoted by $\mu(a,b,c)$, which is invariant under any permutation of $\{a,b,c\}$. Making such a choice for all $a,b,c$ gives us a ternary operator $\mu$ on $X$ satisfying (M1) and (M2), called the *induced (ternary) operator* of $\mathcal{I}$. Note that by axiom (I3), $\mu$ is uniquely determined up to bounded error. Our proof that the induced ternary operator is a coarse median operator on $X$ is inspired by Sholander’s argument in [@sholander1954medians], though more care needs to be taken with the estimates introduced by the coarse conditions. For clarity we divide the proof into several lemmas. \[C1\] Let $(X,d,\I)$ be a coarse interval space and $\mu$ be the induced operator. Given parameters $\f,\g$ for $\I$, for any $a,a',b,c\in X$ we have $$d(\mu(a,b,c), \mu(a',b,c)) \leqslant \g(\f(d(a,a'))).$$ In particular, axiom (C1) holds for $(X,d,\mu)$ with $\rho=\g\circ \f$. Set $R=d(a,a')$; then $a' \in \mathcal{N}_R([a,b])$ and $a' \in \mathcal{N}_R([c,a])$.
By (I1), (I2), we have $$[a',b] \subseteq \mathcal{N}_{\f(R)}([a,b])\quad\mbox{and}\quad [c, a'] \subseteq \mathcal{N}_{\f(R)}([c,a]).$$ Hence, $$\mu(a',b,c) \in [a',b] \cap [b,c] \cap [c,a'] \subseteq \mathcal{N}_{\f(R)}([a,b]) \cap \mathcal{N}_{\f(R)}([b,c]) \cap \mathcal{N}_{\f(R)}([c,a]).$$ Combined with (I3), we obtain that $\mu(a',b,c) \thicksim_{\g(\f(R))} \mu(a,b,c)$. CONVENTION: Following this lemma, given parameters $\f, \g$ we will fix the function $\rho:=3\g\circ \f$, so that $d(\mu(a,b,c), \mu(a',b',c')) \leqslant \rho(d(a,a')+d(b,b')+d(c,c')).$ We now turn our attention to axiom (C2). Fix a coarse interval space $(X,d,\I)$ with parameters $\f,\g$ and the induced operator $\mu$. We begin with the following elementary lemma, which can be deduced directly from the definition. \[estm1\] If $c \thicksim_R \mu(a,b,c)$, then $c\in \mathcal{N}_R([a,b])$; conversely, if $c\in \mathcal{N}_R([a,b])$, then $c \thicksim_{\g(R)} \mu(a,b,c)$ for any $a,b,c\in X$. The following estimates are a little less obvious. \[estmA\] If $b\in \mathcal{N}_{R_1}([a,c])$ and $c\in \mathcal{N}_{R_2}([a,d])$, then $c\in \mathcal{N}_{h(R_1,R_2)}([b,d])$ where $h(R_1,R_2)=\g(R_2)+\g(\f(R_1+\f(R_2)))$. Since $b\in \mathcal{N}_{R_1}([a,c])$, axioms (I1) and (I2) imply that $[b,c] \subseteq \mathcal{N}_{\f(R_1)}([a,c])$. Since $c\in \mathcal{N}_{R_2}([a,d])$, again by (I2), we have $[a,c] \subseteq \mathcal{N}_{\f(R_2)}([a,d])$. Hence $b \in \mathcal{N}_{R_1}([a,c]) \subseteq \mathcal{N}_{R_1+\f(R_2)}([a,d])$, and consequently $[b,d] \subseteq \mathcal{N}_{\f(R_1+\f(R_2))}([a,d])$ by axioms (I1) and (I2). Combining these with axiom (I3), we have $$\mu(b,c,d)\in [b,c]\cap[c,d]\cap[d,b] \subseteq \mathcal{N}_{\f(R_1)}([a,c]) \cap [c,d] \cap \mathcal{N}_{\f(R_1+\f(R_2))}([a,d]),$$ which implies $\mu(b,c,d) \thicksim_{\g(\f(R_1+\f(R_2)))} \mu(a,c,d) \thicksim_{\g(R_2)} c$ (we use Lemma \[estm1\] in the second estimate since $c\in \mathcal{N}_{R_2}([a,d])$). So the conclusion holds.
\[corA\] If the Hausdorff distance satisfies $d_H([a,b],[a,c]) \leqslant R$, then $d(b,c)\leqslant h(R,R)$. By assumption, $b\in \mathcal{N}_{R}([a,c])$ and $c\in \mathcal{N}_{R}([a,b])$. Now putting $d:=b$ and applying Lemma \[estmA\], we have $c\in \mathcal{N}_{h(R,R)}([b,b])$. Since $[b,b]=\{b\}$ by axiom (I1), we have $d(b,c)\leqslant h(R,R)$. \[estmB\] For any $a,b,c,d\in X$, we have $\mu(a,\mu(a,c,d),\mu(b,c,d)) \thicksim_{\kappa''} \mu(a,c,d)$, where $\kappa''=\g(\f(0)+\g\f^2(0))$. Setting $x=\mu(b,c,d)$, we consider $m=\mu(a,\mu(a,x,c),d)\in [a,\mu(a,x,c)] \subseteq \mathcal{N}_{\f(0)}([a,x])$. Taking $y=\mu(a,x,c)=\mu(a,\mu(b,c,d),c)\in [a,c]$, we have $[a,y] \subseteq \mathcal{N}_{\f(0)}([a,c])$ by (I2), which implies $m \in \mathcal{N}_{\f(0)}([a,c])$. Again by (I2), $y \in [c,\mu(b,c,d)] \subseteq \mathcal{N}_{\f(0)}([c,d])$, so $m\in [y,d] \subseteq \mathcal{N}_{\f^2(0)}([c,d])$. Combining these, we have $$m\in \mathcal{N}_{\f(0)}([a,c]) \cap \mathcal{N}_{\f^2(0)}([c,d]) \cap [a,d],$$ which implies $\mu(a,c,d)\thicksim_{\g(\f^2(0))}m$ by (I3). Hence $\mu(a,c,d) \in \mathcal N_{\f(0)+\g\f^2(0)}([a,x])$. Finally, by Lemma \[estm1\] we have $\mu(a,\mu(a,c,d),x)\thicksim_{\g(\f(0)+\g\f^2(0))} \mu(a,c,d)$. From now on, let us fix the constant $\kappa''=\g(\f(0)+\g\f^2(0))$. \[estmC\] For any $R_1,R_2>0$, there exists a constant $\lambda(R_1,R_2)>0$ such that for any $b\in \mathcal{N}_{R_1}([a,c]) \cap \mathcal{N}_{R_2}([a,d])$ and $x\in [c,d]$, we have $b\in \mathcal{N}_{\lambda(R_1,R_2)}([a,x])$. In particular, taking $x=\mu(a,c,d)$, we have: $$\mathcal{N}_{R_1}([a,c]) \cap \mathcal{N}_{R_2}([a,d]) \subseteq \mathcal{N}_{\lambda(R_1,R_2)}([a,\mu(a,c,d)]).$$ Since $b\in \mathcal{N}_{R_1}([a,c])$, by Lemmas \[C1\] and \[estmB\], we have $$\mu(d,\mu(a,c,d),b) \thicksim_{\rho(\g(R_1))} \mu(d,\mu(a,c,d),\mu(a,b,c)) \thicksim_{\kappa''} \mu(a,c,d),$$ which implies $\mu(a,c,d)\in \mathcal{N}_{\rho(\g(R_1))+\kappa''}([b,d])$.
Together with $b\in \mathcal{N}_{R_2}([a,d])$ and Lemma \[estmA\], we have $b\in \mathcal{N}_{h(\rho(\g(R_1))+\kappa'',R_2)}([a,\mu(a,c,d)])$. On the other hand, since $x\in [c,d]$, by Lemmas \[C1\], \[estm1\] and \[estmB\], we have: $$\mu(a,\mu(a,c,d),x) \thicksim_{\rho(\g(0))} \mu(a,\mu(a,c,d),\mu(x,c,d)) \thicksim_{\kappa''} \mu(a,c,d),$$ which implies $\mu(a,c,d)\in \mathcal{N}_{\rho(\g(0))+\kappa''}([a,x])$. So $[a,\mu(a,c,d)] \subseteq \mathcal{N}_{\f(\rho(\g(0))+\kappa'')}([a,x])$. Combining these, we have: $$b\in \mathcal{N}_{h(\rho(\g(R_1))+\kappa'',R_2)+\f(\rho(\g(0))+\kappa'')}([a,x]).$$ Now taking $$\lambda(R_1,R_2)=h(\rho(\g(R_1))+\kappa'',R_2)+\f(\rho(\g(0))+\kappa''),$$ the lemma holds. Finally, we can prove the following theorem. \[coarse interval converse\] Let $(X,d,\I)$ be a coarse interval space with induced operator $\mu$. Then $(X,d,\mu)$ is a coarse median space. It only remains to verify (C2). In other words, we need to find a constant $\kappa$ such that for any $a,b,c,d\in X$, $$\mu( \mu(a,b,c),b,d ) \thicksim_\kappa \mu( a,b,\mu(c,b,d) ).$$ By axiom (I2) and Lemma \[estmC\] we have: $$\begin{aligned} [b,\mu( \mu(a,b,c),b,d )] &\subseteq& \mathcal{N}_{\f(0)}([b,\mu(a,b,c)]) \cap \mathcal{N}_{\f(0)}([b,d]) \\ &\subseteq& \mathcal{N}_{\f^2(0)}([b,a]) \cap \mathcal{N}_{\f^2(0)}([b,c]) \cap \mathcal{N}_{\f(0)}([b,d])\\ &\subseteq& \mathcal{N}_{\f^2(0)}([b,a]) \cap \mathcal{N}_{\lambda(\f^2(0),\f(0))}([b,\mu(b,c,d)])\\ &\subseteq& \mathcal{N}_{\lambda(\f^2(0),\lambda(\f^2(0),\f(0)))}([b,\mu(a,b,\mu(b,c,d))]).\end{aligned}$$ Similarly, we have $$[b,\mu(a,b,\mu(b,c,d))] \subseteq \mathcal{N}_{\lambda(\f^2(0),\lambda(\f^2(0),\f(0)))}([b,\mu( \mu(a,b,c),b,d )]).$$ The above two estimates imply: $$d_H([b,\mu( \mu(a,b,c),b,d )], [b, \mu(a,b,\mu(c,b,d ))]) \leqslant \lambda(\f^2(0),\lambda(\f^2(0),\f(0))).$$ Finally, by Corollary \[corA\], we get $$\mu( \mu(a,b,c),b,d ) \thicksim_\kappa \mu(a,b,\mu(c,b,d ))$$ for
$\kappa=h(\lambda(\f^2(0),\lambda(\f^2(0),\f(0))),\lambda(\f^2(0),\lambda(\f^2(0),\f(0))))$. Analogous to relaxing axioms (M1) and (M2) for a coarse median operator to axiom (C0), we consider the following notion of a coarse interval structure. Let $(X,d)$ be a metric space, and $\I$ a map $[\cdot,\cdot]: X^2 \rightarrow \mathcal{P}(X)$. $\I$ is called a *coarse interval structure* on $(X,d)$ if there exists a constant $\kappao >0$ such that the following conditions hold: - For all $a,b\in X$, $d_H([a,a], \{a\}) \leqslant \kappao$, $d_H([a,b],[b,a])\leqslant \kappao$; - There exists a non-decreasing function $\f: \Rp \rightarrow \Rp$ such that for any $R\geqslant 0$, $a,b\in X$ and $c\in \mathcal{N}_R([a,b])$, we have $[a,c] \subseteq \mathcal{N}_{\f(R)}([a,b])$; - There exists a non-decreasing function $\g: [\kappao,+\infty) \rightarrow \Rp$ such that for any $a,b,c\in X$, we have $\N_\kappao([a,b]) \cap \N_\kappao([b,c]) \cap \N_\kappao([c,a]) \neq \emptyset$, and for any $R \geqslant \kappao$, $\diam( \mathcal{N}_R([a,b]) \cap \mathcal{N}_R([b,c]) \cap \mathcal{N}_R([c,a]) ) \leqslant \g(R)$. The constant $\kappao$ and functions $\f,\g$ in the conditions are called *parameters* for $\I$. \[ends close to intervals\] By (I1)’, for any point $a$, the interval $[a,a]$ lies in $B(a,\kappao)$. By (I3)’ the intersection $\N_\kappao([a,a]) \cap \N_\kappao([a,b])$ must be non-empty for all $b$, so, as $\N_\kappao([a,a])$ lies in $B(a,2\kappao)$, it follows that $a$ must lie in $\N_{3\kappao}([a,b])$. Similarly $b\in\N_{3\kappao}([a,b])$. To simplify notation, when we consider different coarse interval structures on different spaces, we will use $[\cdot,\cdot]$ to denote intervals in both spaces, since the points make clear which space is meant. When we consider different coarse interval structures on the same space, we will add an index to distinguish them.
For example, if $\I,\I'$ are two different coarse interval structures on $X$, we use $[\cdot,\cdot],[\cdot,\cdot]'$ to denote the intervals, respectively. Recall that a coarse median is always uniformly close to another coarse median satisfying axioms (M1) and (M2). Similarly, we will show that a coarse interval structure is always “close” to another satisfying (I1)$\sim$(I3) in the following sense. Let $(X,d)$ be a metric space and $\I,\I'$ be two coarse interval structures on it. We say that they are *uniformly close* if there exists a constant $C>0$ such that $d_H([x,y],[x,y]')\leqslant C$ for any $x,y\in X$. \[coarse interval close\] Let $(X,d)$ be a metric space, and $\I$ be a coarse interval structure on it. Then there exists another coarse interval structure $\I'$ on $(X,d)$ which is uniformly close to $\I$ and satisfies axioms (I1)$\sim$(I3). We define ‘fattened’ intervals: $$[a,b]':= \N_\kappao([a,b])\cup \N_\kappao([b,a])\cup \{a,b\}$$ for $a\neq b$, and define $[a,a]':=\{a\}$. It is easy to see from (I1)’ that $[a,a]'=\{a\}$ is (uniformly) close to $[a,a]$ and that $\N_\kappao([a,b])\cup \N_\kappao([b,a])$ is close to $[a,b]$. By Remark \[ends close to intervals\], the points $a,b$ are also close to $[a,b]$, hence $[a,b]'$ is close to $[a,b]$. By construction, $[\cdot,\cdot]'$ satisfies (I1), and clearly it still satisfies (I2). The fattening of the intervals ensures that $[a,b]'\cap[b,c]'\cap[c,a]'$ is non-empty for $a,b,c$ distinct, by (I3)’. Now taking repeated points, $[a,b]'\cap[b,b]'\cap[b,a]'=\{b\}$ by construction. Hence $[a,b]'\cap[b,c]'\cap[c,a]'$ is non-empty in all cases. Since $[\cdot,\cdot]'$ is uniformly close to $[\cdot,\cdot]$, the intersection $ \mathcal{N}_R([a,b]') \cap \mathcal{N}_R([b,c]') \cap \mathcal{N}_R([c,a]')$ has bounded diameter by (I3)’. This establishes (I3) for the new definition of intervals $\I'$. Adapting the arguments we made above, we have the following correspondence between coarse medians and coarse interval structures.
\[induce cm and ci\] Let $(X,d)$ be a metric space. 1. Given a coarse median $\mu$ on $(X,d)$, the induced $\I$ defined in Proposition \[coarse interval\] is a coarse interval structure, called the *induced coarse interval structure*; 2. Suppose $\I$ is a coarse interval structure on $(X,d)$ with parameters $\kappao,\f,\g$. For any $a,b,c\in X$, choose a point in $\N_\kappao([a,b]) \cap \N_\kappao([b,c]) \cap \N_\kappao([c,a])$, denoted by $\mu(a,b,c)$. Making such a choice gives us a coarse median $\mu$ on $X$, called the *induced coarse median operator*. Rank, generalised hyperbolicity and interval growth {#growth section} =================================================== Generalised hyperbolicity for higher rank coarse median spaces -------------------------------------------------------------- Here we will provide the following characterisations of rank for a coarse median space. \[hyper rank\] Let $(X,d,\mu)$ be a coarse median space and $n \in \mathbb N$, then the following are equivalent: 1) $\rank X \leqslant n$. 2) *Multi-median condition:* There exists a non-decreasing function $\psi$ such that for any $\lambda>0$ and any $x_1,\ldots,x_{n+1},q \in X$, we have $$\bigcap_{i \neq j} \N_\lambda([x_i,x_j]) \subseteq \bigcup_{i=1}^{n+1} \N_{\psi(\lambda)}([x_i,q]).$$ 3) *Thin $(n\!+\!1)$-cubes condition:* There exists a non-decreasing function $\varphi$, such that $$\min\{d(p,\langle x_i,p,q\rangle):i=1,\ldots,n+1\} \leqslant \varphi(\max\{d(p,\langle x_i,x_j,p\rangle): i\neq j\})$$ for any $x_1,\ldots,x_{n+1};p,q \in X$. As Bowditch showed in [@bowditch2013coarse], a geodesic coarse median space has rank $1$ *if and only if* it is hyperbolic, and it is instructive to consider conditions 2) and 3) above in that context. 
Here, condition 2) reduces to a version of the generalised slim triangles condition abstracted from classical hyperbolic geometry, while condition 3) reduces to the Gromov inequality (see Equation (\[gromov product generalised\]) below) motivated by the geometry of trees. From this perspective, Theorem \[hyper rank\] provides higher rank analogues of these two characterisations. To be more precise, recall that in [@niblo2017four] we established: \[hyperbolic char\] For a coarse median space $(X,d,\mu)$, the following are equivalent: 1) $\rank X \leqslant 1$; 2) There exists a constant $\delta>0$ such that for any $a,b,c \in X$, we have $$[a,c] \subseteq \N_\delta([a,b]) \cup \N_\delta([b,c]).$$ We also showed in [@niblo2017four] that the intervals in a rank 1 geodesic coarse median space are uniformly close to geodesics, so Theorem \[hyperbolic char\] is a version of the slim triangles condition for hyperbolicity. Clearly Theorem \[hyper rank\] generalises this, providing a higher rank analogue of the slim triangles condition which holds even in the non-geodesic context. The closeness of geodesics and intervals is a unique (and not *a priori* obvious) feature of the rank 1 case. Combining this fact with Proposition \[coarse interval\], we deduce that any geodesic metric space admits at most one coarse median of rank one, up to uniformly bounded distance. This is *not* true in higher rank; see [@zeidler2013coarse Example 2.2.8]. Turning now to Gromov’s inner product, we recall the definition.
Fixing a base point $p$ in a metric space $(X,d)$, for $a,b\in X$ we set $$(a|b)_p:=\frac{1}{2}[d(a,p)+d(b,p)-d(a,b)].$$ A geodesic metric space $(X,d)$ is Gromov hyperbolic *if and only if* there exists some constant $\delta>0$ such that the following inequality holds for any $a,b,c,p \in X$: $$\label{gromov inequality} \min\{(a|b)_p,(b|c)_p\} \leqslant (a|c)_p + \delta.$$ We note that condition (\[gromov inequality\]) can be relaxed to a coarse condition that is still strong enough to characterise hyperbolicity: Let $(X,d)$ be a geodesic metric space; then $X$ is hyperbolic *if and only if* there exists some non-decreasing function $\varphi$ such that for any $a,b,c,p\in X$, $$\label{gromov product generalised} \min\{(a|b)_p,(b|c)_p\} \leqslant \varphi((a|c)_p).$$ Consider a geodesic triangle with vertices $x,y,z$, and points $i_x\in [y,z]$, $i_y \in [x,z]$ and $i_z\in [x,y]$ with $d(x,i_z)=d(x,i_y)$, $d(y,i_x)=d(y,i_z)$ and $d(z,i_x)=d(z,i_y)$. To show that the space is hyperbolic, it suffices to obtain a uniform bound on the diameter of the set $\{i_x, i_y, i_z\}$ [@gromov1987hyperbolic]. Since $i_x \in [y,z]$, we have $2(y|z)_{i_x}=d(y,i_x)+d(z,i_x)-d(y,z)=0$. Applying Inequality (\[gromov product generalised\]) to $x,y,z;i_x$, we obtain that $$\min\{(x|y)_{i_x},(x|z)_{i_x}\} \leqslant \varphi((y|z)_{i_x}) \leqslant \varphi(0).$$ By direct calculation: $$d(y,i_x)-d(x,y) = d(y,i_x)-d(y,i_z)-d(i_z,x) = -d(i_z,x),$$ and $$d(z,i_x)-d(x,z) = d(z,i_x)-d(z,i_y)-d(i_y,x) = -d(i_y,x).$$ Since $d(i_z,x)=d(i_y,x)$, we have $$\label{EQ1} 0\leqslant d(x,i_x)-d(x,i_z)=d(x,i_x)-d(x,i_y) \leqslant 2\varphi(0).$$ Replacing $x$ with $y$ or $z$ yields two analogous inequalities. In particular, we have $d(x,i_z)+d(z,i_z)-d(x,z)=d(z,i_z)-d(z,i_y) \leqslant 2\varphi(0)$.
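As a computational illustration (not part of the proof), metric trees satisfy Inequality (\[gromov inequality\]) with $\delta=0$. The following Python snippet, with our own naming conventions, checks this exhaustively on a tripod.

```python
# Tripod (star tree): a centre m and three leaves a, b, c at the given
# leg lengths; every geodesic between distinct leaves passes through m.
legs = {"a": 3.0, "b": 4.0, "c": 5.0}

def d(x, y):
    if x == y:
        return 0.0
    return (0.0 if x == "m" else legs[x]) + (0.0 if y == "m" else legs[y])

# Gromov product (x|y)_p = (d(x,p) + d(y,p) - d(x,y)) / 2.
def gp(x, y, p):
    return (d(x, p) + d(y, p) - d(x, y)) / 2

# Metric trees are 0-hyperbolic: min{(x|y)_p, (y|z)_p} <= (x|z)_p.
pts = ["a", "b", "c", "m"]
for p in pts:
    for x in pts:
        for y in pts:
            for z in pts:
                assert min(gp(x, y, p), gp(y, z, p)) <= gp(x, z, p)
```

In a tree the Gromov product $(x|y)_p$ is exactly the distance from $p$ to the geodesic $[x,y]$, which is the geometric picture behind the inequality.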
Hence applying Inequality (\[gromov product generalised\]) again to $x,z,i_x;i_z$, we have $$\label{EQ2} \min\{(x|i_x)_{i_z}, (z|i_x)_{i_z}\}\leqslant \varphi((x|z)_{i_z}) \leqslant \varphi(\varphi(0)).$$ On the other hand, by Inequality (\[EQ1\]), we have $d(x,i_z)+d(i_x,i_z)-d(x,i_x)\geqslant d(i_x,i_z)-2\varphi(0)$ and $d(z,i_z)+d(i_x,i_z)-d(z,i_x)\geqslant d(i_x,i_z)$. Combining with (\[EQ2\]), we have: $$d(i_x,i_z) \leqslant 2\varphi(\varphi(0))+2\varphi(0).$$ Similarly, we get the same estimates for $d(i_x,i_y)$ and $d(i_y,i_z)$, which implies $$\diam (\{i_x,i_y,i_z\}) \leqslant 2\varphi(\varphi(0))+2\varphi(0),$$ providing the required uniform bound. For a rank $1$ geodesic coarse median space $(X,d,\mu)$, there exists a constant $C>0$ such that for any $a,b,p \in X$, $(a|b)_p \thicksim_C d(p,\langle a,b,p\rangle)$. Hence, the coarse inequality (\[gromov product generalised\]) above can be rewritten to give the following characterisation of rank 1: $$\label{generalised gromov inequality} \min\{d(p,\mu(a,b,p)),d(p,\mu(b,c,p))\} \leqslant \varphi(d(p,\mu(a,c,p))),$$ which is the rank 1 case of Theorem \[hyper rank\] (3). So Theorem \[hyper rank\] provides a higher rank generalisation of the Gromov inner product characterisation of hyperbolicity. We now turn to the proof of our theorem. Assume $(\rho,\kappaiv)$ are parameters of $(X,d,\mu)$. *$3) \Rightarrow 2)$*: For any $p \in \cap_{i \neq j} \N_\lambda([x_i,x_j])$ and $i \neq j$, there exists $p' \in [x_i,x_j]$ such that $p\thicksim_\lambda p'$. So we have $$\langle x_i,p,x_j\rangle \thicksim_{\rho(\lambda)} \langle x_i,p',x_j\rangle \thicksim_{\kappaiv} p' \thicksim_\lambda p.$$ Hence by condition 3), there exists some $i=1,\ldots,n+1$ such that $$d(p,\mu(x_i,p,q))\leqslant \varphi(\rho(\lambda)+\lambda+\kappaiv).$$ Taking $\psi(\lambda)=\varphi(\rho(\lambda)+\lambda+\kappaiv)$, we have $p \in \N_{\psi(\lambda)}([x_i,q])$ as required.
*$2) \Rightarrow 3)$*: For any $p,q;x_1,\ldots,x_{n+1} \in X$, take $\xi=\max\{d(p,\langle x_i,x_j,p\rangle): i\neq j\}$. Then $p\thicksim_{\xi}\langle x_i,x_j,p\rangle \in [x_i,x_j]$. By condition 2), there exists some $i=1,\ldots,n+1$ such that $p \in \N_{\psi(\xi)}([x_i,q])$, i.e., there exists some $p' \in [x_i,q]$ such that $p \thicksim_{\psi(\xi)}p'$. Hence $$\mu(x_i,p,q) \thicksim_{\rho(\psi(\xi))} \mu(x_i,p',q) \thicksim_{\kappaiv}p' \thicksim_{\psi(\xi)} p.$$ Taking $\varphi(\xi)=\rho(\psi(\xi))+\psi(\xi)+\kappaiv$, we are done. *1) $\Rightarrow$ 3)*: Since the rank is at most $n$, by Theorem \[char for high rank-final\]: For any $\lambda>0$, there exists a constant $C=C(\lambda)$ such that for any $a,b\in X$, any $e_1,\ldots,e_{n+1}\in[a,b]$ with $\langle e_i,a,e_j\rangle\thicksim_\lambda a$ ($i\neq j$), one of the $e_i$’s is $C$-close to $a$. Set $\xi=\max\{d(p,\langle x_i,x_j,p\rangle): i\neq j\}$; then by the coarse 4-point axiom (C2) we have: $$\langle\langle x_i,p,q\rangle,p,\langle x_j,p,q\rangle\rangle \thicksim_\kappaiv \langle\langle x_i,x_j,p\rangle,p,q \rangle \thicksim_{\rho(\xi)}\langle p,p,q\rangle=p$$ for any $i \neq j$. Therefore, we have $$\min\{d(p,\mu(x_i,p,q)):i=1,\ldots,n+1\}\leqslant C(\rho(\xi)+\kappaiv).$$ Taking $\varphi(\xi)=C(\rho(\xi)+\kappaiv)$, we are done. *3) $\Rightarrow$ 1)*: Assume $e_1,\ldots,e_{n+1} \in [a,b]$ with $\langle e_i,a,e_j\rangle \thicksim_\lambda a$ for $i\neq j$. Condition 3) implies that $$\min\{d(a,\mu(e_i,a,b)): i=1,\ldots,n+1\} \leqslant \varphi(\lambda).$$ Since $e_i\in [a,b]$, we have $\mu(e_i,a,b)\thicksim_{\kappaiv} e_i$. Hence, $$\min\{d(a,e_i): i=1,\ldots,n+1\} \leqslant \varphi(\lambda)+\kappaiv.$$ Taking $C(\lambda)=\varphi(\lambda)+\kappaiv$, $(X,d,\mu)$ has rank at most $n$ by Theorem \[char for high rank-final\]. This suggests a natural notion of rank for coarse interval spaces as follows. Let $(X,d,\I)$ be a coarse interval space.
We say that *the rank of $(X,d,\I)$ is at most $n$* if there exists a non-decreasing function $\psi$ such that $$\bigcap_{i \neq j} \N_\lambda([x_i,x_j]) \subseteq \bigcup_{i=1}^{n+1} \N_{\psi(\lambda)}([x_i,q])$$ for any $\lambda>0$ and $x_1,\ldots,x_{n+1},q \in X$. Note that in the higher rank case ($n\geqslant 2$), the intersection on the left must be uniformly bounded by axiom (I3), and can be thought of as a generalised centroid of the points $x_1, \ldots, x_{n+1}$. So the condition asserts that the generalised centroid must be close to at least one of those coarse intervals. Combining this definition with Theorem \[induce cm and ci\], we obtain the following: \[rank preserving\] For a metric space, any coarse median of rank $n$ induces a coarse interval structure of rank $n$, and vice versa. Cubes in coarse median spaces {#cubesinCMA} ----------------------------- In this subsection we will provide a structure theorem which describes a coarse cube in a coarse median space as a product of coarse intervals. It will play a key role in our characterisation of finite rank coarse median spaces in terms of the growth of coarse intervals. Recall that median cubes are the fundamental building blocks for median algebras. Equipping the median $n$-cube $(I^n,\mu_n)$ with the $\ell^1$-metric $d_{\ell^1}$ makes it a coarse median space $(I^n, d_{\ell^1}, \mu_n)$. \[coarsecubedef\] An *$L$-coarse cube* of rank $n$ in a coarse median space $(X,d,\mu)$ is an $L$-quasi-morphism $c$ from $(I^n, d_{\ell^1}, \mu_n)$ to $(X,d,\mu)$. An *edge* in an $L$-coarse cube $c$ is a pair of points $c(\bar a), c(\bar b)$ in the image such that $\bar a, \bar b$ are adjacent vertices in the median cube. Two edges in an $L$-coarse cube $c$ are said to be *parallel* if there exist parallel edges in the median cube which map to them under $c$.
We will denote the origin of the median $n$-cube by $\bar{\0}$, the vertex diagonally opposite to $\bar{\0}$ by $\bar{\1}$ and the vertices adjacent to $\bar{\0}$ by $\bar{e}_1,\ldots,\bar{e}_n$. Given an $L$-coarse cube $c$, where there is no risk of confusion, we will denote the images of the vertices $\bar{\0},\bar{\1},\bar{e}_1,\ldots,\bar{e}_n$ under the map $c$ by $\0, \1, e_1, \ldots, e_n$ respectively. The convention that elements of the median cube are barred while their images are not corresponds to the view that the median cube is an approximation (in the sense of Bowditch, see Definition \[Bowditch original def\]) to the finite set of vertices $\0, \1, e_1, \ldots, e_n$. Note that in Definition \[coarsecubedef\] we do not impose any control on the distances between the points of the image, since we wish to allow cubes of arbitrarily large diameter. By analogy with Zeidler’s result in [@zeidler2013coarse], we have the following lemma, which controls the relationship between lengths of parallel edges in a coarse cube. Given an edge $e$ of length $d$ in an $L$-coarse cube $c$, all edges parallel to $e$ in $c$ have length bounded by $\rho(d)+2L$, where $\rho$ is a control function parameter for the coarse median. The proof is similar to that of [@zeidler2013coarse Lemma 2.4.5] and is therefore omitted. Given that the lengths of parallel edges are controlled while the lengths of “perpendicular” edges are not, it may be helpful to think of a coarse cube as a coarse cuboid. Given an interval $[a,b]$ in a coarse median space $(X, d, \mu)$, we may define a new ternary operator on $[a,b]$ by $\langle x,y,z \rangle_{a,b}:= \langle a,\langle x,y,z\rangle,b\rangle$. By [@niblo2017four Lemma 2.22], the triple $([a,b], d|_{[a,b]}, \mu_{a,b})$ is a coarse median space and $\mu\thicksim_C \mu_{a,b}$, where $C$ is independent of $a,b$.
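The identity $\langle\bar{e}_1,\ldots,\bar{e}_n;\bar{\1}\rangle_n=\bar{\1}$ in the canonical cube, which is used in the proof of Theorem \[productresult\] below, is easy to verify computationally. The following Python snippet (an illustration, not part of the paper; the function names are ours) checks it for $n=4$ with the coordinatewise majority median and the convention $\mu(a_1,\ldots,a_k;b)=\mu(\mu(a_1,\ldots,a_{k-1};b),a_k,b)$.

```python
n = 4

# Coordinatewise majority median mu_n on the median n-cube I^n = {0,1}^n.
def mu(a, b, c):
    return tuple(sorted(t)[1] for t in zip(a, b, c))

# Iterated median mu(a_1,...,a_k; b) = mu(mu(a_1,...,a_{k-1}; b), a_k, b).
def iterated_mu(points, b):
    m = points[0]
    for a in points[1:]:
        m = mu(m, a, b)
    return m

one = (1,) * n
basis = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

# The identity <e_1,...,e_n; 1> = 1 in the canonical cube.
assert iterated_mu(basis, one) == one

# l^1-metric on the cube: adjacent vertices are at distance 1.
def d1(a, b):
    return sum(abs(s - t) for s, t in zip(a, b))

assert d1((0,) * n, one) == n
```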
Given an $L$-coarse cube $f:I^n\rightarrow X$, define the following coarse median spaces: $$\mathcal A:=([\0,\1], d, \mu_{\0,\1}); \quad\mathcal B:=([\0,e_1]\times \ldots \times [\0, e_n], d_{\ell^1}, \mu_{\ell^1})$$ where $d_{\ell^1}$ denotes the $\ell^1$-product of the induced metrics on the intervals $[\0, e_i]$, and $\mu_{\ell^1}$ is defined by $\mu_{\ell^1}=\mu_{\0,e_1}\times \ldots \times \mu_{\0, e_n}$. Also define maps as follows: $$\left. \begin{array}{ccccc} \Phi:\mathcal A\rightarrow \mathcal B, &\quad& x & \mapsto & (\mu(\0,x,e_1), \ldots, \mu(\0,x,e_n));\\ \Psi:\mathcal B\rightarrow \mathcal A, &\quad&(x_1, \ldots, x_n)& \mapsto & \mu(\mu(x_1,\ldots, x_n;\1),\0,\1). \end{array} \right.$$ \[productresult\] Let $(X,d,\mu)$ be a coarse median space and $f:I^n\rightarrow X$ be an $L$-coarse cube of rank $n$ in $X$. Then the map $\Phi:\mathcal A \to\mathcal B$ defined above provides a $(\rho_+,C)$-coarse median isomorphism with inverse $\Psi$, where $\rho_+,C$ depend only on $n$, $L$ and the parameters of $(X,d,\mu)$. Assume $\rho,\kappaiv,\kappav$ are parameters of $(X,d,\mu)$. First we show that $\Phi,\Psi$ are bornologous. By axiom (C1), for any $x,y \in [\0,\1]$ we have: $$\begin{aligned} d_{\ell^1}(\Phi(x),\Phi(y)) = \sum_{k=1}^n d(\mu(\0,x,e_k),\mu(\0,y,e_k)) \leqslant \sum_{k=1}^n \rho(d(x,y)) = n\rho(d(x,y)),\end{aligned}$$ which implies $\Phi$ is $(n\rho)$-bornologous.
On the other hand, for any $\vec{x}=(x_1, \ldots, x_n)$ and $\vec{y}=(y_1, \ldots, y_n) \in [\0,e_1]\times \ldots \times [\0, e_n]$, by axiom (C1), we have: $$\begin{aligned} d(\Psi(\vec{x}),\Psi(\vec{y})) &=& d( \mu(\mu(x_1,\ldots, x_n;\1),\0,\1), \langle\langle y_1,\ldots, y_n;\1\rangle,\0,\1\rangle ) \\ &\leqslant& \rho(d(\mu(x_1,\ldots, x_n;\1),\langle y_1,\ldots, y_n;\1\rangle))\\ &\leqslant& \rho\circ \rho_n(\sum_{k=1}^n d(x_k,y_k)),\end{aligned}$$ where the last inequality follows from the control over iterated coarse medians provided by Lemma \[coarse iterated estimate\](1). This implies $\Psi$ is $(\rho\circ \rho_n)$-bornologous. Next we show that $\Phi$ is a quasi-morphism. For $x,y,z \in [\0,\1]$, $\mu(x,\0,\1)\thicksim_\kappaiv x$ and $\mu(y,\0,\1)\thicksim_\kappaiv y$. So by axiom (C1) and the estimate (\[five point estimate\]), we have $$\langle\langle x,y,z\rangle,\0,\1\rangle \thicksim_{\kappav}\langle\langle x,\0,\1\rangle,\langle y,\0,\1\rangle,z\rangle \thicksim_{\rho(2\kappaiv)}\langle x,y,z\rangle.$$ Applying the same argument again, denoting the projection from $[\0,e_1]\times \ldots \times [\0, e_n]$ onto the $i$-th coordinate by $pr_i$, we have: $$\begin{aligned} &&pr_i\circ \Phi (\langle x,y,z \rangle_{\0,\1}) = \langle\0, \langle\langle x,y,z\rangle,\0,\1\rangle, e_i \rangle \thicksim_{\rho(\rho(2\kappaiv)+\kappav)} \langle \0,\langle x,y,z\rangle,e_i\rangle\\ &\thicksim_\kappaiv& \langle \0, \langle\0,\langle x,y,z\rangle,e_i\rangle, e_i \rangle \thicksim_{\rho(\kappav)} \langle \0, \langle\langle\0,x,e_i\rangle,\langle\0,y,e_i\rangle,z\rangle, e_i \rangle \\ &\thicksim_\kappav& \langle\langle\0,x,e_i\rangle, \langle\0,\langle\0,y,e_i\rangle,e_i\rangle, \langle\0,z,e_i\rangle\rangle \thicksim_{\rho(\kappaiv)} \langle\langle\0,\langle\0,x,e_i\rangle,e_i\rangle, \langle\0,\langle\0,y,e_i\rangle,e_i\rangle, \langle\0,z,e_i\rangle\rangle\\ &\thicksim_{\kappav}& \langle\0, \langle\langle \0,x,e_i\rangle, \langle\0,y,e_i\rangle, \langle\0,z,e_i\rangle \rangle, e_i \rangle =pr_i(\langle\Phi(x),\Phi(y),\Phi(z)\rangle_{\ell^1}).\end{aligned}$$ Hence $\Phi$ is a $C'$-quasi-morphism for $C'=n[\rho(\rho(2\kappaiv)+\kappav)+\rho(\kappaiv)+\rho(\kappav)+\kappaiv+2\kappav]$. Note that in the canonical cube $I^n$, the iterated median $\langle\bar{e}_1, \ldots, \bar{e}_n;\bar{\1}\rangle_n=\bar{\1}$. It follows by Lemma \[coarse iterated estimate\](2) that there exists a constant $H_n(L)$ such that $$\mu(e_1, \ldots,e_n;\1)=\langle f(\bar{e}_1), \ldots, f(\bar{e}_n);f(\bar{\1})\rangle\thicksim_{H_n(L)}f(\langle\bar{e}_1, \ldots, \bar{e}_n;\bar{\1}\rangle_n)=f(\bar{\1})=\1.$$ Now by Lemma \[coarse iterated estimate\](3), there is a constant $C_n$ such that for any $x \in [\0,\1]$, we have $$\begin{aligned} \Psi \circ \Phi(x) &=& \mu(\mu(\mu(\0,x,e_1), \ldots, \mu(\0,x,e_n);\1),\0,\1)\\ &\thicksim_{\rho(C_n)}&\mu(\mu(\0,x,\mu(e_1,\ldots, e_n;\1)),\0,\1) \thicksim_{\rho^2(H_n(L))}\mu(\mu(\0,x,\1),\0,\1)\\ &\thicksim_{\kappaiv}& \mu(x,\0,\1)\thicksim_{\kappaiv} x.\end{aligned}$$ Hence $ \Psi \circ \Phi$ is $C''$-close to the identity on $\mathcal A$ for $C'':=\rho^2(H_n(L))+\rho(C_n)+2\kappaiv$. Since $f$ is an $L$-coarse median morphism, we have $\mu(\0,\1,e_i)\thicksim_{L} e_i$ and $\langle\0,e_i, e_j\rangle\thicksim_L \0$ for $i\neq j$. For any $\vec{x}=(x_1, \ldots, x_n) \in [\0,e_1]\times \ldots \times [\0, e_n]$, we have: $$\begin{aligned} pr_i \circ \Phi \circ \Psi (\vec{x}) &=& \mu(\0,\mu(\mu(x_1,\ldots,x_n;\1),\0,\1),e_i ) \thicksim_{\kappaiv} \mu(\0,\mu(x_1,\ldots,x_n;\1),\mu(\0,\1,e_i))\\ &\thicksim_{\rho(L)}& \mu(\0,\mu(x_1,\ldots,x_n;\1),e_i) \thicksim_{C_n} \mu(\mu(\0,e_i,x_1), \ldots, \mu(\0,e_i,x_n);\1 ),\end{aligned}$$ where the final estimate follows from Lemma \[coarse iterated estimate\](3).
Since $x_i\in [\0,e_i]$, we have $\mu(\0,x_i,e_i)\thicksim_\kappaiv x_i$; while for $j\not=i$, we have $$\langle\0,e_i, x_j\rangle\thicksim_{\rho(\kappaiv)} \langle\0,e_i, \langle\0,x_j,e_j\rangle\rangle\thicksim_{\kappaiv} \langle\0,\langle e_i, \0, e_j\rangle,x_j\rangle\thicksim_{\rho(L)} \0.$$ Hence applying Lemma \[coarse iterated estimate\](1), we obtain that $$\mu(\mu(\0,e_i,x_1), \ldots, \mu(\0,e_i,x_n);\1 )\thicksim_{C'''} \langle\underbrace{\0, \ldots, \0}_{i-1}, x_i, \underbrace{\0,\ldots ,\0}_{n-i};\1\rangle=\langle x_i,\underbrace{\0,\ldots ,\0}_{n-i+1};\1\rangle$$ for $C''':=\rho_n((n-1)(\rho(\kappaiv)+\kappaiv+\rho(L))+\kappaiv)$. Since all of these iterated medians lie in $[\0,\1]$, the cost of reducing the number of zeros by $1$ is $\kappaiv$, hence at worst $$\begin{aligned} \mu(x_i, \0,\ldots ,\0;\1)&\thicksim_{(n-2)\kappaiv}&\mu(x_i,\0,\1) \thicksim_{\rho(L)}\mu(\0, \mu(\0,x_i, e_i),\1)\\ &\thicksim_\kappaiv& \mu(\0,\mu(\0,e_i,\1),x_i)\thicksim_{\rho(L)} \mu(\0, e_i, x_i)\thicksim_\kappaiv x_i.\end{aligned}$$ Combining these, we obtain that $\Phi \circ \Psi$ is $n(3\rho(L)+(n+1)\kappaiv+C_n+C''')$-close to the identity on $\mathcal B$. To sum up, taking $\rho_+(t)=\max\{n\rho(t),\rho\circ\rho_n(t)\}$ for $t \in \Rp$ and $$C=\max\{C',C'',n(3\rho(L)+(n+1)\kappaiv+C_n+C''')\},$$ we have proved the following: both $\Phi$ and $\Psi$ are $\rho_+$-bornologous; $\Phi$ is a $C$-quasi-morphism; $\Phi\circ \Psi\thicksim_C id_{\mathcal B}$ and $\Psi\circ \Phi\thicksim_C id_{\mathcal A}$. Hence, by definition, $\Phi$ is a $(\rho_+,C)$-coarse median isomorphism with inverse $\Psi$. The above theorem suggests that we may regard the space $\mathcal A$ as a coarse cube (or, at least, cuboid) in our coarse median space. We now consider a natural family of subspaces, regarded as subcubes of $\mathcal A$.
Given points $x_i \in [\0,e_i]$ and taking $x:=\Psi((x_1,\ldots,x_n))$ in $[\0,\1]$, we consider the following coarse median spaces: $$\mathcal A':=([\0,x], d, \mu_{\0,x}); \quad\mathcal B':=([\0,x_1]\times \ldots \times [\0, x_n], d_{\ell^1}, \mu_{\ell^1}')$$ where $d_{\ell^1}$ denotes the $\ell^1$-product of the induced metrics on the intervals $[\0, x_i]$, and $\mu_{\ell^1}'$ is defined by $\mu_{\ell^1}'=\mu_{\0,x_1}\times \ldots \times \mu_{\0, x_n}$. Also define maps as follows: $$\left. \begin{array}{ccccc} \Phi':\mathcal A'\rightarrow \mathcal B', &\quad& y & \mapsto & (\mu(\0,y,x_1), \ldots, \mu(\0,y,x_n));\\ \Psi':\mathcal B'\rightarrow \mathcal A', &\quad&(y_1, \ldots, y_n)& \mapsto & \mu(\mu(y_1,\ldots, y_n;x),\0,x). \end{array} \right.$$ \[subcubes\] Let $(X,d,\mu)$ be a coarse median space and $f:I^n\rightarrow X$ be an $L$-coarse cube of rank $n$ in $X$. Then the map $\Phi':\mathcal A' \to\mathcal B'$ defined above provides a $(\rho'_+,C')$-coarse median isomorphism with inverse $\Psi'$, where $\rho'_+,C'$ depend only on $n$, $L$ and the parameters of $(X,d,\mu)$. It follows from the same arguments as in the first part of the proof of Theorem \[productresult\] that $\Phi',\Psi'$ are $\rho_+$-bornologous and $\Phi'$ is a $C$-coarse median morphism for the same constants $\rho_+,C$ as in Theorem \[productresult\]. It suffices to prove that $\Psi' \circ \Phi'$ and $\Phi' \circ \Psi'$ are close to the corresponding identities. $\bullet$ Recall that for $\Phi,\Psi$, the map $\Phi \circ \Psi$ is $C$-close to the identity. So we have $$(x_1,\ldots,x_n)\thicksim_C \Phi\circ \Psi((x_1,\ldots,x_n))=\Phi(x)=(\mu(\0,x,e_1),\ldots,\mu(\0,x,e_n)),$$ which implies that $x_i \thicksim_C \mu(\0,x,e_i)$ for each $i$. As shown in the proof of Theorem \[productresult\], $\mu(e_1, \ldots,e_n;\1)\thicksim_{H_n(L)} \1$. 
Combining them together with parts (1), (2) and (4) of Lemma \[coarse iterated estimate\], we obtain that $$\begin{aligned} \mu(x_1,\ldots,x_n;x) &\thicksim_{\rho_n(nC+\kappaiv)}& \mu(\mu(\0,x,e_1),\ldots,\mu(\0,x,e_n);\mu(\0,x,\1))\\ &\thicksim_{D_n}& \mu(\0,x,\mu(e_1,\ldots,e_n;\1)) \thicksim_{\rho(H_n(L))}\mu(\0,x,\1)\thicksim_{\kappaiv}x,\end{aligned}$$ i.e., $\mu(x_1,\ldots,x_n;x) \thicksim_{\alpha_n(L)}x$ for $\alpha_n(L):=\rho(H_n(L))+\rho_n(nC+\kappaiv)+D_n+\kappaiv$. Now for any $y \in [\0,x]$, we have: $$\begin{aligned} \Psi' \circ \Phi'(y) &=& \mu(\mu(\mu(\0,y,x_1), \ldots, \mu(\0,y,x_n);x),\0,x)\\ &\thicksim_{\rho(C_n)}&\mu(\mu(\0,y,\mu(x_1,\ldots, x_n;x)),\0,x) \thicksim_{\rho^2(\alpha_n(L))}\mu(\mu(\0,y,x),\0,x) \thicksim_{2\kappaiv} y.\end{aligned}$$ Hence $ \Psi' \circ \Phi'$ is $C''$-close to $\mathrm{Id}_{\mathcal A'}$ for $C'':=\rho^2(\alpha_n(L))+\rho(C_n)+2\kappaiv$. $\bullet$ For the other direction, since $x_i \thicksim_C \mu(\0,x,e_i)$, we have: $$\mu(\0,x_i,x)\thicksim_{\rho(C)}\mu(\0,\mu(\0,x,e_i),x)\thicksim_\kappaiv \mu(\0,x,e_i)\thicksim_C x_i.$$ Hence for any $\vec{y}=(y_1, \ldots, y_n) \in [\0,x_1]\times \ldots \times [\0, x_n]$, we have $$\begin{aligned} pr_i \circ \Phi' \circ \Psi' (\vec{y}) &=& \mu(\0,\mu(\mu(y_1,\ldots,y_n;x),\0,x),x_i )\\ &\thicksim_{\kappaiv}& \mu(\0,\mu(y_1,\ldots,y_n;x),\mu(\0,x,x_i))\thicksim_{\rho(\rho(C)+C+\kappaiv)} \mu(\0,\mu(y_1,\ldots,y_n;x),x_i)\\ &\thicksim_{C_n}& \mu(\mu(\0,x_i,y_1), \ldots, \mu(\0,x_i,y_n);x ),\end{aligned}$$ where the final estimate follows from Lemma \[coarse iterated estimate\](3). 
On the other hand, since $\langle e_i,\0,e_j\rangle \thicksim_L \0$ for $i\neq j$, we have $$\langle x_i,\0,e_j\rangle\thicksim_{\rho(\kappaiv)}\langle\langle\0,x_i,e_i\rangle,\0,e_j\rangle\thicksim_{\kappaiv}\langle\0,x_i,\langle e_i,\0,e_j\rangle\rangle\thicksim_{\rho(L)}\langle\0,x_i,\0\rangle=\0,$$ which implies that $$\begin{aligned} \langle x_i,\0,x_j\rangle\thicksim_{\rho(\kappaiv)}\langle x_i,\0,\langle\0,x_j,e_j\rangle\rangle \thicksim_\kappaiv \langle\0,x_j,\langle x_i,\0,e_j\rangle\rangle\thicksim_{\rho(\rho(L)+\rho(\kappaiv)+\kappaiv)}\langle\0,x_j,\0\rangle=\0.\end{aligned}$$ In other words, $\langle x_i,\0,x_j\rangle \thicksim_{\beta_n(L)}\0$ for $\beta_n(L):=\rho(\rho(L)+\rho(\kappaiv)+\kappaiv)+\rho(\kappaiv)+\kappaiv$. Notice that $\mu(\0,y_i,x_i)\thicksim_\kappaiv y_i$, so for $j\not= i$ we have $$\langle\0,x_i, y_j\rangle\thicksim_{\rho(\kappaiv)} \langle\0,x_i, \langle\0,y_j,x_j\rangle\rangle\thicksim_{\kappaiv} \langle\0,\langle x_i, \0, x_j\rangle,y_j\rangle\thicksim_{\rho(\beta_n(L))} \0.$$ Now using the same arguments as in the proof of Theorem \[productresult\], we obtain that for the constant $$C''':=\rho_n((n-1)\rho(\beta_n(L))+(n-1)\rho(\kappaiv)+n\kappaiv),$$ we have $$\begin{aligned} \mu(\mu(\0,x_i,y_1), \ldots, \mu(\0,x_i,y_n);x )\thicksim_{C'''} \mu(\0, \ldots, \0, y_i, \0,\ldots ,\0;x)\thicksim_{(n-2)\kappaiv}\mu(\0,y_i,x)\\ \thicksim_{\rho(\kappaiv)}\mu(\0, \mu(\0,y_i, x_i),x)\thicksim_\kappaiv \mu(\0,\mu(\0,x_i,x),y_i)\thicksim_{\rho(\rho(C)+C+\kappaiv)} \mu(\0, x_i, y_i)\thicksim_\kappaiv y_i.\end{aligned}$$ Therefore, $\Phi' \circ \Psi'$ is $D'$-close to $\mathrm{Id}_{\mathcal B'}$ for $$D':=n[C'''+2\rho(\rho(C)+C+\kappaiv)+\rho(\kappaiv)+(n+1)\kappaiv+C_n].$$ Finally setting $\rho_+'=\rho_+$ and $C'=\max\{C,C'',nD'\}$, we finish the proof. 
Rank and coarse interval growth {#growth} ------------------------------- In this subsection we will give a characterisation of rank in terms of interval growth as a converse to a result of Bowditch [@bowditch2014embedding]. First we notice that, in the context of bounded geometry coarse median spaces, the cardinality of an interval can always be bounded in terms of the distance between its endpoints. \[finiteness\] Let $(X,d,\mu)$ be a coarse median space with parameters $(\rho, \kappaiv, \kappav)$ and $a,b\in X$ with $d(a,b) \leqslant r$. Then $[a,b] \subseteq B(a,\rho(r))$. If in addition $(X,d)$ has bounded geometry, then there exists a constant $C(r)$ such that $\card [a,b] \leqslant C(r)$. For any $c\in [a,b]$, there exists some $x\in X$ such that $c=\mu(a,b,x)$. Now by axiom (C1), we have $$c=\mu(a,b,x)\thicksim_{\rho(r)}\mu(a,a,x)=a,$$ which implies $c\in B(a,\rho(r))$. The second statement follows directly from the definition of bounded geometry. For the remainder of this section we will specialise to the context of *uniformly discrete quasi-geodesic* coarse median spaces with bounded geometry. Recall that in a quasi-geodesic coarse median space $(X,d,\mu)$ we can always choose $\rho$ in (C1) to have the form $\rho(t)=Kt+H_0$ for some constants $K,H_0>0$. Bowditch proved [@bowditch2014embedding] that in a uniformly discrete coarse median space of bounded geometry and finite rank, there is a polynomial bound on growth within intervals. Now given an interval $[a,b]$ in $X$, any point $x\in [a,b]$ can be written in the form $x=\mu(a,y,b)$, so: $$x=\mu(a,y,b)\thicksim_{Kd(a,b)+H_0} \mu(a,y,a) =a,$$ which implies that $\diam([a,b]) \leqslant 2Kd(a,b)+2H_0$. Taking the subset $Q=[a,b]\subseteq [a,b]_\kappaiv$ (where $[a,b]_\kappaiv$ is Bowditch’s coarse interval), we obtain the following as a corollary to Bowditch’s result [@bowditch2014embedding Proposition 9.8]. 
\[bounding intervals in terms of diam\] Let $(X,d,\mu)$ be a uniformly discrete quasi-geodesic coarse median space with bounded geometry and which has rank at most $n$. Then there is a function $p: \mathbb N \to \mathbb N$ with $p(r)=o(r^{n+\epsilon})$ for all $\epsilon>0$, such that $\card[a,b] \leqslant p(d(a,b))$ for any $a,b\in X$. We now provide a converse to Bowditch’s theorem, showing that this growth condition indeed characterises the rank. \[coarseintervalrank\]\[growth rank\] Let $(X,d,\mu)$ be a uniformly discrete, quasi-geodesic coarse median space with bounded geometry and $n$ be a natural number. The following are equivalent: 1. $(X,d,\mu)$ has rank at most $n$; 2. there is a function $p: \Rp\to \Rp$ with $p(r)=o(r^{n+\epsilon})$ for all $\epsilon>0$, such that $\card~ [a,b] \leqslant p(d(a,b))$ for any $a,b\in X$; 3. there is a function $p: \Rp \to \Rp$ with $p(r)/r^{n+1}\stackrel{r\rightarrow\infty}{\longrightarrow}0$, such that $\card~ [a,b] \leqslant p(d(a,b))$ for any $a,b\in X$. (1)$\Rightarrow$(2) is given by Proposition \[bounding intervals in terms of diam\], while (2)$\Rightarrow$(3) *a fortiori*. (3)$\Rightarrow$(1): Suppose $X$ is $(\alpha,\beta)$-quasi-geodesic, $(K,H_0, \kappaiv, \kappav)$ are parameters of $X$, and $\rank X>n$ (note that we do not assume $X$ has finite rank). By Theorem \[char for high rank-final\], there exists a constant $L_0>0$, such that for any $C>0$, there exists an $L_0$-coarse cube $\sigma: I^{n+1} \rightarrow X$ with $d(\sigma(\bar{e}_i),\sigma(\bar{\0}))>C$ for all $i$. After setting $\0:=\sigma(\bar{\0}), \1:=\sigma(\bar{\1})$ and $e_i:=\sigma(\bar{e}_i)$ for each $i$, we have $d(e_i,\0)>C$. Now choose a discrete $(\alpha, \beta)$-quasi-geodesic $\0=p_0, \ldots p_k=e_i$ and construct $q_j=\langle\0,p_j,e_i\rangle$ to get a sequence of points in $[\0, e_i]$ with $d(q_j, q_{j-1})\leq G$ where $G=K(\alpha+\beta)+H_0$ is independent of $C$. 
Now $d(\0, q_0)=0$ and $d(\0, q_k)>C$, so we may choose the first $j$ such that $d(\0, q_j)\geq C$; for this $j$ we also have $d(\0, q_j)<C+G$. Setting $x_i:=q_j\in [\0, e_i]$ we have $C\leqslant d(\0, x_i)<C+G$. Choose a discrete $(\alpha,\beta)$-quasi-geodesic $z_0,z_1,\ldots,z_k\in X$ connecting $\0$ and $x_1$. Projecting $z_i$ into $[\0,x_1]$, we obtain a sequence $\0=y_0,y_1,\ldots,y_k=x_1$ with $d(y_i,y_{i-1}) \leqslant K(\alpha+\beta)+H_0$, where $y_i=\mu(\0,z_i,x_1)$. We inductively “de-loop” this sequence to define a subsequence $y_{j_0},\ldots,y_{j_l}$ such that the points in it are distinct, but still satisfy $d(y_{j_p},y_{j_{p-1}}) \leqslant K(\alpha+\beta)+H_0$: let $j_0$ be the maximal index such that $y_{j_0}=y_0$; for $p>0$, set $j_p$ to be the maximal index such that $y_{j_p}=y_{j_{p-1}+1}$, and we obtain the required sequence. This process allows us to assume that we have picked the sequence $\0=y_0,y_1,\ldots,y_l=x_1$ to be distinct while ensuring that $d(y_i,y_{i-1}) \leqslant K(\alpha+\beta)+H_0$ for each $i$. Now we have: $$C\leq d(\0,x_1) \leqslant \sum_{i=1}^l d(y_i,y_{i-1}) \leqslant l\cdot(K(\alpha+\beta)+H_0),$$ which implies $\card[\0,x_1] \geqslant l\geqslant \frac{C}{K(\alpha+\beta)+H_0}$. A similar estimate holds for each $[\0,x_i]$. Hence, we obtain that for the constant $\gamma:=(\frac{1}{K(\alpha+\beta)+H_0})^{n+1}$, $$\card([\0,x_1]\times \ldots \times [\0,x_{n+1}]) \geqslant \gamma C^{n+1}.$$ Now set $x:= \mu(\0, \mu(x_1, \ldots, x_{n+1}; \1),\1)$. By Corollary \[subcubes\], there exist a function $\rho_+$, a constant $\lambda_0$ depending only on $n,L_0$ and the parameters, and a $(\rho_+,\lambda_0)$-coarse median isomorphism $\Psi':[\0,x_1]\times \ldots \times [\0,x_{n+1}] \to [\0,x]$. Moreover since $X$ is quasi-geodesic, $\Psi'$ is a quasi-isometry. Enlarging $\lambda_0$ if necessary, we may assume that $\rho_+(t)=\lambda_0 t+\lambda_0$. 
Hence for any $\vec{z},\vec{y} \in [\0,x_1]\times \ldots \times [\0,x_{n+1}]$, we have: $$\label{distance bounds original} \lambda_0^{-1}d_{\ell^1}(\vec{z},\vec{y})-\lambda_0 \leqslant d(\Psi'(\vec{z}),\Psi'(\vec{y})) \leqslant \lambda_0d_{\ell^1}(\vec{z},\vec{y})+\lambda_0.$$ Since $X$ has bounded geometry, there exists a constant $N$ depending only on $\lambda_0$ such that $\card\Psi'^{-1}(\{y\}) \leqslant N$ for any $y\in [\0,x]$. In other words, $\Psi'$ may collapse at most $N$ points to a single point. Hence $\card\Psi'(A) \geqslant \frac{1}{N}\card A$ for any $A\subseteq [\0,x_1]\times \ldots \times [\0,x_{n+1}]$. In particular, we have $$\label{intervalcardinalityestimate} \card[\0,x] \geqslant \card\Psi'([\0,x_1]\times \ldots \times [\0,x_{n+1}]) \geqslant \frac{1}{N} \card([\0,x_1]\times \ldots \times [\0,x_{n+1}]) \geqslant \frac{\gamma}{N} C^{n+1}.$$ Now we would like to estimate the distance $d(\0,x)$ and show that it is approximately linear in $C$. First notice that $\Psi'(\vec{0})=\0$, and by definition we have $$\begin{aligned} \Psi'(\vec{x})&=&\mu(\mu(x_1,\ldots, x_{n+1};x),\0,x)=\mu(x_1,\ldots, x_{n+1},\0;x)\\ &=& \mu(x_1,\ldots, x_{n+1},\0;\mu(x_1,\ldots, x_{n+1},\0;\1))\\ &\thicksim_{E_n}& \mu(x_1,\ldots, x_{n+1},\0;\1)=x,\end{aligned}$$ where the estimate in the third line follows from Lemma \[coarse iterated estimate new2\] and the constant $E_n$ depends only on $n,\lambda_0,\kappao$ and $\kappaiv$. 
Combining with (\[distance bounds original\]), we have: $$\begin{aligned} d(\0,x) &\leqslant& d(\Psi'(\vec{0}),\Psi'(\vec{x}))+E_n \leqslant \lambda_0d_{\ell^1}(\vec{0},\vec{x})+\lambda_0+E_n=\lambda_0 \sum\limits_{i=1}^{n+1}d(\0, x_i)+\lambda_0 + E_n \\ &\leqslant& \lambda_0 (n+1)(C+G)+\lambda_0+E_n.\end{aligned}$$ After rearranging, we get $$C\geqslant\frac{d(\0,x)-\lambda_0(nG+G+1)-E_n}{\lambda_0 (n+1)}.$$ Combining with (\[intervalcardinalityestimate\]), we obtain: $$\card[\0,x] \geqslant \frac{\gamma}{N} \Big(\frac{d(\0,x)-\lambda_0(nG+G+1)-E_n}{\lambda_0 (n+1)}\Big)^{n+1}.$$ On the other hand, (\[distance bounds original\]) implies that $$\begin{aligned} d(\0,x)&\geqslant& d(\Psi'(\vec{0}),\Psi'(\vec{x}))-E_n \geqslant \lambda_0^{-1}d_{\ell^1}(\vec{0},\vec{x})-\lambda_0-E_n \\ &\geqslant&\lambda_0^{-1}(n+1)C-\lambda_0-E_n.\end{aligned}$$ So $d(\0,x) \to \infty$ as $C \to \infty$. Thus we have constructed intervals $[\0,x]$ with $d(\0,x)$ arbitrarily large whose cardinality $\sharp[\0,x]$ is bounded below by a polynomial of degree $n+1$ in $d(\0,x)$ with positive leading coefficient $\frac{\gamma}{N(\lambda_0(n+1))^{n+1}}$. This contradicts the existence of the function $p$. Theorem \[coarseintervalrank\] allows us to characterise the rank of a coarse interval space purely in terms of the growth of intervals: A uniformly discrete, bounded geometry, quasi-geodesic coarse interval space $(X, d,\I)$ has rank at most $n$ *if and only if* there is a function $p: \Rp \to \Rp$ with $\lim\limits_{r\rightarrow\infty}p(r)/r^{n+1}=0$, such that $\card[a,b] \leqslant p(d(a,b))$ for any $a,b\in X$. 
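The growth characterisation above can be sanity-checked in the model exact example $\mathbb{Z}^n$ with the $\ell^1$ metric and the coordinatewise median: there the interval $[a,b]$ is the coordinate box between $a$ and $b$, so $\card[a,b]=\prod_i(|a_i-b_i|+1)$ grows like $d(a,b)^n$, in line with rank $n$. A minimal sketch (Python, purely illustrative and not from the paper; the finite search box stands in for the whole space, which is harmless here since $\mu(a,x,b)$ depends on $x$ only through its clamp to the box between $a$ and $b$):

```python
from itertools import product

def median(a, b, c):
    # coordinatewise median of three points of Z^n
    return tuple(sorted(t)[1] for t in zip(a, b, c))

def interval(a, b, search_box):
    # [a,b] = {median(a,x,b) : x in X}, brute-forced over a finite search box
    return {median(a, x, b) for x in search_box}

n = 2
box = list(product(range(-3, 4), repeat=n))
a, b = (0, 0), (2, 3)
I = interval(a, b, box)
# the interval is exactly the coordinate box between a and b ...
assert I == set(product(range(0, 3), range(0, 4)))
# ... of cardinality prod(|a_i - b_i| + 1), polynomial of degree n in d(a,b)
assert len(I) == (2 + 1) * (3 + 1)
```

The degree-$n$ growth of these cardinalities is exactly the behaviour that the theorem rules out for spaces of rank below $n$.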
Intervals and metrics for ternary algebras {#coarsemedianalgebras} ========================================== Bowditch observed that perturbing the metric of a coarse median space within its quasi-isometry class respects the coarse median axioms; it is not, however, *a priori* obvious to what extent the metric is determined by the coarse median operator. We will now show that for a quasi-geodesic coarse median space $(X,d,\mu)$ of bounded geometry the metric is determined *uniquely* up to quasi-isometry by $\mu$. This motivates our definition of coarse median algebra, as given in the introduction. To establish the uniqueness of the metric, we will construct a canonical metric defined purely in terms of the intervals associated to the coarse median operator. The construction may be of independent interest since it can be defined for any ternary operator satisfying a weakening of axioms (M1) and (M2), and therefore in the context of a more general notion of interval structure. (The following reversal axiom can in fact be weakened to the existence of bijections between the corresponding intervals $[a,b]$ and $[b,a]$). Abstract ternary algebras and induced metrics {#ternary} --------------------------------------------- Consider a ternary algebra $(X,\nu)$ satisfying the following axioms: - $\nu(a,a,x)=\nu(a,x,a)=a$ for all $a,x\in X$; - $\nu(a,x,b)=\nu(b,x,a)$ for all $a,x,b\in X$. While classically it is natural to think of the ternary operator $\nu$ as furnishing a notion of betweenness, whereby $c$ lies between $a, b$ if and only if $\nu(a,c,b)=c$, this definition is not well adapted to the coarse world, where statements are typically true only up to controlled distortion. Regarding the operation $x\mapsto \nu(a,x,b)$ as a projection, with the interval $[a,b]$ defined as its range, is better suited to this environment. 
Axiom (T1) ensures that the interval $[a,a]=\{a\}$ and axiom (T2) that $[a,b]=[b,a]$, and these axioms together are a slight weakening of axioms (M1) and (M2) for a (coarse) median algebra. \[graph\] Let $\Gamma$ be a connected graph and for any $a,b,x\in V(\Gamma)$ choose a vertex, denoted $\nu(a,x,b)$, which lies on an edge geodesic from $a$ to $b$ and minimises distance to $x$ among all such choices. Clearly we can do so to satisfy axiom (T2), while axiom (T1) is immediate. With this definition of the ternary operator, the interval $[a,b]$ is exactly the set of vertices on edge geodesics from $a$ to $b$. We will use cardinalities of intervals to measure distances. In order to ensure that these distances are finite we need to impose the condition that points can be joined by chains of finite intervals: A ternary algebra $(X,\nu)$ is said to satisfy the *finite interval chain condition* if for any $a,b\in X$ there exists a sequence $a=x_0, x_1, \ldots, x_n=b$ in $X$ such that the cardinality of each interval $[x_i, x_{i+1}]$ is finite. Given a ternary algebra $(X,\nu)$ satisfying the finite interval chain condition, we define the *induced function* $d_\nu$ on $X\times X$ as follows: for any $a,b \in X$, $$d_\nu(a,b)=\min \Big\{ \sum_{i=1}^n (\card [x_{i-1},x_i]-1): a=x_0,\ldots,x_n=b, x_i\in X, n\in \mathbb{N} \Big\}.$$ It is routine to check that $d_\nu$ satisfies the triangle inequality, and the imposition of axioms (T1) and (T2) ensures that the function $d_\nu$ also satisfies the obvious symmetry, reflexivity and positivity conditions, so that $d_\nu$ is a metric in this case. When (T1) and (T2) are satisfied, we will refer to $d_\nu$ as the *induced metric*. Let $(X,\mu)$ be a discrete median algebra and let $Z$ be its geometric realisation as a CAT(0) cube complex. Then the induced metric $d_\mu$ is the edge-path metric on the vertices of $Z$. 
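Both the projection operator of Example \[graph\] and the induced function $d_\nu$ admit a direct computational illustration on a finite connected graph. The sketch below (Python, illustrative only; the tie-breaking rule in the projection is our own choice) takes for $[a,b]$ the set of vertices $v$ with $d(a,v)+d(v,b)=d(a,b)$, i.e. those lying on edge geodesics, and computes $d_\nu$ as a shortest-path problem in which a chain step from $u$ to $v$ costs $\card[u,v]-1$:

```python
import heapq
from collections import deque

def bfs(adj, s):
    # edge-path distances from s in an unweighted connected graph
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def interval(adj, a, b):
    # vertices lying on some edge geodesic from a to b
    da, db = bfs(adj, a), bfs(adj, b)
    return {v for v in adj if da[v] + db[v] == da[b]}

def nu(adj, a, x, b):
    # a choice of projection: a vertex of [a,b] closest to x (ties by label)
    dx = bfs(adj, x)
    return min(interval(adj, a, b), key=lambda v: (dx[v], v))

def d_nu(adj, a, b):
    # induced metric: cheapest chain a = x_0, ..., x_n = b, each step u -> v
    # costing card [u,v] - 1; computed by Dijkstra over all vertex pairs
    dist = {a: 0}
    pq = [(0, a)]
    while pq:
        du, u = heapq.heappop(pq)
        if u == b:
            return du
        if du > dist[u]:
            continue
        for v in adj:
            if v != u:
                w = du + len(interval(adj, u, v)) - 1
                if w < dist.get(v, float("inf")):
                    dist[v] = w
                    heapq.heappush(pq, (w, v))
    return None

# 4-cycle: the interval between antipodal vertices is the whole vertex set
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert nu(C4, 0, 0, 2) == 0                    # axiom (T1): nu(a,a,b) = a
assert interval(C4, 0, 2) == {0, 1, 2, 3}
assert d_nu(C4, 0, 2) == 2 == bfs(C4, 0)[2]
```

On the $4$-cycle the direct step from a vertex to its antipode costs $3$, yet $d_\nu$ still agrees with the edge-path metric because a two-step chain through a neighbour costs $1+1=2$.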
Let $\Gamma$ be a connected graph and $\nu$ the projection operator defined in Example \[graph\]; then the induced metric $d_\nu$ is the edge-path metric on the vertices of $\Gamma$. Uniqueness of coarse median metrics {#uniquemetricsection} ----------------------------------- While it is easy to show that one can change the metric of a coarse median space arbitrarily within its quasi-isometry class, it is a remarkable fact, as we will now show, that the quasi-isometry class of the metric is determined uniquely by the coarse median operator. Indeed, the induced metric is the unique coarse median metric up to quasi-isometry: For a bounded geometry quasi-geodesic coarse median space $(X,d,\mu)$, the metric $d$ is unique up to quasi-isometry. Moreover $d$ is quasi-isometric to the induced metric $d_\mu$. Let $(X,d,\mu)$ be an $(L,C)$-quasi-geodesic coarse median space with bounded geometry, and let $(K, H_0, \kappao,\kappaiv,\kappav)$ denote its parameters. *First*, we will show that $d$ can be controlled by $d_\mu$. Given $a,b\in X$, let $a=a_0,\ldots,a_n=b$ be a sequence of points such that $$d_{\mu}(a,b)=\sum_{i=1}^n (\card [a_{i-1},a_i]-1).$$ Fix $i$ and choose an $(L,C)$-quasi-geodesic $\gamma_i$ with respect to the metric $d$ connecting $a_{i-1}$ and $a_i$. Take $n_i=\lfloor d(a_{i-1},a_i) \rfloor$, the integer part of $d(a_{i-1},a_i)$, and $$x_0=\gamma_i(0)=a_{i-1},x_1=\gamma_i(1),\ldots,x_{n_i}=\gamma_i(n_i), x_{n_i+1}=\gamma_i(d(a_{i-1},a_i))=a_i;$$ then $d(x_{j-1},x_j) \leqslant L+C$ for each $j$. Let $y_j=\langle a_{i-1},a_i,x_j\rangle\in [a_{i-1},a_i]$; then $d(y_{j-1},y_j) \leqslant K(L+C)+H_0$ by axiom (C1). Write $C'=K(L+C)+H_0$. As in the proof of Theorem \[coarseintervalrank\] we can “de-loop” the sequence $y_0,y_1,\ldots,y_{n_i+1}$ in $[a_{i-1},a_i]$ to a subsequence $y_{j_0},\ldots,y_{j_l}$ with the property that the points in it are distinct, but still satisfy $d(y_{j_k},y_{j_{k-1}}) \leqslant C'$. 
Hence, we have $$d(a_{i-1},a_i) \leqslant \sum_{k=1}^l d(y_{j_{k-1}},y_{j_k}) \leqslant l \cdot C' \leqslant (\card [a_{i-1},a_i]-1) \cdot C'.$$ The same estimate holds for the other values of $i$ as well. Therefore, we obtain that $$d(a,b) \leqslant \sum_{i=1}^n d(a_{i-1},a_i) \leqslant C' \cdot \sum_{i=1}^n (\card [a_{i-1},a_i]-1) = C' \cdot d_\mu(a,b).$$ *Second*, we show that $d_\mu$ can be controlled by $d$. For any $a,b\in X$ choose an $(L,C)$-quasi-geodesic $\gamma$ with respect to the metric $d$ connecting them, and take $a_i=\gamma(i)$ for $i=0,1,\ldots,n-1=\lfloor d(a,b) \rfloor$ and $a_n=\gamma(d(a,b))$, which implies $d(a_{i-1},a_i) \leqslant L+C$. By Lemma \[finiteness\], there exists a constant $C''$ (depending on $L+C$) such that the intervals $[a_{i-1},a_i]$ all have cardinality at most $C''$. Hence we have $$d_\mu(a,b) \leqslant \sum_{i=1}^n (\card [a_{i-1},a_i]-1) < \sum_{i=1}^n C'' \leqslant C'' \cdot (d(a,b) + 1).$$ In conclusion, we have shown that for any $a,b\in X$, $$\frac{1}{C'}\cdot d(a,b) \leqslant d_\mu(a,b) < C'' \cdot d(a,b) + C''.$$ This completes the proof. Without the assumption that $(X,d)$ is quasi-geodesic, Theorem \[bi-lip equi\] fails. Indeed, $(X,d)$ can have bounded geometry while $(X,d_\mu)$ has balls of infinite cardinality, as the following example shows: \[F infty\] Let $F_\infty$ be the free group on countably many generators $\{g_i\}$. The Cayley graph of $F_\infty$ is a tree and therefore the group admits a median $\mu$. Note that with the induced metric $d_\mu$ this is a coarse median space which does not have bounded geometry: each interval $[e, g_i]$ has cardinality $2$, so $d_\mu(e,g_i)=1$ and the ball of radius $1$ about the identity is infinite. However, for $d$ a proper left invariant metric on $F_\infty$ (e.g., setting $d(g_i,e)=i$), the space $(F_\infty,d,\mu)$ is again a coarse median space. With this metric the space has bounded geometry. Hence $\mu$ admits two coarse median metrics which are not quasi-isometric. 
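The “de-looping” step used in the proofs of Theorems \[coarseintervalrank\] and \[bi-lip equi\] is a small combinatorial routine: repeatedly jump to the last occurrence of the current value, then step forward by one. A sketch (Python, illustrative only):

```python
def deloop(seq):
    # Extract a subsequence of pairwise-distinct values. The p-th output value
    # equals the input value at index j_{p-1} + 1 (the entry following the
    # previous value's last occurrence), so any step bound d(y_{j-1}, y_j) <= C'
    # on consecutive input points is inherited by the output.
    out, i = [], 0
    while True:
        # maximal index j with seq[j] == seq[i]
        j = max(k for k in range(len(seq)) if seq[k] == seq[i])
        out.append(seq[j])
        if j == len(seq) - 1:
            return out
        i = j + 1

assert deloop([0, 1, 0, 2, 2, 3]) == [0, 2, 3]
assert len(set(deloop([0, 1, 0, 2, 2, 3]))) == 3   # output values are distinct
```

Once past the last occurrence of a value, that value never recurs, which is why the output is automatically duplicate-free.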
If we restrict attention to uniformly discrete metrics, then it is clear that “quasi-isometric” can be replaced by “bi-Lipschitz” in Theorem \[bi-lip equi\]. Coarse median algebras {#coarsemedianalg} ====================== We have seen that intervals play a key role in determining the structure and geometry of a coarse median space. In particular, as shown in Theorem \[uniquemetricthm\], for a quasi-geodesic coarse median space of bounded geometry the metric is determined by the interval structure, and is therefore redundant in the description. This leads us to the following purely algebraic notion of coarse median algebra. A *coarse median algebra* is a ternary algebra $(X,\mu)$ with finite intervals such that: - For all $a,b\in X$, $\mu(a,a,b)=a$; - For all $a,b,c\in X$, $\mu(a,b,c)=\mu(a,c,b)=\mu(b,a,c)$; - There exists a constant $K\geq 0$ such that for all $a,b,c,d,e\in X$ the cardinality of the interval $\big[\mu(a,b,\mu(c,d,e)),\, \mu(\mu(a,b,c),\mu(a,b,d),e)\big]$ is at most $K$. As remarked in the introduction, taking the case $K=1$, this reduces to the classical definition of a discrete median algebra. Bounded geometry for a ternary algebra -------------------------------------- \[bounded valency def\] A ternary algebra $(X,\nu)$ is said to have *bounded valency* if there is a function $\phi:\mathbb R^+\rightarrow \mathbb R^+$ such that for all $x\in X$ and all $R>0$, $$\sharp \{y\in X\mid \sharp [x,y]\leq R\}\leq \phi(R).$$ The terminology is motivated by the example of a median graph, where bounded valency in our sense agrees with its classical meaning. \[bdd geo\] Let $(X,\nu)$ be a ternary algebra satisfying (T1) and (T2) together with the finite interval chain condition. Then it has bounded valency *if and only if* the induced metric $d_\nu$ has bounded geometry. Fix $x\in X$ and $R>1$. Since $d_\nu(x,y) \leq \sharp[x,y]-1$, we have $$\{y\in X\mid \sharp [x,y]\leq R\} \subseteq B_{R-1}(x).$$ Hence bounded geometry of $d_\nu$ implies bounded valency. 
On the other hand, suppose $X$ has bounded valency with parameter $\phi$. For any $y\in B_R(x)$ there is an interval chain $x=x_0, \ldots , x_n=y$ with $n\leq R$ and such that each interval $[x_i, x_{i+1}]$ has at most $R+1$ points. It follows that given $x_i$ the number of possible choices for $x_{i+1}$ is at most $\phi(R+1)$, so $B_R(x)$ has cardinality at most $\phi(R+1)^R$. Let $(X,\mu)$ be a bounded valency ternary algebra. Then $(X, \mu)$ admits a metric $d$ such that $(X,d, \mu)$ is a bounded geometry coarse median space *if and only if* $(X,\mu)$ is a coarse median algebra. Suppose $(X,\mu)$ is a bounded valency coarse median algebra. We equip $X$ with the induced metric $d:=d_\mu$, which has bounded geometry by Lemma \[bdd geo\]. Axiom (M3)’ gives us an upper bound on the distance between the two iterated medians, $\mu(a,b,\mu(c,d,e))$ and $\mu(\mu(a,b,c),\mu(a,b,d),e)$, which specialises to the 4-point axiom (C2). It only remains to establish axiom (C1). To do so, we choose a finite interval chain $a=x_0, \ldots , x_n=a'$ which realises the distance $d(a, a')$. For each $i$, let $y_i=\mu(x_i,b,c)$ and consider the interval chain $y_0=\mu(a,b,c), \ldots, y_n=\mu(a',b,c)$, which gives an upper bound for $d(\mu(a,b,c), \mu(a',b,c))$. For each point $$\mu(z, y_i, y_{i+1}) = \mu(z,\mu(x_i, b,c),\mu(x_{i+1},b,c))$$ in the interval $[y_i, y_{i+1}]$, the interval from $\langle z, y_i, y_{i+1}\rangle$ to $\mu(\mu(z,x_i, x_{i+1}),b,c)$ has cardinality at most $K$ by axiom (M3)’. Clearly, the set $\{\mu(\mu(z,x_i, x_{i+1}),b,c)\mid z\in X\}$ has cardinality bounded by the cardinality of $[x_i, x_{i+1}]$. So by bounded valency, the interval $[y_i, y_{i+1}]$ has cardinality bounded by $\phi(K) \cdot \card[x_i,x_{i+1}]$. It follows that $$d(\mu(a,b,c), \mu(a',b,c)) \leqslant \phi(K)\sum_{i=0}^{n-1}\card[x_i,x_{i+1}]\leqslant 2\phi(K)d(a,a').$$ Therefore, $(X,d,\mu)$ is a coarse median space. 
On the other hand, suppose there exists a bounded geometry metric $d$ on $X$ such that $(X,d,\mu)$ is a coarse median space. By Lemma \[finiteness\], axiom (C2) together with the bounded geometry of $d$ implies that (M3)’ holds. Therefore, $(X,\mu)$ is a coarse median algebra. While it is tempting to conflate the ideas of bounded geometry and bounded valency in this context, some care should be taken, since in the general world of coarse median spaces the metric is only loosely associated with the median structure as illustrated by Example \[F infty\]: the free group $F_\infty$ equipped with a proper left invariant metric and its natural median is a coarse median space which has bounded geometry, but not bounded valency. Of course this example is not quasi-geodesic, and in the quasi-geodesic world, as we saw in Theorem \[uniquemetricthm\], we have much better control. Quasi-geodesic ternary algebras ------------------------------- \[qgdef\] A ternary algebra $(X,\nu)$ satisfying (T1) and (T2) is said to be *quasi-geodesic* if there exist constants $L,C>0$ such that for any $a,b\in X$, there exist $a=y_0,\ldots,y_n=b$ with $\sharp [y_j,y_{j+1}] \leqslant C+1$ and $n \leqslant L\sharp [a,b]$. Note that the finite interval chain condition is subsumed in this definition, so it does not need to be imposed separately. This definition has a natural interpretation in terms of the following analogue of the classical Rips complex. For $(X,\nu)$ a ternary algebra, let $P_C(X, \nu)$ denote the simplicial complex in which $\sigma=[x_0,x_1,\ldots,x_n]$ is an $n$-simplex for $x_0,x_1,\ldots,x_n \in X$ if and only if $\sharp[x_i,x_j] \leqslant C+1$ for all $i,j$. Recall for comparison that if $(X,d)$ is a metric space then for $C>0$ the *Rips complex* is the simplicial complex in which $\sigma=[x_0,x_1,\ldots,x_n]$ is an $n$-simplex for $x_0,x_1,\ldots,x_n \in X$ if and only if $d(x_i,x_j) \leqslant C$ for all $i,j$. 
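A small sketch of the $1$-skeleton of $P_C(X,\nu)$ and its connectivity test (Python, illustrative only; the interval-cardinality function is passed in as a parameter, an assumption of the sketch):

```python
from collections import deque

def p_c_edges(points, card_interval, C):
    # x, y span an edge of P_C(X, nu) iff card [x,y] <= C + 1
    return {frozenset({x, y}) for i, x in enumerate(points)
            for y in points[i + 1:] if card_interval(x, y) <= C + 1}

def is_connected(points, edges):
    # BFS on the 1-skeleton; connectivity of P_C is what makes the
    # edge-path metric d_{P_C} well defined on the vertex set
    adj = {p: set() for p in points}
    for e in edges:
        x, y = tuple(e)
        adj[x].add(y)
        adj[y].add(x)
    seen, q = {points[0]}, deque([points[0]])
    while q:
        for v in adj[q.popleft()]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return len(seen) == len(points)

# toy example: X = {0,...,4} on a line, with card [x,y] = |x - y| + 1
pts = list(range(5))
card = lambda x, y: abs(x - y) + 1
assert is_connected(pts, p_c_edges(pts, card, C=1))       # consecutive points joined
assert not is_connected(pts, p_c_edges(pts, card, C=0))   # no non-degenerate edges
```

In the toy example the threshold $C=1$ joins exactly the adjacent points, so $d_{P_C}$ recovers the expected line metric, while $C=0$ leaves the complex totally disconnected.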
When the complex $P_C(X, \nu)$ is connected, its vertex set $X$ inherits the edge-path metric, denoted $d_{P_C}$, which is of course a geodesic metric. \[quasi-geodesic\] Let $(X,\nu)$ be a ternary algebra satisfying conditions (T1) and (T2) together with the finite interval chain condition. Let $d_\nu$ denote the induced metric. Then the following are equivalent: (1) The metric $d_\nu$ is quasi-geodesic. (2) The ternary algebra $(X,\nu)$ is quasi-geodesic. (3) There exists $C>0$ such that the complex $P_C(X, \nu)$ is connected and $d_\nu$ is quasi-isometric to the edge-path metric $d_{P_C}$ on the complex. *(1) $\Rightarrow$ (2)*: Assume $d_\nu$ is $(L',C')$-quasi-geodesic and $a\neq b\in X$. Let $\gamma: [0,m] \rightarrow X$ be an $(L',C')$-quasi-isometric embedding with $\gamma(0)=a$ and $\gamma(m)=b$. Without loss of generality we may take $m$ to be an integer. Let $x_i=\gamma(i)$, and note that $d_\nu(x_i,x_{i+1})\leq C:=L'+C'$. On the other hand, $\frac 1{L'} m -C'\leq d_\nu(a,b)$, so $m\leq L'd_\nu(a,b)+L'C'\leq L''d_\nu(a,b)$, where $L''=L'+L'C'$. Now fix $i$ and take a chain $y_i^0,\dots, y_i^{n_i}$ realising the distance from $x_i$ to $x_{i+1}$, i.e., $$d_\nu(x_i,x_{i+1})=\sum_{j=0}^{n_i-1}(\sharp [y_i^j,y_i^{j+1}]-1).$$ Since $d_\nu(x_i,x_{i+1})\leq C$ it follows that each set $[y_i^j,y_i^{j+1}]$ has cardinality at most $C+1$. Furthermore, without loss of generality, we may assume that $y_i^j\neq y_i^{j+1}$ for each $j$, which implies $n_i \leq d_\nu(x_i,x_{i+1})\leq C$. Concatenating these chains gives the required chain from $a$ to $b$. Putting $L=CL''$, the number of terms is: $$\sum_{i=0}^{m-1} n_i\leq Cm\leq CL''d_\nu(a,b) < L\sharp [a,b].$$ *(2) $\Rightarrow$ (3)*: Assuming condition (2) holds with constants $L,C$, the complex $P_C(X, \nu)$ is connected. 
If $d_{P_C}(a,b)=n$ then there exist $x_0=a,x_1,\dots,x_n=b$ with each interval $[x_{i-1},x_i]$ having cardinality at most $C+1$, and hence $$d_\nu(a,b)\leq nC=Cd_{P_C}(a,b).$$ Now we fix $a,b\in X$ and choose pairwise distinct points $a=z_0, z_1, \ldots, z_{k-1}, z_k=b$ in $X$ such that $$d_\nu(a,b)= \sum_{i=0}^{k-1}(\sharp [z_i,z_{i+1}]-1).$$ For each $i=0,1,\ldots,k-1$, applying condition (2) to $z_i, z_{i+1}$ produces a number $k_i \in \mathbb N$ and points $z_i=w_i^0,w_i^1,\ldots,w_i^{k_i-1}, w_i^{k_i}=z_{i+1}$ in $X$ with $\sharp [w_i^j,w_i^{j+1}] \leqslant C+1$ and $k_i \leqslant L\sharp [z_i, z_{i+1}]$. Since $\sharp [z_i, z_{i+1}] \geq 2$, we have $\sharp [z_i, z_{i+1}] \leq 2(\sharp [z_i, z_{i+1}]-1)$. Hence, $$p:=\sum_{i=0}^{k-1}k_i \leq L\sum_{i=0}^{k-1}\sharp [z_i, z_{i+1}] \leq 2L\sum_{i=0}^{k-1}(\sharp [z_i, z_{i+1}]-1) = 2L d_\nu(a,b).$$ Concatenating these chains provides a chain $a=w_0,w_1,\ldots,w_p=b$ with $\sharp [w_i,w_{i+1}] \leq C+1$ and $p \leq 2L d_\nu(a,b)$, which gives an upper bound $$d_{P_C}(a,b)\leq p\leq 2L d_\nu(a,b).$$ *(3) $\Rightarrow$ (1)*: As $d_{P_C}$ is geodesic, it follows that $d_\nu$ is quasi-geodesic. Combining Theorem \[unique metric prop\] with Proposition \[quasi-geodesic\] and Theorem \[bi-lip equi\], we obtain: \[unique metric prop 2\] A bounded valency ternary algebra is a quasi-geodesic coarse median algebra *if and only if* it admits a bounded geometry, quasi-geodesic coarse median metric. Such a metric, when it exists, is unique up to quasi-isometry. The rank of a coarse median algebra ----------------------------------- Motivated by Theorem \[hyper rank\] we make the following definition. 
A coarse median algebra $(X, \mu)$ is said to *have rank at most $n$* if there is a non-decreasing function $\varphi: \Rp \to \Rp$ such that for any $x_1,\ldots,x_{n+1};p,q \in X$, $$\min\{\sharp[p,\mu(x_i,p,q)]:i=1,\ldots,n+1\} \leqslant \varphi(\max\{\sharp[p,\langle x_i,x_j,p\rangle]: i\neq j\}).$$ ![The interval configuration for verifying the rank 1 condition](rank1cropped) \[cma rank lemma\] The rank of a bounded valency coarse median algebra $(X,\mu)$ agrees with the rank of the corresponding coarse median space $(X,d_\mu,\mu)$. Lemma \[finiteness\] provides a non-decreasing function $C:\Rp\rightarrow \Rp$ such that $$d_\mu(a,b) < \sharp [a,b]\leq C(d_\mu(a,b)).$$ If the coarse median algebra $(X,\mu)$ has rank at most $n$, then by definition there exists a non-decreasing $\varphi:\Rp \to \Rp$ such that for any $x_1,\ldots,x_{n+1};p,q \in X$, $$\begin{aligned} &\min\{d_\mu(p,\mu(x_i,p,q)):i=1,\ldots,n+1\}<\min\{\sharp[p,\mu(x_i,p,q)]:i=1,\ldots,n+1\} \\ &\leqslant \varphi(\max\{\sharp[p,\langle x_i,x_j,p\rangle]: i\neq j\}) \leqslant \varphi(\max\{C(d_\mu(p,\langle x_i,x_j,p\rangle)): i\neq j\})\\ &=\varphi\circ C(\max\{d_\mu(p,\langle x_i,x_j,p\rangle): i\neq j\}).\end{aligned}$$ So by Theorem \[hyper rank\] the coarse median space $(X,d_\mu, \mu)$ has rank at most $n$. Conversely, if the coarse median space $(X,d_\mu, \mu)$ has rank at most $n$, then by Theorem \[hyper rank\] there exists a non-decreasing $\varphi:\Rp\to\Rp$ such that for any $x_1,\ldots,x_{n+1};p,q \in X$, $$\begin{aligned} &\min\{\sharp[p,\mu(x_i,p,q)]:i=1,\ldots,n+1\}\leq \min\{C(d_\mu(p,\mu(x_i,p,q))):i=1,\ldots,n+1\}\\ &=C(\min\{d_\mu(p,\mu(x_i,p,q)):i=1,\ldots,n+1\}) \leqslant C\circ\varphi(\max\{d_\mu(p,\langle x_i,x_j,p\rangle): i\neq j\})\\ &\leqslant C\circ\varphi(\max\{\sharp[p,\langle x_i,x_j,p\rangle]: i\neq j\}). \end{aligned}$$ So the coarse median algebra $(X,\mu)$ also has rank at most $n$. 
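A discrete example keyed to these interval-cardinality conditions: the cube $\{0,1\}^n$ with the coordinatewise majority median is a discrete median algebra, so (M3)' holds with $K=1$ (the two iterated medians coincide exactly), and the interval $[p,q]$ is the subcube spanned by the coordinates where $p$ and $q$ differ, of cardinality $2^{d_H(p,q)}$. A brute-force check (Python, illustrative only):

```python
from itertools import product

def maj(a, b, c):
    # coordinatewise majority: the median on the discrete cube {0,1}^n
    return tuple((x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c))

n = 2
cube = list(product((0, 1), repeat=n))

# (M3)' with K = 1: the five-point identity holds exactly on the cube
for a, b, c, d, e in product(cube, repeat=5):
    assert maj(a, b, maj(c, d, e)) == maj(maj(a, b, c), maj(a, b, d), e)

# intervals are subcubes: card [p,q] = 2^(Hamming distance from p to q)
for p, q in product(cube, repeat=2):
    I = {maj(p, x, q) for x in cube}
    assert len(I) == 2 ** sum(pi != qi for pi, qi in zip(p, q))
```

Since $\mathrm{maj}(p,x,q)$ fixes every coordinate where $p_i=q_i$ and is free in the remaining ones, the interval is exactly the expected subcube, illustrating why interval cardinalities can stand in for distances in the rank condition.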
In particular, in the case of rank 1, this lemma, together with Theorem \[hyperbolic char\], immediately shows that the class of quasi-geodesic, bounded valency coarse median algebras of rank 1 corresponds to the class of quasi-geodesic bounded geometry hyperbolic spaces. A categorical viewpoint ======================= To amplify and clarify the claim that coarse median spaces, coarse interval spaces and coarse median algebras are in some sense the same, we will define suitable categories and show that they are equivalent. The coarse median (space) category {#The coarse median (space) category} ---------------------------------- \[coarse median category\] Let $(X,d_X),(Y,d_Y)$ be metric spaces with coarse median operators $\mu_X,\mu_Y$ respectively, and $f: X \to Y$ be a map. 1. $f$ is a *$C$-quasi-morphism* if for $a,b,c\in X$, $\langle f(a),f(b),f(c)\rangle_Y\thicksim_C f(\mu(a,b,c)_X)$; 2. $f$ is a *$(\rho_+,C)$-coarse median morphism* if $f$ is a $C$-quasi-morphism as well as a $\rho_+$-coarse map. As usual, we omit mentioning parameters unless we are keeping track of the values. \[composition of morphisms\] Note that without the assumption of coarseness for the map in condition (2), it is not the case that morphisms compose to give morphisms. The issue is that while the coarse median of the three points $fg(a), fg(b), fg(c)$ is necessarily close to the image under $f$ of the coarse median of $g(a), g(b), g(c)$, without requiring $f$ to be coarse we cannot control the distance between this image and the image under $fg$ of the median $\mu(a,b,c)$. Given two metric spaces $(X,d_X), (Y,d_Y)$ with coarse medians $\mu_X,\mu_Y$, let $f,g$ be coarse median morphisms from $X$ to $Y$. Write $f \sim g$ if $f$ is close to $g$. This is an equivalence relation, and the equivalence class of $f$ is denoted by $[f]$.
The *coarse median category*, denoted $\CM$, is defined as follows: - The objects are triples $(X,d_X,\mu_X)$ where $(X,d_X)$ is a metric space and $\mu_X$ is a coarse median operator on $(X,d_X)$; - Given two objects $\mathcal{X}=(X,d_X,\mu_X)$ and $\mathcal{Y}=(Y,d_Y,\mu_Y)$ the morphism set is $$\mor_{\CM}(\mathcal X,\mathcal Y):=\{\mbox{~coarse median morphisms~from~}X\mbox{~to~}Y~\}/\sim;$$ - Compositions are induced by compositions of maps. The *coarse median space category*, denoted $\CMS$, is the full subcategory whose objects are coarse median *spaces*, i.e. those whose coarse median additionally satisfies axioms (M1) and (M2). The objects of $\CM$ are those satisfying Bowditch’s original definition [@bowditch2013coarse Section 8]. We now characterise categorical isomorphisms in a more practical way. \[char for iso in CM\] Let $\mathcal X, \mathcal Y$ be objects in $\CM$ and $[f]\in \mor_\CM(\mathcal X, \mathcal Y)$. Then $[f]$ is an isomorphism in the category $\CM$ *if and only if* $f$ is a coarse equivalence. Let $\mathcal{X}=(X,d_X,\mu_X)$ and $\mathcal{Y}=(Y,d_Y,\mu_Y)$. Suppose $[f]$ is an isomorphism in $\CM$, i.e., there exists another coarse median morphism $g: Y \to X$ such that $[f][g]=[\id_Y]$ and $[g][f]=[\id_X]$. Hence clearly, $f$ is a coarse equivalence. On the other hand, suppose $f: X \to Y$ is a $(\rho_+,C)$-coarse median morphism as well as a $(\rho_+,C)$-coarse equivalence. In other words, there exists a $\rho_+$-coarse map $g: Y\to X$ such that $fg$ and $gf$ are $C$-close to the identities. It suffices to show that $g$ is a coarse median morphism. For any $x,y,z\in Y$, since $fg \thicksim_C \id_Y$, there exist $a,b,c\in X$ such that $f(a)\thicksim_C x$, $f(b)\thicksim_C y$ and $f(c)\thicksim_C z$. Since $g$ is $\rho_+$-bornologous, we have $gf(a) \thicksim_{\rho_+(C)}g(x)$, $gf(b) \thicksim_{\rho_+(C)}g(y)$ and $gf(c) \thicksim_{\rho_+(C)}g(z)$. 
Let $\rho_X,\rho_Y$ be the uniform bornology parameters of $\mathcal X,\mathcal Y$ provided by (C1). Then we have $$\mu({g(x),g(y),g(z)})_X \thicksim_{\rho_X(3\rho_+(C))}\mu({gf(a),gf(b),gf(c)})_X \thicksim_{\rho_X(3C)}\mu(a,b,c)_X.$$ We also have $$g(\mu(x,y,z)_Y)\thicksim_{\rho_+(\rho_Y(3C))}g(\mu({f(a),f(b),f(c)})_Y)\thicksim_{\rho_+(C)}gf(\mu(a,b,c)_X) \thicksim_C \mu(a,b,c)_X.$$ Combining these, we have $$\mu({g(x),g(y),g(z)})_X\thicksim_{C'}g(\mu(x,y,z)_Y)$$ for $C'=\rho_X(3\rho_+(C))+\rho_X(3C)+\rho_+(\rho_Y(3C))+\rho_+(C)+C$. \[dep of para for coarse median iso\] Recall from Definition \[cms isom\] that a $(\rho_+,C)$-coarse median isomorphism $f$ is a $(\rho_+,C)$-coarse median morphism and a $(\rho_+,C)$-coarse equivalence. Hence the previous lemma states that a coarse median morphism is a coarse median isomorphism *if and only if* it represents a categorical isomorphism. Any $(\rho_+,C)$-coarse inverse $g$ for $f$ is a $(\rho_+,C')$-coarse median isomorphism with the constant $C'$ depending only on $\rho_+,C$ and parameters of $X,Y$. And in this case, $[g]$ is a categorical inverse of $[f]$. We now discuss the relationship between the categories of coarse median spaces, $\CMS$, and coarse median structures, $\CM$. \[cms equiv cm\] The inclusion functor $\iota_\mathcal{M}:\CMS \hookrightarrow \CM$ gives an equivalence of categories. As $\CMS$ is a full subcategory of $\CM$, it suffices to show that each object in $\CM$ is isomorphic to an object of $\CMS$. For $(X,d,\mu)$ an object in $\CM$, as noted in Remark \[M1M2 remark\], $\mu$ is uniformly close to another coarse median $\mu'$ satisfying (M1) and (M2). The identity map $\mathrm{Id}_X$ is then a coarse median isomorphism from $(X,d,\mu')$ to $(X,d,\mu)$ and thus gives an isomorphism in $\CM$. The coarse interval (space) category ------------------------------------ We will define the coarse interval category and the coarse interval space category in this subsection.
As we did in the coarse median case, let us start with morphisms. Let $(X,d_X),(Y,d_Y)$ be two metric spaces with coarse interval structures $\I_X,\I_Y$, respectively. A map $f: X \to Y$ is said to be a *$(\rho_+,C)$-coarse interval morphism*, if $f$ is a $\rho_+$-coarse map, and for any $a,b\in X$, $f([a,b]) \subseteq \N_C([f(a),f(b)])$. As usual, we omit mentioning parameters unless they are required. Given coarse interval morphisms $f,g$ from $X$ to $Y$, we introduce the notation $f \sim g$ if $f$ is close to $g$. This is an equivalence relation, and we denote the equivalence class of $f$ by $[f]$. The *coarse interval category*, denoted $\CI$, is defined as follows: - The objects are triples $(X,d_X,\I_X)$ where $(X,d_X)$ is a metric space and $\I_X$ is a coarse interval structure on $(X,d_X)$; - Given two objects: $\mathcal{X}=(X,d_X,\I_X)$ and $\mathcal{Y}=(Y,d_Y,\I_Y)$, the morphism set is $$\mor_{\CI}(\mathcal X,\mathcal Y):=\{\mbox{~coarse~interval~morphisms~from~}X\mbox{~to~}Y~\}/\sim;$$ - Compositions are induced by compositions of maps. The *coarse interval space category*, denoted $\CIS$, is the full subcategory whose objects are coarse interval *spaces*, i.e. those satisfying the stronger axioms (I1)–(I3). As in Lemma \[char for iso in CM\], we can characterise categorical isomorphisms in a more practical way. Let us start with the following observation: \[hausdorff control\] Let $(X,d_X),(Y,d_Y)$ be two metric spaces with coarse interval structures $\I_X,\I_Y$ respectively, and $f:X \to Y$ be a coarse interval morphism as well as a coarse equivalence. Then there exists some constant $D>0$ such that for any $a,b\in X$, $$d_H(f([a,b]),[f(a),f(b)])\leqslant D.$$ Suppose $f$ is a $(\rho_+,C)$-coarse interval morphism with $C\geqslant 3\kappao$ where $\kappao$ is the parameter of $\I_Y$ given in axioms (I1)’ and (I3)’, and $g:Y\to X$ is a $\rho_+$-bornologous map such that $f\circ g\thicksim_C \id_{Y}$ and $g\circ f \thicksim_C \id_{X}$.
For any point $z\in [f(a),f(b)]$, $f(c)\thicksim_C z$ for $c=g(z)$. Hence by Remark \[ends close to intervals\], as $C\geqslant 3\kappao$, we have $$f(c) \in \N_C([f(a),f(b)]) \cap \N_C([f(b),f(c)]) \cap \N_C([f(c),f(a)]).$$ On the other hand, since $f$ is a $(\rho_+,C)$-coarse interval morphism, we have $$\begin{aligned} f([a,b]\cap [b,c]\cap [c,a]) &\subseteq& f([a,b])\cap f([b,c]) \cap f([c,a]) \\ &\subseteq& \N_C([f(a),f(b)]) \cap \N_C([f(b),f(c)]) \cap \N_C([f(c),f(a)]),\end{aligned}$$ which has diameter at most $C'$ for some constant $C'$ by axiom (I3)’. Hence there exists $c'\in [a,b]$ such that $f(c)\thicksim_{C'}f(c')$, which implies that $z\thicksim_C f(c)\thicksim_{C'}f(c')$, i.e., $z\in \N_{C+C'}(f([a,b]))$. Taking $D=C+C'$, we have $d_H(f([a,b]),[f(a),f(b)])\leqslant D$ as required. Now we give a characterisation of categorical isomorphisms in $\CI$ and $\CIS$. \[char for iso in CI\] Let $(X,d_X,\I_X),(Y,d_Y,\I_Y)$ be two coarse interval spaces, and $f:X \to Y$ be a coarse interval morphism. Then $[f]$ is an isomorphism in $\CI$ *if and only if* $f$ is a coarse equivalence. The same holds in $\CIS$ by restricting to this full subcategory. Suppose $[f]$ is an isomorphism in $\CI$, i.e., there exists another coarse interval morphism $g: Y \to X$ such that $[f][g]=[\id_Y]$ and $[g][f]=[\id_X]$. Hence clearly, $f$ is a coarse equivalence. On the other hand, suppose $f$ is a $(\rho_+,C)$-coarse interval morphism and $g:Y\to X$ is $\rho_+$-coarse such that $fg\thicksim_C \id_{Y}$, $gf \thicksim_C \id_{X}$. It suffices to show that there exists some constant $C'>0$ such that for any $z,w\in Y$, $g([z,w]) \subseteq \N_{C'}([g(z),g(w)])$. Since $fg\thicksim_C \id_{Y}$, we have $z \thicksim_C f(z')$ and $w \thicksim_C f(w')$ for $z'=g(z)$ and $w'=g(w)$. By axioms (I1)’, (I2)’, there exists some constant $K>0$ such that $[z,w] \subseteq \mathcal N_K([f(z'),f(w')])$.
Hence $$g([z,w]) \subseteq g(\mathcal N_K([f(z'),f(w')])) \subseteq \N_{\rho_+(K)}( g([f(z'),f(w')]) ).$$ By Lemma \[hausdorff control\], there exists a constant $D>0$ such that $[f(z'),f(w')]\subseteq \N_D(f[z',w'])$, which implies that $$\begin{aligned} g([z,w]) &\subseteq& \N_{\rho_+(K)}( g([f(z'),f(w')]) ) \subseteq \N_{\rho_+(K)}(g(\N_D(f([z',w'])))) \\ &\subseteq& \N_{\rho_+(K)+\rho_+(D)}(gf([z',w'])) \subseteq \N_{C'}([z',w'])=\N_{C'}([g(z),g(w)]),\end{aligned}$$ where $C'=\rho_+(K)+\rho_+(D)+C$ depends only on $\rho_+,C$ and the parameters of $\I_X,\I_Y$. According to the above characterisation, we give the following definition. Let $(X,d_X),(Y,d_Y)$ be two metric spaces with coarse interval structures $\I_X,\I_Y$ respectively. A map $f: X \to Y$ is said to be a *$(\rho_+,C)$-coarse interval isomorphism*, if $f$ is a $(\rho_+,C)$-coarse interval morphism as well as a $(\rho_+,C)$-coarse equivalence. By Lemma \[char for iso in CI\], $f$ is a coarse interval isomorphism *if and only if* $[f]$ is a categorical isomorphism. Furthermore, for a $(\rho_+,C)$-coarse interval isomorphism, any $(\rho_+,C)$-coarse inverse is a $(\rho_+,C')$-coarse interval isomorphism with the constant $C'$ depending only on $\rho_+,C$ and parameters of $X,Y$. \[ci equiv cis\] The inclusion functor $\iota_{\I}:\CIS \hookrightarrow \CI$ gives an equivalence of categories. This follows from Lemma \[coarse interval close\], and the argument is similar to the proof of Proposition \[cms equiv cm\], hence omitted. Equivalence of the coarse median and coarse interval categories --------------------------------------------------------------- Now we construct functors connecting the categories $\CM$ and $\CI$ (and their subcategories $\CMS$ and $\CIS$), and show that they are equivalent.
First, Theorem \[induce cm and ci\] (1) offers a functor from $\CM$ to $\CI$ as follows: \[morphisms\] Let $(X,d_X,\mu_X), (Y,d_Y,\mu_Y)$ be objects in the category $\CM$, and $f:X\rightarrow Y$ be a $(\rho_+,C)$-coarse median morphism. Suppose $\I_X, \I_Y$ are the induced coarse interval structures on $X, Y$ respectively. Then $f$ is a $(\rho_+,C)$-coarse interval morphism from $(X, d_X, \I_X)$ to $(Y, d_Y, \I_Y)$. For any $x,y,z\in X$, we have $f(\mu(x,y,z)_X) \thicksim_C \mu({f(x),f(y),f(z)})_Y$. Hence for $\mu(x,y,z)_X \in [x,y]$, we have $f(\mu(x,y,z)_X) \in \N_C([f(x),f(y)])$. So $f([x,y]) \subseteq \N_C([f(x),f(y)])$, and we finish the proof. \[functor F\] We define a functor $F:\CM\rightarrow \CI$ by setting $F(X,d_X,\mu_X)=(X,d_X,\I_X)$ where $\I_X$ is the induced coarse interval structure on $X$ and defining $F[f]=[f]$ on morphisms. This is well defined by Lemma \[morphisms\] and also restricts to give a functor $F_{\mathcal{S}}: \CMS \rightarrow \CIS$ by Proposition \[coarse interval\]. Now we consider the opposite direction. Theorem \[induce cm and ci\] (2) provides a functor from $\CI$ to $\CM$ as follows: \[morphisms2\] Let $(X,d_X,\I_X), (Y,d_Y,\I_Y)$ be objects in the category $\CI$, and let $f:X\rightarrow Y$ be a $(\rho_+,C)$-coarse interval morphism. Suppose $\mu_X, \mu_Y$ are the induced coarse medians on $X, Y$ respectively. Then $f$ is a $(\rho_+, \g(\rho_+(\kappao)+C))$-coarse median morphism from $(X, d_X, \mu_X)$ to $(Y, d_Y, \mu_Y)$, where $\kappao$ is the parameter in axiom (I1)’ for $(X,d_X,\I_X)$ and $\g$ is the parameter in axiom (I3)’ for $(Y,d_Y,\I_Y)$. By definition, $f([x,y]) \subseteq \N_C([f(x),f(y)])$ for any $x,y\in X$. 
Now we have: $$\begin{aligned} f(\mu(a,b,c)_X) &\in & f(\N_{\kappao}([a,b])\cap \N_{\kappao}([b,c])\cap \N_{\kappao}([c,a]) ) \\ &\subseteq& \N_{\rho_+(\kappao)}(f([a,b])) \cap \N_{\rho_+(\kappao)}(f([b,c])) \cap \N_{\rho_+(\kappao)}(f([c,a])) \\ &\subseteq & \N_{C'}([f(a),f(b)]) \cap \N_{C'}([f(b),f(c)]) \cap \N_{C'}([f(c),f(a)]) \\ &\subseteq & B_{\g(C')}(\mu({f(a),f(b),f(c)})_Y)\end{aligned}$$ for $C'=\rho_+(\kappao)+C$, and any $a,b,c \in X$. Hence, we have $$f(\mu(a,b,c)_X) \thicksim_{\g(C')} \langle f(a),f(b),f(c)\rangle_Y,$$ which implies $f$ is a $(\rho_+, \g(\rho_+(\kappao)+C))$-coarse median morphism. \[functor G\] We define a functor $G:\CI\rightarrow \CM$ by setting $G(X,d_X,\I_X)=(X,d_X,\mu_X)$, where $\mu_X$ is the induced coarse median on $X$, and defining $G[f]=[f]$ on morphisms. This is well defined by Theorem \[coarse interval converse\] and Lemma \[morphisms2\], restricting to give a functor $G_{\mathcal{S}}: \CIS \rightarrow \CMS$. \[cat equiv\] The functors $F$ and $G$ from Definitions \[functor F\], \[functor G\] provide an equivalence of categories between coarse median structures ($\CM$) and coarse interval structures ($\CI$). This equivalence restricts to give an equivalence of categories between coarse median spaces ($\CMS$) and coarse interval spaces ($\CIS$). It suffices to show that $G\circ F$ is naturally equivalent to $\id_{\CM}$, and $F\circ G$ is naturally equivalent to $\id_{\CI}$. **(1).** First consider $G\circ F$. Given a metric space $(X,d_X)$ with a coarse median $\mu_X$, we have $F(X,d_X,\mu_X)=(X,d_X,\I_X)$ where $\I_X$ is the induced coarse interval structure. Now apply $G$ to the triple $(X,d_X,\I_X)$ and denote the induced operator by $\mu'_X$. More precisely, for any $x,y,z\in X$, $\langle x,y,z\rangle_X'$ is some point chosen from the intersection $\N_{\kappao}([x,y])\cap \N_{\kappao}([y,z])\cap \N_{\kappao}([z,x])$, which is uniformly bounded and contains $\mu(x,y,z)_X$ by Theorem \[induce cm and ci\]. 
Hence the identity $\id_X: (X,d_X,\mu_X) \to (X,d_X,\mu'_X)$ is a coarse median isomorphism, giving a natural isomorphism from $\id_\CM$ to $G\circ F$ as follows: $$\xymatrixcolsep{1.8cm}\xymatrixrowsep{1.8cm}\xymatrix{ (X,d_X,\mu_X) \ar[r]^-{\textstyle \id_X} \ar[d]^{\textstyle \id_\CM([f])} & G\circ F(X,d_X,\mu_X)=(X,d_X,\mu'_X) \ar[d]^{\textstyle G\circ F([f])} \\ (Y,d_Y,\mu_Y) \ar[r]^-{\textstyle\id_Y} & G\circ F(Y,d_Y,\mu_Y)=(Y,d_Y,\mu'_Y). }$$ This restricts to give a natural isomorphism from $\id_\CMS$ to $G_{\mathcal S}\circ F_\mathcal{S}$. **(2).** Next consider $F\circ G$. Given a coarse interval structure $(X,d_X,\I_X)$, we have $G(X,d_X,\I_X)=(X,d_X,\mu_X)$ where $\mu_X$ is the induced coarse median operator on $X$. More precisely, for any $x,y,z\in X$, $\mu(x,y,z)_X$ is some point chosen from $\N_{\kappao}([x,y])\cap \N_{\kappao}([y,z])\cap \N_{\kappao}([z,x])$. Now apply $F$ to the triple $(X,d_X,\mu_X)$ and denote the induced interval structure by $\I_X'$. Note that for any $z\in X$, $\mu(x,z,y)_X \in \N_{\kappao}([y,x])\subseteq \N_{2\kappao}([x,y])$, hence $[x,y]' \subseteq \N_{2\kappao}([x,y])$. On the other hand, by Remark \[ends close to intervals\], we have $z\in [x,y]\cap \N_{3\kappao}([y,z]) \cap \N_{3\kappao}([z,x])$ for any $z\in [x,y]$. It follows that both $z$ and $\mu(x,y,z)_X$ lie in $\N_{\kappao}([x,y])\cap \N_{3\kappao}([y,z])\cap \N_{3\kappao}([z,x])$. So by axiom (I3)’, we have $z\thicksim_K \mu(x,y,z)_X \in [x,y]'$ for $K=\psi(3\kappao)>0$. Hence, $[x,y]\subseteq \N_K([x,y]')$, which implies $d_H([x,y],[x,y]') \leqslant \max\{2\kappao,K\}$ for any $x,y\in X$.
Therefore, the identity $\id_X: (X,d_X,\I_X) \to (X,d_X,\I'_X)$ is a coarse interval isomorphism, giving a natural isomorphism from $\id_\CI$ to $F\circ G$ as follows: $$\xymatrixcolsep{1.8cm}\xymatrixrowsep{1.8cm}\xymatrix{ (X,d_X,\I_X) \ar[r]^-{\textstyle \id_X} \ar[d]^{\textstyle \id_\CI([f])} & F\circ G(X,d_X,\I_X)=(X,d_X,\I'_X) \ar[d]^{\textstyle F\circ G([f])} \\ (Y,d_Y,\I_Y) \ar[r]^-{\textstyle \id_Y} & F\circ G(Y,d_Y,\I_Y)=(Y,d_Y,\I'_Y). }$$ As usual this restricts to give a natural isomorphism from $\id_\CIS$ to $F_{\mathcal S}\circ G_\mathcal{S}$. Combining Propositions \[cms equiv cm\], \[ci equiv cis\], Theorem \[cat equiv\] and Corollary \[rank preserving\], we obtain the following. \[cat equiv final\] Consider the following diagram: $$\xymatrix@=1.25cm{ \CM \ar@/^/[r]^{\textstyle F} & \CI \ar@/^/[l]^{\textstyle G} \\ \CMS \ar[u]^{\textstyle\iota_{\mathcal{M}}} \ar@/^/[r]^{\textstyle F_{\mathcal{S}}} & \CIS. \ar[u]_{\textstyle\iota_{\mathcal{I}}} \ar@/^/[l]^{\textstyle G_{\mathcal{S}}} }$$ We have: - $F \circ \iota_{\mathcal{M}} = \iota_{\mathcal{I}} \circ F_{\mathcal{S}}$; - $\iota_{\mathcal{M}} \circ G_{\mathcal{S}} = G \circ \iota_{\mathcal{I}}$; - $\iota_{\mathcal{M}}$ gives an equivalence of categories between $\CMS$ and $\CM$; - $\iota_{\mathcal{I}}$ gives an equivalence of categories between $\CIS$ and $\CI$; - $(F,G)$ gives an equivalence of categories between $\CM$ and $\CI$; - $(F_{\mathcal{S}},G_{\mathcal{S}})$ gives an equivalence of categories between $\CMS$ and $\CIS$. Furthermore, all of these functors preserve rank in the sense of coarse median structures and coarse interval structures. We finally note that one can restrict the allowed metric spaces to quasi-geodesic spaces. In this case the above equivalences of categories restrict to equivalences between the full subcategories of quasi-geodesic coarse median spaces and quasi-geodesic coarse interval spaces. 
Comparing the categories of coarse median algebras and coarse median spaces --------------------------------------------------------------------------- In the spirit of Section \[The coarse median (space) category\] we now consider the category of bounded valency coarse median algebras. A *coarse median algebra map* from $(X, \mu_X)$ to $(Y, \mu_Y)$ is defined to be a finite-to-$1$ map $f:X\to Y$ such that 1. there exists a constant $C$ such that for all $a,b,c\in X$, $$\sharp[\mu({f(a),f(b),f(c)})_Y, f(\mu(a,b,c)_X)]_Y\leq C.$$ 2. there exists a non-decreasing function $\rho:\Rp\rightarrow \Rp$ such that for all $a,b\in X$, $$\sharp[f(a), f(b)]\leq \rho(\sharp[a,b]).$$ When $C$ can be taken to be $1$ then $\mu({f(a),f(b),f(c)})_Y= f(\mu(a,b,c)_X)$, and $f$ is a morphism of ternary algebras. In particular, when $X$ and $Y$ are median algebras and $C=1$, $f$ is a morphism of median algebras, and the second condition requires that $f$ is also a coarse map in the geometric sense. From the algebraic point of view one would not expect the second condition to be required; however, without this condition the composition of coarse median algebra maps would not in general yield another coarse median algebra map (cf. Remark \[composition of morphisms\]). The proof that, with this definition, composition behaves as required relies again on the comparability of the induced metric with the cardinality of intervals: $$d_\mu{}(a,b) < \sharp [a,b]\leq C(d_\mu{}(a,b))$$ for a non-decreasing function $C:\Rp\rightarrow \Rp$ provided by Lemma \[finiteness\]. Two coarse median algebra maps $f,g$ are said to be *equivalent* if there is a constant $D$ such that for all $x\in X$, $\sharp[ f(x), g(x)]_Y\leq D$, and a *coarse median algebra morphism* is an equivalence class of coarse median algebra maps.
Equipping a coarse median algebra $(X, \mu_X)$ with the induced metric provides a functor from the category of bounded valency coarse median algebras to the category of bounded valency, bounded geometry coarse median spaces. The forgetful map which converts a bounded valency, bounded geometry coarse median space to the underlying coarse median algebra is a left inverse to this functor, but is not in general functorial. ![The tree $T$ with the subspace $X$ identified by the solid vertices[]{data-label="tree_example"}](Tree_example.pdf) Consider the tree $T$ obtained from $\mathbb Z$ by adding a spike of length $|n|$ to each integer $n$. As a tree this is naturally a discrete median space and can be viewed as a coarse median space with its natural path metric. Now take the subspace $X$ consisting of the original integer points, together with the leaves of the tree, and equip this with the subspace metric, see Figure \[tree\_example\]. This is a median sub-algebra and the inclusion is a morphism of coarse median spaces. However it is not a morphism of coarse median algebras, since taking $a$ to be the leaf on the spike based at the integer $b$, the interval $[a,b]_X$ has cardinality $2$, while its image in $T$ has cardinality $|b|+1$, contravening the second condition. Once again this illustrates that it is possible to endow a coarse median algebra with a metric which does not fully respect the algebraic structure. However, restricting to the quasi-geodesic world, or, more generally, imposing the induced metric prevents these problems and makes the forgetful map functorial. Applying Theorem \[unique metric prop 2\] we obtain the following theorem showing that, just as CAT(0) cube complexes can be studied combinatorially as median algebras, coarse median spaces can be studied as coarse median algebras.
The forgetful functor, together with the “induced metric” functor, provides an equivalence of categories from *bounded geometry, quasi-geodesic coarse median spaces* to *bounded valency, quasi-geodesic coarse median algebras*, and this equivalence preserves rank. [^1]: JZ is supported by the Sino-British Fellowship Trust of the Royal Society. [^2]: This is perhaps counterintuitive: firstly because interval cardinality is far from being a metric, and secondly because even in a geodesic coarse median space the geodesic between two points can lie well outside the corresponding interval (see [@niblo2017four]).
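The spike-tree counterexample above can also be checked computationally. The sketch below (ours, not from the paper; the vertex encoding and the cut-off $M$ are arbitrary choices) builds a finite piece of $T$, computes medians as the common point of the three pairwise geodesics, and confirms that for the leaf $a$ above the integer $b=5$ the interval $[a,b]_X$ has cardinality $2$ while the $T$-geodesic from $a$ to $b$ has $|b|+1$ points.

```python
from collections import deque

# A finite piece of the tree T: integers -M..M joined in a line, with a
# spike of |n| extra vertices above each integer n.  Encoding (ours):
# ('z', n) is the integer n, ('s', n, k) is the k-th spike vertex above n.
M = 6
adj = {}

def add_edge(u, v):
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

for n in range(-M, M):
    add_edge(('z', n), ('z', n + 1))
for n in range(-M, M + 1):
    prev = ('z', n)
    for k in range(1, abs(n) + 1):
        add_edge(prev, ('s', n, k))
        prev = ('s', n, k)

def geodesic(a, b):
    # the unique simple path between a and b in the tree, found by BFS
    par, q = {a: None}, deque([a])
    while b not in par:
        x = q.popleft()
        for y in adj[x]:
            if y not in par:
                par[y] = x
                q.append(y)
    path = [b]
    while path[-1] != a:
        path.append(par[path[-1]])
    return path[::-1]

def median(a, b, c):
    # the unique vertex lying on all three pairwise geodesics
    common = set(geodesic(a, b)) & set(geodesic(b, c)) & set(geodesic(c, a))
    (m,) = common
    return m

# X = the integer points together with the leaves of the spikes
X = [('z', n) for n in range(-M, M + 1)] + \
    [('s', n, abs(n)) for n in range(-M, M + 1) if n != 0]

def interval_X(a, b):
    # interval in the median subalgebra X: the points x with mu(a, x, b) = x
    return [x for x in X if median(a, x, b) == x]

b, a = ('z', 5), ('s', 5, 5)          # the integer 5 and the leaf above it
assert len(interval_X(a, b)) == 2     # [a,b]_X = {a, b}: cardinality 2 ...
assert len(geodesic(a, b)) == 5 + 1   # ... while the T-interval has |b|+1 points
```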
--- abstract: 'A geometric interpretation of curvature and torsion of linear transports along paths is presented. A number of (Bianchi type) identities satisfied by these quantities are derived. The obtained results contain as special cases the corresponding classical ones concerning curvature and torsion of linear connections.' author: - 'Bozhidar Z. Iliev [^1] [^2] [^3]' bibliography: - 'bozhopub.bib' - 'bozhoref.bib' date: | Ended: November 28, 1995\ Revised: March 25, November 6, 1996\ Updated: July 9, 1998\ Produced:\ Submitted to JINR communication: January 13, 1997\ Published: Communication JINR, E5-97-1, Dubna, 1997\ LANL xxx archive E-print No. dg-ga/9709017\ title: | **Linear transports along paths\ in vector bundles\ V. Properties of curvature and torsion** --- [l-tran-5.bbl]{} [1]{} J. A. Schouten. [*Ricci-Calculus*]{}. Springer Verlag, Berlin-G[ö]{}ttingen-Heidelberg, second edition, 1954. S. Helgason. [*Differential Geometry, Lie Groups, and Symmetric Spaces*]{}. Academic Press, New York-San Francisco-London, 1978. Bozhidar Z. Iliev. Linear transports along paths in vector bundles. [III]{}. [Curvature]{} and torsion. JINR Communication E5-93-261, Dubna, 1993. Bozhidar Z. Iliev. Linear transports along paths in vector bundles. [I]{}. [General]{} theory. JINR Communication E5-93-239, Dubna, 1993. Bozhidar Z. Iliev. Some generalizations of the [Jacobi]{} identity with applications to the curvature- and torsion-depending hamiltonians of particle systems. In J. [Ł]{}awrynowicz, editor, [*Hurwitz-type structures and applications to surface physics*]{}, number [II]{} in Deformations of Mathematical Structures, pages 161–188. Kluwer Academic Publishers Group, Dordrecht-Boston-London, 1993. (Papers from the Seminars on Deformations 1988–1992). Bozhidar Z. Iliev. Deviation equations in spaces with a transport along paths. JINR Communication E5-94-40, Dubna, 1994. J. A. Schouten. [*Tensor Analysis for Physicists*]{}. Clarendon Press, Oxford, 1951.
[Figure \[Fig1\]: the net $\eta$, with the coordinate paths $\eta(\cdot,t)$, $\eta(\cdot,t+\varepsilon)$, $\eta(s,\cdot)$, $\eta(s+\delta,\cdot)$, the linear elements $A=\delta\eta'(s,t)$ and $B=\varepsilon\eta''(s,t)$, and their transports $A_1=L_{t\to t+\varepsilon}^{\eta(s,\cdot)}A$, $A_2=L_{s\to s+\delta}^{\eta(\cdot,t+\varepsilon)}A_1$, $B_1=L_{s\to s+\delta}^{\eta(\cdot,t)}B$, $B_2=L_{t\to t+\varepsilon}^{\eta(s+\delta,\cdot)}B_1$.]

[Figure 2: the same net of paths $\eta(\cdot,t)$, $\eta(\cdot,t+\varepsilon)$, $\eta(s,\cdot)$ and $\eta(s+\delta,\cdot)$ through the points $\eta(s,t)$, $\eta(s+\delta,t)$, $\eta(s,t+\varepsilon)$ and $\eta(s+\delta,t+\varepsilon)$.]

**Introduction** {#I} ================ The properties of the curvature and torsion tensors of a linear connection, and the Bianchi identities they satisfy, are
well-known [@Schouten/Ricci; @Helgason]. Looking over the definitions and properties of the curvature and torsion of linear transports along paths given in [@f-LTP-Cur+Tor], one can expect to find similar results in this more general case too. This paper is devoted to their derivation. Sect. \[II\] reviews some definitions and results from [@f-LTP-general; @f-LTP-Cur+Tor] and also contains new ones needed for our investigation. Sect. \[III\] proposes a geometrical interpretation of the torsion of a linear transport along paths, based on the question of whether an ‘infinitesimal’ parallelogram exists. Sect. \[IV\] deals with the geometrical meaning of the curvature of a linear transport along paths. It is shown that the curvature governs the main change of a vector after a suitable transportation along a ‘small’ (infinitesimal) closed path. Sect. \[V\] derives the generalizations of the Bianchi identities in the case of linear transports along paths. This is done by using the method developed in [@f-Jacobi] for obtaining many-point generalizations of the Jacobi identity. Sect. \[VI\] closes the paper with some concluding remarks, including a criterion for flatness of a linear transport along paths. **Some preliminary definitions and results** {#II} ============================================ Below we summarize some definitions and results on linear transports along paths in vector bundles and their curvature and torsion that are needed for this investigation. Let $(E,\pi,B)$ be a real[^4] vector bundle with base $B$, total space $E$ and projection $\pi:E\to B$. The fibres $\pi^{-1}(x)$, $x\in B$, are supposed to be isomorphic real vector spaces. Let $\gamma:J\to B$, with $J$ being a real interval, be an arbitrary path in $B$.
According to [@f-LTP-general definition 2.5] a linear transport (L-transport) in $(E,\pi,B)$ is a map $L:\gamma\mapsto L^\gamma$, where the L-transport along $\gamma$ is $L^\gamma:(s,t)\mapsto L^\gamma_{s\to t}$, $s,t\in J$. Here $L^\gamma_{s\to t}: \pi^{-1}(\gamma(s))\to \pi^{-1}(\gamma(t))$ is the L-transport along $\gamma$ from $s$ to $t$. It satisfies the equations: $$\begin{aligned} & & \!\!\! \!\!\! \!\!\! \!\!\! \! L^\gamma_{s\to t}(\lambda u + \mu v) = \lambda L^\gamma_{s\to t}u + \mu L^\gamma_{s\to t}v, \ \lambda,\mu \in {\mathbb{R}}, \ u,v \in \pi^{-1}(\gamma(s)), \label{2.1} \\ & & L^\gamma_{t\to r} \circ L^\gamma_{s\to t} = L^\gamma_{s\to r}, \quad r,s,t \in J, \label{2.2} \\ & & L^\gamma_{s\to s} = id_{\pi^{-1}(\gamma(s))} \label{2.3}\end{aligned}$$ with $id_U$ being the identity map of the set $U$. Propositions 2.1 and 2.3 of [@f-LTP-general] state that the general structure of $L^\gamma_{s\to t}$ is $$L^\gamma_{s\to t}= \left( F^\gamma_t\right) ^{-1} \circ F^\gamma_s, \ s,t\in J \label{2.4}$$ where the map $F^\gamma_s:\pi^{-1}(\gamma(s)) \to V$ is a linear isomorphism onto a vector space $V$. The map $F^\gamma_s$ is defined up to a left composition with a linear isomorphism $D^\gamma:V\to \underline{V}$, with $\underline{V}$ being a vector space, i.e. up to the change $F^\gamma_s \to D^\gamma \circ F^\gamma_s $. Let $\{e_i(s)\}$ be a basis in $\pi^{-1}(\gamma(s))$. Here and below the indices $i,j,k,...$ run from 1 to $\dim(\pi^{-1}(x))=\mathrm{const}=:n$. The matrix of the L-transport $L$ (see [@f-LTP-general p. 5]), $H(t,s;\gamma)= \left[ H^i_j(t,s;\gamma) \right] = H^{-1}(s,t;\gamma)$, is defined by $L^\gamma_{s\to t}e_j(s)=H^i_j(t,s;\gamma)e_i(t) $, where hereafter summation over repeated indices is assumed. The matrix of the coefficients of $L$ [@f-LTP-general p. 13] is $\Gamma_\gamma(s) = \left[ \Gamma^i_j(s;\gamma) \right] = {\partial H(s,t;\gamma)/\partial t }\left.\right|_{t=s}$.
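The structural result (\[2.4\]) makes the defining properties (\[2.1\])–(\[2.3\]) easy to verify numerically: any nondegenerate matrix function $F$ yields an L-transport along a fixed path. A sketch (ours, not from the paper; the particular $F$ is an arbitrary invertible choice):

```python
import numpy as np

# A concrete rank-2 L-transport along a fixed path gamma, realized via (2.4):
#   L_{s->t} = F(t)^{-1} F(s),  with an arbitrary nondegenerate F(s).
def F(s):
    return np.array([[1.0, s],
                     [0.0, np.exp(s)]])   # det F(s) = e^s > 0: always invertible

def L(s, t):
    return np.linalg.inv(F(t)) @ F(s)

rng = np.random.default_rng(0)
u, v = rng.standard_normal(2), rng.standard_normal(2)
s, t, r = 0.2, 0.7, 1.3

# (2.1) linearity in the transported vector
assert np.allclose(L(s, t) @ (2 * u + 3 * v), 2 * (L(s, t) @ u) + 3 * (L(s, t) @ v))
# (2.2) composition law: L_{t->r} o L_{s->t} = L_{s->r}
assert np.allclose(L(t, r) @ L(s, t), L(s, r))
# (2.3) identity: L_{s->s} = id
assert np.allclose(L(s, s), np.eye(2))
# the matrix property H(t,s;gamma) = H^{-1}(s,t;gamma)
assert np.allclose(L(s, t), np.linalg.inv(L(t, s)))
```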
Therefore for a $C^2$ L-transport, we have $$\begin{aligned} & & H^{\pm 1}(s+\varepsilon,s;\gamma) = H^{\mp 1}(s,s+\varepsilon;\gamma)= \nonumber \\ & & = \openone \mp \varepsilon \Gamma_\gamma(s) + {\varepsilon^2 \over 2} \left(\Gamma_\gamma(s) \Gamma_\gamma(s) \mp {\partial \Gamma_\gamma(s) \over \partial s} \right) + O(\varepsilon ^3), \label{2.5}\end{aligned}$$ with $\openone$ being the unit matrix. Here we have used $$\label{2.6} \begin{array}{c} \rule[-1.123456789em]{0em}{0em} \left. {\partial^2 H(s,t;\gamma) \over \partial t^2} \right|_{t=s} = \Gamma_\gamma(s) \Gamma_\gamma(s) + {\partial \Gamma_\gamma(s) \over \partial s}, \\ \left. {\partial^2 H(t,s;\gamma) \over \partial t^2}\right|_{t=s} = \Gamma_\gamma(s) \Gamma_\gamma(s) - {\partial \Gamma_\gamma(s) \over \partial s}. \end{array}$$ These equations follow from the fact that the general form of the matrix $H$ is $H(t,s;\gamma)=F^{-1}(t;\gamma) F(s;\gamma)$ for some nondegenerate matrix function $F$ [@f-LTP-general]. Let $\eta:J\times J^\prime\to M$, with $J$ and $J^\prime$ being ${\mathbb R}$-intervals, be a $C^2$ map on the real differentiable manifold $M$ with a tangent bundle $(T(M),\pi,M)$. Let $\eta(\cdot,t):s\mapsto \eta(s,t)$ and $\eta(s,\cdot):t\mapsto \eta(s,t)$, $(s,t)\in J\times J^\prime$. Here $\eta^\prime(\cdot,t)$ and $\eta^{\prime\prime}(s,\cdot)$ denote the vector fields tangent to $\eta(\cdot,t)$ and $\eta(s,\cdot)$, respectively.
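The expansion (\[2.5\]) can be sanity-checked by finite differences in the same model $H(t,s;\gamma)=F^{-1}(t)F(s)$. In the sketch below (ours; $F(s)=\bigl(\begin{smallmatrix}1&s\\0&e^s\end{smallmatrix}\bigr)$ is an arbitrary invertible choice) the remainder indeed decays like $\varepsilon^3$:

```python
import numpy as np

# Numerical check of the expansion (2.5), in the model H(t,s) = F(t)^{-1} F(s)
# with the (hypothetical) choice F(s) = [[1, s], [0, e^s]].
def F(s):
    return np.array([[1.0, s], [0.0, np.exp(s)]])

def H(t, s):
    return np.linalg.inv(F(t)) @ F(s)

def Gamma(s, h=1e-6):
    # Gamma(s) = d/dt H(s,t) |_{t=s}, by a central difference
    return (H(s, s + h) - H(s, s - h)) / (2 * h)

def dGamma(s, h=1e-4):
    # d Gamma / d s, again by a central difference
    return (Gamma(s + h) - Gamma(s - h)) / (2 * h)

s = 0.5
G, dG = Gamma(s), dGamma(s)
for eps in (1e-1, 1e-2):
    approx = np.eye(2) - eps * G + eps**2 / 2 * (G @ G - dG)
    remainder = np.abs(H(s + eps, s) - approx).max()
    assert remainder < 5 * eps**3     # the remainder is O(eps^3)
```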
By [@f-LTP-Cur+Tor definition 2.1] the torsion (operator) of a $C^1$ L-transport $L$ in $(T(M),\pi,M)$ is a map $${\mathcal T}:\eta\mapsto{\mathcal T}^{\,\eta}:J\times J^\prime\to T(M)$$ such that $${\mathcal T}^{\,\eta} (s,t):= {\mathcal D}^{\eta(\cdot,t)}_s \eta^{\prime\prime}(\cdot,t) - {\mathcal D}^{\eta(s,\cdot)}_t \eta^{\prime}(s,\cdot) \in T_{\eta(s,t)}(M), \label{2.7}$$ where ${\mathcal D}^\gamma_s$ is the differentiation along paths associated with $L$ [@f-LTP-general], defined by $${\mathcal D}^\gamma_s \sigma := \left( {\mathcal D}^\gamma \sigma \right) (\gamma(s)) := \left. \left[ {\partial\over \partial \varepsilon}\left( L^\gamma_{s+\varepsilon \to s}\sigma(s+\varepsilon) \right) \right] \right|_{\varepsilon=0}$$ for a $C^1$ section $\sigma$. Analogously [@f-LTP-Cur+Tor], for $\eta:J\times J^\prime \to B$ the curvature (operator) of an L-transport $L$ in the vector bundle $(E,\pi,B)$ is a map $${\mathcal R}: \eta\mapsto {\mathcal R}^\eta:(s,t) \mapsto {\mathcal R}^\eta(s,t):\mathrm{Sec}^2(E,\pi,B)\to\mathrm{Sec}(E,\pi,B)$$ such that $$\label{2.8} {\mathcal R}^\eta(s,t):= {\mathcal D}^{\eta(\cdot,t)}\circ {\mathcal D}^{\eta(s,\cdot)} - {\mathcal D}^{\eta(s,\cdot)}\circ {\mathcal D}^{\eta(\cdot,t)}.$$ In terms of the coefficient matrix $\Gamma$ the components of torsion and curvature are respectively [@f-LTP-Cur+Tor] $$\begin{aligned} & & \left({\mathcal T}^{\,\eta}(s,t) \right)^i = \Gamma^i_j(s;\eta(\cdot,t)) \left( \eta^{\prime\prime}(s,t)\right)^j - \Gamma^i_j(t;\eta(s,\cdot)) \left( \eta^{\prime}(s,t) \right)^j , \label{2.9} \\ & & \left[ \left( {\mathcal R}^\eta(s,t)\right) ^i_j \right] = {\partial\over\partial s}\Gamma_{\eta(s,\cdot)}(t) - {\partial\over{\partial t}} \Gamma_{\eta(\cdot,t)}(s) \> + \nonumber\\ & & \hspace{24.3mm} + \> \Gamma_{\eta(\cdot,t)}(s)\Gamma_{\eta(s,\cdot)}(t) - \Gamma_{\eta(s,\cdot)}(t)\Gamma_{\eta(\cdot,t)}(s).
\label{2.10}\end{aligned}$$ Below we shall need the following definitions: \[d2.1\] The torsion vector field (operator) of an L-transport in the tangent bundle of a manifold is a section $T^{\,\eta}\in \mathrm{Sec}\left(\left.\left(T(M),\pi,M\right)\right|_{\eta(J,J')}\right)$ defined by $$T^{\,\eta}(\eta(s,t)):={\mathcal T}^{\,\eta}(s,t). \label{2.11}$$ Defining $ ({\mathcal D}^\gamma\sigma)(\gamma(s)) := {\mathcal D}_{s}^{\gamma}\sigma, $ from (\[2.7\]) we get $$\label{2.12} T^{\,\eta}(\eta(s,t)) = \left( {\mathcal D}^{\eta(\cdot,t)}\eta ''(\cdot,t) - {\mathcal D}^{\eta(s,\cdot)}\eta '(s,\cdot) \right)(\eta(s,t)).$$ \[d2.2\] The curvature vector field (operator) of an L-transport is a $C^2$ section $ R^\eta\in \mathrm{Sec}^2\left( \left. (E,\pi,B) \right|_{\eta(J,J')}\right) $ defined by $$R^\eta(\eta(s,t)):={\mathcal R}^\eta(s,t). \label{2.13}$$ \[d2.3\] An L-transport along paths is called flat ($\equiv$ curvature free) on a set $U\subseteq B$ if its curvature operator vanishes on $U$. It is called flat if it is flat on $B$, i.e. in the case $U=B$. **Geometrical interpretation of the torsion** {#III} ============================================= Let $\eta:J\times J'\to M$ be a $C^1$ map into the manifold $M$, $(s,t)\in J\times J'$, and let $\delta,\varepsilon\in{\mathbb{R}}$ be such that $(s+\delta,t+\varepsilon)\in J\times J'$. Below we consider $\delta$ and $\varepsilon$ as ‘small’ (infinitesimal) parameters with respect to which expansions like (\[2.5\]) will be used. Consider the following two paths from $\eta(s,t)$ to $\eta(s+\delta,t+\varepsilon)$ (see figure \[Fig1\]): the first, through $\eta(s+\delta,t)$, being a product of $\eta(\cdot,t):[s,s+\delta]\to M$ and $\eta(s+\delta,\cdot):[t,t+\varepsilon]\to M$, and the second one, through $\eta(s,t+\varepsilon)$, being a product of $\eta(s,\cdot):[t,t+\varepsilon]\to M$ and $\eta(\cdot,t+\varepsilon):[s,s+\delta]\to M$. (Here $\delta$ and $\varepsilon$ are considered as positive, but this is inessential.)
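For the parallel transport assigned to a linear connection, the coefficient matrices reduce to $\Gamma^i_{\;j}(s;\gamma)=\Gamma^i_{jk}(\gamma(s))\,\dot\gamma^k(s)$, and formula (2.9) then collapses to the contraction of the classical torsion tensor with the two tangent vectors. A numeric sketch of this reduction (the connection coefficients and the map $\eta$ below are arbitrary illustrative choices):

```python
import numpy as np

# Hypothetical connection coefficients Gamma^i_{jk}(x) on R^2, deliberately
# NOT symmetric in (j,k) so that the torsion is nonzero.
C = np.array([[[0.0, 1.0], [0.0, 0.0]],
              [[0.5, 0.0], [1.0, 0.3]]])
Gamma3 = lambda x: C * (1.0 + x[0] - 0.5 * x[1])      # simple point dependence

eta = lambda s, t: np.array([s + 0.2 * t * t, t + 0.3 * s * t])

def tangent(f, u, h=1e-6):
    return (f(u + h) - f(u - h)) / (2.0 * h)

s, t = 0.4, 0.7
x   = eta(s, t)
ep  = tangent(lambda u: eta(u, t), s)                 # eta'(s,t)
epp = tangent(lambda u: eta(s, u), t)                 # eta''(s,t)

# coefficient matrices of the two transports:
# Gamma^i_j(s;gamma) = Gamma^i_{jk}(gamma(s)) gamma-dot^k(s)
G  = Gamma3(x)
M1 = np.einsum('ijk,k->ij', G, ep)                    # along eta(., t)
M2 = np.einsum('ijk,k->ij', G, epp)                   # along eta(s, .)

T = M1 @ epp - M2 @ ep                                # formula (2.9)
# classical torsion tensor contracted with the two tangents
T_classical = np.einsum('ijk,j,k->i', G - G.transpose(0, 2, 1), epp, ep)
print(np.allclose(T, T_classical, atol=1e-10))
```

The two evaluations agree, which is the reduction to the classical case mentioned in the Conclusion.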
Up to $O(\delta^2)$ and $O(\varepsilon^2)$ the vectors $A:=\delta\eta'(s,t)$ and $B:=\varepsilon\eta''(s,t)$ are the displacement vectors [@f-DE-TP] (linear elements [@Schouten/Ricci]), respectively, of $\eta(s+\delta,t)$ and $\eta(s,t+\varepsilon)$ with respect to $\eta(s,t)$. Using (\[2.5\]) and keeping only terms of first order in $\varepsilon$ and $\delta$, we get the following *component* relation: $$\left( L_{s\to s+\delta}^{\eta(\cdot,t)} B \right)^i - \left( L_{t\to t+\varepsilon}^{\eta(s,\cdot)} A \right)^i = (B-A)^i - \delta\varepsilon\left( {\mathcal T}^{\,\eta}(s,t) \right)^i + O(\delta\varepsilon^2) + O(\delta^2\varepsilon). \label{3.1}$$ According to [@Schouten/physics ch. V, sect. 1] this result has the following interpretation. *After the ‘L-transportation’ of two linear elements $A$ and $B$ along each other we get, up to second order terms, a pentagon with a closure vector $-\delta\varepsilon{\mathcal T}^{\,\eta}(s,t)$.* This implies the existence of an infinitesimal parallelogram only in the torsion-free case. Using (\[2.5\]) again and keeping only first order terms, after some algebra, we find $$\begin{aligned} & & \nonumber \left( L_{t\to t+\varepsilon}^{\eta(s+\delta,\cdot)} \circ L_{s\to s+\delta}^{\eta(\cdot,t)} B - L_{s\to s+\delta}^{\eta(\cdot,t+\varepsilon)} \circ L_{t\to t+\varepsilon}^{\eta(s,\cdot)} A \right)^i \> = \\ & & \nonumber = \> \left[ \left( L_{t\to t+\varepsilon}^{\eta(s,\cdot)} B \right)^i - \left( L_{s\to s+\delta}^{\eta(\cdot,t)} A \right)^i \right] - \delta\varepsilon\left({\mathcal T}^{\,\eta}(s,t)\right)^i \> + \\ & & \label{3.2} \quad\> + \> O(\delta^3) + O(\delta^2\varepsilon) + O(\delta\varepsilon^2) + O(\varepsilon^3).\end{aligned}$$ Note that if $\eta$ is a family of L-paths, i.e.
$ L_{s_1\to s_2}^{\eta(\cdot,t)}\eta'(s_1,t)=\eta^\prime(s_2,t) $ and $ L_{t_1\to t_2}^{\eta(s,\cdot)}\eta''(s,t_1) = \eta^{\prime\prime}(s,t_2), $ for all $s,s_1,s_2\in J$ and $t,t_1,t_2\in J^\prime$, the expression in the square brackets in (\[3.2\]) is simply $(B-A)^i$. So, the torsion describes the first-order correction to the difference of two (infinitesimal) displacement vectors when they are (L-)transported in the above-described way. **Geometrical interpretation of curvature** {#IV} =========================================== Let $(E,\pi,B)$ be a vector bundle, $\eta:J\times J^\prime\to B$ be a $C^1$ map, and $L$ be a $C^2$ L-transport along paths in $(E,\pi,B)$. Let $(s,t)\in J\times J^\prime$ and $\delta, \varepsilon \in {\mathbb{R}}$, $\delta,\varepsilon >0$ (this condition is insignificant for the final result) be such that $(s+\delta,t+\varepsilon)\in J\times J^\prime$. Consider the paths in figure \[Fig2\]. The result of an L-transport of a vector from $\eta(s,t)$ to $\eta(s+\delta,t)$ along $\left.\eta(\cdot,t)\right|_{[s,s+\delta]}$, then from $\eta(s+\delta,t)$ to $\eta(s+\delta,t+\varepsilon)$ along $\left.\eta(s+\delta,\cdot)\right|_{[t,t+\varepsilon]}$, then from $\eta(s+\delta,t+\varepsilon)$ to $\eta(s,t+\varepsilon)$ along $\left.\eta(\cdot,t+\varepsilon)\right|_{[s,s+\delta]}$, and, finally, from $\eta(s,t+\varepsilon)$ to $\eta(s,t)$ along $\left.\eta(s,\cdot)\right|_{[t,t+\varepsilon]}$ is expressed by \[p4.1\] For any $C^2$ L-transport, we have $$\begin{aligned} & & \!\!\!\! \!\!\!\! \!\!\!\!\! L_{t+\varepsilon\to t}^{\eta(s,\cdot)} \circ L_{s+\delta\to s}^{\eta(\cdot,t+\varepsilon)} \circ L_{t\to t+\varepsilon}^{\eta(s+\delta,\cdot)} \circ L_{s\to s+\delta}^{\eta(\cdot,t)} = \nonumber \\ & & \!\!\!\!\!\!\! \!\!\!\! =id_{\pi^{-1}(\eta(s,t))} - \delta\varepsilon{\mathcal R}^{\eta}(s,t) + O(\delta^3) + O(\delta^2\varepsilon) +O(\delta\varepsilon^2) + O(\varepsilon^3).
\label{4.1} \end{aligned}$$ *Proof.* In a field $ \{e_i(s,t),\ (s,t)\in J\times J^\prime\}$ of bases in $\pi^{-1}(\eta(J,J^\prime))$ the matrix of the linear map standing in the l.h.s. of (\[4.1\]) is $$H(t,t+\varepsilon;\eta(s,\cdot)) H(s,s+\delta;\eta(\cdot,t+\varepsilon)) H(t+\varepsilon,t;\eta(s+\delta,\cdot)) H(s+\delta,s;\eta(\cdot,t)).$$ Substituting here (\[2.5\]) and using the expressions $$\Gamma_{\eta(s+\delta,\cdot)}(t)=\Gamma_{\eta(s,\cdot)}(t) + \delta{\partial\over\partial s}\Gamma_{\eta(s,\cdot)}(t) +O(\delta^2),$$ $$\Gamma_{\eta(\cdot,t+\varepsilon)}(s)=\Gamma_{\eta(\cdot,t)}(s) + \varepsilon{\partial\over\partial t}\Gamma_{\eta(\cdot,t)}(s) + O(\varepsilon^2),$$ we find this matrix to be $$\begin{aligned} &\!\!\!& \!\!\!\!\!\!\!\!\!\!\!\openone \! + \! \delta\varepsilon\left( {\partial\over{\partial t}}\Gamma_{\eta(\cdot,t)}(s) - {\partial\over{\partial s}}\Gamma_{\eta(s,\cdot)}(t) - \Gamma_{\eta(\cdot,t)}(s) \Gamma_{\eta(s,\cdot)}(t) + \Gamma_{\eta(s,\cdot)}(t) \Gamma_{\eta(\cdot,t)}(s) \right) \!\!+ \\ & & + \> O(\delta^3) +O(\delta^2\varepsilon) +O(\delta\varepsilon^2) +O(\varepsilon^3). \end{aligned}$$ Taking into account (\[2.10\]), we get this expression as $$\label{4.1a} \openone - \delta\varepsilon \left[ \left({\mathcal R}^\eta(s,t)\right)_{j}^{i} \right] +O(\delta^3)+O(\delta^2\varepsilon) +O(\delta\varepsilon^2) + O(\varepsilon^3)$$ which is simply the matrix form of (\[4.1\]).  Proposition \[p4.1\] shows that *up to third order terms the result of the above-described transportation of a vector $A\in\pi^{-1}(\eta(s,t))$ is* $$\label{4.2} A -\delta\varepsilon\left( {\mathcal R}^\eta(s,t)\right) (A).$$ Another corollary of (\[4.1\]) is the (equivalent to the definition of the curvature) equality $$\label{4.3} {\mathcal R}^\eta(s,t) = - \lim_{ \stackrel{\delta\to 0}{\varepsilon\to 0} } \left[{1\over \delta\varepsilon} \left( L_{t+\varepsilon\to t}^{\eta(s,\cdot)}\!\circ\! L_{s+\delta\to s}^{\eta(\cdot,t+\varepsilon)}\!\circ\! 
L_{t\to t+\varepsilon}^{\eta(s+\delta,\cdot)}\!\circ\! L_{s\to s+\delta}^{\eta(\cdot,t)} - id_{\pi^{-1}(\eta(s,t))} \right) \right].$$ **Bianchi-type identities** {#V} =========================== The curvature operator (\[2.8\]) is simply a commutator of two derivations along paths. As we shall see below, the torsion (\[2.7\]) is also a skewsymmetric expression. All this allows one to apply the method developed in [@f-Jacobi] for obtaining Jacobi-type identities. This can be done as follows. Let us take an arbitrary map $\tau^k:J^k\to B$, with $J^k:=J\times\cdots\times J$, where $J$ appears $k$ times, $k\in \mathbb{N}$, and $B$ being the base of the vector bundle $(E,\pi,B)$. Let $s:=(s^1,...,s^k)\in J^k$. We define the $C^1$ paths $\tau_a:J\to B$ by $\tau_a(\sigma):=\left.\tau^k(s)\right|_{s^a=\sigma}$, $\sigma\in J$ and the maps (families of paths) $\tau_{ab}:J\times J\to B$ by $\tau_{ab}(\sigma_1,\sigma_2):=\left. \tau^k(s)\right|_{s^a=\sigma_1, s^b=\sigma_2}$, $ \sigma_1,\sigma_2\in J$, which depend implicitly on $s$. Hereafter $a,b,c,d=1,\ldots,k$. We write $\dot\tau_a$ for the vector field tangent to $\tau_a$ in the case when $ (E,\pi,B)=(T(M),\pi,M) $ for some manifold $M$. \[p5.1\] The following properties of antisymmetry are valid: $$\begin{aligned} & & {\mathcal R}^{\tau_{ab}}(s_a,s_b) + {\mathcal R}^{\tau_{ba}}(s_b,s_a)=0, \label{5.1}\\ & & T^{\tau_{ab}} + T^{\tau_{ba}} =0 \ {\mathrm{or}}\ {\mathcal T}^{\,\tau_{ab}}(s_a,s_b) + {\mathcal T}^{\,\tau_{ba}}(s_b,s_a) =0. \label{5.2} \end{aligned}$$ [**Remark.**]{} These equalities are analogues of the usual skewsymmetry of curvature and torsion tensors in tensor analysis [@Schouten/Ricci]. *Proof.* The two-point Jacobi-type identity is $ \left( (A_{ab})_{[a,b]} \right)_{<a,b>} \equiv 0 $ (see [@f-Jacobi eq. (5.1)]), where $ A_{ab} $ are elements of an Abelian group, $ (A_{ab})_{[a,b]}:=A_{ab} - A_{ba} $ and $ \left(A_{ab}\right)_{<a,b>}:=A_{ab} + A_{ba} $.
Substituting here $ A_{ab}={\mathcal D}^{\tau_a} \circ {\mathcal D}^{\tau_b} $ in the case of a vector bundle $ (E,\pi,B) $ and $ A_{ab}={\mathcal D}^{\tau_a}({\dot\tau_b})$ in the case of the tangent bundle $(T(M),\pi,M)$ of a manifold $M$, and using (\[2.8\]) and (\[2.12\]) (or (\[2.7\])), one gets, respectively, (\[5.1\]) and (\[5.2\]).  \[p5.2\] The following identities are valid: $$\begin{aligned} & & \begin{array}{l} \left\{ {\mathcal D}^{\tau_a} \circ {\mathcal R}^{\tau_{bc}}(s_b,s_c) - {\mathcal R}^{\tau_{bc}}(s_b,s_c) \circ {\mathcal D}^{\tau_a} \right\} _{<a,b,c>} \equiv 0 \ \mathrm{or}\ \\ \left\{ {\mathcal D}^{\tau_a} \left( {\mathcal R}^{\tau_{bc}}(s_b,s_c) \right) \right\}_{<a,b,c>} \equiv 0, \end{array} \label{5.3} \\ & & \ \left\{ \left( {\mathcal R}^{\tau_{ab}}(s_a,s_b) \right)(\dot\tau_c) \right\} _{<a,b,c>} \equiv \left\{ {\mathcal D}^{\tau_a} \left( {\mathcal T}^{\,\tau_{bc}} \right) \right\}_{<a,b,c>}, \label{5.4}\end{aligned}$$ where $<\ldots>$ means summation over the cyclic permutations of the corresponding indices. [**Remark.**]{} These identities are analogues, respectively, of the second and first Bianchi identities in tensor analysis [@Schouten/physics; @Helgason]. This is clear from the fact that, due to the antisymmetries (\[5.1\]) and (\[5.2\]), the cyclization over the indices $a,\ b\ \mathrm{and}\ c$, i.e. the operation $<\ldots>$, in (\[5.3\]) and (\[5.4\]) may be replaced with antisymmetrization over the indices $a,\ b \ \mathrm{and}\ c$. (E.g. if $A_{abc}=-A_{acb}$ and $ \left(A_{abc}\right)_{[a,b,c]} := \left(A_{abc}+A_{bca}+A_{cab}\right)_{[b,c]}, $ then $2\left(A_{abc}\right)_{<abc>} = \left(A_{abc}\right)_{[abc]}$.) *Proof.* The (3-point) generalized Jacobi identity (see [@f-Jacobi eq.
(5.2)]) is $\left(\left(A_{abc}\right)_{[a,[b,c]]}\right)_{<a,b,c>} \equiv 0,$ with $A_{abc}$ being elements of an Abelian group, $ \left(A_{abc}\right)_{[a,[b,c]]} := \left(A_{abc} - A_{bca}\right)_{[b,c]} $ and $\left(A_{abc}\right)_{<a,b,c>}:=A_{abc} + A_{bca} + A_{cab}$. We put $A_{abc}=\mathcal{D}^{\tau_a}\circ\mathcal{D}^{\tau_b}\circ \mathcal{D}^{\tau_c} $ in the vector bundle case and $A_{abc}=\left(\mathcal{D}^{\tau_a} \circ \mathcal{D}^{\tau_b}\right) (\dot\tau_c)$ in the tangent bundle case. In this way, after some simple algebra (see (\[2.8\]), (\[2.7\]) and (\[2.1\])-(\[2.3\])), we get respectively (\[5.3\]) and (\[5.4\]).  The 4-point generalized Jacobi-type identity $$\left\{ \left( A_{abcd} \right) _ {[a,[b,[c,d]]]} + \left( A_{adcb} \right) _ {[a,[d,[c,b]]]} \right\}_{<a,b,c,d>}\equiv 0$$ with $ \left( A_{abcd} \right) _ {[a,[b,[c,d]]]} := \left( A_{abcd} - A_{bcda} \right) _ {[b,[c,d]]} $ and $ \left( A_{abcd} \right) _ {<a,b,c,d>} := A_{abcd} + A_{bcda} + A_{cdab} + A_{dabc}$ also produces an interesting identity in our case. In fact, putting $ A_{abcd} = \mathcal{D}^{\tau_a}\circ\mathcal{D}^{\tau_b}\circ \mathcal{D}^{\tau_c}\circ\mathcal{D}^{\tau_d}$ in the vector bundle case, one can easily prove after some simple calculations \[p5.3\] The identity $$\left\{ \mathcal{R}^{\tau_{ab}}(s_a,s_b) \left(R^{\tau_{cd}} \right) \right\}_{<a,b,c,d>} \equiv 0, \label{5.5}$$ where $R^{\tau_{cd}}$ is the curvature vector field on $\tau^k(J,\ldots,J)$ is valid. [**Remark.**]{} This result generalizes eq. (6.5) of [@f-Jacobi] in the classical tensor case. 
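The combinatorial identities invoked in these proofs hold for arbitrary elements of an Abelian group, so they can be sanity-checked with random real numbers standing in for the operators. A small sketch of the two- and three-point identities:

```python
import numpy as np

rng = np.random.default_rng(1)
A2 = rng.normal(size=(3, 3))        # arbitrary elements A_{ab} of an abelian group
A3 = rng.normal(size=(3, 3, 3))     # arbitrary elements A_{abc}

def brk2(A, a, b):                  # (A_{ab})_{[a,b]} := A_{ab} - A_{ba}
    return A[a, b] - A[b, a]

def cyc2(f, a, b):                  # (...)_{<a,b>} := f(a,b) + f(b,a)
    return f(a, b) + f(b, a)

# two-point identity ((A_{ab})_{[a,b]})_{<a,b>} == 0, used for (5.1), (5.2)
print(abs(cyc2(lambda a, b: brk2(A2, a, b), 0, 1)) < 1e-12)

def brk3(A, a, b, c):               # (A_{abc})_{[a,[b,c]]} := (A_{abc} - A_{bca})_{[b,c]}
    B = lambda x, y, z: A[x, y, z] - A[y, z, x]
    return B(a, b, c) - B(a, c, b)

def cyc3(f, a, b, c):               # cyclic sum (...)_{<a,b,c>}
    return f(a, b, c) + f(b, c, a) + f(c, a, b)

# three-point generalized Jacobi identity, used for (5.3), (5.4)
print(abs(cyc3(lambda a, b, c: brk3(A3, a, b, c), 0, 1, 2)) < 1e-12)
```

Both checks print `True` for any choice of the entries, reflecting the purely algebraic cancellation behind the Bianchi-type identities.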
The last result also follows from the evident chain identity $$\begin{aligned} 0 &\equiv& \left\{ \mathcal{R}^{\tau_{ab}}(s_a,s_b) \circ \mathcal{R}^{\tau_{cd}}(s_c,s_d) - \mathcal{R}^{\tau_{ab}}(s_a,s_b) \circ \mathcal{R}^{\tau_{cd}}(s_c,s_d) \right\}_{<a,b,c,d>} \equiv \\ &\equiv& \left\{ \mathcal{R}^{\tau_{ab}}(s_a,s_b) \circ \mathcal{R}^{\tau_{cd}}(s_c,s_d) - \mathcal{R}^{\tau_{cd}}(s_c,s_d) \circ \mathcal{R}^{\tau_{ab}}(s_a,s_b) \right\}_{<a,b,c,d>} \equiv \\ &\equiv& \left\{ \left( \mathcal{R}^{\tau_{ab}}(s_a,s_b) \left( R^{\tau_{cd}} \right) \right) \left( \tau_{cd}(s_c,s_d) \right) \right\}_{<a,b,c,d>} \equiv \\ &\equiv& \left( \left\{ \mathcal{R}^{\tau_{ab}}(s_a,s_b) \left( R^{\tau_{cd}} \right) \right\}_{<a,b,c,d>} \right) \left(\tau^k(s)\right). \end{aligned}$$ Note that in the tangent bundle case the substitution $$A_{abcd} = \left( \mathcal{D}^{\tau_a}\circ\mathcal{D}^{\tau_b}\circ \mathcal{D}^{\tau_c} \right) \bigl( \dot\tau_d \bigr)$$ leads to the trivial identity $0\equiv 0$. **Conclusion** {#VI} ============== In this paper we have examined some natural properties of the curvature (resp. the torsion) of linear transports along paths in vector bundles (resp. in the tangent bundle of a manifold). These properties are similar to the ones in the theory of linear connections. The reason for this similarity is that, in the case of the parallel transport assigned to a linear connection, our results reproduce the corresponding ones of classical tensor analysis. The reduction to the known classical results can easily be proved by applying the method used in [@f-LTP-Cur+Tor] for introducing the curvature and torsion of a linear connection by means of its parallel transport. In this connection, we present below a generalization of the theorem that a linear connection is flat iff the parallel transport assigned to it is independent of the path (curve) along which it acts and depends only on the initial and final points of the transportation.
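Before turning to the theorem, the loop composition (4.1) behind this characterization of flatness can be illustrated numerically in the classical parallel-transport case: transport a frame around a small coordinate rectangle and compare the defect with the curvature matrix (2.10). The connection coefficients below are an arbitrary illustrative choice:

```python
import numpy as np

# Hypothetical connection on R^2: Gamma^i_{jk}(x) = f(x) * C0[i, j, k]
A = np.array([[0.0, 1.0], [2.0, 0.0]])
B = np.array([[1.0, 1.0], [0.0, -1.0]])
C0 = np.stack([A, B], axis=-1)                     # C0[:, :, k], k = 0, 1
Gamma3 = lambda x: C0 * (1.0 + 0.5 * x[0] - 0.3 * x[1])

def Gmat(x, dx):
    # Gamma^i_j(s;gamma) = Gamma^i_{jk}(gamma(s)) gamma-dot^k(s)
    return np.einsum('ijk,k->ij', Gamma3(x), np.asarray(dx, float))

def transport(p0, p1, n=400):
    # matrix of L_{p0 -> p1} along the straight segment, integrating
    # dU/du = -Gamma(u) U with a second-order midpoint step
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    U, du, dx = np.eye(2), 1.0 / n, p1 - p0
    for k in range(n):
        G = du * Gmat(p0 + (k + 0.5) * du * dx, dx)
        U = (np.eye(2) - G + 0.5 * G @ G) @ U
    return U

s0, t0, dlt, eps = 0.1, 0.2, 0.01, 0.01
pa, pb = (s0, t0), (s0 + dlt, t0)
pc, pd = (s0 + dlt, t0 + eps), (s0, t0 + eps)
# the loop composition on the l.h.s. of (4.1), rightmost factor applied first
loop = transport(pd, pa) @ transport(pc, pd) @ transport(pb, pc) @ transport(pa, pb)

# curvature matrix (2.10) for the coordinate map eta(s,t) = (s,t)
h, x0 = 1e-5, np.array([s0, t0])
G1 = lambda x: Gmat(x, [1.0, 0.0])                 # Gamma_{eta(.,t)}(s)
G2 = lambda x: Gmat(x, [0.0, 1.0])                 # Gamma_{eta(s,.)}(t)
R = ((G2(x0 + [h, 0]) - G2(x0 - [h, 0])) / (2 * h)
     - (G1(x0 + [0, h]) - G1(x0 - [0, h])) / (2 * h)
     + G1(x0) @ G2(x0) - G2(x0) @ G1(x0))

defect = loop - (np.eye(2) - dlt * eps * R)        # third-order small by (4.1)
print(np.linalg.norm(defect) / np.linalg.norm(dlt * eps * R))
```

The printed ratio is the relative size of the residual and should be small, while the loop matrix itself differs from the identity at order $\delta\varepsilon$, exactly as proposition \[p4.1\] predicts.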
\[th6.1\] An L-transport in $ (E,\pi,B) $ is flat on $ U\subseteq B $ if and only if in $U$ it is independent of the path (lying in $U$) along which it acts and depends only on its initial and final points, i.e. the set $\{ L_{s\to t}^{\gamma}\}$ forms a flat L-transport in $U\subseteq B$ iff $L_{s\to t}^{\gamma}$ for $\gamma:J\to U$ depends only on the points $\gamma(s)$ and $\gamma(t)$, but not on the path $\gamma$ itself. [**Remark.**]{} In this theorem we implicitly suppose $U$ to be linearly connected, i.e. any two of its points can be connected by a path lying entirely in $U$. Otherwise the theorem may not be true. *Proof.* Let the L-transport $L$ be flat, i.e. $ \mathcal{R}^\eta(s,t)\equiv 0 \ \mathrm{for}\ \eta:J\times J^\prime\to U\subseteq B. $ By [@f-LTP-Cur+Tor theorem 3.1] there is a field of bases $\{e_i\}$ on $U$ in which the matrix of $L$ is unit, i.e. $H(t,s;\gamma)=\openone$, $\gamma:J\to U$. In these bases, for $u\in\pi^{-1}(\gamma(s))$, we have $L_{s\to t}^{\gamma}u= H_j^i(t,s;\gamma)u^j\left(e_i\left.\right|_{\gamma(t)}\right)= u^i\left(e_i\left.\right|_{\gamma(t)}\right)$, which evidently depends on the points $\gamma(s)$ and $\gamma(t)$ but not on the path $\gamma$ itself. Conversely, suppose that for $\gamma:J\to U$ the transport $L_{s\to t}^{\gamma}$ depends only on the points $\gamma(s)$ and $\gamma(t)$ and not on the path $\gamma$ connecting them. For a fixed $x_0\in U$ and basis $\{ e_{i}^{0} \}$ in $\pi^{-1}(x_0)$, we define on $U$ the field of bases $\{e_i\}$ by $e_i\left.\right|_x := L_{a\to b}^{\beta} e_{i}^{0}$, where $\beta$ is any path in $U$ joining $x_0$ and $x\in U$, and such that $\beta(a):=x_0\ \mathrm{and}\ \beta(b):=x$. By assumption $\{ e_i\left.\right|_x \}$ depends only on $x$ but not on $\beta$.
Using that $L_{s\to t}^{\gamma}$ depends only on $\gamma(s)$ and $\gamma(t)$, we have $$\begin{aligned} L_{s\to t}^{\gamma} \left( { e_i\left.\right|_{\gamma(s)} }\right) &=& L_{a\to b}^{\alpha} \left( { e_i\left.\right|_{\alpha(a)} }\right)\>=\\ &=&\> L_{a\to b}^{\alpha} \left( L_{c\to a}^{\alpha} { e_i^0} \right) = L_{c\to b}^{\alpha} { e_i^0} = { e_i\left.\right|_{\alpha(b)} } = { e_i\left.\right|_{\gamma(t)} }, \end{aligned}$$ where $\alpha$ is any path in $U$ such that $\alpha(a)=\gamma(s),\ \alpha(b)=\gamma(t)$, and $\alpha(c)=x_0$. As $L_{s\to t}^{\gamma} \left( { e_i\left.\right|_{\gamma(s)} } \right)= H_{i}^{j}(t,s;\gamma) { e_j\left.\right|_{\gamma(t)} }$, we see that in $\{e_i\}$ the matrix of $L$ is $H(t,s;\gamma)=\openone$, which, again by [@f-LTP-Cur+Tor theorem 3.1], implies the flatness of $L$ in $U$.  In conclusion, we note that all of the results of the present paper remain true in the complex case: one simply has to replace the word ‘real’ with ‘complex’ and the symbols $\mathbb{R}$ and $\dim$ with $\mathbb{C}$ and $\dim_\mathbb{C}$, respectively. **Acknowledgement** {#VII .unnumbered} =================== This work was partially supported by the National Science Foundation of Bulgaria under Grant No. F642. [^1]: Permanent address: Department of Mathematical Modeling, Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Boul. Tzarigradsko chaussée 72, 1784 Sofia, Bulgaria. [^2]: E-mail address: bozho@inrne.bas.bg [^3]: URL: http://www.inrne.bas.bg/mathmod/bozhome/ [^4]: All of the results of this work are valid mutatis mutandis in the complex case too.
--- abstract: 'A drop impacting a solid surface with sufficient velocity will emit many small droplets creating a splash. However, splashing is completely suppressed if the surrounding gas pressure is lowered. The mechanism by which the gas affects splashing remains unknown. We use high-speed interference imaging to measure the air beneath all regions of a spreading viscous drop as well as optical absorption to measure the drop thickness. Although an initial air bubble is created on impact, no significant air layer persists until the time a splash is created. This suggests that splashing in our experimentally accessible range of viscosities is initiated at the edge of the drop as it encroaches into the surrounding gas.' author: - 'Michelle M. Driscoll' - 'Sidney R. Nagel' bibliography: - 'splashingbib.bib' title: Ultrafast Interference Imaging of Air in Splashing Dynamics --- When a liquid drop hits a surface, it may rebound [@quere_bounce], spread smoothly, or shatter violently in a splash, as first photographed by Worthington [@worth_nat]. Controlling whether a liquid splashes has important consequences in many applications, including fuel dispersion in the automotive industry, splat formation in coating technologies, and pesticide application in agriculture [@Yarin; @Rein_rev]. Liquid and surface properties obviously influence impact dynamics [@Mundo; @Range; @Stow; @basaran; @lohse]; it is quite counter-intuitive, however, that lowering the ambient pressure eliminates splashing altogether [@LeiPRL; @LeiPRE; @2010_nagelPRE; @Yoon_2010]. To measure transient air-layer dynamics, we develop a technique that combines the high spatial precision of interferometry (nm scale) with high time resolution ($15 \mu s$). At impact, a small amount of air is trapped beneath the falling drop, creating a bubble [@Siggi_fingering; @Siggi_center; @Chandra; @vanDam; @Hicks].
Recent theoretical work has suggested that this air pocket is linked to splashing dynamics [@Mandre; @BrennerJFM; @Josserand_jet]. In a sufficiently viscous liquid, splashing occurs at late times, several tenths of a millisecond after impact [@LeiPRE; @2010_nagelPRE]. This temporal separation between impact and splashing creates an ideal system to test whether the initial air pocket influences the later-time splashing dynamics. Using our interference technique, we find the initial air cavity dynamics to be consistent with theoretical predictions [@BrennerJFM]. However, we find no significant air layer that persists beneath a spreading drop until the time of thin-sheet ejection — a necessary precursor to splashing in high-viscosity liquids [@2010_nagelPRE]. Thus, an underlying air layer is not responsible for splashing in this high-viscosity regime. ![image](figure1.pdf){width="6.5in"} The drops used in this study were mixtures of water and glycerol, with kinematic viscosities, $\nu$, between $9$ and $58$ cSt. Over this range there is minimal variation in surface tension, $\gamma$ ($65$-$67$ dyn/cm) and density, $\rho$ ($1.1$-$1.2$ g/cm$^3$). Drops of uniform radius, $r_0$=$2.05$ mm, were created using a syringe pump. For each impact, we used a fresh glass substrate (Fisherbrand coverslip) to prevent surface contamination. The impact velocity, $u_0$, was varied between $1.5$ and $4.1$ m/s by releasing drops from various heights within an acrylic tube that could be evacuated to varying ambient pressures, $P$, between $2$ and $102$ kPa. Using a high-speed camera (Phantom v12, Vision Research), we imaged drop impacts as shown in Fig. $1a$. To determine the thickness of the liquid as a function of position and time, we measured the local optical absorption of a spreading drop of colored liquid (Brilliant Blue G dye in a $\nu$=$9$ cSt glycerol/water solution) as shown in Fig.
$1b$; we converted the transmitted light intensity to liquid thickness by calibrating with a liquid wedge of known proportions. By modifying our setup as shown in Fig. 1$c$, we also measured the thickness of any air layer underneath the spreading liquid using interferometric high-speed imaging at speeds up to $67,000$ frames/second. We used a monochromatic LED ($\lambda$=$660$ nm) with a small coherence length, $\sim10 \mu m$, as a light source, so that there would be no interference between the two sides of the glass substrate. Adding a small amount of dye to our liquid greatly minimized the reflected light from the upper liquid surface and eliminated any interference generated within the liquid itself. Fig. $1a$ shows a $\nu$=$15$ cSt drop at atmospheric pressure: after spreading smoothly as a thick lamella for $\sim0.4$ms, it ejects a thin liquid sheet that subsequently disintegrates into smaller droplets — the splash. Using optical absorption, Fig. $1b$, we find the lamella edge to have a thickness of $106 \pm 4 \mu m$, while the ejected sheet is ten times thinner, only $10 \pm 2 \mu m$ thick. This jump in thickness occurs over a lateral extent of only $\sim300 \mu m$. There are many distinct splashing regimes that display different scalings and even qualitatively different behavior [@Yarin; @deegan]. For example, below $\sim3$ cSt, sheet (i.e., corona) formation occurs within a few $\mu s$ of impact, while above $3$ cSt sheet ejection is delayed [@LeiPRE]. While it is not at all clear that the instability is the same across all splashing regimes, it is nevertheless established that lowering the air pressure eliminates splashing in all cases [@LeiPRL; @LeiPRE]. The higher viscosity liquids used in this study allow for a large separation in time and space between the initial air layer entrapment and the creation of a splash. This allows us to test directly whether the initially trapped air layer persists to longer times to influence the splashing dynamics. 
Our interference technique determines the air-layer thickness beneath the drop as it spreads. Fig. 1$d$ shows an example interference image at several different times after impact: $(i)$ just after impact, there is a small air cavity (panel 1) $(ii)$ surrounding this tiny region, the spreading lamella is uniformly black indicating an optically flat surface (panel 2), and $(iii)$ underneath the ejected thin sheet (panels 3-5), interference fringes are widely spaced indicating a very shallow slope. The small cavity of air trapped under the impacting drop (Fig. $1e$) has been shown to be present under varying conditions, [@Siggi_fingering; @Siggi_center; @Chandra; @vanDam; @Hicks] including above and below the splashing threshold [@2010_nagelPRE]. By measuring the interference fringes that are clearly observed inside the cavity, we can directly measure the cavity curvature as a function of time. The air cavity is quite flat — the radius of curvature at the top of the cavity, $r_c$, is comparable to $r_0$. At impact, the overpressure at the edge of the cavity is predicted to be higher than at its center [@BrennerJFM]. This suggests $r_c/r_0$ should increase in time as the overpressure causes the cavity to flatten; this is consistent with the data shown in Fig. $1f$. ![The radius of curvature of the entrapped air cavity normalized to drop radius, $r_c/r_0$, vs. ambient pressure $P$. Inset schematic defines cavity height, $h_c$, radius of curvature, $r_c$, and lateral cavity radius, $A$. As $P$ decreases, $r_c/r_0$ increases and the cavity becomes flatter. The line shows the best fit to $r_c/r_0 \propto P^{-1}$.[]{data-label="bubble"}](figure2.pdf){width="3.1in"} ![image](figure3.pdf){width="6.75in"} The air cavity persists even as the ambient pressure is decreased below the threshold value for sheet ejection, $P_{\mbox{\tiny{sh}}}$ [@2010_nagelPRE]. However, the shape of the cavity strongly depends on pressure. As shown in Fig. 
$2$, the cavity flattens dramatically with decreasing pressure: $r_c/r_0 \propto P^{-1}$. To compare our curvature measurements with the theoretical prediction [@BrennerJFM] that cavity height, $h_c \propto P$, we approximate the cavity as a thin spherical cap: $$h_c = \frac{A^2}{2r_c},$$ where $2A$ is the lateral extent of the cavity (see Fig. $2$ inset). Our measurement that $r_c/r_0 \propto P^{-1}$ thus corroborates the prediction that $h_c \propto P$. We further note there is no transition in $r_c/r_0$ at $P_{\mbox{\tiny{sh}}}$, emphasizing that bubble entrapment does not appear to be related to the instability producing the thin sheet. Moreover, we note that the cavity closes into a bubble well before sheet ejection occurs. We therefore conclude that the cavity dynamics is isolated in time and space from the edge of the spreading lamella where thin-sheet ejection and splashing take place. Underneath the lamella, outside the bright spot created by the central entrapped bubble, our images are always nearly uniform and dark, see second panel of Fig. $1d$. By carefully measuring the intensity in this region, we can constrain the height of any possible air film underneath the drop. For a liquid layer separated from a glass substrate by an air gap of height $h$, the total electric field produced by multiple reflections from the two interfaces is: $$\begin{aligned} E_{L}(h) &=& \tfrac{n_g-1}{n_g+1} E_0 +\\ \nonumber &&\tfrac{4 n_g (1-n_l)}{(1+n_g)^2(1+n_l)}E_0\sum_{k=1}^{\infty}\left[ \tfrac{(1 - n_g)(1-n_l)}{(1+n_g)(1+n_l)}\right]^{k-1} e^{i\delta k} , \label{sum}\end{aligned}$$ where $n_g$ = 1.52 and $n_l$ = 1.44 + 0.0032$i$ are respectively the glass and liquid complex indices of refraction. The optical path length is given by $\delta = \frac{2\pi(2h)}{\lambda}$. The term $e^{i\delta k}$ in eqn. $2$ accounts for the phase shift caused by the path-length difference $2hk$ after the $k^{th}$ reflection from the lamella. Only the first few terms in eqn.
$(2)$ are large enough to contribute significantly to the total electric field. Thus, the finite coherence length of the LED does not influence this calculation. The total intensity is then $I_{L}(h)=|E_{L}(h)|^2+I_b$, where $I_b$ is an unknown background intensity from stray light and incoherent reflections. To obtain the air gap thickness, $h$, we must eliminate the unknown quantities $|E_0|^2$ and $I_b$ from our expression for $I_{L}$. To do this, we measure $(i)$ the intensity under the lamella long after spreading has finished so that the liquid can be assumed to be in contact with the substrate, $$I_c=\left|\frac{n_l-n_g}{n_l+n_g}E_0\right|^2+I_b,$$ and $(ii)$ the intensity due to reflected light from only the substrate: $$I_1=\left|\frac{1-n_g}{1+n_g}E_0\right|^2+I_b.$$ We can determine $h$ by computing: $$I_s(h) = \frac{I_L (h) - I_c}{I_1 - I_c}. \label{signal}$$ Fig. \[lamellaplots\]$a$ shows $I_s$ from the time the cavity collapses into a bubble until just before sheet ejection. Fig. $3b$-$d$ shows $I_s$ measured just before the instant of sheet ejection vs. $u_0$, $\nu$, $P$. In all cases, the data remain essentially constant and within error of zero. The error bars in Fig. \[lamellaplots\] represent drop-to-drop fluctuations. The distribution of all measurements of $I_s$ has a mean $0.0004$ with a standard deviation $0.0054$. This is comparable to the noise in the camera between adjacent frames when filming a still drop. These measurements of $I_s$ are consistent with the liquid being in direct contact with the substrate; this data places an upper bound of 3.8 nm (using eqn. \[signal\]) on the thickness of any possible air layer beneath the spreading lamella for all of the parameter space sampled. An air layer of this thickness would be highly unstable and it is difficult to conceive that it could persist over $0.4$ ms, i.e. until the moment of sheet ejection. 
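The normalization (\[signal\]) can be inverted numerically to translate an intensity noise level into a bound on the gap height. A sketch with the optical constants stated above (we set $E_0=1$, which cancels in $I_s$; the threshold $0.0054$ is the quoted standard deviation of the $I_s$ measurements):

```python
import numpy as np

n_g = 1.52                    # glass
n_l = 1.44 + 0.0032j          # dyed liquid
lam = 660e-9                  # LED wavelength (m)

def E_L(h, terms=60):
    # reflected field for an air gap of height h (eqn for E_L, with E_0 = 1)
    delta = 2 * np.pi * (2 * h) / lam
    pref = 4 * n_g * (1 - n_l) / ((1 + n_g) ** 2 * (1 + n_l))
    x = (1 - n_g) * (1 - n_l) / ((1 + n_g) * (1 + n_l))
    s = sum(x ** (k - 1) * np.exp(1j * delta * k) for k in range(1, terms))
    return (n_g - 1) / (n_g + 1) + pref * s

I_1 = abs((1 - n_g) / (1 + n_g)) ** 2          # bare substrate
I_c = abs((n_l - n_g) / (n_l + n_g)) ** 2      # liquid in contact

def I_s(h):
    return (abs(E_L(h)) ** 2 - I_c) / (I_1 - I_c)   # normalized signal

# contact limit: I_s(0) vanishes by construction
print(abs(I_s(0.0)) < 1e-9)

# smallest gap producing a signal above the measured noise level (std 0.0054)
hs = np.linspace(0.0, 20e-9, 2001)
h_bound = hs[np.argmax(np.array([I_s(h) for h in hs]) > 0.0054)]
print(1e-9 < h_bound < 10e-9)   # a few nanometres, consistent with the quoted bound
```

The crossing comes out at a few nanometres, in line with the 3.8 nm upper bound stated in the text.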
All of the air trapped beneath the falling drop is enclosed in the small central bubble discussed above and does not influence the subsequent sheet ejection and splashing. However, once the thin sheet is ejected, it does move over a layer of air, which is easily visualized with our interference technique, see the bottom three panels in Fig. $1d$. The sheet is ejected at a very shallow angle, varying from $0.1^\circ$ to $0.25^\circ$, and this angle is relatively insensitive to $\nu$ and $P$ but decreases with increasing $u_0$. It is highly counter-intuitive that the surrounding gas controls splashing in all viscosity regimes. Our interference technique allows quantitative measurements of the air beneath a spreading drop as a function of position and time. In the low-viscosity splashing regime, corona formation occurs very near to the entrapped air bubble, both spatially and temporally. Techniques such as total internal reflection imaging have been used to explore low-viscosity impact dynamics [@Kolinski], confirming theoretical predictions [@BrennerJFM] of the initial, transient air film. In higher viscosity fluids, we find these initial air-cavity dynamics are also in quantitative agreement with those predictions. However, we find no trapped air beneath the spreading drop outside the small central bubble; there is no significant air film beneath the drop at the time of thin-sheet ejection. This suggests that, rather than an underlying air layer, gas flow at the edge of the spreading drop is responsible for destabilizing the liquid. This conclusion is consistent with previous splash experiments in the low-viscosity regime [@LeiPRL]. In that case, the scaling with gas pressure and molecular weight suggests that the liquid front expanding into the surrounding air leads to liquid destabilization and splashing. The results reported here for more viscous fluids suggest a similar instability due to leading-edge gas flows.
We thank Michael Brenner, Taehun Lee, Shreyas Mandre, Cacey Stevens, Lei Xu and Wendy Zhang for many fruitful discussions. This work was supported by NSF-MRSEC grant No. DMR-0820054 and NSF grant No. DMR-1105145. Use of facilities of the Keck Initiative for Ultrafast Imaging are gratefully acknowledged.
--- abstract: 'A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong or classical negation. This explicit negation is normally used in front of atoms, rather than allowing its use as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, such as those defined by Lifschitz, Tang and Turner. We extend the concept of reduct for this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare it to the already known combination of Equilibrium Logic with Nelson’s strong negation. *Under consideration for acceptance in TPLP.*' author: - 'FELICIDAD AGUADO$^1$, PEDRO CABALAR$^1$, JORGE FANDINNO$^2$' - | DAVID PEARCE$^3$, GILBERTO P[É]{}REZ$^1$, CONCEPCI[Ó]{}N VIDAL$^1$\ \ $^1$ Information Retrieval Lab, Centro de Investigación en Tecnoloxías da Información e as Comunicacións (CITIC),\ Universidade da Coruña, Spain\ \ \ $^2$ IRIT, University of Toulouse, CNRS, France\ \ Universität Potsdam, Germany\ \ \ $^3$ Universidad Polit[é]{}cnica de Madrid, Spain\ bibliography: - 'refs.bib' title: 'Revisiting Explicit Negation in Answer Set Programming[^1]' --- \[firstpage\] Answer set programming; Non-monotonic reasoning; Equilibrium logic; Explicit negation. Introduction ============ Although the introduction of *stable models* [@GL88] in logic programming was motivated by the search for a suitable semantics for default negation, their early application to knowledge representation revealed the need for a second negation to represent explicit falsity. This second negation was already proposed in [@GelfondL91] under the name of *classical negation*, an operator only applicable to atoms that, when present in the syntax, led to a change in the name of stable models to become *answer sets*. 
Classical negation soon became common in applications for commonsense reasoning and action theories [@GL93] and was also extrapolated to the Well-Founded Semantics [@Per92] under the name of *explicit negation*. Later on, it was incorporated into the paradigm of *Answer Set Programming* [@Nie99; @MT99] (ASP), being nowadays present in the input language of most ASP solvers. [ To understand the difference for knowledge representation between default negation (in this paper, written as $\neg$) and explicit negation (represented as $\sneg$ ), a typical example is to distinguish the rule $\neg {\mathit{train}} \to {\mathit{cross}}$, that captures the criterion “you can cross if you have no information on a train coming,” from the (safer) encoding $\sneg {\mathit{train}} \to {\mathit{cross}}$ that means “you can cross if you have evidence that no train is coming.” In ASP, this explicit negation can only be used in front of atoms[^2] so it is not seen as a real connective. In an attempt to provide more flexibility to logic program connectives, Lifschitz, Tang and Turner [@LTT99] introduced programs with *nested expressions* where conjunction, disjunction and default negation could be arbitrarily nested both in the heads and bodies of rules, but classical negation was still restricted to an application on atoms. To see an example, suppose that, at a given moment, three trains should be crossing, and we have an alarm that fires if one of them is known to be missing. 
Using nested expressions, we can rewrite the program: $$\begin{aligned} \sneg {\mathit{train}}_1 & \to & {\mathit{alarm}} \\ \sneg {\mathit{train}}_2 & \to & {\mathit{alarm}} \\ \sneg {\mathit{train}}_3 & \to & {\mathit{alarm}}\end{aligned}$$ as a single rule with a disjunction in the body: $$\begin{aligned} \sneg {\mathit{train}}_1 \vee \sneg {\mathit{train}}_2 \vee \sneg {\mathit{train}}_3 & \to & {\mathit{alarm}}\end{aligned}$$ but we cannot further apply De Morgan to rewrite the rule above as: $$\begin{aligned} \sneg ({\mathit{train}}_1 \wedge {\mathit{train}}_2 \wedge {\mathit{train}}_3) & \to & {\mathit{alarm}}\end{aligned}$$ It is easy to imagine that providing a semantics for this kind of expression would be interesting if we plan to jump from the propositional case to programs with variables and aggregates (where, for instance, the number of trains is some arbitrary value $n \geq 0$). ]{} An important breakthrough, which provided a purely logical treatment, was the characterisation of stable models in terms of *Equilibrium Logic* proposed by Pearce [@Pearce96]. This non-monotonic formalism is defined in terms of a model selection criterion on top of the (monotonic) intermediate logic of *Here-and-There* (HT) [@Hey30] and captures default negation $\neg \varphi$ as a derived operator in terms of implication $\varphi \to \bot$, as usual in intuitionistic logic. The definition of Equilibrium Logic also included a second, constructive negation ‘$\sneg$’ corresponding to Nelson’s *strong negation* [@Nel49] for intermediate logics. In the case of HT, this extension yields a five-valued logic called $\N5$ where, although ‘$\sneg$’ can now be nested like the rest of the connectives, there exists a reduction for shifting it in front of atoms, obtaining a *negative normal form* (NNF). Once in NNF, the obtained equilibrium models actually coincide with answer sets for the syntactic fragments of nested expressions [@LTT99] or for regular programs [@GL93]. 
For this reason, most papers on Equilibrium Logic for ASP assumed a reduction to NNF from the very beginning, and little attention was paid to the behaviour of formulas in the scope of strong negation under a logic programming perspective. There are, however, cases in which this behaviour is not aligned with the reduct-based understanding of nested expressions in ASP. Take, for instance, the formula: $$\begin{aligned} \sneg\neg p \to p \label{f:snegneg}\end{aligned}$$ Its NNF reduction removes the combination of negations $\sneg \neg$ and produces the tautological rule $p \to p$, whose unique equilibrium model is $\emptyset$, i.e., neither $p$ nor $\sneg p$ hold. However, if we start instead from the formula $\sneg\neg \neg \neg p \to p$, the NNF reduction removes again the first pair of negations producing the rule $\neg \neg p \to p$ with a second answer set $\{p\}$. This illustrates that we cannot replace $\neg p$ by $\neg \neg \neg p$ in the scope of strong negation, even though they would produce the same effect in any reduct of the style of [@LTT99] for nested expressions. In this paper, we consider a different characterisation of ‘$\sneg$’  in HT and Equilibrium Logic. We call this variant *explicit negation* to differentiate it from Nelson’s strong negation. To test its adequacy, we start by generalising the definition of nested expression by introducing an arbitrary nesting of ‘$\sneg$’, adapting the definitions of reduct and answer set from [@LTT99] to that context. After that, we prove that equilibrium models (with explicit negation) capture the answer sets for these extended nested expressions and, in fact, preserve the strong equivalences from [@LTT99] even for arbitrary formulas (including implication). We also prove several properties of HT with explicit negation and provide a reduction to NNF that produces a different effect from $\N5$ when applied on implications or default negation. The rest of the paper is organised as follows. 
In the next section, we introduce the extended definition of answer sets for programs with nested expressions, where explicit negation can be arbitrarily combined both in the rule bodies and the rule heads. In Section \[sec:eqx\], we present Equilibrium Logic with explicit negation and, in particular, its new monotonic basis, $\X5$, since the selection of equilibrium models is the same one as in [@Pearce96]. Section \[sec:fiveval\] provides a five-valued characterisation of $\X5$ and studies different types of equivalence relations, including variants of strong equivalence. In Section \[sec:related\], we briefly explain the main differences between explicit ($\X5$) and strong ($\N5$) negations. Finally, Section \[sec:conc\] concludes the paper. Nested expressions with explicit negation {#sec:nested} ========================================= We begin by describing the syntax of nested expressions, starting from a set of atoms $\At$. A *nested expression* $F$ is defined with the following grammar: $$F ::= \top \mid \bot \mid p \mid F \vee F \mid F \wedge F \mid \neg F \mid \ \sneg F$$ where $p$ is any atom $p\in \At$. The two negations $\neg$ and $\sneg$  are respectively called *default* and *explicit* negation (the latter is also called *classical* in the ASP literature). An *explicit literal* is either an atom $p$ or its explicit negation $\sneg p$. A *default literal* is either an explicit literal $A$ or its default negation $\neg A$. Thus, given atom $p$, we can form the default literals $p, \sneg p, \neg p$ and $\neg\!\!\sneg p$. As we can see, the main difference with respect to [@LTT99] is that, in that case, the explicit negation[^3] operator $\sneg$  was only used to form explicit literals, whereas in this definition, it can be arbitrarily nested. For instance, $\sneg(p \vee \neg q)$ is a nested expression under this new definition, but it is not under [@LTT99]. 
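The syntactic difference with respect to [@LTT99] is easy to check mechanically. In the sketch below (the tuple encoding and function name are ours, not from the paper), nested expressions are represented as tuples, and membership in the restricted syntax of [@LTT99] amounts to testing whether every occurrence of ‘$\sneg$’ is applied directly to an atom:

```python
def atoms_only_sneg(f):
    """True iff every 'sneg' in f is applied directly to an atom,
    i.e. f belongs to the restricted syntax of LTT99."""
    op = f[0]
    if op in ('top', 'bot', 'atom'):
        return True
    if op == 'sneg':
        return f[1][0] == 'atom'
    # 'and' and 'or' are binary, 'not' is unary: recurse on subexpressions
    return all(atoms_only_sneg(g) for g in f[1:])

p = ('atom', 'p')
q = ('atom', 'q')

print(atoms_only_sneg(('sneg', p)))                      # True: an explicit literal
print(atoms_only_sneg(('sneg', ('or', p, ('not', q)))))  # False: ~ over a disjunction
```

The second expression is exactly $\sneg(p \vee \neg q)$, the example above that is admitted by the new definition but not by [@LTT99].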
A *rule* is an implication of the form $F \to G$ where $F$ and $G$ are nested expressions respectively called the *body*  and the *head* of the rule. A rule of the form $\top \to G$ is sometimes abbreviated as $G$ and is further called a *fact* if $G$ is an explicit literal. A *logic program* is a set of rules. We say that a nested expression, a rule or a program is *explicit* if it does not contain default negation. A program rule $F \to G$ is said to be *regular* if the body $F=B_1 \wedge \dots \wedge B_n$ is a conjunction of default literals and the head $G=H_1 \vee \dots \vee H_m$ is a disjunction of default literals. In a regular rule, we allow an empty body $n=0$ and write $F=\top$ or an empty head $m=0$ and $G=\bot$ but not both. A program is *regular* if all its rules are regular. An *interpretation* is a set of explicit literals that is consistent, that is, it does not contain both $p$ and $\sneg p$ for any atom $p$. We define when an interpretation $T$ *satisfies* (resp. *falsifies*) a nested expression $F$, written $T \models F$ (resp. 
$T \falsif F$) by providing the following recursive conditions: $$\begin{array}{r@{\,}c@{\,}ll@{\hspace{40pt}}r@{\,}c@{\,}ll} T & \models & \top & & T & \not\falsif & \top \\ T & \not\models & \bot & & T & \falsif & \bot \\ T & \models & p & \mbox{if } p \in T & T & \falsif & p & \mbox{if } \sneg p \in T \\ T & \models & \varphi \wedge \psi & \mbox{if } T\models \varphi \mbox{ and } T \models \psi& T & \falsif & \varphi \wedge \psi & \mbox{if } T\falsif \varphi \mbox{ or } T \falsif \psi \\ T & \models & \varphi \vee \psi & \mbox{if } T\models \varphi \mbox{ or } T \models \psi& T & \falsif & \varphi \vee \psi & \mbox{if } T\falsif \varphi \mbox{ and } T \falsif \psi \\ T & \models & \sneg \varphi & \mbox{if } T \falsif \varphi & T & \falsif & \sneg \varphi & \mbox{if } T \models \varphi\\ T & \models & \neg \varphi & \mbox{if } T \not\models \varphi & T & \falsif & \neg \varphi & \mbox{if } T \models \varphi \end{array}$$ As an example, given $\At=\{p,q\}$ and $T=\{\sneg p\}$ we have $T \models \sneg p \vee q$ because $T \models\, \sneg p$ (i.e. $T \falsif p$) although neither $T \models q$ nor $T \falsif q$, that is, $q$ is undefined. The latter can be expressed as $T \models \neg q \wedge \neg\!\sneg q$ (i.e., $q$ is neither true nor false). As another example, $T \falsif p \wedge q$ because $T \falsif p$ even though, as we said, $q$ is undefined. We say that $\varphi$ is *valid* if we have $T \models \varphi$ for every interpretation $T$. The logic induced by these valid expressions precisely corresponds to *classical logic with strong negation*. Note that, as usual in classical logic, $\varphi \to \psi$ is definable as $\neg\varphi \vee \psi$ in this context. Let $\Pi$ be an explicit program. A consistent set of literals $T$ is a *model* of $\Pi$ if, for every rule $F \to G$ in $\Pi$, $T \models G$ whenever $T \models F$. 
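The table above is short enough to transcribe directly into code. In the following sketch (the encoding is ours, not the paper’s: explicit literals are strings `'p'` and `'~p'`, nested expressions are tuples), `sat` and `fals` implement ‘$\models$’ and ‘$\falsif$’ and replay the $T=\{\sneg p\}$ examples:

```python
def sat(T, f):
    """T |= f : satisfaction by a consistent set of explicit literals."""
    op = f[0]
    if op == 'top': return True
    if op == 'bot': return False
    if op == 'atom': return f[1] in T
    if op == 'and': return sat(T, f[1]) and sat(T, f[2])
    if op == 'or':  return sat(T, f[1]) or sat(T, f[2])
    if op == 'sneg': return fals(T, f[1])
    if op == 'not': return not sat(T, f[1])

def fals(T, f):
    """T =| f : falsification, the right-hand column of the table."""
    op = f[0]
    if op == 'top': return False
    if op == 'bot': return True
    if op == 'atom': return '~' + f[1] in T
    if op == 'and': return fals(T, f[1]) or fals(T, f[2])
    if op == 'or':  return fals(T, f[1]) and fals(T, f[2])
    if op == 'sneg': return sat(T, f[1])
    if op == 'not': return sat(T, f[1])

p, q = ('atom', 'p'), ('atom', 'q')
T = {'~p'}
print(sat(T, ('or', ('sneg', p), q)))   # True:  T |= ~p v q
print(sat(T, q), fals(T, q))            # False False: q is undefined
print(fals(T, ('and', p, q)))           # True:  T =| p ^ q
```

Note how the asymmetry between the two negations shows up in code: ‘$\sneg$’ swaps the two relations, whereas ‘$\neg$’ is evaluated on the satisfaction side only.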
The reduct of a nested expression $F$ with respect to an interpretation $T$ is denoted as $F^T$ and defined recursively as follows: $$\begin{array}{rcll} p^T & {\mathbin{\stackrel{\mathrm{def}}{=}}}& p & \mbox{for any atom } p \in \At \\ (F \wedge G)^T & {\mathbin{\stackrel{\mathrm{def}}{=}}}& F^T \wedge G^T \\ (F \vee G)^T & {\mathbin{\stackrel{\mathrm{def}}{=}}}& F^T \vee G^T \\ (\sneg F)^T & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \sneg (F^T)\\ (\neg F)^T & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \left\{ \begin{array}{rl} \bot & \mbox{if } T \models F \\ \top & \mbox{otherwise} \end{array} \right.\\ \end{array}$$ The *reduct* of a program $\Pi$ with respect to $T$ corresponds to the explicit program:\ $\Pi^T {\mathbin{\stackrel{\mathrm{def}}{=}}}\{ \ (F^T \to G^T) \mid (F \to G) \in \Pi\ \}$. \[prop:total\_model\_reduct\] For any consistent set of literals $T$ and any nested formula $F$: - $T \models F$ iff $T \models F^T$; - $T \falsif F$ iff $T \falsif F^T$. A consistent set of literals $T$ is an *answer set* of a program $\Pi$ if it is a $\subseteq$-minimal model of the reduct $\Pi^T$. Notice that the definitions of reduct and answer set for the case of regular programs directly coincide with the standard definitions in ASP without nested expressions [@GelfondL91]. They also coincide with [@LTT99], defined for programs with nested expressions where ‘$\sneg$’ only occurs in front of atoms. \[ex:nonot\] Take the program $\Pi$ consisting of the single rule \eqref{f:snegneg}. For $\At=\{p\}$, we have three possible interpretations $T_1=\{p\}$, $T_2=\{\sneg p\}$ and $T_3=\emptyset$. This yields two possible reducts $\Pi^{T_1}=\{\sneg \bot \to p\}$ and $\Pi^{T_2}=\Pi^{T_3}=\{\sneg \top \to p\}$. It is easy to see that their corresponding minimal models are $T_1$ and $T_3$ which constitute the two answer sets of $\Pi$. 
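The reduct and the minimality test are both finite, so the whole definition can be checked by brute force. The self-contained sketch below (encoding and names ours: explicit literals as strings `'p'`/`'~p'`, nested expressions as tuples) enumerates all consistent sets of literals, so it is only meant for tiny alphabets; it replays the rule $\sneg \neg p \to p$ of the example above:

```python
from itertools import product

def sat(T, f):
    op = f[0]
    if op == 'top': return True
    if op == 'bot': return False
    if op == 'atom': return f[1] in T
    if op == 'and': return sat(T, f[1]) and sat(T, f[2])
    if op == 'or':  return sat(T, f[1]) or sat(T, f[2])
    if op == 'sneg': return fals(T, f[1])
    if op == 'not': return not sat(T, f[1])

def fals(T, f):
    op = f[0]
    if op == 'top': return False
    if op == 'bot': return True
    if op == 'atom': return '~' + f[1] in T
    if op == 'and': return fals(T, f[1]) or fals(T, f[2])
    if op == 'or':  return fals(T, f[1]) and fals(T, f[2])
    if op == 'sneg': return sat(T, f[1])
    if op == 'not': return sat(T, f[1])

def reduct(f, T):
    """(not F)^T becomes bot/top depending on T |= F; the rest is structural."""
    op = f[0]
    if op in ('top', 'bot', 'atom'):
        return f
    if op == 'not':
        return ('bot',) if sat(T, f[1]) else ('top',)
    if op == 'sneg':
        return ('sneg', reduct(f[1], T))
    return (op, reduct(f[1], T), reduct(f[2], T))

def interpretations(atoms):
    """All consistent sets of explicit literals over the given atoms."""
    for vals in product(('+', '-', '0'), repeat=len(atoms)):
        yield {('~' if v == '-' else '') + a
               for a, v in zip(atoms, vals) if v != '0'}

def is_model(T, rules):
    return all(sat(T, head) or not sat(T, body) for body, head in rules)

def answer_sets(prog, atoms):
    """T is an answer set iff T is a subset-minimal model of prog^T."""
    found = []
    for T in interpretations(atoms):
        red = [(reduct(b, T), reduct(h, T)) for b, h in prog]
        if is_model(T, red) and not any(
                H < T and is_model(H, red) for H in interpretations(atoms)):
            found.append(T)
    return found

p = ('atom', 'p')
prog = [(('sneg', ('not', p)), p)]       # the rule  ~ not p -> p
print(answer_sets(prog, ['p']))          # [{'p'}, set()]
```

The two answer sets $\{p\}$ and $\emptyset$ are exactly the minimal models $T_1$ and $T_3$ of the corresponding reducts, while $T_2=\{\sneg p\}$ is rejected because $\emptyset$ is a smaller model of $\Pi^{T_2}$.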
\[ex:bird\] Take the program consisting of the single rule: $$\begin{aligned} \neg ({\mathit{bird}} \wedge \sneg {\mathit{flies}}) \to \ \sneg ({\mathit{bird}} \wedge \sneg {\mathit{flies}}) \label{f:bird}\end{aligned}$$ capturing the idea that “being a bird that does not fly” should be false by default. If we choose any interpretation $T$ such that $T \models {\mathit{bird}} \wedge \sneg {\mathit{flies}}$ then the reduct will have a single rule with $\bot$ in the body and the minimal model will be $\emptyset$, which does not satisfy ${\mathit{bird}} \wedge \sneg {\mathit{flies}}$. If $T \not\models {\mathit{bird}} \wedge \sneg {\mathit{flies}}$ instead, the reduct becomes $\top \to \ \sneg ({\mathit{bird}} \wedge \sneg {\mathit{flies}})$ and the minimal models of this program are $\{\sneg {\mathit{bird}}\}$ and $\{{\mathit{flies}}\}$ which, as they are both compatible with the assumption for $T$, become the two answer sets of \eqref{f:bird}. Suppose we now extend \eqref{f:bird} with the fact ${\mathit{bird}}$. Doing so, it is easy to see that the only answer set becomes $\{{\mathit{flies}}\}$. Analogously, if we take \eqref{f:bird} plus the fact $\sneg {\mathit{flies}}$ the only answer set becomes $\{\sneg {\mathit{bird}}\}$. Finally, if we add the facts ${\mathit{bird}}$ and $\sneg {\mathit{flies}}$ to \eqref{f:bird}, the default is deactivated and we get the unique answer set $\{{\mathit{bird}},\sneg {\mathit{flies}}\}$. Equilibrium logic with explicit negation {#sec:eqx} ======================================== We start by defining the monotonic logic of *Here-and-There with explicit negation*, $\X5$. Let $\At$ be a set of atoms. A *formula* $\varphi$ is an expression built with the grammar: $$\varphi ::= p \mid \bot \mid \varphi \wedge \varphi \mid \varphi \vee \varphi \mid \varphi \to \varphi \mid \ \sneg \varphi$$ for any atom $p\in \At$. 
We also use the abbreviations: [2]{} $$\begin{aligned} \neg \varphi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& (\varphi \to \bot)\\ \top & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \neg \bot\\\end{aligned}$$ $$\begin{aligned} \\ \varphi \leftrightarrow \psi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& (\varphi \to \psi) \wedge (\psi \to \varphi)\\ \varphi \Leftrightarrow \psi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& (\varphi \leftrightarrow \psi) \wedge (\sneg \varphi \leftrightarrow \sneg \psi)\end{aligned}$$ Given a pair of formulas $\varphi$ and $\alpha$, we write $\varphi[\alpha/p]$ to denote the uniform substitution of all occurrences of atom $p$ in $\varphi$ by $\alpha$. As usual, a *theory* is a set of formulas. We sometimes understand finite theories (or subtheories) as the conjunction of their formulas. Notice that programs with nested expressions are also theories under this definition. An $\X5$-*interpretation* is a pair ${\langle H,T \rangle}$ of consistent sets of explicit literals (respectively standing for “here” and “there”) satisfying $H \subseteq T$. We say that the interpretation is *total* when $H=T$. \[def:satfals\] We say that ${\langle H,T \rangle}$ *satisfies* (resp. *falsifies*) a formula $\varphi$, written ${\langle H,T \rangle} \models \varphi$ (resp. 
${\langle H,T \rangle} \falsif \varphi$), when the following recursive conditions hold: $$\begin{array}{r@{\,}c@{\,}l@{\;}l@{\hspace{10pt}}r@{\,}c@{\,}l@{\;}l} {\langle H,T \rangle} & \models & \top & & {\langle H,T \rangle} & \not\falsif & \top \\ {\langle H,T \rangle} & \not\models & \bot & & {\langle H,T \rangle} & \falsif & \bot \\ {\langle H,T \rangle} & \models & p & \mbox{if } p \in H & {\langle H,T \rangle} & \falsif & p & \mbox{if } \sneg p \in H \\ {\langle H,T \rangle} & \models & \varphi \wedge \psi & \mbox{if } {\langle H,T \rangle} \models \varphi \mbox{ and } {\langle H,T \rangle} \models \psi& {\langle H,T \rangle} & \falsif & \varphi \wedge \psi & \mbox{if } {\langle H,T \rangle} \falsif \varphi \mbox{ or } {\langle H,T \rangle} \falsif \psi \\ {\langle H,T \rangle} & \models & \varphi \vee \psi & \mbox{if } {\langle H,T \rangle} \models \varphi \mbox{ or } {\langle H,T \rangle} \models \psi& {\langle H,T \rangle} & \falsif & \varphi \vee \psi & \mbox{if } {\langle H,T \rangle} \falsif \varphi \mbox{ and } {\langle H,T \rangle} \falsif \psi \\ {\langle H,T \rangle} & \models & \sneg \varphi & \mbox{if } {\langle H,T \rangle} \falsif \varphi & {\langle H,T \rangle} & \falsif & \sneg \varphi & \mbox{if } {\langle H,T \rangle} \models \varphi\\ {\langle H,T \rangle} & \models & \varphi\! \to \! \psi & \mbox{if both} & {\langle H,T \rangle} & \falsif & \varphi \! \to \! \psi & \mbox{if } {\langle T,T \rangle} \models \varphi \mbox{ and } {\langle H,T \rangle} \falsif \psi\\ & & & (i) {\langle H,T \rangle} \not\models \varphi \mbox{ or } {\langle H,T \rangle} \models \psi \\ & & & (ii) {\langle T,T \rangle} \not\models \varphi \mbox{ or } {\langle T,T \rangle} \models \psi & & & & \hfill\Box \end{array}$$ A formula $\varphi$ is a *tautology* (or is *valid*), written $\models \varphi$, if it is satisfied by every possible interpretation. 
We say that an $\X5$-interpretation ${\langle H,T \rangle}$ is a *model* of a theory $\Gamma$, written ${\langle H,T \rangle} \models \Gamma$, if ${\langle H,T \rangle}\models \varphi$ for all $\varphi \in \Gamma$. The next observation about Definition \[def:satfals\] connects satisfaction ‘$\models$’ with standard HT. \[obs:ht\] The satisfaction relation ‘$\models$’ (left column in Def. \[def:satfals\]) of any formula corresponds to regular HT satisfaction up to the first occurrence of ‘$\sim$’, where the falsification ‘$\falsif$’ comes into play. As a result, any tautology from HT can be shifted to $\X5$, even if its atoms are uniformly replaced by subformulas containing explicit negation. \[th:httaut\] If formula $\varphi$ is HT valid (and so, it does not contain $\sneg$ ) then $\varphi[\alpha/p]$ is also $\X5$ valid, for any formula $\alpha$ and any atom $p$. If we choose any $p$ not occurring in $\varphi$, then $\varphi[\alpha/p]=\varphi$ and the theorem above is just saying that $\X5$ is a conservative extension of HT. But it can also be exploited further by replacing, in the HT tautology, any atom by an arbitrary formula containing negation. For instance, if explicit negation only occurs in front of atoms, we essentially get HT with explicit literals playing the role of atoms (disregarding inconsistent models). However, when we combine explicit negation in an arbitrary way, some usual properties of HT need to be checked in the new context. \[lem:satisfaction\_total\_models\] Let $T$ be a consistent set of literals and $F$ a nested expression. Then: - ${\langle T,T \rangle} \models F$ iff $T \models F$; - ${\langle T,T \rangle} \falsif F$ iff $T \falsif F$. \[th:persistence\] For any $\X5$-interpretation ${\langle H,T \rangle}$ and any formula $\varphi$ then both: - ${\langle H,T \rangle} \models \varphi$ implies ${\langle T,T \rangle} \models \varphi$; - ${\langle H,T \rangle} \falsif \varphi$ implies ${\langle T,T \rangle} \falsif \varphi$. 
\[prop:default\_negation\] For any $\X5$-interpretation ${\langle H,T \rangle}$, any formula $\varphi$: - ${\langle H,T \rangle} \models \neg \varphi$ iff ${\langle T,T \rangle} \not\models \varphi$; - ${\langle H,T \rangle} \falsif \neg \varphi$ iff ${\langle T,T \rangle} \models \varphi$. The following results establish a connection between $\X5$ and the reduct of a nested expression or a program. \[lem:aux\_reduct\] Let ${\langle H,T \rangle}$ be an $\X5$-interpretation and $F$ a nested expression. Then: - ${\langle H,T \rangle} \models F$ iff $H \models F^T$; - ${\langle H,T \rangle} \falsif F$ iff $H \falsif F^T$. \[cor:equivalence\_for\_total\_model\] For any consistent set of literals $T$ and any program $\Pi$: ${\langle T,T \rangle} \models \Pi$ iff $T \models \Pi$. \[prop:htreduct\] For any $\X5$-interpretation ${\langle H,T \rangle}$ and any program $\Pi$: ${\langle H,T \rangle} \models \Pi$ iff $H$ is a model of $\Pi^T$ and $T$ is a model of $\Pi$. \[def:eqmodel\] A total $\X5$-interpretation ${\langle T,T \rangle}$ is an *equilibrium model* of a theory $\Gamma$ if ${\langle T,T \rangle}$ is a model of $\Gamma$ and there is no other model ${\langle H,T \rangle}$ of $\Gamma$ with $H \subset T$. *Equilibrium logic (with explicit negation)* is the non-monotonic logic induced by equilibrium models. The following theorem guarantees that equilibrium models and answer sets coincide for the syntax of programs with nested expressions. \[th:answersets\] An interpretation $T$ is an answer set of a program $\Pi$ iff ${\langle T,T \rangle}$ is an equilibrium model of $\Pi$. To conclude this section, we provide an alternative reduct definition for arbitrary formulas (and not just nested expressions) obtained as a generalisation of Ferraris’ reduct [@Fer05]. 
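Before that, Theorem \[th:answersets\] can be sanity-checked by brute force on a small alphabet. The sketch below (encoding ours: explicit literals as strings `'p'`/`'~p'`, formulas as tuples, with default negation $\neg\varphi$ derived as $\varphi \to \bot$) enumerates all $\X5$-interpretations over one atom and recovers, as equilibrium models of the rule $\sneg\neg p \to p$, exactly the two answer sets found in Example \[ex:nonot\]:

```python
from itertools import product

def hsat(H, T, f):
    """<H,T> |= f, following Definition def:satfals."""
    op = f[0]
    if op == 'bot': return False
    if op == 'atom': return f[1] in H
    if op == 'and': return hsat(H, T, f[1]) and hsat(H, T, f[2])
    if op == 'or':  return hsat(H, T, f[1]) or hsat(H, T, f[2])
    if op == 'sneg': return hfals(H, T, f[1])
    if op == 'imp':  # both the "here" and "there" conditions must hold
        return ((not hsat(H, T, f[1]) or hsat(H, T, f[2])) and
                (not hsat(T, T, f[1]) or hsat(T, T, f[2])))

def hfals(H, T, f):
    """<H,T> =| f."""
    op = f[0]
    if op == 'bot': return True
    if op == 'atom': return '~' + f[1] in H
    if op == 'and': return hfals(H, T, f[1]) or hfals(H, T, f[2])
    if op == 'or':  return hfals(H, T, f[1]) and hfals(H, T, f[2])
    if op == 'sneg': return hsat(H, T, f[1])
    if op == 'imp': return hsat(T, T, f[1]) and hfals(H, T, f[2])

def interps(atoms):
    """All consistent sets of explicit literals over the given atoms."""
    for vals in product(('+', '-', '0'), repeat=len(atoms)):
        yield {('~' if v == '-' else '') + a
               for a, v in zip(atoms, vals) if v != '0'}

def equilibrium(theory, atoms):
    """Total models <T,T> with no model <H,T> for H strictly inside T."""
    eq = []
    for T in interps(atoms):
        if all(hsat(T, T, f) for f in theory) and not any(
                H < T and all(hsat(H, T, f) for f in theory)
                for H in interps(atoms)):
            eq.append(T)
    return eq

p = ('atom', 'p')
neg = lambda f: ('imp', f, ('bot',))     # default negation as f -> bot
rule = ('imp', ('sneg', neg(p)), p)      # the rule  ~ not p -> p
print(equilibrium([rule], ['p']))        # [{'p'}, set()]
```

The two equilibrium models coincide with the answer sets $\{p\}$ and $\emptyset$ of Example \[ex:nonot\], as Theorem \[th:answersets\] requires, while $\{\sneg p\}$ is discarded because ${\langle \emptyset, \{\sneg p\} \rangle}$ is a smaller model.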
This generalisation introduces a main feature[^4] with respect to [@Fer05]: it actually uses two dual transformations, $\varphi^T_+$ and $\varphi^T_-$, to obtain a symmetric behaviour depending on the number of explicit negations in the scope. \[def:Ferraris\_reduct\] Given a formula $\varphi$ and an interpretation $T$ (a consistent set of explicit literals) we define the following pair of mutually recursive transformations: $$\begin{array}{cc} \varphi^T_+ {\mathbin{\stackrel{\mathrm{def}}{=}}}\left\{ \begin{array}{cl} \bot & \text{if } T \not\models \varphi \\ p & \text{if } \varphi=p \in \At, p \in T \\ \alpha^T_+ \otimes \beta^T_+ & \text{if } T \models \varphi, \varphi=\alpha \otimes \beta, \\ & \text{ for } \otimes \in\{\vee,\wedge \}\\ \neg (\alpha^T_+) \vee \beta^T_+ & \text{if } T \models \varphi, \varphi=\alpha \to \beta \\ \neg (\alpha^T_+) & \text{if } T \models \varphi, \varphi=\neg \alpha, \\ \sneg (\alpha^T_-) & \text{if } T \models \varphi, \varphi=\sneg \alpha \end{array} \right. & \varphi^T_- {\mathbin{\stackrel{\mathrm{def}}{=}}}\left\{ \begin{array}{cl} \top & \text{if } T \not\falsif \varphi\\ p & \text{if } \varphi=p \in \At, \sneg p \in T \\ \alpha^T_- \otimes \beta^T_- & \text{if } T \falsif \varphi, \varphi=\alpha \otimes \beta, \\ & \text{ for } \otimes \in\{\vee,\wedge \}\\ \beta^T_- & \text{if } T \falsif \varphi, \varphi=\alpha \to \beta\\ \bot & \text{if } T \falsif \varphi, \varphi=\neg \alpha\\ \sneg (\alpha^T_+) & \text{if } T \falsif \varphi, \varphi=\sneg \alpha \end{array} \right. \end{array}$$ The reduct $\Gamma^T_+$ of a theory $\Gamma$ is just defined as the set $\{\varphi^T_+ \mid \varphi \in \Gamma\}$. For instance, given $\varphi=\eqref{f:bird}$ and $T=\{\sneg {\mathit{bird}}\}$, the reader can check that the application of the definition above eventually produces the formula $\varphi^T_+ = \neg \neg \bot \vee \sneg ({\mathit{bird}} \wedge \top)$ which is equivalent to $\sneg {\mathit{bird}}$. 
If we take $T=\{{\mathit{flies}}\}$ instead, the result is $\varphi^T_+=\neg \neg \bot \vee \sneg (\top \wedge \sneg {\mathit{flies}})$, which is equivalent to ${\mathit{flies}}$. As a third example, if we take $T=\{{\mathit{bird}}\}$ then we directly get $\varphi^T_+=\bot$. \[th:Ferraris\_reduct\] For any formula $\varphi$ and any pair of interpretations $H \subseteq T$: 1. $H \models \varphi^T_+$ iff ${\langle H,T \rangle} \models \varphi$; 2. $H \falsif \varphi^T_-$ iff ${\langle H,T \rangle} \falsif \varphi$. From Lemma \[lem:aux\_reduct\] and Theorem \[th:Ferraris\_reduct\] we immediately conclude: \[cor:reducts\] For any nested expression $F$ and any pair of interpretations $H \subseteq T$: 1. $H \models F^T$ iff $T\models F$ and $H \models F^T_+$; 2. $H \falsif F^T$ iff $T\falsif F$ and $H \falsif F^T_-$. \[cor:reducteq\] ${\langle T,T \rangle}$ is an equilibrium model of $\Gamma$ iff $T$ is a minimal model of $\Gamma^T_+$. Back to the example formula $\varphi=\eqref{f:bird}$, taking $T=\{\sneg {\mathit{bird}}\}$ we saw that $\varphi^T_+$ is equivalent to $\sneg {\mathit{bird}}$ whose minimal model is obviously $T$. Therefore, ${\langle T,T \rangle}$ is an equilibrium model. Multivalued characterisation and equivalence relations {#sec:fiveval} ====================================================== An alternative way of characterising $\X5$ is as a five-valued logic defined as follows. 
Given any $\X5$-interpretation $M={\langle H,T \rangle}$ we define its corresponding 5-valued mapping $M: \At \to \{-2,-1,0,1,2\}$ so that, for any atom $p\in \At$: $$M(p) {\mathbin{\stackrel{\mathrm{def}}{=}}}\left\{ \begin{array}{rl} 2 & \mbox{if } p \in H\\ -2 & \mbox{if } \sneg p \in H\\ 1 & \mbox{if } p \in T\setminus H\\ -1 & \mbox{if } \sneg p \in T\setminus H\\ 0 & \mbox{otherwise, i.e., } p \not\in T, \sneg p \not\in T \end{array} \right.$$ We can read these five values as follows: $2$ = *proved to be true*; $-2$ = *proved to be false*; $1$ = *true by default*; $-1$ = *false by default*; and $0$ = *undefined*. Notice that values $1$ and $-1$ are used for explicit literals in $T \setminus H$. As a consequence: An $\X5$-interpretation $M={\langle H,T \rangle}$ is total (i.e. $H=T$) iff $M(p) \in \{-2,0,2\}$ for all $p \in \At$. \[def:valuation\] This 5-valuation can be extended to arbitrary formulas in the following way: $$\begin{array}{lcl} M(\bot) & {\mathbin{\stackrel{\mathrm{def}}{=}}}& -2 \\ M(\top) & {\mathbin{\stackrel{\mathrm{def}}{=}}}& 2 \\ M(\varphi \wedge \psi) & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \min(M(\varphi),M(\psi)) \\ M(\varphi \vee \psi) & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \max(M(\varphi),M(\psi)) \\ M(\varphi \to \psi) & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \left\{ \begin{array}{cl} 2 & \mbox{if } M(\varphi) \leq \max(M(\psi),0) \\ M(\psi) & \mbox{otherwise} \end{array} \right. \\ M(\sneg \varphi) & {\mathbin{\stackrel{\mathrm{def}}{=}}}& -M(\varphi) \end{array}$$ The designated value is $2$, that is, we will understand that a formula is satisfied when $M(\varphi)=2$. Moreover, a complete correspondence with the satisfaction/falsification of formulas given in the previous section is fixed by the following theorem: \[th:corresp\] For any $\X5$-interpretation $M={\langle H,T \rangle}$ and any formula $\varphi$: [2]{} - ${\langle H,T \rangle} \models \varphi$ iff $M(\varphi)=2$; - ${\langle T,T \rangle} \models \varphi$ iff $M(\varphi)>0$; - ${\langle H,T \rangle} \falsif \varphi$ iff $M(\varphi)=-2$; - ${\langle T,T \rangle} \falsif \varphi$ iff $M(\varphi)<0$. The equilibrium condition given in Definition \[def:eqmodel\] can be rephrased in 5-valued terms as follows. Given two $\X5$-interpretations $M={\langle H,T \rangle}$ and $M'={\langle H',T' \rangle}$ we say that $M$ is *smaller* than $M'$, written $M \leq M'$, when $T=T'$ and $H \subseteq H'$. 
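The valuation and Theorem \[th:corresp\] lend themselves to a mechanical cross-check. In the self-contained sketch below (all names are ours), `val` implements the 5-valuation of Definition \[def:valuation\] and `hsat`/`hfals` the relations of Definition \[def:satfals\]; we then confirm the four equivalences of Theorem \[th:corresp\] on a handful of sample formulas, over all interpretations $H \subseteq T$ for two atoms:

```python
from itertools import product

def hsat(H, T, f):
    op = f[0]
    if op == 'bot': return False
    if op == 'atom': return f[1] in H
    if op == 'and': return hsat(H, T, f[1]) and hsat(H, T, f[2])
    if op == 'or':  return hsat(H, T, f[1]) or hsat(H, T, f[2])
    if op == 'sneg': return hfals(H, T, f[1])
    if op == 'imp':
        return ((not hsat(H, T, f[1]) or hsat(H, T, f[2])) and
                (not hsat(T, T, f[1]) or hsat(T, T, f[2])))

def hfals(H, T, f):
    op = f[0]
    if op == 'bot': return True
    if op == 'atom': return '~' + f[1] in H
    if op == 'and': return hfals(H, T, f[1]) or hfals(H, T, f[2])
    if op == 'or':  return hfals(H, T, f[1]) and hfals(H, T, f[2])
    if op == 'sneg': return hsat(H, T, f[1])
    if op == 'imp': return hsat(T, T, f[1]) and hfals(H, T, f[2])

def val(H, T, f):
    """The five-valued mapping M of Definition def:valuation."""
    op = f[0]
    if op == 'bot': return -2
    if op == 'atom':
        a = f[1]
        if a in H: return 2
        if '~' + a in H: return -2
        if a in T: return 1
        if '~' + a in T: return -1
        return 0
    if op == 'and': return min(val(H, T, f[1]), val(H, T, f[2]))
    if op == 'or':  return max(val(H, T, f[1]), val(H, T, f[2]))
    if op == 'sneg': return -val(H, T, f[1])
    if op == 'imp':
        x, y = val(H, T, f[1]), val(H, T, f[2])
        return 2 if x <= max(y, 0) else y

def interps(atoms):
    for vals in product(('+', '-', '0'), repeat=len(atoms)):
        yield {('~' if v == '-' else '') + a
               for a, v in zip(atoms, vals) if v != '0'}

p, q = ('atom', 'p'), ('atom', 'q')
neg = lambda f: ('imp', f, ('bot',))
samples = [p, ('sneg', p), neg(p), ('imp', p, q),
           ('sneg', ('or', p, neg(q))), ('sneg', ('imp', p, ('sneg', q)))]

for T in interps(['p', 'q']):
    for H in interps(['p', 'q']):
        if not H <= T:
            continue
        for f in samples:
            v = val(H, T, f)
            assert (v == 2) == hsat(H, T, f)
            assert (v == -2) == hfals(H, T, f)
            assert (v > 0) == hsat(T, T, f)
            assert (v < 0) == hfals(T, T, f)
print("Theorem th:corresp confirmed on all sample formulas")
```

The enumeration is small (nine consistent literal sets per component), so the check is exhaustive for these samples rather than a proof, but it exercises every connective, including nested ‘$\sneg$’ over implications.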
\[prop:leq\] Let $M$ and $M'$ be a pair of $\X5$-interpretations. Then $M \leq M'$ iff, for any atom $p \in \At$, the following three conditions hold: 1. $M(p)=0$ iff $M'(p)=0$; 2. If $M(p) >0$, then $M(p) \leq M'(p)$; 3. If $M(p) <0$, then $M'(p) \leq M(p)$. A total interpretation $M={\langle T,T \rangle}$ is an equilibrium model of a theory $\Gamma$ iff $M(\varphi)=2$ for all $\varphi \in \Gamma$ and there is no $M' < M$ such that $M'(\varphi)=2$ for all $\varphi \in \Gamma$. It follows from Theorem \[th:corresp\] and the definition of $\leq$ relation. The truth tables derived from Definition \[def:valuation\] are depicted in Figure \[fig:tables\], including the tables for derived operators ‘$\neg$’, ‘$\leftrightarrow$’ and ‘$\Leftrightarrow$’. Note that the table for $\neg \varphi=(\varphi \to \bot)$ is just the first column of the table for ‘$\to$’ since the evaluation of ‘$\bot$’ is fixed to $-2$. $$\begin{array}{c@{\hspace{5pt}}c} \begin{array}{r|rrrrr} \wedge & -2 & -1 & 0 & 1 & 2\\ \hline -2 & -2 & -2 & -2 & -2 & -2 \\ -1 & -2 & -1 & -1 & -1 & -1 \\ 0 & -2 & -1 & 0 & 0 & 0 \\ 1 & -2 & -1 & 0 & 1 & 1 \\ 2 & -2 & -1 & 0 & 1 & 2 \end{array} & \begin{array}{r|rrrrr} \vee & -2 & -1 & 0 & 1 & 2\\ \hline -2 & -2 & -1 & 0 & 1 & 2 \\ -1 & -1 & -1 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 & 1 & 2 \\ 1 & 1 & 1 & 1 & 1 & 2 \\ 2 & 2 & 2 & 2 & 2 & 2 \end{array} \\ \\ \begin{array}{r|rrrrr} \to & -2 & -1 & 0 & 1 & 2\\ \hline -2 & 2 & 2 & 2 & 2 & 2 \\ -1 & 2 & 2 & 2 & 2 & 2 \\ 0 & 2 & 2 & 2 & 2 & 2 \\ 1 & -2 & -1 & 0 & 2 & 2 \\ 2 & -2 & -1 & 0 & 1 & 2 \end{array} & \begin{array}{c@{\hspace{20pt}}c} \begin{array}{r|r} \varphi & \sneg \varphi\\ \hline -2 & 2 \\ -1 & 1 \\ 0 & 0 \\ 1 & -1 \\ 2 & -2 \end{array} & \begin{array}{r|r} \varphi & \neg \varphi\\ \hline -2 & 2 \\ -1 & 2 \\ 0 & 2 \\ 1 & -2 \\ 2 & -2 \end{array} \end{array} \\ \\ \begin{array}{r|rrrrr} \leftrightarrow & -2 & -1 & 0 & 1 & 2\\ \hline -2 & 2 & 2 & 2 & -2 & -2 \\ -1 & 2 & 2 & 2 & -1 & -1 \\ 0 & 2 & 2 & 2 & 0 & 0 \\ 1 & 
-2 & -1 & 0 & 2 & 1 \\ 2 & -2 & -1 & 0 & 1 & 2 \end{array} & \begin{array}{r|rrrrr} \Leftrightarrow & -2 & -1 & 0 & 1 & 2\\ \hline -2 & 2 & 1 & 0 & -2 & -2 \\ -1 & 1 & 2 & 0 & -1 & -2 \\ 0 & 0 & 0 & 2 & 0 & 0 \\ 1 & -2 & -1 & 0 & 2 & 1 \\ 2 & -2 & -2 & 0 & 1 & 2 \end{array} \end{array}$$ It is easy to check, for instance, that the following implication is valid: $$\begin{aligned} \sneg \varphi \to \neg \varphi \label{f:coher}\end{aligned}$$ expressing that explicit negation is stronger than default negation[^5]. Moreover, default negation is definable in terms of implication and explicit negation (without resorting to $\bot$) since, with some effort, it can be checked that the table for $\neg \varphi$ can be equally obtained through the expression: $$\begin{aligned} \sneg ((\varphi \to \ \sneg \varphi) \to \ \sneg (\varphi \to \ \sneg \varphi))\end{aligned}$$ An important remark regarding equivalence is that, to express that this (or any other) pair of formulas is equivalent, double implication does not suffice. This is because, as we can see in the tables, $M(\varphi \leftrightarrow \psi)=2$ does not imply that $M(\varphi)=M(\psi)$. To get such a correspondence, we must resort instead to the stronger ‘$\Leftrightarrow$’ for which $M(\varphi \Leftrightarrow \psi)=2$ holds if and only if $M(\varphi)=M(\psi)$. This weakness of ‘$\leftrightarrow$’ equivalence (we call it *weak* equivalence) has an important consequence: it does not define a congruence relation, since $\models \alpha \leftrightarrow \beta$ no longer implies that we can freely replace subformula $\alpha$ by $\beta$ in an arbitrary context: it may be the case that $\not\models \sneg \alpha \leftrightarrow \sneg \beta$. For instance, we can easily check that $\models p \wedge \neg p \leftrightarrow \bot$ because $\min(M(p),M(\neg p)) \leq 0$ and $M(\bot)=-2$, so $M(p \wedge \neg p \leftrightarrow \bot)=2$ for any $M$. However, we cannot replace $p \wedge \neg p$ by $\bot$ in any context. 
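The three claims just made, the validity of \eqref{f:coher}, the $\bot$-free definition of ‘$\neg$’, and the failure of ‘$\leftrightarrow$’ as a congruence under ‘$\sneg$’, are all finite checks over the five truth values. A sketch (function names are ours):

```python
# Truth functions for '->', the derived 'not' and '<->' over {-2,-1,0,1,2}.
def imp(x, y): return 2 if x <= max(y, 0) else y
def neg(x):    return imp(x, -2)                    # not f  =  f -> bot
def biimp(x, y): return min(imp(x, y), imp(y, x))   # weak equivalence

V = (-2, -1, 0, 1, 2)

# (f:coher): ~f -> not f is valid, explicit implies default negation
assert all(imp(-x, neg(x)) == 2 for x in V)

# not f coincides with  ~((f -> ~f) -> ~(f -> ~f)),  no bot needed
def neg_via_sneg(x):
    a = imp(x, -x)
    return -imp(a, -a)
assert all(neg_via_sneg(x) == neg(x) for x in V)

# p ^ not p <-> bot is (weakly) valid ...
assert all(biimp(min(x, neg(x)), -2) == 2 for x in V)
# ... yet prefixing both sides with '~' breaks validity
assert any(biimp(-min(x, neg(x)), -(-2)) != 2 for x in V)
print("all five-valued checks pass")
```

The last assertion finds its witness at $M(p)=-1$: then $\sneg(p \wedge \neg p)$ evaluates to $1$ while $\sneg\bot$ evaluates to $2$, so the weak equivalence is lost exactly as the program example below shows at the level of answer sets.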
Take the program $\Pi$ consisting of the unique rule $$\begin{aligned} \sneg (p \wedge \neg p) \label{f:pnotp}\end{aligned}$$ with empty body. Interpretation $T=\{\sneg p\}$ is an answer set because $\Pi^T=\{\sneg (p \wedge \top)\}$ has $\{\sneg p\}$ as minimal model (in fact, it is the unique answer set) but if we replace $p \wedge \neg p$ by $\bot$ in $\Pi$ we get the trivial program $\{\sneg \bot\}$ whose unique answer set is $\emptyset$. Although weak equivalence does not guarantee arbitrary replacements, it can be used to replace formulas in a theory, as stated below: \[prop:replace\] Let $\alpha$, $\beta$ be a pair of formulas such that $\models \alpha \leftrightarrow \beta$. Then, $M \models \Gamma \cup \{\alpha\}$ iff $M \models \Gamma \cup \{\beta\}$ for any theory $\Gamma$ and $\X5$-interpretation $M$. As we mentioned before, for obtaining a congruence relation we can use validity of ‘$\Leftrightarrow$’ instead, which guarantees the following substitution theorem. \[th:subst\] Let $\alpha$, $\beta$ be a pair of formulas satisfying $\models \alpha \Leftrightarrow \beta$. Then, for any formula $\varphi$, we also obtain $\models \varphi[\alpha/p] \Leftrightarrow \varphi[\beta/p]$. Still, there are some cases in which $\leftrightarrow$ can be used for substitution, provided that the replaced formulas are not in the scope of explicit negation. \[th:replace\] Let $\varphi$ be a formula where atom $p$ only occurs outside the scope of explicit negation, and let $\alpha, \beta$ be two formulas satisfying $\models \alpha \leftrightarrow \beta$. Then, $\models \varphi[\alpha/p] \leftrightarrow \varphi[\beta/p]$. An important property of ASP related to HT equivalence is *strong equivalence*. We say that two programs (resp. theories) $\Gamma$ and $\Gamma'$ are *strongly equivalent* iff $\Gamma \cup \Delta$ and $\Gamma' \cup \Delta$ have the same answer sets (resp. equilibrium models), for any additional program (resp. theory) $\Delta$. 
When we talk about strong equivalence of formulas $\alpha$ and $\beta$ we assume they correspond to the singleton theories $\{\alpha\}$ and $\{\beta\}$. As shown in [@LPV01] (for the case without explicit negation), two programs or theories are strongly equivalent if and only if they are HT equivalent. Since the ‘$\leftrightarrow$’ relation in HT is congruent, there is no difference between strong equivalence (replacing formulas in a theory) and substitution (replacing subformulas in a formula). However, as explained in [@Ortiz07], once congruence is lost, we can further refine strong equivalence in the following way. We say that two formulas $\alpha$ and $\beta$ are *strongly equivalent on substitutions* if $\Delta \cup \{ \varphi[\alpha/p] \} $ and $\Delta \cup \{ \varphi[\beta/p] \}$ have the same equilibrium models, for any formula $\varphi$ and theory $\Delta$. The proof of the next lemma can be obtained following similar steps to the proof of the main theorem in [@LPV01], replacing atoms in that case by explicit literals in ours. \[lem:strong.equivalence.aux\] Let $\alpha$ and $\beta$ be two formulas and ${\langle H,T \rangle}$ be an interpretation such that ${\langle H,T \rangle} \models\alpha$ but ${\langle H,T \rangle} \not\models\beta$. Then, there is a finite theory $\Delta$ such that ${\langle T,T \rangle} $ is an equilibrium model of one of $\Delta \cup {\ensuremath{\{\beta\}}}$, $\Delta \cup {\ensuremath{\{ \alpha \}}}$ but not of both. \[thm:strong.equivalence\] Formulas $\alpha$ and $\beta$ are strongly equivalent iff $\models \alpha \leftrightarrow \beta$. \[th:substeq\] Formulas $\alpha$ and $\beta$ are strongly equivalent on substitutions iff $\models \alpha \Leftrightarrow \beta$. The following set of valid equivalences allows us to reduce any nested expression with explicit negation to an *explicit negation normal form* (NNF), where $\sneg$ is only applied to atoms.
$$\begin{aligned} \sneg \top & \Leftrightarrow & \bot \label{f:nnf1}\\ \sneg \bot & \Leftrightarrow & \top \label{f:nnf2}\\ \sneg (\varphi \wedge \psi) & \Leftrightarrow & \sneg \varphi \,\,\vee \sneg \psi \label{f:nnf3}\\ \sneg (\varphi \vee \psi) & \Leftrightarrow & \sneg \varphi \,\,\wedge \sneg \psi \label{f:nnf4}\\ \sneg \ \sneg \varphi & \Leftrightarrow & \varphi \label{f:nnf5}\\ \sneg \neg \varphi & \Leftrightarrow & \neg \neg \varphi \label{f:nnf6}\end{aligned}$$ For instance, we can reduce the nested expression \eqref{f:pnotp} to NNF as follows: $$\begin{array}{rcll} \sneg (p \wedge \neg p) & \Leftrightarrow & \sneg p \vee \sneg \neg p & \mbox{ by } \eqref{f:nnf3}\\ & \Leftrightarrow & \sneg p \vee \neg \neg p & \mbox{ by } \eqref{f:nnf6} \end{array}$$ Programs in NNF correspond to the original syntax in [@LTT99]. That paper provided several transformations that allowed any program in NNF to be reduced to a regular program. These transformations included commutativity and associativity of conjunction and disjunction (which are obviously satisfied in $\X5$) plus the equivalences in the following proposition.
The following formulas are $\X5$ tautologies: $$\begin{aligned} \varphi \wedge (\psi \vee \gamma) \Leftrightarrow (\varphi \wedge \psi) \vee (\varphi \wedge \gamma) & & \varphi \vee (\psi \wedge \gamma) \Leftrightarrow (\varphi \vee \psi) \wedge (\varphi \vee \gamma) \label{f:distrib} \\ \varphi \wedge \bot \Leftrightarrow \bot & & \varphi \vee \top \Leftrightarrow \top \label{f:anhil} \\ \varphi \wedge \top \Leftrightarrow \varphi & & \varphi \vee \bot \Leftrightarrow \varphi \label{f:neut} \\ \neg (\varphi \wedge \psi) \Leftrightarrow \neg \varphi \vee \neg \psi & & \neg (\varphi \vee \psi) \Leftrightarrow \neg \varphi \wedge \neg \psi \label{f:demorgan} \\ \neg \top \Leftrightarrow \bot & & \neg \bot \Leftrightarrow \top \label{f:notconst}\end{aligned}$$ $$\begin{aligned} \neg \neg \neg \varphi & \Leftrightarrow & \neg \varphi \label{f:triplenot}\\ \varphi \to \psi \wedge \gamma & \Leftrightarrow & (\varphi \to \psi) \wedge (\varphi \to \gamma) \label{f:andhead}\\ \varphi \vee \psi \to \gamma & \Leftrightarrow & (\varphi \to \gamma) \wedge (\psi \to \gamma) \label{f:orbody}\\ \varphi \wedge \neg \neg \psi \to \gamma & \Leftrightarrow & \varphi \to \gamma \vee \neg \psi \label{f:notbody}\\ \varphi \to \gamma \vee \neg \neg \psi & \Leftrightarrow & \varphi \wedge \neg \psi \to \gamma \label{f:nothead}\end{aligned}$$ and correspond to the transformations in [@LTT99]. For instance, as we saw, \eqref{f:pnotp} was equivalent to $\sneg p \vee \neg \neg p$ but this can be further transformed into the regular rule $\neg p \to \sneg p$ commonly used to assign falsity of $p$ by default.
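As a quick sanity check of this last step, the three formulations of \eqref{f:pnotp} take the same value under every valuation of $p$. A sketch (helper names are mine):

```python
# Check that ~(p /\ -p), its NNF ~p \/ --p, and the regular rule -p -> ~p
# agree on all five truth values of p.
VALS = [-2, -1, 0, 1, 2]
def impl(x, y): return 2 if (x <= 0 or x <= y) else y
def neg(x):     return impl(x, -2)

for p in VALS:
    nested = -min(p, neg(p))        # ~(p /\ -p)
    in_nnf = max(-p, neg(neg(p)))   # ~p \/ --p
    rule   = impl(neg(p), -p)       # -p -> ~p
    assert nested == in_nnf == rule
```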
Rule \eqref{f:bird} can be transformed as follows: $$\begin{array}{rcl@{\ \ \ }l} \eqref{f:bird} & \Leftrightarrow & \neg {\mathit{bird}} \vee \neg\!\!\sneg {\mathit{flies}} \to \ \sneg ({\mathit{bird}} \wedge \sneg {\mathit{flies}}) & \mbox{by } \eqref{f:demorgan}\\ & \Leftrightarrow & \neg {\mathit{bird}} \vee \neg\!\!\sneg {\mathit{flies}} \to \ \sneg {\mathit{bird}} \vee \sneg \ \sneg {\mathit{flies}} & \mbox{by } \eqref{f:nnf3} \\ & \Leftrightarrow & \neg {\mathit{bird}} \vee \neg\!\!\sneg {\mathit{flies}} \to \ \sneg {\mathit{bird}} \vee {\mathit{flies}} & \mbox{by } \eqref{f:nnf5} \\ & \Leftrightarrow & (\neg {\mathit{bird}} \to \ \sneg {\mathit{bird}} \vee {\mathit{flies}})\\ & & \wedge (\neg\!\!\sneg {\mathit{flies}} \to \ \sneg {\mathit{bird}} \vee {\mathit{flies}}) & \mbox{by } \eqref{f:orbody} \\ \end{array}$$ and the last step is a conjunction of two regular rules as in standard ASP solvers. Reduction to NNF is also possible on arbitrary formulas. For that purpose, we can combine \eqref{f:nnf1}-\eqref{f:nnf6} with the following valid (weak) equivalence: $$\begin{aligned} \sneg (\varphi \to \psi) & \leftrightarrow & \neg \neg \varphi \wedge \sneg \psi \label{f:nnf7}\end{aligned}$$ However, the reduction must be done with some care, because this last equivalence cannot be shifted to $\Leftrightarrow$. Indeed, the left and right expressions have different valuations when $M(\varphi)=M(\psi)=1$, obtaining $M(\sneg (\varphi \to \psi))=-2 \neq -1 = M(\neg \neg \varphi \wedge \sneg \psi)$. Fortunately, Theorem \[th:replace\] allows us to apply \eqref{f:nnf7} to the outermost occurrence of $\sim$ and then recursively combine it with \eqref{f:nnf1}-\eqref{f:nnf6} until $\sim$ is only applied to atoms. For any formula $\varphi$ there exists a formula $\psi$ in NNF such that $\models \varphi \leftrightarrow \psi$.
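This reduction procedure can be prototyped directly: push ‘$\sim$’ inward with \eqref{f:nnf1}-\eqref{f:nnf6}, and with the weak equivalence \eqref{f:nnf7} over implications, outermost occurrence first. A sketch, with formulas encoded as nested tuples (the encoding and all names are mine):

```python
# Sketch of the NNF reduction: '~' is pushed inward, applying the weak
# equivalence for '~' over '->' only at the outermost occurrence first.
# Formulas are tuples, e.g. ('impl', ('atom', 'p'), ('atom', 'q')).
from itertools import product

VALS = [-2, -1, 0, 1, 2]

def ev(f, m):
    op = f[0]
    if op == 'atom': return m[f[1]]
    if op == 'top':  return 2
    if op == 'bot':  return -2
    if op == 'sneg': return -ev(f[1], m)
    if op == 'neg':  return 2 if ev(f[1], m) <= 0 else -2
    if op == 'and':  return min(ev(f[1], m), ev(f[2], m))
    if op == 'or':   return max(ev(f[1], m), ev(f[2], m))
    x, y = ev(f[1], m), ev(f[2], m)          # op == 'impl'
    return 2 if (x <= 0 or x <= y) else y

def nnf(f):
    op = f[0]
    if op == 'sneg':
        return push(f[1])
    if op in ('and', 'or', 'impl'):
        return (op, nnf(f[1]), nnf(f[2]))
    if op == 'neg':
        return ('neg', nnf(f[1]))
    return f

def push(f):                                  # NNF of ~f
    op = f[0]
    if op == 'atom': return ('sneg', f)
    if op == 'top':  return ('bot',)
    if op == 'bot':  return ('top',)
    if op == 'and':  return ('or', push(f[1]), push(f[2]))
    if op == 'or':   return ('and', push(f[1]), push(f[2]))
    if op == 'sneg': return nnf(f[1])
    if op == 'neg':  return ('neg', ('neg', nnf(f[1])))
    return ('and', ('neg', ('neg', nnf(f[1]))), push(f[2]))   # 'impl'

def weakly_equiv(x, y):
    i = lambda u, v: 2 if (u <= 0 or u <= v) else v
    return min(i(x, y), i(y, x)) == 2

A, B, C, D = ('atom', 'a'), ('atom', 'b'), ('atom', 'c'), ('atom', 'd')
phi = ('sneg', ('impl', A, ('and', ('sneg', B), ('impl', C, D))))
psi = nnf(phi)
for vals in product(VALS, repeat=4):
    m = dict(zip('abcd', vals))
    assert weakly_equiv(ev(phi, m), ev(psi, m))

# The implication case is only weak: at M(phi)=M(psi)=1 the sides differ.
m = {'a': 1, 'b': 1}
assert ev(('sneg', ('impl', A, B)), m) == -2
assert ev(('and', ('neg', ('neg', A)), ('sneg', B)), m) == -1
```

The loop checks, over all $5^4$ valuations, that the example formula of the next paragraph and its computed NNF are weakly equivalent, while the final two assertions reproduce the $-2 \neq -1$ counterexample given above.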
For instance, we can reduce the following formula to NNF as follows: $$\begin{aligned} \sim (a \to \ \sneg b \wedge (c \to d)) & \leftrightarrow & \neg \neg a \wedge \sim( \sneg b \wedge (c \to d)) \\ & \leftrightarrow & \neg \neg a \wedge (\sneg \ \sneg b \vee \sneg (c \to d)) \\ & \leftrightarrow & \neg \neg a \wedge (b \vee \neg \neg c \wedge \sneg d)\end{aligned}$$ However, we cannot apply \eqref{f:nnf7} to make a replacement in the scope of explicit negation. A clear counterexample is the formula $\sneg \ \sneg (p \to q)$ that, due to \eqref{f:nnf5}, is strongly equivalent to $p \to q$, but applying \eqref{f:nnf7} inside would incorrectly lead to the nested expression $\sneg (\neg \neg p \wedge \sneg q)$ that can be transformed into the strongly equivalent expression $\neg p \vee q$, different from $p \to q$ in ASP. Related work {#sec:related} ============ As explained in the introduction, this work is obviously related to the characterisation of ‘$\sim$’ as Nelson’s *strong negation* [@Nel49] for intermediate logics. In particular, the addition of strong negation to HT produces the five-valued logic $\N5$ already present in the original definition of Equilibrium Logic [@Pearce96]. In fact, the interpretations and the truth values we have chosen for $\X5$ coincide with those for $\N5$, and their evaluation of (non-derived) connectives $\top, \wedge, \vee$ and $\to$ from Figure \[fig:tables\] also coincides in both logics, except for one difference in the table of implication: the value for $M(\varphi)=1$ and $M(\psi)=-2$ changes from $-2$ to $-1$ in $\N5$. This change and its result on derived operators is shown in Figure \[fig:tablesn5\] where the different values are framed in rectangles.
$$\begin{array}{c@{\hspace{20pt}}c} \begin{array}{r|rrrrr} \to & -2 & -1 & 0 & 1 & 2\\ \hline -2 & 2 & 2 & 2 & 2 & 2 \\ -1 & 2 & 2 & 2 & 2 & 2 \\ 0 & 2 & 2 & 2 & 2 & 2 \\ 1 & \minusone & -1 & 0 & 2 & 2 \\ 2 & -2 & -1 & 0 & 1 & 2 \end{array} & \begin{array}{c@{\hspace{20pt}}c} \begin{array}{r|r} \varphi & \neg \varphi\\ \hline -2 & 2 \\ -1 & 2 \\ 0 & 2 \\ 1 & \minusone \\ 2 & -2 \end{array} \end{array} \\ \\ \begin{array}{r|rrrrr} \leftrightarrow & -2 & -1 & 0 & 1 & 2\\ \hline -2 & 2 & 2 & 2 & \minusone & -2 \\ -1 & 2 & 2 & 2 & -1 & -1 \\ 0 & 2 & 2 & 2 & 0 & 0 \\ 1 & \minusone & -1 & 0 & 2 & 1 \\ 2 & -2 & -1 & 0 & 1 & 2 \end{array} & \begin{array}{r|rrrrr} \Leftrightarrow & -2 & -1 & 0 & 1 & 2\\ \hline -2 & 2 & 1 & 0 & \minusone & -2 \\ -1 & 1 & 2 & 0 & -1 & -2 \\ 0 & 0 & 0 & 2 & 0 & 0 \\ 1 & \minusone & -1 & 0 & 2 & 1 \\ 2 & -2 & -2 & 0 & 1 & 2 \end{array} \end{array}$$ As a result, $\N5$ ceases to satisfy \eqref{f:nnf6} and \eqref{f:nnf7}, whose role in the reduction to NNF is respectively replaced by the $\N5$-valid weak equivalences: $$\begin{aligned} \sneg \neg \varphi & \leftrightarrow & \varphi \label{f:N1}\\ \sneg (\varphi \to \psi) & \leftrightarrow & \varphi \wedge \sneg \psi \label{f:N2}\end{aligned}$$ The difference between \eqref{f:nnf7} and \eqref{f:N2} also reveals the effect on falsification of implication in both logics. While ${\langle H,T \rangle} \falsif \varphi \to \psi$ requires ${\langle T,T \rangle} \models \varphi$ in $\X5$, this is replaced by the condition ${\langle H,T \rangle}\models \varphi$ in $\N5$. Curiously, although these two logics provide a different behaviour for $\sneg$  as strong versus explicit negation, they actually have the same evaluation for that connective, while their real technical difference lies in the falsity of implication. The reason why $\N5$ does not capture the extended reduct for nested expressions proposed in this paper is that \eqref{f:triplenot} is not valid in that logic. This is because, when $M(\varphi)=1$, we get $M(\neg \varphi)=-1 \neq -2 = M(\neg \neg \neg \varphi)$.
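The single differing table entry, and its knock-on effect on \eqref{f:triplenot}, can be checked mechanically. A sketch (names are mine):

```python
# N5 implication differs from X5 implication only at the entry (1, -2);
# as a consequence, ---phi <=> -phi holds in X5 but fails in N5.
VALS = [-2, -1, 0, 1, 2]

def impl_x5(x, y): return 2 if (x <= 0 or x <= y) else y
def impl_n5(x, y): return -1 if (x, y) == (1, -2) else impl_x5(x, y)
def neg_x5(x): return impl_x5(x, -2)
def neg_n5(x): return impl_n5(x, -2)

diffs = [(x, y) for x in VALS for y in VALS if impl_x5(x, y) != impl_n5(x, y)]
assert diffs == [(1, -2)]

assert all(neg_x5(neg_x5(neg_x5(x))) == neg_x5(x) for x in VALS)   # X5-valid
assert neg_n5(1) == -1 and neg_n5(neg_n5(neg_n5(1))) == -2         # N5 failure
```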
It is still possible to define $\N5$ operators in $\X5$ as follows: $$\begin{aligned} \varphi \stackrel{\N5}{\to} \psi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \varphi \to \ \sneg \varphi \vee \psi \\ \stackrel{\N5}{\neg} \varphi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \varphi \to \ \sneg \varphi\end{aligned}$$ using here the $\X5$ interpretation for implication. Analogously, we can also define the $\X5$ operators in $\N5$ in the following way: $$\begin{aligned} \varphi \stackrel{\X5}{\to} \psi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& (\varphi \to \psi) \wedge (\sneg \psi \to \ \neg \neg \neg \varphi) \\ \stackrel{\X5}{\neg} \varphi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \neg \neg \neg \varphi\end{aligned}$$ assuming that we interpret implication and $\neg$ under $\N5$ instead. An interesting connection between both variants is that the addition of the excluded middle axiom schema $\varphi \vee \neg \varphi$ imposes the restriction of total models ${\langle T,T \rangle}$ both in $\X5$ and in $\N5$. This means that all atoms and formulas are evaluated in the set $\{-2,0,2\}$, for which the truth tables coincide in these two logics and actually collapse to classical logic with strong negation [@vakarelov1977notes] introduced in Section \[sec:nested\]. This coincidence is important since equilibrium models (and so, answer sets) are total models. To conclude the section on related work, another possibility for interpreting a second negation ‘$\sneg$’ inside intuitionistic logic was provided by [@FH96] using a *classical* negation interpretation. Although the idea seems closer to Gelfond and Lifschitz’ original terminology for a second negation, it actually produces undesired effects from an ASP point of view.
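The two inter-definability claims above amount to 25 table checks each; a sketch verifying them (names are mine):

```python
# Verify that the N5 connectives are definable in X5 and vice versa.
VALS = [-2, -1, 0, 1, 2]
def impl_x5(x, y): return 2 if (x <= 0 or x <= y) else y
def impl_n5(x, y): return -1 if (x, y) == (1, -2) else impl_x5(x, y)
def neg_n5(x): return impl_n5(x, -2)
def neg3_n5(x): return neg_n5(neg_n5(neg_n5(x)))

for x in VALS:
    # negations: phi -> ~phi (read in X5) equals the N5 negation,
    # and ---phi (read in N5) equals the X5 negation
    assert impl_x5(x, -x) == neg_n5(x)
    assert neg3_n5(x) == impl_x5(x, -2)
    for y in VALS:
        # implications
        assert impl_x5(x, max(-x, y)) == impl_n5(x, y)
        assert min(impl_n5(x, y), impl_n5(-y, neg3_n5(x))) == impl_x5(x, y)
```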
Classical negation in HT means keeping only the satisfaction relation ‘$\models$’ in Definition \[def:satfals\] (falsification ‘$\falsif$’ is not needed) but replacing the condition for ‘$\sneg$’ so that ${\langle H,T \rangle} \models \sneg \varphi$ if ${\langle H,T \rangle} \not\models \varphi$. One important effect of this change is that HT with classical negation ceases to satisfy the persistence property (Theorem \[th:persistence\]). But perhaps a more important problem from the ASP perspective is that $\neg p$ implies $\sneg p$ for any atom $p$. Thus, the rule $\neg p \to \sneg p$ becomes a tautology in this context, whereas it is normally used in ASP to conclude that $p$ is explicitly false by default. Conclusions {#sec:conc} =========== We have introduced a variant of constructive negation in Equilibrium Logic (and its monotonic basis, HT) that we called *explicit negation*. This variant shares some similarities with the previous formalisation based on Nelson’s strong negation, but changes the interpretation for falsity of implication. We have also introduced a reduct-based definition of answer sets for programs with nested expressions extended with explicit negation, proving the correspondence with equilibrium models. For future work, we will study a possible axiomatisation. To this aim, it is interesting to observe that the formulas \eqref{f:nnf1}-\eqref{f:nnf5} (in their weak equivalence versions) plus \eqref{f:N1} and \eqref{f:N2} actually correspond to Vorob’ev’s axiomatisation [@Vo52a; @Vo52b] of strong negation in intuitionistic logic. As we saw, the role of \eqref{f:N1} and \eqref{f:N2} in $\N5$ is replaced in $\X5$ by \eqref{f:nnf6} and \eqref{f:nnf7}, so an interesting question is whether this replacement may become a complete axiomatisation for explicit negation in $\X5$ or intuitionistic logic in the general case.
We also plan to explore the effect of explicit negation on extensions of equilibrium logic, revisiting the use of strong negation in paraconsistent [@OdintsovP05] and partial [@COP06] equilibrium logic, or considering its combination with partial functions [@Cab11; @CabalarCPV14], and temporal [@ACD+13] or epistemic [@CerroHS15; @CFF19] reasoning. \[lastpage\] [^1]: This work was partially supported by MINECO, Spain, grant , Xunta de Galicia, Spain (GPC ED431B 2019/03 and 2016-2019 ED431G/01, CITIC). The third author is funded by the Centre International de Mathématiques et d’Informatique de Toulouse (CIMI) through contract ANR-11-LABEX-0040-CIMI within the programme ANR-11-IDEX-0002-02 and the Alexander von Humboldt Foundation. [^2]: In fact, the construct “$\sneg {\mathit{train}}$” is normally treated in ASP as a new atom ${\mathit{train}}'$ and an implicit constraint ${\mathit{train}} \wedge {\mathit{train}}' \to \bot$ is used to guarantee that both atoms cannot be true simultaneously. [^3]: To be precise, [@LTT99] used a different notation and names for operators: $\wedge$, $\vee$ and $\neg$ were respectively denoted as comma, semicolon and ‘not’ in [@LTT99], whereas explicit negation $\sneg$ was denoted as $\neg$ and called *classical negation*. [^4]: We also provide a translation for implications $\alpha \to \beta$ but this is not strictly necessary: for computing the reduct, they can be previously replaced by $\neg \alpha \vee \beta$. [^5]: This property is called the *coherence* principle in [@Per92].
--- author: - Elad Noor bibliography: - 'stfba\_lib.bib' title: 'Removing both Internal and Unrealistic Energy-Generating Cycles in Flux Balance Analysis' --- Abstract ======== Constraint-based stoichiometric models are ubiquitous in metabolic research, with Flux Balance Analysis (FBA) being the most widely used method to describe metabolic phenotypes of cells growing in steady-state. Of the many variants of constraint-based modelling methods published throughout the years, only a few have focused on thermodynamic issues, in particular the elimination of non-physical and non-physiological cyclic fluxes. In this work, we revisit two of these methods, namely thermodynamic FBA and loopless FBA, and analyze the strengths and weaknesses of each one. Finally, we suggest a compromise denoted *semi-thermodynamic FBA* (st-FBA) which imposes stronger thermodynamic constraints on the flux polytope compared to loopless FBA, without requiring a large set of thermodynamic parameters as in the case of thermodynamic FBA. We show that st-FBA is a useful and simple way to eliminate thermodynamically infeasible cycles that generate ATP. Introduction ============ The repertoire of genome-scale metabolic reconstructions is growing quickly, with more and more organisms being modeled every year [@King2016-it]. Likewise, the list of constraint-based modeling methods is getting longer with more than 100 papers published since 1961 at a rapidly increasing rate [@Lewis2012-mu] (see <http://cobramethods.wikidot.com/methods>). Flux Balance Analysis (FBA) is arguably the best-known method for describing possible steady-state fluxes in a growing cell, and is typically used to predict biomass yield and reaction essentiality.
Throughout the years, many extensions have been suggested in order to incorporate different aspects of thermodynamics into constraint-based frameworks [@Beard2002-xt; @Warren2007-wm; @Henry2007-xp; @Schellenberger2011-bq; @Price2006-ua; @Bordel2010-pl; @Fleming2010-py; @Holzhutter2004-qj; @Fleming2009-um; @Kummel2006-qn; @Henry2006-nt; @Hoppe2007-sw; @De_Martino2012-cj; @Tepper2013-gd; @Nolan2006-eg; @Nagrath2007-bn; @Boghigian2010-vz]. Two of these methods, *thermodynamic FBA* [@Henry2007-xp] and *loopless FBA* [@Schellenberger2011-bq] (also known as TMFA and ll-COBRA), define a new list of variables that represent the Gibbs free energy of formation of each metabolite in the system, and add constraints on reaction directionality that correspond to these free energies. Thermodynamic FBA imposes an extra set of constraints defining a range of possible values for each formation energy, and therefore requires many more parameters (which are often difficult to obtain). On the other hand, loopless FBA requires no extra parameters but does not necessarily eliminate all thermodynamically infeasible reactions or pathways. In this paper, we suggest a compromise denoted *semi-thermodynamic FBA* (st-FBA) which imposes stronger thermodynamic constraints on the flux polytope compared to loopless FBA, without requiring a large set of thermodynamic parameters. Finally, we show that st-FBA is a useful and simple way to prevent flux in Energy Generating Cycles (EGCs) – i.e. thermodynamically infeasible cycles that generate ATP or other energy currencies. Typically, FBA requires a reconstruction of the metabolic network with $n$ metabolites and $r$ reactions, which is described by a stoichiometric matrix $\mathbf{S} \in \mathbb{R}^{n \times r}$. Some of the reactions in $\mathbf{S}$ are denoted *primary exchange* reactions. These reactions represent the exchange of material between the model and the environment, i.e. input of nutrients and export of by-products.
These reactions are neither real chemical transformations nor transport reactions, but rather “conceptual” constructs that enable the system to be in a non-trivial steady state. Typically, primary exchange reactions have only one product and no substrates, or vice versa (see magenta reactions in Figure \[fig:cycles\]). Another subset of reactions correspond to *currency exchange* fluxes. These reactions represent the exchange of currency metabolites between different sub-networks (or compartments) in the cell. In the context of this work, we consider only the exchange of energy (e.g. ATP hydrolysis) as currency exchange. For instance, this exchange can be used to model the exchange of energy between the mitochondria (where ATP is generated) and the nucleus (where ATP is used). In bacterial models, where compartments play a minor role, many ATP utilizing processes are lumped into one cytoplasmic reaction called *ATP maintenance* (see green reaction in Figure \[fig:cycles\]). Finally, all other reactions are considered *internal* reactions, and the sub-matrix of $\mathbf{S}$ corresponding to them is denoted ${\mathbf{S_{int}}}$. Steady-state assumption ----------------------- $\mathbf{S}$ relates the flux vector $v \in \mathbb{R}^{r}$ to the change in metabolite concentrations $x \in \mathbb{R}^{n}$: $$\frac{dx}{dt} = \mathbf{S} \cdot v\,\,.$$ The steady-state assumption imposes a constraint that all concentrations are constant over time, therefore $$\label{eq:st1} \mathbf{S} \cdot v = 0 \,\,.$$ Most constraint-based applications come with additional constraints on individual fluxes, i.e. $$\label{eq:st2} \forall i~~~\alpha_i \leq v_i \leq \beta_i\,\,.$$ Extreme pathways ---------------- Extreme pathways are convex basis vectors that define the polytope of solutions to the steady-state problem (Equations \[eq:st1\]-\[eq:st2\]).
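As a minimal numerical illustration of Equation \[eq:st1\], consider a hypothetical three-metabolite chain (uptake of A, A $\rightarrow$ B, B $\rightarrow$ C, secretion of C); this toy network is my own construction, not the model of Figure \[fig:cycles\]:

```python
# Check the steady-state condition S.v = 0 for a hypothetical linear pathway.
S = [
    # EX_A  v1   v2   EX_C
    [  1,  -1,   0,   0],   # A
    [  0,   1,  -1,   0],   # B
    [  0,   0,   1,  -1],   # C
]
v = [3.0, 3.0, 3.0, 3.0]          # a feasible steady-state flux distribution
dxdt = [sum(s * f for s, f in zip(row, v)) for row in S]
assert dxdt == [0.0, 0.0, 0.0]

v_bad = [3.0, 2.0, 2.0, 2.0]      # A accumulates: not a steady state
dxdt = [sum(s * f for s, f in zip(row, v_bad)) for row in S]
assert dxdt[0] == 1.0
```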
In [@Schilling2000-rn; @Beard2002-xt], three basic categories of extreme pathways were defined: Type I : – *Primary systemic pathways*, i.e. pathways where at least one primary exchange flux is active. Type II : – *Futile cycles*, i.e. pathways where none of the primary exchange fluxes are active, but at least one currency exchange flux is active. Type III : – *Internal cycles*, i.e. pathways where none of the exchange fluxes (primary and currency) are active. ![**The Three Types of Extreme Pathways.** A small toy model with 8 internal reactions (grey), 2 primary exchange reactions (magenta), and one currency exchange reaction (green). In this simple network, one can identify all three types of extreme pathways. The Type I pathway (connecting $A_{ex}$ and $E_{ex}$, cyan) is typically the type of solution that most constraint-based models are seeking. The Type II pathway ($A \rightarrow B \rightarrow C \rightarrow A$, blue) is a typical futile cycle, since it does not involve any primary exchange reactions, but does *waste* ATP. The Type III pathway ($A \rightarrow C \rightarrow D \rightarrow A$, red) is called internal since none of its reactions are exchange reactions. An internal cycle will never be thermodynamically feasible.[]{data-label="fig:cycles"}](figure1.pdf){width="3in"} Figure \[fig:cycles\] demonstrates this classification in a toy model. It is important to note that this classification depends on the definition of primary and currency exchange reactions, and therefore is not a property of the stoichiometric matrix itself. Many algorithms have been proposed for identifying and eliminating internal cycles (sometimes called “infeasible loops”) from the solution space [@Price2002-ef; @Kummel2006-qn; @Price2006-ua; @Wright2008-rh].
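The classification above depends only on which exchange fluxes carry flux, so it can be stated as a short decision procedure. A sketch (the function and the example index sets are mine):

```python
# Classify an extreme pathway by which exchange fluxes it uses,
# following the Type I/II/III definitions above.
def pathway_type(v, primary, currency, tol=1e-9):
    """v: flux vector; primary/currency: index sets of exchange reactions."""
    if any(abs(v[i]) > tol for i in primary):
        return 'I'    # primary systemic pathway
    if any(abs(v[i]) > tol for i in currency):
        return 'II'   # futile cycle
    return 'III'      # internal cycle

# Hypothetical 5-reaction network: reactions 0,1 are primary, 2 is currency.
assert pathway_type([1, -1, 0, 1, 1], primary={0, 1}, currency={2}) == 'I'
assert pathway_type([0, 0, 1, 1, 1], primary={0, 1}, currency={2}) == 'II'
assert pathway_type([0, 0, 0, 1, 1], primary={0, 1}, currency={2}) == 'III'
```

Note that, as stated in the text, the answer changes if a different set of reactions is declared primary or currency: the classification is not intrinsic to $\mathbf{S}$.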
A more recent method denoted “loopless” FBA (or ll-FBA) [@Schellenberger2011-bq], adds extra binary variables and constraints to the standard FBA framework, which effectively eliminate all type III pathways from the solution space, without removing any of the other solutions – namely type I and type II pathways (as proven mathematically in [@Noor2012-qb]). On the face of it, this seems to be exactly what one would want. Type I pathways are exactly the type of steady-state flux solutions one seeks in constraint-based models. Type II pathways might seem inefficient for the cell, as they “waste” ATP without having any metabolic function, but they are still feasible and are even known to operate in vivo. Therefore, removing type II pathways from the solution space might impinge on the predictive value of a model. Nevertheless, there are cases where type II pathways must be removed. In some cases, especially in automatically generated stoichiometric models [@Fritzemeier2017-ba], one can find type II cycles that *generate* ATP. A simple example would be a standard futile cycle running in reverse. This special case of type II pathways is denoted *Energy Generating Cycle* (EGC). Typically, reaction directionality constraints are imposed to prevent such cycles, but it is often the case that some of these EGCs have been overlooked and are still possible flux solutions in published models. Fritzemeier et al. [@Fritzemeier2017-ba] found that this is the case in most network reconstructions in ModelSEED (<http://modelseed.org/>) and MetaNetX (<http://www.metanetx.org/>). Some might ask: why aren’t EGCs eliminated by ll-FBA, as they too are thermodynamically infeasible cycles? To answer this, it is important to understand that type III cycles and EGCs are infeasible in two different ways. Internal type III cycles stand in violation of the first law of thermodynamics, i.e. the conservation of energy.
A type III cycle is equivalent to a perpetual motion machine of the first kind, or a river flowing in a complete circle[^1]. EGCs are different, as they do not form a complete chemical cycle but are rather coupled to an ATP forming reaction. One could imagine a world where ADP + P$_i$ $\rightleftharpoons$ ATP + H$_2$O were a favorable reaction that could drive the other part of the cycle[^2]. Therefore, EGCs do not violate the first law of thermodynamics, and are only infeasible in light of what we know about ADP and ATP in physiologically relevant conditions. In other words, EGCs violate the second law of thermodynamics, i.e. they would altogether decrease the entropy of the universe. It is important to note here that type III cycles are often called Thermodynamically Infeasible Cycles (TICs) [@DeMartino2013; @Desouki2015-lh], but by now it should be clear that this can be confusing. In the specific context of FBA, EGCs are much more problematic than internal cycles, as their existence can increase the maximal yield of the metabolic network. A typical scenario would be an ATP-coupled cycle that effectively creates ATP from ADP and orthophosphate while all other intermediate compounds are mass balanced. This is equivalent to making ATP without any metabolic cost, which could effectively satisfy the ATP requirement of the biomass function and allow more resources to be diverted to biosynthesis. As we pointed out earlier, in well-curated models such as the genome-scale *E. coli* model [@Carrera2014-ys], EGCs have been eliminated by manually constraining the directionality of many reactions (specifically, ATP coupled reactions). Although this is an effective way of removing EGCs, it has two major disadvantages: (i) it imposes hard constraints on reactions that might otherwise be reversible, and (ii) it is labor intensive and thus not scalable.
Recently, an automated method based on <span style="font-variant:small-caps;">GlobalFit</span> was shown to successfully eliminate almost all EGCs by removing a small number of reactions from the network [@Fritzemeier2017-ba]. This method makes it much easier to scale up to virtually all available metabolic network reconstructions, but does not deal with the first problem of over-constraining the flux solution set. Example of an Energy Generating Cycle in iJO1366 ------------------------------------------------ One of the well-known examples for an EGC appears in the latest genome-scale reconstruction of *E. coli* metabolism, denoted iJO1366 [@Orth2011-qi]. Orth et al. published the model together with a warning that “hydrogen peroxide producing and consuming reactions carry flux in unrealistic energy generating loops” and therefore these reactions are constrained by default to zero (see table \[table:egc\_example\] and figure \[fig:egc\_example\]). Ideally, we would like to keep these reactions as they might be useful (or even essential) for the cell in some environments.

![image](figure2.pdf){width="6in"}

\[table:egc\_example\]

  Reaction   Formula
  ---------- -----------------------------------------------------------
  MDH        mal\_\_L\_c + nad\_c = h\_c + nadh\_c + oaa\_c
  MOX        h2o2\_c + oaa\_c = mal\_\_L\_c + o2\_c
  Htex       h\_e = h\_p
  EX\_h\_e   = h\_e
  SPODM      2.0 h\_c + 2.0 o2s\_c = h2o2\_c + o2\_c
  NADH17pp   4.0 h\_c + mqn8\_c + nadh\_c = mql8\_c + nad\_c + 3.0 h\_p
  QMO3       mql8\_c + 2.0 o2\_c = 2.0 h\_c + mqn8\_c + 2.0 o2s\_c
  ATPS4rpp   adp\_c + pi\_c + 4.0 h\_p = atp\_c + 3.0 h\_c + h2o\_c
  Total      adp\_c + pi\_c = atp\_c + h2o\_c

Malate oxidase (MOX) reaction is used in the direction of oxaloacetate reduction.
As was suggested previously [@Fritzemeier2017-ba], constraining the reaction to be irreversible only in the direction of malate oxidation (after all, its $\Delta_r G'^\circ$ is approximately $-100$ kJ/mol) would solve this problem and eliminate the EGC from the solution space. Nevertheless, many EGCs cannot be eliminated by constraining the direction of one thermodynamically irreversible reaction. Before giving an example for such a case, we must first explain the notion of *distributed thermodynamic bottlenecks* [@Mavrovouniotis1996-dq; @Mavrovouniotis1993-zq]. Distributed Thermodynamic Bottlenecks ------------------------------------- The second law of thermodynamics, as applied to enzyme-catalyzed reactions, states that a reaction is feasible only if the $\Delta_r G'$ of the chemical transformation is negative. Typically, we use the formula $$\begin{aligned} \Delta_r G' = \Delta_r G'^\circ + RT \cdot \sum_i \nu_i \ln(c_i)\,\,,\end{aligned}$$ where $c_i$ is the concentration of reactant $i$ and $\nu_i$ is its stoichiometric coefficient (negative for substrates and positive for products) and $\Delta_r G'^\circ$ is the change in standard Gibbs free energy of the reaction. In matrix notation, for all reactions in the system: $$\begin{aligned} \mathbf{\Delta_r G'} = \mathbf{\Delta_r G'^\circ} + RT \cdot \mathbf{S}^\top \ln(\mathbf{c})\,\,.\end{aligned}$$ In a series of papers from the 1990s, Michael Mavrovouniotis developed methods for estimating these standard Gibbs energy changes [@Mavrovouniotis1990-wv; @Mavrovouniotis1991-kf], and for quantifying what he defined as thermodynamic bottlenecks [@Mavrovouniotis1993-zq; @Mavrovouniotis1996-dq]. Mavrovouniotis noticed that the classic approach to reversibility, which is based on rather arbitrary thresholds imposed on every single reaction’s $\Delta_r G'$, is insufficient.
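Plugging numbers into the formula above shows how concentrations shift feasibility. A sketch with purely illustrative (not measured) values, taking $T = 298.15$ K:

```python
# Evaluate Delta_r G' for one reaction from the formula above.
from math import log

R = 8.31e-3   # gas constant, kJ/mol/K
T = 298.15    # temperature, K (an assumed standard value)

def reaction_dg_prime(dg0_prime, stoich, conc):
    """dg0_prime in kJ/mol; stoich maps metabolite -> nu_i; conc in M."""
    return dg0_prime + R * T * sum(nu * log(conc[m]) for m, nu in stoich.items())

# Hypothetical reaction A + B -> C with dG'0 = +5 kJ/mol: infeasible at
# standard (1 M) concentrations, feasible when the product is kept low.
stoich = {'A': -1, 'B': -1, 'C': 1}
assert reaction_dg_prime(5.0, stoich, {'A': 1.0, 'B': 1.0, 'C': 1.0}) > 0
assert reaction_dg_prime(5.0, stoich, {'A': 1e-2, 'B': 1e-2, 'C': 1e-5}) < 0
```

Because the same concentration vector $\ln(\mathbf{c})$ enters every reaction's $\Delta_r G'$, these terms cannot be adjusted independently, which is exactly what gives rise to the distributed bottlenecks discussed next.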
Moreover, since the vector of metabolite concentrations – $\ln(\mathbf{c})$ – is common to all reactions, the constraints on $\mathbf{\Delta_r G'}$ arising from the second law of thermodynamics can be coupled in a way that does not allow sets of reactions to be feasible together, even though each single reaction is feasible individually. Mavrovouniotis denoted such cases as *distributed bottlenecks*. In the context of this work, we claim that distributed bottlenecks can also form EGCs. When such a case occurs, it is especially difficult to eliminate these EGCs from the set of feasible fluxes, as no single directionality flux constraint can be justified thermodynamically. Only the *combined* activity of all the EGC reactions is thermodynamically infeasible. Moreover, these distributed EGCs are not as rare as one might think. ### Example of Distributed Energy Generating Cycle in the Central Metabolism of E. coli Consider the following enzyme-catalyzed reactions: - Pyruvate kinase (PYK):\ ADP + PEP $\rightleftharpoons$ ATP + pyruvate - PEP synthase (PPS):\ ATP + pyruvate + H$_2$O $\rightleftharpoons$ AMP + PEP + P$_i$ - Pyruvate-phosphate dikinase (PPDK):\ ATP + pyruvate + P$_i$ $\rightleftharpoons$ AMP + PEP + PP$_i$ - Adenylate kinase (ADK1):\ 2 ADP $\rightleftharpoons$ ATP + AMP In most if not all metabolic models that include PYK and PPS (or PPDK), these reactions are marked as irreversible. However, this annotation is not based on thermodynamic reversibility constraints, but rather on higher-level knowledge about the system. Pyruvate kinase (PYK, EC 2.7.1.40) is used exclusively in glycolysis and its activity is inhibited when gluconeogenesis is required for growth. However, the reaction itself is reversible, as has been shown in vitro [@Lardy1945-ze]. In fact, the equilibrium constant has been measured to be as high as $1.5 \times 10^{-3}$ in some conditions [@Krimsky1959-mt].
The equilibrium constant of PEP synthase (PPS, EC 2.7.9.2) has not been directly measured in vitro, but a similar reaction catalyzed by pyruvate-phosphate dikinase (PPDK, EC 2.7.9.1) was found to be reversible, with an equilibrium constant of $\sim10^{-3}$ at neutral pH, and as high as 0.5 at pH 8.39 [@Reeves1968-mq]. Combining the reaction of PPDK with the pyrophosphatase reaction PP$_i$ $\rightleftharpoons$ 2 P$_i$ yields the PPS reaction [@De_Meis1982-pe]. The equilibrium constant of pyrophosphatase has been measured directly numerous times, and lies between 100 and 1000, depending on pH. Therefore, the equilibrium constant of the overall combined reaction (PEP synthase) is the product of these two values and is close to 1. According to estimates by eQuilibrator[^3], K'$_{\rm eq}$ $\approx$ 1.3 when the pH is set to 7.4 and the ionic strength is 0.1 M [@Flamholz2011]. Overall, we see that all three reactions (ADK1, PYK, and PPS) are individually reversible. Nevertheless, combining all three reactions can create an energy generating cycle, as shown in Figure \[fig:distributed\_egc\]. This is one of the simplest examples of a distributed EGC, and it appears in the most ubiquitous metabolic pathway of all, glycolysis. One might ask, then, why this EGC does not cause problems for the iJO1366 model of *E. coli*. The answer is simple: PYK and PPS are both annotated as irreversible reactions in this model (and in virtually all similar models). Admittedly, the regulatory network in *E. coli* is hard-wired to prevent backward flux through PYK when PPS is active and vice versa, so constraining these fluxes in the model might not be very harmful. However, one can imagine scenarios where the reversibility of PPS and PYK might be important to consider. For example, in metabolic engineering projects, the possibility of evolving bypasses to certain pathways could play a major role.
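The stoichiometric bookkeeping behind this cycle can be verified directly. The sketch below (metabolite names are informal shorthand, not identifiers from iJO1366) sums the three reactions with the directions that form the EGC – PYK and PPS running backward, ADK1 running forward:

```python
from collections import Counter

# Stoichiometries as written above (substrates negative, products positive).
PYK  = {"ADP": -1, "PEP": -1, "ATP": 1, "pyruvate": 1}
PPS  = {"ATP": -1, "pyruvate": -1, "H2O": -1, "AMP": 1, "PEP": 1, "Pi": 1}
ADK1 = {"ADP": -2, "ATP": 1, "AMP": 1}

def combine(*weighted_reactions):
    """Sum reactions scaled by signed coefficients; drop canceled metabolites."""
    total = Counter()
    for coeff, rxn in weighted_reactions:
        for met, nu in rxn.items():
            total[met] += coeff * nu
    return {m: nu for m, nu in total.items() if nu != 0}

# PYK backward + PPS backward + ADK1 forward: PEP, pyruvate and AMP cancel,
# leaving net ATP regeneration from ADP and phosphate.
net = combine((-1, PYK), (-1, PPS), (1, ADK1))
print(net)  # {'ADP': -1, 'ATP': 1, 'H2O': 1, 'Pi': -1}
```

The net reaction is ADP + P$_i$ $\rightarrow$ ATP + H$_2$O, i.e. free ATP synthesis, even though each of the three steps is individually reversible.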
In the following sections, we describe two established methods for dealing with thermodynamic constraints and futile cycles in FBA models, discuss their strengths and weaknesses, and suggest a compromise that specifically targets distributed EGCs.

![**An Energy Generating Cycle in *E. coli* consisting of a distributed thermodynamic bottleneck.** One can see that the overall reaction of the three combined enzymes is ADP + P$_i$ $\rightleftharpoons$ ATP + H$_2$O[]{data-label="fig:distributed_egc"}](figure3.pdf){width="3in"}

Thermodynamic Flux Balance Analysis (TFBA)
------------------------------------------

Thermodynamic FBA (also known as Thermodynamics-based Metabolic Flux Analysis [@Henry2007-xp]) was designed to deal with thermodynamically infeasible flux solutions. However, its widespread adoption has been hampered by the requirement for thermodynamic parameters. The set of equations that describes TFBA is: $$\begin{aligned} \textsf{\textbf{TFBA}} && \nonumber\\ \mathbf{v^*} &=& \mathrm{arg\max_v} {~\mathbf{c}^\top\mathbf{v}}\nonumber\\ \textsf{such that:} && \nonumber\\ \mathbf{S} \cdot \mathbf{v} &=& \mathbf{0} \label{eq:tfba1} \\ 0 &\leq& M \mathbf{y} - \mathbf{v} ~\leq~ M \label{eq:tfba2} \\ \varepsilon &\leq& M \mathbf{y} + \mathbf{\Delta_r G'} ~\leq~ M - \varepsilon \label{eq:tfba3} \\ \mathbf{v}^L &\leq& \mathbf{v} ~\leq~ \mathbf{v}^U \label{eq:tfba4}\\ \mathbf{y} &\in& \{0, 1\}^r \label{eq:tfba5}\\ \mathbf{\Delta_r G'} &=& \mathbf{\Delta_r G'^\circ} + RT \cdot {\mathbf{S_{int}}}^\top \mathbf{x} \label{eq:tfba6}\\ \ln(\mathbf{b}^L) &\leq& \mathbf{x} ~\leq~ \ln(\mathbf{b}^U) \label{eq:tfba7}\end{aligned}$$ where $\mathbf{c} \in \mathcal{R}^r$ is the objective function, and the constants are the stoichiometric matrix of internal reactions ${\mathbf{S_{int}}}\in \mathcal{R}^{m \times r}$, the vector of standard Gibbs energies of reaction $\mathbf{\Delta_r G'^\circ}$ (in units of kJ/mol), the gas constant $R$ = 8.31 J/mol/K, and the temperature $T$ =
300 K. It is important to verify that $\mathbf{\Delta_r G'^\circ}$ is orthogonal to the null-space of $\mathbf{S_{int}}$ (or, equivalently, lies in the image of $\mathbf{S_{int}}^\top$). Otherwise, this would be a violation of the first law of thermodynamics [@Noor2012-mp]. The variables are the flux vector $\mathbf{v} \in \mathcal{R}^{r}$, the vector of binary reaction indicators $\mathbf{y} \in \{0,1\}^{r}$, and the vector of log-scale metabolite concentrations $\mathbf{x} \in \mathcal{R}^{m}$. $M$ is set to a positive number which is larger than any possible flux value and larger than any possible $\Delta_r G'$. $\varepsilon$ is a very small number (e.g. $10^{-9}$), usually added to LP constraints in order to achieve a strict inequality (for numerical stability reasons). Equation \[eq:tfba2\] ensures that $\forall j~v_j > 0 \rightarrow y_j = 1$ and $v_j < 0 \rightarrow y_j = 0$, and equation \[eq:tfba3\] ensures that $y_j = 1 \rightarrow \Delta_r G'_j < 0$ and $y_j = 0 \rightarrow \Delta_r G'_j > 0$. Together, these constraints force any flux solution to obey the second law of thermodynamics [@Hoppe2007-sw; @Machado2017-gh], which can be summarized as $\forall j:v_j = 0~\vee~\text{sign}(v_j) = -\text{sign}(\Delta_r G'_j)$. This notation differs from that of the original paper [@Henry2007-xp], where one had to assume all fluxes are positive (which usually requires making the model irreversible by decomposing each reversible reaction into two irreversible ones). Using the reversible notation for TFBA can reduce the number of boolean variables considerably and thus decrease the effective running time of the MILP solver. Finally, Equations \[eq:tfba6\]-\[eq:tfba7\] together set the bounds on the Gibbs energies of reaction, assuming the concentration of each metabolite $i$ is between $b^L_i$ and $b^U_i$. Note that $\mathbf{\Delta_r G'}$ is not really a variable in the optimization problem (as it is an affine transformation of $\mathbf{x}$), and is explicitly defined only for the sake of clarity.
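The big-M logic of equations \[eq:tfba2\]-\[eq:tfba3\] can be checked on a single reaction with a few lines of Python (a sketch of the constraint logic only, not of the full MILP; the function name is ours):

```python
def tfba_feasible(v, y, dG, M=1000.0, eps=1e-9):
    """Check the big-M constraints (eqs. tfba2-tfba3) for one reaction.

    y = 1 forces 0 <= v <= M and dG < 0; y = 0 forces -M <= v <= 0 and dG > 0.
    """
    return (0 <= M * y - v <= M) and (eps <= M * y + dG <= M - eps)

# The four sign combinations: flux and Gibbs energy must have opposite signs.
assert tfba_feasible(v=1.0, y=1, dG=-5.0)       # forward flux, dG < 0: allowed
assert not tfba_feasible(v=1.0, y=1, dG=5.0)    # forward flux, dG > 0: cut off
assert tfba_feasible(v=-1.0, y=0, dG=5.0)       # backward flux, dG > 0: allowed
assert not tfba_feasible(v=-1.0, y=0, dG=-5.0)  # backward flux, dG < 0: cut off
```

In the actual MILP the solver chooses $y_j$, but any feasible assignment must satisfy exactly these inequalities for every reaction.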
While the stoichiometric matrix ($\mathbf{S}$) and the general flux constraints are exactly the same as in the standard FBA formulation, $\mathbf{\Delta_r G'^\circ}$ comes as an additional requirement for running TFBA. Unfortunately, we still lack precise measurements for many of the compounds comprising biochemical networks, and computational methods that estimate $\mathbf{\Delta_r G'^\circ}$ [@Jankowski2008-hd; @Noor2012-mp; @Noor2013-an; @Jinich2014-nv] are far from perfect and sometimes introduce significant errors. The fact that TFBA also adds one boolean variable for each reaction in the model does not help either: the resulting MILP is much harder to solve, requires a good MILP solver, and takes much longer than standard FBA. Due to the effort involved, and the unclear benefit of the method, TFBA has not gained a wide audience of users so far. A more recent method called tEFMA [@Gerstl2015a; @Gerstl2015] also aims to eliminate thermodynamically infeasible solutions, but in the context of Elementary Flux Mode Analysis (EFMA). Here, we focus only on FBA extensions, and therefore leave tEFMA and similar methods out of this comparison.

Loopless Flux Balance Analysis (ll-FBA)
---------------------------------------

In light of these caveats, it might be easier to understand why ll-FBA was introduced four years *after* TFBA [@Schellenberger2011-bq]. Essentially, the loopless algorithm uses exactly the same MILP design as TFBA, while forgoing the actual thermodynamic values. This way, thermodynamically infeasible internal (type III) cycles are eliminated, while all other pathways are kept [@Noor2012-qb].
The set of equations describing ll-FBA is: $$\begin{aligned} \textsf{\textbf{ll-FBA}} && \nonumber\\ \mathbf{v^*} &=& \mathrm{arg\max_v} {~\mathbf{c}^\top\mathbf{v}}\nonumber\\ \textsf{such that:} && \nonumber\\ \mathbf{S} \cdot \mathbf{v} &=& \mathbf{0} \label{eq:llfba1} \\ 0 &\leq& M \mathbf{y} - \mathbf{v} ~\leq~ M \label{eq:llfba2} \\ \varepsilon &\leq& M \mathbf{y} + \mathbf{\Delta_r G'} ~\leq~ M - \varepsilon \label{eq:llfba3} \\ \mathbf{v}^L &\leq& \mathbf{v} ~\leq~ \mathbf{v}^U \\ \mathbf{y} &\in& \{0, 1\}^r \label{eq:llfba4} \\ \mathbf{\Delta_r G'} &\in& (\ker{(\mathbf{S_{int}})})^\perp \label{eq:llfba5}\end{aligned}$$ where all variables and constants are the same as in equations \[eq:tfba1\]-\[eq:tfba7\]. One change made here, relative to the original formulation presented by Schellenberger et al. [@Schellenberger2011-bq], is rewriting equations \[eq:llfba1\]-\[eq:llfba2\] to facilitate the comparison to TFBA. In addition, we use $\varepsilon$ rather than $1$ as the margin in equation \[eq:llfba3\]. Comparing this system of equations to TFBA (Equations \[eq:tfba1\]-\[eq:tfba7\]), one can easily see that they are identical except for the last two constraints (Equations \[eq:tfba6\]-\[eq:tfba7\]). Furthermore, Equation \[eq:llfba5\] can actually be rewritten as $\mathbf{\Delta_r G'} = \mathbf{S_{int}}^\top \cdot \mathbf{\Delta_f G'}$, where $\mathbf{\Delta_f G'} \in \mathcal{R}^{m}$ is an unconstrained vector (this is the general form of a vector in the image of $\mathbf{S_{int}}^\top$, which by the fundamental theorem of linear algebra is orthogonal to $\ker{(\mathbf{S_{int}})}$). Since $\mathbf{\Delta_f G'} = \mathbf{\Delta_f G'^\circ} + RT \cdot \mathbf{x}$, having it unconstrained is equivalent to having $\mathbf{x}$ completely unconstrained. Therefore, ll-FBA is simply TFBA with infinite concentration bounds.
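The equivalence between Equation \[eq:llfba5\] and the $\mathbf{S_{int}}^\top \cdot \mathbf{\Delta_f G'}$ parametrization can be illustrated on a toy three-reaction cycle (a sketch using SciPy; the example network is ours, not from any model):

```python
import numpy as np
from scipy.linalg import null_space

# A three-reaction internal cycle: r1: A->B, r2: B->C, r3: C->A.
S_int = np.array([[-1.,  0.,  1.],
                  [ 1., -1.,  0.],
                  [ 0.,  1., -1.]])

N = null_space(S_int)  # one basis vector, proportional to (1, 1, 1)

# Any dG of the form S_int^T g is automatically orthogonal to the cycle:
g = np.array([3.0, -1.0, 2.0])  # arbitrary "potentials", unconstrained in ll-FBA
dG = S_int.T @ g                # per-reaction Gibbs energy changes
assert np.allclose(N.T @ dG, 0.0)

# Consequence: along the cycle the dG_j values sum to zero, so they cannot all
# be negative, and the second law can never hold for a nonzero loop flux.
print(dG, dG.sum())
```

This is precisely why the loopless constraints eliminate internal cycles without needing any actual thermodynamic data.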
#### Limited adoption of ll-FBA

Although ll-FBA requires no extra parameters compared to FBA, and its implementation is streamlined as part of the COBRA toolbox, it has yet to become mainstream. A plausible explanation would be that there is an alternative method for eliminating internal cycles in FBA solutions, which does not require an MILP and can be easily implemented: after applying additional constraints that define the relevant solution space (e.g. realizing the maximal biomass yield, or keeping all exchange fluxes constant), find a solution with the minimum sum of absolute (or squared) fluxes [@Holzhutter2004-qj]. Implementations of this principle with slight variations have been presented under different names, such as parsimonious FBA [@Lewis2010-rx; @Schuetz2012-sv] or CycleFreeFlux [@Desouki2015-lh]. Nevertheless, such methods are not suitable for some applications of ll-FBA, such as loopless Flux Variability Analysis (ll-FVA).

#### Related methods and improvements

Already in 2002, Beard et al. [@Beard2002-xt] introduced Energy Balance Analysis (EBA), a method for enforcing the laws of thermodynamics in FBA simulations. The additional constraints that EBA enforces are essentially identical to those of ll-FBA, except that nonlinear optimization is applied instead of the MILP formulation introduced in equations \[eq:llfba2\]-\[eq:llfba3\]. Unfortunately, nonlinear optimization software tends to be much less efficient than state-of-the-art MILP solvers, and therefore the use of EBA was limited to relatively small models. Nevertheless, the methodology behind EBA was developed further and became a way to learn about chemical potentials from pure stoichiometric data [@Beard2004; @Warren2007-wm; @Reznik2013]. Network-embedded thermodynamic (NET) analysis [@Kummel2006-px; @Kummel2006-qn; @Zamboni2008] is derived from the same basic constraints (we discuss it in more detail in the following section).
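The flux-minimization alternative mentioned above can be sketched with a stock LP solver. The example below (a toy network of our own, not a genome-scale model) uses the standard trick of splitting each flux into positive and negative parts, then minimizes the total absolute flux, which drives the internal loop to zero:

```python
import numpy as np
from scipy.optimize import linprog

# Metabolites A, B, C; reactions: r1 A->B, r2 B->C, r3 C->A (an internal loop),
# plus exchange fluxes EX_A (->A) and EX_B (B->), both pinned to 1.
S = np.array([[-1.,  0.,  1., 1.,  0.],
              [ 1., -1.,  0., 0., -1.],
              [ 0.,  1., -1., 0.,  0.]])
n_rxn = S.shape[1]

# Split v = p - n with p, n >= 0, so that sum(p + n) = sum(|v|).
A_eq = np.hstack([S, -S])
b_eq = np.zeros(3)
# Pin the exchange fluxes to 1 (their negative parts are fixed at 0).
bounds = [(0, None)] * 3 + [(1, 1), (1, 1)] + [(0, None)] * 3 + [(0, 0), (0, 0)]
res = linprog(c=np.ones(2 * n_rxn), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
v = res.x[:n_rxn] - res.x[n_rxn:]
print(np.round(v, 6))  # v is approximately [1, 0, 0, 1, 1]: the r2/r3 loop
                       # carries no flux in the L1-minimal solution
```

Any solution of the form $(1+t, t, t, 1, 1)$ is stoichiometrically balanced here, but the minimum-$|v|$ criterion selects $t = 0$, removing the loop without any boolean variables.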
More recently, two different approaches for reducing the number of boolean variables in ll-FBA have been published: Fast sparse null-space pursuit (fast-SNP) [@Saa2016] and Localized Loopless Constraints (LLCs) [@Chan2017]. These methods do not change the ll-FBA problem itself, but rather eliminate constraints and boolean variables that are not necessary for solving it. As a result, ll-FBA can be solved for much larger networks, and without the compromises that come with methods based on flux minimization. It is still unclear whether these approaches can be extended to TFBA as well. That being said, the aim of this work is not to discuss issues of complexity and runtime, but rather the theoretical implications of thermodynamic and loopless constraints on the set of feasible flux solutions.

Network-Embedded Thermodynamic (NET) analysis
---------------------------------------------

NET analysis [@Kummel2006-px] is a closely related method which applies the same directionality constraints as TFBA, but does not aim to predict flux distributions. Instead, it aims to explore the ranges of possible $\Delta_r G'$ values, and can check for thermodynamic inconsistencies in quantitative metabolomic data sets. The NET analysis optimization problem (adapted from [@Kummel2006-px] to fit the notation in this manuscript) is: $$\begin{aligned} \textsf{\textbf{NET}} && \nonumber\\ \forall k && \mathrm{\min/\max} ~~\Delta_r G'_k\nonumber\\ \textsf{such that:} && \nonumber\\ \forall v_i > 0 && \Delta_r G'_i < 0 \label{eq:net1}\\ \forall v_i < 0 && \Delta_r G'_i > 0 \label{eq:net2}\\ \mathbf{\Delta_r G'} &=& \mathbf{\Delta_r G'^\circ} + RT \cdot {\mathbf{S_{int}}}^\top \mathbf{x} \\ \ln(\mathbf{b}^L) &\leq& \mathbf{x} ~\leq~ \ln(\mathbf{b}^U)\,.\end{aligned}$$ Note that here the second law of thermodynamics is laid out explicitly (equations \[eq:net1\]-\[eq:net2\]), and not given as a pair of constraints involving boolean variables (as in TFBA and ll-FBA).
This is facilitated by the fact that all intracellular flux directions are required as input for NET analysis, and therefore these constraints are hard-coded in the linear optimization problem. The only free variables are the log-concentrations in the vector $\mathbf{x}$ (as before, $\mathbf{\Delta_r G'}$ is just an affine transformation of $\mathbf{x}$). Therefore, this formulation of NET analysis is a linear optimization problem, and does not require an MILP solver. However, in a more general case (such as the one used in [@Kummel2006-px]), constraints can also be imposed on the total concentration of a subset of metabolites (for example, the sum of all pentose-phosphates). This requirement reflects the reality of many mass-spectrometry datasets, where only the total concentration of several mass-isomers can be measured reliably. The downside is that these constraints are non-linear (since our variables are log-concentrations, the constraints apply to sums of their exponents). NET analysis is akin to Flux Variability Analysis (FVA), except that the fluxes are fixed and the studied variability is the possible range of Gibbs free energies. A GUI-based Matlab program called anNET [@Zamboni2008] facilitates the application of NET analysis in labs working on quantitative metabolomics.

Results
=======

A compromise between ll-FBA and TFBA
------------------------------------

Is there a version of thermodynamics-based FBA that does not require a large set of extra (unknown) parameters, and still has a clear benefit over the standard tools that ignore thermodynamics? Here, we propose such a compromise, by relaxing the majority of second-law constraints (i.e. Equation \[eq:tfba3\]) and keeping only a few important ones.
We will show that this method, which we denote st-FBA (semi-thermodynamic Flux Balance Analysis), is sufficient to eliminate energy generating cycles, while requiring only a relatively small set of heuristic assumptions and thermodynamic constants.

\[table:potentials\]

  **metabolite**            $\mathbf{B_{low}}$           $\mathbf{B_{high}}$   $\Delta_f G'^\circ$ \[kJ/mol\]
  ------------------------- ---------------------------- --------------------- --------------------------------
  ATP                       $9.63$ mM                    $9.63$ mM             $-2296$
  ADP                       $0.56$ mM                    $0.56$ mM             $-1424$
  AMP                       $0.28$ mM                    $0.28$ mM             $-549$
  orthophosphate            $1$ mM                       $10$ mM               $-1056$
  pyrophosphate             $1$ $\upmu$M                 $1$ mM                $-1939$
  H$^+$ (cytoplasm)         $10^{-7.6}$ M $^\dagger$     $10^{-7.6}$ M         $0$
  H$^+$ (extracellular)     $10^{-7.0}$ M                $10^{-7.0}$ M         $0$
  H$_2$O                    1                            1                     $-157.6$

  $\dagger$ Corresponding to pH 7.6, as measured by [@Wilks2007-lh]

First, one must define a set of energy currency metabolites. Although the definition is somewhat heuristic, most biologists would agree that the following are energy equivalents: ATP, pyrophosphate, and a gradient of protons across the membrane. Other specific energy-carrying currency metabolites can be added to the list if desired. Next, we must bound the chemical potential ($\mathbf{\Delta_f G'}$) of these currency metabolites and all their associated degraded forms (see Table \[table:potentials\]). Note that we chose the chemical potential at 1 mM concentration in an aqueous solution, which is a typical concentration for co-factors in *E. coli* [@Bennett2009-rm]. Although it is possible to use the exact measured concentration of each of these metabolites, the effect on the st-FBA results would be at most very minor. Finally, we fix the values in the $\mathbf{\Delta_f G'}$ vector only for the metabolites in the table, while the rest of the values remain free.
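As a sanity check that such bounds are enough to block the distributed EGC described earlier, one can compute the extreme attainable $\Delta_r G'$ of its net reaction (ADP + P$_i$ $\rightarrow$ ATP + H$_2$O) from the values in Table \[table:potentials\] (a sketch; $R$ and $T$ as defined above):

```python
import numpy as np

R, T = 8.31e-3, 300.0  # gas constant in kJ/mol/K, temperature in K
RT = R * T

# Standard formation energies (kJ/mol) and concentration bounds (M) taken from
# the table above; water activity is fixed at 1.
dfG0 = {"ATP": -2296.0, "ADP": -1424.0, "Pi": -1056.0, "H2O": -157.6}
conc = {"ATP": (9.63e-3, 9.63e-3), "ADP": (5.6e-4, 5.6e-4),
        "Pi": (1e-3, 1e-2), "H2O": (1.0, 1.0)}

lo = {m: dfG0[m] + RT * np.log(c[0]) for m, c in conc.items()}
hi = {m: dfG0[m] + RT * np.log(c[1]) for m, c in conc.items()}

# Net reaction of the distributed EGC: ADP + Pi -> ATP + H2O.  The smallest
# attainable dG' combines the lowest product and highest substrate potentials.
dG_min = (lo["ATP"] + lo["H2O"]) - (hi["ADP"] + hi["Pi"])
print(round(dG_min, 1))  # ~45 kJ/mol: positive for all allowed concentrations,
                         # so the EGC direction is thermodynamically infeasible
```

Since even the most favorable concentrations leave the net reaction with a positive $\Delta_r G'$, bounding only these few currency potentials already rules out the PYK/PPS/ADK1 cycle.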
$$\begin{aligned} \textsf{\textbf{st-FBA}}\nonumber\\ \mathbf{v^*} &=& \mathrm{arg\max_v} {~\mathbf{c}^\top\mathbf{v}}\nonumber\\ \textsf{such that:} && \nonumber\\ \mathbf{S} \cdot \mathbf{v} &=& \mathbf{0} \label{eq:stfba1}\\ 0 &\leq& M \mathbf{y} - \mathbf{v} ~\leq~ M \label{eq:stfba2}\\ \varepsilon &\leq& M \mathbf{y} + \mathbf{\Delta_r G'} ~\leq~ M - \varepsilon\label{eq:stfba3}\\ \mathbf{v}^L &\leq& \mathbf{v} ~\leq~ \mathbf{v}^U \label{eq:stfba4}\\ \mathbf{y} &\in& \{0, 1\}^r \label{eq:stfba5}\\ \mathbf{\Delta_r G'} &=& {\mathbf{S_{int}}}^\top \cdot \mathbf{\Delta_f G'}\\ \mathbf{\Delta_f G'} &\in& \mathcal{R}^{m}\\ \forall i~\textsf{in Table \ref{table:potentials}}\nonumber\\ \Delta_f G'_i &\geq& \Delta_f G'^\circ_i + RT \cdot \ln(b^L_i) \\ \Delta_f G'_i &\leq& \Delta_f G'^\circ_i + RT \cdot \ln(b^U_i) \end{aligned}$$ Thus, st-FBA is very similar to TFBA, except that only the energy currency metabolites have predefined bounds on their formation energies. In fact, since the concentrations of these metabolites tend to be tightly controlled by homeostasis, it is recommended to set them to fixed concentrations (i.e. by setting the lower and upper bounds to the same value). All other metabolites, on the other hand, have no constraints on their $\Delta_f G'$ (or, equivalently, no constraints on their concentrations) – similar to the case in ll-FBA. In other words, $\mathbf{\Delta_r G'}$ is still a vector from $(\ker{(\mathbf{S_{int}})})^\perp$, with a few extra constraints, but not as many as in TFBA.

Network-Embedded Semi-Thermodynamic analysis (NEST)
---------------------------------------------------

A relatively straightforward way to measure the effect of semi-thermodynamic constraints on the solution space is to use an approach derived from NET analysis.
$$\begin{aligned} \textsf{\textbf{NEST}} && \nonumber\\ \forall k && \mathrm{\min/\max} ~~\Delta_r G'_k\nonumber\\ \textsf{such that:} && \nonumber\\ \forall v_i > 0 && \Delta_r G'_i \leq -\varepsilon \\ \forall v_i < 0 && \Delta_r G'_i \geq \varepsilon \\ \mathbf{\Delta_r G'} &=& {\mathbf{S_{int}}}^\top \cdot \mathbf{\Delta_f G'} \\ \mathbf{\Delta_f G'} &\in& \mathcal{R}^{m}\\ \forall i~\textsf{in Table \ref{table:potentials}}\nonumber\\ \Delta_f G'_i &\geq& \Delta_f G'^\circ_i + RT \cdot \ln(b^L_i) \\ \Delta_f G'_i &\leq& \Delta_f G'^\circ_i + RT \cdot \ln(b^U_i) \end{aligned}$$ Just as in st-FBA, the thermodynamic constraints are only applied to a subset of metabolites (i.e. energy co-factors which we know with high confidence). Implementation of st-FBA ------------------------ The semi-thermodynamic Flux Balance Analysis algorithm was implemented using COBRApy [@Ebrahim2013-vw] and can be found at <https://github.com/eladnoor/stFBA>. [^1]: It would be more precise to say that type III cycles violate either the first or the second law. In essence, an internal cycle would not violate the first law if it were not generating any heat. However, in order to have a net flux in a biochemical reaction, the reaction must be out of equilibrium (according to the second law) and therefore the molecules flowing from the high energy state to the lower state would necessarily generate heat. [^2]: Another way of looking at this, is to imagine running all the reactions in reverse. If we reverse all the reactions in a type III cycle, we will still have a type III cycle. If we do the same for an EGC, we would get a futile type II cycle, which is thermodynamically feasible. [^3]: http://equilibrator.weizmann.ac.il/
--- abstract: 'WTe$_2$ has attracted a great deal of attention because it exhibits extremely large and non-saturating magnetoresistance. The underlying origin of such a giant magnetoresistance is still under debate. Utilizing laser-based angle-resolved photoemission spectroscopy with high energy and momentum resolutions, we reveal the complete electronic structure of WTe$_2$. This makes it possible to determine accurately the electron and hole concentrations and their temperature dependence. We find that, with increasing the temperature, the overall electron concentration increases while the total hole concentration decreases. It indicates that the electron-hole compensation, if it exists, can only occur in a narrow temperature range, and in most of the temperature range there is an electron-hole imbalance. Our results are not consistent with the perfect electron-hole compensation picture that is commonly considered to be the cause of the unusual magnetoresistance in WTe$_2$. We identified a flat band near the Brillouin zone center that is close to the Fermi level and exhibits a pronounced temperature dependence. Such a flat band can play an important role in dictating the transport properties of WTe$_2$. Our results provide new insight on understanding the origin of the unusual magnetoresistance in WTe$_2$.' author: - 'Chenlu Wang$^{1,5}$, Yan Zhang$^{1,5}$, Jianwei Huang$^{1,5}$, Guodong Liu$^{1,5,*}$, Aiji Liang$^{1}$, Yuxiao Zhang$^{1}$, Bing Shen$^{1,5}$, Jing Liu$^{1,5}$, Cheng Hu$^{1,5}$, Ying Ding$^{1,5}$, Defa Liu$^{1}$, Yong Hu$^{1,5}$, Shaolong He$^{1}$, Lin Zhao$^{1}$, Li Yu$^{1}$, Jin Hu$^{2}$, Jiang Wei $^{2}$, Zhiqiang Mao$^{2}$, Youguo Shi$^{1}$, Xiaowen Jia$^{3}$, Fengfeng Zhang$^{4}$, Shenjin Zhang$^{4}$, Feng Yang$^{4}$, Zhimin Wang$^{4}$, Qinjun Peng$^{4}$, Zuyan Xu$^{4}$, Chuangtian Chen$^{4}$, X. J. 
Zhou$^{1,5,6,*}$' date: 'May 23, 2017' title: 'Evidence of Electron-Hole Imbalance in WTe$_2$ from High-Resolution Angle-Resolved Photoemission Spectroscopy' --- Recently, WTe$_2$, a typical transition metal dichalcogenide (TMD), has attracted considerable attention and initiated intensive study, since it manifests many novel and intriguing physical properties, including extremely large magnetoresistance (XMR)[@1], type-II Weyl fermions[@22; @23; @24; @25; @26], pressure-induced superconductivity[@18; @19], two-dimensional topological insulator behavior in the monolayer[@32], and so on[@20; @21; @27; @28; @29; @30]. In WTe$_2$, the magnetoresistance is reported to reach at least 10$^{5}$$\%$$\sim$10$^{6}$$\%$ at low temperature and remains quadratic up to a field of 60 Tesla with no indication of saturation. However, the exact origin of this unusual magnetoresistance is still under debate[@12; @13; @14; @15; @16; @17]. On the basis of band structure calculations and two-fluid model analysis, a perfect electron-hole compensation mechanism was proposed to account for the extremely large magnetoresistance and its unsaturated behavior. WTe$_2$ is regarded as a perfect semimetal that shows a small overlap between the valence-band and conduction-band states, with an equal number of hole and electron carriers[@1; @7; @8; @9; @10; @11]. This has become the mainstream mechanism after many magneto-transport and angle-resolved photoemission (ARPES) experiments were performed, though different groups reported rather inconsistent results, even with the same kind of technique. However, so far direct experimental verification of the perfect compensation of electrons and holes in WTe$_2$ is still lacking. This requires an accurate and complete measurement of the band structure and Fermi surface of WTe$_2$.
The precise determination of the electronic structure of WTe$_2$ is challenging because its multiple Fermi pockets are tiny and located in a narrow momentum space, because of the complications of bulk bands and surface states, and because of the three-dimensional nature of the electronic structure. From the existing ARPES results it is hard to provide a conclusive answer as to whether the electron and hole carriers in WTe$_2$ are compensated or not[@7; @12; @27]. In this paper, we have carried out high-resolution ARPES measurements on WTe$_2$ at different temperatures to examine the origin of its extremely large magnetoresistance. Utilizing laser-based ARPES with high energy and momentum resolutions, we reveal the complete electronic structure of WTe$_2$. This makes it possible to determine accurately the electron and hole concentrations and their temperature dependence. We find that, with increasing temperature, the overall electron concentration increases while the total hole concentration decreases. This indicates that the electron-hole compensation, if it exists, can only occur in a narrow temperature range, and that in most of the temperature range there is an electron-hole imbalance. Our results are not consistent with the perfect electron-hole compensation picture that is commonly considered to be the cause of the unusual magnetoresistance in WTe$_2$. We identify a flat band near the Brillouin zone center that is close to the Fermi level and exhibits a pronounced temperature dependence. Such a flat band can play an important role in dictating the transport properties of WTe$_2$. Our results provide new insight into the origin of the unusual magnetoresistance in WTe$_2$. The ARPES measurements were performed using our newly developed laser-based ARPES system, equipped with a 6.994 eV vacuum-ultra-violet (VUV) laser and a time-of-flight electron energy analyzer (ARToF10K by Scienta Omicron)[@24].
The unique capabilities of our ARPES system, including simultaneous coverage of a two-dimensional momentum space and high energy and momentum resolutions, made it possible to obtain high-resolution ARPES data on WTe$_2$. Figure 1 shows the temperature dependence of the measured Fermi surface, with the original data, its second-derivative image and its extracted contour displayed in Fig. 1(a), 1(b) and 1(c), respectively. The corresponding temperature dependence of the energy bands along a few typical momentum cuts is shown in Fig. 2 and 3. These results are robust and highly reproducible when measuring different samples, or when measuring the same sample during warming-cooling cycles. From our analysis of the measured constant energy contours and band structures, combined with a comparison to the band structure calculations from our previous ARPES studies[@24], the observed Fermi surfaces of WTe$_2$ can be summarized as follows (see the left-most panel of Fig. 1(c)): (1) four hole pockets are identified, labeled $\alpha$, $\beta$ and the nearly degenerate $\gamma$ and $\gamma$'; (2) two nearly degenerate electron pockets can be resolved, $\epsilon$ and $\delta$; (3) a flat band around the $\Gamma$ point is resolved, with its band top about 5 meV below the Fermi level in our high-resolution data (also see Fig. 3); (4) a prominent V-shaped SS1 Fermi surface segment can be easily resolved. Since the observed unusual magnetoresistance in WTe$_2$ represents a bulk property, it is usually believed that the surface state contributes very little to the observed magnetotransport properties. Therefore, we are not going to dwell on this feature in the following. The electron-hole compensation mechanism was proposed to account for the extremely large magnetoresistance in WTe$_2$[@1]. Examining this picture requires a precise determination of the area of each electron pocket and hole pocket, together with their temperature dependence.
It is clear from Fig. 1 that, with increasing temperature, the hole pockets overall shrink, while the electron pockets slightly expand. To determine the area of each Fermi pocket quantitatively, we plot the Fermi pocket contours in Fig. 1(c) by tracking the locus of the Fermi pockets shown in Fig. 1(b), which is the second-derivative image with respect to momentum. Our high-quality data allow us for the first time to depict the shape of all the Fermi pockets. We can then accurately determine the area of each pocket, as shown in Fig. 4(a) and 4(b) for the electron pockets and hole pockets, respectively. It is clear that the area of the two electron pockets increases with temperature, while the area of the four hole pockets decreases. We note that a Lifshitz transition was reported in WTe$_2$ at 160 K, where the hole pockets disappear[@27]. In our present data in Fig. 1, we can still see the presence of the hole pockets at 165 K, which is not consistent with such a Lifshitz transition. This difference might be due to the different k$_z$ we measured and/or slightly different doping levels in the measured WTe$_2$ samples. Figure 2 shows the temperature dependence of the band structures of WTe$_2$ measured along three typical momentum cuts. Figure 2(a) shows the bands measured at different temperatures along cut 1, i.e., the $\Gamma$X direction (its location is shown in Fig. 2(i), marked by the red arrow), from which the corresponding bands that contribute to the bulk electron pockets and hole pockets can all be seen. To keep track of the temperature evolution of the Fermi momenta of the observed bands, we plot the momentum distribution curves (MDCs) at the Fermi level measured at different temperatures in Fig. 2(d), with the black and red arrows pointing to the k$_F$s of the hole- and electron-like bands, respectively.
It is clear that the distance between the two Fermi momenta of the hole-like bands becomes smaller with increasing temperature, while the Fermi momentum of the electron-like band slightly moves to a larger value. Figure 2(b) shows the bands measured at different temperatures along a cut that crosses only the electron pocket, perpendicular to the $\Gamma$X direction (cut 2 in Fig. 2(i)). The corresponding MDCs at the Fermi level are shown in Fig. 2(e). With increasing temperature, a slight expansion of the Fermi pocket along this momentum cut can be seen. Figure 2(c) shows the bands along a cut that crosses only the hole pocket (cut 3 in Fig. 2(i)). The corresponding MDCs at the Fermi level are shown in Fig. 2(f). With increasing temperature, an obvious shrinking of the Fermi pocket along this momentum cut can be seen. The temperature evolution of the band structures shown in Fig. 2 is consistent with the temperature-dependent Fermi surface evolution in Fig. 1. They all indicate that, with increasing temperature, the hole pockets of WTe$_2$ exhibit an obvious shrinking while the electron pockets show a slight expansion. In the two-band model, the transport properties of a material are dictated not only by the concentration of the charge carriers, but also by their mobility. In ARPES measurements, the measured scattering rate is closely related to the mobility of the carriers: the two are inversely proportional to each other. To derive the temperature dependence of the scattering rate for the Fermi pockets in WTe$_2$, we plot the photoemission spectra (energy distribution curves, EDCs) at different temperatures in Fig. 2(g) for the hole pockets and in Fig. 2(h) for the electron pockets, at two representative momentum positions marked as spots a and b in Fig. 2(i), respectively. The corresponding symmetrized EDCs are shown in the right panels of Fig. 2(g) and 2(h). The extracted EDC widths, which are related to the scattering rate, are plotted in Fig.
4(d) as a function of temperature. It is found that the hole pockets and electron pockets show comparable scattering rates, and both increase with increasing temperature. Figure 3 focuses on the temperature evolution of the bands measured near the Brillouin zone center, the $\Gamma$ point. Here the bands are measured along two perpendicular momentum cuts, in the $\Gamma$X (cut 2) and $\Gamma$Y (cut 1) directions. In Fig. 3(b), measured along the $\Gamma$Y (cut 1) direction, the band structure at $\Gamma$ consists of two hole-like bands: one is the narrow band with its top at $\sim$5 meV below the Fermi level at 20 K[@24], while the other is a broad band with its top at $\sim$55 meV below the Fermi level that is composed of four nearly degenerate bands, as seen in the band calculations[@24]. These two bands are rather anisotropic in momentum space. When measured along the $\Gamma$X direction, they both become quite flat, as seen in Fig. 3(c). Figure 3(d) shows the EDCs at the $\Gamma$ point, where the two peaks in the EDCs correspond to the two observed bands. With increasing temperature, both the flat band and the broad band at higher binding energy shift to higher binding energy, as seen in Fig. 4(e). The top of the flat band shifts from 5 meV binding energy at 20 K to 30 meV at 165 K, while the broad band shows a similar shift, moving down from 55 meV binding energy at 20 K to 72 meV at 165 K. Moreover, the peak intensity of the flat band obviously decreases with increasing temperature, as seen from the EDCs at the $\Gamma$ point in Fig. 3(d). Figure 3(e) shows the momentum-integrated EDCs around the $\Gamma$ point covering the flat band. We find that the original data and the Fermi-distribution-function-removed data show only a slight difference; thus we show only the original data in Fig. 3(d) and 3(e).
At low temperature, the flat band lies very close to the Fermi level and contributes some integrated spectral weight to the Fermi surface mapping at $\Gamma$, as seen in Fig. 1. With increasing temperature, the flat band shifts to higher binding energy, accompanied by a reduction of its peak intensity (Fig. 3(d)), leading to a spectral weight reduction near the Fermi level in the integrated EDCs (Figs. 3(e) and 4(f)). The broad hole-like band also shows a similar spectral weight reduction upon warming. The quantitative determination of the area of each electron pocket (Fig. 4(a)) and each hole pocket (Fig. 4(b)) offers an opportunity to check the electron-hole balance in WTe$_2$ at this specific k$_z$. Figure 4(c) shows the total areas of the electron pockets (red circles) and hole pockets (blue circles), which correspond to the electron and hole carrier concentrations in WTe$_2$. With increasing temperature, the total number of holes shows a pronounced decrease while the total number of electrons exhibits a slight increase. Because the electrons and holes evolve in opposite directions with temperature, the total areas of the electron and hole pockets are equal only at a single temperature, here 135 K. At all other temperatures they differ, and the difference grows with decreasing temperature. At 20 K, the area of the hole pocket is more than twice that of the electron pocket. We note that the electronic structure of WTe$_2$ shows a clear three-dimensional k$_z$ dependence; therefore, the three-dimensional Fermi surface should be taken into consideration when examining the electron-hole compensation picture[@1; @23; @33]. The Fermi surface sheets measured at a given photon energy represent a momentum cut of the three-dimensional Fermi surface at a given k$_z$.
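For reference, the compensation argument invoked here rests on the standard semiclassical two-band formula (a textbook result, not written out in the text): with electron and hole densities $n_e$, $n_h$ and mobilities $\mu_e$, $\mu_h$, the magnetoresistance is $$\mathrm{MR}=\frac{\rho(B)-\rho(0)}{\rho(0)}=\frac{n_e\mu_e\,n_h\mu_h\,(\mu_e+\mu_h)^2B^2}{(n_e\mu_e+n_h\mu_h)^2+(n_e-n_h)^2\mu_e^2\mu_h^2B^2},$$ which grows as $\mu_e\mu_h B^2$ without saturation only in the perfectly compensated case $n_e=n_h$; any imbalance makes the magnetoresistance saturate at high field. This is why the temperature at which the electron and hole pocket areas coincide is so consequential for the transport behavior.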
One major finding of our work is that there is an overall Fermi level shift with temperature, as seen from both the temperature dependence of the Fermi surface (Fig. 1) and the band structures (Figs. 2 and 3). This is also consistent with earlier reports[@7; @27]. With increasing temperature, the Fermi level shifts upward overall, which enlarges the electron pockets and concomitantly shrinks the hole pockets. This is true for the two-dimensional Fermi pockets at a given k$_z$, and it is also true for the three-dimensional electron and hole pockets in WTe$_2$. Therefore, one may expect the three-dimensional hole and electron concentrations in WTe$_2$ to evolve in opposite directions with temperature, similar to the behavior shown in Fig. 4(c). In this case, perfect electron-hole compensation, if it exists, can occur only at one temperature; over the rest of the temperature range, the electron and hole concentrations differ, and their difference grows when moving away from that particular temperature. If that temperature is finite, say 100 K, this clearly contradicts electron-hole compensation as the origin of the unusual magnetoresistance. Only if the compensation temperature happens to lie at 0 K can this picture explain the observed large magnetoresistance at low temperature and its disappearance at high temperature. Although this is quite unlikely, more work is needed to determine whether perfect electron-hole compensation exists in WTe$_2$ and at what temperature it may occur. The chemical potential of a three-dimensional Fermi surface is ordinarily expected to shift only slightly with temperature, possibly by less than 1 meV on going from very low temperature to room temperature.
Thus the reason the Fermi level shifts with temperature is that the density of states of the occupied states near the Fermi level differs from that of the unoccupied states. The Fermi level shift with temperature, which drives the opposite temperature evolution of the electrons and holes, is the major cause of the electron-hole imbalance in WTe$_2$. These results cast doubt on the validity of the electron-hole compensation picture in understanding the extremely large magnetoresistance in WTe$_2$. A comparison of the measured bands and Fermi surfaces with the calculated ones[@24] shows that the spin degeneracy of the observed Fermi pockets is mostly lifted, due to the very strong spin-orbit coupling in WTe$_2$. The suppression of backscattering caused by this effect may also play an important role in the magnetoresistance, as suggested before[@12]. With increasing temperature, the magnetoresistance in WTe$_2$ is rapidly suppressed[@1]. Such behavior was attributed to the loss of balance between electrons and holes due to thermal excitation of the high binding energy bands at $\sim$50 meV below the Fermi level[@7]. This is quite unlikely, because such a deep band can hardly contribute substantially to the electronic transport in this temperature range. On the contrary, the near-E$_F$ flat band we have observed may play an important role in affecting the transport properties of WTe$_2$. The flat band lies quite close to the Fermi level at low temperature and shifts away from it with increasing temperature, accompanied by a loss of spectral weight near the Fermi level. This temperature-dependent evolution tracks the temperature dependence of the magnetoresistance in WTe$_2$; thus the effect of this flat band on the unusual magnetoresistive behavior should be taken into consideration.
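The density-of-states argument for the chemical-potential shift can be made explicit with the standard Sommerfeld expansion (our addition, included for clarity): to leading order in temperature, $$\mu(T)\simeq E_F-\frac{\pi^2}{6}\,(k_BT)^2\,\frac{g'(E_F)}{g(E_F)},$$ so the chemical potential moves upward when the density of states $g(E)$ decreases through $E_F$, as expected when a large density of occupied states, such as that of the flat band, sits just below the Fermi level.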
In summary, by high-resolution ARPES measurements, we have revealed the complete electronic structure of WTe$_2$ for the first time. Our results show that, with increasing temperature, the Fermi level shifts upwards, causing an increase in the electron concentration and a concomitant reduction of the hole concentration. This indicates that perfect electron-hole compensation, if it exists, can occur only in a narrow temperature region; over the rest of the wide temperature range there is an electron-hole imbalance in WTe$_2$. These results call for a re-examination of the perfect electron-hole compensation picture as the main cause of the extremely large magnetoresistance in WTe$_2$. We observed a flat band near the Brillouin zone center that is close to the Fermi level at low temperature and is suppressed at high temperature. This flat band may play an important role in dictating the transport properties of WTe$_2$. Our results provide important information for understanding the unusual magnetoresistance in WTe$_2$, whose exact origin calls for further efforts to clarify. [99]{} M. N. Ali, J. Xiong, S. Flynn, J. Tao, Q. D. Gibson, L. M. Schoop, T. Liang, N. Haldolaarachchige, M. Hirschberger, N. P. Ong, and R. J. Cava, Nature (London) **514**, 205 (2014). A. A. Soluyanov, D. Gresch, Z. Wang, Q. Wu, M. Troyer, X. Dai, and B. A. Bernevig, Nature (London) **527**, 495 (2015). F. Y. Bruno, A. Tamai, Q. S. Wu, I. Cucchi, C. Barreteau, A. de la Torre, S. McKeown Walker, S. Ricco, Z. Wang, T. K. Kim, M. Hoesch, M. Shi, N. C. Plumb, E. Giannini, A. A. Soluyanov, and F. Baumberger, Phys. Rev. B **94**, 121112(R) (2016). C. Wang, Y. Zhang, J. Huang, S. Nie, G. Liu, A. Liang, Y. Zhang, B. Shen, J. Liu, C. Hu et al., Phys. Rev. B **94**, 241119(R) (2016). Y. Wu, D. Mou, N. H. Jo, K. Sun, L. Huang, S. L. Bud'ko, P. C. Canfield, and A. Kaminski, Phys. Rev. B **94**, 121113(R) (2016). J. Sanchez-Barriga, M. G. Vergniory, D. Evtushinsky, I. Aguilera, A. Varykhalov, S.
Blügel, and O. Rader, Phys. Rev. B **94**, 161401(R) (2016). D. Kang, Y. Zhou, W. Yi, C. Yang, J. Guo, Y. Shi, S. Zhang, Z. Wang, C. Zhang, S. Jiang, A. Li, K. Yang, Q. Wu, G. Zhang, L. Sun, and Z. Zhao, Nat. Commun. **6**, 7804 (2015). X.-C. Pan, X. Chen, H. Liu, Y. Feng, Z. Wei, Y. Zhou, Z. Chi, L. Pi, F. Yen, F. Song, X. Wan, Z. Yang, B. Wang, G. Wang, and Y. Zhang, Nat. Commun. **6**, 7805 (2015). Z.-Y. Jia, Y.-H. Song, X.-B. Li, K. Ran, P. Lu, X.-Y. Zhu, Z.-Q. Shi, J. Sun, J. Wen, D. Xing, and S.-C. Li, Phys. Rev. B **96**, 041108 (2017). P. K. Das, D. Di Sante, I. Vobornik, J. Fujii, T. Okuda, E. Bruyer, A. Gyenis, B. E. Feldman, J. Tao, R. Ciancio, G. Rossi, M. N. Ali, S. Picozzi, A. Yazdani, G. Panaccione, and R. J. Cava, Nat. Commun. **7**, 10847 (2016). B. Feng, Y.-H. Chan, Y. Feng, R.-Y. Liu, M.-Y. Chou, K. Kuroda, K. Yaji, A. Harasawa, P. Moras, A. Barinov, W. G. Malaeb, C. Bareille, T. Kondo, S. Shin, F. Komori, T.-C. Chiang, Y. Shi, and I. Matsuda, Phys. Rev. B **94**, 195134 (2016). Y. Wu, N. H. Jo, M. Ochi, L. Huang, D. Mou, S. L. Bud'ko, P. C. Canfield, N. Trivedi, R. Arita, and A. Kaminski, Phys. Rev. Lett. **115**, 166602 (2015). C. C. Homes, M. N. Ali, and R. J. Cava, Phys. Rev. B **92**, 161109(R) (2015). X. Qian, J. Liu, L. Fu, and J. Li, Science **346**, 1344 (2015). F. Zheng, C. Cai, S. Ge, X. Zhang, X. Liu, H. Lu, Y. Zhang, J. Qiu, T. Taniguchi, K. Watanabe, S. Jia, J. Qi, J.-H. Chen, D. Sun, and J. Feng, Adv. Mater. **28**, 4845 (2016). J. Jiang, F. Tang, X. C. Pan, H. M. Liu, X. H. Niu, Y. X. Wang, D. F. Xu, H. F. Yang, B. P. Xie, F. Q. Song, P. Dudin, T. K. Kim, M. Hoesch, P. K. Das, I. Vobornik, X. G. Wan, and D. L. Feng, Phys. Rev. Lett. **115**, 166601 (2015). Y.-Y. Lv, B.-B. Zhang, X. Li, B. Pang, F. Zhang, D.-J. Lin, J. Zhou, S.-H. Yao, Y. B. Chen, S.-T. Zhang, M. Lu, Z. Liu, Y. Chen and Y.-F. Chen, Sci. Rep. **6**, 26903 (2016). D. Rhodes, S. Das, Q. R. Zhang, B. Zeng, N. R. Pradhan, N. Kikugawa, E. Manousakis, and L.
Balicas, Phys. Rev. B **92**, 125152 (2015). Y. Luo, H. Li, Y. M. Dai, H. Miao, Y. G. Shi, H. Ding, A. J. Taylor, D. A. Yarotski, R. P. Prasankumar, and J. D. Thompson, Appl. Phys. Lett. **107**, 182411 (2015). S. Flynn, M. N. Ali, and R. J. Cava, arXiv:1506.07069 (2015). Y. Wang, K. Wang, J. Reutt-Robey, J. Paglione, and M. S. Fuhrer, Phys. Rev. B **93**, 121108 (2016). I. Pletikosić, M. N. Ali, A. V. Fedorov, R. J. Cava, and T. Valla, Phys. Rev. Lett. **113**, 216601 (2014). H. Y. Lv, W. J. Lu, D. F. Shao, Y. Liu, S. G. Tan and Y. P. Sun, Europhys. Lett. **110**, 37004 (2015). Z. Zhu, X. Lin, J. Liu, B. Fauqué, Q. Tao, C. Yang, Y. Shi, and K. Behnia, Phys. Rev. Lett. **114**, 176601 (2015). F. Xiang, M. Veldhorst, S. Dou, and X. Wang, Europhys. Lett. **112**, 37009 (2015). P. L. Cai, J. Hu, L. P. He, J. Pan, X. C. Hong, Z. Zhang, J. Zhang, J. Wei, Z. Q. Mao, and S. Y. Li, Phys. Rev. Lett. **115**, 057202 (2015). Y. Wu, N. H. Jo, D. Mou, L. Huang, S. L. Bud’ko, P. C. Canfield, and A. Kaminski, Phys. Rev. B **95**, 195138 (2017). [**Acknowledgement**]{}\ Supported by the National Natural Science Foundation of China under Grant No 11574367, the National Basic Research Program of China under Grant Nos 2013CB921904 and 2015CB921300, the National Key Research and Development Program of China under Grant No 2016YFA0300600, the Strategic Priority Research Program (B) of the Chinese Academy of Sciences under Grant No XDB07020300, and the US Department of Energy under Grant No DE-SC0014208. ![image](fig1_FST){width="1.0\columnwidth"} ![image](fig2_BandT){width="1.0\columnwidth"} ![image](fig3_Gamma){width="1.0\columnwidth"} ![image](fig4_Summary){width="1.0\columnwidth"}
--- abstract: 'In supersymmetric models with light higgsinos (which are motivated by electroweak naturalness arguments), the direct production of higgsino pairs may be difficult to search for at LHC due to the low visible energy release from their decays. However, the wino pair production reaction $\tw_2^\pm\tz_4\to (W^\pm\tz_{1,2})+(W^\pm\tw_1^\mp)$ also occurs at substantial rates and leads to final states including equally opposite-sign (OS) and same-sign (SS) diboson production. We propose a novel search channel for LHC14 based on the SS diboson plus missing $E_T$ final state which contains only modest jet activity. Assuming gaugino mass unification, and an integrated luminosity $\agt 100$ fb$^{-1}$, this search channel provides a reach for SUSY well beyond that from usual gluino pair production.' author: - Howard Baer - Vernon Barger - Peisi Huang - Dan Mickelson - Azar Mustafayev - Warintorn Sreethawong - Xerxes Tata title: | Same sign diboson signature from supersymmetry models\ with light higgsinos at the LHC --- The recent discovery of a Higgs-like resonance at $m_h\sim 125$ GeV by the Atlas and CMS collaborations[@atlas_h; @cms_h] completes the identification of all the states in the Standard Model (SM). However, the existence of fundamental scalars in the SM is problematic in that they destabilize the gauge hierarchy, leading to fine-tuning issues. Supersymmetric (SUSY) theories stabilize the scalar sector due to a fermion-boson symmetry, thus providing a solution to the gauge hierarchy problem[@witten]. In fact, the measured Higgs boson mass $m_h\simeq 125$ GeV falls squarely within the narrow range predicted[@mhiggs] by the minimal supersymmetric Standard Model (MSSM); this may be interpreted as indirect support for weak scale SUSY.
In contrast, the associated superparticle states have failed to be identified at LHC, leading the Atlas and CMS collaborations[@atlas_s; @cms_s] to place limits of $m_{\tg}\agt 1.4$ TeV (for $m_{\tg}\simeq m_{\tq}$) and $m_{\tg}\agt 0.9$ TeV (for $m_{\tg}\ll m_{\tq}$) within the popular mSUGRA/CMSSM model[@msugra]. In many SUSY models used for phenomenological analyses, the higgsino mass parameter $|\mu|$ is larger than the gaugino mass parameters $|M_{1,2}|$. In the alternative case where $|\mu| \ll |M_{1,2}|$, the lighter electroweak chargino $\tw_1$ and the lighter neutralinos $\tz_{1,2}$ are higgsino-like, while (assuming $|M_2|>|M_1|$) the heavier chargino $\tw_2$ and the heaviest neutralino $\tz_4$ are wino-like, and $\tz_3$ is bino-like. Electroweak $\tw_2\tz_4$ production, which occurs with $SU(2)$ gauge strength, then leads to a novel $W^\pm W^\pm + \eslt$ signature via the process shown in Fig. \[fig:diagram\]. We examine prospects for observing this signal in the 14 TeV run of the CERN LHC. Models with light higgsinos have a number of theoretical advantages, and have recently received considerable attention. To understand why, we note that the minimization condition for the Higgs scalar potential leads to the well-known tree-level relation $$\frac{M_Z^2}{2} = \frac{m_{H_d}^2 - m_{H_u}^2\tan^2\beta}{\tan^2\beta - 1} - \mu^2$$ \[eq:mssmmu\] where $m_{H_u}^2$ and $m_{H_d}^2$ are the tree-level mass squared parameters of the two Higgs doublets that are required to give masses to up- and down-type quarks, and $\tan\beta$ is the ratio of their vacuum expectation values. The value of $M_Z$ that is obtained from (\[eq:mssmmu\]) is [*natural*]{} if the terms on the right-hand-side (RHS) each have a magnitude of the same order as $M_Z^2$, implying $\mu^2/(M_Z^2/2)$ is limited from above by the extent of fine-tuning one is willing to tolerate.
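As a numerical illustration of this naturalness criterion, the tree-level minimization condition and the associated $\mu^2/(M_Z^2/2)$ measure can be evaluated as follows (a sketch with our own function names; all masses in GeV, mass-squared parameters in GeV$^2$):

```python
import math

MZ = 91.19  # Z boson mass in GeV

def mz2_tree(m_Hu2, m_Hd2, tan_beta, mu):
    """M_Z^2 predicted by the tree-level minimization condition."""
    t2 = tan_beta ** 2
    return 2.0 * ((m_Hd2 - m_Hu2 * t2) / (t2 - 1.0) - mu ** 2)

def mu_tuning(mu):
    """Fine-tuning measure mu^2 / (M_Z^2 / 2) discussed in the text."""
    return mu ** 2 / (MZ ** 2 / 2.0)
```

For $\mu = 150$ GeV this measure is about 5.4, i.e. a cancellation at the $\sim 20\%$ level from the $\mu$ term alone; the 3.5% figure quoted below for the model line also accounts for radiative corrections to the relation.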
The lack of a chargino signal at the LEP2 collider requires $|\mu|\agt 103.5$ GeV [@lep2ino], so that light higgsino models with low fine-tuning favour $|\mu|\sim 100-300$ GeV (in fact, $\mu^2$ was suggested as a measure of fine-tuning in Ref. [@ccn]). When radiative corrections to (\[eq:mssmmu\]) are included, masses of other superpartners (most notably third generation squarks) also enter on the RHS, and large cancellations may be needed if these have super-TeV masses. Models favouring low values of $|\mu|$ include: the hyperbolic branch/focus point (HB/FP) region of minimal supergravity model (mSUGRA or CMSSM)[@hb_fp] or its non-universal Higgs mass extension[@fs], models of “natural SUSY” (NS)[@kn; @ah; @ns; @nat] which have $\mu\sim 100-300$ GeV, top- and bottom-squarks with $m_{\tst_{1,2}},\ m_{\tb_1}\alt 500$ GeV and $m_{\tg}\alt 1.5$ TeV, and radiative natural SUSY (RNS)[@rns], where again $\mu\sim 100-300$ GeV and where $m_{H_u}^2$ is driven to small values $\sim -M_Z^2$ via the large top quark Yukawa coupling. The HB/FP region of mSUGRA[@msugra] remains viable[@dm125] but suffers high fine-tuning due to large top squark masses. The NS models as realized within the MSSM also seem to be disfavoured because much heavier top-squark masses are required to lift $m_h$ up to 125 GeV and to bring the $b\to s\gamma$ branching fraction into accord with measurements[@nat]. Models of NS with extra exotic matter which provide additional contributions to $m_h$ would still be allowed[@hpr]. The RNS model allows for top- and bottom-squarks in the 1-4 TeV range, and with large mixing can accommodate $m_h\simeq 125$ GeV and $BF(b\to s\gamma )$ while maintaining cancellations in (\[eq:mssmmu\]) at the 3-10% level. Another potential advantage of models with light higgsinos is that if the lightest supersymmetric particle (LSP) is higgsino-like, then it annihilates rapidly in the early universe, thus avoiding cosmological overclosure bounds. 
In this case, the higgsino might serve as a co-dark-matter particle along with perhaps the axion[@az1]. Although the production of charged and neutral higgsinos may occur at large rates (pb-level cross sections for $\mu \sim 150$ GeV at the LHC), detection of these reactions is very difficult because the mass gaps $m_{\tw_1}-m_{\tz_1}$ and $m_{\tz_2}-m_{\tz_1}$ are typically small, $\sim 5-20$ GeV, resulting in very low visible energy release from $\tw_1$ and $\tz_2$ decays. Thus, higgsino pair production events are expected to be buried beneath SM backgrounds[@bbh]. We examine instead signals from the heavier gaugino-like states, focusing on the wino-like states $\tw_2$ and $\tz_4$, whose production cross sections will be fixed by essentially just the wino mass parameter $M_2$ if first generation squarks are heavy. As an illustration, we show sparticle production cross sections for a model line from the RNS model, which can be generated from the two-extra-parameter non-universal Higgs model (NUHM2) [@nuhm2] with parameters $$m_0,\ m_{1/2},\ A_0,\ \tan\beta,\ \mu\ \ {\rm and}\ \ m_A .$$ The independent GUT scale parameters $m_{H_u}^2$ and $m_{H_d}^2$ have been traded for convenience for the weak scale parameters $\mu$ and $m_A$. We take $m_0=5$ TeV, $A_0=-1.6m_0$, $\tan\beta =15$, $\mu =150$ GeV, $m_A=1$ TeV, and allow $m_{1/2}$ to vary between $300-1000$ GeV. The large negative $A_0$ value allows $m_h\sim 125$ GeV[@h125] and at the same time limits the cancellation between the terms in (\[eq:mssmmu\]) to no better than 3.5%. We use Isajet[@isajet] for spectrum generation, branching fractions and also later for signal event generation. The cross sections for various electroweak-ino pair production processes are shown versus $m_{1/2}$ in Fig. \[fig:xsec\] for $pp$ collisions at $\sqrt{s}=14$ TeV, where we have used Prospino[@prospino] to obtain results at next-to-leading-order in QCD.
The difficult-to-detect $\tw_1^+\tw_1^-$, $\tw_1\tz_1$ and $\tz_1\tz_2$ higgsino processes dominate sparticle production with a cross section $\sigma\sim (2-4)\times 10^3$ fb. The corresponding curves are nearly flat under $m_{1/2}$ variation since $\mu$ is fixed at $150$ GeV. The charged and neutral wino-like states $\tw_2$ and $\tz_4$ are mainly produced via $\tw_2\tz_4$ and $\tw_2^+\tw_2^-$ reactions with cross sections that begin at $\sim 1000$ fb but fall slowly with increasing $m_{1/2}$ because their masses increase with $m_{1/2}$ (since $m_{\tw_2}\simeq m_{\tz_4} \simeq M_2\sim 0.8 m_{1/2}$). Cross sections for mixed gaugino-higgsino production reactions such as $\tw_2\tz_2$, $\tw_1\tz_3$ [*etc.*]{} fall more rapidly with $m_{1/2}$ and become subdominant. The gluino pair production cross section (‘+’s on the red curve) starts at $\sim 1000$ fb, but drops rapidly as $m_{1/2}$ (alternatively $m_{\tg}\simeq 2.4m_{1/2}$) increases. To understand the final states, we show in Fig. \[fig:bfw2\] the dominant $\tw_2$ branching fractions versus $m_{1/2}$ along the same model line. Here we see that the decays $\tw_2^+\to\tw_1^+ Z$ and $\tw_2^+\to\tz_2\ W^+$ occur at about 25% each, while $\tw_2^+\to\tz_1 W^+$ increases with $m_{1/2}$ to also approach $\sim 25\%$. In Fig. \[fig:bfz4\], we show the $\tz_4$ branching fractions versus $m_{1/2}$, and here find $\tz_4\to \tw_1^+ W^- +\tw_1^- W^+$ occurring at $\sim 50\%$, followed by $\tz_4\to \tz_2 Z$ and $\tz_1 h$ occurring at the $\sim 15-20\%$ level; several other subdominant decay modes are also shown. Combining the $\tw_2^\pm \tz_4$ production reaction with these decay modes, the following potentially interesting signatures emerge: $\tw_2^\pm\tz_4 \to \left(W^+W^-,\ WZ,\ ZZ\ {\rm and}\ W^\pm W^\pm\right) +\eslt$. (The $W^+W^-$, $WZ$ and $ZZ$ plus $\eslt$ signals also arise from chargino and neutralino production in models such as mSUGRA/CMSSM.)
The $W^+W^-$ signal will likely be buried beneath prodigious SM backgrounds from $W^+W^-$ and $t\bar{t}$ production, while the $ZZ$ signal is likely to be rate-limited at least in the golden four lepton mode. There may also exist some limited LHC14 reach for the $WZ\to 3\ell$ signal as in Ref. [@wz]. However, same-sign diboson production – $W^\pm W^\pm +\eslt$ – [*is a novel signature, characteristic of the light higgsino scenario.*]{} Assuming leptonic decays of the $W$ bosons, we expect events with same-sign (SS) dileptons $+\eslt$ accompanied by modest levels of hadronic activity arising from initial state QCD radiation and from hadronic decays of $\tw_1$ or $\tz_2$ where the usually soft decay products might become boosted to create a jet. The SS dilepton signal emerging from wino-pair production is quite distinct from that expected from gluino pair production[@ssdil], since in the latter case several very high $p_T$ jets and large $\eslt$ are also expected. The SM physics backgrounds to the SS diboson signal come from $uu\to W^+W^+ dd$ or $dd\to W^-W^- uu$ production, with a cross section $\sim 350$ fb. These events will be characterized by high rapidity (forward) jets and rather low $\eslt$. $W^\pm W^\pm$ pairs may also occur via two overlapping events; such events will mainly have low $p_T$ $W$s and possibly distinct production vertices. Double parton scattering will also lead to SS diboson events, at a rate somewhat lower than the $qq \to W^\pm W^\pm q'q'$ process[@stirling]. Additional physics backgrounds come from $t\bar{t}$ production where a lepton from a daughter $b$ is non-isolated, from $t\bar{t}W$ production, and from $4t$ production. SM processes such as $WZ\to 3\ell$ and $t\bar{t}Z\to 3\ell$ production, where one lepton is missed, constitute [*reducible*]{} backgrounds to the signal. To estimate backgrounds, we employ a toy detector simulation with calorimeter cell size $\Delta\eta\times\Delta\phi=0.05\times 0.05$ and $-5<\eta<5$.
The HCAL (hadronic calorimetry) energy resolution is taken to be $80\%/\sqrt{E}\oplus 3\%$ for $|\eta|<2.6$ and FCAL (forward calorimetry) is $100\%/\sqrt{E} \oplus 5\%$ for $|\eta|>2.6$, where the two terms are combined in quadrature. The ECAL (electromagnetic calorimetry) energy resolution is assumed to be $3\%/\sqrt{E}\oplus 0.5\%$. In all of these, $E$ is the energy in GeV. We use the cone-type Isajet [@isajet] jet-finding algorithm to group the hadronic final states into jets. Jets and isolated leptons are defined as follows: jets are hadronic clusters with $|\eta| < 3.0$, $R\equiv\sqrt{\Delta\eta^2+\Delta\phi^2}\leq0.4$ and $E_T(jet)>40$ GeV. Electrons and muons are considered isolated if they have $|\eta| < 2.5$ and $p_T(l)>10$ GeV, provided the visible activity within a cone of $\Delta R<0.2$ about the lepton direction satisfies $\Sigma E_T^{cells} < \min[5,0.15p_{T}(l)]$ GeV. We identify hadronic clusters as $b$-jets if they contain a $B$ hadron with $E_T(B)>$ 15 GeV, $|\eta(B)|<$ 3.0 and $\Delta R(B,jet)<$ 0.5. We assume a tagging efficiency of 60$\%$; light quark and gluon jets can be mis-tagged as $b$-jets with a probability $1/R_b$, with $R_b=150$ for $E_{T} \leq$ 100 GeV, $R_b=50$ for $E_{T} \geq$ 250 GeV, and a linear interpolation in between. We require the following cuts on our signal and background event samples: [*exactly*]{} 2 isolated same-sign leptons with $p_T(\ell_1 )>20$ GeV and $p_T(\ell_2 )>10$ GeV, and $n(b-jets)=0$ (to aid in vetoing the $t\bar{t}$ background). At this point the event rate is dominated by $WZ$ and $t\bar{t}$ backgrounds. To reduce these further, we construct the transverse mass of each lepton with $\eslt$ and require $\mtmin\equiv \min\left[ m_T(\ell_1,\eslt),m_T(\ell_2 ,\eslt )\right] > 125 \ {\rm GeV}$, since the signal gives rise to a continuum distribution, while the background has a kinematic cut-off around $\mtmin \simeq M_W$ (as long as the $\eslt$ dominantly arises from the leptonic decay of a single $W$).
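The $\mtmin$ variable used for this cut can be sketched in a few lines (an illustration with our own function names; momenta in GeV, azimuthal angles in radians):

```python
import math

def transverse_mass(pt_lep, phi_lep, met, phi_met):
    """m_T of a lepton with the missing transverse energy vector:
    m_T^2 = 2 pT(l) ETmiss (1 - cos(dphi))."""
    dphi = phi_lep - phi_met
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

def mt_min(lep1, lep2, met, phi_met):
    """min[m_T(l1, MET), m_T(l2, MET)]; each lepton is a (pT, phi) pair."""
    return min(transverse_mass(lep1[0], lep1[1], met, phi_met),
               transverse_mass(lep2[0], lep2[1], met, phi_met))
```

For example, a lepton back-to-back with a 50 GeV $\eslt$ at $p_T = 50$ GeV gives $m_T = 100$ GeV; for a single leptonic $W$ decay, $m_T$ is kinematically bounded near $M_W$, which is why the $\mtmin > 125$ GeV requirement is so effective against $WZ$ and $t\bar{t}$.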
After these cuts, we are unable to generate any background events from $t\bar{t}$ and $WZ$ production, where the 1 event level in our simulation was 0.05 fb and 0.023 fb, respectively. The dominant SM background for large $\mtmin$ then comes from $Wt\bar{t}$ production for which we find (including a QCD $k$-factor $k=1.18$ extracted from Ref. [@garzelli]) a cross section of $0.019$ ($0.006$) fb after the harder cuts, $\mtmin > 125$ (175) GeV and $\eslt>200$ GeV that serve to optimize the signal reach for high $m_{1/2}$ values.[^1] The calculated signal rates after cuts along the RNS model line from just $\tw_2^\pm\tz_4$ and $\tw_2^\pm\tw_2^\mp$ production are shown vs. $m_{1/2}$ in Fig. \[fig:ss\] where the upper (blue) curves require $\mtmin>125$ GeV and the lower (orange) curve requires $\mtmin>175$ GeV. The $\tw_2\tz_4$ and $\tw_2\tw_2$ cross sections are normalized to those from Prospino[@prospino]. For observability with an assumed value of integrated luminosity, we require: 1) significance $> 5\sigma$, 2) Signal/BG$>0.2$ and 3) at least 5 signal events. The LHC reach for SS diboson events for integrated luminosity values 100, 300 and 1000 fb$^{-1}$ is shown by horizontal lines in Fig. \[fig:ss\] and also in Table \[tab:reach\]. For just 10 fb$^{-1}$ of integrated luminosity there is no LHC14 reach for SS dibosons, while $\tg\tg$ production gives a reach of $m_{\tg}\sim 1.4$ TeV[@bblt1014]. However, for 100 fb$^{-1}$ the LHC14 reach for SS dibosons extends to $m_{1/2}\sim 680$ GeV corresponding to $m_{\tg}\sim 1.6$ TeV in a model with gaugino mass unification. The direct search for $\tg\tg$ gives a projected reach of $m_{\tg}\sim 1.6$ TeV [@bblt2012], so already the SS diboson signal offers a comparable reach. For 300 (1000) fb$^{-1}$ of integrated luminosity, we find the LHC14 reach for SS dibosons extends to $m_{1/2}\sim 840$ (1000) GeV, corresponding to a reach in $m_{\tg}$ of 2.1 and 2.4 TeV. 
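The three observability criteria can be encoded directly (a sketch; the text does not specify the significance statistic, so we assume the common $S/\sqrt{B}$ approximation):

```python
import math

def observable(sig_fb, bg_fb, lumi_fb):
    """Check: significance > 5 sigma, S/B > 0.2, and at least 5 signal events."""
    S = sig_fb * lumi_fb   # expected signal events
    B = bg_fb * lumi_fb    # expected background events
    if S < 5.0:
        return False
    if B > 0.0 and (S / math.sqrt(B) <= 5.0 or S / B <= 0.2):
        return False
    return True
```

With the $Wt\bar{t}$ background of 0.019 fb after cuts, a 0.2 fb signal would pass all three criteria at 100 fb$^{-1}$, while a 0.01 fb signal would fail the 5-event requirement.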
These numbers extend well beyond the LHC14 reach for direct gluino pair production[@bblt1014].

  Int. lum. (fb$^{-1}$)   $m_{1/2}$ (GeV)   $m_{\tg}$ (TeV)   $m_{\tg}$ (TeV) \[$\tg\tg$\]
  ----------------------- ----------------- ----------------- ------------------------------
  10                      –                 –                 1.4
  100                     680               1.6               1.6
  300                     840               2.1               1.8
  1000                    1000              2.4               2.0

  : Reach of LHC14 for SUSY assuming various integrated luminosity values. The reach is given for $m_{1/2}$ along the RNS model line, and also for the equivalent reach in $m_{\tg}$ assuming heavy squarks. The corresponding reach in $m_{\tg}$ from $\tg\tg$ searches is also shown for comparison.

\[tab:reach\]

We emphasize here that the SS diboson signal from SUSY models with light higgsinos is quite distinct from the usual SS dilepton signal arising from gluino pair production, which is usually accompanied by numerous hard jets and high $\eslt$. For instance, recent CMS searches for SS dileptons from SUSY[@cms_ss] required the presence of two tagged $b$-jets or large $H_T$ in the events; these cuts reduce or even eliminate our SS diboson signal. Likewise, the requirement of $n_j\ge 4$ high $p_T$ jets along with $\eslt >150$ GeV in a recent Atlas search for SS dileptons from gluinos[@atlas_ss] would have eliminated much of the SS diboson signal from SUSY with light higgsinos. [*Summary:*]{} In SUSY models with light higgsinos (as motivated by electroweak naturalness considerations), the production of wino pairs gives rise to a novel same-sign diboson plus modest hadronic activity signature. For an integrated luminosity of 100 (1000) fb$^{-1}$ this SS diboson signal should be observable at LHC14 for wino masses up to 550 (800) GeV. Assuming gaugino mass unification, this extends the LHC SUSY reach well beyond that of conventional searches for gluino pair production in the case where squarks are heavy. [*Acknowledgements:*]{} We thank Andre Lessa for discussions.
--- author: - | P. Jacob and P. Mathieu[^1]\ \ Department of Mathematical Sciences, University of Durham, Durham, DH1 3LE, UK\ and\ Département de physique, de génie physique et d’optique,\ Université Laval, Québec, Canada, G1K 7P4. date: June 2005 title: ' The $Z_k^{(su(2),3/2)}$ parafermions' --- [**ABSTRACT**]{} We introduce a novel parafermionic theory for which the conformal dimension of the basic parafermion is $\tfrac32(1-1/k)$, with $k$ even. The structure constants and the central charges are obtained from mode-type associativity calculations. The spectrum of the completely reducible representations is also determined. The primary fields turn out to be labeled by two positive integers instead of a single one as for the usual parafermionic models. The simplest singular vectors are also displayed. It is argued that these models are equivalent to the non-unitary minimal $W_k(k+1,k+3)$ models. More generally, we expect all $W_k(k+1,k+2\beta)$ models to be identified with generalized parafermionic models whose lowest-dimensional parafermion has dimension $\beta(1-1/k)$. Introduction ============ The Fateev-Zamolodchikov $Z_k$ parafermionic conformal field theory, originally introduced in [@ZFa], can be generalized along two quite distinct lines. The first one relies on the observation that the standard parafermionic model is equivalent to the coset $\suh(2)_k/\uh(1)$. In that vein, a natural generalization amounts to considering the cosets $\gh/\uh(1)^r$, where $r$ is the rank of the affine Lie algebra $\gh$ [@Gep]. Originally formulated in terms of untwisted affine simple Lie algebras, this approach has been further extended to the cases where $\gh$ is either twisted [@DGZ] or contains fermionic generators [@CRS]. 
For the second line of generalization, one preserves the structure of the standard theory, i.e., the cyclic structure of the OPEs (namely $\psi_n\times \psi_m\sim \psi_{n+m}$ with $\psi_k\sim I$), but attributes different conformal dimensions to the parafermionic fields. We recall in that regard that for a candidate chiral algebra whose OPE has a $Z_ k$ invariance, the dimensions of the basic parafermionic fields, $h_{\psi_n}$, have to be introduced as an input. The $Z_ k$ model of [@ZFa] amounts to choosing the dimension of the parafermions $\psi_n$ to be $ h_{\psi_n} = {n(k-n)/k}$. This is the simplest choice compatible with the monodromy constraints. A general solution to these constraints has also been displayed in [@ZFa]. It reads $$\label{modelb} h^{(\beta)}_{\psi_n} = {\beta n(k-n)\over k}+a_n\;,$$ where $\beta$ is an integer and $a_n$ are integers that satisfy $a_n=a_{k-n}$. The second type of generalization is thus defined in terms of such a generalized form of the dimension of the parafermionic fields. Actually, both types of generalization (with $a_n=0$ and $\beta$ positive) can be combined, leading to a much larger set of parafermionic theories, denoted as $ Z_ k^{(g,\beta)}$, classified in terms of both their underlying structural algebra[^2] $g$ and the parameter $\beta$. In this notation, the original parafermionic theories of [@ZFa] would correspond to the $ Z_ k^{(su(2),1)}$ models. In this work we explore a generalization of the second type. The corresponding models will be referred to as the $Z_ k^{(su(2),\beta)}$ models, written $Z_ k^{(\beta)}$ for short. Some $Z_ k^{(\beta)}$ models have been considered previously. The unitary sequence of the $ Z_3^{(2)}$ models, first introduced in appendix A of [@ZFa], has been studied in more detail in [@ZFc]. Further developments are presented in [@FPT]. More recently, the unitary sequence of the $ Z_k^{(2)}$ models for arbitrary $k$ has been analyzed in depth in [@Dot]. 
The only known results for higher values of $\beta$ seem to be those presented in [@DotS], pertaining to $\beta=4$ and $k=3$. Here we explore a novel possibility, which is to consider $\beta$ to be half-integer. This is allowed whenever $k$ is even. The possibility of having $\beta$ non-integer seems to have first been mentioned in [@Rav], although it has not been studied there. This enhancement in the range of the possible values of $\beta$ in relation with the parity of $k$ is a direct consequence of the requirement $k\beta\in \Z$. This, in turn, ensures that moving the parafermionic field $\psi_1(z)$ $k$ times around $\psi_1(w)$ does not produce any phase. The special case $\beta=3/2$ is analyzed here in some detail.[^3] Since the dimension of the basic parafermion reads $$h^{(3/2)}_{\psi_1}= {3\over 2}\left(1-{1\over k}\right) \; ,$$ this provides a sort of parafermionic deformation of supersymmetry.[^4] As for the $Z_k^{(1)}$ models, an associativity analysis is enough to fix the central charge of the $Z_ k^{(3/2)}$ models to $$\label{centre} c=-{3(k-1)^2\over (k+3)}\; .$$ For $k=2$, the central charge is $-3/5$, which is precisely that of the Virasoro minimal model $ {\cal M}(3,5)$. The $Z_ 2^{(3/2)}$ model is indeed equivalent to ${\cal M}(3,5)$.[^5] This relation between the $Z_ 2^{(3/2)}$ model and a particular Virasoro minimal model is the simplest example of a general relationship between $Z_ k^{(3/2)}$ and $W_k$ models, whose precise phrasing is $$\label{zwiden} Z_ k^{(3/2)}\simeq W_k^{(k+1,k+3)} \;.$$ Note that for $k$ even (and positive), $k+1$ and $k+3$ are ensured to be relatively prime. Structure of the $Z_ k^{(3/2)}$ algebra ==================== Consider the set of $k$ parafermionic fields $\psi_n$, $n=0,\cdots, k-1$, of conformal dimensions $$h_n\equiv h^{(3/2)}_{\psi_n}= {3\over 2} n \left(1-{n\over k}\right)\; .$$ As usual with parafermions, the mode decomposition of $\psi_1$ depends upon the sector on which it acts. 
These sectors are labeled by integers $t=0,\cdots , k-1$. In the sector $t$, we have $$\psi_1(z) = \sum_{m=-\y}^\y z^{-t/k-m-1}A_{m+1+t/k-3/2+3/2k}=\sum_{m=-\y}^\y z^{-\la q-m-1}A_{m-1/2+\la(1+q)} \; , \label{modep}$$ i.e., the sector determines the fractional power of $z$. In the second relation, we have traded $t$ for its rescaled version $q$, and written $3/(2k)$ as $\la$: $$t={3 q\over 2}\; , \qquad \la= {3\over 2k}\; .$$ A similar expression holds for the decomposition of $\psi^\dagger_1$, with $q\rw -q$. The inverted versions of (\[modep\]) and its dagger version are: $$\label{modepin} A_{m-1/2+\la(1+q)} = {1\over 2 \pi i}\oint_0 dz\, z^{\la q + m}\,\psi_1 (z)\, \qquad {\rm and} \qquad A_{m-1/2+\la(1-q)}^\dagger ={1 \over 2 \pi i} \oint_0 {dz }\, z^{-\la q+m}\,\psi^\dagger_1 (z)\;.$$ Interpreting $q$ as the charge that characterizes the sector $t$ amounts to assigning the charge $q=2$ to $A$ (actually this is essentially the reference charge, the charge concept being relative), while that of $A^\dagger$ is $-2$. It is convenient to drop the fractional part (more precisely, the part proportional to $\la$), since it is easily reconstructed from the charge of the state on which the mode acts, signaling this omission in the mode writing by replacing $A$ by $\A$. Thus, when acting on an arbitrary state of charge $q$, we write $$A_{u+\la(1+q)}= \A_u\;.$$ The defining (holomorphic) OPEs of the $Z_ k$ parafermionic conformal algebra [@ZFa] are $$\begin{split} \psi_n (z) \,\psi_{n'} (w) &\sim {c_{n,n'}\over (z-w)^{3nn'/k}}\; \psi_{n+n'} (w) \qquad (n+n'<k) \\ \psi_n(z) \,\psi^\dagger_{n'} (w) &\sim { c_{n,k-n'}\over (z-w)^{3{\rm min}(n,n')-3nn'/k} }\; \psi_{k+n-n'} (w) \qquad (n+n'<k) \\ \psi_n(z) \,\psi^\dagger_n (w) &\sim {1\over (z-w)^{3n(k-n)/k}} \left[I+(z-w)^2 {2 h_{n} \over c}\, T(w) +\cdots\right]\;, \end{split}$$ where $\psi_0=I$ and $ \psi^\dagger_n= \psi_{k-n}$. The remaining OPEs are obtained by conjugation, with $c_{n,n'}= c_{k-n,k-n'}$. 
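As a side check (ours, not part of the original derivation), the exponent $3nn'/k$ in the first OPE above is fixed by dimensional analysis: the leading term of $\psi_n(z)\psi_{n'}(w)$ scales as $(z-w)^{h_{n+n'}-h_n-h_{n'}}$, and with $h_n=\tfrac32\,n(1-n/k)$ this is exactly $-3nn'/k$. A short sympy sketch verifies the identity symbolically:

```python
import sympy as sp

n, n_prime, k = sp.symbols("n n_prime k", positive=True)

# Conformal dimension of psi_m in the Z_k^(3/2) model: h_m = (3/2) m (1 - m/k)
h = lambda m: sp.Rational(3, 2) * m * (1 - m / k)

# Singularity exponent of the psi_n x psi_{n'} OPE: h_n + h_{n'} - h_{n+n'}
exponent = sp.simplify(h(n) + h(n_prime) - h(n + n_prime))
assert sp.simplify(exponent - 3 * n * n_prime / k) == 0
```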
Following [@ZFa], the commutation relations are found to be $$\label{comisaa} \sum_{l=0}^{\infty} C^{(l)}_{-2\la} \left[ \A_{n-l-1/2} \A^\dagger_{m+l+1/2} - \A_{m-l+1/2}^\dagger \A_{n+l-1/2} \right] = \left[ {2h_{1}\over c} L_{n+m} + {1 \over 2} (n+{\la q})(n-1+\la q) \delta_{n+m,0} \right]$$ where (with $c$ given by (\[centre\])) $${2h_{1} \over c }= - {(k+3)\over k(k-1) } \; \qquad {\rm and}\qquad C^{(l)}_a={\Gamma(l-a)\over \Gamma(-a) l!}\;.$$ In the same way, we easily obtain the following commutation relations involving only $\A$ modes: $$\sum_{l=0}^{\infty} C^{(l)}_{2\la} \left[ \A_{n-l-1/2} \A_{m+l+1/2} - \A_{m-l+1/2} \A_{n+l-1/2} \right] = 0 \;. \label{comisaTwo}$$ There is an identical expression with $\A$ replaced by $\A^\dagger$ [^6]. The highest-weight states $|{\rm hws}\R$ are naturally defined by being annihilated by the action of the positive parafermionic modes: $$\A_{m+1/2}|{\rm hws}\R = \A_{m+1/2}^\dagger|{\rm hws}\R= 0 \quad {\rm for}\,\;\; m\geq 0\;. \label{highestW}$$ The vacuum $|0\R$ is certainly a particular example of a highest-weight state. The parafermionic field $\psi_n$ itself is associated to the following vacuum descendant state: $$\psi_n(0)|0\R= (\A_{-3/2})^n|0\R\;.$$ Structure constants and central charge ====================================== In this section, we will fix the value of the central charge and the structure constants for the $Z_ k^{(3/2)}$ parafermionic theory. This will be done by a rather simple method using mode computations (cf. [@JM3p]). Let us start by fixing the central charge. The trick is to start with a simple string of three modes acting on the vacuum. For the first mode $\A_u|0\R$ we choose the highest value of $u$ that makes the state non-vanishing. This is $u=-3/2$. The second mode is chosen to be such that the resulting state is proportional to $L_{-1}|0\R$. This second mode is thus $\A^\dagger_{-1-u}= \A^{\dagger}_{1/2}$. 
Finally, we choose the third mode such that the resulting state is proportional to $\A_u|0\R$. We thus consider $$\A_{u+1}\A^\dagger_{-1-u} \A_{u}|0\R=\A_{-1/2}\A^{\dagger}_{1/2}\A_{-3/2} |0 \rangle \;.$$ By commuting the two rightmost terms (as indicated below by the underbrace) with (\[comisaa\]), we have $$\A_{-1/2} \underbrace{\A^{\dagger}_{1/2}\A_{-3/2}}|0 \rangle =0\;.$$ This, of course, is compatible with the fact that $\A^{\dagger}_{1/2}\A_{-3/2}|0 \rangle$ is proportional to $L_{-1}|0\R=0$. On the other hand, by commuting the two left-most terms, still using (\[comisaa\]), we obtain $$\begin{split} \underbrace{\A_{-1/2}\A^{\dagger}_{1/2} } \A_{-3/2} |0 \rangle &=\, {2 h^2_{1} \over c}\A_{-3/2} |0 \rangle + {1 \over 2} (2 \lambda)(-1+2 \lambda)\A_{-3/2} |0 \rangle - C^{(1)}_{-2\lambda} \A_{-3/2}\A^{\dagger}_{3/2}\A_{-3/2} |0 \rangle \cr &= \left( {9 \over 2}{ (k-1)^2 \over c k^2} + {3 \over 2} {(3-k) \over k^2} +{3 \over k}\right)\A_{-3/2} |0 \rangle \;. \end{split}$$ In establishing this result, we used the case $n=1$ of the following relations: $$\A^{\dagger}_{3/2+\ell}(\A_{-3/2})^n |0 \rangle= 0= \A_{-3/2+\ell}(\A_{-3/2})^n|0 \rangle\qquad {\rm for }\qquad \ell>0\;, \label{restri}$$ which are easily proved by means of the commutation relations (\[comisaa\]) and (\[comisaTwo\]) supplemented by an inductive argument. Coming back to the computation of the central charge, equating the two expressions obtained for $\A_{-1/2}\A^{\dagger}_{1/2}\A_{-3/2} |0 \rangle$ yields the announced result (\[centre\]). This value can be recalculated in several different ways, thereby confirming the associativity of the theory. Let us now turn to the computation of the structure constants. We first show that the constants $c_{n,n'}$ are fixed by $c_{1,n}$. 
Introducing the compact notation $$(c)_n= c_{1,1}\cdots c_{1,n-1}\;,$$ we have, in a schematic notation (i.e., by dropping the $z$ dependence and regarding these equalities in terms of leading terms):$$\psi_{n+n'}= {1\over (c)_{n+n'}}(\psi_1)^{n+n'}= {(c)_n(c)_{n'} \over (c)_{n+n'}}\psi_{n}\psi_{n'}= {(c)_n(c)_{n'} \over (c)_{n+n'}} c_{n,n'}\psi_{n+n'}\;.$$ From this, we conclude that $$c_{n,n'}= { (c)_{n+n'}\over (c)_n(c)_{n'} }\;, \label{reCu}$$ the sought-for relationship. Note also that $$\psi_1^\dagger (\psi_1)^{n+1}= (c)_{n+1} \psi_1^\dagger \psi_{n+1} = (c)_{n+1} c_{1,n}\psi_{n}= c_{1,n}^2(\psi_{1})^n\;.$$ We are now in a position to calculate $c^2_{1,n}$. From the previous relation, we get $$\A^{\dagger}_{3/2}(\A_{-3/2})^{n+1} |0 \rangle = c^2_{1,n} (\A_{-3/2})^{n} |0 \rangle\; .$$ We next commute $\A^{\dagger}_{3/2}$ with the first $\A_{-3/2}$ factor using (\[comisaa\]): $$\begin{split} \A^{\dagger}_{3/2}(\A_{-3/2})^{n+1} |0 \rangle &= \left[-{2h_1\over c}L_0 - {1 \over 2}{ (-1+2n\la) (-2+ 2n \lambda) } \right](\A_{-3/2})^{n} |0 \rangle +\A_{-3/2}\A^{\dagger}_{3/2}(\A_{-3/2})^{n} |0 \rangle \cr &\equiv \, \Delta_n \, (\A_{-3/2})^{n} |0 \rangle +\A_{-3/2}\A^{\dagger}_{3/2}(\A_{-3/2})^{n} |0 \rangle \;, \end{split}$$ where $\Delta_n$ stands for $$\Delta_n = -{2h_1h_n \over c} - {1 \over 2}{ (-1+2n\la) (-2+ 2n \lambda) } \;,$$ and we used $$L_0(\A_{-3/2})^{n} |0 \rangle = h_n (\A_{-3/2})^{n} |0 \rangle\;.$$ In the intermediate steps, the conditions (\[restri\]) have been taken into account. Now, by iterating this result, we get, with $c$ fixed as above, $$\A^{\dagger}_{3/2}(\A_{-3/2})^{n+1} |0 \rangle= \left[ \sum_{j=0}^n \Delta_j \right](\A_{-3/2})^{n} |0 \rangle= - {(n+1)(k-n)(k-2n-1) \over k (k-1)}(\A_{-3/2})^{n} |0 \rangle \;.$$ Comparing the two distinct expressions we have derived for this state, we end up with $$c^2_{1,n} = - {(n+1)(k-n)(k-2n-1) \over k (k-1)}\;.$$ Note that the generic structure constants have to be imaginary. 
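The closed form quoted for $\sum_{j=0}^n\Delta_j$ can be checked independently. The following sympy sketch (our verification, not part of the paper) sums $\Delta_j$ with $\la=3/(2k)$, $h_m=\tfrac32 m(1-m/k)$ and $c$ given by (\[centre\]), and confirms the result $-(n+1)(k-n)(k-2n-1)/k(k-1)$:

```python
import sympy as sp

k, n, j = sp.symbols("k n j", positive=True)

lam = sp.Rational(3, 2) / k                      # la = 3/(2k)
c = -3 * (k - 1)**2 / (k + 3)                    # central charge (centre)
h = lambda m: sp.Rational(3, 2) * m * (1 - m / k)

# Delta_j as defined in the text
Delta = -2 * h(1) * h(j) / c - sp.Rational(1, 2) * (2*j*lam - 1) * (2*j*lam - 2)

total = sp.summation(Delta, (j, 0, n))
target = -(n + 1) * (k - n) * (k - 2*n - 1) / (k * (k - 1))
assert sp.simplify(total - target) == 0
```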
In the simplest case $k=2$, we see that $c_{1,1}^2=c_{1,k-1}^2=1$, as it should be. Finally, using (\[reCu\]), we have $$c^2_{n,n'} = - {(n+n')!\, (k-n)! \, (k-n')!\, (k-2n-1)!! \, (k-2n'-1)!! \over k(k-1)\, (k-1)!\, n! \, n'! \, (k-n-n')!\, (k-3)!!\, (k-2n-2n'-1)!!}\;.$$ Primary singular vectors and spectrum ===================================== Verma modules are obtained by acting on the highest-weight states with the different lowering operators. A spanning set of states is given by: $$\begin{split} & L_{-n_1} \cdots L_{-n_{m_1}} \A_{-1/2-n'_1} \cdots \A_{-1/2-n'_{m_2}} \A^{\dagger}_{-1/2-n''_1} \cdots \A^{\dagger}_{-1/2-n''_{m_3}} |{\rm hws}\R \cr & {\rm for} \cr & n_i \geq n_{i+1} \geq 1 \; ; \;\; n'_i \geq n'_{i+1} \geq 0 \; ; \;\; n''_i \geq n''_{i+1} \geq 0 \;,\cr \end{split} \label{verma}$$ with $m_1,\, m_2$ and $m_3$ running over all non-negative integers. Note that the above states have relative charge $2(m_2-m_3)$. We are interested in finding the characteristics of those highest-weight states that make the modules completely reducible, that is, for which there is an infinite number of the above states that are not independent. As for any parafermionic theory, if a string made of $\A_{-1/2}$ or $\A^{\dagger}_{-1/2}$ modes, acting on a highest-weight state, is allowed to run freely, it will eventually reach a conformal dimension lower than that of the highest-weight state. (We stress that in order to see this, we need to take into account the fractional part of the modes.) For that reason, it is natural to look for singular vectors in the form of $(\A_{-1/2})^{r+1}|{\rm hws}\R$ and $(\A^{\dagger}_{-1/2})^{r'+1}|{\rm hws}\R$ for some integers $r$ and $r'$. Yet, we have no characterization of the highest-weight states. Certainly, a highest-weight state must belong to a definite sector $t$ (recall that for the usual parafermions, the sector label characterizes the highest-weight state uniquely). 
So we first require the singular vectors $$\label{singA} (\A_{-1/2})^{r+1}|{\rm hws}\R = 0$$ to obey the highest-weight conditions (\[highestW\]). This fixes the conformal dimensions of the highest-weight states to be $$h_{t,r} = -{ k (k-2r-t-1)(2r+t) +t^2 \over 2k(k+3)}\;. \label{confDim}$$ We next look for singular vectors in the form $(\A^{\dagger}_{-1/2})^{r'+1}|{\rm hws} \rangle $ leading to the same conformal dimension for the highest-weight states. We thereby obtain $$( \A^{\dagger}_{-1/2} )^{r+t+1} |{\rm hws} \rangle =0 \; . \label{singB}$$ It is clear at this point that we cannot get rid of this second parameter $r$. Consequently, it must become a second quantum number characterizing our highest-weight states: $$|{\rm hws} \rangle \equiv |t,r\rangle \;.$$ At this stage, the absence of any dependence upon $k$, meaning that the different models would have the same primary singular vectors, indicates that we certainly have not obtained the full set of primary singular vectors that would characterize the completely reducible representations. In order to unravel further constraints, observe that we have as yet no way of removing states of the form $(A_{-3/2})^\ell (A_{-1/2})^{r} |t,r \rangle$, which, for $\ell$ sufficiently large, will once again have negative relative conformal dimension. It is thus unsurprising to discover another set of null states that does depend upon $k$: $$( \A_{-3/2} )^{k-t-2r+1} ( \A_{-1/2} )^{r} |t,r \rangle =0\;, \label{singD}$$ and $$( \A^{\dagger}_{-3/2} )^{k-t-2r+1}( \A^{\dagger}_{-1/2} )^{r+t} |t,r \rangle =0\;. \label{singF}$$ Note however that these null states $|\chi \rangle$ have been obtained as the solutions of the conditions $$\A_{3/2} |\chi \rangle= 0 = \A^{\dagger}_{-3/2} |\chi \rangle$$ instead of (\[highestW\]). These conditions are enough to ensure their decoupling from the whole module, but they also suggest that these states do arise as descendants of genuine singular vectors. 
We found the simplest of such singular vectors in the module of relative charge $0$ to be: $$\label{mixe} \left[L_{-1} + { 2(2r+t)(2r+t-1) \over (t-1)(2r+t+3) } \A_{-1/2} \A^\dagger_{-1/2} \right]|t,r \rangle= 0 \quad{\rm for }\quad k=2r+t\;.$$ This is indeed a primary singular vector, obeying the highest-weight conditions (\[highestW\]). By acting on this vector (\[mixe\]) with $(A_{-1/2} )^{r+1}$ and taking (\[singA\]) into account, we get $$\A_{-3/2} ( \A_{-1/2} )^{r} |t,r \rangle =0\;, \label{singDD}$$ which is identical to (\[singD\]) for $k=2r+t$. We can also generate (\[singF\]) in a similar way. We thus naturally expect a whole sequence of similar singular vectors to be at the source of (\[singD\]) and (\[singF\]).[^7] By definition, $r$ has to be a non-negative integer. From (\[singD\]) or (\[singF\]), we deduce that $0 \leq r \leq (k-t)/2$. This bound, together with $0\leq t\leq k-1$, allows us to fix the spectrum of the theory. Conclusion ========== We have introduced new $Z_ k$ parafermionic models that belong to the general class of models introduced in [@ZFa]: the structure of the algebra is still a cyclic $Z_k$ one, but the dimension of the basic parafermion is modified, as compared with that of the usual $\suh(2)_k/\uh(1)$ model, by the multiplicative factor $3/2$. In this letter, we have confined ourselves to the study of the basic properties of these models, such as the determination of the essential parameters of the theory (structure constants and central charge) as well as a determination of the spectrum of the reducible modules. Much remains to be done and/or clarified: the determination of the field identifications, the complete analysis of the singular vectors, the derivation of character formulae, the unravelling of a quasi-particle basis and the corresponding fermionic form of the character, etc. These topics will be considered elsewhere. Our main aim here was to establish the well-definedness of these models. 
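As an elementary check of the claimed equivalence $Z_2^{(3/2)}\simeq{\cal M}(3,5)$, one can compare the spectrum (\[confDim\]) at $k=2$, over the range $0\leq t\leq k-1$, $0\leq r\leq (k-t)/2$, with the Kac table of ${\cal M}(3,5)$, whose dimensions are $h_{r,s}=((5r-3s)^2-4)/60$. The sketch below (our verification, not from the paper) confirms that every $h_{t,r}$ appears in the Kac table, as does $h_1=3/4$, the dimension of $\psi_1$ itself:

```python
from fractions import Fraction as F

k = 2

# h_{t,r} from (confDim): -[k(k-2r-t-1)(2r+t) + t^2] / [2k(k+3)]
def h_tr(t, r, k):
    return F(-(k * (k - 2*r - t - 1) * (2*r + t) + t * t), 2 * k * (k + 3))

# Spectrum: 0 <= t <= k-1, 0 <= r <= (k-t)/2
spectrum = {h_tr(t, r, k) for t in range(k) for r in range((k - t) // 2 + 1)}

# Kac dimensions of the Virasoro minimal model M(3,5)
kac = {F((5*r - 3*s)**2 - 4, 60) for r in range(1, 3) for s in range(1, 5)}

assert spectrum == {F(0), F(1, 5), F(-1, 20)}
assert spectrum <= kac          # every h_{t,r} is an M(3,5) dimension
assert F(3, 4) in kac           # h_1 = 3/4, the parafermion psi_1 itself
```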
Further support for this comes from their proposed identification with the $W_k(k+1,k+3)$ models, which has been explicitly checked for $k=2,4,6,8$.[^8] With regard to the last point, let us indicate that the identification (\[zwiden\]) can be generalized in a very natural way, for all integer or half-integer values of $\beta\geq 1$, as follows: $$\label{gzwiden} Z_k^{(\beta)}\simeq W_k^{(k+1,k+2\beta)}\;.$$ For $\beta=3/2$, this reduces to (\[zwiden\]), while for $\beta=1$, this is the standard identification of the $Z_k^{(1)}$ models as the simplest minimal $W_k$ model [@Nar]. As evidence for (\[gzwiden\]) when $\beta> 3/2$, we notice that the dimension of the $W_k$ field labeled by the two $\suh(k)$ weights $\{{\widehat\omega}_0,\muh\}$ (at respective levels 1 and 3) whose corresponding non-zero finite weight is $\mu= 2\beta \om_1$ matches precisely that of $\psi_1$, namely $\beta (1-1/k)$. We also verified that under some restrictions, associativity fixes the central charge of the $Z_3^{(2)}$ model to that of the $W_3^{(4,7)}$ model. Let us stress that for $\beta=2$, the relation between the $Z_k^{(2)}$ models and the $W_k^{(k+1,k+4)}$ models is not recovered in [@Dot]. The reason for this might be that the authors have focussed on unitary solutions and/or on finding solutions for which the conformal dimension of the primary fields $\phi_t$ is symmetric under the transformation $t\rightarrow k-t$ (as for the usual parafermions). We note that the primary fields do not have this kind of symmetry for the $Z_k^{(3/2)}$ models (cf. (\[confDim\])). This relation, however, is not incompatible with the associativity analysis. For instance, the expression for the $Z_k^{(2)}$ central charge, parameterized in terms of a number $\la$, reads (cf. eq. 
(A.8) of [@ZFa]): $$\ c_{Z_k^{(2)}}= {4(k-1)(k+\la-1)\la\over (k+2\la)(k+2\la-2)}\;,$$ while that of the $W_k^{(k+1,k+4)}$ models is: $$c_{W_k^{(k+1,k+4)}}=(k-1)\left(1-{9k\over (k+4)}\right)= -{4(k-1)(2k-1)\over (k+4)}\;.$$ Enforcing the equality of these two expressions yields the two solutions[^9] $$\la= {(2-k)\over 3} \quad {\rm or} \quad {(1-2k)\over 3} \;.$$ The relationship with the work [@Dot] certainly requires further analysis, but (\[gzwiden\]) suggests that the sequence $W_k^{(k+1,k+4)}$ corresponds to the first of an infinite sequence of non-unitary solutions for an eventual complete realization of the $Z_k^{(2)}$ models. Note finally that for $k=2$, (\[gzwiden\]) reduces to $ Z_2^{(p/2-1)}\simeq \M(3,p).$ The dimension of $\psi_1$, namely $(p-2)/4$, is precisely that of $\phi_{2,1}$. The corresponding $Z_ 2$ parafermionic description of these Virasoro minimal models has been studied recently in [@JM3p]. The relation (\[gzwiden\]) hints at the existence of a similar parafermionic description of these non-unitary $W_k(k+1,k+2\beta)$ models. [**ACKNOWLEDGMENTS**]{} The work of PJ is supported by EPSRC and partially by the EC network EUCLID (contract number HPRN-CT-2002-00325), while that of PM is supported by NSERC. [99]{} A.B. Zamolodchikov and V.A. Fateev, Sov. Phys. JETP [**43**]{} (1985) 215. D. Gepner, Nucl. Phys. [**B290**]{} (1987) 10. X.-M. Ding, M. D. Gould and Y.-Z. Zhang, Phys. Lett. [**B530**]{} (2002) 197; Nucl. Phys. [**B636**]{} (2002) 549. J. M. Camino, A. V. Ramallo and J. M. Sanchez de Santos, Nucl. Phys. [**B530**]{} (1998) 715. A.B. Zamolodchikov and V.A. Fateev, Theor. Math. Phys. [**71**]{} (1987) 163. P. Furlan, R.R. Paunov and I.V. Todorov, Fortschr. Phys. [**40**]{} (1992) 211. VI.S. Dotsenko, J.L. Jacobsen and R. Santachiara, Nucl. Phys. [**B664**]{} (2003) 477; Phys. Lett. [**B 584**]{} (2004) 186; [**B679**]{} (2004) 464. VI.S. Dotsenko and R. 
Santachiara, [*The third parafermionic chiral algebra with the symmetry $Z_ 3$*]{}, hep-th/0501128. F. Ravanini, Int. J. Mod. Phys. [**A7**]{} (1992) 4949. P. Jacob and P. Mathieu, Nucl. Phys. [**B630**]{} (2002) 433. L. Bégin, J.-F. Fortin, P. Jacob and P. Mathieu, Nucl. Phys [**B659**]{} (2003) 365. P. Di Francesco, P. Mathieu and D. Sénéchal, [*Conformal Field theory*]{}, Springer Verlag, 1997. P. Jacob and P. Mathieu, [*A quasi-particle description of the $\M(3,p)$ models*]{}, hep-th/0506074. V.A. Fateev and S.L. Lukyanov, Int. J. Mod. Phys. [**A3**]{} (1988) 507 and Sov. Sci. Rev. A Phys. [**15**]{} (1990) 1. F.J. Narganes-Quijano, Int. J. Mod. Phys [**A6**]{} (1991) 2611. [^1]: patrick.jacob@durham.ac.uk, pmathieu@phy.ulaval.ca. [^2]: More precisely, the algebra $g$ is the finite form of the affine algebra involved in the coset $\gh/\uh(1)^r$ pertaining to the $\beta=1$ version, the algebra that governs the form of the parafermionic OPE within this class. [^3]: This turns out to be the simplest possibility when $\beta$ is positive since the associativity conditions are not satisfied for $\beta=1/2$. [^4]: Exotic supersymmetry has been explored in [@Rav] also from a parafermionic point of view. However, the models considered there are such that the basic parafermionic field has dimension $1+1/k$, which requires $\beta=-1$ and $a_1=2$. This choice of dimension is motivated by the aim of reproducing the relation $Q^k \sim P$ between the exotic supercharge $Q$ and the translation operator $P$. [^5]: Note also that $ {\cal M}(3,5)$ is the simplest graded parafermionic model, associated to the coset ${\osp}(1,2)_\kappa/\uh(1)$ for $\kappa=1$ [@JM]. [^6]: The relative sign in (\[comisaa\]) indicates that $\psi_1$ and $\psi_1^\dagger$ are mutually fermionic. The one in (\[comisaTwo\]) shows that $\psi_1$ is not fermionic with respect to itself. These signs have been obtained by associativity. 
[^7]: The complete singular-vector structure and the resulting character formula will be presented elsewhere. [^8]: Note that the central charge of the $W_k^{(p',p)}$ models is $$c= (k-1)\left(1-{k(k+1)(p-p')^2\over pp'}\right)$$ and with $(p',p)= (k+1,k+3)$, this reduces to (\[centre\]). [^9]: For $k=2$, for instance, $\la=0,-1$. With $\la=0$ say, we set $k=2+\e$ so that $\la=-\e/3$ and thus: $$c^{\rm ZF}= {4(1+\e)(1+2\e/3)(-\e/3)\over (2+\e/3)(\e/3)}\xrightarrow{(\e\rw0)}-2$$ which corresponds to that of $\M(3,6)=\M(1,2)$. On the other hand, without this connection between $\la$ and $k$, we get $c=1$ by directly setting $k=2$ in the expression for $c_{Z_k^{(2)}}$, thereby confirming that both solutions are possible in this case.
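The central-charge identities quoted above can be verified symbolically. The sympy sketch below (our check, not from the paper) confirms that the $W_k^{(p',p)}$ central charge of footnote 8 reduces to (\[centre\]) at $(p',p)=(k+1,k+3)$, and that both quoted roots for $\la$ equate $c_{Z_k^{(2)}}$ with $c_{W_k^{(k+1,k+4)}}$:

```python
import sympy as sp

k, lam, p, pprime = sp.symbols("k lam p pprime", positive=True)

# W_k^(p',p) central charge (footnote 8)
c_W = (k - 1) * (1 - k * (k + 1) * (p - pprime)**2 / (p * pprime))

# (p', p) = (k+1, k+3) reproduces c = -3(k-1)^2/(k+3), i.e. (centre)
c_centre = c_W.subs({pprime: k + 1, p: k + 3})
assert sp.simplify(c_centre + 3 * (k - 1)**2 / (k + 3)) == 0

# Both lambda roots match c_{Z_k^(2)} with c_{W_k^(k+1,k+4)}
c_Z2 = 4 * (k - 1) * (k + lam - 1) * lam / ((k + 2*lam) * (k + 2*lam - 2))
c_W2 = -4 * (k - 1) * (2*k - 1) / (k + 4)
for root in [(2 - k) / 3, (1 - 2*k) / 3]:
    assert sp.simplify(c_Z2.subs(lam, root) - c_W2) == 0
```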
--- abstract: 'Some models of asymmetric dark matter commonly employ a gauge group structure of the form $G_{V}\times{}G_{D}$ where $G_{V}$ is the visible gauge group containing the Standard Model and $G_{D}$ is the gauge group responsible for self-interactions amongst components of dark matter. In some models, there is also an additional spontaneously broken $U(1)$ gauge symmetry coupling the visible and dark sectors at high energies. One theoretical problem is how to unify the visible and dark sectors by inducing the spontaneous breaking $G\rightarrow{}G_{V}\times{}G_{D}$ for some large gauge group $G$. In this paper, we discuss how to generate such a structure at low energies, in the context of a 4+1-dimensional domain-wall brane model, by employing a generalization of the Dvali-Shifman mechanism, used to localize gauge bosons on domain walls, called the clash-of-symmetries mechanism. In one model, we describe a clash-of-symmetries domain wall solution in a theory with two scalar fields in the adjoint representation which breaks the group $SU(12)$ to two differently embedded copies of $SU(6)\times{}SU(6)\times{}U(1)$, leading to an effective $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$-invariant field theory on the wall. We find that fermions in the mixed representations of $SU(5)_{V}\times{}SU(5)_{D}$ do not couple to the domain wall and thus remain 5D vector-like Dirac fermions, attaining masses of order $M_{GUT}$ when we perform the breaking $SU(5)_{V}\rightarrow{}SU(3)_{c}\times{}SU(2)_{I}\times{}U(1)_{Y}$, thus being removed from the spectrum. We also outline how to build a few alternative models, one based on the group $SU(9)$, and a couple more based on non-clash-of-symmetries domain wall solutions in $SU(12)$ and $SU(10)$ models.' author: - 'Benjamin D. 
Callen' bibliography: - 'bibliography2.bib' title: 'Unified Dark Matter from a Simple Gauge Group on a Domain-Wall Brane' --- Introduction ============ Dark matter composes roughly 25 per cent of the energy content of the universe, with roughly 70 per cent of the total energy content being dark energy, which is responsible for the accelerated expansion of the universe. Only about 5 per cent of the energy content of the universe is visible matter that is described by the particles of the Standard Model. We know that dark matter interacts with visible matter primarily through gravity, and most theories describing it postulate that it is made of a stable Weakly-Interacting Massive Particle (WIMP). Some examples of theories yielding stable WIMPs are R-parity conserving supersymmetric theories, which predict that the lightest supersymmetric particle (LSP) is a stable, neutrally charged particle, as well as theories with sterile neutrinos. There is, however, no a priori reason why dark matter has to be composed entirely of a single, stable particle; it could be that there are multiple different species of dark particles, with their own dark gauge forces, completely hidden with respect to the Standard Model particles. It is also curious that, while there is more dark matter than visible matter in the universe, the disparity is not significant in terms of orders of magnitude. In fact, the dark matter mass density of the universe is only roughly five times that of visible matter, $$\label{eq:darkmatterdensityvvisible} \Omega_{DM} \simeq{} 5\Omega_{VM}.$$ This naturally raises the question of whether the dark matter density is somehow related to the visible matter density at high energy scales. On the other hand, there is the imbalance between matter and anti-matter in the visible sector, which is still unaccounted for. 
The dominance of visible matter over anti-matter, due to the number difference between baryons and anti-baryons, is characterized by the parameter $\eta{(B)}$ [@wmap; @planck2013], $$\label{eq:matterantimatterasymmetry} \eta{(B)} \equiv{} \frac{n_{B}-n_{\bar{B}}}{s} \simeq{} 10^{-10},$$ where $n_{B}$, $n_{\bar{B}}$ and $s$ are the baryon number, the anti-baryon number and the entropy densities of the universe respectively. In models of baryogenesis, this asymmetry arises from CP-violating processes as well as out-of-equilibrium dynamics. This raises the idea of a scenario where the observed ratio between visible and dark matter described in Eq. \[eq:darkmatterdensityvvisible\] arises fundamentally from a visible matter - dark matter asymmetry and, furthermore, that this asymmetry and the matter-antimatter asymmetry are related, typically via the relation $$\label{eq:matterantimattervmdm} n_{X}-n_{\bar{X}} \sim{} n_{B}-n_{\bar{B}}.$$ In other words, the matter-antimatter asymmetry in the visible sector leads to an asymmetry between the corresponding matter and antimatter in the dark sector. Given the above correspondence, the relative dark matter abundance is explained if dark matter particles have masses around five times that of the proton. This scenario is called the asymmetric dark matter scenario [@asymmetricdarkmatterkalliaray; @zurekasymmetricdm]. One possibility of realizing asymmetric dark matter is through grand unification, where the dark matter components are the additional components of a simple group such as the $SU(6)$ model proposed by Barr [@sbarrsu6dm], or the colors of a dark GUT $G_{D}$ in a theory based on a $G_{V}\times{}G_{D}$ gauge structure, such as the models based on $SU(5)\times{}SU(5)$ and $SO(10)\times{}SO(10)$ recently proposed by Refs. [@lonsdalegrandunifieddm; @lonsdaleso10xso10adm]. 
In the latter class of models, a particularly compelling realization of asymmetric dark matter was constructed in which dark quarks form dark protons, and for a certain number of dark quarks, the running of the dark gauge coupling constant induces a dark QCD scale $\Lambda_{D}$ which is of roughly the same order as $\Lambda_{QCD}$. In the model proposed by Barr, and in the $SU(6)$ and $SU(7)$ models proposed by Ma [@ernestmadarkgutunification], dark matter particles are unified with visible matter inside the required multiplets of these groups; in the case of the $SU(7)$ model of Ma, it is possible to produce an unbroken $U(1)_{D}$ group which acts solely on dark matter. Given that it is possible to generate a dark Abelian group, this raises the question of whether we can break a simple group to produce dark non-Abelian groups, leading to the $G_{V}\times{}G_{D}$ scenarios considered in Refs. [@lonsdalegrandunifieddm; @lonsdaleso10xso10adm]. One might consider, for example, starting with an $SU(N)$ gauge group, and then breaking it to $SU(5)_{V}\times{}SU(N-5)_{D}\times{}U(1)_{X}$, where $SU(5)_{V}$ contains the visible SM gauge groups and $SU(N-5)_{D}$ contains the dark gauge groups. If one tries to construct such a grand unified theory in ordinary 3+1D, one quickly runs into significant obstacles. As is well known, in ordinary $SU(5)$ theories, the right-chiral up quark, the right-chiral electron and the quark doublet are embedded into the antisymmetric rank-2 tensor $10$ representation. This means that the most natural candidate for embedding the very same fermions in these extended GUTs is the corresponding antisymmetric rank-2 tensor $N(N-1)/2$ representation of $SU(N)$. Unfortunately, the same representations will contain chiral bi-fundamental fermions charged under representations of the form $(5, N-5)$. These mediating fermions, which are charged under both the visible and dark groups, must be made massive in some way.
On top of this are the constraints that come from the requirement of anomaly cancellation in 3+1D theories with chiral fermions. Satisfying this set of constraints while not running into other problems, such as undesirable exotics, non-perturbative Yukawa interactions and a four-generation Standard Model in the visible sector, is an extremely difficult task. This suggests that the generation of a $G_{V}\times{}G_{D}$ gauge theory from the spontaneous breaking of a higher simple group $G$ requires additional physics. One possibility is to construct a grand unified theory in higher dimensions, with dimensional reduction achieved through, for instance, a brane. In particular, given that in odd-dimensional spacetimes chiral anomalies are absent from gauge theories, and that in many braneworld models there is a bulk-brane mechanism called anomaly inflow [@callananomalyinflow; @daemishaposhnikov] that cancels anomalies associated with an anomalous effective field theory, one can see that going to 4+1D spacetime with branes can resolve the problems arising from anomalies in 3+1D approaches. It then remains to find a mechanism within braneworld models which eliminates, in particular, the unwanted bi-fundamental fermion states. One way of realizing the braneworld scenario is through the dynamical localization of fields and gravity to a domain wall [@rubshapdwbranes; @firstpaper]. A domain wall is typically formed via a scalar field which interpolates between two discrete, disconnected vacua from negative infinity to positive infinity along some dimension. Fermions are localized by Yukawa coupling the 4+1D fermionic fields to the scalar field which generates the domain wall, yielding localized 3+1D chiral zero modes [@jackiwrebbi]. Scalars can be localized through quartic interactions [@modetower].
Gravity can also be localized [@gremmdwgravity; @adavidsondwgravity; @clashofsymgravity; @kehagiastamvakisgravity; @slatyervolkasrsgrav; @sodarsgravity; @rsgravitydaviesgeorge2007]. The localization of gauge bosons is the most difficult aspect of domain-wall brane model building, yet its conjectured solution, the Dvali-Shifman mechanism [@dsmech], offers some of the most interesting features of these types of models. To implement the Dvali-Shifman mechanism, a non-Abelian gauge group $G$ is respected and confining in the bulk, but is spontaneously broken to some subgroup $H$ in the interior of the domain-wall brane. The confining bulk then acts as a dual superconductor, repelling the ‘electric’ field lines of $H$ (or $H$-field lines) from the bulk. If a test charge is placed on the wall, the $H$-field lines simply diverge out through the world volume of the domain wall. If a test charge is placed in the bulk, the $H$-field lines form a flux string running onto the domain wall and then diverge, so the charge behaves as if it were actually placed on the wall. In this way, gauge bosons are localized without violating gauge charge universality [@dubrub]. Given the requirement of a large gauge group $G$ being spontaneously broken to a subgroup $H$ in order to implement the Dvali-Shifman mechanism, as well as the need for $H$ to contain the Standard Model gauge fields, this naturally motivates models based on grand unification in 4+1D, such as the minimal choice $G=SU(5)$ and $H=SU(3)\times{}SU(2)\times{}U(1)$ [@firstpaper] or, alternatively, a non-minimal choice such as $G=SO(10)$ [@jayneso10paper]. Also, in the minimal choice, one finds that the profiles for the various fermions and scalars transforming under the SM gauge group are split, leading to a resolution of the fermion mass hierarchy problem and a suppression of colored-Higgs-induced proton decay [@su5branemassfittingpaper; @su5a4braneworldcallen].
Furthermore, the Dvali-Shifman mechanism has an interesting extension called the clash-of-symmetries (CoS) mechanism [@o10kinks; @abeliankinkscos; @clashofsymmetries; @e6domainwallpaper; @pogosianvachaspaticos; @vachaspaticos2; @pogosianvachaspaticos3]. Here, rather than leaving $G$ unbroken in the bulk, we give the scalar field (or fields) which generates the domain wall a gauge charge, and give it a potential such that the scalar field has a disconnected vacuum manifold, whose path-connected components are homeomorphic to the coset $G/H$. This means that on one side of the domain wall, $G$ is broken to $H$, while on the other side of the wall, $G$ is broken generally to an isomorphic but differently embedded subgroup $H'$. Because $H'$ is not exactly the same embedded copy of $H$, this leads to further symmetry breaking to $H\cap{}H'$ in the interior of the wall. Gauge bosons of $H\cap{}H'$, whether Abelian or non-Abelian, are then localized if the corresponding generators originate entirely from the non-Abelian subgroups of $H$ and $H'$. An interesting model based on $E_{6}$ was constructed in Ref. [@e6domainwallpaper] using the Clash-of-Symmetries mechanism, in which $H$ and $H'$ are isomorphic to $SO(10)\times{}U(1)$, leading to $H\cap{}H' = SU(5)\times{}U(1)\times{}U(1)$, with the $SU(5)$ subgroup being localized. The same reference gave a treatment of the dynamical localization of fermions in the same model; given that the scalar field generating the domain wall is now charged under the gauge group, the localization of the various $H\cap{}H'$-covariant components of the fermions depends non-trivially on how they couple to the kink. This leads to the interesting property that, for a given sign of the Yukawa coupling to the kink, some of the $H\cap{}H'$-covariant fermionic components attain localized left-chiral zero modes, some attain right-chiral zero modes, and some components can be *completely decoupled* from the domain wall.
In this paper, we will show that this last interesting property of fermion localization in the context of a Clash-of-Symmetries domain wall can be exploited to eliminate the troublesome fermionic mediators which arise in attempting to generate a GUT which leads to both visible and dark gauge sectors after symmetry breaking. In particular, we choose our gauge group to be $SU(12)$, and we generate a series of Clash-of-Symmetries domain-wall solutions in 4+1D from a scalar field theory with two scalars transforming under the adjoint $143$ representation, with a potential which is invariant under a $\mathbb{Z}_{2}$ symmetry interchanging the two scalar fields. We will show that in a special parameter regime, a Clash-of-Symmetries domain wall which has an $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$ gauge group localized to its world volume can be made the most energetically stable of the solutions. Upon coupling fermions in the fundamental $12$ and rank-2 antisymmetric $66$ representations, we find in particular that the potentially troublesome $(5, 5)$ bi-fundamental fermion in the $66$ is completely decoupled from the domain-wall brane. This means that this bi-fundamental fermion remains a 4+1D *Dirac* fermion and is thus vector-like, and when we include an additional adjoint scalar field which induces the usual breaking $SU(5)_{V}\rightarrow{}SU(3)\times{}SU(2)\times{}U(1)$, the SM-covariant components of this bi-fundamental fermion attain masses of order $M_{GUT}$ in the interior of the domain wall, removing them entirely from the low-energy 3+1D spectrum on the wall. It turns out that other troublesome components, such as the additional $SU(5)_{V}$ and $SU(5)_{D}$ quintets in the $66$, the singlet components in the $12$, and the lone singlet in the $66$, are either semi-delocalized or fully delocalized, and/or attain only massive modes (either through a 4+1D mass arising from the effective coupling to the kink, or after breaking $U(1)_{X}$).
The only components which attain localized chiral zero modes are the $(5, 1)$ and $(1, 5)$ components in the $12$ as well as the $(10, 1)$ and $(1, 10)$ components of the $66$. Interestingly, for given signs of the Yukawa couplings, if the $(5, 1)$ component in the $12$ or the $(10, 1)$ component in the $66$, which transform solely under $SU(5)_{V}\times{}U(1)_{X}$, attains a chiral zero mode of a given chirality (either left or right), the corresponding dark multiplets for $SU(5)_{D}$, the $(1, 5)$ component in the $12$ and the $(1, 10)$ component in the $66$, attain chiral zero modes of the *opposite* chirality. This is very interesting as it means that if we break $SU(5)_{V}\times{}SU(5)_{D}$ symmetrically, this leads directly to the mirror matter scenario [@footvolkasorigmirrormatter; @footlewvolkas2; @leeyangmm; @kobzarevmm; @pavsicmm], which can be thought of as a realization of asymmetric dark matter [@footvolkasmirroradm1; @footvolkasmirroradm2; @asymmetricdarkmatterkalliaray]. We also have the option to break $SU(5)_{V}\times{}SU(5)_{D}$ asymmetrically, leading to the kind of scenarios described in Refs. [@lonsdalegrandunifieddm; @lonsdaleso10xso10adm]. At the very least, we have a 3+1D effective field theory in which the particle content contains a left-chiral $\overline{5}$ and a left-chiral $10$ under $SU(5)_{V}$ in the visible sector, and a right-chiral $\overline{5}$ and a right-chiral $10$ under $SU(5)_{D}$ in the dark sector. At low energies, after appropriate breaking of $SU(5)_{V}$ to the Standard Model as well as the breaking of $U(1)_{X}$, these sectors have no mediators and are completely sequestered. Scalars can also be localized, yielding Higgs potentials for both the visible and dark sectors. We also present some alternative models which generate hidden sectors, including an $SU(9)$ model in which the localized gauge group is $SU(5)_{V}\times{}SU(2)\times{}U(1)$, and a model based on the non-CoS domain wall in the $SU(12)$ model.
In the $SU(9)$ model, we again have two adjoint scalar fields which generate the domain wall, and these scalars break $SU(9)$ to differently embedded copies of $SU(6)\times{}SU(3)\times{}U(1)$, which overlap to yield a localized $SU(5)_{V}\times{}SU(2)_{D}\times{}U(1)_{X'}$ on the wall. It turns out that if we choose two copies of the fundamental $9$ representation and one copy of the totally antisymmetric rank-3 $84$ representation, we attain the desired particle content without mediators at low energies. With the model based on the non-CoS kink in $SU(12)$, the gauge group respected on the wall is $H\cap{}H' = SU(6)_{V}\times{}SU(6)_{D}\times{}U(1)$. The $SU(6)$ subgroups are then broken with additional scalar fields to $SU(5)$ (or $SU(5)\times{}U(1)$) subgroups, leading to the localization of an $SU(5)_{V}\times{}SU(5)_{D}$-invariant theory by the original Dvali-Shifman mechanism. Just as before, the undesired mediators are eliminated from the spectrum. The cost of using the non-CoS domain wall is additional scalar fields as well as some additional fermionic particle content, since we have more localized $SU(5)_{V}$ and $SU(5)_{D}$ quintets than we need. We show in the same subsection that this non-CoS domain-wall scenario can be refined and simplified by using an $SU(10)$ model, in which the gauge group is broken to the same $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)$ subgroup and, subsequently, the visible $SU(5)_{V}$ group is broken directly to the Standard Model gauge group. In the next section, we go into further detail as to why 3+1D unification of visible and dark gauge sectors is difficult. We give the best examples the author could devise of 3+1D GUTs leading to a $G_{V}\times{}G_{D}$ structure, namely a model based on $SU(7)$ which is broken to $SU(5)_{V}\times{}SU(2)_{D}\times{}U(1)$, and another based on $SU(9)$ broken to $SU(5)_{V}\times{}SU(4)_{D}\times{}U(1)$.
These models turn out to have highly undesirable features, including four-generation Standard Models in the visible sector as well as fermionic mediators attaining their masses from electroweak symmetry breaking, both of which lead to non-perturbative Yukawa interactions. In Sec. \[sec:cosmechanism\], we give a short treatment of domain walls, the Dvali-Shifman mechanism and the Clash-of-Symmetries mechanism. In Sec. \[sec:solution\], we describe the scalar potential with two adjoint scalar fields and find the CoS solutions for several parameter choices. In Sec. \[sec:fermionlocalization\], we deal with fermion localization and describe how the fermionic mediators are eliminated. Section \[sec:scalarlocalization\] describes scalar localization. In Sec. \[sec:alternativemodels\] we present some alternative models: a sketch of a Clash-of-Symmetries model based on $SU(9)$ is given in Sec. \[subsec:su9model\], and two more models based on non-Clash-of-Symmetries domain walls in $SU(12)$ and $SU(10)$ gauge theories are given in Sec. \[subsec:noncossolution\]. Section \[sec:conclusion\] is our conclusion.

The Difficulty of attaining $G_{V}\times{}G_{D}$ from Grand Unification in 3+1D {#sec:whynot3+1DGUT}
===============================================================================

In this section, we discuss in detail why ordinary $3+1D$ GUTs are unpromising candidates for the unification of the visible Standard Model gauge forces with a hidden gauge sector which includes non-Abelian interactions. We mainly do this in the context of unification based on $SU(N)$, but at the end of this section we also give some reasons why $SO(N)$ unification is not promising either.
Given the $SU(6)$ model proposed by Barr [@sbarrsu6dm], in which dark matter arises as a sixth color, and the $SU(7)$ model proposed by Ma [@ernestmadarkgutunification], in which a dark $U(1)$ interaction is generated, it is natural to ask whether a similar unification theory can generate gauge interactions in the dark matter sector which are non-Abelian. Consider breaking an $SU(N)$ gauge theory to $SU(5)_{V}\times{}SU(N-5)_{D}\times{}U(1)$ with an adjoint scalar field. The decomposition of the fundamental representation in terms of representations of $SU(5)_{V}\times{}SU(N-5)_{D}\times{}U(1)$ is $$\label{eq:sunfundamental} N = (5, 1, N-5)\oplus{}(1, N-5, -5).$$ Naturally, this is the representation in which to embed the Standard Model fermions that fill a quintet in ordinary $SU(5)$ grand unification, while the $(1, N-5, -5)$ component is identified as a dark quark. However, we also need to include the fermions which are embedded in the decuplet representation of $SU(5)$. The decuplet is a rank-2 antisymmetric representation, and thus the natural and minimal candidate representation in which to embed these fermions in an $SU(N)$ theory is the corresponding rank-2 antisymmetric $N(N-1)/2$ representation. The decomposition of the $N(N-1)/2$ representation can be deduced by taking the antisymmetric part of the product of the $N$ representation with itself. The result is $$\label{eq:sunantisymmetric} \frac{N(N-1)}{2} = \big(10, 1, 2(N-5)\big)\oplus{}\big(5, N-5, N-10\big)\oplus{}\big(1, \frac{(N-5)(N-6)}{2}, -10\big).$$ From this we see that we get not only components which transform under the rank-2 antisymmetric representations of the visible and dark gauge groups, but also an undesirable bi-fundamental state, the $(5, N-5, N-10)$ component. This bi-fundamental fermion is chiral, just like the rest of the fermions in these representations, and also needs to be made massive.
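The decompositions in Eqs. \[eq:sunfundamental\] and \[eq:sunantisymmetric\] can be verified mechanically, checking both that the dimensions add up and that the $U(1)$ generator is traceless on each multiplet; a short Python sketch (the function name is ours):

```python
from math import comb

def check_su_n_decompositions(N):
    """Check dimensions and U(1) tracelessness of the SU(N) ->
    SU(5) x SU(N-5) x U(1) decompositions quoted in the text."""
    d = N - 5
    # fundamental: (5, 1, N-5) + (1, N-5, -5)
    fund = [(5, 1, N - 5), (1, d, -5)]
    # rank-2 antisymmetric:
    # (10, 1, 2(N-5)) + (5, N-5, N-10) + (1, (N-5)(N-6)/2, -10)
    asym2 = [(10, 1, 2 * d), (5, d, N - 10), (1, d * (d - 1) // 2, -10)]
    for rep, dim in [(fund, N), (asym2, comb(N, 2))]:
        assert sum(a * b for a, b, _ in rep) == dim        # dimensions match
        assert sum(a * b * q for a, b, q in rep) == 0      # U(1) traceless
    return True

for N in (7, 9, 12):
    print(N, check_su_n_decompositions(N))
```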
This introduces the problem of choosing a number of representations such that the chiral fermions which are charged under both $SU(5)_{V}$ and $SU(N-5)_{D}$, the fermionic mediators, will all attain masses after electroweak symmetry breaking or, preferably, the breaking of a subgroup of $SU(N-5)_{D}$. This is on top of the usual chiral anomaly cancellation constraint for 3+1D GUTs. We have found a couple of models in which the fermionic mediators all attain masses after electroweak symmetry breaking. The first model is based on $SU(7)$, which is broken to $SU(5)_{V}\times{}SU(2)_{D}\times{}U(1)$. In our construction of these models, we restricted ourselves to totally antisymmetric representations, as the anomalies coming from symmetric representations are larger and grow faster with rank, as well as leading to more potentially undesirable components. The combination of left-chiral fermionic representations that we choose for the $SU(7)$ model is the anomaly-free combination $\overline{7}\oplus{}21\oplus{}\overline{35}$. Under $SU(5)_{V}\times{}SU(2)_{D}\times{}U(1)$, these three representations decompose as $$\begin{aligned} \label{eq:su7reps} \overline{7} &= (\overline{5}, 1, -2)\oplus{}(1, 2, +5), \\ 21 &= (10, 1, +4)\oplus{}(5, 2, -3)\oplus{}(1, 1, -10), \\ \overline{35} &= (10, 1, -6)\oplus{}(\overline{10}, 2, +1)\oplus{}(\overline{5}, 1, +8). \end{aligned}$$ In this particular scenario, given that we want to preserve a non-Abelian group in the dark sector, we do not have the option of breaking $SU(2)_{D}$, so we must make all the fermionic mediators massive through electroweak symmetry breaking. In ordinary $SU(5)$ unification, the electroweak Higgs doublet is usually embedded into either the $5$ or the $\overline{5}$ representation.
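The anomaly freedom of the combination $\overline{7}\oplus{}21\oplus{}\overline{35}$ can be checked with the standard cubic-anomaly coefficients of rank-$k$ antisymmetric tensor representations of $SU(N)$, $A_{k} = \frac{N-2k}{N-2}\binom{N-2}{k-1}$, normalized so that the fundamental has $A_{1}=1$, with conjugate representations contributing the opposite sign. A minimal Python sketch (function name is ours):

```python
from math import comb

def anomaly_antisym(N, k):
    """Cubic anomaly coefficient of the rank-k antisymmetric tensor
    representation of SU(N), normalized so the fundamental has A = 1;
    a conjugate representation contributes with the opposite sign."""
    return (N - 2 * k) * comb(N - 2, k - 1) // (N - 2)

# Familiar special cases: A(fund) = 1, A(rank-2) = N - 4,
# A(rank-3) = (N - 3)(N - 6)/2.
assert anomaly_antisym(7, 1) == 1
assert anomaly_antisym(7, 2) == 3
assert anomaly_antisym(7, 3) == 2

# Left-chiral combination 7bar + 21 + 35bar of SU(7):
su7_total = (-anomaly_antisym(7, 1)      # 7bar
             + anomaly_antisym(7, 2)     # 21
             - anomaly_antisym(7, 3))    # 35bar
print("SU(7) anomaly sum:", su7_total)   # 0: anomaly free
```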
If we embed this anti-quintet into the $\overline{7}$ representation, we see that we have the following possible invariant Yukawa interactions (which are assumed to be either of the form $\overline{(\psi^{R_{1}}_{L})^{c}}\psi^{R_{2}}_{L}\phi^{R_{3}}$ or $\overline{(\psi^{R_{1}}_{L})^{c}}\psi^{R_{2}}_{L}(\phi^{R_{3}})^{*}$) in the theory: $$\label{eq:su7bifundamentalmass1} \overline{35}_{F}\times{}21_{F}\times{}7_{S}\supset{}(\overline{10}, 2, +1)_{F}\times{}(5, 2, -3)_{F}\times{}(5, 1, +2)_{S}\supset{}(1, 1, 0),$$ $$\label{eq:su7bifundamentalmass2} \overline{35}_{F}\times{}\overline{35}_{F}\times{}\overline{7}_{S}\supset{}(\overline{10}, 2, +1)_{F}\times{}(\overline{10}, 2, +1)_{F}\times{}(\overline{5}, 1, -2)_{S}\supset{}(1, 1, 0),$$ and $$\label{eq:su7bifundamentalmass3} \overline{7}_{F}\times{}21_{F}\times{}\overline{7}_{S}\supset{}(1, 2, +5)_{F}\times{}(5, 2, -3)_{F}\times{}(\overline{5}, 1, -2)_{S}\supset{}(1, 1, 0),$$ where $F$ denotes a fermionic component and $S$ denotes a scalar component. Given that these all contain singlets, they generate mass terms. The interaction in Eq. \[eq:su7bifundamentalmass1\] generates masses for the electron-like and down quark-like components of the $(5, 2, -3)$ and $(\overline{10}, 2, +1)$ fermionic mediators after electroweak symmetry breaking. The interaction in Eq. \[eq:su7bifundamentalmass2\] generates masses for the up quark-like components of the $(\overline{10}, 2, +1)$ fermionic mediator. Finally, the interaction in Eq. \[eq:su7bifundamentalmass3\] generates a mass term between the left-chiral neutrino-like component of the $(5, 2, -3)$ mediator and the dark quark $(1, 2, +5)$ doublet. Thus, all the fermionic mediators attain masses. However, the interaction in Eq.
\[eq:su7bifundamentalmass3\] contains $$\label{eq:su7firstgendownelectronmass} \overline{7}_{F}\times{}21_{F}\times{}\overline{7}_{S}\supset{}(\overline{5}, 1, -2)_{F}\times{}(10, 1, +4)_{F}\times{}(\overline{5}, 1, -2)_{S}\supset{}(1, 1, 0),$$ which generates down-quark and electron masses for the generation of visible SM fermions coming from the $(\overline{5}, 1, -2)$ and $(10, 1, +4)$ components. If we choose these components to generate the first generation of visible fermions, the Dirac mass formed between the $\nu_L$-like component of the $(5, 2, -3)$ state and the dark $(1, 2, +5)$ quark is then of order the MeV scale. This is clearly in conflict with experiment, since the state formed from these components would couple to the $W$ and $Z$ bosons. With only a Higgs doublet coming from the $\overline{7}$, the only term that generates masses for the up quark components of the visible $SU(5)_{V}$ decuplets is $$\label{eq:tenpletmass1} (10, 1, -6)_{F}\times{}(10, 1, +4)_{F}\times{}(5, 1, +2)_{S}\subset{}\overline{35}_{F}\times{}21_{F}\times{}7_{S}.$$ Unless we introduce additional Higgs multiplets which can yield a Yukawa interaction producing a second independent mass term amongst the $(10, 1, -6)$ and $(10, 1, +4)$ multiplets, we will have massless up quarks. Fortunately, the $35$ representation contains a $(5, 1, -8)$ component, so if we introduce a $35$ scalar, we attain the Yukawa interaction $$\label{eq:secondsu7higgsdoubletyukawa} 21_{F}\times{}21_{F}\times{}35_{S}\supset{}(10, 1, +4)_{F}\times{}(10, 1, +4)_{F}\times{}(5, 1, -8)_{S}\supset{}(1, 1, 0),$$ which yields a mass term for the up quark component in the $(10, 1, +4)$ fermion. We could also choose an appropriate representation coming from the tensor product $\overline{35}\times{}\overline{35}$ which contains a $(5, 1, +12)$ component, which can generate a mass term for the up quark in the $(10, 1, -6)$ fermion.
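Each of the Yukawa invariants listed above can be checked for $U(1)$ neutrality by summing the three charges as they appear in the singlet contraction; a small Python check (the dictionary labels are ours):

```python
# Each SU(7) Yukawa term quoted above must be neutral under the U(1):
# the three U(1) charges appearing in the singlet contraction must sum
# to zero.  Charges as read off from the decompositions in the text:
yukawa_u1_charges = {
    "35bar x 21 x 7":          (+1, -3, +2),
    "35bar x 35bar x 7bar":    (+1, +1, -2),
    "7bar x 21 x 7bar":        (+5, -3, -2),
    "down/electron mass term": (-2, +4, -2),
    "tenplet mass term":       (-6, +4, +2),
    "21 x 21 x 35":            (+4, +4, -8),
}
for name, charges in yukawa_u1_charges.items():
    assert sum(charges) == 0, name
print("all listed Yukawa terms are U(1) neutral")
```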
Having produced mass terms for all the fermions, we need to break the $U(1)$ subgroup. We can do this simply with a scalar in the $21$ representation, because this representation contains a $(1, 1, -10)$ component, which can break the $U(1)$ group when it attains a VEV. When the $(1, 1, -10)$ condenses, it also yields a Majorana mass term for the dark $(1, 2, +5)$ quark doublet from the interaction $$\label{eq:darkquarkmajmass} \overline{7}_{F}\times{}\overline{7}_{F}\times{}21_{S}\supset{}(1, 2, +5)_{F}\times{}(1, 2, +5)_{F}\times{}(1, 1, -10)_{S}\supset{}(1, 1, 0).$$ We have thus constructed a model in which all the fermionic mediators, as well as the visible and dark fermions in the combination $\overline{7}\oplus{}21\oplus{}\overline{35}$, attain masses after electroweak symmetry breaking. This means that when the relevant Yukawa coupling constants are natural, the fermionic mediators attain masses of order the electroweak scale. Given that the LHC has so far failed to detect such exotics at the TeV scale, the mediator masses must in fact lie well above the electroweak scale, implying that the associated Yukawa coupling constants must enter the non-perturbative regime. Furthermore, we can see that one generation of the $\overline{7}\oplus{}21\oplus{}\overline{35}$ combination yields *two* visible Standard Model generations, implying that the minimal model based on this group-theoretic structure will contain a four-generation SM. The recent results from the LHC also put strong constraints on a fourth generation [@eberhardt4gensmlhc; @pdg2014]. In light of the numerous undesirable properties of the $SU(7)$ model, one may think of extending to a higher gauge group so that the mediators could possibly be made massive through symmetry breaking in the dark sector rather than the visible sector. The simplest model that the author found which could possibly lead to this outcome is based on $SU(9)$. Unfortunately, this also does not work.
The $SU(9)$ model that the author formulated is based on the initial symmetry breaking pattern $SU(9)\rightarrow{}SU(5)_{V}\times{}SU(4)_{D}\times{}U(1)$ with an adjoint scalar. We choose each generation of left-chiral fermions to consist of the combination $\overline{9}\oplus{}36\oplus{}\overline{84}\oplus{}126$ of representations of $SU(9)$. These $SU(9)$ representations decompose under $SU(5)_{V}\times{}SU(4)_{D}\times{}U(1)$ as $$\begin{aligned} \label{eq:su9admcombo} \overline{9} &= (\overline{5}, 1, -4)\oplus{}(1, \overline{4}, +5), \\ 36 &= (10, 1, +8)\oplus{}(5, 4, -1)\oplus{}(1, 6, -10), \\ \overline{84} &= (10, 1, -12)\oplus{}(\overline{10}, \overline{4}, -3)\oplus{}(\overline{5}, 6, +6)\oplus{}(1, 4, +15), \\ 126 &= (\overline{5}, 1, +16)\oplus{}(\overline{10}, 4, +7)\oplus{}(10, 6, -2)\oplus{}(5, \overline{4}, -11)\oplus{}(1, 1, -20). \end{aligned}$$ Now, one may think of making the fermionic mediators massive through symmetry breaking in the dark sector. The most obvious symmetry breaking pattern to consider is the breaking $SU(4)_{D}\rightarrow{}SU(3)_{D}$ with one of the various dark quartets embedded in the representations given in Eq. \[eq:su9admcombo\].
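The decompositions in Eq. \[eq:su9admcombo\] can again be checked mechanically against the total dimensions and the tracelessness of the $U(1)$ generator; a short Python sketch (dimensions are insensitive to conjugation, and the labels are ours):

```python
from math import comb

# (dim_SU5, dim_SU4, U(1) charge) for each component of the SU(9)
# representations quoted above.
decomps = {
    ("9bar",  comb(9, 1)): [(5, 1, -4), (1, 4, +5)],
    ("36",    comb(9, 2)): [(10, 1, +8), (5, 4, -1), (1, 6, -10)],
    ("84bar", comb(9, 3)): [(10, 1, -12), (10, 4, -3), (5, 6, +6),
                            (1, 4, +15)],
    ("126",   comb(9, 4)): [(5, 1, +16), (10, 4, +7), (10, 6, -2),
                            (5, 4, -11), (1, 1, -20)],
}
for (name, dim), comps in decomps.items():
    assert sum(a * b for a, b, _ in comps) == dim, name      # dimensions add up
    assert sum(a * b * q for a, b, q in comps) == 0, name    # U(1) traceless
print("all SU(9) decompositions consistent")
```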
If we introduce a scalar in the $9$ representation, and use the $(1, 4, -5)$ component to break $SU(4)_{D}\rightarrow{}SU(3)_{D}$, we see that we get the following mass-generating terms: $$\label{eq:su93684bar9} 36_{F}\times{}\overline{84}_{F}\times{}9_{S}\supset{}(5, 4, -1)_{F}\times{}(\overline{5}, 6, +6)_{F}\times{}(1, 4, -5)_{S}\supset{}(5, 3)_{F}\times{}(\overline{5}, \overline{3})_{F}\times{}(1, 1)_{S}\supset{}(1, 1),$$ $$\label{eq:su984bar1269bar1} \overline{84}_{F}\times{}126_{F}\times{}\overline{9}_{S}\supset{}(\overline{5}, 6, +6)_{F}\times{}(5, \overline{4}, -11)_{F}\times{}(1, \overline{4}, +5)_{S}\supset{}(\overline{5}, 3)_{F}\times{}(5, \overline{3})_{F}\times{}(1, 1)_{S}\supset{}(1, 1),$$ $$\label{eq:su984bar1269bar2} \overline{84}_{F}\times{}126_{F}\times{}\overline{9}_{S}\supset{}(\overline{10}, \overline{4}, -3)_{F}\times{}(10, 6, -2)_{F}\times{}(1, \overline{4}, +5)_{S}\supset{}(\overline{10}, \overline{3})_{F}\times{}(10, 3)_{F}\times{}(1, 1)_{S}\supset{}(1, 1),$$ and $$\label{eq:su91261269} 126_{F}\times{}126_{F}\times{}9_{S}\supset{}(\overline{10}, 4, +7)_{F}\times{}(10, 6, -2)_{F}\times{}(1, 4, -5)_{S}\supset{}(\overline{10}, 3)_{F}\times{}(10, \overline{3})_{F}\times{}(1, 1)_{S}\supset{}(1, 1).$$ Here, we have suppressed the $U(1)$ charge in the representations under $SU(5)_{V}\times{}SU(3)_{D}$. All the above interactions imply that the mediators with non-trivial charges under both $SU(5)_{V}$ and $SU(3)_{D}$ attain masses of order the breaking scale of $SU(4)_{D}$.
Unfortunately, the leftover $SU(3)_{D}$-singlet components which are charged under $SU(5)_{V}$ also go on to attain masses from the breaking of $SU(4)_{D}$, through the following interactions: $$\label{eq:su99bar369bar} \overline{9}_{F}\times{}36_{F}\times{}\overline{9}_{S}\supset{}(\overline{5}, 1, -4)_{F}\times{}(5, 4, -1)_{F}\times{}(1, \overline{4}, +5)_{S}\supset{}(\overline{5}, 1)_{F}\times{}(5, 1)_{F}\times{}(1, 1)_{S}\supset{}(1, 1),$$ $$\label{eq:su93684bar92} 36_{F}\times{}\overline{84}_{F}\times{}9_{S}\supset{}(10, 1, +8)_{F}\times{}(\overline{10}, \overline{4}, -3)_{F}\times{}(1, 4, -5)_{S}\supset{}(10, 1)_{F}\times{}(\overline{10}, 1)_{F}\times{}(1, 1)_{S}\supset{}(1, 1),$$ $$\label{eq:su984bar1269bar3} \overline{84}_{F}\times{}126_{F}\times{}\overline{9}_{S}\supset{}(10, 1, -12)_{F}\times{}(\overline{10}, 4, +7)_{F}\times{}(1, \overline{4}, +5)_{S}\supset{}(10, 1)_{F}\times{}(\overline{10}, 1)_{F}\times{}(1, 1)_{S}\supset{}(1, 1),$$ and $$\label{eq:su912612692} 126_{F}\times{}126_{F}\times{}9_{S}\supset{}(\overline{5}, 1, +16)_{F}\times{}(5, \overline{4}, -11)_{F}\times{}(1, 4, -5)_{S}\supset{}(\overline{5}, 1)_{F}\times{}(5, 1)_{F}\times{}(1, 1)_{S}\supset{}(1, 1).$$ Hence, all the visible fermions attain masses at the $SU(4)_{D}$ breaking scale, which we wish to lie well above the electroweak scale; the visible fermions would thus be far too heavy. As in the $SU(7)$ model, we could instead make the mediators massive through electroweak symmetry breaking via a combination of the various Higgs quintets embedded in the representations in Eq. \[eq:su9admcombo\]. Again, it turns out that we have many of the same problems: non-perturbative coupling constants, a four-generation Standard Model at low energies, and a complicated Higgs sector. Given the troubles that we have encountered with $SU(N)$ groups, one might consider $SO(N)$ gauge theories instead. Given that we need complex representations to embed the SM fermions, we would need to choose $N = 4n+2$.
Naturally, one would try a breaking pattern of the form $SO(4n+2)\rightarrow{}SO(10)_{V}\times{}SO(4n-8)_{D}$. Given that all the SM fermions naturally fit into the $16$ spinor representation of $SO(10)_{V}$, the natural representation in which to embed the SM fermions along with dark quarks is the spinor representation of $SO(4n+2)$, which has dimension $2^{2n}$. The problem is that the $2^{2n}$ spinor representation typically decomposes completely into components which are charged under both $SO(10)_{V}$ and $SO(4n-8)_{D}$. For example, for $SO(18)$, the spinor $256$ representation decomposes under $SO(10)_{V}\times{}SO(8)_{D}$ as $$\label{eq:so18spinor} 256 = (16, 8)\oplus{}(\overline{16}, 8'),$$ where the $8$ and $8'$ denote the two inequivalent 8-dimensional spinor representations of $SO(8)_{D}$. Hence, both components in the $256$ are fermionic mediators, charged under both the visible and dark groups. Having experimented with many special orthogonal groups and combinations of their representations, we find this to be a generic trait that is difficult to overcome. Hence, in the context of $3+1D$ GUTs, $SO(N)$ theories do not show much promise either. We have not proven that a satisfactory $3+1D$ GUT yielding a non-Abelian dark gauge group cannot be constructed. However, the above examples highlight the major difficulties in constructing such a theory. The large number of Higgs fields required to induce the various breakings, as well as the complicated representations required for the fermions to satisfy the constraints coming from anomaly cancellation and to ensure that the fermionic mediators can attain masses, make the types of models described above very undesirable. This suggests that we need additional physics to efficiently eliminate the fermionic mediators from the spectrum and perhaps to reduce the number of constraints on the theory. One may consider adding an extra dimension and localizing the desired fields on a domain-wall brane.
Going to $4+1D$ automatically eliminates the constraints coming from anomalies and, as it turns out, the context of a Clash-of-Symmetries domain wall provides a way for the fermionic mediators to attain masses of order the GUT scale. We now turn our attention to domain-wall brane models; as we will show later, a Clash-of-Symmetries domain-wall brane model based on the gauge group $SU(12)$ in $4+1D$ can be constructed which resolves many of the problems found in the $3+1D$ constructions of this section.

Domain Walls, the Dvali-Shifman Mechanism, and the Clash-of-Symmetries Mechanism {#sec:cosmechanism}
================================================================================

Domain walls are topological defects in which the boundary conditions for one or more scalar fields at positive and negative infinity along some spatial dimension are mapped to two discrete, degenerate and disconnected vacua of those scalar fields. Their topological stability follows from the fact that these mappings belong to non-trivial homotopy classes of $\pi_{0}(M)$, where $M$ is the moduli space of the theory, unlike standard homogeneous vacuum states, which belong to the trivial class. The simplest scalar field theory supporting topologically stable domain-wall solutions is a $\mathbb{Z}_{2}$-symmetric quartic theory for a single scalar field $\eta$ with a tachyonic mass, which may be written $$\label{eq:dwscalarpotential} V(\eta) = \frac{1}{4}\lambda{}(\eta^2-v^2)^2.$$ This potential has two discrete, degenerate vacua, $\eta=-v$ and $\eta=+v$. One can show that $$\label{eq:dwsolution} \eta{(y)} = v\tanh{(ky)},$$ where $k^2 = \lambda{}v^2/2$, is a solution to the Euler-Lagrange equations with the scalar potential in Eq. \[eq:dwscalarpotential\].
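That Eq. \[eq:dwsolution\] solves the static Euler-Lagrange equation $\eta'' = \lambda\eta(\eta^{2}-v^{2})$ can also be confirmed numerically; the following Python sketch (parameter values are arbitrary illustrations) checks the residual on a grid:

```python
import numpy as np

# Numerical check that eta(y) = v tanh(k y), with k^2 = lambda v^2 / 2,
# solves the static equation of motion  eta'' = lambda * eta * (eta^2 - v^2)
# for the quartic potential V = (lambda/4)(eta^2 - v^2)^2.
lam, v = 2.0, 1.5
k = np.sqrt(lam * v**2 / 2.0)

y = np.linspace(-5.0, 5.0, 2001)
h = y[1] - y[0]
eta = v * np.tanh(k * y)

eta_pp = np.gradient(np.gradient(eta, h), h)     # discrete second derivative
residual = eta_pp - lam * eta * (eta**2 - v**2)

# The residual vanishes up to discretization error (edges trimmed,
# since np.gradient uses one-sided differences there).
print(np.max(np.abs(residual[10:-10])))
```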
This solution also satisfies the boundary conditions $$\begin{gathered} \label{eq:dqboundaryconditions} \eta{(y\rightarrow{}-\infty{})} = -v, \\ \eta{(y\rightarrow{}+\infty{})} = +v, \end{gathered}$$ and is thus a domain-wall solution. A plot of the potential in terms of $\eta$ is given in Fig. \[fig:dwpotential\] and a plot of the solution in Eq. \[eq:dwsolution\] is given in Fig. \[fig:dwsolution\]. ![A plot of the potential $V(\eta)$ from Eq. \[eq:dwscalarpotential\].[]{data-label="fig:dwpotential"}](etapotential.pdf) ![A plot of the domain-wall solution for $\eta$ given by Eq. \[eq:dwsolution\].[]{data-label="fig:dwsolution"}](thekink3.pdf) Having formed a domain wall, the next goal is to localize the particle content required to formulate a realistic domain-wall-brane-localized effective field theory which contains the Standard Model. This involves the introduction of various dynamical localization mechanisms for fermions, scalars, gravitons and gauge bosons. We will not deal with the localization of gravity in this paper, but one can show that the localization of gravitons is possible [@gremmdwgravity; @adavidsondwgravity; @clashofsymgravity; @kehagiastamvakisgravity; @slatyervolkasrsgrav; @sodarsgravity; @rsgravitydaviesgeorge2007]. Fermionic chiral zero modes can be localized to the domain wall by Yukawa interactions, as first shown in Ref. [@jackiwrebbi]. Scalar modes can be localized via quartic interactions [@rsgravitydaviesgeorge2007]. We will deal with the localization of fermions and scalars in the context of the model proposed in this paper in later sections. We will now turn to the localization of gauge bosons. Gauge bosons are the most difficult species of particle to localize to a domain wall.
They cannot be localized to the domain wall through direct cubic or quartic couplings to it, because the development of a zero-mode profile for a gauge boson in general means that gauge charge universality in non-Abelian theories is lost [@dubrub]: the effective gauge couplings for the different fermions and scalars depend on overlap integrals of the profiles of the particles involved. The only known plausible mechanism which is conjectured to localize gauge bosons while retaining gauge charge universality is the Dvali-Shifman mechanism [@dsmech]. This mechanism involves breaking a gauge group $G$ to a subgroup $H$ in the interior of the domain wall and then localizing the gauge bosons of $H$ through the confinement dynamics of $G$ in the bulk. To illustrate this, we will consider the original $SU(2)$ model that Dvali and Shifman considered. Consider taking the original model with a singlet scalar $\eta$ described by the potential in Eq. \[eq:dwscalarpotential\] and adding to it another scalar field $\chi$ in the adjoint representation of $SU(2)$. Under the discrete $\mathbb{Z}_{2}$ symmetry, $\eta{}\rightarrow{}-\eta$ and $\chi{}\rightarrow{}-\chi{}$. The potential of this new theory is given by $$\begin{aligned} \label{eq:dvalishifmanmodel} V(\eta, \chi) &= \frac{1}{4}\lambda_{\eta}(\eta^2-v^2)^2+\lambda_{\eta\chi}(\eta^2-v^2)Tr[\chi^2]+\mu^2_{\chi}Tr[\chi^2] \\ &+\lambda_{\chi}(Tr[\chi^2])^2. \end{aligned}$$ To ensure that the above potential is bounded from below and that stable domain-wall solutions exist, we impose the parameter conditions $$\label{eq:boundednessconditions} \lambda_{\eta}>0, \qquad{} \lambda_{\chi}>0, \qquad \lambda_{\eta\chi}v^2>\mu^2_{\chi}>0.$$ In this region of parameter space, the global minima are $\eta = \pm{}v$, $\chi = 0$. In the middle of the wall, the field $\chi$ develops a tachyonic mass and should condense.
Without loss of generality, we choose the component proportional to the isospin operator $I = diag(-1, +1)$, which we call $\chi_{1}$, to condense, with the other components set to zero. If we impose the additional special parameter choice $$\label{eq:singlekinklumpconditions} 2\mu^2_{\chi}(\lambda_{\eta\chi}-\lambda_{\chi})+(\lambda_{\eta}\lambda_{\chi}-\lambda^2_{\eta\chi})v^2=0,$$ one finds that $$\begin{aligned} \label{eq:singlekinklumpsolution} \eta(y)&=v\tanh{(ky)}, \\ \chi_{1}(y)&=A\sech{(ky)}, \end{aligned}$$ where $k^2=\mu^2_{\chi}$ and $A^2=\frac{\lambda_{\eta\chi}v^2-2\mu^2_{\chi}}{\lambda_{\chi}}$, is a solution to the Euler-Lagrange equations satisfying the boundary conditions $\eta{(y\rightarrow{}\pm{}\infty)} = \pm{}v$ and $\chi{(y\rightarrow{}\pm{}\infty)} = 0$. For this solution, $\eta$ still generates the domain-wall kink while $\chi_{1}$ forms what we call a *lump*, which condenses to a non-zero vacuum expectation value in the interior of the wall. In parameter regions outside that implied by Eq. \[eq:singlekinklumpconditions\] (but within those of Eq. \[eq:boundednessconditions\]), there will still exist a solution in which $\eta$ generates a kink and $\chi_{1}$ generates a lump, although the solution will have to be obtained numerically. In the interior of the wall, $\chi$ attains a non-zero vacuum expectation value which induces the breaking $SU(2)\rightarrow{}U(1)$. In the bulk, $\chi$ asymptotes to zero, leaving $SU(2)$ unbroken and confining there. Suppose a $U(1)$ test charge is placed on the wall. In the bulk, $SU(2)$ is unbroken and confining, which implies, under the dual superconductor picture of confinement first proposed by ’t Hooft and Mandelstam [@thooftdualsuperconductor; @mandelstamdualsuperconductor], that the bulk behaves as a dual superconductor.
It follows that the electric field lines emanating from the test charge will be repelled from the bulk by the dual Meissner effect and will diverge outwards parallel to the 3D world volume of the domain-wall brane. Now imagine that the test charge is placed in the bulk. The electric field lines will still be repelled from the bulk and will form a flux string which diverges out onto the wall, so that the charge behaves as if it really were on the wall. In this way, the couplings of localized fermion and scalar modes, which are charged under the remaining $U(1)$ theory, to the $U(1)$ photon will be independent of where these modes are localized. The Dvali-Shifman mechanism as proposed generalizes this simple $SU(2)\rightarrow{}U(1)$ toy model to the case where the gauge symmetry $G$ respected in the bulk is a larger non-Abelian group and the subgroup $H$ to which it is broken on the wall is a non-Abelian semi-simple group. Extrapolating the results for the test charge to the case where $H$ is non-Abelian, the couplings of localized quarks to gluons are independent of the quark profiles, preserving gauge charge universality as well as localizing the gluons. Note that the Dvali-Shifman idea depends on whether 5D Yang-Mills gauge theories are confining. Although not rigorously proven, there is good numerical evidence that $SU(2)$ [@5dconfinement] and $SU(5)$ [@damiengeorgephd] gauge theories in $4+1D$ are in fact confining. The Dvali-Shifman mechanism is an attractive mechanism for the localization of gauge bosons as far as the building of domain-wall brane models is concerned. One may ask whether there are ways to extend this mechanism. In particular, given that the Dvali-Shifman mechanism requires two scalar fields which condense, one to form a kink and one to form a lump, one can ask whether it is possible to achieve the same dynamics with a single scalar field in some representation of a gauge group $G$.
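Before moving on, the kink-lump profile of Eq. \[eq:singlekinklumpsolution\] is easy to check numerically. The sketch below assumes canonically normalized one-dimensional fields, with the effective normalization $Tr[\chi^2]\rightarrow\chi_{1}^2/2$ chosen (an assumption of this example) so that it reproduces the quoted $k$ and $A$; parameter values are illustrative:

```python
# Numerical check of the kink-lump solution eta = v*tanh(k*y),
# chi1 = A*sech(k*y), with k^2 = mu_chi^2 and
# A^2 = (lam_ec*v^2 - 2*mu_chi^2)/lam_c, under the special condition
# 2*mu_chi^2*(lam_ec - lam_c) + (lam_e*lam_c - lam_ec^2)*v^2 = 0.
# ASSUMED effective 1D potential (canonical normalization, for this sketch):
# V = lam_e/4 (eta^2-v^2)^2 + lam_ec/2 (eta^2-v^2) chi1^2
#     + mu_chi^2/2 chi1^2 + lam_c/4 chi1^4
import numpy as np

v, lam_c, lam_ec, mu2 = 1.0, 1.0, 1.5, 0.5   # illustrative values
# Solve the special condition for lam_e:
lam_e = (lam_ec**2 * v**2 - 2.0 * mu2 * (lam_ec - lam_c)) / (lam_c * v**2)

k = np.sqrt(mu2)
A = np.sqrt((lam_ec * v**2 - 2.0 * mu2) / lam_c)

y = np.linspace(-6.0, 6.0, 3001)
h = y[1] - y[0]
eta = v * np.tanh(k * y)
chi = A / np.cosh(k * y)

def d2(f):      # central second derivative, interior points
    return (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2

e, c = eta[1:-1], chi[1:-1]
res_eta = d2(eta) - (lam_e * e * (e**2 - v**2) + lam_ec * e * c**2)
res_chi = d2(chi) - (lam_ec * (e**2 - v**2) * c + mu2 * c + lam_c * c**3)

assert np.max(np.abs(res_eta)) < 1e-3
assert np.max(np.abs(res_chi)) < 1e-3
```

The residuals vanish (up to discretization error) only because `lam_e` is chosen to satisfy the special condition of Eq. \[eq:singlekinklumpconditions\].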
The first idea which naturally comes to mind is to reassign the kink-forming field $\eta$ from the gauge singlet representation to a non-trivial representation of $G$. If we ensure that the discrete $\mathbb{Z}_{2}$ symmetry is *outside* the gauge group $G$, then, instead of just the two discrete vacua we had for the potential of Eq. \[eq:dwscalarpotential\], we now have two disconnected vacuum *manifolds*. If we choose parameters such that the most stable breaking pattern is from $G$ to a subgroup $H$, both these vacuum manifolds are then (individually) diffeomorphic to the coset manifold $G/H$. One can then think of forming domain-wall solutions which interpolate between the two disconnected manifolds, which opens the possibility of breaking $G$ to two differently embedded copies of $H$ on each side of the domain wall, one of which we will call $H'$. Because these isomorphic subgroups are not exactly the same, there has to be further breaking in the core of the defect to the intersection of these groups, $H\cap{}H'$. The idea is then that if $H$ and $H'$ (or subgroups of them) are non-Abelian and confining in the bulk, then smaller subgroups can be localized to the domain wall interior by Dvali-Shifman dynamics. This version of the Dvali-Shifman mechanism is called the Clash-of-Symmetries (CoS) mechanism. Many attempts at forming either realistic models or simply just domain walls from the CoS mechanism exist in the literature; see Refs. [@o10kinks; @abeliankinkscos; @clashofsymmetries; @e6domainwallpaper; @intersectingclashofsym; @pogosianvachaspaticos; @vachaspaticos2; @pogosianvachaspaticos3]. A thorough exploration of the group theoretic aspects underlying it is given in Ref. [@clashofsymgrouptheorypaper]. We now turn to the requirements of localization for non-Abelian and Abelian subgroups of $H\cap{}H'$.
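When the two breaking VEVs commute, the size of $H\cap{}H'$ can be counted quickly: the unbroken $su(N)$ generators are exactly those block-diagonal on the common eigenspaces of the two VEVs, so $\dim(H\cap{}H') = \sum_{k} d_{k}^{2} - 1$ over common eigenblocks of sizes $d_{k}$. A small sketch, with hypothetical diagonal VEVs chosen purely for illustration:

```python
# Toy counting of dim(H ∩ H') for two commuting (diagonal) VEVs in su(N):
# unbroken generators are block-diagonal on the common eigenspaces, so
# dim = sum(d_k^2) - 1 over common eigenblocks.  The diagonal entries
# below are hypothetical example choices.
from collections import Counter

def intersection_dim(diag_a, diag_b):
    """Dimension of the su(N) stabilizer of both diagonal VEVs."""
    blocks = Counter(zip(diag_a, diag_b))   # common eigenspace multiplicities
    return sum(d * d for d in blocks.values()) - 1

# Two differently embedded breakings of SU(12) to S(U(6) x U(6)):
a = [-1]*6 + [+1]*6
b = [+1]*5 + [-1] + [+1] + [-1]*5

assert intersection_dim(a, a) == 35 + 35 + 1   # su(6)+su(6)+u(1)
assert intersection_dim(a, b) == 24 + 24 + 3   # su(5)+su(5)+3*u(1)
```

With identical VEVs one recovers the full $SU(6)\times{}SU(6)\times{}U(1)$ dimension, while the clashing embeddings leave only an $SU(5)\times{}SU(5)\times{}U(1)^3$-sized intersection.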
In general, both $H$ and $H'$ are semi-simple and may be written as $$\begin{aligned} \label{eq:semisimple} H &= N_{1}\times{}N_{2}\times{}N_{3}\times{}...\times{}N_{k-1}\times{}N_{k}\times{}U(1)_{Q_{1}}\times{}U(1)_{Q_{2}}\times{}U(1)_{Q_{3}}...U(1)_{Q_{l-1}}\times{}U(1)_{Q_{l}}, \\ H' &= N'_{1}\times{}N'_{2}\times{}N'_{3}\times{}...\times{}N'_{k-1}\times{}N'_{k}\times{}U(1)_{Q'_{1}}\times{}U(1)_{Q'_{2}}\times{}U(1)_{Q'_{3}}...U(1)_{Q'_{l-1}}\times{}U(1)_{Q'_{l}}, \end{aligned}$$ where the $N_{i}$ and $N'_{i}$ denote the non-Abelian factor groups and the $Q_{i}$ and $Q'_{i}$ denote the generators of the Abelian factor groups belonging to $H$ and $H'$ respectively. Since $H$ and $H'$ are semi-simple, $H\cap{}H'$ is also semi-simple. We will denote its non-Abelian factor groups as $n_{i}$ and the generators of its Abelian factor groups as $q_{i}$, and write $$\label{eq:intersectingsemisimple} H\cap{}H' = n_{1}\times{}n_{2}\times{}n_{3}\times{}...\times{}n_{r-1}\times{}n_{r}\times{}U(1)_{q_{1}}\times{}U(1)_{q_{2}}\times{}U(1)_{q_{3}}\times{}...\times{}U(1)_{q_{s-1}}\times{}U(1)_{q_{s}}.$$ We will deal with the localization of non-Abelian groups first. Given that the Dvali-Shifman mechanism relies on confinement dynamics, it follows that for a gauge group to be localized to the domain wall, it must lie inside a larger non-Abelian group in the bulk. Given that on one side of the wall a non-Abelian subgroup $n_{i}$ of $H\cap{}H'$ will lie inside a non-Abelian factor $N_{a}$ of $H$, while on the other it will be a subgroup of a non-Abelian factor $N'_{b}$ of $H'$, it follows that to be fully localized $n_{i}$ must be a proper subgroup of both $N_{a}$ and $N'_{b}$, $$\begin{gathered} \label{eq:nonabelianlocclashofsym} n_{i}\subset{}N_{a} \; \mathrm{and} \; n_{i}\subset{}N'_{b}.
\end{gathered}$$ If, on the other hand, $n_{i}$ is precisely equal to one of these groups, then it will be free to propagate on one side of the bulk and is thus semi-delocalized, while in the rare case that $n_{i}=N_{a}=N'_{b}$, $n_{i}$ will be fully delocalized. Localization of Abelian gauge bosons is slightly more complex, but similar. In general, for the Abelian generators $q_{i}$ of $H\cap{}H'$ to be respected at the level of *symmetries* on the wall, they must be expressible as linear combinations of generators of $H$ and of $H'$ independently. Obviously, the Abelian generators in $H$ and $H'$, $Q_{i}$ and $Q'_{i}$, can contribute to these respective linear combinations, but there are also leftover generators inside the non-Abelian factors $N_{a}$ and $N'_{b}$, lying outside the non-Abelian factors $n_{i}$ of $H\cap{}H'$, which we call $T_{i}$ and $T'_{i}$ respectively. Hence the condition for $U(1)_{q_{i}}$ to be a symmetry inside $H\cap{}H'$ respected on the domain-wall brane is for the generator $q_{i}$ to satisfy $$\begin{aligned} \label{eq:abeliangeneratorsymmetrycondition} q_{i} &= \sum^{l}_{j=1}\alpha^{i}_{j}Q_{j}+\sum^{m}_{j=1}\beta^{i}_{j}T_{j}, \\ &= \sum^{l}_{j=1}\alpha'^{i}_{j}Q'_{j}+\sum^{m}_{j=1}\beta'^{i}_{j}T'_{j}. \end{aligned}$$ The conditions for full *localization* of an Abelian gauge boson are more stringent. Just like the localized non-Abelian gauge bosons, the Abelian gauge bosons must lie completely inside non-Abelian groups wherever they might otherwise propagate through the bulk. Furthermore, the photons corresponding to the respective Abelian generators of $H$ and $H'$, $Q_{i}$ and $Q'_{i}$, are able to propagate through the halves of the bulk in which they are respected, and if they contribute to the linear combination for $q_{i}$, there is a chance that the photon associated with $q_{i}$ will leak into the bulk.
The consequence is that for the photon of $q_{i}$ to be fully localized to the domain wall, it must only be a linear combination of the $T_{i}$ and $T'_{i}$ generators of $H$ and $H'$ respectively, satisfying $$\label{eq:abeliangeneratorlocalizationcondition} q_{i} = \sum^{m}_{j=1}\beta^{i}_{j}T_{j} = \sum^{m}_{j=1}\beta'^{i}_{j}T'_{j}, \qquad{} \alpha^{i}_{j} = \alpha'^{i}_{j} = 0\; \forall{} \; j.$$ If any of the $\alpha^{i}_{j}$ are non-zero while all of the $\alpha'^{i}_{j}$ are zero, then the photon is semi-delocalized and able to propagate into the $H$-respecting side of the bulk, but not the $H'$-respecting side, and vice versa if some $\alpha'^{i}_{j}$ are non-zero and all the $\alpha^{i}_{j}$ are zero. If there exist both some non-zero $\alpha^{i}_{j}$ and some non-zero $\alpha'^{i}_{j}$, the photon can leak into both sides of the bulk and is thus fully delocalized. In this section, we have explained the formation of domain walls, the Dvali-Shifman mechanism for gauge boson localization, and the clash-of-symmetries mechanism. There have been attempts to form viable domain-wall brane models with a localized Standard Model based on $SO(10)$ [@o10kinks] and $E_{6}$ [@e6domainwallpaper] using the clash-of-symmetries mechanism. A slightly altered version of this mechanism can be used to localize gauge fields onto the intersection of two domain-wall branes in $5+1D$ spacetime, assuming both $4+1D$ and $5+1D$ Yang-Mills theories are confining [@intersectingclashofsym]. In this paper, we will exploit this mechanism in an $SU(12)$-invariant theory to generate a localized $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$ gauge theory, where $SU(5)_{V}$ contains the gauge groups of the visible Standard Model sector, $SU(5)_{D}$ contains the gauge groups of a dark matter hidden sector, and $U(1)_{X}$ is an Abelian gauge group coupling the two sectors (and thus must be broken spontaneously by adding further Higgs fields).
Furthermore, we will show that troublesome fermionic and scalar mediators which are charged under both $SU(5)_{V}$ and $SU(5)_{D}$ are eliminated from the spectrum, leading to sufficient sequestration of the visible and hidden sectors. In particular, we will show that the mixed $(5, 5)$ fermion in the rank two antisymmetric representation of $SU(12)$ is completely decoupled from the wall, meaning it remains 5D and vector-like and will thus attain a mass of order $M_{GUT}$ on the brane, once we break $SU(5)_{V}$ to the Standard Model. A Localized $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$-Invariant Effective Action from a Clash-of-Symmetries Domain Wall in a 4+1D $SU(12)\times{}\mathbb{Z}_{2}$ Scalar Field Theory {#sec:solution} ======================================================================================================================================================================================= In this section, we will describe a CoS domain-wall solution which yields a localized $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$ gauge theory from a 4+1D $SU(12)$ theory. To achieve this, we break $SU(12)$ to two differently embedded copies of $SU(6)\times{}SU(6)\times{}U(1)$ on each side of the wall. One may first consider achieving this with a single adjoint scalar field $\eta$ which transforms under a discrete $\mathbb{Z}_{2}$-symmetry as $\eta{}\rightarrow{}-\eta{}$. The $\mathbb{Z}_{2}$-symmetric scalar potential for this field is $$\label{eq:singleadjointpotential} V(\eta) = -\mu^{2}Tr[\eta^2]+\lambda_{1}(Tr[\eta^2])^2+\lambda_{2}Tr[\eta^4].$$ The global minima of the potential have been well studied [@lingfongli74]. For $\lambda_{2}<0$, this potential induces the breaking $SU(12)\rightarrow{}SU(11)\times{}U(1)$, and, for $\lambda_{2}>0$, the most stable symmetry breaking pattern is $SU(12)\rightarrow{}SU(6)\times{}SU(6)\times{}U(1)$. Thus, we desire that $\lambda_{2}>0$. 
However, given that, in a convenient choice of basis, $\langle{}\eta{}\rangle{}\propto{}diag(-1, -1, -1, -1, -1, -1, +1, +1, +1, +1, +1, +1)$, one can see that for $U=i\sigma_{1}\otimes{}\mathbbm{1}_{6\times{}6}\in{}SU(12)$ (where $\sigma_{1}$ is the first Pauli matrix), $U^{\dagger}\langle{}\eta{}\rangle{}U = -\langle{}\eta{}\rangle{}$. This means that $\langle{}\eta{}\rangle{}$ is related to $-\langle{}\eta{}\rangle{}$ by a gauge transformation, and thus the vacuum manifold is in fact *connected* and contains only a single component diffeomorphic to $G/H = SU(12)/SU(6)\times{}SU(6)\times{}U(1)$. This means that there will not exist any stable domain-wall solutions, since we require a disconnected vacuum manifold. The simplest way to resolve the connectedness problem of the single adjoint $\mathbb{Z}_{2}$-symmetric scalar potential is to use a scalar field theory with two independent adjoint scalar fields. We denote these two adjoint scalar fields as $\eta$ and $\chi$, and, instead of a discrete reflection symmetry, we utilize the interchange symmetry $\eta{}\rightarrow{}\chi{}$, $\chi{}\rightarrow{}\eta{}$ as our discrete $\mathbb{Z}_{2}$ symmetry. The most general potential for this system may be written $$\label{eq:twoadjointpotentialfull} V(\eta, \chi) = V(\eta)+V(\chi)+I(\eta, \chi),$$ where $$\begin{aligned} \label{eq:twoadjointpotentialparts} V(\eta) &= -\mu^{2}Tr[\eta^2]-\frac{1}{3}cTr[\eta^3]+\lambda_{1}(Tr[\eta^2])^2+\lambda_{2}Tr[\eta^4], \\ V(\chi) &= -\mu^{2}Tr[\chi^2]-\frac{1}{3}cTr[\chi^3]+\lambda_{1}(Tr[\chi^2])^2+\lambda_{2}Tr[\chi^4], \\ I(\eta, \chi) &= 2\delta^2Tr[\eta{}\chi]+dTr[\eta^2\chi]+dTr[\eta{}\chi^2]+l_{1}Tr[\eta^2]Tr[\chi^2]+l_{2}Tr[\eta^2\chi^2]+l_{3}(Tr[\eta{}\chi])^2 \\ &+l_{4}Tr[\eta{}\chi{}\eta{}\chi{}]+l_{5}Tr[\eta^2]Tr[\eta{}\chi]+l_{5}Tr[\eta{}\chi]Tr[\chi^2]+l_{6}Tr[\eta^3\chi]+l_{6}Tr[\eta{}\chi^3].
\end{aligned}$$ The single-field potentials $V(\eta)$ and $V(\chi)$ are simply the single-adjoint scalar potential with the cubic invariant, while $I(\eta, \chi)$ is the interaction potential containing all the terms which couple $\eta$ and $\chi$ non-trivially. The determination of the global minima for the two-adjoint scalar Higgs potential is obviously much more complicated. An analysis of the most general potential (without the discrete symmetry that we have imposed) was first given by Wu [@dandiwusymbreaking]. Nevertheless, we can argue both for the existence of domain-wall solutions and for the existence of a region of parameter space in which the desired solution, with $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$ localized to the wall, is the most stable domain-wall solution. The existence of domain-wall solutions is ensured by the disconnectedness of the vacuum manifold. If we choose parameters such that the potential is bounded from below, $V(\eta, \chi)$ must have at least one global minimum. If this minimum is given by $\eta = A$, $\chi = B$, then by the interchange symmetry $\eta = B$, $\chi = A$ is also a global minimum. Given that both fields are adjoint fields, there is a connected component of the vacuum manifold described by $G/H = \lbrace{}(\eta, \chi) = (U^{\dagger}AU, U^{\dagger}BU); U\in{}G\rbrace{}$ and another component described by $(G/H)_{\eta{}\leftrightarrow{}\chi{}} = \lbrace{}(\eta, \chi) = (U^{\dagger}BU, U^{\dagger}AU); U\in{}G\rbrace{}$. Given that $\eta$ and $\chi$ are independent fields, the interchange symmetry is by construction outside $G$, ensuring that $G/H$ and $(G/H)_{\eta{}\leftrightarrow{}\chi{}}$ are disconnected. To argue for the existence of a parameter region yielding the desired CoS solution, it helps greatly to make some simplifying choices.
There are two choices which make the analysis rather simple: choosing parameters such that the vacua of the respective disconnected components of the vacuum manifold are of the form $\eta\neq{}0$, $\chi=0$ and $\eta=0$, $\chi\neq{}0$, and choosing parameters such that solutions for which $[\eta, \chi] \neq 0$ at any point are clearly non-minimal. Consider the analogous potential with two gauge singlets, $\phi$ and $\varphi$, with the interchange symmetry $\phi{}\leftrightarrow{}\varphi{}$, $$\label{eq:twosingletinterchangepotential} V(\phi, \varphi) = -\frac{1}{2}M^2\phi^2-\frac{1}{3}a\phi^3+\frac{1}{4}F\phi^{4}-\frac{1}{2}M^2\varphi^2-\frac{1}{3}a\varphi^3+\frac{1}{4}F\varphi^{4}+N^2\phi\varphi+\frac{1}{2}L\phi^2\varphi^2+g\phi^3\varphi.$$ Suppose we turn off the interactions involving odd powers of $\phi$ and $\varphi$ for now by setting $N=a=g=0$. The potential in this case has a few additional reflection symmetries: $\phi\rightarrow{}-\phi$, $\varphi\rightarrow{}\varphi$ and $\phi\rightarrow{}\phi$, $\varphi\rightarrow{}-\varphi$. A quick calculation of the stationary points shows that, other than $\phi=0$, $\varphi=0$, which is always a maximum, there are stationary points at $\phi = \pm{}\sqrt{M^2/F}$, $\varphi = 0$ and $\phi = 0$, $\varphi = \pm{}\sqrt{M^2/F}$, and at $\phi = \pm{}\sqrt{M^2/(F+L)}$, $\varphi = \pm{}\sqrt{M^2/(F+L)}$. The values of the potential at these respective stationary points are $V(\pm{}\sqrt{M^2/F}, 0) = V(0, \pm{}\sqrt{M^2/F}) = -M^4/(4F)$ and $V(\pm{}\sqrt{M^2/(F+L)}, \pm{}\sqrt{M^2/(F+L)}) = -M^4/(2(F+L))$. In the region $L>F$, the stationary points $\phi = \pm{}\sqrt{M^2/F}$, $\varphi = 0$, and $\phi = 0$, $\varphi = \pm{}\sqrt{M^2/F}$ are degenerate global minima while the stationary points $\phi = \pm{}\sqrt{M^2/(F+L)}$, $\varphi = \pm{}\sqrt{M^2/(F+L)}$ are saddle points.
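These classifications can be verified directly; the following sketch (illustrative parameter values, not from the paper) evaluates the potential and the Hessian at the stationary points in the $L>F$ region:

```python
# Check of the stationary points of V(phi, vphi) with N = a = g = 0:
# V = -M^2(phi^2+vphi^2)/2 + F(phi^4+vphi^4)/4 + L*phi^2*vphi^2/2.
# Illustrative values with L > F; the Hessian classifies min vs saddle.
import numpy as np

M2, F, L = 1.0, 1.0, 2.0          # M^2, F, L with L > F

def V(p, q):
    return -0.5*M2*(p*p + q*q) + 0.25*F*(p**4 + q**4) + 0.5*L*p*p*q*q

def hessian(p, q):
    return np.array([[-M2 + 3*F*p*p + L*q*q, 2*L*p*q],
                     [2*L*p*q, -M2 + 3*F*q*q + L*p*p]])

p1 = np.sqrt(M2 / F)              # axis points (p1, 0), (0, p1)
p2 = np.sqrt(M2 / (F + L))        # diagonal points (p2, p2)

# Axis points: value -M^4/(4F), both Hessian eigenvalues positive (minima).
assert np.isclose(V(p1, 0.0), -M2**2 / (4*F))
assert np.all(np.linalg.eigvalsh(hessian(p1, 0.0)) > 0)

# Diagonal points: value -M^4/(2(F+L)), mixed-sign eigenvalues (saddles).
assert np.isclose(V(p2, p2), -M2**2 / (2*(F + L)))
eig = np.linalg.eigvalsh(hessian(p2, p2))   # sorted ascending
assert eig[0] < 0 < eig[1]
```

Swapping to values with $L<F$ reverses the roles of the two sets of stationary points, in line with the discussion below.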
In the region $L<F$, the points $\phi = \pm{}\sqrt{M^2/(F+L)}$, $\varphi = \pm{}\sqrt{M^2/(F+L)}$ are the global minima and $\phi = \pm{}\sqrt{M^2/F}$, $\varphi = 0$ and $\phi = 0$, $\varphi = \pm{}\sqrt{M^2/F}$ are the saddle points. For $F=L$, the symmetry of the potential is enhanced to $SO(2)$. Note that for the above two-singlet potential, choosing the coupling constant of the $\phi^2\varphi^2$ interaction to be larger than those of the $\phi^4$ and $\varphi^4$ self-interactions ensures that the minima are such that only one of the fields develops a non-zero vacuum expectation value. Similarly, at least when we leave the interactions with odd powers of $\eta$ and $\chi$ switched off, the minima of the potential in Eq. \[eq:twoadjointpotentialparts\] should be of the form $\eta \neq{} 0$, $\chi = 0$ and $\eta = 0$, $\chi \neq{} 0$ if some of the coupling constants involving products of quadratic powers of $\eta$ and $\chi$, namely $l_{1}$, $l_{2}$, $l_{3}$ and $l_{4}$, are made sufficiently large and positive compared to the quartic self-couplings $\lambda_{1}$ and $\lambda_{2}$. In particular, along any direction $\eta = v_{1}A$, $\chi = v_{2}B$, where $A$ and $B$ are generators, we need the effective coupling for $v^{2}_{1}v^2_{2}$ to be larger than those for $v^4_{1}$ and $v^4_{2}$. This can easily be done by making $l_{1}$ sufficiently large, since $Tr[\eta^2]Tr[\chi^2]$ is independent of the vacuum alignment. With conditions chosen such that the minima are of the form $\eta \neq{} 0$, $\chi = 0$ and $\eta = 0$, $\chi \neq{} 0$, $I(\eta, \chi)$ vanishes at the minima and becomes positive as we deviate from them. This implies that for the $\eta \neq{} 0$, $\chi = 0$ minima, $\eta$ must sit at a minimum of the single-adjoint Higgs potential $V(\eta)$ (and likewise, by the interchange symmetry, $\chi$ must sit at a minimum of $V(\chi)$ for the $\eta = 0$, $\chi \neq{} 0$ minima).
This means that the symmetry breaking patterns under such conditions reduce to those of the single-adjoint Higgs potential, for which the minimal breaking patterns are well known [@lingfongli74; @rueggscalarpot]. Setting $c=0$ and choosing $\lambda_{2}>0$, $\eta$ in the $\eta \neq{} 0$, $\chi = 0$ minimum, and $\chi$ in the $\eta = 0$, $\chi \neq{} 0$ minimum, will attain vacuum expectation values which break $SU(12)$ to $SU(6)\times{}SU(6)\times{}U(1)$. A domain-wall solution can then be obtained by looking for a solution which interpolates from the $\eta \neq{} 0$, $\chi = 0$ minimum as $y\rightarrow{}-\infty$ to the $\eta = 0$, $\chi \neq{} 0$ minimum as $y\rightarrow{}+\infty$. In the parameter regime we have chosen, this means that for this solution there are two domains in which $SU(12)$ is broken to $SU(6)\times{}SU(6)\times{}U(1)$ subgroups which need not be exactly the same. In other words, in the domain in which the solution converges to the $\eta \neq{} 0$, $\chi = 0$ minimum, $SU(12)$ is broken to the embedding $H_{1} = SU(6)_{1}\times{}SU(6)_{2}\times{}U(1)_{A}$, while in the domain in which the solution converges to the $\eta = 0$, $\chi \neq{} 0$ minimum, $SU(12)$ is broken to a potentially different embedding $H_{2} = SU(6)_{3}\times{}SU(6)_{4}\times{}U(1)_{B}$. In the interior of the domain wall, the symmetry is broken to $H_{1}\cap{}H_{2}$. To analyze what $H_{1}\cap{}H_{2}$ should be, we need to look at the vacua attained by $\eta$ and $\chi$ at $y=-\infty$ and $y=+\infty$ respectively.
Without loss of generality, we can choose $\eta$ to attain the VEV pattern $$\label{eq:etaminusinfinity} \eta{(y\rightarrow{}-\infty)} = vA,$$ where $$\label{eq:amatrix} A = \begin{pmatrix} -\mathbbm{1}_{6\times{}6} & 0 \\ 0 & +\mathbbm{1}_{6\times{}6} \end{pmatrix}.$$ At $y=+\infty$, $\chi$ will in general attain a VEV of the form $\chi(y\rightarrow{}+\infty) = vB$, where $$\label{eqbmatrixgeneral} B = U^{\dagger}AU,$$ with $U$ some unitary rotation matrix. In the general case, $A$ and $B$ will not commute. To make the upcoming analysis much simpler, we will further restrict the parameter space such that $A$ and $B$ commute and are simultaneously diagonalizable, and, more generally, such that $\eta{(y)}$ and $\chi{(y)}$ commute along the entire extra dimension. If we rewrite $I(\eta, \chi)$ in terms of $[\eta, \chi]$ and $\{\eta, \chi\}$, it turns out there is only one term which depends non-trivially on the commutator; that term is precisely $$\label{eq:commutatorterm} \frac{1}{4}(l_{4}-l_{2})Tr([\eta, \chi]^2).$$ Given that $\eta$ and $\chi$ are hermitian, the commutator $[\eta, \chi]$ is anti-hermitian and thus has purely imaginary eigenvalues, so $[\eta, \chi]^2$ is a negative semi-definite operator and the trace $Tr([\eta, \chi]^2)$ is always non-positive. Hence, to ensure $[\eta, \chi]=0$ along the domain-wall solution, we need to make the difference $l_{2}-l_{4}$ sufficiently positive. We will always assume this is the case. This means that $B$ will be simultaneously diagonalizable with $A$ and, in general, may be written in the form $$\label{eq:bmatrixdiagonalized} B = \begin{pmatrix} +\mathbbm{1}_{(6-m)\times{}(6-m)} & 0 & 0 & 0 \\ 0 & -\mathbbm{1}_{m\times{}m} & 0 & 0 \\ 0 & 0 & +\mathbbm{1}_{m\times{}m} & 0 \\ 0 & 0 & 0 & -\mathbbm{1}_{(6-m)\times{}(6-m)} \end{pmatrix},$$ where $m$ is an integer between zero and six. In the cases $m=0$ and $m=6$, we see that $B=-A$ and $B=+A$ respectively.
This means that $H_{1}=H_{2}$ in these cases and that they are non-CoS domain walls. For $m=0$ and $m=6$, the symmetry respected in the interior of the wall is simply the same $SU(6)\times{}SU(6)\times{}U(1)_{A}$ subgroup respected in the bulk. Otherwise, in the case that $1\leq{}m\leq{}5$, we can see by inspection of $A$ and $B$ that the symmetry in the interior of the wall can be written $$\label{eq:cossym} H_{1}\cap_{}H_{2} = SU(6-m)_{1}\times{}SU(m)_{1}\times{}SU(6-m)_{2}\times{}SU(m)_{2}\times{}U(1)_{X_{m}}\times{}U(1)_{A}\times{}U(1)_{B}.$$ Here, $X_{m}$ is a generator which is a sum of leftover generators from the original $SU(6)$ subgroups, namely $T_{1m}$, $T_{2m}$, $T_{3m}$ and $T_{4m}$ from $SU(6)_{1}$, $SU(6)_{2}$, $SU(6)_{3}$ and $SU(6)_{4}$ respectively. We choose to write these generators as $$\label{eq:t1su61} T_{1m} = \begin{pmatrix} +m\mathbbm{1}_{(6-m)\times{}(6-m)} & 0 & 0 \\ 0 & -(6-m)\mathbbm{1}_{m\times{}m} & 0 \\ 0 & 0 & 0_{6\times{}6} \end{pmatrix},$$ $$\label{eq:t2su62} T_{2m} = \begin{pmatrix} 0_{6\times{}6} & 0 & 0 \\ 0 & -(6-m)\mathbbm{1}_{m\times{}m} & 0 \\ 0 & 0 & +m\mathbbm{1}_{(6-m)\times{}(6-m)} \end{pmatrix},$$ $$\label{eq:t3su63} T_{3m} = \begin{pmatrix} +m\mathbbm{1}_{(6-m)\times{}(6-m)} & 0 & 0 & 0\\ 0 & 0_{m\times{}m} & 0 & 0 \\ 0 & 0 & -(6-m)\mathbbm{1}_{m\times{}m} & 0 \\ 0 & 0 & 0 & 0_{(6-m)\times{}(6-m)} \end{pmatrix},$$ and $$\label{eq:t4su64} T_{4m} = \begin{pmatrix} 0_{(6-m)\times{}(6-m)} & 0 & 0 & 0\\ 0 & -(6-m)\mathbbm{1}_{m\times{}m} & 0 & 0 \\ 0 & 0 & 0_{m\times{}m} & 0 \\ 0 & 0 & 0 & +m\mathbbm{1}_{(6-m)\times{}(6-m)} \end{pmatrix}.$$ Putting this together, we see that $$\begin{aligned} \label{eq:xgenerator} X_{m} &= T_{1m}+T_{2m} \\ &= T_{3m}+T_{4m} \\ &= \begin{pmatrix} +m\mathbbm{1}_{(6-m)\times{}(6-m)} & 0 & 0 & 0\\ 0 & -(6-m)\mathbbm{1}_{m\times{}m} & 0 & 0 \\ 0 & 0 & -(6-m)\mathbbm{1}_{m\times{}m} & 0 \\ 0 & 0 & 0 & +m\mathbbm{1}_{(6-m)\times{}(6-m)} \end{pmatrix}, \end{aligned}$$ satisfies not only Eq. 
\[eq:abeliangeneratorsymmetrycondition\] but also the condition of Eq. \[eq:abeliangeneratorlocalizationcondition\] and is thus fully localized on the domain wall. Likewise, given $1\leq{}m\leq{}5$, all the $SU(m)_{1, 2}$ and $SU(6-m)_{1, 2}$ subgroups are proper subgroups of their parent $SU(6)$ subgroups on both sides of the wall and are thus localized. Given that $U(1)_{A}$ and $U(1)_{B}$ are unbroken in their respective domains, their photons are semi-delocalized. Hence, each of the solutions for $m$ between one and five leads to the localization of gauge bosons associated with a $SU(6-m)_{1}\times{}SU(m)_{1}\times{}SU(6-m)_{2}\times{}SU(m)_{2}\times{}U(1)_{X_{m}}$ subgroup on the domain wall. Obviously, we are most interested in the $m=1$ and $m=5$ CoS solutions, as they lead to a localized $SU(5)\times{}SU(5)\times{}U(1)$ gauge theory. We have outlined some of the parameter regions and types of boundary conditions needed for the domain-wall solutions of interest. To calculate a domain-wall solution we also need to solve the Euler-Lagrange equations. Having chosen conditions such that $\eta$ and $\chi$ will be diagonal along the whole extra dimension described by the coordinate $y$, we may write $\eta{(y)} = diag(a_{1}(y), a_{2}(y), ..., a_{12}(y))$ and $\chi{(y)} = diag(b_{1}(y), b_{2}(y), ..., b_{12}(y))$. Noting that the kinetic terms for these fields are given by $$\label{eq:etachikinetic} K(\eta, \chi) = \frac{1}{2}Tr[\partial^{\mu}\eta{}\partial_{\mu}\eta]+\frac{1}{2}Tr[\partial^{\mu}\chi{}\partial_{\mu}\chi],$$ the Euler-Lagrange equations resulting from the potential in Eq.
\[eq:twoadjointpotentialparts\] are given by $$\begin{aligned} \label{eq:eulerlagrangetwoadjointscalarpotential} &\Box{}\eta-2\mu^2\eta-c\eta^2+4\lambda_{1}Tr(\eta^2)\eta+4\lambda_{2}\eta^3+2\delta^{2}\chi+d\eta{}\chi{}+d\chi{}\eta{}+d\chi^2+2l_{1}Tr(\chi^2)\eta+l_{2}\eta{}\chi^{2}+l_{2}\chi^{2}\eta \\ &+2l_{3}Tr(\eta\chi)\chi+2l_{4}\chi\eta\chi+2l_{5}Tr(\eta\chi)\eta+l_{5}Tr(\eta^2)\chi+l_{5}Tr(\chi^2)\chi+l_{6}\eta^2\chi+l_{6}\eta\chi\eta+l_{6}\chi{}\eta^2+l_{6}\chi^3 = 0, \\ &\Box{}\chi-2\mu^2\chi-c\chi^2+4\lambda_{1}Tr(\chi^2)\chi+4\lambda_{2}\chi^3+2\delta^{2}\eta+d\eta{}\chi{}+d\chi{}\eta{}+d\eta^2+2l_{1}Tr(\eta^2)\chi+l_{2}\chi{}\eta^{2}+l_{2}\eta^{2}\chi \\ &+2l_{3}Tr(\eta\chi)\eta+2l_{4}\eta\chi\eta+2l_{5}Tr(\eta\chi)\chi+l_{5}Tr(\chi^2)\eta+l_{5}Tr(\eta^2)\eta+l_{6}\chi^2\eta+l_{6}\chi\eta\chi+l_{6}\eta{}\chi^2+l_{6}\eta^3 = 0. \end{aligned}$$ Under the conditions we have chosen, in terms of $a_{i}(y)$ and $b_{i}(y)$, these equations simply reduce to $$\begin{aligned} \label{eq:eulerlagrangeab} &-\frac{d^{2}a_{i}}{dy^2}-2\mu^2a_{i}-ca^{2}_{i}+4\lambda_{1}(\sum^{12}_{j=1}a^2_{j})a_{i}+4\lambda_{2}a^{3}_{i}+2\delta^{2}b_{i}+2da_{i}b_{i}+db^{2}_{i}+2l_{1}(\sum^{12}_{j=1}b^2_{j})a_{i}+2(l_{2}+l_{4})b^{2}_{i}a_{i} \\ &+2l_{3}(\sum^{12}_{j=1}a_{j}b_{j})b_{i}+2l_{5}(\sum^{12}_{j=1}a_{j}b_{j})a_{i}+l_{5}(\sum^{12}_{j=1}a^2_{j})b_{i}+l_{5}(\sum^{12}_{j=1}b^2_{j})b_{i}+3l_{6}a^2_{i}b_{i}+l_{6}b^{3}_{i} = 0, \\ &-\frac{d^{2}b_{i}}{dy^2}-2\mu^2b_{i}-cb^{2}_{i}+4\lambda_{1}(\sum^{12}_{j=1}b^2_{j})b_{i}+4\lambda_{2}b^{3}_{i}+2\delta^{2}a_{i}+2da_{i}b_{i}+da^{2}_{i}+2l_{1}(\sum^{12}_{j=1}a^2_{j})b_{i}+2(l_{2}+l_{4})a^{2}_{i}b_{i} \\ &+2l_{3}(\sum^{12}_{j=1}a_{j}b_{j})a_{i}+2l_{5}(\sum^{12}_{j=1}a_{j}b_{j})b_{i}+l_{5}(\sum^{12}_{j=1}b^2_{j})a_{i}+l_{5}(\sum^{12}_{j=1}a^2_{j})a_{i}+3l_{6}b^2_{i}a_{i}+l_{6}a^{3}_{i} = 0. 
\end{aligned}$$ Note that the above equations imply that the coupled Euler-Lagrange equation for each pair $(a_{i}, b_{i})$ is the same, independent of the index $i$. This means, similarly to other clash-of-symmetries models [@e6domainwallpaper; @o10kinks], that the solutions are determined entirely by the boundary conditions. If one looks at the boundary conditions at infinity, namely $\eta(y\rightarrow{}-\infty) = vA$, $\chi(y\rightarrow{}-\infty) = 0$ and $\eta(y\rightarrow{}+\infty) = 0$, $\chi(y\rightarrow{}+\infty) = vB$, one notices that for some $i$, $a_{i}(y\rightarrow{}-\infty)$ and $b_{i}(y\rightarrow{}+\infty)$ have the same sign (either both $-v$ or $+v$), and for other pairs they have the opposite sign (either $(-v, +v)$ or $(+v, -v)$). If we further impose the symmetry $\eta{}\rightarrow{}-\eta$, $\chi{}\rightarrow{}-\chi{}$, which eliminates the cubic terms from the potential (i.e. $c=d=0$), then the underlying equations describing components in which both $a_{i}(y\rightarrow{}-\infty)=b_{i}(y\rightarrow{}+\infty)=-v$ and $a_{j}(y\rightarrow{}-\infty)=b_{j}(y\rightarrow{}+\infty)=+v$ are the same. Likewise, the equations describing the components for which $a_{i}(y\rightarrow{}-\infty)=-b_{i}(y\rightarrow{}+\infty)=-v$ and $a_{j}(y\rightarrow{}-\infty)=-b_{j}(y\rightarrow{}+\infty)=+v$ are also the same.
That means, when we set $c=d=0$, we can think of the solution for $\eta$ and $\chi$ as being of the form $$\label{eq:gensolutionetachi} \begin{aligned} \eta{(y)} &= \begin{pmatrix} +\eta_{-}(y)\mathbbm{1}_{(6-m)\times{}(6-m)} & 0 & 0 & 0 \\ 0 & +\eta_{+}(y)\mathbbm{1}_{m\times{}m} & 0 & 0 \\ 0 & 0 & -\eta_{+}(y)\mathbbm{1}_{m\times{}m} & 0 \\ 0 & 0 & 0 & -\eta_{-}(y)\mathbbm{1}_{(6-m)\times{}(6-m)} \end{pmatrix}, \\ \chi{(y)} &= \begin{pmatrix} +\chi_{-}(y)\mathbbm{1}_{(6-m)\times{}(6-m)} & 0 & 0 & 0 \\ 0 & +\chi_{+}(y)\mathbbm{1}_{m\times{}m} & 0 & 0 \\ 0 & 0 & -\chi_{+}(y)\mathbbm{1}_{m\times{}m} & 0 \\ 0 & 0 & 0 & -\chi_{-}(y)\mathbbm{1}_{(6-m)\times{}(6-m)} \end{pmatrix}, \end{aligned}$$ where $\eta_{\pm{}}$ and $\chi_{\pm{}}$ satisfy the boundary conditions $$\label{eq:etachiminusbcs} \begin{aligned} \eta_{-}(y\rightarrow{}-\infty) &= -v, \quad{} \eta_{-}(y\rightarrow{}+\infty) = 0, \\ \chi_{-}(y\rightarrow{}-\infty) &= 0, \quad{} \chi_{-}(y\rightarrow{}+\infty) = +v, \end{aligned}$$ and $$\label{eq:etachimplusbcs} \begin{aligned} \eta_{+}(y\rightarrow{}-\infty) &= -v, \quad{} \eta_{+}(y\rightarrow{}+\infty) = 0, \\ \chi_{+}(y\rightarrow{}-\infty) &= 0, \quad{} \chi_{+}(y\rightarrow{}+\infty) = -v. \end{aligned}$$ This means that Eq. 
\[eq:eulerlagrangeab\] now simplifies to $$\label{eq:eulerlagrange+-} \begin{aligned} &-\frac{d^{2}\eta_{\pm{}}}{dy^2}-2\mu^2\eta_{\pm{}}+4\lambda_{1}((12-2m)\eta^2_{-}+2m\eta^2_{+})\eta_{\pm{}}+4\lambda_{2}\eta^{3}_{\pm{}}+2\delta^{2}\chi_{\pm{}}+2l_{1}((12-2m)\chi^{2}_{-}+2m\chi^{2}_{+})\eta_{\pm{}}+2(l_{2}+l_{4})\chi^{2}_{\pm{}}\eta_{\pm{}} \\ &+2l_{3}((12-2m)\eta_{-}\chi_{-}+2m\eta_{+}\chi_{+})\chi_{\pm{}}+2l_{5}((12-2m)\eta_{-}\chi_{-}+2m\eta_{+}\chi_{+})\eta_{\pm{}}+l_{5}((12-2m)\eta^2_{-}+2m\eta^2_{+})\chi_{\pm{}} \\ &+l_{5}((12-2m)\chi^{2}_{-}+2m\chi^{2}_{+})\chi_{\pm{}}+3l_{6}\eta^2_{\pm{}}\chi_{\pm{}}+l_{6}\chi^{3}_{\pm{}} = 0, \\ &-\frac{d^{2}\chi_{\pm{}}}{dy^2}-2\mu^{2}\chi_{\pm{}}+4\lambda_{1}((12-2m)\chi^{2}_{-}+2m\chi^{2}_{+})\chi_{\pm{}}+4\lambda_{2}\chi^{3}_{\pm{}}+2\delta^{2}\eta_{\pm{}}+2l_{1}((12-2m)\eta^2_{-}+2m\eta^2_{+})\chi_{\pm{}}+2(l_{2}+l_{4})\eta^{2}_{\pm{}}\chi_{\pm{}} \\ &+2l_{3}((12-2m)\eta_{-}\chi_{-}+2m\eta_{+}\chi_{+})\eta_{\pm{}}+2l_{5}((12-2m)\eta_{-}\chi_{-}+2m\eta_{+}\chi_{+})\chi_{\pm{}}+l_{5}((12-2m)\chi^{2}_{-}+2m\chi^{2}_{+})\eta_{\pm{}} \\ &+l_{5}((12-2m)\eta^2_{-}+2m\eta^2_{+})\eta_{\pm{}}+3l_{6}\chi^2_{\pm{}}\eta_{\pm{}}+l_{6}\eta^{3}_{\pm{}} = 0. \end{aligned}$$ To analyze the stability of the solutions for the various values of $m$ under the simplifying assumptions and conditions that we have made, we need to solve Eq. \[eq:eulerlagrange+-\] numerically. We first do this initially with $\delta$, $l_{5}$ and $l_{6}$ set to zero. To calculate solutions numerically, we must work in non-dimensionalized coordinates, and in this particular case we do this by non-dimensionalizing the variables and parameters of the theory in terms of an arbitrary energy scale $k$. 
This means that we calculated the solutions numerically with respect to the non-dimensionalized coordinate $\tilde{y} = ky$, and the masses and coupling constants in terms of dimensionless numbers multiplied by the powers of $k$ corresponding to their mass dimensions. We calculated solutions to Eq. \[eq:eulerlagrange+-\] using the relaxation technique on a mesh with 2001 grid points, with the domain of $\tilde{y}$ truncated to $(-10, 10)$, and then calculated the energy density of the solutions for various values of $m$. The relaxation technique we used was a higher order technique, in which for most points we used not only the point $\tilde{y}_{i}$ and its neighbors $\tilde{y}_{i\pm{}1}$, but also the next nearest neighbors $\tilde{y}_{i\pm{}2}$ to approximate the second order derivatives of the functions generating the domain walls at $\tilde{y}_{i}$. This approximation of the second order derivative is accurate to $O(\epsilon^{4})$, where $\epsilon$ is the mesh spacing, and this means that when we apply this to the relaxation technique, the functions can be evaluated to an accuracy of $O(\epsilon^{6})$. For the $i=1$ and $i=1999$ points, the points which are neighbors to the points on the boundaries of the domain, we used a different combination of points, ranging from $\tilde{y}_{i-1}$ to $\tilde{y}_{i+4}$ for $i=1$, and from $\tilde{y}_{i-4}$ to $\tilde{y}_{i+1}$ for $i=1999$, to generate the same accuracy. We first did this for the parameter choice $$\begin{aligned} \label{eq:nonCoschoice} \mu^{2} &= 2.0k^2, \\ \lambda_{1} &= \frac{1.0}{k}, \\ \lambda_{2} &= \frac{1.0}{k}, \\ l_{1} &= \frac{8.0}{k}, \\ l_{2} &= \frac{7.0}{k}, \\ l_{3} &= -\frac{3.0}{k}, \\ l_{4} &= -\frac{2.0}{k}, \\ l_{5} &= 0.0, \\ l_{6} &= 0.0. 
\end{aligned}$$ With this parameter choice, we found that the energy densities, which we denote $\epsilon{(m)}$, for each of the choices of $m$ from zero to six were respectively $$\label{eq:nonCoschoiceenergydensities} \begin{aligned} \epsilon{(m = 0)} &= 0.806759k, \\ \epsilon{(m = 1)} &= 0.855328k, \\ \epsilon{(m = 2)} &= 0.907039k, \\ \epsilon{(m = 3)} &= 0.947975k, \\ \epsilon{(m = 4)} &= 0.907039k, \\ \epsilon{(m = 5)} &= 0.855328k, \\ \epsilon{(m = 6)} &= 0.806759k. \\ \end{aligned}$$ Hence, for the above choice, the non-Clash-of-Symmetries domain wall solutions, the $m=0$ and $m=6$ solutions, are the most stable and are degenerate, while the $SU(3)\times{}SU(3)\times{}SU(3)\times{}SU(3)\times{}U(1)$-generating wall corresponding to the choice $m=3$ is the least stable. This is not unexpected given that for the choice in Eq. \[eq:nonCoschoice\], the coupling constant $l_{3}$, which corresponds to the $[Tr(\eta{}\chi{})]^2$ interaction, is chosen to be negative. The term $[Tr(\eta{}\chi{})]^2$ is maximized when $m=0$ or $m=6$, since in this case the term is roughly proportional to $[Tr(AB)]^2 = [\pm{}Tr(A^2)]^2 = 1/4$ near $\tilde{y} = 0$, yielding a negative contribution to the energy density for negative $l_3$, while for the $m=3$ solution, $Tr(AB) = 0$, and hence the $[Tr(\eta{}\chi{})]^2$ interaction does not contribute to its energy density. Conversely, if we choose $l_3$ to be positive, we expect that the $m=3$ solution will be the most stable, and the $m=0$ and $m=6$ solutions will be the most unstable. For the parameter choice $$\label{eq:maximalCoschoice} \begin{aligned} \mu^{2} &= 2.0k^2, \\ \lambda_{1} &= \frac{1.0}{k}, \\ \lambda_{2} &= \frac{1.0}{k}, \\ l_{1} &= \frac{8.0}{k}, \\ l_{2} &= \frac{7.0}{k}, \\ l_{3} &= \frac{6.0}{k}, \\ l_{4} &= -\frac{2.0}{k}, \\ l_{5} &= 0.0, \\ l_{6} &= 0.0.
\end{aligned}$$ we find that the energy densities are $$\begin{aligned} \label{eq:maximalCoschoiceenergydensities} \epsilon{(m = 0)} = 1.077678k, \\ \epsilon{(m = 1)} = 0.985394k, \\ \epsilon{(m = 2)} = 0.956093k, \\ \epsilon{(m = 3)} = 0.947975k, \\ \epsilon{(m = 4)} = 0.956093k, \\ \epsilon{(m = 5)} = 0.985394k, \\ \epsilon{(m = 6)} = 1.077678k. \end{aligned}$$ Indeed, the $SU(3)\times{}SU(3)\times{}SU(3)\times{}SU(3)\times{}U(1)$ solution is the most stable for this choice, with the solutions with $m$ decreasing or increasing from $m=3$ getting progressively more unstable, leaving the $m=0$ and $m=6$ non-CoS domain walls with the highest energy density and the least amount of stability. With the above two choices, the outcome has been that either the non-CoS solutions ($m=0$ and $m=6$) or the $m=3$ solution is the most stable. However, the solutions we would like to make the most stable are the ones generating a localized $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$ subgroup, namely one of the $m=1$ or $m=5$ solutions. In the previous parameter choices, we turned off the interactions which were proportional to odd powers of either $\eta$ or $\chi$ (or both) as well as the mixed $Tr(\eta{}\chi{})$ mass term. In this area of parameter space, there is an enhanced symmetry, with $\eta{}\rightarrow{}\chi{}$, $\chi{}\rightarrow{}-\eta$ and $\eta{}\rightarrow{}-\eta{}$, $\chi{}\rightarrow{}-\chi{}$ being symmetries of the potential in Eq. \[eq:twoadjointpotentialparts\]. The second of these is what eliminates the cubic interactions and is what allows us to write the form of the solutions for $\eta$ and $\chi$ solely in terms of four functions: $\eta_{-}$, $\chi_{-}$, $\eta_{+}$ and $\chi_{+}$.
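The $Tr(AB)$ counting invoked above can be checked directly. The sketch below is our own check, assuming $A$ and $B$ take the diagonal forms implied by the asymptotic boundary conditions in Eq. \[eq:gensolutionetachi\], normalized so that $Tr(A^{2}) = Tr(B^{2}) = 1/2$ (consistent with the quoted $[\pm{}Tr(A^{2})]^{2} = 1/4$); it gives $Tr(AB) = (m-3)/6$, which is extremal in magnitude for $m = 0, 6$ and vanishes for $m = 3$.

```python
import numpy as np

# Check of the Tr(AB) argument, assuming A and B are the diagonal generators
# implied by the boundary conditions eta(-inf) = vA, chi(+inf) = vB with the
# block ansatz of Eq. (gensolutionetachi), normalized so Tr(A^2) = 1/2.
def generators(m):
    blocks = [6 - m, m, m, 6 - m]
    a_signs = [-1, -1, +1, +1]   # eta_-, eta_+ -> -v at y -> -infinity
    b_signs = [+1, -1, +1, -1]   # chi_- -> +v, chi_+ -> -v at y -> +infinity
    A = np.concatenate([s * np.ones(n) for s, n in zip(a_signs, blocks)])
    B = np.concatenate([s * np.ones(n) for s, n in zip(b_signs, blocks)])
    norm = np.sqrt(24.0)         # entries +-1/sqrt(24) give Tr(A^2) = 1/2
    return np.diag(A) / norm, np.diag(B) / norm

for m in range(7):
    A, B = generators(m)
    print(m, np.trace(A @ B))    # equals (m - 3)/6
```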
If the second of these symmetries is broken by allowing cubic interactions, then eight functions are required: the functions which take a component along the diagonal of $\eta$ from $-v$ to zero and the corresponding component of $\chi$ from zero to $+v$ are not exactly the negative of the functions which take a component along the diagonal of $\eta$ from $+v$ to zero and the corresponding component of $\chi$ from zero to $-v$. Hence, for simplicity of analysis, we will keep the second of these symmetries and keep the cubic interactions set to zero. The first of the symmetries mentioned in the above paragraph, $\eta{}\rightarrow{}\chi{}$, $\chi{}\rightarrow{}-\eta$, is the one which arises by setting the quartic interactions proportional to odd powers of both $\eta$ and $\chi$ as well as the mixed mass $Tr[\eta{}\chi{}]$ term to zero. It is the one that is responsible for the degeneracy in energy density between solutions with $m = n$ and $m = 6-n$. We will break this symmetry by turning on these terms, thus breaking the degeneracies between solutions with $m = n$ and $m = 6-n$ for $n = 0, 1, 2$. If $l_{3}$ is negative, the energy density of one of the non-CoS domain walls will be raised but the other lowered, and terms like $Tr(\eta{}\chi^3)$ are still maximized in magnitude for the non-CoS solutions, so in this case one of the non-CoS domain walls will be the most stable. Hence, the parameter region we are interested in is one where $l_{3}$ is positive. In the following analysis, we fix $\delta^2$ to be $$\label{eq:deltafixing} \delta^2 = -\frac{12l_{5}+l_{6}}{48\lambda_{1}+4\lambda_{2}}\mu^{2}.$$ We make this fixing purely for computational convenience, since it ensures that the minima remain of the form $\eta{}\neq{}0$, $\chi = 0$ and $\eta{} = 0$, $\chi{}\neq{}0$. The main worry if we break this fixing is whether the global minima still generate the same symmetry breaking patterns, since they will have both $\eta$ and $\chi$ non-zero. 
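The fixing in Eq. \[eq:deltafixing\] can be verified numerically. The sketch below is our own check, with illustrative parameter values, assuming the normalization $Tr(A^{2}) = 1/2$ (so that $A^{2} = \mathbbm{1}/24$) and the vacuum value $v^{2} = 12\mu^{2}/(12\lambda_{1}+\lambda_{2})$ that then follows from the static $\eta$ equation; with $c = d = 0$, the configuration $\eta = vA$, $\chi = 0$ satisfies both static Euler-Lagrange equations of Eq. \[eq:eulerlagrangetwoadjointscalarpotential\].

```python
import numpy as np

# Verify the delta^2 fixing of Eq. (deltafixing): at eta = v*A, chi = 0 with
# c = d = 0, the static eta equation reduces to
#   -2 mu^2 eta + 4 lam1 Tr(eta^2) eta + 4 lam2 eta^3 = 0,
# and the static chi equation to
#   2 delta^2 eta + l5 Tr(eta^2) eta + l6 eta^3 = 0.
mu2, lam1, lam2 = 2.0, 1.0, 1.0      # illustrative values in units of k
l5, l6 = -2.2, -2.0
delta2 = -(12 * l5 + l6) / (48 * lam1 + 4 * lam2) * mu2   # Eq. (deltafixing)

A = np.diag(np.r_[-np.ones(6), np.ones(6)]) / np.sqrt(24.0)  # Tr(A^2) = 1/2
v = np.sqrt(12 * mu2 / (12 * lam1 + lam2))   # vev from the eta equation
eta = v * A

eq_eta = -2 * mu2 * eta + 4 * lam1 * np.trace(eta @ eta) * eta \
         + 4 * lam2 * eta @ eta @ eta
eq_chi = 2 * delta2 * eta + l5 * np.trace(eta @ eta) * eta \
         + l6 * eta @ eta @ eta
print(np.max(np.abs(eq_eta)), np.max(np.abs(eq_chi)))   # both vanish
```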
There is reason to believe that this is the case, since if we perturb away from the fixing in Eq. \[eq:deltafixing\], we can analyze what happens to the minima by perturbing around $\eta{}\neq{}0$, $\chi = 0$ (and vice versa). Consider a minimum of the form $\eta = vA$, $\chi = \epsilon{}C$, where $C$ is a generator, and $\epsilon$ is a small number resulting from perturbing away from the condition in Eq. \[eq:deltafixing\]. Then, to first order in $\epsilon$, only the mixed mass term $Tr[\eta{}\chi{}]$ and the $Tr[\eta^{3}\chi{}]$ and $Tr[\eta^{2}]Tr[\eta{}\chi{}]$ interactions contribute to the perturbation in energy of the minima. Given that $A^2$ is proportional to the identity, the contributions from all these terms to the perturbation in energy are proportional to $Tr(AC)$. Hence, the symmetry breaking pattern that will result will be the one which extremizes $Tr(AC)$ such that the perturbation in energy is minimal, which will correspond to the case that $Tr(AC)$ is maximally positive if $\epsilon$ is negative, or to the case that $Tr(AC)$ is maximally negative if $\epsilon{}$ is positive. These cases happen respectively if $C$ is either totally aligned or totally anti-aligned with $A$: in other words, if $C = \pm{}A$. This means that, at least for a small perturbation, the symmetry breaking patterns of the minima remain the same. Given the assumptions we have made, we indeed find that it is possible to make one of the desired $m=1$ or $m=5$ solutions the most energetically stable. For the parameter choice $$\label{eq:su5xsu5Coschoice} \begin{aligned} \mu^{2} &= 2.0k^2, \\ \lambda_{1} &= \frac{1.0}{k}, \\ \lambda_{2} &= \frac{1.0}{k}, \\ l_{1} &= \frac{8.0}{k}, \\ l_{2} &= \frac{7.0}{k}, \\ l_{3} &= \frac{6.0}{k}, \\ l_{4} &= -\frac{2.0}{k}, \\ l_{5} &= -\frac{2.2}{k}, \\ l_{6} &= -\frac{2.0}{k}, \end{aligned}$$ along with the condition used in Eq.
\[eq:deltafixing\], we find that the resultant energy densities are $$\label{eq:su5xsu5Coschoiceenergydensities} \begin{aligned} \epsilon{(m = 0)} = 0.893757k, \\ \epsilon{(m = 1)} = 0.883037k, \\ \epsilon{(m = 2)} = 0.891332k, \\ \epsilon{(m = 3)} = 0.913322k, \\ \epsilon{(m = 4)} = 0.952365k, \\ \epsilon{(m = 5)} = 1.023219k, \\ \epsilon{(m = 6)} = 1.220998k. \end{aligned}$$ The graphs of $\tilde{\eta}_{-} = \eta_{-}k^{-3/2}$, $\tilde{\chi}_{-} = \chi_{-}k^{-3/2}$, $\tilde{\eta}_{+} = \eta_{+}k^{-3/2}$ and $\tilde{\chi}_{+} = \chi_{+}k^{-3/2}$ for the various choices of $m$ for this parameter choice are shown in Figs. \[fig:etaminus3plots\], \[fig:chiminus3plots\], \[fig:etaplus3plots\], and \[fig:chiplus3plots\]. ![A plot of the solutions for $\tilde{\eta}_{-}$ for $0\leq{}m\leq{}5$ for the parameter choice in Eq. \[eq:su5xsu5Coschoice\] subject to the constraint in Eq. \[eq:deltafixing\].[]{data-label="fig:etaminus3plots"}](etaminus3tilde.pdf) ![A plot of the solutions for $\tilde{\chi}_{-}$ for $0\leq{}m\leq{}5$ for the parameter choice in Eq. \[eq:su5xsu5Coschoice\] subject to the constraint in Eq. \[eq:deltafixing\].[]{data-label="fig:chiminus3plots"}](chiminus3tilde.pdf) ![A plot of the solutions for $\tilde{\eta}_{+}$ for $1\leq{}m\leq{}6$ for the parameter choice in Eq. \[eq:su5xsu5Coschoice\] subject to the constraint in Eq. \[eq:deltafixing\].[]{data-label="fig:etaplus3plots"}](etaplus3tilde.pdf) ![A plot of the solutions for $\tilde{\chi}_{+}$ for $1\leq{}m\leq{}6$ for the parameter choice in Eq. \[eq:su5xsu5Coschoice\] subject to the constraint in Eq. \[eq:deltafixing\].[]{data-label="fig:chiplus3plots"}](chiplus3tilde.pdf) To conclude this section, we have successfully generated a parameter choice for which one of the domain walls leading to a localized $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$ gauge group is the most energetically stable. 
The next step is to show that fermions and scalars can be localized in an acceptable way, which we shall do in the following two sections. One worry about the solutions resulting from the parameter choice in Eq. \[eq:su5xsu5Coschoice\] is that the energy density of the desired $m=1$ solution only differs from the $m=0$ and $m=2$ solutions by about one percent. It is thus plausible that the desired construction could be unstable when we account for quantum corrections. We leave the analysis of these corrections to later work. In the section following those on fermion and scalar localization, we detail several alternative models. Fermion Localization and the Elimination of Fermionic Mediators from the Spectrum of the 3+1D Effective Field Theory {#sec:fermionlocalization} ==================================================================================================================== In this section, we will show how to couple fermions to the $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$-generating domain-wall solutions described in the previous section. From the last section, depending on the parameter region we choose, there are two options which generate the desired localized group: the $m=1$ solution, for which $\eta$ and $\chi$ can be described as being composed of five copies of $\pm{}(\eta_{-}, \chi_{-})$ and one of $\pm{}(\eta_{+}, \chi_{+})$, and the $m=5$ solution, which can be described by five copies of $\pm{}(\eta_{+}, \chi_{+})$ and one of $\pm{}(\eta_{-}, \chi_{-})$. Normally, when coupling a 4+1D fermion field $\Psi$ to a scalar field $\phi$, which transforms under a discrete $\mathbb{Z}_{2}$ reflection symmetry $\phi{}\rightarrow{}-\phi{}$ and which generates a domain wall, the only acceptable Yukawa coupling one can write down is $h\overline{\Psi}\Psi{}\phi$.
For this interaction, the reflection symmetry is extended so that $\overline{\Psi}\Psi{}\rightarrow{}-\overline{\Psi}\Psi{}$ (which can be achieved by taking $\Psi\rightarrow{}i\Gamma^{5}\Psi$), and, depending on the sign of the coupling constant $h$, this interaction leads to an effective $y$-dependent mass term which is either a kink or an anti-kink, leading respectively to either a localized, massless left-chiral or right-chiral zero mode. These types of fermionic chiral zero modes are generally the candidates for embedding the Standard Model fermions. In the case of the CoS domain walls from the two-field model with an interchange symmetry described in the previous section, the various $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$-covariant components embedded in the $SU(12)$ multiplets have different localization properties, and, furthermore, we have two different types of Yukawa coupling. The two ways of Yukawa coupling a fermion to $\eta$ and $\chi$ which respect the interchange symmetry $\eta{}\leftrightarrow{}\chi$ as well as $SU(12)$ can be described as follows. Let $\Psi_{R}$ be a fermion in some non-trivial representation $R$ of $SU(12)$. Then we can either couple $\Psi_{R}$ to $\eta$ and $\chi$ as $$\label{eq:yukawacoupling1} h(\overline{\Psi_{R}}\eta{}\Psi_{R})_{1}+h(\overline{\Psi_{R}}\chi{}\Psi_{R})_{1},$$ with $\Psi_{R}$ invariant under the discrete interchange symmetry, or, secondly, as $$\label{eq:yukawacoupling2} h'(\overline{\Psi_{R}}\eta{}\Psi_{R})_{1}-h'(\overline{\Psi_{R}}\chi{}\Psi_{R})_{1},$$ with $\Psi\rightarrow{}i\Gamma^{5}\Psi$ under the discrete interchange symmetry. Here, in both equations, the $1$ subscript denotes taking the gauge singlet component of the $\overline{R}\times{}Adjoint\times{}R$ structure which arises in Yukawa couplings between $\Psi_{R}$, $\eta$ and $\chi$. The $m=1$ solution is effectively a domain wall between $\eta = vA$, $\chi = 0$ at negative infinity and $\eta = 0$, $\chi = vB$ at positive infinity.
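The kink/anti-kink zero-mode mechanism just described can be made concrete with a short numerical sketch. Here we assume an illustrative kink profile $\phi(y) = v\tanh(ky)$ (a stand-in, not the actual $\eta$, $\chi$ solution); the candidate zero modes are $f_{L,R}(y) \propto \exp(\mp{}\int_{0}^{y}W(y')\,dy')$ with $W = h\phi$, and for $h > 0$ only the left-chiral profile is normalizable.

```python
import numpy as np

# Jackiw-Rebbi zero modes for an assumed kink profile phi(y) = v*tanh(k*y)
# (illustrative only).  Candidate zero modes: f_{L,R} ~ exp(-/+ int W),
# with the effective mass term W = h*phi.
hc, v, kk = 2.0, 1.0, 1.0            # hc plays the role of the coupling h
y = np.linspace(-20.0, 20.0, 4001)
dy = y[1] - y[0]
W = hc * v * np.tanh(kk * y)

# cumulative trapezoidal integral of W, fixed to vanish at y = 0
intW = np.concatenate(([0.0], np.cumsum((W[1:] + W[:-1]) / 2) * dy))
intW -= intW[len(y) // 2]

fL = np.exp(-intW)                   # = cosh(k*y)^(-h*v/k): normalizable
fR = np.exp(+intW)                   # grows at both ends for h > 0
fL /= np.sqrt(np.sum(fL**2) * dy)    # normalize the left-chiral zero mode

print(fL[len(y) // 2], fR[0] / fR[len(y) // 2])
```

Flipping the sign of $h$ interchanges the two candidates, so the right-chiral mode becomes the normalizable one; this sign dependence is exactly what drives the chirality assignments below.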
It is therefore helpful to notice the charges of the various $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$ components of the $12$ and $66$ representations under $A$ and $B$. Under $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}\times{}U(1)_{A}\times{}U(1)_{B}$, the fundamental $12$ representation breaks down as $$\label{eq:fundamental12charges} 12 = (5, 1, +1, -1, +1)\oplus{}(1, 5, +1, +1, -1)\oplus{}(1, 1, -5, -1, -1)\oplus{}(1, 1, -5, +1, +1),$$ and the rank two anti-symmetric $66$ representation breaks down as $$\label{eq:rank2antisymmetric66charges} \begin{gathered} 66 = (10, 1, +2, -2, +2)\oplus{}(1, 10, +2, +2, -2)\oplus{}(5, 5, +2, 0, 0)\oplus{}(5, 1, -4, -2, 0) \\ \oplus{}(5, 1, -4, 0, +2)\oplus{}(1, 5, -4, 0, -2)\oplus{}(1, 5, -4, +2, 0)\oplus{}(1, 1, -10, 0 , 0). \end{gathered}$$ Let’s now consider applying the coupling in Eq. \[eq:yukawacoupling1\] to the components of the fundamental. We shall label the $(5, 1, +1, -1, +1)$ component by $\Psi^{5V}$, the $(1, 5, +1, +1, -1)$ component by $\Psi^{5D}$, the $(1, 1, -5, -1, -1)$ component by $\Psi^{--}$ and the $(1, 1, -5, +1, +1)$ component by $\Psi^{++}$. 
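These decompositions can be sanity-checked mechanically: the component dimensions must sum to 12 and 66, and each $U(1)$ charge, being a traceless generator of $SU(12)$, must sum to zero over a complete multiplet when weighted by component dimension. A minimal check:

```python
# Sanity check of the SU(5)_V x SU(5)_D x U(1)_X x U(1)_A x U(1)_B
# decompositions of the 12 and 66 of SU(12).  Each entry is
# (SU(5)_V dim, SU(5)_D dim, X charge, A charge, B charge).
twelve = [(5, 1, +1, -1, +1), (1, 5, +1, +1, -1),
          (1, 1, -5, -1, -1), (1, 1, -5, +1, +1)]
sixty_six = [(10, 1, +2, -2, +2), (1, 10, +2, +2, -2), (5, 5, +2, 0, 0),
             (5, 1, -4, -2, 0), (5, 1, -4, 0, +2), (1, 5, -4, 0, -2),
             (1, 5, -4, +2, 0), (1, 1, -10, 0, 0)]

def dim_and_traces(rep):
    # total dimension, and the dimension-weighted trace of each U(1) charge
    dim = sum(c[0] * c[1] for c in rep)
    traces = [sum(c[0] * c[1] * c[2 + i] for c in rep) for i in range(3)]
    return dim, traces

print(dim_and_traces(twelve))      # (12, [0, 0, 0])
print(dim_and_traces(sixty_six))   # (66, [0, 0, 0])
```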
Generally, if the coupling of a fermion to a domain wall is represented by a $\tilde{y}$-dependent mass term $W(\tilde{y})$, so that the resultant $4+1D$ Dirac equation is given by $$\label{eq:4+1DDiracfromsuperpotential} i\Gamma^{M}\partial_{M}\Psi - W(\tilde{y})\Psi = 0,$$ and we expand $\Psi$ as a tower of left- and right-chiral modes of the form $$\label{eq:towerofmodes} \Psi{(x, \tilde{y})} = \sum_{m} f^{m}_{L}(\tilde{y})\psi^{m}_{L}(x)+ f^{m}_{R}(\tilde{y})\psi^{m}_{R}(x),$$ where the $3+1D$ modes $\psi_{L,R}$ satisfy the $3+1D$ Dirac equation $$\label{eq:3+1Ddirac} i\gamma^{\mu}\partial_{\mu}\psi_{L,R} = m\psi_{R,L},$$ then the profiles $f^{m}_{L,R}$ satisfy Schrödinger equations with the potentials $$\label{eq:leftSEpotential} V_{L}(\tilde{y}) = W(\tilde{y})^2-W'(\tilde{y}),$$ for the left-chiral modes, and $$\label{eq:rightSEpotential} V_{R}(\tilde{y}) = W(\tilde{y})^2+W'(\tilde{y}),$$ for the right-chiral modes. Note that given the above potentials are of the form of those that arise in supersymmetric quantum mechanics, $W(\tilde{y})$ can be thought of as a superpotential. Applying the interaction of Eq. \[eq:yukawacoupling1\] to the components of the fundamental $12$ representation, we obtain the superpotentials $$\label{eq:visiblequintetsuperpot} W^{5V}(\tilde{y}) = h[\eta_{-}(\tilde{y})+\chi_{-}(\tilde{y})],$$ for the visible quintet, $$\label{eq:darkquintetsuperpot} W^{5D}(\tilde{y}) = -h[\eta_{-}(\tilde{y})+\chi_{-}(\tilde{y})] = -W^{5V}(\tilde{y}),$$ for the dark quintet, $$\label{eq:mmsingletsuperpot} W^{--}(\tilde{y}) = h[\eta_{+}(\tilde{y})+\chi_{+}(\tilde{y})],$$ for the $\Psi^{--}$ singlet component, and $$\label{eq:ppsingletsuperpot} W^{++}(\tilde{y}) = -h[\eta_{+}(\tilde{y})+\chi_{+}(\tilde{y})] = -W^{--}(\tilde{y}),$$ for the $\Psi^{++}$ singlet component. To know if we will end up with chiral zero modes for the visible and dark quintets, we need to know the form of $\eta_{-}+\chi_{-}$.
This should be kink-like, as $\eta_{-}\rightarrow{}-v$, $\chi_{-}\rightarrow{}0$ as $\tilde{y}\rightarrow{}-\infty$ (or $-10$, in our truncation), and $\eta_{-}\rightarrow{}0$, $\chi_{-}\rightarrow{}+v$ as $\tilde{y}\rightarrow{}+\infty$, which means that $\eta_{-}+\chi_{-}\rightarrow{}\pm{}v$ as $\tilde{y}\rightarrow{}\pm{}\infty{}$. As the plot in Fig. \[fig:etaminuspluschiminus13\] of $\eta_{-}+\chi_{-}$ for the $m=1$ solution for the parameter choice of Eq. \[eq:su5xsu5Coschoice\] shows, it is indeed kink-like. This means that the standard result of Jackiw and Rebbi [@jackiwrebbi] for chiral zero modes holds for the superpotential $W^{5V}(\tilde{y})$, and we will obtain a single left-chiral zero mode for the visible quintet if $h$ is positive, or a single right-chiral zero mode if $h$ is negative. Interestingly, due to the relative minus sign in Eq. \[eq:darkquintetsuperpot\], which is due to the visible and dark quintets having the opposite charges under $A$ and $B$, the spectra for the left- and right-chiral modes for the dark quintet are flipped with respect to those for the visible quintet. This means that for $h>0$, we will obtain a right-chiral zero mode for the dark quintet, or a left-chiral zero mode if $h<0$. Thus, given that the zero modes for the visible and dark quintets have opposite chirality, this suggests the possibility of reproducing a mirror matter theory on the domain-wall brane. ![A plot of the solutions for $\tilde{\eta}_{-}+\tilde{\chi}_{-}$ for $m=1$ for the parameter choice in Eq. \[eq:su5xsu5Coschoice\] subject to the constraint in Eq. \[eq:deltafixing\].[]{data-label="fig:etaminuspluschiminus13"}](minuswall13tilde.pdf) To calculate some of the profiles for the modes of the visible and dark quintets, we again use dimensionless variables.
We first define the dimensionless Yukawa coupling by $$\tilde{h} = \frac{hv}{k},$$ and the non-dimensionalized profiles by $$\label{eq:nondimfermprofiles} \tilde{f}_{L,R}(\tilde{y}) = k^{-\frac{1}{2}}f_{L,R}(\tilde{y}),$$ and we utilize the same dimensionless coordinate $\tilde{y}$ from the previous section. We solve the relevant differential equations on the same mesh that we used before, with the domain of $\tilde{y}$ truncated to $(-10, 10)$ and split into 2000 intervals, and thus we solve for the profile functions on 2001 mesh points. We solve for the profile functions in the usual way, by defining $f(\tilde{y}_{i}) = 0$ for $i=0$ and $i=2000$ (here $\tilde{y}_{0} = -10$, $\tilde{y}_{2000} = +10$), and then writing the Hamiltonian operator for the relevant Schrödinger equations in terms of the $f(\tilde{y}_{i})$, with the second order derivative in $\tilde{y}$ of a profile $f$ at $\tilde{y}_{i}$ calculated in terms of $f$ computed at the adjacent points. This turns the Schrödinger equation into an eigenvalue/eigenvector problem for a symmetric matrix on a 2001-dimensional vector space, with the components of the eigenvectors in this space being the values of the eigenfunction $f$ at the various mesh points $\tilde{y}_{i}$. We calculate all the derivatives in the Hamiltonian to sixth order in the mesh spacing, which we will call here $\epsilon$. This means that we calculate the kinetic term as well as the derivative of the superpotential $W$ in terms of the relevant functions evaluated not only at $\tilde{y}_{i-1}$ and $\tilde{y}_{i+1}$, but also at $\tilde{y}_{i-2}$, $\tilde{y}_{i+2}$, $\tilde{y}_{i-3}$ and $\tilde{y}_{i+3}$. Because the derivative of the superpotential will involve differencing the known functions $\eta_{\pm{}}$ and $\chi_{\pm{}}$, which are known to $O(\epsilon^{6})$ in the mesh spacing, this term, and thus the whole Hamiltonian operator, is known to $O(\epsilon^{5})$.
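A stripped-down version of this construction can serve as a check of the method. The sketch below is our own: it uses a second-order stencil instead of the sixth-order one just described, and an analytic superpotential $W(\tilde{y}) = a\tanh(\tilde{y})$ in place of the numerical kink, for which the left-chiral (Pöschl-Teller) spectrum is known exactly, $E_{n} = a^{2}-(a-n)^{2}$.

```python
import numpy as np

# Discretize H_L = -d^2/dy^2 + W^2 - W' with Dirichlet boundary conditions,
# using a second-order stencil and the illustrative choice W(y) = a*tanh(y),
# whose bound-state spectrum is exactly E_n = a^2 - (a - n)^2.
a, L, N = 3.0, 10.0, 1001
y = np.linspace(-L, L, N)
h = y[1] - y[0]
W = a * np.tanh(y)
V_L = W**2 - a / np.cosh(y)**2     # W'(y) = a*sech^2(y)

# symmetric tridiagonal Hamiltonian acting on the interior mesh points
H = np.diag(2.0 / h**2 + V_L[1:-1])
H += np.diag(-np.ones(N - 3) / h**2, 1) + np.diag(-np.ones(N - 3) / h**2, -1)
E = np.linalg.eigvalsh(H)[:3]
print(E)                            # close to [0, 5, 8] for a = 3
```

The zero eigenvalue is the localized chiral zero mode; the excited bound states sit at $2a-1$ and $4a-4$, matching the Pöschl-Teller formula.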
All of this means that instead of having a Hamiltonian which is a symmetric tridiagonal matrix, we end up with a Hamiltonian which is a symmetric hepta-diagonal matrix. This makes things a bit more complicated and slower in terms of computation but is nevertheless doable: we first convert the hepta-diagonal matrix to a tridiagonal matrix via a series of Householder transformations, calculate the eigenvalues and eigenvectors of the tridiagonal matrix, and then transform back to the original basis to get the eigenvectors of the hepta-diagonal matrix. We then produced plots for the ground state and the first and second excited states of both the left- and right-chiral towers for each of the components of the fundamental, for the choices $\tilde{h} = 10$, $\tilde{h} = 100$, and $\tilde{h} = 1000$, which are shown in Figs. \[fig:leftchiralh10\], \[fig:rightchiralh10\], \[fig:leftchiralh100\], \[fig:rightchiralh100\], \[fig:leftchiralh1000\] and \[fig:rightchiralh1000\]. For $\tilde{h} = 10$, the squared masses of the first few localized, left-chiral modes, which we label $m^{2}_{L,gs}$, $m^{2}_{L,1e}$ and $m^{2}_{L,2e}$, for the visible quintet are $$\begin{aligned} \label{eq:masssqrleft12visibleh10} m^{2}_{L,gs} &= 0, \\ m^{2}_{L,1e} &= 5.8010k^2, \\ m^{2}_{L,2e} &= 7.7023k^2. \end{aligned}$$ Similarly, those for the first few right-chiral modes are $$\begin{aligned} \label{eq:masssqrright12visibleh10} m^{2}_{R,gs} &= 5.8010k^2, \\ m^{2}_{R,1e} &= 7.7005k^2, \\ m^{2}_{R,2e} &= 7.8002k^2. \end{aligned}$$ The squared masses for the first few left- and right-chiral modes for the dark quintet are, as implied previously, the same as those just above for the visible quintet but with the chiralities reversed. For $\tilde{h} = 100$, the squared masses of the first few localized chiral modes of the visible quintet are $$\begin{aligned} \label{eq:masssqrleft12visibleh100} m^{2}_{L,gs} &= 0, \\ m^{2}_{L,1e} &= 76.4038k^2, \\ m^{2}_{L,2e} &= 148.6622k^2.
\end{aligned}$$ and $$\begin{aligned} \label{eq:masssqrright12visibleh100} m^{2}_{R,gs} &= 76.4038k^2, \\ m^{2}_{R,1e} &= 148.6622k^2, \\ m^{2}_{R,2e} &= 216.7872k^2. \end{aligned}$$ For $\tilde{h} = 1000$, the squared masses of the first few localized chiral modes of the visible quintet are $$\begin{aligned} \label{eq:masssqrleft12visibleh1000} m^{2}_{L,gs} &= 0, \\ m^{2}_{L,1e} &= 782.7132k^2, \\ m^{2}_{L,2e} &= 1561.2653k^2, \end{aligned}$$ and $$\begin{aligned} \label{eq:masssqrright12visibleh1000} m^{2}_{R,gs} &= 782.7153k^2, \\ m^{2}_{R,1e} &= 1561.2736k^2, \\ m^{2}_{R,2e} &= 2335.6715k^2. \end{aligned}$$ ![A plot of the first three left-chiral (right-chiral) modes, including the zero mode, of the visible (dark) quintet for $\tilde{h}=10$.[]{data-label="fig:leftchiralh10"}](leftfermiongs1e2eh10.pdf) ![A plot of the first three right-chiral (left-chiral) modes of the visible (dark) quintet for $\tilde{h}=10$.[]{data-label="fig:rightchiralh10"}](rightfermiongs1e2eh10.pdf) ![A plot of the first three left-chiral (right-chiral) modes, including the zero mode, of the visible (dark) quintet for $\tilde{h}=100$.[]{data-label="fig:leftchiralh100"}](leftfermiongs1e2eh100.pdf) ![A plot of the first three right-chiral (left-chiral) modes of the visible (dark) quintet for $\tilde{h}=100$.[]{data-label="fig:rightchiralh100"}](rightfermiongs1e2eh100.pdf) ![A plot of the first three left-chiral (right-chiral) modes, including the zero mode, of the visible (dark) quintet for $\tilde{h}=1000$.[]{data-label="fig:leftchiralh1000"}](leftfermiongs1e2eh1000.pdf) ![A plot of the first three right-chiral (left-chiral) modes of the visible (dark) quintet for $\tilde{h}=1000$.[]{data-label="fig:rightchiralh1000"}](rightfermiongs1e2eh1000.pdf) Now we repeat the analysis for the singlet components of the fundamental. These experience a superpotential proportional to $\eta_{+}+\chi_{+}$.
Given that $\eta_{+}\rightarrow{}-v$, $\chi_{+}\rightarrow{}0$ as $\tilde{y}\rightarrow{}-\infty$, and $\eta_{+}\rightarrow{}0$, $\chi_{+}\rightarrow{}-v$ as $\tilde{y}\rightarrow{}+\infty$, it is obvious that $\eta_{+}+\chi_{+}$ is not kink-like, and, given that it approaches the same non-zero, constant value at both positive and negative infinity, there will not exist a normalizable profile for either a left- or right-chiral zero mode. Considering that, in most instances, $\eta_{+}$ can be approximated by something proportional to $M(1-\tanh{(k_{+}y)})/2$ and $\chi_{+}$ can be approximated by something proportional to $M(1+\tanh{(k_{+}y)})/2$, for some mass scale $M$ and inverse wall width $k_{+}$, we anticipate that $\eta_{+}+\chi_{+}$ will behave to a first order approximation as a simple 5D bulk mass $M$, and that only massive modes will exist. This is in fact the case, and it is easy to see this from plotting the potentials $V^{--}_{L, R}$ ($V^{++}_{R, L}$) that arise from the superpotential $W^{--}(\tilde{y})$ ($W^{++}(\tilde{y})$), which are shown along with the superpotential in Fig. \[fig:fermionsingletpotentialplot\] for $\tilde{h} = 10$. We can see clearly that despite the existence of small wells near the centre of the wall, the potentials are positive definite and never drop below about $6.5k^2$. Thus, in this case at least, only massive modes exist. We find that this property holds for $\tilde{h} = 100$ and $\tilde{h} = 1000$ as well, although the wells are a bit deeper, supporting more localized massive modes.
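The absence of singlet zero modes can be demonstrated with the same spectral machinery. The sketch below is our own illustration: it models $h(\eta_{+}+\chi_{+})$ by a superpotential with equal-sign asymptotics, $W(y) = -M + b\,\mathrm{sech}^{2}(y)$, for which the candidate zero mode $\exp(-\int{}W)\sim{}e^{My}$ is non-normalizable; both chiral towers are then strictly massive and, with no zero mode, the two partner spectra coincide.

```python
import numpy as np

# Superpotential with equal-sign asymptotics, modeling h*(eta_+ + chi_+):
# no normalizable zero mode, so both chiralities are massive, and the
# partner Hamiltonians H_{L,R} = -d^2/dy^2 + W^2 -/+ W' are isospectral.
M, b, L, N = 2.8, 0.8, 10.0, 1001
y = np.linspace(-L, L, N)
h = y[1] - y[0]
W = -M + b / np.cosh(y)**2
Wp = -2.0 * b * np.sinh(y) / np.cosh(y)**3     # dW/dy

def lowest_eig(V):
    # lowest eigenvalue of -d^2/dy^2 + V with Dirichlet boundary conditions
    H = np.diag(2.0 / h**2 + V[1:-1])
    H += np.diag(-np.ones(N - 3) / h**2, 1) + np.diag(-np.ones(N - 3) / h**2, -1)
    return np.linalg.eigvalsh(H)[0]

eL = lowest_eig(W**2 - Wp)
eR = lowest_eig(W**2 + Wp)
print(eL, eR)        # both well above zero, and numerically nearly equal
```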
![Plots of $W^{--}(\tilde{y})/k$, $V^{--}_{L}(\tilde{y})/k^2$ ($V^{++}_{R}(\tilde{y})/k^2$), and $V^{--}_{R}(\tilde{y})/k^2$ ($V^{++}_{L}(\tilde{y})/k^2$) for $\tilde{h}=10$.[]{data-label="fig:fermionsingletpotentialplot"}](fermionsingletpotentials.pdf) For $\tilde{h} = 10$, the squared masses of the first few left-chiral (right-chiral) modes of the $\Psi^{--}$ ($\Psi^{++}$) singlet are $$\begin{aligned} \label{eq:masssqrleftsingleth10} m^{2}_{L,gs} &= 7.3984k^2, \\ m^{2}_{L,1e} &= 7.8209k^2, \\ m^{2}_{L,2e} &= 7.8542k^2. \end{aligned}$$ Similarly, those for the first few right-chiral (left-chiral) modes are $$\begin{aligned} \label{eq:masssqrrightsingleth10} m^{2}_{R,gs} &= 7.3984k^2, \\ m^{2}_{R,1e} &= 7.8209k^2, \\ m^{2}_{R,2e} &= 7.8542k^2. \end{aligned}$$ For $\tilde{h} = 100$, the squared masses of the first few localized chiral modes for the singlets are $$\begin{aligned} \label{eq:masssqrleftsingleth100} m^{2}_{L,gs} &= 674.2393k^2, \\ m^{2}_{L,1e} &= 696.7534k^2, \\ m^{2}_{L,2e} &= 717.2189k^2, \end{aligned}$$ and $$\begin{aligned} \label{eq:masssqrrightsingleth100} m^{2}_{R,gs} &= 674.2393k^2, \\ m^{2}_{R,1e} &= 696.7534k^2, \\ m^{2}_{R,2e} &= 717.2189k^2. \end{aligned}$$ For $\tilde{h} = 1000$, the squared masses of the first few localized chiral modes for the singlets are $$\begin{aligned} \label{eq:masssqrleftsingleth1000} m^{2}_{L,gs} &= 66375.7932k^2, \\ m^{2}_{L,1e} &= 66618.2533k^2, \\ m^{2}_{L,2e} &= 66858.8267k^2, \end{aligned}$$ and $$\begin{aligned} \label{eq:masssqrrightsingleth1000} m^{2}_{R,gs} &= 66375.7932k^2, \\ m^{2}_{R,1e} &= 66618.2533k^2, \\ m^{2}_{R,2e} &= 66858.8267k^2. \end{aligned}$$ We show the plots for the first several modes for the choice $\tilde{h}=10$ in Fig. \[fig:leftchiralsingleth10\] and Fig. \[fig:rightchiralsingleth10\]. The plots for these massive modes for the corresponding choices $\tilde{h}=100$ and $\tilde{h}=1000$ are similar, but more localized.
![A plot of the first three left-chiral (right-chiral) modes of the $\Psi^{--}$ ($\Psi^{++}$) singlet for $\tilde{h}=10$.[]{data-label="fig:leftchiralsingleth10"}](leftsingleth10gs1e2e.pdf) ![A plot of the first three right-chiral (left-chiral) modes of the $\Psi^{--}$ ($\Psi^{++}$) singlet for $\tilde{h}=10$.[]{data-label="fig:rightchiralsingleth10"}](rightsingleth10gs1e2e.pdf) We now deal with coupling a fermion, $\Psi_{66}$, in the $66$ representation to the domain wall through an interaction of the form of Eq. \[eq:yukawacoupling1\]. For this interaction, we choose for convenience the normalization $$\label{eq:66coupling} Y_{66} = 2hTr(\overline{\Psi_{66}}\eta{}\Psi_{66})+2hTr(\overline{\Psi_{66}}\chi{}\Psi_{66}).$$ To derive the relevant superpotentials for the $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$ components of $\Psi_{66}$, we need to know how to write $\Psi_{66}$ in terms of these components. We may write $\Psi_{66}$ as a matrix; the correct way to do this, in order to attain the appropriate normalizations of the kinetic terms for each component, is $$\label{eq:66matrix} \Psi_{66} = \begin{pmatrix} \Psi^{10V} & \frac{1}{\sqrt{2}}\Psi^{5V--} & \frac{1}{\sqrt{2}}\Psi^{5V++} & \frac{1}{\sqrt{2}}\Psi^{5V5D} \\ -\frac{1}{\sqrt{2}}(\Psi^{5V--})^{T} & 0 & \frac{1}{\sqrt{2}}\Psi^{--++} & \frac{1}{\sqrt{2}}\Psi^{5D--} \\ -\frac{1}{\sqrt{2}}(\Psi^{5V++})^{T} & -\frac{1}{\sqrt{2}}\Psi^{--++} & 0 & \frac{1}{\sqrt{2}}\Psi^{5D++} \\ -\frac{1}{\sqrt{2}}(\Psi^{5V5D})^{T} & -\frac{1}{\sqrt{2}}(\Psi^{5D--})^{T} & -\frac{1}{\sqrt{2}}(\Psi^{5D++})^{T} & \Psi^{10D} \end{pmatrix},$$ where $\Psi^{10V}$ corresponds to the visible $(10, 1, +2, -2, +2)$ decuplet, $\Psi^{10D}$ is the dark decuplet corresponding to the $(1, 10, +2, +2, -2)$, $\Psi^{5V--}$ corresponds to the $(5, 1, -4, -2, 0)$ quintet, $\Psi^{5V++}$ to the $(5, 1, -4, 0, +2)$ quintet, $\Psi^{5D--}$ to the $(1, 5, -4, 0, -2)$ quintet, $\Psi^{5D++}$ to the $(1, 5, -4, +2, 0)$ quintet, $\Psi^{5V5D}$ is the bi-fundamental $(5, 5, +2, 0, 0)$ component, and $\Psi^{--++}$ is the $(1, 1, -10, 0, 0)$ singlet component. When one substitutes the matrix representation of Eq. \[eq:66matrix\] into the interaction of Eq. \[eq:66coupling\], one derives the superpotentials $$\label{eq:visibledecupletsuperpot} W^{10V}(\tilde{y}) = 2h[\eta_{-}(\tilde{y})+\chi_{-}(\tilde{y})],$$ for the visible decuplet, $$\label{eq:darkdecupletsuperpot} W^{10D}(\tilde{y}) = -2h[\eta_{-}(\tilde{y})+\chi_{-}(\tilde{y})] = -W^{10V}(\tilde{y}),$$ for the dark decuplet, $$\label{eq:5Vmmsuperpot} W^{5V--}(\tilde{y}) = h[\eta_{+}(\tilde{y})+\chi_{+}(\tilde{y})-\eta_{-}(\tilde{y})-\chi_{-}(\tilde{y})],$$ for the extra $\Psi^{5V--}$ quintet, $$\label{eq:5Dmmsuperpot} W^{5D--}(\tilde{y}) = h[\eta_{+}(\tilde{y})+\chi_{+}(\tilde{y})+\eta_{-}(\tilde{y})+\chi_{-}(\tilde{y})],$$ for the $\Psi^{5D--}$ quintet, $$\label{eq:5Vppsuperpot} W^{5V++}(\tilde{y}) = -h[\eta_{+}(\tilde{y})+\chi_{+}(\tilde{y})+\eta_{-}(\tilde{y})+\chi_{-}(\tilde{y})] = -W^{5D--}(\tilde{y}),$$ for the $\Psi^{5V++}$ quintet, $$\label{eq:5Dppsuperpot} W^{5D++}(\tilde{y}) = -h[\eta_{+}(\tilde{y})+\chi_{+}(\tilde{y})-\eta_{-}(\tilde{y})-\chi_{-}(\tilde{y})] = -W^{5V--}(\tilde{y}),$$ for the $\Psi^{5D++}$ quintet, $$\label{eq:biquintetsuperpot} W^{5V5D}(\tilde{y}) = 0,$$ for the mixed $\Psi^{5V5D}$ bi-fundamental component, and $$\label{eq:66singletsuperpot} W^{--++}(\tilde{y}) = 0,$$ for the $\Psi^{--++}$ singlet component. From the above superpotentials, we can see that the visible and dark decuplets couple to the combination $\eta_{-}+\chi_{-}$ with equal and opposite strength, in the same way that the quintets from the fundamental did.
This means that they will also attain localized chiral zero modes with opposite chiralities, making possible the localization of a left-chiral $(\overline{5}, 1)\oplus{}(10, 1)$ sector embedding the Standard Model fermions together with the localization of a corresponding right-chiral $(1, \overline{5})\oplus{}(1, 10)$ sector embedding a mirror dark fermion sector. The second important thing to note from the above superpotentials is that the superpotential for the mixed bi-fundamental $\Psi^{5V5D}$ component vanishes. This component, which couples to both the visible and dark $SU(5)$ gauge sectors, is therefore completely decoupled from the domain wall, and thus remains a $4+1D$ fermionic field. At first this seems worrying, since this field is initially massless, and a $4+1D$ massless fermion interacting with a localized $3+1D$ Standard Model sector would be disastrous. However, because $\Psi^{5V5D}$ remains a $4+1D$ fermion, it remains a *Dirac* fermion and will thus be able to form vector-like interactions with any additional scalar fields we later introduce into the theory. This means that when we introduce an additional adjoint scalar field which induces the usual breaking $SU(5)_{V}\rightarrow{}SU(3)_{c}\times{}SU(2)_{I}\times{}U(1)_{Y}$ in the interior of the domain wall, this very component will attain a mass of order the GUT scale in the interior of the domain wall and will thus be removed from the spectrum. The singlet $\Psi^{--++}$ component also experiences a vanishing superpotential and remains delocalized. Given that the singlet has a charge of $-10$ under $U(1)_{X}$, it will attain a mass or, at the very least, become decoupled from the localized sectors when we break $U(1)_{X}$ at a sufficient scale. Finally, there are the additional quintet components from $\Psi_{66}$. When one looks closely at their superpotentials and the resulting potentials, it is clear that they do not attain chiral zero modes.
Since each of the superpotentials in Eqs. \[eq:5Vmmsuperpot\], \[eq:5Dmmsuperpot\], \[eq:5Vppsuperpot\] and \[eq:5Dppsuperpot\] contains either the combination $\eta_{+}-\eta_{-}$ or the combination $\chi_{+}+\chi_{-}$, these superpotentials interpolate between a non-zero value at spatial infinity at one end (negative or positive) and zero at the other. This means that any potential zero mode would have to be localized at infinity and would therefore be unphysical. We show plots of the superpotentials $W^{5V--}$ and $W^{5V++}$ along with the resultant left- and right-chiral potentials respectively in Figs. \[fig:potential5Vmm\] and \[fig:potential5Vpp\] for $\tilde{h}=10$. Note that the potentials for the modes of $\Psi^{5D--}$ and $\Psi^{5D++}$ are the same as those for $\Psi^{5V++}$ and $\Psi^{5V--}$ respectively but with the chiralities reversed, again due to a relative minus sign in the superpotentials, so it suffices to analyze the localization properties of $\Psi^{5V++}$ and $\Psi^{5V--}$. In the aforementioned figures, the potentials tend to zero at either negative or positive infinity, and to some positive value on the opposite side. Thus we anticipate that the modes for these fields will exhibit a continuum of massive modes starting from $m=0$, which are delocalized and free to propagate on one side of the wall, but are strongly suppressed on the other. The main concern with these modes is whether they will be able to tunnel sufficiently into the interior of the domain wall and interact with the low-energy localized theory. For simplicity, we give the plots for just several of the left-chiral modes of $\Psi^{5V--}$ and $\Psi^{5V++}$, as there is little qualitative difference between them and the right-chiral modes: we show those for $\tilde{h} = 10$ in Figs. \[fig:L5VIh10modes\] and \[fig:L5VIprimeh10modes\], those for $\tilde{h} = 100$ in Figs. \[fig:L5VIh100modes\] and \[fig:L5VIprimeh100modes\] and those for $\tilde{h} = 1000$ in Figs.
\[fig:L5VIh1000modes\] and \[fig:L5VIprimeh1000modes\]. For completeness, we give the squared masses of these modes: the masses of the states (which we still label with $gs$, $1e$ and $2e$) are in all cases the same for $\Psi^{5V--}$ and $\Psi^{5V++}$, and for $\tilde{h} = 10$ we have $$\begin{aligned} \label{eq:delocquintetmassesh10} m^{2}_{L,gs} &= 0.1156k^2, \\ m^{2}_{L,1e} &= 0.4450k^2, \\ m^{2}_{L,2e} &= 0.9793k^2, \end{aligned}$$ for $\tilde{h} = 100$ we have $$\begin{aligned} \label{eq:delocquintetmassesh100} m^{2}_{L,gs} &= 0.1748k^2, \\ m^{2}_{L,1e} &= 0.6118k^2, \\ m^{2}_{L,2e} &= 1.2817k^2, \end{aligned}$$ and for $\tilde{h} = 1000$ the masses are $$\begin{aligned} \label{eq:delocquintetmassesh1000} m^{2}_{L,gs} &= 0.3853k^2, \\ m^{2}_{L,1e} &= 1.2047k^2, \\ m^{2}_{L,2e} &= 2.3568k^2. \end{aligned}$$ ![A plot of the superpotential $W^{5V--}$ and the resulting left-chiral and right-chiral potentials $V^{5V--}_{L}$ and $V^{5V--}_{R}$ for $\tilde{h}=10$.[]{data-label="fig:potential5Vmm"}](potential5v1h10.pdf){width="18cm" height="10cm"} ![A plot of the superpotential $W^{5V++}$ and the resulting left-chiral and right-chiral potentials $V^{5V++}_{L}$ and $V^{5V++}_{R}$ for $\tilde{h}=10$.[]{data-label="fig:potential5Vpp"}](potential5v1primeh10.pdf){width="18cm" height="10cm"} ![A plot of several left-chiral modes for $\Psi^{5V--}$ for $\tilde{h}=10$.[]{data-label="fig:L5VIh10modes"}](L5VIh10modes.pdf) ![A plot of several left-chiral modes for $\Psi^{5V++}$ for $\tilde{h}=10$.[]{data-label="fig:L5VIprimeh10modes"}](L5VIprimeh10modes.pdf) ![A plot of several left-chiral modes for $\Psi^{5V--}$ for $\tilde{h}=100$.[]{data-label="fig:L5VIh100modes"}](L5VIh100modes.pdf) ![A plot of several left-chiral modes for $\Psi^{5V++}$ for $\tilde{h}=100$.[]{data-label="fig:L5VIprimeh100modes"}](L5VIprimeh100modes.pdf) ![A plot of several left-chiral modes for $\Psi^{5V--}$ for $\tilde{h}=1000$.[]{data-label="fig:L5VIh1000modes"}](L5VIh1000modes.pdf) ![A plot of several left-chiral modes for $\Psi^{5V++}$ for $\tilde{h}=1000$.[]{data-label="fig:L5VIprimeh1000modes"}](L5VIprimeh1000modes.pdf) As can be seen in the above figures, the stronger the coupling, the less the continuum modes tunnel into the interior of the domain wall. In all three cases the energies of the modes are similar, and their exact energies and profiles reflect the fact that we used a program to find them on a truncated mesh. Given the nature of this semi-delocalized potential, the modes are free to propagate in half the domain, which is a length of $L = 10/k$. Thus our program finds modes which resemble standing waves with wavelengths of order $L$. Given that the energies of these waves are inversely proportional to $L$, it is no surprise to see in Eqs. \[eq:delocquintetmassesh10\], \[eq:delocquintetmassesh100\] and \[eq:delocquintetmassesh1000\] that the squared masses of the modes our program found were in the range $0.1k^2$ to a few $k^2$. Thus, not surprisingly, modes with roughly the same energies become more suppressed and penetrate less deeply into the interior of the domain wall as we increase $\tilde{h}$, and thus the height of the energy barriers of their localization potentials from the interior of the wall onwards. Furthermore, the above modes suggest that to get any sort of significant tunneling, even for small $\tilde{h}$, the energy of the modes must be of a scale near $k$. Since $k$ in the parameter region we chose for the scalar fields generating the domain wall is roughly the inverse width of the wall, $k$ must at the very least be several TeV, hence the interaction of these delocalized modes with the localized modes on the wall is highly suppressed. This is achieved solely through the dynamics of localization of these fields to the wall; there are many other mechanisms which could contribute to the same effect, including further symmetry breaking as well as the addition of a bulk mass, which we will discuss shortly.
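The scaling of these masses with the truncation length can be illustrated with a minimal finite-difference sketch. The smoothed step below is an illustrative stand-in for the semi-delocalized potentials (the barrier height, wall steepness, and mesh size are our assumptions, not the model's values); its lowest eigenvalues track the particle-in-a-box estimate $m^{2}_{n}\approx(n\pi/L)^{2}$ with $L\approx 10/k$:

```python
import numpy as np

# Illustrative stand-in for a semi-delocalized potential: free for y << 0,
# a barrier of height V0 for y >> 0, on a truncated mesh with Dirichlet
# boundaries.  V0 and the wall steepness are assumptions for illustration.
N, ymax, V0 = 801, 10.0, 25.0
y = np.linspace(-ymax, ymax, N)
dy = y[1] - y[0]
V = V0 * (1 + np.tanh(5 * y)) / 2

# Finite-difference Hamiltonian H = -d^2/dy^2 + V(y)
H = (np.diag(2.0 / dy**2 + V)
     + np.diag(np.full(N - 1, -1.0 / dy**2), 1)
     + np.diag(np.full(N - 1, -1.0 / dy**2), -1))
m2 = np.linalg.eigvalsh(H)[:3]

# Particle-in-a-box estimate with L ~ 10 (in units of 1/k)
box = (np.arange(1, 4) * np.pi / 10.0) ** 2
print(np.round(m2, 3), np.round(box, 3))
```

The box estimates $(0.099,\,0.395,\,0.888)k^{2}$ are of the same order as the values quoted in Eq. \[eq:delocquintetmassesh10\], consistent with the standing-wave interpretation of these modes.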
One may be worried by the fact that, in Figs. \[fig:potential5Vmm\] and \[fig:potential5Vpp\], the potentials $V^{5V--}_{R}$ and $V^{5V++}_{R}$ (and thus also $V^{5D--}_{L}$ and $V^{5D++}_{L}$) appear to become slightly negative near the interior of the wall. This would perhaps suggest the existence of modes localized in these regions with tachyonic masses. However, no such bound states can exist, since if they did, they would have partners of the opposite chirality, which experience the localization potentials $V^{5V--}_{L}$ and $V^{5V++}_{L}$ (and $V^{5D--}_{R}$ and $V^{5D++}_{R}$ in the dark sector). The potentials $V^{5V--}_{L}$ and $V^{5V++}_{L}$ are positive definite everywhere, and so such opposite chirality partners can only attain positive definite squared masses, and thus cannot possibly be tachyonic. Hence, only modes with squared masses $m^{2} > 0$ exist. Furthermore, as we increase the value of $\tilde{h}$, the well gets pushed further away from the center of the wall, making penetration of (and escape from) the wall by low mass modes negligible. The above results were achieved solely through the localization properties of fermions in particular representations of $SU(12)$ to this domain-wall arrangement. Given that the gauge group localized to this wall is $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$, this setup is not complete, since at the very least we must break the visible GUT $SU(5)_{V}$ to the Standard Model. The most likely way to achieve this breaking is through the introduction of an additional adjoint $143$ field, choosing parameters such that the $(24, 1)$ component embedded in it condenses in the interior of the domain wall, inducing the breaking in the usual way. In many domain wall models [@firstpaper; @jayneso10paper], such a field contributes to the background domain-wall configuration, condensing in the interior of the wall and asymptoting to zero at infinity, leading to a kink-lump background configuration.
This lump affects the localization of the different SM components, splitting them according to their hypercharges. Similar physics will occur in the dark sector if we introduce additional fields which break $SU(5)_{D}$ in the interior of the wall. We leave the specific analysis of further symmetry breaking in the interior of the wall to later work. Note also that the addition of a bulk mass is consistent with the symmetries which underlie the Yukawa interaction in Eq. \[eq:yukawacoupling1\], since under this symmetry $\Psi_{R}$, and thus $\overline{\Psi_{R}}\Psi_{R}$, is invariant. For the localized quintets and decuplets, the bulk mass will shift their localization centers from $y = 0$ in the usual way, as per the original split fermion mechanism [@splitfermions]. In fact, given that fields with opposite chiralities have their localization centers shifted by equal amounts in opposite directions for the same bulk mass $M$, a bulk mass term will shift the visible and dark fermions in different directions along the extra dimension, leading to a splitting of the visible and dark sectors. For the mixed $(5, 5)$ fermion and the singlet state from the $66$, a bulk mass simply makes these delocalized $4+1D$ states massive, presenting a much easier way to make these fields massive than through symmetry breaking. For the delocalized singlets of the fundamental, the most likely outcome is that their masses get shifted. For the semi-delocalized quintets of the $66$, since their superpotentials can be thought of as approximately of the form $hv(1\pm{}\tanh{(k'y)})/2$, if the bulk mass is opposite in sign to that provided by the superpotential, then the resultant mass term will always be less than the maximum of the $\tanh(k'y)$ term, thus making it possible for some of these quintets to attain localized modes.
These modes will have the same chirality as the decuplets, hence they would be potentially troublesome, but we can always localize additional quintets of the opposite chirality using the fundamental representation to ensure that these modes attain a GUT scale mass after breaking $SU(5)_{V}$ to the Standard Model if need be. We have shown in this section that it is possible to localize a set of 3+1D left-chiral fermions in the set of representations $(\overline{5}, 1)\oplus{}(10, 1)$ of $SU(5)_{V}\times{}SU(5)_{D}$, which contain the visible Standard Model fermions, along with a mirror dark sector of right-chiral fermions in the representations $(1, \overline{5})\oplus{}(1, 10)$ of $SU(5)_{V}\times{}SU(5)_{D}$, by coupling $4+1D$ fermions in the $12$ and $66$ representations of $SU(12)$ to the domain wall. Furthermore, we showed that the troublesome mixed $(5, 5)$ fermion is completely delocalized, implying that it remains a vector-like $4+1D$ Dirac fermion which will attain a GUT scale mass when we add an additional adjoint scalar field to the background configuration to induce the breaking of $SU(5)_{V}$ to the Standard Model. Likewise, the delocalized singlet will attain a mass when we break the additional $U(1)_{X}$. We also showed that the additional unwanted quintet states in the $66$ can be sufficiently suppressed in the interior of the domain-wall brane. The next step is to show that we can localize scalars, and that we can therefore localize a Standard Model Higgs field along with a dark mirror Higgs field, opening the possibility of having a fully localized Standard Model and a localized dark mirror sector which are sufficiently sequestered to satisfy current experimental limits.

Scalar Localization {#sec:scalarlocalization}
===================

In this section, we give a simple example of scalar localization to the $m = 1$ domain wall which was described in earlier sections and used in the previous section on fermion localization.
For simplicity, we solely consider a scalar in the fundamental $12$ representation, which we call $\Phi$. We consider a couple of interesting scenarios for the localization properties of the individual $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}\times{}U(1)_{A}\times{}U(1)_{B}$ components, which we label as $\Phi^{5V}$, $\Phi^{5D}$, $\Phi^{--}$ and $\Phi^{++}$, in correspondence with the labelling we used for the components of the fermionic $\Psi_{12}$ in the previous section. We would like, at the very least, to be able to give the visible Higgs quintet scalar, $\Phi^{5V}$, a lowest energy localized mode with a tachyonic mass, so that electroweak symmetry breaking can be performed. We would also like to show that there are parameter regions where the singlets $\Phi^{--}$ and $\Phi^{++}$ attain tachyonic masses, so that we can break the semi-delocalized $U(1)_{A}$ and $U(1)_{B}$. The most general potential which couples $\Phi$ to the domain-wall generating fields $\eta$ and $\chi$ is $$\label{eq:fundamentalscalarlocpotential} \begin{aligned} V_{loc}(\Phi, \eta, \chi) &= \mu^{2}_{\Phi}\Phi^{\dagger}\Phi+\lambda_{\Phi1}\big(\Phi^{\dagger}\eta{}\Phi{}+\Phi^{\dagger}\chi{}\Phi{}\big)+\lambda_{\Phi2}\big(\Phi^{\dagger}\eta^{2}\Phi{} + \Phi^{\dagger}\chi^{2}\Phi{}\big) \\ &+\lambda_{\Phi3}\big(\Phi^{\dagger}\eta{}\chi{}\Phi{}+\Phi^{\dagger}\chi{}\eta{}\Phi{}\big)+\lambda_{\Phi4}\big(\Phi^{\dagger}\Phi{}Tr[\eta^2]+\Phi^{\dagger}\Phi{}Tr[\chi^2]\big)+\lambda_{\Phi5}\Phi^{\dagger}\Phi{}Tr(\eta{}\chi{}). \end{aligned}$$ From this, by substituting the form of the $m=1$ solution of Sec.
\[sec:solution\] and describing the couplings in terms of $\eta_{-}$, $\chi_{-}$, $\eta_{+}$ and $\chi_{+}$, we find that the effective localization potentials for the modes of $\Phi^{5V}$, $\Phi^{5D}$, $\Phi^{--}$ and $\Phi^{++}$ are, respectively, $$\label{eq:visiblequintetscalarpotential} \begin{aligned} V^{5V}_{loc}(\tilde{y}) &= \mu^{2}_{\Phi}+\lambda_{\Phi1}\big(\eta_{-}(\tilde{y})+\chi_{-}(\tilde{y})\big)+\lambda_{\Phi2}\big(\eta^{2}_{-}(\tilde{y})+\chi^{2}_{-}(\tilde{y})\big)+2\lambda_{\Phi3}\eta_{-}(\tilde{y})\chi_{-}(\tilde{y}) \\ &+\lambda_{\Phi4}\big(10\eta^{2}_{-}(\tilde{y})+2\eta^{2}_{+}(\tilde{y})+10\chi^{2}_{-}(\tilde{y})+2\chi^{2}_{+}(\tilde{y})\big)+\lambda_{\Phi5}\big(10\eta_{-}(\tilde{y})\chi_{-}(\tilde{y})+2\eta_{+}(\tilde{y})\chi_{+}(\tilde{y})\big), \end{aligned}$$ $$\label{eq:darkquintetscalarpotential} \begin{aligned} V^{5D}_{loc}(\tilde{y}) &= \mu^{2}_{\Phi}-\lambda_{\Phi1}\big(\eta_{-}(\tilde{y})+\chi_{-}(\tilde{y})\big)+\lambda_{\Phi2}\big(\eta^{2}_{-}(\tilde{y})+\chi^{2}_{-}(\tilde{y})\big)+2\lambda_{\Phi3}\eta_{-}(\tilde{y})\chi_{-}(\tilde{y}) \\ &+\lambda_{\Phi4}\big(10\eta^{2}_{-}(\tilde{y})+2\eta^{2}_{+}(\tilde{y})+10\chi^{2}_{-}(\tilde{y})+2\chi^{2}_{+}(\tilde{y})\big)+\lambda_{\Phi5}\big(10\eta_{-}(\tilde{y})\chi_{-}(\tilde{y})+2\eta_{+}(\tilde{y})\chi_{+}(\tilde{y})\big), \end{aligned}$$ $$\label{eq:mmsingletscalarpotential} \begin{aligned} V^{--}_{loc}(\tilde{y}) &= \mu^{2}_{\Phi}+\lambda_{\Phi1}\big(\eta_{+}(\tilde{y})+\chi_{+}(\tilde{y})\big)+\lambda_{\Phi2}\big(\eta^{2}_{+}(\tilde{y})+\chi^{2}_{+}(\tilde{y})\big)+2\lambda_{\Phi3}\eta_{+}(\tilde{y})\chi_{+}(\tilde{y}) \\ &+\lambda_{\Phi4}\big(10\eta^{2}_{-}(\tilde{y})+2\eta^{2}_{+}(\tilde{y})+10\chi^{2}_{-}(\tilde{y})+2\chi^{2}_{+}(\tilde{y})\big)+\lambda_{\Phi5}\big(10\eta_{-}(\tilde{y})\chi_{-}(\tilde{y})+2\eta_{+}(\tilde{y})\chi_{+}(\tilde{y})\big), \end{aligned}$$ and $$\label{eq:ppsingletscalarpotential} \begin{aligned} V^{++}_{loc}(\tilde{y}) &= 
\mu^{2}_{\Phi}-\lambda_{\Phi1}\big(\eta_{+}(\tilde{y})+\chi_{+}(\tilde{y})\big)+\lambda_{\Phi2}\big(\eta^{2}_{+}(\tilde{y})+\chi^{2}_{+}(\tilde{y})\big)+2\lambda_{\Phi3}\eta_{+}(\tilde{y})\chi_{+}(\tilde{y}) \\ &+\lambda_{\Phi4}\big(10\eta^{2}_{-}(\tilde{y})+2\eta^{2}_{+}(\tilde{y})+10\chi^{2}_{-}(\tilde{y})+2\chi^{2}_{+}(\tilde{y})\big)+\lambda_{\Phi5}\big(10\eta_{-}(\tilde{y})\chi_{-}(\tilde{y})+2\eta_{+}(\tilde{y})\chi_{+}(\tilde{y})\big). \end{aligned}$$ To find the localized modes of these potentials, we first perform a mode expansion in the usual way, representing a given $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$ component $\Phi^{R}$ in the form $$\label{eq:scalarmodeexpansion} \Phi^{R}(x, y) = \sum_{m} p^{R}_{m}(y)\phi^{R}_{m}(x),$$ where again $m$ stands for the mass of the mode $\phi^{R}_{m}$. When we substitute this mode expansion into the $4+1D$ Klein-Gordon equation, noting that $\Box_{3+1D}\phi^{R}_{m} = -m^2\phi^{R}_{m}$, we find that the profiles $p^{R}_{m}(y)$ satisfy the Schrödinger equations $$\label{eq:scalarmodeSE} \big[-\frac{d^2}{dy^2} + V^{R}_{loc}(y)\big]p^{R}_{m}(y) = m^2p^{R}_{m}(y).$$ We solve for the three lowest energy modes of the above set of equations in the same way that we did for the corresponding equations for the fermions in the previous section, by finding the eigenvectors and eigenvalues of the Hamiltonian acting on the $2001$-dimensional space spanned by the values of the eigenmodes at each of the lattice points. We do this for two parameter choices.
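The procedure just described can be sketched in a few lines. The solver below builds the finite-difference Hamiltonian on a uniform lattice and diagonalizes it; as a correctness check we apply it to a Pöschl–Teller well, whose exact bound-state energies are known (this test potential is our own choice for validation, not one of the model's potentials):

```python
import numpy as np

def schrodinger_modes(V, y, n_modes=3):
    """Lowest modes of H = -d^2/dy^2 + V(y) on a uniform lattice.

    Mirrors the method described in the text: H acts on the space spanned
    by the mode values at the lattice points, and the squared masses m^2
    are its lowest eigenvalues.
    """
    dy = y[1] - y[0]
    n = len(y)
    H = (np.diag(2.0 / dy**2 + V)
         + np.diag(np.full(n - 1, -1.0 / dy**2), 1)
         + np.diag(np.full(n - 1, -1.0 / dy**2), -1))
    vals, vecs = np.linalg.eigh(H)
    return vals[:n_modes], vecs[:, :n_modes] / np.sqrt(dy)  # unit-normalized profiles

# Check against the Poeschl-Teller well V = -6/cosh^2(y), whose bound
# states sit at exactly m^2 = -4 and m^2 = -1: the lattice solver
# resolves a tachyonic ground state correctly.
y = np.linspace(-10.0, 10.0, 2001)   # 2001 lattice points, as in the text
m2, profiles = schrodinger_modes(-6.0 / np.cosh(y) ** 2, y)
print(np.round(m2[:2], 2))           # ~ [-4. -1.]
```

The same routine, fed with the potentials of Eqs. \[eq:visiblequintetscalarpotential\]–\[eq:ppsingletscalarpotential\], yields the squared masses quoted below.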
For the first parameter choice, we choose $$\label{eq:firstscalarlocchoice} \begin{aligned} \mu^{2}_{\Phi} &= 5.0k^2, \\ \lambda_{\Phi1} &= \frac{100.0}{k}, \\ \lambda_{\Phi2} &= \frac{-600.0}{k}, \\ \lambda_{\Phi3} &= \frac{600.0}{k}, \\ \lambda_{\Phi4} &= \frac{100.0}{k}, \\ \lambda_{\Phi5} &= \frac{150.0}{k}, \end{aligned}$$ and we find that the masses of the three lightest modes, which we again label with the subscripts $gs$, $1e$ and $2e$, are $$\begin{aligned} \label{eq:visiblequintetscalarmasses1} m^{2}_{5V,gs} &= -4.3809k^2, \\ m^{2}_{5V,1e} &= 12.2846k^2, \\ m^{2}_{5V,2e} &= 22.3332k^2, \end{aligned}$$ for the visible Higgs quintet $\Phi^{5V}$, $$\begin{aligned} \label{eq:darkquintetscalarmasses1} m^{2}_{5D,gs} &= -4.3809k^2, \\ m^{2}_{5D,1e} &= 12.2846k^2, \\ m^{2}_{5D,2e} &= 22.3332k^2, \end{aligned}$$ for the dark Higgs quintet $\Phi^{5D}$, $$\begin{aligned} \label{eq:mmsingletscalarmasses1} m^{2}_{--,gs} &= 3.4731k^2, \\ m^{2}_{--,1e} &= 12.3468k^2, \\ m^{2}_{--,2e} &= 18.1113k^2, \end{aligned}$$ for the singlet Higgs scalar $\Phi^{--}$, and $$\begin{aligned} \label{eq:ppsingletscalarmasses1} m^{2}_{++,gs} &= 55.4509k^2, \\ m^{2}_{++,1e} &= 65.4019k^2, \\ m^{2}_{++,2e} &= 72.3116k^2, \end{aligned}$$ for the singlet Higgs scalar $\Phi^{++}$. We show plots of the profiles for $\Phi^{5V}$, $\Phi^{5D}$, $\Phi^{--}$ and $\Phi^{++}$ respectively in Figs. \[fig:visiblehiggsquintet1\], \[fig:darkhiggsquintet1\], \[fig:singletmmhiggsquintet1\] and \[fig:singletpphiggsquintet1\]. From this we see that the lowest energy modes of both the visible and dark quintets are localized and attain tachyonic masses, while all the modes for the singlet states have positive squared masses. This means that these quintets can go on to induce symmetry breaking in the visible and dark sectors, while the semi-delocalized Abelian groups $U(1)_{A}$ and $U(1)_{B}$ are left unbroken.
Notice that the profiles for these lowest energy states of $\Phi^{5V}$ and $\Phi^{5D}$ are split and their masses are degenerate. The splitting is due to the cubic interaction corresponding to the coupling constant $\lambda_{\Phi1}$; this term introduces a contribution to the potential proportional to the combination $\eta_{-}+\chi_{-}$, which is kink-like, thus shifting the localization centers from zero. Given that the visible and dark quintets experience this term equally but with the opposite sign, they experience shifts in opposite directions from $\tilde{y}=0$. In fact, one can deduce that $V^{5D}_{loc}(\tilde{y}) = V^{ 5V}_{loc}(-\tilde{y})$, so that the potential of the dark quintet is a mirror image of the one for the visible quintet, explaining the degeneracy of the masses for their respective modes. The same cubic interaction does something very different for the singlet modes. This interaction leads to terms in the potentials for $\Phi^{--}$ and $\Phi^{++}$ which are proportional to $\eta_{+}+\chi_{+}$. As discussed previously, $\eta_{+}+\chi_{+}$ is an even function and behaves, for the most part, as mass-like rather than kink-like. This means that this term will either raise or lower the masses of the localized modes and, given that $\Phi^{--}$ and $\Phi^{++}$ experience this interaction equally but with a relative minus sign, the masses of one of them will be lowered while those for the other will be raised. This is why the masses of the modes of $\Phi^{--}$ and $\Phi^{++}$ are not degenerate. ![A plot of the first three modes for the visible scalar quintet $\Phi^{5V}$ for the parameter choice in Eq. \[eq:firstscalarlocchoice\].[]{data-label="fig:visiblehiggsquintet1"}](visiblehiggsquintetgsto2ek1.pdf) ![A plot of the first three modes for the dark scalar quintet $\Phi^{5D}$ for the parameter choice in Eq. 
\[eq:firstscalarlocchoice\].[]{data-label="fig:darkhiggsquintet1"}](darkhiggsquintetgsto2ek1.pdf) ![A plot of the first three modes for the singlet $\Phi^{--}$ for the parameter choice in Eq. \[eq:firstscalarlocchoice\].[]{data-label="fig:singletmmhiggsquintet1"}](singletmmhiggsquintetgsto2ek1.pdf) ![A plot of the first three modes for the singlet $\Phi^{++}$ for the parameter choice in Eq. \[eq:firstscalarlocchoice\].[]{data-label="fig:singletpphiggsquintet1"}](singletpphiggsquintetgsto2ek1.pdf) We have shown that symmetry breaking of $SU(5)_{V}$ and $SU(5)_{D}$ is possible. Given that the singlet Higgs scalars are charged under $U(1)_{A}$ and $U(1)_{B}$, they can potentially be used to break one of these symmetries, so we are thus also interested in whether these components can attain tachyonic masses. With the second parameter choice, we show that this is possible. If one of these components attains a tachyonic mass, $U(1)_{A}\times{}U(1)_{B}$ is broken to $U(1)_{A-B}$. In a realistic model, we would need to add a scalar in another representation to localize a singlet component which can break $U(1)_{A-B}$.
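The mass degeneracy between the visible and dark quintets noted above follows from the mirror relation $V^{5D}_{loc}(\tilde{y}) = V^{5V}_{loc}(-\tilde{y})$ alone, and it is easy to confirm numerically. The asymmetric well below is purely illustrative (it is not the model's potential); reflecting it through $\tilde{y}=0$ leaves the spectrum untouched while mirroring the profiles:

```python
import numpy as np

# Illustrative check that mirror-image potentials share a spectrum.
# V_vis is an arbitrary asymmetric well, not the model's V^5V_loc.
N, ymax = 1001, 10.0
y = np.linspace(-ymax, ymax, N)
dy = y[1] - y[0]

def spectrum(V):
    """Eigenvalues of the finite-difference Hamiltonian -d^2/dy^2 + V."""
    H = (np.diag(2.0 / dy**2 + V)
         + np.diag(np.full(N - 1, -1.0 / dy**2), 1)
         + np.diag(np.full(N - 1, -1.0 / dy**2), -1))
    return np.linalg.eigvalsh(H)

V_vis = -5.0 / np.cosh(y - 2.0) ** 2     # well centered at y = +2
V_dark = V_vis[::-1]                     # V_dark(y) = V_vis(-y) on this symmetric grid
ev_vis, ev_dark = spectrum(V_vis)[:3], spectrum(V_dark)[:3]
print(np.allclose(ev_vis, ev_dark))      # True: degenerate, as for Phi^5V and Phi^5D
```

The two Hamiltonians are related by the lattice reversal permutation, a similarity transformation, so the degeneracy is exact rather than numerical coincidence.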
For the second parameter choice, we choose $$\label{eq:secondscalarlocchoice} \begin{aligned} \mu^{2}_{\Phi} &= 32.0k^2, \\ \lambda_{\Phi1} &= \frac{200.0}{k}, \\ \lambda_{\Phi2} &= \frac{100.0}{k}, \\ \lambda_{\Phi3} &= \frac{100.0}{k}, \\ \lambda_{\Phi4} &= \frac{100.0}{k}, \\ \lambda_{\Phi5} &= \frac{550.0}{k}, \end{aligned}$$ and we find that the masses of the three lightest modes this time are $$\begin{aligned} \label{eq:visiblequintetscalarmasses2} m^{2}_{5V,gs} &= 13.8579k^2, \\ m^{2}_{5V,1e} &= 39.5516k^2, \\ m^{2}_{5V,2e} &= 59.6312k^2, \end{aligned}$$ for the visible Higgs quintet $\Phi^{5V}$, $$\begin{aligned} \label{eq:darkquintetscalarmasses2} m^{2}_{5D,gs} &= 13.8579k^2, \\ m^{2}_{5D,1e} &= 39.5516k^2, \\ m^{2}_{5D,2e} &= 59.6312k^2, \end{aligned}$$ for the dark Higgs quintet $\Phi^{5D}$, $$\begin{aligned} \label{eq:mmsingletscalarmasses2} m^{2}_{--,gs} &= -24.8537k^2, \\ m^{2}_{--,1e} &= 1.9428k^2, \\ m^{2}_{--,2e} &= 24.4997k^2, \end{aligned}$$ for the singlet Higgs scalar $\Phi^{--}$, and $$\begin{aligned} \label{eq:ppsingletscalarmasses2} m^{2}_{++,gs} &= 78.5005k^2, \\ m^{2}_{++,1e} &= 106.1446k^2, \\ m^{2}_{++,2e} &= 129.6727k^2, \end{aligned}$$ for the singlet Higgs scalar $\Phi^{++}$. For this parameter choice, we show plots of the profiles for $\Phi^{5V}$, $\Phi^{5D}$, $\Phi^{--}$ and $\Phi^{++}$ respectively in Figs. \[fig:visiblehiggsquintet2\], \[fig:darkhiggsquintet2\], \[fig:singletmmhiggsquintet2\] and \[fig:singletpphiggsquintet2\]. From the above equations for the squared masses, we can clearly see that for the second parameter choice, the lowest energy localized modes for the visible and dark quintets as well as the singlet $\Phi^{++}$ have positive definite squared masses, while the lowest energy localized mode for $\Phi^{--}$ attains a tachyonic mass.
This will lead to the breaking $U(1)_{A}\times{}U(1)_{B}\rightarrow{}U(1)_{A-B}$. ![A plot of the first three modes for the visible scalar quintet $\Phi^{5V}$ for the parameter choice in Eq. \[eq:secondscalarlocchoice\].[]{data-label="fig:visiblehiggsquintet2"}](visiblehiggsquintetgsto2ek2.pdf) ![A plot of the first three modes for the dark scalar quintet $\Phi^{5D}$ for the parameter choice in Eq. \[eq:secondscalarlocchoice\].[]{data-label="fig:darkhiggsquintet2"}](darkhiggsquintetgsto2ek2.pdf) ![A plot of the first three modes for the singlet $\Phi^{--}$ for the parameter choice in Eq. \[eq:secondscalarlocchoice\].[]{data-label="fig:singletmmhiggsquintet2"}](singletmmhiggsquintetgsto2ek2.pdf) ![A plot of the first three modes for the singlet $\Phi^{++}$ for the parameter choice in Eq. \[eq:secondscalarlocchoice\].[]{data-label="fig:singletpphiggsquintet2"}](singletpphiggsquintetgsto2ek2.pdf) We have used this section to give an example of scalar localization for the various components of a fundamental 4+1D scalar field coupled to the $m=1$ CoS solution. Any realistic model will have to include additional scalar fields, including at least one adjoint which breaks the visible $SU(5)_{V}$ to the Standard Model. Given the magnitude of the GUT scale, such a field is expected to have a significant back-reaction on the kink-generating scalar fields; hence an additional scalar inducing the breaking of $SU(5)_{V}$ is expected to form part of the background scalar field configuration, rather than merely being localized. Coupling a fundamental scalar to this field would then lead to splitting of different SM-covariant components, and would also lead to a breaking of degeneracy in the masses between these components. Thus, it should be possible to choose parameters such that the electroweak Higgs embedded in the visible quintet attains a tachyonic mass, while the colored Higgs component of $\Phi^{5V}$ attains a positive squared mass, preserving $SU(3)_{c}$.
In later models, we could also introduce symmetry breaking in the dark $SU(5)_{D}$ sector via scalar fields in the background. In the dark sector, we obviously have much more freedom in how we break $SU(5)_{D}$. If we were to break $SU(5)_{V}$ and $SU(5)_{D}$ symmetrically, that is, to break $SU(5)_{D}$ to a mirror SM gauge group at the same scale as the corresponding breaking of $SU(5)_{V}$, we would end up with a localized mirror matter model on the domain wall. Another interesting possibility would be to break $SU(5)_{V}$ and $SU(5)_{D}$ asymmetrically, as per the models in Refs. [@lonsdalegrandunifieddm; @lonsdaleso10xso10adm]. The scope for asymmetric symmetry breaking in the context of this model is very rich; we could achieve it through the background scalar field configuration or, alternatively, through the localization of a set of scalar fields which experience an asymmetric symmetry breaking potential in the interior of the wall. We leave an analysis of these symmetry breaking scenarios to later work. In this section and the two previous ones, we have outlined the construction of a domain-wall brane model based on the Clash-of-Symmetries mechanism in which gauge bosons corresponding to the gauge group $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$ are localized, along with fermions and scalars. In particular, fermion localization in this model has some special properties, with only the components which transform solely under either $SU(5)_{V}$ or $SU(5)_{D}$ localized to the wall; the troublesome $(5, 5)$ mediator is completely delocalized and will attain a GUT-scale mass when we perform the required breaking of $SU(5)_{V}$. Higgs scalars can be localized with tachyonic masses and can hence induce further symmetry breaking on the wall as required.
Hence, we have been successful in constructing a prototype model in an extra-dimensional field theory in which a visible gauge sector along with a dark, non-Abelian gauge sector arises from a unified $SU(12)$ theory. Since it would be interesting to construct other viable models of this sort, before we conclude this paper, in the next section we give an overview of several other interesting alternatives which could lead to the same dynamics. Some Alternative Models {#sec:alternativemodels} ======================= Another interesting model: $SU(9)$ {#subsec:su9model} ---------------------------------- In this section, we briefly outline how to construct a potentially realistic model using the group $SU(9)$. In this case, we claim that the CoS mechanism can be used to generate an $SU(5)_{V}\times{}SU(2)_{D}\times{}U(1)_{X}$-invariant theory on the wall after breaking $SU(9)$ to two differently embedded copies of $SU(6)\times{}SU(3)\times{}U(1)$. Surprisingly, it turns out that the visible SM fermions as well as dark $SU(2)_{D}$ quarks can be acceptably embedded in a combination of the $9$ and $84$ representations, with all fermionic mediators and other unwanted states either being completely decoupled from the wall or attaining a coupling potential which does not permit chiral zero mode solutions. Just as for the $SU(12)$ model described previously, we generate the CoS domain wall yielding our desired theory using two scalar fields charged under the adjoint representation, which for $SU(9)$ is 80-dimensional, transforming under a discrete $\mathbb{Z}_{2}$ interchange symmetry. Unlike for $SU(12)$, the discrete reflection transformation $\eta{}\rightarrow{}-\eta$ lies outside $SU(9)$; nevertheless, the required breakings to subgroups isomorphic to $SU(6)\times{}SU(3)\times{}U(1)$ at positive and negative spatial infinity require the cubic invariant in order to make the VEV pattern generating $SU(6)\times{}SU(3)\times{}U(1)$ globally minimal. 
Just like for the $SU(12)$ model, the scalar potential may be written as $$\begin{aligned} \label{eq:twoadjointpotentialpartssu9} V(\eta) &= -\frac{1}{2}Tr[\eta^2]-\frac{1}{3}cTr[\eta^3]+\lambda_{1}(Tr[\eta^2])^2+\lambda_{2}Tr[\eta^4], \\ V(\chi) &= -\frac{1}{2}Tr[\chi^2]-\frac{1}{3}cTr[\chi^3]+\lambda_{1}(Tr[\chi^2])^2+\lambda_{2}Tr[\chi^4], \\ I(\eta, \chi) &= 2\delta^2Tr[\eta{}\chi]+dTr[\eta^2\chi]+dTr[\eta{}\chi^2]+l_{1}Tr[\eta^2]Tr[\chi^2]+l_{2}Tr[\eta^2\chi^2]+l_{3}(Tr[\eta{}\chi])^2 \\ &+l_{4}Tr[\eta{}\chi{}\eta{}\chi{}]+l_{5}Tr[\eta^2]Tr[\eta{}\chi]+l_{5}Tr[\eta{}\chi]Tr[\chi^2]+l_{6}Tr[\eta^3\chi]+l_{6}Tr[\eta{}\chi^3]. \end{aligned}$$ We will not go into the specifics of ensuring the desired minima and whether the desired CoS solution can be made the most stable, although, as for the $SU(12)$ model, we expect that this can be done given the generic features of the potential in Eq. \[eq:twoadjointpotentialpartssu9\]. We only give the required minima and the localization properties for fermions and scalars which follow. We wish to choose parameters such that the minima are of the form $\eta{}\neq{}0$, $\chi=0$ and $\eta=0$, $\chi{}\neq{}0$, to which the CoS domain wall solution asymptotes at spatial infinity. Without loss of generality, let the minimum at $y=-\infty$ be of the form $\eta{}\neq{}0$, $\chi=0$, with $\eta$ proportional to $$\label{eq:su9neginftyminimum} \eta(y=-\infty) \propto{} A = diag(-1, -1, -1, -1, -1, -1, +2, +2, +2).$$ We will again choose $l_{2}>l_{4}$ so that the paths with $[\eta, \chi] = 0$ are minimal, and so that $\chi$ is thus simultaneously diagonalizable with $\eta$. 
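As a quick numerical illustration, the trace structure of the potential above can be evaluated directly on diagonal field configurations. The sketch below (with placeholder couplings and field amplitude that are our own choices, not fitted minima from the paper) encodes $V(\eta)$, $V(\chi)$ and $I(\eta,\chi)$ as written:

```python
import numpy as np

def V(phi, c, lam1, lam2):
    # Single-field part of the potential in Eq. (twoadjointpotentialpartssu9)
    t2 = np.trace(phi @ phi)
    t3 = np.trace(phi @ phi @ phi)
    t4 = np.trace(phi @ phi @ phi @ phi)
    return -0.5 * t2 - (c / 3.0) * t3 + lam1 * t2**2 + lam2 * t4

def I(eta, chi, delta, d, l):
    # Mixed interaction terms; l = (l1, ..., l6)
    l1, l2, l3, l4, l5, l6 = l
    tec = np.trace(eta @ chi)
    return (2 * delta**2 * tec
            + d * (np.trace(eta @ eta @ chi) + np.trace(eta @ chi @ chi))
            + l1 * np.trace(eta @ eta) * np.trace(chi @ chi)
            + l2 * np.trace(eta @ eta @ chi @ chi)
            + l3 * tec**2
            + l4 * np.trace(eta @ chi @ eta @ chi)
            + l5 * (np.trace(eta @ eta) * tec + tec * np.trace(chi @ chi))
            + l6 * (np.trace(eta @ eta @ eta @ chi) + np.trace(eta @ chi @ chi @ chi)))

# Evaluate on the diagonal ansatz of Eq. (su9neginftyminimum); the amplitude
# 0.7 and all couplings below are illustrative placeholders.
A = np.diag([-1., -1., -1., -1., -1., -1., 2., 2., 2.])
eta, chi = 0.7 * A, np.zeros((9, 9))
total = (V(eta, c=1.0, lam1=0.1, lam2=0.1) + V(chi, 1.0, 0.1, 0.1)
         + I(eta, chi, delta=0.5, d=0.2, l=(0.1,) * 6))
```

Note that with $\chi=0$ every mixed term vanishes, consistent with the minimum being of the form $\eta\neq 0$, $\chi=0$.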
Then the desired VEV pattern, of the form $\eta=0$, $\chi{}\neq{}0$, at positive infinity to localize an $SU(5)\times{}SU(2)\times{}U(1)_{X}$ gauge group is obviously one in which $\chi$ is proportional to (up to trivial gauge rotations connecting to other diagonal forms) $$\label{eq:su9posinftyminimum} \chi(y=+\infty) \propto{} B = diag(+1, +1, +1, +1, +1, -2, +1, -2, -2).$$ At negative infinity, the breaking induced is $SU(9)\rightarrow{}SU(6)_{1}\times{}SU(3)_{1}\times{}U(1)_{A}$, and at positive infinity, the induced breaking is $SU(9)\rightarrow{}SU(6)_{2}\times{}SU(3)_{2}\times{}U(1)_{B}$. On the domain-wall brane, there is further breaking to the overlap of these two subgroups and, clearly, $SU(6)_{1}\cap{}SU(6)_{2}\supset{}SU(5)_{V}$ and $SU(3)_{1}\cap{}SU(3)_{2}\supset{}SU(2)_{D}$. To determine the form for the localized Abelian generator $X$, we need to look at the leftover generators from the $SU(6)$ and $SU(3)$ subgroups on each side of the wall. From $SU(6)_{1}\times{}SU(3)_{1}$, the leftover generators are respectively $$\label{eq:t1su9} T_{1} = diag(+1, +1, +1, +1, +1, -5, 0, 0, 0),$$ and $$\label{eq:t2su9} T_{2} = diag(0, 0, 0, 0, 0, 0, -2, +1, +1).$$ For $SU(6)_{2}\times{}SU(3)_{2}$, the leftover generators are $$\label{eq:t1primesu9} T'_{1} = diag(+1, +1, +1, +1, +1, 0, -5, 0, 0)$$ and $$\label{eq:t2primesu9} T'_{2} = diag(0, 0, 0, 0, 0, -2, 0, +1, +1).$$ One sees immediately that the generator $$\label{eq:xgeneratorlocsu9} X = 2T_{1}+5T_{2} = 2T'_{1}+5T'_{2} = diag(+2, +2, +2, +2, +2, -10, -10, +5, +5),$$ is preserved on the domain wall interior and its corresponding photon is localized. Thus the full symmetry preserved on the domain wall can be written $SU(5)_{V}\times{}SU(2)_{D}\times{}U(1)_{X}\times{}U(1)_{A}\times{}U(1)_{B}$, with the $SU(5)_{V}\times{}SU(2)_{D}\times{}U(1)_{X}$ subgroup completely dynamically localized. The photons corresponding to the generators $A$ and $B$ are semi-delocalized. 
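The generator algebra above is easy to verify mechanically. The following check (ours, not from the paper) confirms that the same traceless combination $X=2T_{1}+5T_{2}=2T'_{1}+5T'_{2}$ survives on both sides of the wall:

```python
import numpy as np

# Leftover U(1) generators on each side of the wall, Eqs. (t1su9)-(t2primesu9)
T1  = np.diag([1, 1, 1, 1, 1, -5,  0, 0, 0])
T2  = np.diag([0, 0, 0, 0, 0,  0, -2, 1, 1])
T1p = np.diag([1, 1, 1, 1, 1,  0, -5, 0, 0])
T2p = np.diag([0, 0, 0, 0, 0, -2,  0, 1, 1])

# The same combination is unbroken on both sides, so X survives on the wall
X, Xp = 2 * T1 + 5 * T2, 2 * T1p + 5 * T2p
```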
The next step is to write the simplest representations of $SU(9)$ in terms of the representations of $SU(5)_{V}\times{}SU(2)_{D}\times{}U(1)_{X}\times{}U(1)_{A}\times{}U(1)_{B}$ so that we can embed fermions and scalars and determine their localization properties. Under $SU(5)_{V}\times{}SU(2)_{D}\times{}U(1)_{X}\times{}U(1)_{A}\times{}U(1)_{B}$, the fundamental $9$ representation, the rank 2 antisymmetric $36$ representation, and the rank 3 totally antisymmetric $84$ representation decompose respectively as $$\label{eq:su9fundamental} 9 = (5, 1, +2, -1, +1)\oplus{}(1, 1, -10, -1, -2)\oplus{}(1, 2, +5, +2, -2)\oplus{}(1, 1, -10, +2, +1),$$ $$\begin{aligned} \label{eq:su9rank2antisym} 36 &= (10, 1, +4, -2, +2)\oplus{}(5, 1, -8, -2, -1)\oplus{}(5, 2, +7, +1, -1)\oplus{}(5, 1, -8, +1, +2) \\ &\oplus{}(1, 2, -5, +1, -4)\oplus{}(1, 2, -5, +4, -1)\oplus{}(1, 1, -20, +1, -1)\oplus{}(1, 1, +10, +4, -4), \end{aligned}$$ and $$\begin{aligned} \label{eq:su9rank3totantisym} 84 &= (\overline{10}, 1, +6, -3, +3)\oplus{}(10, 1, -6, -3, 0)\oplus{}(10, 2, +9, 0, 0)\oplus{}(10, 1, -6, 0, +3)\oplus{}(5, 2, -3, 0, -3)\oplus{}(5, 1, -18, 0, 0) \\ &\oplus{}(5, 1, +12, +3, -3)\oplus{}(5, 2, -3, +3, 0)\oplus{}(1, 1, 0, +3, -6)\oplus{}(1, 2, -15, +3, -3)\oplus{}(1, 1, 0, +6, -3). \end{aligned}$$ Given the methods we developed in the previous section for analyzing the localization properties of the various components, we can see that for this type of solution, assuming that the fermions are coupled to the background fields by the type of coupling described by Eq. \[eq:yukawacoupling1\], for the $9$ of $SU(9)$ the $(5, 1, +2, -1, +1)$ and $(1, 2, +5, +2, -2)$ components experience a kink and an anti-kink respectively (or vice versa, depending on the sign of the coupling $h$), and thus develop chiral zero modes, while the $(1, 1, -10, -1, -2)$ and $(1, 1, -10, +2, +1)$ components experience bulk masses which do not lead to chiral zero modes. 
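Decompositions of this kind can be sanity-checked mechanically: the multiplet dimensions must add up to the dimension of the parent representation, and every $U(1)$ charge must sum to zero with multiplicity since it descends from a traceless generator. A minimal check (ours) for the $9$ and the $84$:

```python
# Each component: (SU(5) dim, SU(2) dim, X, A, B), from Eqs. (su9fundamental)
# and (su9rank3totantisym)
nine = [(5, 1, 2, -1, 1), (1, 1, -10, -1, -2),
        (1, 2, 5, 2, -2), (1, 1, -10, 2, 1)]
eighty_four = [(10, 1, 6, -3, 3), (10, 1, -6, -3, 0), (10, 2, 9, 0, 0),
               (10, 1, -6, 0, 3), (5, 2, -3, 0, -3), (5, 1, -18, 0, 0),
               (5, 1, 12, 3, -3), (5, 2, -3, 3, 0), (1, 1, 0, 3, -6),
               (1, 2, -15, 3, -3), (1, 1, 0, 6, -3)]

def dim(rep):
    # Total dimension: sum of SU(5) x SU(2) multiplet sizes
    return sum(c[0] * c[1] for c in rep)

def charge_trace(rep, idx):
    # Multiplicity-weighted sum of a U(1) charge (idx = 2: X, 3: A, 4: B);
    # must vanish because the charge comes from a traceless generator
    return sum(c[0] * c[1] * c[idx] for c in rep)
```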
Looking at the $36$ representation, at first it seems unlikely that we can produce a realistic model, given that it contains an undesirable $(5, 2, +7, +1, -1)$ component which will attain a chiral zero mode given the type of coupling in Eq. \[eq:yukawacoupling1\]. However, if we look at the $84$ representation, one of the fermionic mediator components, the $(10, 2, +9, 0, 0)$ component, is uncharged under both $A$ and $B$ and is thus completely decoupled from the domain wall and, therefore, delocalized. The other two fermionic mediator components, the $(5, 2, -3, 0, -3)$ and $(5, 2, -3, +3, 0)$ components, are uncharged under one of $A$ or $B$ and thus also do not attain chiral zero modes and are semi-delocalized, analogously to the additional quintet components of the $66$ in the $SU(12)$ model discussed previously. Similarly, the two decuplets $(10, 1, -6, -3, 0)$ and $(10, 1, -6, 0, +3)$ are also semi-delocalized. The quintet $(5, 1, -18, 0, 0)$ is fully decoupled from the wall and will attain a mass of order $M_{GUT}$ when we break $SU(5)_{V}$. This leaves only the anti-decuplet $(\overline{10}, 1, +6, -3, +3)$ along with a quintet $(5, 1, +12, +3, -3)$, an $SU(2)_{D}$ doublet $(1, 2, -15, +3, -3)$ and two $SU(5)_{V}\times{}SU(2)_{D}\times{}U(1)_{X}$-singlet components $(1, 1, 0, +3, -6)$ and $(1, 1, 0, +6, -3)$, which develop chiral zero modes. If we choose the coupling constant such that the $(\overline{10}, 1, +6, -3, +3)$ component develops a right-chiral zero mode, then the $(5, 1, +12, +3, -3)$ component develops a left-chiral zero mode, which means that the $SU(5)_{V}$ chiral multiplets arising from the $84$ are equivalent to a left-chiral combination of $5\oplus{}10$. The required combination is $\overline{5}\oplus{}10$, so from the $84$ we are guaranteed an additional $5$ which we must make massive; this can be done by localizing another left-chiral $\overline{5}$, readily achieved by embedding it in a second $9$ representation. 
Hence, the minimal required fermionic particle content to contain three generations of the SM fermions in this model is three copies of the combination $$\label{eq:su9fermioncontent} 9\oplus{}9\oplus{}84.$$ After forming this solution, we will have to include another adjoint Higgs field which will contain a component charged under the adjoint of $SU(5)_{V}$, in order to break $SU(5)_{V}$ to the Standard Model. As might be expected, we can embed an $SU(5)_{V}$ quintet Higgs in the $(5, 1, +2, -1, +1)$ component of the fundamental, and we can use any number of combinations of the various $SU(5)_{V}\times{}SU(2)_{D}$-singlet components embedded in the $9$, $36$ and $84$ to break the $U(1)$ subgroups associated with $X$, $A$ and $B$. Models based on Dvali-Shifman localization on non-Clash-of-Symmetries Domain Walls {#subsec:noncossolution} ---------------------------------------------------------------------------------- In this section, we briefly outline how we can alternatively use one of the non-Clash-of-Symmetries solutions, the $m=0$ or $m=6$ solutions from Sec. \[sec:solution\], as a basis to construct a realistic model. One of the immediate benefits of this is that it is very easy to ensure that one of these solutions is the most stable, since we can impose the additional $\eta{}\rightarrow{}\chi{}$, $\chi{}\rightarrow{}-\eta$ and $\eta{}\rightarrow{}-\eta{}$, $\chi{}\rightarrow{}-\chi{}$ symmetries, and then choose the coupling constant of the $Tr[\eta{}\chi{}]^2$ interaction to be negative. In a setup based on these solutions, we have to utilize the original Dvali-Shifman mechanism to localize $SU(5)_{V}$ and $SU(5)_{D}$, since the gauge bosons of the $SU(6)_{V}\times{}SU(6)_{D}$ group, which is respected in the interior of the wall at the level of symmetries, are otherwise delocalized. We simply achieve this with the addition of extra scalar fields which induce the breakings $SU(6)_{V,D}\rightarrow{}SU(5)_{V,D}$ in the interior of the wall. 
Thus, one of the costs of using the non-CoS walls is the addition of extra fields to the background scalar field configuration. For the non-CoS solutions, the symmetry respected in the interior of the wall is $SU(6)_{V}\times{}SU(6)_{D}\times{}U(1)_{A}$. Consider the $m=0$ solution, where $B=-A$. The $12$ and $66$ representations decompose under $SU(6)_{V}\times{}SU(6)_{D}\times{}U(1)_{A}$ as $$\label{eq:fundamentalrepnoncossol} 12 = (6, 1, -1)\oplus{}(1, 6, +1),$$ and $$\label{eq:rank2repnoncossol} 66 = (15, 1, -2)\oplus{}(6, 6, 0)\oplus{}(1, 15, +2).$$ Since $B = -A$ for the $m=0$ solution, if we couple a fermionic field in the $12$ representation to $\eta$ and $\chi$ according to the type of interaction in Eq. \[eq:yukawacoupling1\], then for a positive Yukawa coupling $h$ the $(6, 1, -1)$ component will experience a kink-like interaction and develop a left-chiral zero mode, while the $(1, 6, +1)$ component will experience an anti-kink interaction, attaining a right-chiral zero mode, and vice versa if $h$ is negative. For the same reason, if $h$ for a fermion in the $66$ representation is positive, the $(15, 1, -2)$ component attains a left-chiral zero mode and the $(1, 15, +2)$ component attains a right-chiral zero mode, and vice versa if $h$ is negative. Once again, the mixed bi-fundamental $(6, 6, 0)$ is completely decoupled from the domain wall and is completely delocalized, since for this component $A=B=0$. This component will then attain a mass at least as large as the GUT scale if we break $SU(6)_{V}\rightarrow{}SU(5)_{V}\times{}U(1)$ by introducing an additional adjoint scalar which forms a lump in the interior of the domain wall. That leaves the components charged under the $6$ and $15$ representations of $SU(6)_V$ and $SU(6)_{D}$ to attain localized chiral zero modes. 
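The bookkeeping above can be summarized compactly. The sketch below (ours; the sign convention for which charge sees a kink is illustrative, chosen to match the positive-$h$ case described in the text) checks the dimensions and $A$-tracelessness of the decompositions and encodes the zero-mode rule, including the decoupling of the $(6,6,0)$ component:

```python
# Components as (SU(6)_V dim, SU(6)_D dim, A charge); B = -A for m = 0
twelve = [(6, 1, -1), (1, 6, 1)]
sixty_six = [(15, 1, -2), (6, 6, 0), (1, 15, 2)]

def dim(rep):
    return sum(dv * dd for dv, dd, a in rep)

def a_trace(rep):
    return sum(dv * dd * a for dv, dd, a in rep)

def zero_mode(a, h=1):
    # Illustrative convention: for h > 0 a component with a < 0 sees a kink
    # (left-chiral zero mode), a > 0 an anti-kink (right-chiral), and a = 0
    # decouples from the wall entirely, as for the (6, 6, 0) component.
    if a == 0:
        return None
    return 'L' if h * a < 0 else 'R'
```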
Given that the visible $SU(6)_V$ $15$ component will contain an additional $SU(5)_V$ quintet component with the same chirality as the decuplet, we need to include two fundamentals, so that a localized quintet from one of them can form a mass term with the unwanted quintet from the visible $15$. Given that under $SU(6)_{V}\rightarrow{}SU(5)_{V}$, $6 = 5\oplus{}1$ and $15 = 10\oplus{}5$, by choosing the combination $12\oplus{}12\oplus{}66$, and choosing the background couplings such that the $(6, 1, -1)$ components in the two $12$ fermions attain right-chiral zero modes, and the $(15, 1, -2)$ component of the $66$ attains a left-chiral zero mode, the localized visible content will consist of a left-chiral $\overline{5}$ fermion and a left-chiral $10$ fermion as required, along with two right-chiral (sterile) neutrinos, and a Dirac $5$ fermion, which will have a mass of order $M_{GUT}$. In the dark sector, the $(1, 6, +1)$ components will attain left-chiral modes, and the $(1, 15, +2)$ components will attain right-chiral modes. To localize gauge groups in the dark sector, we must again utilize the ordinary Dvali-Shifman mechanism by spontaneously breaking $SU(6)_{D}$ to a subgroup. Unlike in the visible sector, we have a great deal of freedom in what we break $SU(6)_{D}$ down to: we could break it symmetrically to $SU(3)\times{}SU(2)\times{}U(1)$, yielding a mirror matter scenario, or asymmetrically to something else entirely. Hence, what the localized $(1, 6, +1)$ and $(1, 15, +2)$ components break down to depends on how we break $SU(6)_{D}$. Using non-CoS domain walls, we can actually produce a model with the simpler gauge group $SU(10)$, in which the breaking in the visible sector leads directly to the Standard Model. 
If we break $SU(10)$ to the same $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)$ subgroup on both sides of the wall, we can then localize the Standard Model gauge group by introducing an additional scalar field which induces the usual breaking $SU(5)_{V}\rightarrow{}SU(3)\times{}SU(2)\times{}U(1)$. To localize gauge groups in the dark sector, we need an additional scalar field which breaks $SU(5)_{D}$, and we have the same freedom in choosing what subgroup we break it to as before. Under $SU(10)\rightarrow{}SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)$, the anti-fundamental and rank two antisymmetric representations decompose as $$\label{eq:decupletnoncos} \overline{10} = (\overline{5}, 1, +1)\oplus{}(1, \overline{5}, -1),$$ and $$\label{eq:45noncos} 45 = (10, 1, -2)\oplus{}(5, 5, 0)\oplus{}(1, 10, +2).$$ Again, we see that the mixed component of the $45$, the $(5, 5, 0)$ component, is uncharged under the generator which induces the breaking $SU(10)\rightarrow{}SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)$, and is thus decoupled from the wall, and, for reasons stated previously, removed from the low-energy spectrum. Hence, if we pick the background Yukawa couplings appropriately and choose the combination $\overline{10}\oplus{}45$ for our fermionic particle content, the visible SM fermions embedded in the $(\overline{5}, 1, +1)$ and $(10, 1, -2)$ components will attain left-chiral zero modes, leading to the required visible content, while the dark matter fermions embedded in the $(1, \overline{5}, -1)$ and $(1, 10, +2)$ components will attain right-chiral zero modes. In this subsection, we have shown that realistic models can also be constructed from the non-CoS solutions. 
We showed that the non-CoS solutions from the $SU(12)$ model discussed throughout the majority of this paper can lead to a realistic model, and we showed that this scenario could be further refined and simplified by using a theory based on an $SU(10)$ gauge group, which leads directly to a domain-wall brane localized SM in the visible sector. The price for using the non-CoS solutions is that we must revert to the ordinary Dvali-Shifman mechanism, rather than the Clash-of-Symmetries mechanism, for localizing the gauge fields, and this in turn requires additional background scalar fields, greatly increasing the number of parameters of the scalar field theory generating the background field configuration. Conclusion {#sec:conclusion} ========== In this paper, we have shown how to construct a $4+1D$ theory based on domain-wall branes in which a realistic $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$ gauge theory was localized to a Clash-of-Symmetries domain wall, starting from a grand unified theory based on $SU(12)$. To motivate the addition of an extra dimension, we first argued that $3+1D$ grand unified theories based on $SU(N)\rightarrow{}SU(5)_{V}\times{}SU(N-5)_{D}\times{}U(1)$ were highly difficult to construct due to the existence of chiral fermions charged under representations of both the visible and dark gauge groups. We then constructed a scalar field theory based on $SU(12)$ in $4+1D$ with two adjoint scalar fields transforming under a discrete $\mathbb{Z}_{2}$ interchange symmetry. We chose parameters such that the theory had two disconnected vacuum manifolds with the topology of $SU(12)/[SU(6)\times{}SU(6)\times{}U(1)]$, which meant we could construct Clash-of-Symmetries domain-wall solutions which break $SU(12)$ to differently embedded copies of $SU(6)\times{}SU(6)\times{}U(1)$, leading to a further breaking to the overlap of these differently embedded groups in the interior of the domain wall. 
Furthermore, we showed that it was possible to choose parameters such that one of the domain-wall solutions which lead to a localized $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$ gauge theory was made the most stable. We then demonstrated that fermions could be localized to this $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$-respecting wall in a phenomenologically interesting and acceptable way, showing that it was possible to localize a left-chiral $\overline{5}\oplus{}10$ combination in the visible $SU(5)_{V}$ sector along with a right-chiral $\overline{5}\oplus{}10$ combination in the dark $SU(5)_{D}$ sector. Furthermore, we showed that the potentially troublesome $(5, 5)$ component charged under both $SU(5)_{V}$ and $SU(5)_{D}$ was completely decoupled from the wall and remained a vector-like $4+1D$ Dirac fermion; this means that this fermionic mediator will attain a GUT-scale mass in the interior of the wall and be removed from the spectrum when we include an additional background adjoint scalar field that performs the required breaking $SU(5)_{V}\rightarrow{}SU(3)_{c}\times{}SU(2)_{I}\times{}U(1)_{Y}$. We also showed that other undesirable components did not attain localized modes and could also be removed from the localized theory on the wall. This means that we have a localized theory on the wall which has a visible sector containing the Standard Model particles along with a hidden, dark sector which is completely sequestered from it at low energies. We showed that scalars could be localized to the wall, and we demonstrated that the parameters controlling the coupling of a fundamental scalar to the domain wall could be chosen to make certain $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$-covariant components have either tachyonic or positive definite squared masses. In particular, we showed that it was possible to choose parameters such that the visible and dark quintets could be made tachyonic, initiating symmetry breaking in the $SU(5)_{V}$ and $SU(5)_{D}$ sectors. 
Alternatively, we can make the singlet components, which are charged under the semi-delocalized $U(1)_{A}$ and $U(1)_{B}$ subgroups, tachyonic in order to break these troublesome Abelian gauge symmetries on the wall. Further analysis of the localization of scalars and spontaneous symmetry breaking in this model is left to later work; this work would include an analysis which takes into account the breaking of $SU(5)_{V}$ to the Standard Model, as well as an analysis of symmetric and asymmetric symmetry breaking scenarios in the dark sector. Given that a desirable goal would be to extend the above work and find other interesting scenarios leading to realistic models based on breaking a grand unified group $G$ to $G_{V}\times{}G_{D}$, we then outlined several other interesting potential models. We showed that another interesting model based on $SU(9)$ could lead to a localized theory with the gauge group $SU(5)_{V}\times{}SU(2)_{D}\times{}U(1)_{X}$, and we gave a set of representations for the fermions which could lead to a realistic theory without fermionic mediators. We also showed that the non-CoS domain walls in the $SU(12)$ model, in which $SU(5)_{V}$ and $SU(5)_{D}$ are localized by utilizing the original Dvali-Shifman mechanism, could be constructed, and that this scenario could be further refined to generate localized $SU(3)\times{}SU(2)\times{}U(1)$ groups in the visible and dark sectors by utilizing the corresponding non-CoS solutions for $SU(10)$, which first induce the breaking $SU(10)\rightarrow{}SU(5)\times{}SU(5)\times{}U(1)$. The work in this paper represents a first step toward using the Clash-of-Symmetries mechanism in particular to generate dark matter gauge groups and, potentially, asymmetric dark matter scenarios on a domain-wall brane. 
The next steps are to explore both symmetric and asymmetric breaking scenarios in this model with the introduction of additional background fields which break $SU(5)_{V}$ and $SU(5)_{D}$, and to explore the phenomenology in the visible and dark sectors in these scenarios. Another step would be a detailed calculation checking that, given the small energy differences between the different CoS solutions, the stability of one of the $SU(5)_{V}\times{}SU(5)_{D}\times{}U(1)_{X}$-generating solutions can be preserved under quantum corrections for some parameter choice. Such a calculation would perhaps have to be done first in a lower dimensional toy model. Another interesting direction for further work with the Clash-of-Symmetries mechanism would be to investigate whether it could alternatively be used to generate a gauged flavor symmetry instead of, or in addition to, a dark matter gauge symmetry. Acknowledgments {#acknowledgments .unnumbered} --------------- This work was supported in part by the Australian Research Council and the Commonwealth of Australia. BDC would like to thank Raymond Volkas for useful discussion and advice. BDC would also like to thank Stephen Lonsdale, Claudia Hagedorn and Michael Schmidt for some useful discussions.
--- abstract: 'Fully robust versions of the elastic net estimator are introduced for linear and logistic regression. The algorithms to compute the estimators are based on the idea of repeatedly applying the non-robust classical estimators to data subsets only. It is shown how outlier-free subsets can be identified efficiently, and how appropriate tuning parameters for the elastic net penalties can be selected. A final reweighting step improves the efficiency of the estimators. Simulation studies compare with non-robust and other competing robust estimators and reveal the superiority of the newly proposed methods. This is also supported by a reasonable computation time and by good performance in real data examples.' address: - 'Department of Statistics, Yildiz Technical University, 34220, Istanbul, Turkey' - 'Institute of Statistics and Mathematical Methods in Economics, Vienna University of Technology, Wiedner Hauptstraße 8-10, 1040, Vienna, Austria' author: - Fatma Sevinç Kurnaz - Irene Hoffmann - Peter Filzmoser title: Robust and sparse estimation methods for high dimensional linear and logistic regression --- Elastic net penalty, Least trimmed squares, C-step algorithm, High dimensional data, Robustness, Sparse estimation Introduction ============ Let us consider the linear regression model, which assumes a linear relationship between the predictors $\mathbf{X} \in \mathbb{R}^{n \times p}$ and the predictand $\mathbf{y} \in \mathbb{R}^{n \times 1}$, $$\label{lr} \mathbf{y}=\mathbf{X}\pmb{\beta}+\pmb{\varepsilon} ,$$ where $\pmb{\beta}=(\beta_1,\ldots ,\beta_p)^T$ are the regression coefficients and $\pmb{\varepsilon}$ is the error term, assumed to have a standard normal distribution. For simplicity we assume that $\mathbf{y}=(y_1,\ldots ,y_n)^T$ is centered to mean zero, and the columns of $\mathbf{X}$ are mean-centered and scaled to variance one. 
The ordinary least squares (OLS) regression estimator is the common choice in situations where the number of observations, $n$, in the data set is greater than the number of predictor variables, $p$. However, in the presence of multicollinearity among predictors, the OLS estimator becomes unreliable, and if $p$ exceeds $n$ it cannot even be computed. Several alternatives have been proposed in this case; here we focus on the class of shrinkage estimators which penalize the residual sum-of-squares. The ridge estimator uses an $l_2$ penalty on the regression coefficients [@Hoerl70], while the lasso estimator takes an $l_1$ penalty instead [@Tibshirani96]. Although this no longer allows for a closed form solution for the estimated regression coefficients, the lasso estimator is *sparse*, which means that some of the regression coefficients are shrunk exactly to zero. Lasso thus acts as a variable selection method by returning a smaller subset of variables being relevant for the model. This is appropriate in particular for high dimensional low sample size data sets ($n \ll p$), arising from applications in chemometrics, biometrics, econometrics, social sciences and many other fields, where the data include many uninformative variables which have no effect on the predictand or make only a very small contribution to the model. A limitation of the lasso estimator is that it can select at most $n$ variables when $n<p$. If $n$ is very small, or if the number of informative variables (variables which are relevant for the model) is expected to be greater than $n$, the model performance can become poor. 
As a way out, the elastic net (*enet*) estimator has been introduced [@Zou05], which combines both $l_1$ and $l_2$ penalties: $$\label{elnet} \hat{\pmb{\beta}}_{enet} = \operatorname*{arg\,min}_{\pmb{\beta}} \left\{\sum_{i=1}^n ( y_i-\mathbf{x}^T_i\pmb{\beta})^2 + \lambda P_{\alpha}(\pmb{\beta}) \right\} .$$ Here, $\mathbf{y}=(y_1,\ldots ,y_n)^T$, the observations $\mathbf{x}_i^T$ form the rows of $\mathbf{X}$, and the penalty term $P_{\alpha}$ is defined as $$\label{penalty} P_{\alpha}(\pmb{\beta})=(1-\alpha)\frac{1}{2} \lVert \pmb{\beta} \rVert_2^2 + \alpha \lVert \pmb{\beta} \rVert_1 =\sum_{j=1}^p \left[ (1-\alpha)\frac{1}{2} \beta_j^2 + \alpha \lvert \beta_j\rvert \right].$$ The entire strength of the penalty is controlled by the tuning parameter $\lambda\geq 0$. The other tuning parameter $\alpha$ is the mixing proportion of the ridge and lasso penalties and takes values in $\left[0,1 \right]$. The elastic net estimator is able to select variables like in lasso regression, and shrink the coefficients as in ridge regression. For an overview of sparse methods, see [@Filzmoser12]. A further limitation of the previously mentioned estimators is their lack of robustness against data outliers. In practice, the presence of outliers in data is quite common, and thus robust statistical methods are frequently used, see, for example [@Liang1; @Liang2]. In the linear regression setting, outliers may appear in the space of the predictand (so-called vertical outliers), or in the space of the predictor variables (leverage points) [@Maronna06]. The Least Trimmed Squares (LTS) estimator was among the first proposals of a regression estimator fully robust against both types of outliers [@RousseeuwL03]. 
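For concreteness, the elastic net objective of Eqs. (\[elnet\]) and (\[penalty\]) can be written down directly. The following minimal numpy sketch (function names are ours) makes the roles of $\lambda$ and $\alpha$ explicit:

```python
import numpy as np

def enet_penalty(beta, alpha):
    # P_alpha(beta) of Eq. (penalty): mixture of ridge (alpha=0) and
    # lasso (alpha=1) penalties
    return (1 - alpha) * 0.5 * np.sum(beta**2) + alpha * np.sum(np.abs(beta))

def enet_objective(beta, X, y, lam, alpha):
    # Eq. (elnet): residual sum of squares plus lambda * P_alpha(beta)
    resid = y - X @ beta
    return np.sum(resid**2) + lam * enet_penalty(beta, alpha)
```

Setting $\alpha=1$ recovers the lasso penalty $\lVert\pmb{\beta}\rVert_1$ and $\alpha=0$ the ridge penalty $\tfrac{1}{2}\lVert\pmb{\beta}\rVert_2^2$.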
It is defined as $$\label{LTS} \hat{\pmb{\beta}}_{LTS}= \operatorname*{arg\,min}_{\pmb{\beta}} \sum_{i=1}^hr_{(i)}^2(\pmb{\beta}) ,$$ where the $r_{(i)}$ are the ordered absolute residuals $\lvert r_{(1)}\rvert \leq \lvert r_{(2)}\rvert \leq \dots \leq \lvert r_{(n)}\rvert$, and $r_{i}=y_i-\mathbf{x}_i^T\pmb{\beta}$ [@Rousseeuw84]. The number $h$ is chosen between $\lfloor (n+p+1)/2\rfloor$ and $n$, where $\lfloor a \rfloor$ refers to the largest integer $\leq a$, and it determines the robustness properties of the estimator [@Rousseeuw84]. The LTS estimator also became popular due to the proposal of a quick algorithm for its computation, the so-called FAST-LTS algorithm [@Rousseeuw06]. The key feature of this algorithm is the “concentration step” or C-step, which is an efficient way to arrive at outlier-free data subsets where the OLS estimator can be applied. This only works for $n>p$, but recently the sparse LTS regression estimator has been proposed for high dimensional problems [@Alfons13]: $$\label{spLTS} \hat{\pmb{\beta}}_{sparseLTS} = \operatorname*{arg\,min}_{\pmb{\beta}} \left\{\sum_{i =1}^h r_{(i)}^2(\pmb{\beta}) + h\lambda \lVert \pmb{\beta} \rVert_1 \right\}.$$ This estimator adds an $l_1$ penalty to the objective function of the LTS estimator, and it can thus be seen as a robust counterpart of the lasso estimator. The sparse LTS estimator is robust to both vertical outliers and leverage points, and also a fast algorithm has been developed for its computation [@AlfonsR13]. The contribution of this work is twofold: a new sparse and robust regression estimator is proposed with combined $l_1$ and $l_2$ penalties. This robustified elastic net regression estimator overcomes the limitations of lasso-type estimators concerning the low number of variables in the models, and concerning the instability of the estimator in the case of high multicollinearity among the predictors [@Tibshirani96]. 
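The trimmed objective of Eq. (\[spLTS\]) is simple to express in code: sort the squared residuals and sum only the $h$ smallest, so that up to $n-h$ gross outliers contribute nothing. A minimal sketch (ours):

```python
import numpy as np

def sparse_lts_objective(beta, X, y, h, lam):
    # Eq. (spLTS): sum of the h smallest squared residuals plus an
    # l1 penalty scaled by h
    r2 = np.sort((y - X @ beta)**2)
    return np.sum(r2[:h]) + h * lam * np.sum(np.abs(beta))
```

With one gross vertical outlier and $h=n-1$, the outlier's residual is trimmed away and the objective at the true coefficients is unaffected by it.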
As a second contribution, a robust elastic net version of logistic regression is introduced for problems where the response $\mathbf{y}$ is a binary variable, encoded with $y_i\in \{0,1\}$ referring to the class memberships of two groups. The logistic regression model is $y_i=\pi_i+\varepsilon_i$, for $i=1,\ldots ,n$, where $\pi_i$ denotes the conditional probability for the $i$th observation, $$\pi_i=\mbox{Pr}(y_i=1|\mathbf{x}_i)=\frac{e^{\mathbf{x}^T_i\pmb{\beta}}}{1+e^{\mathbf{x}^T_i\pmb{\beta}}} , \label{eq:pi}$$ and $\varepsilon_i$ is the error term assumed to have binomial distribution. The most popular way to estimate the model parameters is the maximum likelihood (ML) estimator which is based on maximizing the log-likelihood function or, equivalently, minimizing the negative log-likelihood function, $$\label{MLlog} \hat{\pmb{\beta}}_{ML} = \operatorname*{arg\,min}_{\pmb{\beta}} \sum_{i=1}^n d(\mathbf{x}^T_i\pmb{\beta},y_i),$$ with the deviances $$\label{deviances} d(\mathbf{x}^T_i\pmb{\beta},y_i) = - y_i \log \pi_i - (1-y_i)\log (1-\pi_i) = - y_i\mathbf{x}^T_i\pmb{\beta}+ \log\left(1+e^{\mathbf{x}^T_i\pmb{\beta}}\right) .$$ The estimation of the model parameters with this method is not reliable when there is multicollinearity among the predictors and is not feasible when $p>n$. To solve these problems, Friedman et al. [@Friedman10] suggested minimizing a penalized negative log-likelihood function, $$\label{enetlog} \hat{\pmb{\beta}}_{enet} = \operatorname*{arg\,min}_{\pmb{\beta}} \left\{\sum_{i=1}^n d(\mathbf{x}^T_i\pmb{\beta},y_i) + n\lambda P_{\alpha}(\pmb{\beta}) \right\}.$$ Here, $P_{\alpha}(\pmb{\beta})$ is the elastic net penalty as given in Equation (\[penalty\]), and thus this estimator extends (\[elnet\]) to the logistic regression setting. Using the elastic net penalty also solves the non-existence problem of the estimator in the case of non-overlapping groups [@Albert84; @Friedman10; @FriedmanR16]. 
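The deviance of Eq. (\[deviances\]) and the penalized negative log-likelihood of Eq. (\[enetlog\]) can be written compactly, using the numerically stable form $\log(1+e^{\eta})=\mathrm{logaddexp}(0,\eta)$ to avoid overflow for large $\lvert\mathbf{x}_i^T\pmb{\beta}\rvert$ (a sketch of ours, not the authors' implementation):

```python
import numpy as np

def deviance(eta, y):
    # Eq. (deviances) with eta = x^T beta:  d = -y * eta + log(1 + exp(eta));
    # np.logaddexp(0, eta) computes log(1 + exp(eta)) without overflow
    return -y * eta + np.logaddexp(0.0, eta)

def penalized_negloglik(beta, X, y, lam, alpha):
    # Eq. (enetlog): summed deviances plus n * lambda * P_alpha(beta)
    eta = X @ beta
    pen = (1 - alpha) * 0.5 * np.sum(beta**2) + alpha * np.sum(np.abs(beta))
    return np.sum(deviance(eta, y)) + len(y) * lam * pen
```

At $\pmb{\beta}=0$ every observation contributes $\log 2$, and a correctly classified observation with large $\lvert\eta\rvert$ contributes essentially zero.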
Robustness can be achieved by trimming the penalized log-likelihood function, and using weights as proposed in the context of robust logistic regression [@Croux03; @Bianco96]. These weights can also be applied in a reweighting step, which increases the efficiency of the robust elastic net logistic regression estimator. The outline of this paper is as follows. In Section \[sec:enetlts\], we introduce the robust and sparse linear regression estimator and provide a detailed algorithm for its computation. Section \[sec:enetlogit\] presents the robust elastic net logistic regression estimator; details that differ from the linear regression algorithm are pointed out there. Section \[sec:tuningpara\] explains how the tuning parameters for the proposed estimators can be selected; we prefer an approach based on cross-validation. Since LTS estimators possess a rather low statistical efficiency, a reweighting step is introduced in Section \[sec:rewight\] to increase the efficiency. The properties of the proposed estimators are investigated in simulation studies in Section \[sec:simulations\], and Section \[sec:applications\] shows the performance on real data examples. Section \[sec:comptime\] provides some insight into the computation time of the algorithms, and the final Section \[sec:conclude\] concludes. Robust and sparse linear regression with elastic net penalty {#sec:enetlts} ============================================================ A robust and sparse elastic net estimator in linear regression can be defined with the objective function $$\label{objectlinear} Q(H,\pmb{\beta}) = \sum_{i \in H} ( y_i-\mathbf{x}^T_i\pmb{\beta})^2 + h\lambda P_{\alpha}(\pmb{\beta}) ,$$ where $H \subseteq \{1,2,\dots,n\}$ with $\lvert H \rvert=h$, $\lambda \in \lbrack 0,\lambda_0 \rbrack$, and $P_{\alpha}$ indicates the elastic net penalty with $\alpha \in \lbrack 0,1 \rbrack$ as in Equation (\[penalty\]). 
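For illustration, the objective function $Q(H,\pmb{\beta})$ can be evaluated as follows. Since Equation (\[penalty\]) lies outside this excerpt, the code assumes the common elastic net convention $P_{\alpha}(\pmb{\beta})=\frac{1-\alpha}{2}\lVert\pmb{\beta}\rVert_2^2+\alpha\lVert\pmb{\beta}\rVert_1$; all function names are ours:

```python
import numpy as np

def enet_penalty(beta, alpha):
    """Elastic net penalty, assuming the common convention
    P_alpha(beta) = (1-alpha)/2 * ||beta||_2^2 + alpha * ||beta||_1."""
    return (1 - alpha) / 2 * np.sum(beta ** 2) + alpha * np.sum(np.abs(beta))

def trimmed_objective(X, y, H, beta, alpha, lam):
    """Objective Q(H, beta): squared residuals on the h-subset H,
    plus the penalty scaled by h*lambda."""
    resid = y[H] - X[H] @ beta
    return np.sum(resid ** 2) + len(H) * lam * enet_penalty(beta, alpha)
```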
We call this estimator the *enet-LTS* estimator, since it uses a trimmed sum of squared residuals, like the sparse LTS estimator (\[spLTS\]). The minimum of the objective function (\[objectlinear\]) determines the optimal subset of size $h$, $$\label{Hopt} H_{opt} = \operatorname*{arg\,min}_{H \subseteq \{1,2,\dots,n\}:\lvert H \rvert=h} Q(H,\hat{\pmb{\beta}}_H) ,$$ which is supposed to be outlier-free. The coefficient estimates $\hat{\pmb{\beta}}_H$ depend on the subset $H$. The enet-LTS estimator is given for this subset $H_{opt}$ by $$\label{enetLTS} \hat{\pmb{\beta}}_{enetLTS}=\operatorname*{arg\,min}_{\pmb{\beta}} Q(H_{opt},\pmb{\beta}).$$ It is not trivial to identify this optimal subset, and practically one has to use an algorithm to approximate the solution. This algorithm uses C-steps: Suppose that the current $h$-subset in the $k$th iteration of the algorithm is denoted by $H_k$, and the resulting estimator by $\hat{\pmb{\beta}}_{H_k}$. Then the next subset $H_{k+1}$ is formed by the indexes of those observations which correspond to the smallest $h$ squared residuals $$\label{eq:CSTEPreg} r^2_{k,i}=(y_i-\mathbf{x}^T_i\hat{\pmb{\beta}}_{H_k})^2, \quad \mbox{ for } i=1,\ldots ,n.$$ If $\hat{\pmb{\beta}}_{H_{k+1}}$ denotes the estimator based on $H_{k+1}$, then by construction of the $h$-subsets it follows immediately that $$\label{eq:CSTEPobjf} Q(H_{k+1},\hat{\pmb{\beta}}_{H_{k+1}})\leq Q(H_{k+1},\hat{\pmb{\beta}}_{H_{k}}) \leq Q(H_{k},\hat{\pmb{\beta}}_{H_{k}}) .$$ This means that the C-steps decrease the objective function (\[objectlinear\]) successively, and lead to a local optimum after convergence. The global optimum is approximated by performing the C-steps with several initial subsets. However, in order to keep the runtime of the algorithm low, it is crucial that the initial subsets are chosen carefully. 
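A single C-step can be sketched as follows. The inner model fit is left abstract, since in the enet-LTS algorithm it would be an elastic net fit on the rows in the current subset; the plain least squares fit used in the test below is only a stand-in:

```python
import numpy as np

def c_step(X, y, H, h, fit):
    """One concentration step: fit the model on the current h-subset H,
    then return the indexes of the h smallest squared residuals,
    computed over all n observations."""
    beta = fit(X[H], y[H])            # fit on the current subset only
    r2 = (y - X @ beta) ** 2          # residuals of all observations
    return np.argsort(r2)[:h]         # indexes of the h smallest
```

Iterating this map can only decrease the trimmed objective, which is the content of the chain of inequalities (\[eq:CSTEPobjf\]).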
As motivated in [@Alfons13], for a certain combination of the penalty parameters $\alpha$ and $\lambda$, elemental subsets are created consisting of the indexes of three randomly selected observations. Using only three observations increases the probability of having no outliers in the elemental subsets. Let us denote these elemental subsets by $$\label{3obs} H_{el}^s=\{j_1^s,j_2^s,j_3^s\} ,$$ where $s \in \{1,2,\dots,500 \}$. The resulting estimators based on the three observations are denoted by $\hat{\pmb{\beta}}_{H_{el}^s}$. Now the squared residuals $(y_i-\mathbf{x}_i^T \hat{\pmb{\beta}}_{H_{el}^s})^2$ can be computed for all observations $i=1,\ldots ,n$, and two C-steps are carried out, starting with the $h$-subset defined by the indexes of the smallest squared residuals. Then only those $10$ $h$-subsets with the smallest values of the objective function (\[objectlinear\]) are kept as candidates. With these candidate subsets, the C-steps are performed until convergence (no further decrease), and the best subset is defined as the one with the smallest value of the objective function. This *best subset* also defines the estimator for this particular combination of $\alpha$ and $\lambda$. In principle, one can apply this procedure for a grid of values in the intervals $\alpha \in \lbrack 0,1 \rbrack$ and $\lambda \in \lbrack 0,\lambda_0 \rbrack$. In practice, this may still be quite time consuming, and therefore, for a new parameter combination, the best subset of the neighboring grid value of $\alpha$ and/or $\lambda$ is taken, and the C-steps are started from this best subset until convergence. This technique, called *warm starts*, is repeated for each combination over the grid of $\alpha$ and $\lambda$ values, and thus the start based on the elemental subsets is carried out only once. The choice of the optimal tuning parameters $\alpha_{opt}$ and $\lambda_{opt}$ is detailed in Section \[sec:tuningpara\]. 
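The creation of elemental subsets and their expansion to initial $h$-subsets can be sketched as follows (names are ours; the two subsequent C-steps per start and the selection of the 10 best candidates are omitted):

```python
import numpy as np

def initial_h_subsets(X, y, h, fit, n_subsets=500, seed=None):
    """Draw elemental subsets of three observations each, fit the model on
    them, and expand every elemental subset to the h-subset given by the
    indexes of the h smallest squared residuals."""
    rng = np.random.default_rng(seed)
    n = len(y)
    starts = []
    for _ in range(n_subsets):
        el = rng.choice(n, size=3, replace=False)   # elemental subset
        beta = fit(X[el], y[el])
        r2 = (y - X @ beta) ** 2
        starts.append(np.argsort(r2)[:h])           # initial h-subset
    return starts
```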
The subset corresponding to the optimal tuning parameters is the optimal subset of size $h$. The enet-LTS estimator is then calculated on the optimal subset with $\alpha_{opt}$ and $\lambda_{opt}$. Robust and sparse logistic regression with elastic net penalty {#sec:enetlogit} ============================================================== Based on the definition (\[enetlog\]) of the elastic net logistic regression estimator, it is straightforward to define the objective function of its robust counterpart based on trimming, $$\label{objlog} Q(H,\pmb{\beta}) = \sum_{i \in H} d(\mathbf{x}^T_i\pmb{\beta},y_i) + h\lambda P_{\alpha}(\pmb{\beta}) ,$$ where again $H \subseteq \{1,2,\dots,n\}$ with $\lvert H \rvert=h$, and $P_{\alpha}$ is the elastic net penalty as defined in Equation (\[penalty\]). As outlined in the previous Section \[sec:enetlts\], the task is to find the optimal subset which minimizes the objective function and defines the robust sparse elastic net estimator for logistic regression. It turns out that the algorithm explained previously in the linear regression setting can be successfully used to find the approximate solution. In the following we explain the modifications that need to be carried out. C-steps: : In the linear regression case, the C-steps were based on the squared residuals (\[eq:CSTEPreg\]). Now the $h$-subsets are determined according to the indexes of those observations with the smallest values of the deviances $d(\mathbf{x}^T_i\hat{\pmb{\beta}}_{H_{k}},y_i)$. However, here one needs to make sure that the two groups are represented in the same proportions as in the original data. Denote by $n_0$ and $n_1$ the numbers of observations in the two groups, with $n_0+n_1=n$. Then $h_0=\lfloor (n_0+1) h/n\rfloor$ and $h_1=h-h_0$ define the group sizes in each $h$-subset. 
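The proportional split of the subset size $h$ over the two groups is a one-line computation; for illustration:

```python
import math

def group_subset_sizes(n0, n1, h):
    """Split an h-subset proportionally over the two classes:
    h0 = floor((n0+1)*h/n) and h1 = h - h0, with n = n0 + n1."""
    n = n0 + n1
    h0 = math.floor((n0 + 1) * h / n)
    return h0, h - h0
```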
A new $h$-subset is created with the $h_0$ indexes of the smallest deviances $d(\mathbf{x}^T_i\hat{\pmb{\beta}}_{H_{k}},y_i=0)$ and with the $h_1$ indexes of the smallest deviances $d(\mathbf{x}^T_i\hat{\pmb{\beta}}_{H_{k}},y_i=1)$. Elemental subsets: : In the linear regression case, the elemental subsets consisted of the indexes of three randomly selected observations, see (\[3obs\]). Now four observations are randomly selected to form the elemental subsets, two from each group. This allows the estimator to be computed, and the two C-steps are based on the $h$ smallest values of the deviances. As before, this is carried out for 500 elemental subsets, and only the “best” 10 $h$-subsets are kept. Here, “best” refers to an evaluation that is borrowed from a robustified deviance measure proposed in Croux and Haesbroeck [@Croux03] in the context of robust logistic regression (but not in high dimension). These authors replace the deviance function (\[deviances\]) used in (\[MLlog\]) by a function $\varphi_{BY}$ to define the so-called Bianco-Yohai (BY) estimator $$\hat{\pmb{\beta}}_{BY} = \operatorname*{arg\,min}_{\pmb{\beta}} \sum_{i=1}^n \varphi_{BY}(\mathbf{x}^T_i\pmb{\beta};y_i) , \label{BY}$$ a highly robust logistic regression estimator, see also [@Bianco96]. The form of the function $\varphi_{BY}$ is shown in Figure \[fig:phifunc\], see [@Croux03] for details. We use this function as follows: Positive scores $\mathbf{x}^T_i\hat{\pmb{\beta}}$ of group 1, i.e. $y_i=1$, refer to correct classification and receive the highest values of $\varphi_{BY}$, while negative scores refer to misclassification, with small or zero $\varphi_{BY}$ values. For the scores of group 0 we have the reverse behavior, see Figure \[fig:phifunc\]. When evaluating an $h$-subset, the sum of the $h$ values $\varphi_{BY}(\mathbf{x}^T_i\hat{\pmb{\beta}}_H)$ for $i\in H$ is computed, and this sum should be as large as possible. 
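The evaluation of an $h$-subset by this sum can be sketched as below. We do not reproduce the exact form of $\varphi_{BY}$ from [@Croux03] here; the sigmoid used in the test is only a stand-in with the same qualitative behavior (bounded, rewarding scores on the correct side of zero):

```python
import numpy as np

def subset_score(scores, y, phi):
    """Sum of bounded score-function values over an h-subset: scores of
    group 0 are mirrored so that, for both groups, correctly classified
    observations contribute most, while the influence of any single
    observation remains bounded."""
    signed = np.where(y == 1, scores, -scores)
    return phi(signed).sum()
```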
This means that we aim at identifying an $h$-subset where the groups are separated as much as possible. Points on the wrong side have almost no contribution, but also the contribution of outliers on the correct side is bounded. In this way, outliers will not dominate the sum. With the best 10 $h$-subsets we continue the C-steps until convergence. Finally, the subset with the largest sum of $\varphi_{BY}(\mathbf{x}^T_i\hat{\pmb{\beta}}_H)$ over all $i\in H$ forms the best index set. ![\[fig:phifunc\]Function $\varphi_{BY}$ used for evaluating an $h$-subset, based on the scores $\mathbf{x}^T_i\hat{\pmb{\beta}}$ for the two groups.](phifunc2){width="70.00000%"} The selection of the optimal parameters $\alpha_{opt}$ and $\lambda_{opt}$ is discussed in Section \[sec:tuningpara\]. The subset corresponding to these optimal tuning parameters is defined as the optimal subset of size $h$. The enet-LTS logistic regression estimator is then calculated on the optimal subset with $\alpha_{opt}$ and $\lambda_{opt}$. Note that at the beginning of the algorithm for linear regression, the predictand is centered, and the predictor variables are centered robustly by the median and scaled by the MAD. Within the C-steps of the algorithm, we additionally mean-center the response variable, and center and scale the predictors by their arithmetic means and standard deviations, calculated on each current subset, see also [@Alfons13]. The same procedure is applied for logistic regression, except for centering the predictand. In the end, the coefficients are back-transformed to the original scale. Selection of the tuning parameters {#sec:tuningpara} ================================== Sections \[sec:enetlts\] and \[sec:enetlogit\] outlined the algorithms to arrive at a best subset for robust elastic net linear and logistic regression, for each combination of the tuning parameters $\alpha \in [0,1]$ and $\lambda \in [0,\lambda_0]$. 
In this section we define the strategy to select the optimal combination $\alpha_{opt}$ and $\lambda_{opt}$, leading to the optimal subset. For this purpose we use $k$-fold cross-validation (CV) on those best subsets of size $h$, with $k=5$. In more detail, for $k$-fold CV, the data are randomly split into $k$ blocks of approximately equal size. In case of logistic regression, each block needs to consist of observations from both classes with approximately the same class proportions as in the complete data set. Each block is left out once, the model is fitted to the “training data” contained in the remaining $k-1$ blocks, using a fixed parameter combination for $\alpha$ and $\lambda$, and it is applied to the left-out block with the “test data”. In this way, $h$ fitted values are obtained from $k$ models, and they are compared to the corresponding original response by using the following evaluation criteria: - For linear regression we take the root mean squared prediction error (RMSPE) $$\label{eq:cvrmspe} \mathrm{RMSPE}(\alpha,\lambda) =\sqrt{\frac{1}{h}\sum_{i=1}^{h} r_i^2 (\hat{\pmb{\beta}}_{\alpha,\lambda})}$$ where $r_{i}=y_i-\mathbf{x}^T_i\hat{\pmb{\beta}}_{\alpha,\lambda}$ denotes the test set residuals from the models estimated on the training sets with a specific $\alpha$ and $\lambda$ (for simplicity we omit here the index $k$ denoting the models where the $k$-th block was left out and the corresponding test data from this block). - For logistic regression we use the mean of the negative log-likelihoods or deviances (MNLL) $$\label{eq:cvlog} \mathrm{MNLL}(\alpha,\lambda) = \frac{1}{h}\sum_{i=1}^{h} d_{i}(\hat{\pmb{\beta}}_{\alpha,\lambda}),$$ where $d_{i}=d(\mathbf{x}^T_i\hat{\pmb{\beta}}_{\alpha,\lambda},y_i)$ denotes the test set deviances from the models estimated on the training sets with a specific $\alpha$ and $\lambda$. 
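The two cross-validation criteria can be sketched as follows for a single test block (names are ours; the aggregation over the $k$ folds and the five repetitions is omitted):

```python
import numpy as np

def rmspe(X_test, y_test, beta):
    """Root mean squared prediction error (eq:cvrmspe) on a test block."""
    r = y_test - X_test @ beta
    return np.sqrt(np.mean(r ** 2))

def mnll(X_test, y_test, beta):
    """Mean of the negative log-likelihoods (eq:cvlog) on a test block,
    using the deviance form -y*eta + log(1 + exp(eta))."""
    eta = X_test @ beta
    return np.mean(-y_test * eta + np.log1p(np.exp(eta)))
```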
Note that the evaluation criteria given by (\[eq:cvrmspe\]) and (\[eq:cvlog\]) are robust against outliers, because they are based on the best subsets of size $h$, which are supposed to be outlier-free. In order to obtain more stable results, we repeat the $k$-fold CV five times and take the average of the corresponding evaluation measure. Finally, the optimal parameters $\alpha_{opt}$ and $\lambda_{opt}$ are defined as the pair for which the evaluation criterion gives the minimal value. The corresponding best subset is determined as the optimal subset. Note that the optimal pair $\alpha_{opt}$ and $\lambda_{opt}$ is searched for on a grid of values $\alpha \in [0,1]$ and $\lambda \in [0,\lambda_0]$. In our experiments we used 41 equally spaced values for $\alpha$, and $\lambda$ was varied in steps of size $0.025\lambda_0$. For determining $\lambda_0$ in the linear regression case we used the same approach as in Alfons et al. [@Alfons13], which is based on the Pearson correlation between $y$ and the $j$th predictor variable $x_j$ on winsorized data. For logistic regression we replaced the Pearson correlation by a robustified point-biserial correlation: denote by $n_0$ and $n_1$ the group sizes of the two groups, and by $m_j^0$ and $m_j^1$ the medians of the $j$th predictor variable for the data from the two groups, respectively. Then the robustified point-biserial correlation between $y$ and $x_j$ is defined as $$r_{pb}(y,x_j)=\frac{m_j^1-m_j^0}{\mbox{MAD}(x_j)}\cdot \sqrt{\frac{n_0n_1}{n(n-1)}} ,$$ where $\mbox{MAD}(x_j)$ is the MAD of $x_j$, and $n=n_0+n_1$. Reweighting step {#sec:rewight} ================ The LTS estimator has a low efficiency, and thus it is common to use a reweighting step [@RousseeuwL03]. This idea is also used for the estimators introduced here. Generally, in a reweighting step the outliers according to the current model are identified and downweighted. 
For the linear regression model we use the same reweighting scheme as proposed in Alfons et al. [@Alfons13], which is based on standardized residuals. In case of logistic regression we compute the Pearson residuals, which are approximately standard normally distributed and given by $$r_i^s=\frac{y_i-\pi_i}{\sqrt{\pi_i\left(1-\pi_i\right)}} , \label{pearson}$$ with $\pi_i$ the conditional probabilities from (\[eq:pi\]). For simplicity, denote the standardized residuals from the linear regression case also by $r_i^s$. Then the weights are defined by $$w_i=\begin{cases} 1, & \mbox{ if } \lvert r_i^s \rvert \leq \Phi^{-1}(1-\delta) \\ 0, & \mbox{ if } \lvert r_i^s \rvert > \Phi^{-1}(1-\delta) \end{cases} \quad i=1,2,\dots,n,$$ where $\delta=0.0125$, such that $2.5\%$ of the observations are flagged as outliers in the normal model. The reweighted enet-LTS estimator is defined as $$\label{eq:rewest} \hat{\pmb{\beta}}_{reweighted} = \operatorname*{arg\,min}_{\pmb{\beta}} \left\{\sum_{i=1}^n w_i f(\mathbf{x}_i;y_i) + \lambda_{upd} n_w P_{\alpha_{opt}}(\pmb{\beta}) \right\},$$ where the $w_i$, $i=1,\dots,n$, are the binary weights (according to the current model), $n_w=\sum_{i=1}^n w_i$, and $f$ corresponds to the squared residuals for linear regression or to the deviances in case of logistic regression. Since $h \leq n_w$, and because the optimal parameters $\alpha_{opt}$ and $\lambda_{opt}$ have been derived with $h$ observations, the penalty can act (slightly) differently in (\[eq:rewest\]) than for the raw estimator. For this reason, the parameter $\lambda_{opt}$ has to be updated, while the parameter $\alpha_{opt}$, regulating the tradeoff between the $l_1$ and $l_2$ penalties, is kept the same. The updated parameter $\lambda_{upd}$ is determined by $5$-fold CV, with the simplification that $\alpha_{opt}$ is already fixed. 
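The weighting rule can be sketched as follows, using the standard normal quantile function from the Python standard library (the function name is ours):

```python
import numpy as np
from statistics import NormalDist

def outlier_weights(r_std, delta=0.0125):
    """Binary weights from standardized (or Pearson) residuals: an
    observation is flagged as outlier if |r| exceeds the (1-delta)
    standard normal quantile, so that about 2*delta = 2.5% of clean
    normally distributed data would be flagged."""
    cut = NormalDist().inv_cdf(1.0 - delta)   # approximately 2.24
    return (np.abs(r_std) <= cut).astype(float)
```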
Simulation studies {#sec:simulations} ================== In this section, the performance of the new estimators is compared with other sparse estimators in various scenarios. We consider both the raw and the reweighted versions of the enet-LTS estimators, in order to show how the reweighting step improves the methods. The raw and reweighted enet-LTS estimators are compared with their classical, non-robust counterparts, the linear and logistic regression estimators with elastic net penalty [@Friedman10]. In case of linear regression we also compare with the reweighted sparse LTS estimator of [@Alfons13]. All robust estimators are calculated taking the subset size $h=\lfloor (n+1)\cdot 0.75\rfloor$, such that their performances are directly comparable. For each replication, we choose the optimal tuning parameters $\alpha_{opt}$ and $\lambda_{opt}$ over the grids of $\alpha$ and $\lambda$ values with 5-times repeated $5$-fold CV, as described in Section \[sec:tuningpara\]. To select the tuning parameters for the classical estimators with elastic net penalty, we first use the same grid for $\alpha$, namely $\alpha \in [0,1]$ with 41 equally spaced grid points. Then we use $5$-fold CV as provided by the R package *glmnet*, which automatically checks the model quality for a sequence of values for $\lambda$, taking the mean squared error as an evaluation criterion. Finally, the tuning parameters corresponding to the smallest value of the minimum cross-validated error are determined as the optimal tuning parameters. In order to be coherent with our evaluation, the tuning parameters for the sparse LTS estimator are determined in the same way as for the enet-LTS estimator. All simulations are carried out in R [@R]. Note that we simulated the data sets with intercept. 
As described at the end of Section \[sec:enetlogit\], the data are centered and scaled at the beginning of the algorithm, and only in the final step the coefficients are back-transformed to the original scale, where also the estimate of the intercept is computed. ***Sampling schemes for linear regression*:** We consider two different scenarios, generating a “low dimensional” data set with $n=150$ and $p=60$ and a “high dimensional” data set with $n=50$ and $p=100$. We generate a data matrix where the variables are forming correlated blocks, $\mathbf{X}=(\mathbf{X}_{a_1},\mathbf{X}_{a_2},\mathbf{X}_b)$, where $\mathbf{X}_{a_1}$, $\mathbf{X}_{a_2}$ and $\mathbf{X}_b$ have the dimensions $n \times p_{a_1}$, $n \times p_{a_2}$ and $n \times p_b$, with $p=p_{a_1}+p_{a_2}+p_b$. Such a block structure can be assumed in many applications, and it mimics different underlying hidden processes. The observations of the blocks are generated independently from each other, from multivariate normal distributions $\mathcal{N}_{p_{a_1}}(\mathbf{0},\mathbf{\Sigma}_{a_1})$ with entries $(\mathbf{\Sigma}_{a_1})_{jk}=\rho_{a_1}^{\lvert j-k \rvert}$ for $1 \leq j,k \leq p_{a_1}$, $\mathcal{N}_{p_{a_2}}(\mathbf{0},\mathbf{\Sigma}_{a_2})$ with $(\mathbf{\Sigma}_{a_2})_{jk}=\rho_{a_2}^{\lvert j-k \rvert}$ for $1 \leq j,k \leq p_{a_2}$, and $\mathcal{N}_{p_b}(\mathbf{0},\mathbf{\Sigma}_b)$ with $(\mathbf{\Sigma}_b)_{jk}=\rho_b^{\lvert j-k \rvert}$ for $1 \leq j,k \leq p_b$, respectively. While the first two blocks belong to the informative variables with sizes of $p_{a_1}=0.05p$ and $p_{a_2}=0.05p$, the third block represents uninformative variables with $p_b=0.9p$. Furthermore, we take $\rho_{a_1}=\rho_{a_2}=0.9$ to allow for a high correlation among the informative variables, and $\rho_b=0.2$ to have low correlation among the uninformative variables. 
To create sparsity, the true parameter vector $\pmb{\beta}$ consists of zeros for the last 90% of the entries referring to the uninformative variables, while the first 10% of the entries are assigned to one. The response variable is calculated by $$y_i=1+\mathbf{x}_i^T\pmb{\beta}+\varepsilon_i , \label{eq:predictand_lin}$$ where the error term $\varepsilon_i$ is distributed according to a standard normal distribution $\mathcal{N}(0,1)$, for $i=1,\dots,n$. This is the design for the simulations with clean data. For the simulation scenarios with outliers we replace the first $10\%$ of the observations of the block of informative variables by values coming from independent normal distributions $\mathcal{N}(20,1)$ for each variable. Further, the error terms for these $10\%$ outliers are replaced by values from $\mathcal{N}(20\hat{\sigma}_y,1)$ instead of $\mathcal{N}(0,1)$, where $\hat{\sigma}_y$ represents the estimated standard deviation of the clean predictand vector. In this way, the contaminated data consist of both vertical outliers and leverage points. ***Sampling schemes for logistic regression*:** We also consider two different scenarios for logistic regression, a “low dimensional” data set with $n=150$ and $p=50$ and a “high dimensional” data set with $n=50$ and $p=100$. The data matrix is $\mathbf{X}=(\mathbf{X}_a,\mathbf{X}_b)$, where $\mathbf{X}_a$ has the dimension $n \times p_a$ and $\mathbf{X}_b$ is of dimension $n \times p_b$, with $p=p_a+p_b$. The data matrices are generated independently from $\mathcal{N}_{p_a}(\mathbf{0},\mathbf{\Sigma}_a)$ with $\mathbf{\Sigma}_a=\rho_a^{\lvert j-k \rvert}$, $1 \leq j$, $k \leq p_a$, and $\mathcal{N}_{p_b}(\mathbf{0},\mathbf{\Sigma}_b)$ with $\mathbf{\Sigma}_b=\rho_b^{\lvert j-k \rvert}$, $1 \leq j$, $k \leq p_b$, respectively. While the first block consists of the informative variables with $p_a=0.1p$, the second block represents uninformative variables with $p_b=0.9p$. 
We take $\rho_a=0.9$ for a high correlation among the informative variables, and $\rho_b=0.5$ for moderate correlation among the uninformative variables. The coefficient vector $\pmb{\beta}$ consists of ones for the first 10% of the entries, and zeros for the remaining uninformative block. The elements of the error term $\varepsilon_i$ are generated independently from $\mathcal{N}(0,1)$. The grouping variable is then generated according to the model $$\label{eq:predictand} y_i=\begin{cases} 0, & \mbox{ if } 1+\mathbf{x}_i^T\pmb{\beta}+\varepsilon_i \leq 0\\ 1, & \mbox{ if } 1+\mathbf{x}_i^T\pmb{\beta}+\varepsilon_i > 0 \end{cases} \quad i=1,2,\dots,n.$$ With this setting, both groups are of approximately the same size. Contamination is introduced by adding outliers only to the informative variables. Denote by $n_0$ the number of observations in class 0. Then the first $\lfloor 0.1 n_0 \rfloor$ observations of group 0 are replaced by values generated from $\mathcal{N}(20,1)$. In order to create “vertical” outliers in addition to leverage points, we assign those first $\lfloor 0.1 n_0 \rfloor$ observations of class 0 a wrong class membership. ***Performance measures*:** For the evaluation of the different estimators, training and test data sets are generated according to the explained sampling schemes. The models are fit to the training data and evaluated on the test data. The test data are always generated without outliers. 
As performance measures we use the root mean squared prediction error (RMSPE) for linear regression, $$\label{eq:rmspe} \mathrm{RMSPE}(\hat{\pmb{\beta}})=\sqrt{\frac{1}{n}\sum_{i=1}^n\left(y_i-\hat{\beta}_0-\mathbf{x}_i^T\hat{\pmb{\beta}} \right)^2} ,$$ and the mean of the negative log-likelihoods or deviances (MNLL) for logistic regression, $$\label{eq:nll} \mathrm{MNLL}(\hat{\pmb{\beta}}) = \frac{1}{n}\sum_{i=1}^{n} d(\hat{\beta}_0+\mathbf{x}^T_i\hat{\pmb{\beta}},y_i) ,$$ where $y_i$ and $\mathbf{x}_i$, $i=1,\dots,n$, indicate the observations of the test data set, $\hat{\pmb{\beta}}$ denotes the coefficient vector and $\hat{\beta}_0$ stands for the estimated intercept term obtained from the training data set. In logistic regression we also calculate the misclassification rate (MCR), defined as $$\mathrm{MCR}=\frac{m}{n} \label{eq:mcr}$$ where $m$ is the number of misclassified observations from the test data after fitting the model on the training data. Further, we consider the precision of the coefficient estimate as a quality criterion, defined by $$\label{eq:bias} \mathrm{PRECISION}(\hat{\pmb{\beta}})=\sqrt{\sum_{j=0}^{p}\left(\beta_j-\hat{\beta}_j \right)^2}.$$ In order to compare the sparsity of the coefficient estimators, we evaluate the False Positive Rate (FPR) and the False Negative Rate (FNR), defined as $$\label{eq:fpr} \mathrm{FPR}(\hat{\pmb{\beta}})=\frac{\lvert \{ j=0,\dots,p:\hat{\beta}_j \neq 0 \wedge \beta_j=0 \} \rvert}{\lvert \{ j=0,\dots,p:\beta_j=0\} \rvert},$$ $$\label{eq:fnr} \mathrm{FNR}(\hat{\pmb{\beta}})=\frac{\lvert \{ j=0,\dots,p:\hat{\beta}_j=0 \wedge \beta_j \neq 0 \} \rvert}{\lvert \{ j=0,\dots,p:\beta_j \neq 0\} \rvert}.$$ The FPR is the proportion of non-informative variables that are incorrectly included in the model. On the other hand, the FNR is the proportion of informative variables that are incorrectly excluded from the model. 
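The FPR and FNR can be computed directly from the sparsity patterns of the true and estimated coefficient vectors; a minimal sketch (the function name is ours):

```python
import numpy as np

def fpr_fnr(beta_true, beta_hat):
    """False positive and false negative rates of a sparse coefficient
    estimate, following the definitions (eq:fpr) and (eq:fnr)."""
    true_zero = beta_true == 0
    est_zero = beta_hat == 0
    fpr = np.sum(~est_zero & true_zero) / np.sum(true_zero)
    fnr = np.sum(est_zero & ~true_zero) / np.sum(~true_zero)
    return fpr, fnr
```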
A high FNR usually has a bad effect on the prediction performance, since relevant information is then missing from the model. These evaluation measures are calculated for the generated data in each of 100 simulation replications separately, and then summarized by boxplots. The smaller the value of these criteria, the better the performance of the method. ***Results for linear regression*:** The outcome of the simulations for linear regression is summarized in Figures \[fig:rmspe\_lin\]–\[fig:fnr\_lin\]. The left plots in these figures are for the simulations with low dimensional data, and the right plots for the high dimensional configuration. Figure \[fig:rmspe\_lin\] compares the RMSPE. All methods yield similar results in the low dimensional non-contaminated case, while in the high dimensional clean data case the elastic net method is clearly better. However, in the contaminated case, the elastic net leads to poor performance, which is also the case for sparse LTS. Enet-LTS even performs slightly better with contaminated data, and there is also a slight improvement visible in the reweighted version of this estimator. The PRECISION in Figure \[fig:bias\_lin\] shows essentially the same behavior. The FPR in Figure \[fig:fpr\_lin\], reflecting the proportion of noise variables incorrectly added to the models, shows a very low rate for sparse LTS. Here, the elastic net even improves in the contaminated setting, and the same is true for enet-LTS. A quite different picture is shown in Figure \[fig:fnr\_lin\] with the FNR. Sparse LTS and elastic net miss a high proportion of informative variables in the contaminated data scenario, which is the reason for their poor overall performance. Note that the outliers are placed in the informative variables, which seems to be particularly difficult for sparse LTS. ![Root mean squared prediction error (RMSPE) for linear regression. 
Left: low dimensional data set ($n=150$ and $p=60$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:rmspe_lin"}](rmspe_low_lin "fig:"){width="50.00000%"} ![Root mean squared prediction error (RMSPE) for linear regression. Left: low dimensional data set ($n=150$ and $p=60$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:rmspe_lin"}](rmspe_high_lin "fig:"){width="50.00000%"} ![Precision of the estimators (PRECISION) for linear regression. Left: low dimensional data set ($n=150$ and $p=60$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:bias_lin"}](bias_low_lin "fig:"){width="50.00000%"} ![Precision of the estimators (PRECISION) for linear regression. Left: low dimensional data set ($n=150$ and $p=60$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:bias_lin"}](bias_high_lin "fig:"){width="50.00000%"} ![False positive rate (FPR) for linear regression. Left: low dimensional data set ($n=150$ and $p=60$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:fpr_lin"}](fpr_low_lin "fig:"){width="50.00000%"} ![False positive rate (FPR) for linear regression. Left: low dimensional data set ($n=150$ and $p=60$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:fpr_lin"}](fpr_high_lin "fig:"){width="50.00000%"} ![False negative rate (FNR) for linear regression. Left: low dimensional data set ($n=150$ and $p=60$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:fnr_lin"}](fnr_low_lin "fig:"){width="50.00000%"} ![False negative rate (FNR) for linear regression. Left: low dimensional data set ($n=150$ and $p=60$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:fnr_lin"}](fnr_high_lin "fig:"){width="50.00000%"} ***Results for logistic regression*:** Figures \[fig:misclas\_log\]–\[fig:fnr\_log\] summarize the simulation results for logistic regression. 
As before, the left plots refer to the low dimensional case, and the right plots to the high dimensional data. Within one plot, the results for uncontaminated and contaminated data are directly compared. The misclassification rate in Figure \[fig:misclas\_log\] is around 10% for all methods, and it is slightly higher in the high dimensional situation. In case of contamination, however, this rate increases enormously for the classical elastic net method. The average deviances in Figure \[fig:mnll\_log\] show that the reweighting of the enet-LTS estimator clearly improves the raw estimate in both the low and high dimensional cases. It can also be seen that the elastic net is sensitive to the outliers. The precision of the parameter estimates in Figure \[fig:bias\_log\] reveals a remarkable improvement for the reweighted enet-LTS estimator compared to the raw version, while there is no clear effect of the contamination on the classical elastic net estimator. The FPR in Figure \[fig:fpr\_log\] shows a certain difference between uncontaminated and contaminated data for the elastic net, but otherwise the results are quite comparable. A different picture is visible from the FNR in Figure \[fig:fnr\_log\], where especially in the low dimensional case the elastic net is very sensitive to the outliers. Overall we conclude that enet-LTS performs very well in case of contamination, even though this was not clearly visible in the precision, and it also yields reasonable results for clean data. ![Misclassification rate for logistic regression. Left: low dimensional data set ($n=150$ and $p=50$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:misclas_log"}](misclas_low_log "fig:"){width="50.00000%"} ![Misclassification rate for logistic regression. 
Left: low dimensional data set ($n=150$ and $p=50$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:misclas_log"}](misclas_high_log "fig:"){width="50.00000%"} ![Mean of the negative log-likelihoods (MNLL) for logistic regression. Left: low dimensional data set ($n=150$ and $p=50$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:mnll_log"}](mnll_low_log "fig:"){width="50.00000%"} ![Mean of the negative log-likelihoods (MNLL) for logistic regression. Left: low dimensional data set ($n=150$ and $p=50$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:mnll_log"}](mnll_high_log "fig:"){width="50.00000%"} ![Precision of the estimators (PRECISION) for logistic regression. Left: low dimensional data set ($n=150$ and $p=50$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:bias_log"}](bias_low_log "fig:"){width="50.00000%"} ![Precision of the estimators (PRECISION) for logistic regression. Left: low dimensional data set ($n=150$ and $p=50$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:bias_log"}](bias_high_log "fig:"){width="50.00000%"} ![False positive rate (FPR) for logistic regression. Left: low dimensional data set ($n=150$ and $p=50$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:fpr_log"}](fpr_low_log "fig:"){width="50.00000%"} ![False positive rate (FPR) for logistic regression. Left: low dimensional data set ($n=150$ and $p=50$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:fpr_log"}](fpr_high_log "fig:"){width="50.00000%"} ![False negative rate (FNR) for logistic regression. Left: low dimensional data set ($n=150$ and $p=50$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:fnr_log"}](fnr_low_log "fig:"){width="50.00000%"} ![False negative rate (FNR) for logistic regression. 
Left: low dimensional data set ($n=150$ and $p=50$); right: high dimensional data set ($n=50$ and $p=100$).[]{data-label="fig:fnr_log"}](fnr_high_log "fig:"){width="50.00000%"} Applications to real data {#sec:applications} ========================= In this section we focus on applications of logistic regression, and compare the non-robust elastic net estimator with the robust enet-LTS method. Model selection is conducted as described in Section \[sec:tuningpara\]. Model evaluation is done with leave-one-out cross validation, i.e. each observation is used as test observation once, a model is estimated on the remaining observations, and the negative log-likelihood is calculated for the test observation. In these real data examples it is unknown whether outliers are present. In order to avoid an influence of potential outliers on the evaluation of a model, the 25% trimmed mean of the negative log-likelihoods is calculated to compare the models. Analysis of meteorite data -------------------------- The time-of-flight secondary ion mass spectrometer COSIMA [@kissel2007cosima] was sent to the comet Churyumov-Gerasimenko in the Rosetta space mission by the ESA to analyze the elemental composition of comet particles collected there [@schulz2015comet]. As reference measurements, samples of meteorites provided by the Natural History Museum Vienna were analyzed with the same type of spectrometer at the Max Planck Institute for Solar System Research in Göttingen. Here we apply our proposed method for logistic regression to the measurements from particles from the meteorites Ochansk and Renazzo, with 160 and 110 spectra, respectively. We restrict the mass range to 1-100mu, consider only mass windows where inorganic and organic ions can be expected, as described in [@Varmuza11], and keep only variables with positive median absolute deviation. This yields $p=1540$ variables. Further, the data is normalized to have constant row sum 100.
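The leave-one-out evaluation with a 25% trimmed mean of the negative log-likelihoods can be sketched in a few lines of Python. This is a minimal sketch, not the authors' R code: trimming away the *largest* 25% of the scores is one natural reading of the robust aggregation, and the constant-probability model at the end is a toy stand-in for a fitted classifier.

```python
import numpy as np

def neg_log_lik(y, p, eps=1e-12):
    """Negative log-likelihood of a single Bernoulli observation."""
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def trimmed_mean(scores, trim=0.25):
    """Mean of the smallest (1 - trim) fraction of the scores, so that a
    few badly predicted (potentially outlying) test points cannot
    dominate the model comparison."""
    scores = np.sort(np.asarray(scores, dtype=float))
    keep = int(np.ceil((1 - trim) * len(scores)))
    return scores[:keep].mean()

def loo_trimmed_nll(X, y, fit, trim=0.25):
    """Leave-one-out CV: fit on n-1 observations, score the left-out one,
    and aggregate the n scores by the trimmed mean."""
    n = len(y)
    scores = []
    for i in range(n):
        mask = np.arange(n) != i
        predict = fit(X[mask], y[mask])   # fit returns a function p(x)
        scores.append(neg_log_lik(y[i], predict(X[i])))
    return trimmed_mean(scores, trim)

# Toy model: always predict the training-class proportion.
X = np.zeros((8, 1))
y = np.array([0, 0, 0, 1, 1, 1, 1, 1])
fit_const = lambda Xtr, ytr: (lambda x: ytr.mean())
score = loo_trimmed_nll(X, y, fit_const)
```

Plugging a real penalized logistic fit in place of `fit_const` gives the evaluation scheme used to compare the models in the tables of this section.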
Table \[tab:meteorite\] summarizes the results for the comparison of the methods. The trimmed MNLL is much smaller for the enet-LTS estimator than for the classical elastic net method. The reweighting step improves the quality of the model further. The selected tuning parameter $\alpha_{opt}$ is much smaller for enet-LTS than for the classical elastic net method, which strongly influences the number of variables in the models. number of variables trimmed MNLL -------------- ------------------ -------------- elastic net 136 0.00866 enet-LTS raw 294 0.00030 enet-LTS 397 0.00014 : Renazzo and Ochansk: number of variables in the optimal models and trimmed mean negative log-likelihood from leave-one-out cross validation of the optimal models.[]{data-label="tab:meteorite"} Figure \[fig:meteorite\] compares the Pearson residuals of the elastic net model and the enet-LTS model. In the classical approach no abnormal observations can be detected. With the enet-LTS model, several observations are identified as outliers by the 1.25% and 98.75% quantiles of the standard normal distribution, which are marked as horizontal lines in Figure \[fig:meteorite\]. Closer investigation showed that these spectra lie on the outer border of the measurement area and were potentially measured on the target instead of the meteorite particle. Their multivariate structure for those variables which are included in the model is visualized in Figure \[fig:meteorite2\], where we can see that in some variables they have particularly large values compared to the majority of the group. ![Renazzo and Ochansk: the Pearson residuals of elastic net and the raw enet-LTS estimator. The horizontal lines indicate the 0.0125 and the 0.9875 quantiles of the standard normal distribution. []{data-label="fig:meteorite"}](preasonresid_ochansk_renazzo){width="60.00000%"} ![The index refers to the index of the variables included in the model of raw enet-LTS.
The detected outliers are visualized by grey lines, while the black lines represent the 5% and 95% quantiles of the non-outlying spectra for Ochansk (left) and Renazzo (right).[]{data-label="fig:meteorite2"}](mean_spec_outlier_ochansk_renazzo){width="\textwidth"} Analysis of the glass vessels data ---------------------------------- Archaeological glass vessels were analyzed with electron-probe X-ray micro-analysis to investigate the chemical concentrations of elements, in order to learn more about their origin and the trade market at the time of their making in the 16$^{th}$ and 17$^{th}$ century [@Janssens98]. Four different groups were identified, i.e. sodic, potassic, potasso-calcic and calcic glass vessels. For a demonstration of the performance of logistic regression, two groups are selected from the glass vessels data set. The first group is the potassic group with 15 spectra, the second group is the potasso-calcic group with 10 spectra. As in [@Filzmoser08] we remove variables with MAD equal to zero, resulting in $p=1905$ variables. The quality of the selected models is described in Table \[tab:glassvessel\]. The trimmed mean of the negative log-likelihoods is much smaller for enet-LTS than for elastic net. The reweighting step in enet-LTS hardly improves the model, but includes more variables. Again, both enet-LTS models include more variables than the elastic net model. In the elastic net model the penalty puts more emphasis on the $l_1$ term, i.e. $\alpha_{opt}=0.8$; for enet-LTS it is $\alpha_{opt}=0.05$. number of variables trimmed MNLL -------------- ------------------ -------------- elastic net 50 0.004290 enet-LTS raw 375 0.000345 enet-LTS 448 0.000338 : Glass vessel data: number of variables in the optimal models, and trimmed mean negative log-likelihood from leave-one-out cross validation of the optimal models.[]{data-label="tab:glassvessel"} Different behavior of the coefficient estimates can be expected.
Figure \[fig:glassvessel\] (left) shows the coefficients of the reweighted enet-LTS model corresponding to variables associated with potassium and calcium. The band associated with potassium has positive coefficients, i.e. high values of these variables correspond to the potassic group, which is coded with ones in the response. High values of the variables in the band associated with calcium will favor a classification to the potasso-calcic group (coded with zero), since the coefficients for these variables are negative. Further, it can be observed that neighboring variables, which are correlated, have similar coefficients. This is favored by the $l_2$ term in the elastic net penalty. In Figure \[fig:glassvessel\] (right) the coefficient estimates of the elastic net model are visualized. Fewer coefficients are non-zero than for enet-LTS, which is favored by the $l_1$ term in the elastic net penalty, but in the second block of non-zero coefficients neighboring variables receive very different coefficient estimates. ![Glass vessels: coefficient estimates of the reweighted enet-LTS model (left) and coefficient estimates of the elastic net model (right) for a selected variable range.[]{data-label="fig:glassvessel"}](specraV3_glassvessels2){width="\textwidth"} Computation time {#sec:comptime} ================ For our algorithm we employ the classical elastic net estimator as implemented in the R package $glmnet$ [@FriedmanR16], so it is natural to compare the computation time of our algorithm with this method. In the linear regression case we also compare with the sparse LTS estimator implemented in the R package $robustHD$ [@AlfonsR13]. For calculating the estimators we take a grid of five values for both tuning parameters $\alpha$ and $\lambda$. The data sets are simulated as in Section \[sec:simulations\] for a fixed number of observations $n=150$, but for a varying number of variables $p$ in a range from $50$ to $2000$.
In Figure \[fig:compt\_time\] (left: linear regression, right: logistic regression), the CPU time is reported in seconds, as an average over $5$ replications. In order to show the dependency on the number of observations $n$, we also simulated data sets for a fixed number of variables $p=100$ with a varying number of observations $n=50,100,\dots,500$. The results for linear and logistic regression are summarized in Figure \[fig:compt\_time\_n\]. The computations have been performed on an Intel Core 2 Q9650 @ $3.00$ GHz$\times$4 processor. ![CPU time in seconds (log-scale), averaged over 5 replications, for fixed $n=150$ and varying $p$; left: for linear regression; right: for logistic regression.[]{data-label="fig:compt_time"}](timeCPU_figures "fig:"){width="50.00000%"} ![CPU time in seconds (log-scale), averaged over 5 replications, for fixed $n=150$ and varying $p$; left: for linear regression; right: for logistic regression.[]{data-label="fig:compt_time"}](timeCPU_figures_log "fig:"){width="50.00000%"} ![CPU time in seconds (log-scale), averaged over 5 replications, for fixed $p=100$ and varying $n$; left: for linear regression; right: for logistic regression.[]{data-label="fig:compt_time_n"}](timeCPU_figures_n "fig:"){width="50.00000%"} ![CPU time in seconds (log-scale), averaged over 5 replications, for fixed $p=100$ and varying $n$; left: for linear regression; right: for logistic regression.[]{data-label="fig:compt_time_n"}](timeCPU_figures_log_n "fig:"){width="50.00000%"} Let us first consider the dependency of the computation time on the number of variables $p$ for linear regression, shown in the left plot of Figure \[fig:compt\_time\]. The computation time of sparse LTS increases strongly with the number of variables $p$, since it is based on the LARS algorithm, which has a computational complexity of $\mathcal{O}(p^3+np^2)$ [@Efron2004]. Even for the smallest number of variables considered, the computation time is considerably higher than for the other two methods.
The reason is that for each value of $\lambda$ and each step in the CV, the best subset is determined starting from 500 elemental subsets. In this setting at least 25,000 estimations of a Lasso model are needed, because for each cross validation step, at each of the 5 values of $\lambda$, two C-steps for 500 elemental subsets are carried out, and for the 10 subsamples with the lowest value of the objective function, further C-steps are performed. In contrast, the enet-LTS estimator starts with 500 elemental subsets only for one combination of $\alpha$ and $\lambda$, and takes the *warm start* strategy for subsequent combinations. This saves computation time, and there is indeed only a slight increase with $p$ visible when compared to the elastic net estimator. In total, approximately 1,700 elastic net models are estimated in this procedure, considerably fewer than for the sparse LTS approach. The computation time of sparse LTS also increases with $n$ due to the computational complexity of LARS, while the increase is only minor for enet-LTS, see Figure \[fig:compt\_time\_n\] (left). The results for the computation time in logistic regression are presented in Figures \[fig:compt\_time\] (right) and \[fig:compt\_time\_n\] (right). Here we can only compare the classical elastic net estimator and the proposed robustified enet-LTS version. The difference in computation time between elastic net and enet-LTS is again due to the many calls of the [glmnet]{} function within enet-LTS. The robust estimator is considerably slower in logistic regression than in linear regression for the same number of explanatory variables or observations. The reason is that more C-steps are necessary to identify the optimal subset for each parameter combination of $\alpha$ and $\lambda$. Conclusions {#sec:conclude} =========== In this paper, robust methods for linear and logistic regression using the elastic net penalty were introduced.
This penalty allows for variable selection, can deal with high multicollinearity among the variables, and is thus very appropriate in high dimensional sparse settings. Robustness has been achieved by trimming. This usually leads to a loss in efficiency, and therefore a reweighting step was introduced. Overall, the outlined algorithms for linear and logistic regression turned out to yield good performance in different simulation settings, but also with respect to computation time. In particular, it was shown that the idea of using “warm starts” for parameter tuning saves computation time while the precision is still preserved. A competing method for robust high dimensional linear regression, the sparse LTS estimator [@AlfonsR13], does not use this idea, and is thus much less attractive concerning computation time, especially in the case of many explanatory variables. We should also admit that for other simulation settings (not shown here), the difference between sparse LTS and the enet-LTS estimator is not so big, or even marginal, depending on the exact setting. For this reason, a further focus was on the robust high dimensional logistic regression case. We consider such a method as highly relevant, since in many modern applications in chemometrics or bio-informatics one is confronted with data from two groups, with the task to find a classification rule and to identify marker variables which support the rule. Outliers in the data are frequently a problem, and they can affect the identification of the marker variables as well as the performance of the classifier. For this reason it is desirable to treat outliers appropriately. It was shown in simulation studies as well as in data examples that, in the presence of outliers, the new proposal still works well, while its classical non-robust counterpart can lead to poor performance.
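The trimming idea underlying these algorithms, concentration (C-) steps on the subset of observations with smallest residuals, can be sketched as follows. This is an illustration only: plain least squares stands in for the penalized elastic net refit, and `c_step`/`lts_fit` are hypothetical names, not the authors' implementation. A warm start across the $(\alpha,\lambda)$ grid simply amounts to reusing the previous grid point's coefficients as `coef0`.

```python
import numpy as np

def c_step(X, y, coef, h):
    """One concentration step: keep the h observations with the smallest
    squared residuals under the current fit, then refit on that subset."""
    r2 = (y - X @ coef) ** 2
    subset = np.argsort(r2)[:h]
    new_coef, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
    return new_coef, subset

def lts_fit(X, y, coef0, h, n_steps=10):
    """Iterate C-steps; the trimmed objective is non-increasing, so a few
    steps from a good starting value suffice."""
    coef = coef0
    for _ in range(n_steps):
        new_coef, subset = c_step(X, y, coef, h)
        if np.allclose(new_coef, coef):
            break
        coef = new_coef
    return coef, subset

# Demo: 5 of 30 observations are shifted far away from the true model.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = X @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=30)
y[:5] += 50.0
coef0, *_ = np.linalg.lstsq(X, y, rcond=None)   # contaminated start
coef, subset = lts_fit(X, y, coef0, h=22)       # h = 75% of n
```

The C-steps discard the shifted observations, and the refit on the clean subset recovers coefficients close to the true $(2,-1)$.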
Note that in [@Park16] a logistic regression method with elastic net penalty is proposed that uses weights to reduce the influence of outliers. Their approach is to perform outlier detection in a PCA score space, and to derive weights from robust Mahalanobis distances in that space. These weights are then used to down-weight the negative log-likelihoods in the penalized objective function. However, it is not guaranteed that outliers can be detected in the PCA score space. An increasing number of uninformative variables will disguise observations deviating from the majority only in a few informative variables, but these hidden outlying observations can still distort the model. Therefore, model based outlier detection is highly recommended, as proposed in our algorithm. The algorithms to compute the proposed estimators are implemented in R functions, which are available upon request from the authors. The basis for the computation of the robust estimator is the R package $glmnet$ [@FriedmanR16]. This package also implements the cases of multinomial and Poisson regression, and a natural further extension of the algorithms introduced here could go in these directions. Further work will be devoted to the theoretical properties of the family of enet-LTS estimators. Acknowledgments {#acknowledgments .unnumbered} =============== This work is partly supported by the Austrian Science Fund (FWF), project P 26871-N20, and by grant TUBITAK 2214/A from the Scientific and Technological Research Council of Turkey (TUBITAK). The authors thank F. Brandstätter, L. Ferrière, and C. Koeberl (Natural History Museum Vienna, Austria) for providing meteorite samples, C. Engrand (Centre de Sciences Nucléaires et de Sciences de la Matière, Orsay, France) for sample preparation, and M. Hilchenbach (Max Planck Institute for Solar System Research, Göttingen, Germany) for TOF-SIMS measurements.
The authors are grateful to Kurt Varmuza for valuable feedback on the results of the meteorite data. References {#references .unnumbered} ========== A. Hoerl, R. Kennard, Ridge regression: biased estimation for nonorthogonal problems, Technometrics 12 (1970) 55–67. R. Tibshirani, Regression shrinkage and selection via the lasso, Journal of the Royal Statistical Society: Series B (Methodological) 58(1) (1996) 267–288. H. Zou, T. Hastie, Regularization and variable selection via the elastic net, Journal of the Royal Statistical Society: Series B 67(2) (2005) 301–320. P. Filzmoser, M. Gschwandtner, V. Todorov, Review of sparse methods in regression and classification with application to chemometrics, Journal of Chemometrics 26(3–4) (2012) 42–51. Y.-Z. Liang, O. Kvalheim, Robust methods for multivariate analysis – a tutorial review, Chemometrics and Intelligent Laboratory Systems 32(1) (1996) 1–10. Y.-Z. Liang, K.-T. Fang, Robust multivariate calibration algorithm based on least median of squares and sequential number theory optimization method, Analyst 121(8) (1996) 1025–1029. R. Maronna, R. Martin, V. Yohai, Robust Statistics: Theory and Methods, Wiley, New York, 2006. P. Rousseeuw, A. Leroy, Robust Regression and Outlier Detection, 2nd edition, John Wiley & Sons, New York, 2003. P. J. Rousseeuw, Least median of squares regression, Journal of the American Statistical Association 79(388) (1984) 871–880. P. J. Rousseeuw, K. Van Driessen, Computing [LTS]{} regression for large data sets, Data Mining and Knowledge Discovery 12(1) (2006) 29–45. A. Alfons, C. Croux, S. Gelper, Sparse least trimmed squares regression for analyzing high-dimensional large data sets, The Annals of Applied Statistics 7(1) (2013) 226–248. A. Alfons, [robustHD: Robust methods for high dimensional data](http://CRAN.R-project.org/package=robustHD), [R]{} package version 0.4.0 (2013). <http://CRAN.R-project.org/package=robustHD> J. Friedman, T. Hastie, R. Tibshirani, Regularization paths for generalized linear models via coordinate descent, Journal of Statistical Software 33(1) (2010) 1–22. A. Albert, J. Anderson, On the existence of maximum likelihood estimates in logistic regression models, Biometrika 71 (1984) 1–10. J. Friedman, T. Hastie, N. Simon, R. Tibshirani, [glmnet: Lasso and Elastic Net Regularized Generalized Linear Models](http://CRAN.R-project.org/package=glmnet), [R]{} package version 2.0-5 (2016). <http://CRAN.R-project.org/package=glmnet> C. Croux, G. Haesbroeck, Implementing the [B]{}ianco and [Y]{}ohai estimator for logistic regression, Computational Statistics and Data Analysis 44(1–2) (2003) 273–295. A. M. Bianco, V. J. Yohai, Robust estimation in the logistic regression model, in: H. Rieder (Ed.), Robust Statistics, Data Analysis, and Computer Intensive Methods, Lecture Notes in Statistics **109**, Springer, New York, 1996, pp. 17–34. R Core Team, [R: A Language and Environment for Statistical Computing](http://www.R-project.org), R Foundation for Statistical Computing, Vienna, Austria, [ISBN]{} 3-900051-07-0 (2017). <http://www.R-project.org> J. Kissel, K. Altwegg, B. Clark, L. Colangeli, H. Cottin, S. Czempiel, J. Eibl, C. Engrand, H. Fehringer, B. Feuerbacher, et al., [COSIMA]{} – high resolution time-of-flight secondary ion mass spectrometer for the analysis of cometary dust particles onboard [R]{}osetta, Space Science Reviews 128(1–4) (2007) 823–867. R. Schulz, M. Hilchenbach, Y. Langevin, J. Kissel, J. Silen, C. Briois, C. Engrand, K. Hornung, D. Baklouti, A. Bardyn, et al., Comet 67[P]{}/[C]{}huryumov-[G]{}erasimenko sheds dust coat accumulated over the past four years, Nature 518(7538) (2015) 216–218. K. Varmuza, C. Engrand, P. Filzmoser, M. Hilchenbach, J. Kissel, H. Krüger, J. Silén, M. Trieloff, Random projection for dimensionality reduction – applied to time-of-flight secondary ion mass spectrometry data, Analytica Chimica Acta 705(1) (2011) 48–55. K. Janssens, I. Deraedt, A. Freddy, J. Veekman, Composition of 15–17th century archaeological glass vessels excavated in [A]{}ntwerp, [B]{}elgium, Mikrochimica Acta 15 (1998) 253–267. P. Filzmoser, R. Maronna, M. Werner, Outlier identification in high dimensions, Computational Statistics & Data Analysis 52(3) (2008) 1694–1711. B. Efron, T. Hastie, I. Johnstone, R. Tibshirani, Least angle regression, The Annals of Statistics 32(2) (2004) 407–499. H. Park, Robust logistic regression modelling via the elastic net-type regularization and tuning parameter selection, Journal of Statistical Computation and Simulation 86(7) (2016) 1450–1461.
--- abstract: 'The factorization of hard and soft contributions to the hadronic decays of the $B_c$ meson at large recoils is explored in order to evaluate the decay rates into the S, P and D-wave charmonia associated with $\rho$ and $\pi$. The constraints on the applicability of the approach and the uncertainties of the numerical estimates are discussed. The mode with the $J/\psi$ in the final state is evaluated taking into account the cascade radiative electromagnetic decays of excited P-wave states, which enlarges the branching ratio by 20-25%.' --- [**Two-particle decays of $B_c$ meson\ into charmonium states**]{} [*V.V.Kiselev*]{}\ Russian State Research Center “Institute for High Energy Physics”,\ Protvino, Moscow Region, 142280 Russia\ and\ [*O.N.Pakhomova, V.A.Saleev*]{}\ Samara State University, Samara, 443011 Russia Introduction ============ After the first observation of the $B_c$ meson by the CDF Collaboration at FNAL [@cdf] in the semileptonic mode with the $J/\psi$ particle in the final state, $$B_c^+\to J/\psi l^+\nu_l, \label{1}$$ one expects a significant increase of statistics, by a factor of 20, for $B_c$ in the same mode after Run II. However, the uncertainties in the mass measurements are essential in the semileptonic channel, since the neutrino momentum is not detected directly. So, the two-particle decay mode $$B_c^+\to J/\psi \pi^+, \label{2}$$ is the most promising channel for such measurements. Therefore, we need a qualitative theoretical modelling in order to predict the basic characteristics of (\[2\]).
The dynamics of $B_c$ decays has been studied in various theoretical approaches: QCD sum rules [@QCDSRBc] and potential models [@PMBc] operated with the exclusive decays and gave estimates for both the branching ratios and the total lifetime summed over the exclusive modes, consistent with the estimates of the inclusive decays and of the lifetime in the framework of the Operator Product Expansion (OPE) combined with the machinery of effective theory [@OPEBc] in the form of nonrelativistic QCD (NRQCD) [@NRQCD]. However, the two-particle decays studied in the present paper, $$B_c^+\to c\bar c [{\scriptstyle ^{2s+1}L_J}] \pi^+(\rho^+), \label{2a}$$ are characterized by a rather large recoil momentum of the charmonium $c\bar c [{\scriptstyle ^{2s+1}L_J}]$, where $s$ denotes the sum of quark and antiquark spins, $L$ is the orbital quantum number running from 0 to 2, and $J$ is the total spin of the charmonium. Indeed, in the framework of potential models the approximation of heavy-quarkonium wave-function overlapping for the calculation of hadronic form factors can be used in the region where those wave-functions are not exponentially small, i.e. if the amplitude under consideration is soft enough and the nonperturbative modelling in the form of wave functions is reliable. At large recoils, however, the behaviour of the vertices for the quarks entering the bound states is significantly modified by hard gluon corrections, and the exponential decrease of the quark-meson form factors is replaced by a power-like one. In that case one can factorize the hard and soft amplitudes [@Brodsky], which was recently explored in the description of two-particle decays of $B_c$ [@HSBc] as well as of heavy-light mesons [@Anisovich]. While in [@HSBc] the decays into the S and P-wave charmonia were considered, in the present paper we develop the same technique for a more accurate analysis including the D-wave states of $c\bar c$, too.
So, the approach is based on the fact that in a heavy quarkonium we can neglect the binding energy $\epsilon$ in comparison with the heavy quark mass, since by order of magnitude it is determined by the kinetic energy of the heavy quark and antiquark inside the meson, $\epsilon \sim m_Q \cdot v^2$, where $v$ is the relative velocity of the quarks, $v\ll 1$, so that $\epsilon \ll m_Q$. Moreover, the soft region of the heavy-quarkonium wave-function is determined by the meson size, which is about $r\sim 1/p_Q$ with the quark momentum $p_Q\sim m_Q\cdot v$. Then we can apply the nonrelativistic wave functions in the amplitudes where the virtualities $\mu^2$ are less than $(m_Q\cdot v)^2$, while at large recoils in the decays the hard factors of the amplitudes, with virtualities greater than $(m_Q\cdot v)^2$, should be described taking into account the hard gluon exchange between the constituents of the heavy quarkonia. Perturbative QCD can be used for the hard amplitudes if $\mu^2 \gg \Lambda_{QCD}^2$. We check these conditions of hard-soft factorization and estimate the uncertainties of the numerical results by varying the charmed quark mass within the limits constrained by the excitation energy of the P and D-levels with respect to the S-one. In addition, we factorize the matrix element of the light quark current by the vacuum insertion. This approach and the limits of its applicability are discussed in [@fact]. Then, we deal with the hard approximation for the four heavy-quark operator, omitting possible renormalization effects. We take into account the perturbative corrections to the effective nonleptonic weak Lagrangian. The paper is organized as follows: in Section 2 we present the basic model assumptions and the general formalism, while the analytic expressions for the widths of $B_c$ decays into the charmonium states with the pion are given in Section 3, where we also describe the input parameters and present the numerical results.
Then we take into account the radiative electromagnetic decays of P-levels in order to estimate the summed $J/\psi \pi$ yield in the $B_c$ decays. In Conclusion we discuss the obtained results and their uncertainties. Bulky analytical expressions for the decays with $\rho$ are placed in the Appendix. Hard-soft factorization in $B_c$ decays ======================================= Neglecting the binding energy, in the soft amplitude we put the mass of $B_c$ meson $m_1$ equal to the sum of b-quark and c-quark masses $m_b+m_c$, and the mass of ${c\bar c}$ state $m_2$ equal to $2m_c$. Then the heavy quark and antiquark inside the bound state move with the same 4-velocity, so that in the accepted nonrelativistic approximation we can write down $$\begin{aligned} v_1 &=& \frac{p_1}{m_1}=\frac{p_{\bar b}}{m_b}=\frac{p_c}{m_c}, \label{s1}\\ v_2 &=& \frac{p_2}{m_2}=\frac{p_{\bar c}}{m_c}=\frac{p^{'}_c}{m_c}, \label{s2}\end{aligned}$$ where $p_{1,2}$ are the momenta of decaying and recoil heavy quarkonia, respectively, and $p_Q$ are the momenta of quarks composing the heavy quarkonia. However, at large recoils specific for the decays of $B_c^+\to c\bar c [{\scriptstyle ^{2s+1}L_J}] \pi^+(\rho^+)$, the conditions of (\[s1\]) and (\[s2\]) could be valid only if we take into account the hard gluon correction with a large momentum transfer $$|k^2|=\frac{m_2}{4m_1}((m_1-m_2)^2-m_3^2)\gg\Lambda_{QCD}^2,$$ where $m_3$ is the mass of $\pi$ or $\rho$. At the tree level as well as with soft gluon corrections the prescription of (\[s1\]) and (\[s2\]) would give zero matrix element, since a little smearing by soft gluons responsible for the formation of wave-function results in the exponential suppression of overlapping at large recoils. Numerically, the characteristic virtuality in the hard amplitude is equal to $1.0-1.2$ GeV$^2$ for the charmonium in the final state with $m_2=3.0-3.5$ GeV. We see that such virtualities are large enough for quite a reliable use of perturbative QCD. 
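As a rough arithmetic check of the quoted virtuality range (an illustration assuming, say, $m_b\approx 4.6$ GeV and $m_c\approx 1.5$ GeV, so that $m_1\approx 6.1$ GeV, $m_2\approx 3.0$ GeV and $m_3=m_\pi\approx 0.14$ GeV):

```latex
|k^2| \;=\; \frac{m_2}{4 m_1}\left((m_1-m_2)^2-m_3^2\right)
      \;\approx\; \frac{3.0}{4\times 6.1}\,\bigl(3.1^2-0.14^2\bigr)
      \;\approx\; 0.123\times 9.59 \;\approx\; 1.2\ \mathrm{GeV}^2 ,
```

which is indeed of the quoted order and well above $\Lambda_{QCD}^2\sim 0.04$ GeV$^2$ for $\Lambda_{QCD}\approx 0.2$ GeV, so the perturbative treatment of the hard gluon exchange is justified.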
Moreover, a characteristic relative momentum of heavy quarks inside the bound states under consideration is about $p\sim 0.6-0.7$ GeV, and the ratio $p^2/k^2 \sim 0.3-0.4$ is quite a small parameter for the expansion. Thus, the kinematical conditions in the decays of $B_c^+\to c\bar c [{\scriptstyle ^{2s+1}L_J}] \pi^+(\rho^+)$ favor the application of hard-soft approximation with the accuracy about 30%. Another source of uncertainty is connected with neglecting the binding energy, and it is more essential. We will further test it numerically by the variation of charmed quark mass from 1.5 to 1.7 GeV. A general covariant formalism for calculating the production and decay rates of S-wave and P-wave heavy quarkonium in the nonrelativistic expansion was developed in [@n4]. In this approach, the amplitude for the decay of bound state ($\bar b c$) possessing the momentum $p_1$, total spin $J_1$, orbital momentum $L_1$ and summed spin $S_1$ into the bound-state ($\bar c c$) possessing the momentum $p_2$, total spin $J_2$, orbital momentum $L_2$ and summed spin $S_2$ is given by the following: $$\begin{aligned} A(p_1,p_2) &=& \int \frac{d{\bf q}_1}{(2\pi)^3}\sum_{L_{1z}S_{1z}} \Psi_{L_{1z}S_{1z}}({\bf q}_1) \langle L_1L_{1z};S_1S_{1z}|J_1J_{1z}\rangle\times \\ &&\int \frac{d{\bf q}_2}{(2\pi)^3}\sum_{L_{2z}S_{2z}} \Psi_{L_{2z}S_{2z}}({\bf q}_2) \langle L_2L_{2z};S_2S_{2z}|J_2J_{2z}\rangle M(p_1,p_2,q_1,q_2),\nonumber\end{aligned}$$ where $M(p_1,p_2,q_1,q_2)$ is the hard amplitude of process with truncated fermion legs entering the initial and final mesons as described by the diagrams in Fig. \[fig1\]. Here $\Psi_{L_zS_z}({\bf q})$ are the nonrelativistic wave functions for the heavy quarkonia. 
[Fig. \[fig1\]: the two diagrams contributing to the hard amplitude of $B_c^+\to c\bar c\, \pi^+(\rho^+)$, with the $\pi^+(\rho^+)$ emitted from the weak current, the hard gluon of momentum $k$ exchanged between the quark lines, and internal quark momenta $u_1$ and $u_2$, between the initial quarkonium of momentum $p_1$ and the recoil quarkonium of momentum $p_2$.] With the accuracy up to second order in relative momenta $q_1$ and $q_2$, the operators $\Gamma_{SS_z}(p,q)$ projecting the quark-antiquark pairs onto the bound states with fixed quantum numbers can be written in the form $$\Gamma_{S_1S_{1z}}(p_1,q_1)=\frac{\sqrt{m_1}}{4m_cm_b} (\frac{m_c}{m_1}\slashchar p_1-\slashchar q_1+m_c)\slashchar A_1 (\frac{m_b}{m_1}\slashchar p_1+\slashchar q_1-m_b),\label{8l}$$ where $\slashchar A_1=\gamma_5$ for the pseudoscalar initial state $S_1=0$ and $\slashchar A_1=\slashchar \varepsilon(S_{1z})$ for the vector one $S_1=1$, and $$\Gamma^\dagger_{S_2S_{2z}}(p_2,q_2)=\frac{\sqrt{m_2}}{4m_c^2} (\frac{m_c}{m_2}\slashchar p_2+\slashchar q_2-m_c)\slashchar A_2 (\frac{m_c}{m_2}\slashchar p_2-\slashchar q_2+m_c),\label{9l}$$ where $\slashchar A_2=\gamma_5$ for the summed spin $S_2=0$ of recoil quarkonium, and $\slashchar A_2=\slashchar \varepsilon(S_{2z})$ for $S_2=1$.
Here $\varepsilon(S_{1z,2z})$ denotes the polarization of vector-spin state. The color factor $\delta^{ij}/\sqrt{3}$ for the singlet states should be also introduced in the quark-meson vertices. Making use of projection operators in (\[8l\]) and (\[9l\]) we write down the hard amplitude $M(p_1,p_2,q_1,q_2)$ in the following way: $$M(p_1,p_2,q_1,q_2)=\mbox{Tr}\left [ \Gamma^\dagger (p_2,q_2)\gamma^{\mu}\Gamma(p_1,q_1)\cal{O}_{\mu}\right ],$$ where for the decay with the $\pi$ meson in the final state we have $$\begin{aligned} {\cal O}_{\mu}&=&{\cal O}^{[1]}_{\mu}+{\cal O}^{[2]}_{\mu},\\ {\cal O}^{[1]}_{\mu}&=&\frac{G_F}{\sqrt{2}}\frac{16\pi\alpha_s}{3}V_{bc}f_{\pi}a_1 \slashchar p_3 (1-\gamma_5) \left (\frac{-\slashchar u_1+m_c}{x_1^2-m_c^2} \right )\frac{\gamma_{\mu}}{k^2},\\ {\cal O}^{[2]}_{\mu}&=&\frac{G_F}{\sqrt{2}}\frac{16\pi\alpha_s}{3}V_{bc}f_{\pi}a_1 \frac{\gamma_{\mu}}{k^2}\left (\frac{-\slashchar u_2+m_b}{x_2^2-m_b^2} \right )\slashchar p_3(1-\gamma_5),\end{aligned}$$ with the notations $$\slashchar u_1=\frac{m_b}{m_1}\slashchar p_1+\slashchar q_1-\slashchar p_3, \quad \slashchar u_2=\frac{m_c}{m_2}\slashchar p_2+\slashchar q_2+\slashchar p_3, \quad \slashchar k=\frac{m_c}{m_2}\slashchar p_2-\frac{m_c}{m_1}\slashchar p_1+\slashchar q_1- \slashchar q_2.$$ The factor $a_1$ comes from the hard gluon corrections to the four-fermion effective weak Lagrangian. 
Expanding $M(p_1,p_2,q_1,q_2)$ in the small parameters $q_1/m_1$ and $q_2/m_2$ around $q_1=q_2=0$, we get $$\begin{aligned} M(p_1,p_2,q_1,q_2) &=& M(p_1,p_2,0,0)+ q_{1\alpha}\frac{\partial M}{\partial q_{1\alpha}}|_{q_{1,2}=0}+ q_{2\alpha}\frac{\partial M}{\partial q_{2\alpha}}|_{q_{1,2}=0}+\nonumber\\ &+&\frac{1}{2}q_{2\alpha}q_{2\beta} \frac{\partial^2 M}{\partial q_{2\alpha} \partial q_{2\beta}}|_{q_{1,2}=0}+\ldots\end{aligned}$$ The successive terms correspond to definite orbital quantum numbers: the first, with $L_1=L_2=0$, to the transition between the S-wave levels; the second and third, with $L_1=1$, $L_2=0$ and $L_1=0$, $L_2=1$, to the transitions between S- and P-wave states; and the fourth, with $L_1=0$ and $L_2=2$, to the transition between the S-wave initial quarkonium and the D-wave recoil meson. Thus, for the various orbital states, the soft factors in the amplitudes $A(p_1,p_2)$ are expressed in terms of the quarkonium radial wave-functions in the following way: $$\begin{aligned} \int \frac{ d^3 {\bf q}}{(2\pi)^3} \; \Psi_{00} ({\bf q} ) &=& \frac{R(0)}{\sqrt{4\pi}} \; , \nonumber \\ \int \frac{ d^3 {\bf q}}{(2\pi)^3} \; \Psi_{1L_Z} ({\bf q} ) q_\alpha &=& - i \sqrt{\frac{3}{4\pi}} R'(0) \varepsilon_\alpha(p,L_Z) \; , \nonumber \\ \int \frac{d^3{\bf q}}{(2\pi)^3} \; \Psi_{2L_Z} ({\bf q} ) q_\alpha q_\beta &=& \sqrt{\frac{15}{8\pi}} R''(0) \varepsilon_{\alpha\beta}(p,L_Z) \; ,\end{aligned}$$ where $\varepsilon_{\alpha}(p,L_Z)$ is the polarization vector of a spin-1 particle, and $\varepsilon_{\alpha\beta}$ is the symmetric, traceless, and transverse rank-2 polarization tensor for the spin-2 particle. The above wave-functions are represented as products of radial and angular functions: $\Psi_{L L_Z}({\bf r}) = Y_{L L_Z}(\theta,\phi)\, R_{L}(r)$.
For the $^1P_1$ charmonium state we get $$\sum_{L_{2Z}}\varepsilon^{\alpha}(p_2,L_{2Z})\langle 1L_{2Z},00\vert 1,J_{2Z}\rangle =\varepsilon^{\alpha}(p_2,J_{2Z}).$$ For $^3P_J(J=0,1,2)$ states the summation over the quark spins and orbital momentum projections results in $$\begin{aligned} \sum_{S_{2Z},L_{2Z}}\varepsilon^{\alpha}(p_2,L_{2Z})\langle 1L_{2Z},1S_{2Z} \vert J_2,J_{2Z}\rangle \varepsilon^{\rho}(S_{2Z})%=\nonumber\\ =\left \{ \begin{array}{lr} \frac{1}{\sqrt{3}}(g^{\alpha\rho}-\frac{p_2^{\alpha}p_2^{\rho}} {m_2^2}),& \mbox{ }J_2=0,\\[2mm] \frac{i}{\sqrt{2}m_2}\varepsilon^{\alpha\rho\mu\nu}p_{2\mu} \varepsilon_{\nu}(p_2,J_{2Z}),&\mbox{ }J_{2}=1,\\[2mm] \varepsilon^{\rho\alpha}(p_2,J_{2Z}),&\mbox{ } J_2=2. \end{array} \right.\end{aligned}$$ For $c\bar c[{\scriptstyle ^1D_2}]$ one has $$\sum_{L_{2Z}}\varepsilon^{\alpha\beta}(p_2,L_{2Z})\langle 2L_{2Z},00\vert 2,J_{2Z}\rangle =\varepsilon^{\alpha\beta}(p_2,J_{2Z}).$$ For $c\bar c[{\scriptstyle ^3D_J(J=1,2,3)}]$ states we get $$\begin{aligned} &&\sum_{S_{2Z},L_{2Z}}\varepsilon^{\alpha\beta}(p_2,L_{2Z})\langle 2L_{2Z},1S_{2Z} \vert J_2,J_{2Z}\rangle \varepsilon^{\rho}(S_{2Z})=\nonumber\\ &\quad &=\left \{ \begin{array}{lr} - \sqrt{\frac{3}{20}} \; \left( \frac{2}{3} {\cal P}_{\alpha\beta} \; \varepsilon_\rho (p_2,J_{2Z}) -{\cal P}_{\alpha\rho}\;\varepsilon_\beta(p_2,J_{2Z}) - {\cal P}_{\beta\rho} \;\varepsilon_\alpha(p_2,J_{2Z}) \right), & \mbox{ }J_2=1,\\[2mm] \frac{i}{M\sqrt{6}}\left( \varepsilon_{\alpha\sigma}(p_2,J_{2Z}) \varepsilon_{\tau\beta\rho \sigma'}\; p_2^\tau g^{\sigma\sigma'} + \varepsilon_{\beta\sigma}(p_2,J_{2Z}) \varepsilon_{\tau\alpha\rho\sigma'}\; p_2^\tau g^{\sigma\sigma'} \right),& \mbox{ } J_2=2,\\[2mm] \varepsilon_{\alpha\beta\rho} (p_2,J_{2Z}), & \mbox{ } J_2=3, \end{array} \right.\end{aligned}$$ where $${\cal P}^{\alpha\beta} = -g^{\alpha\beta} + \frac{p_2^\alpha p_2^\beta}{m_2^2}\;,$$ and $\epsilon_{\alpha\beta\rho}(p_2,J_{2Z})$ is the symmetric, traceless, and transverse spin-3 
polarization tensor. The summation over the polarizations gives the following standard expressions [@n5]: $$\begin{aligned} \label{j=1} \sum_{J_{2Z}=-1}^1 \varepsilon_\alpha(p_2,J_{2Z}) \varepsilon_\beta^*(p_2,J_{2Z}) &=& {\cal P}_{\alpha\beta} , \\ \label{j=2} \sum_{J_{2Z}=-2}^2 \varepsilon_{\alpha\beta}(p_2,J_{2Z}) \varepsilon_{\rho\sigma}^*(p_2,J_{2Z}) &=& \frac{1}{2}\left( {\cal P}_{\alpha\rho} {\cal P}_{\beta\sigma} + {\cal P}_{\alpha\sigma} {\cal P}_{\beta\rho} \right ) -\frac{1}{3} {\cal P}_{\alpha\beta} {\cal P}_{\rho\sigma} \; , \\ \label{j=3} \sum_{J_{2Z}=-3}^3 \varepsilon_{\alpha\beta\gamma}(p_2,J_{2Z}) \varepsilon_{\rho\sigma\eta}^*(p_2,J_{2Z}) &=& \frac{1}{6} \biggl({\cal P}_{\alpha\rho} {\cal P}_{\beta\sigma} {\cal P}_{\gamma\eta} + {\cal P}_{\alpha\rho} {\cal P}_{\beta\eta} {\cal P}_{\gamma\sigma} + {\cal P}_{\alpha\sigma} {\cal P}_{\beta\rho} {\cal P}_{\gamma\eta} \nonumber \\ && + {\cal P}_{\alpha\sigma} {\cal P}_{\beta\eta} {\cal P}_{\gamma\rho} + {\cal P}_{\alpha\eta} {\cal P}_{\beta\sigma} {\cal P}_{\gamma\rho} + {\cal P}_{\alpha\eta} {\cal P}_{\beta\rho} {\cal P}_{\gamma\sigma} \biggr ) \nonumber \\ && -\frac{1}{15} \biggl({\cal P}_{\alpha\beta} {\cal P}_{\gamma\eta} {\cal P}_{\rho\sigma} + {\cal P}_{\alpha\beta} {\cal P}_{\gamma\sigma} {\cal P}_{\rho\eta} + {\cal P}_{\alpha\beta} {\cal P}_{\gamma\rho} {\cal P}_{\sigma\eta} \nonumber \\ && + {\cal P}_{\alpha\gamma} {\cal P}_{\beta\eta} {\cal P}_{\rho\sigma} + {\cal P}_{\alpha\gamma} {\cal P}_{\beta\sigma} {\cal P}_{\rho\eta} + {\cal P}_{\alpha\gamma} {\cal P}_{\beta\rho} {\cal P}_{\sigma\eta} \nonumber \\ && + {\cal P}_{\beta\gamma} {\cal P}_{\alpha\eta} {\cal P}_{\rho\sigma} + {\cal P}_{\beta\gamma} {\cal P}_{\alpha\sigma} {\cal P}_{\rho\eta} + {\cal P}_{\beta\gamma} {\cal P}_{\alpha\rho} {\cal P}_{\sigma\eta} \biggr).
\nonumber\end{aligned}$$ At this point the procedure for the calculation of the squared matrix elements is completely defined; we carry it out with the analytic computation system MATHEMATICA.

Results
=======

Neglecting the $\pi$ meson mass, we obtain the following analytical formulae for the widths of $B_c^+\to c\bar c [{\scriptstyle ^{2s+1}L_J}] \pi^+(\rho^+)$: $$\begin{aligned} &&\Gamma (B_c\to \psi\pi)=\frac{128}{9\pi}F\frac{|R_2(0)|^2}{m_2^3} \frac{(1+x)^3}{(1-x)^5},\\ &&\Gamma (B_c\to \eta_c\pi)=\frac{32}{9\pi}F\frac{|R_2(0)|^2}{m_2^3} \frac{(1+x)^3}{(1-x)^5}(x^2-2x+3)^2,\\ &&\Gamma (B_c\to h_c\pi)=\frac{128}{3\pi}F\frac{|R'_2(0)|^2}{m_2^5} \frac{(1+x)^3}{(1-x)^7}(x^3-2x^2+3x+4)^2,\\ &&\Gamma (B_c\to \chi_{c0}\pi)=\frac{128}{\pi}F\frac{|R'_2(0)|^2}{m_2^5} \frac{(1+x)^3}{(1-x)^7}(x^2-2x+3)^2,\\ &&\Gamma (B_c\to \chi_{c1}\pi)=\frac{256}{3\pi}F\frac{|R'_2(0)|^2}{m_2^5} \frac{(1+x)^3}{(1-x)^3},\\ &&\Gamma (B_c\to \chi_{c2}\pi)=\frac{256}{\pi}F\frac{|R'_2(0)|^2}{m_2^5} \frac{(1+x)^5}{(1-x)^7},\\ &&\Gamma (B_c\to {^1D_2}\pi)=\frac{2560}{9\pi}F\frac{|R''_2(0)|^2}{m_2^7} \frac{(1+x)^5}{(1-x)^9}(x^3-3x^2+5x+5)^2,\\ &&\Gamma (B_c\to {^3D_1}\pi)=\frac{256}{9\pi}F\frac{|R''_2(0)|^2}{m_2^7} \frac{(1+x)^3}{(1-x)^9}(5x^3-22x^2+41x+8)^2,\\ &&\Gamma (B_c\to {^3D_2}\pi)=\frac{5120}{3\pi}F\frac{|R''_2(0)|^2}{m_2^7} \frac{(1+x)^5}{(1-x)^5},\\ &&\Gamma (B_c\to {^3D_3}\pi)=\frac{8192}{3\pi}F\frac{|R''_2(0)|^2}{m_2^7} \frac{(1+x)^7}{(1-x)^9},\end{aligned}$$ where $$x=\frac{m_2}{m_1}, \mbox{ and } F=\alpha_s^2G_F^2V_{bc}^2f_{\pi}^2|R_1(0)|^2a_1^2.$$ (In the $^1D_2$ width the polynomial factor is $(x^3-3x^2+5x+5)^2$, consistent with the Appendix.) In the numerical estimates we use the following set of parameters:

  $|R_{1}(0)|^2$ = $1.27$ GeV$^3$,    $m_c$ = $1.5$ GeV,    $m_{\pi}$ = $0.14$ GeV,
  $|R_{2}(0)|^2$ = $0.94$ GeV$^3$,    $m_b$ = $4.8$ GeV,    $f_{\pi}$ = $0.13$ GeV,
  $|R'_{2}(0)|^2$ = $0.08$ GeV$^5$,    $m_1$ = $6.3$ GeV,    $V_{bc}$ = $0.04$,
  $|R''_{2}(0)|^2$ = $0.015$ GeV$^7$,    $m_2$ = $3.0$ GeV,    $\alpha_s$ = $0.33$.

The values of the wave-function parameters are taken from [@EichQuigg]. Then we get the estimate of the direct $J/\psi$ yield associated with the pion: $$\Gamma (B_c^+\to J/\psi \pi^+)=6.455 \times 10^{-15}a_1^2\mbox{ GeV}.$$ The decay widths into the different charmonium states and the $\pi$ meson are presented in Table \[tab1\] as fractions of the decay width for $B_c^+\to J/\psi \pi^+$, while the absolute values, which depend on the choice of $a_1$, are given in Table \[tab2\].

  ${c\bar c}$   $^{2S+1}L_J$   $\displaystyle{\Gamma (B_c\to {c\bar c}[{\scriptstyle ^{2S+1}L_J}]\pi)\over\Gamma (B_c\to J/\psi\pi)}$   $\displaystyle{\Gamma (B_c\to {c\bar c}[{\scriptstyle ^{2S+1}L_J}]\rho)\over\Gamma (B_c\to {c\bar c}[{\scriptstyle ^{2S+1}L_J}]\pi)}$
  ------------- -------------- ------- ------
  $J/\psi$      $^3S_1$        1.0     3.9
  $\eta_c$      $^1S_0$        1.3     3.2
  $h_c$         $^1P_1$        2.7     3.4
  $\chi_{c0}$   $^3P_0$        1.6     3.5
  $\chi_{c1}$   $^3P_1$        0.016   51
  $\chi_{c2}$   $^3P_2$        1.4     4.0
                $^1D_2$        5.4     3.6
                $^3D_1$        2.8     3.8
                $^3D_2$        0.053   31
                $^3D_3$        2.4     4.2

  : The yields of charmonium states in hadronic two-particle decays of the $B_c$ meson, represented as ratios.[]{data-label="tab1"}

  ${c\bar c}$   $^{2S+1}L_J$   $\Gamma(\pi)$, this work   $\Gamma(\pi)$, [@chache; @10]    $\Gamma(\rho)$, this work   $\Gamma(\rho)$, [@chache; @10]
  ------------- -------------- -------------- ------------------------- -------------- -------------------------
  $\eta_c$      ${^1S_0}$      8.4 $a_1^2$    2.1 $a_1^2$ [@10]         27 $a_1^2$     5.5 $a_1^2$ [@10]
  $J/\psi$      ${^3S_1}$      6.5 $a_1^2$    2.0 $a_1^2$ [@10]         26 $a_1^2$     6.0 $a_1^2$ [@10]
  $h_c$         ${^1P_1}$      18 $a_1^2$     0.57 $a_1^2$ [@chache]    60 $a_1^2$     1.4 $a_1^2$ [@chache]
  $\chi_{c0}$   ${^3P_0}$      11 $a_1^2$     0.32 $a_1^2$ [@chache]    37 $a_1^2$     0.81 $a_1^2$ [@chache]
  $\chi_{c1}$   ${^3P_1}$      0.10 $a_1^2$   0.082 $a_1^2$ [@chache]   5.2 $a_1^2$    0.33 $a_1^2$ [@chache]
  $\chi_{c2}$   ${^3P_2}$      8.9 $a_1^2$    0.28 $a_1^2$ [@chache]    36 $a_1^2$     0.58 $a_1^2$ [@chache]
                ${^1D_2}$      35 $a_1^2$     —                         124 $a_1^2$    —
                ${^3D_1}$      19 $a_1^2$     —                         70 $a_1^2$     —
                ${^3D_2}$      0.34 $a_1^2$   —                         11 $a_1^2$     —
                ${^3D_3}$      16 $a_1^2$     —                         65 $a_1^2$     —

  : The widths (in units of $10^{-15}$ GeV) of the $B_c$ meson decays into charmonium states in hadronic two-particle channels, calculated in the hard-soft factorization, in comparison with the results of the wave-function overlapping technique [@chache; @10].[]{data-label="tab2"}

An additional source of $J/\psi$ mesons is the two-particle decay of the $B_c$ meson with a $\rho$ in the final state: $B_c^+\to {c\bar c}[{\scriptstyle ^{2S+1}L_J}]\rho^+$. The decay widths $\Gamma (B_c^+\to {c\bar c}[{\scriptstyle ^{2S+1}L_J}]\rho^+)$ can be calculated in the same way as the widths $\Gamma (B_c^+\to {c\bar c}[{\scriptstyle ^{2S+1}L_J}]\pi^+)$: we use the factorization of the light-meson current, so that in the decays with the $\rho$ we make the substitution $f_{\pi} p_3^\mu\to m_{\rho} f_{\rho} \varepsilon_3^\mu$, where $\varepsilon_3^{\mu}$ is the $\rho$ meson polarization. Taking into account the numerical values $f_{\rho}=0.22$ GeV and $m_{\rho}=0.77$ GeV, we obtain the decay widths $\Gamma (B_c^+\to {c\bar c}[{\scriptstyle ^{2S+1}L_J}]\rho^+)$ presented in Tables \[tab1\], \[tab2\]. The cumbersome analytical expressions for the decay widths of $B_c^+\to {c\bar c}[{\scriptstyle ^{2S+1}L_J}]\rho^+$ are given in the Appendix.
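The ratios collected in Table \[tab1\] can be cross-checked directly from the analytical width formulae, since the common factor $F$ (and the overall $1/\pi$) cancels in every ratio. A minimal numerical sketch, using the parameter set quoted above (the dictionary keys are our own shorthand for the states):

```python
# Ratios Gamma(B_c -> ccbar pi) / Gamma(B_c -> J/psi pi) from the
# analytic width formulae; the overall factor F/pi cancels in the ratios.
m1, m2 = 6.3, 3.0                    # B_c and charmonium masses, GeV
R2, dR2, ddR2 = 0.94, 0.08, 0.015    # |R_2(0)|^2, |R'_2(0)|^2, |R''_2(0)|^2
x = m2 / m1

S, P, D = R2 / m2**3, dR2 / m2**5, ddR2 / m2**7
width = {
    'J/psi' : 128/9 * S * (1+x)**3 / (1-x)**5,
    'eta_c' : 32/9  * S * (1+x)**3 / (1-x)**5 * (x**2 - 2*x + 3)**2,
    'h_c'   : 128/3 * P * (1+x)**3 / (1-x)**7 * (x**3 - 2*x**2 + 3*x + 4)**2,
    'chi_c0': 128   * P * (1+x)**3 / (1-x)**7 * (x**2 - 2*x + 3)**2,
    'chi_c1': 256/3 * P * (1+x)**3 / (1-x)**3,
    'chi_c2': 256   * P * (1+x)**5 / (1-x)**7,
    '1D2'   : 2560/9 * D * (1+x)**5 / (1-x)**9 * (x**3 - 3*x**2 + 5*x + 5)**2,
    '3D1'   : 256/9  * D * (1+x)**3 / (1-x)**9 * (5*x**3 - 22*x**2 + 41*x + 8)**2,
    '3D2'   : 5120/3 * D * (1+x)**5 / (1-x)**5,
    '3D3'   : 8192/3 * D * (1+x)**7 / (1-x)**9,
}
for state, w in width.items():
    print(f"{state:7s} {w / width['J/psi']:8.3f}")
```

The printed ratios reproduce the first numerical column of Table \[tab1\] to within the table's rounding (a few percent).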
For comparison with the results obtained in the ordinary technique of wave-function overlapping, in Table \[tab2\] we also show the estimates recently evaluated in [@chache; @10]. The analysis of two-particle hadronic $B_c$ decays with charmed S-wave recoil mesons in the final state was also performed in [@Vary], where the estimates are similar to those of [@10], though slightly smaller. We see that our estimates, with the charmed-quark mass fixed at $m_c=1.5$ GeV, are significantly greater than the values calculated in [@chache; @10]. Indeed, for the S-wave charmonium the increase due to the nonexponential fall-off of the wave-functions is about a factor of 4 in the squared matrix element, while for the P-waves this factor reaches one order of magnitude. The reason for this increase is quite transparent. Following the Coulomb analogy, we can expect the velocity of the heavy-quark motion in the P-wave quarkonium to be smaller than in the S-wave state (recall that $v_n \sim \alpha/n$, where $n$ is the principal quantum number). The wave functions of the P-wave levels therefore have a softer behaviour than those of the S-wave states, i.e. the relative momentum of the heavy quarks is softer in the P-wave states, while the overlap of the quarkonia wave-functions at large recoil is displaced into the region of high virtualities, and the suppression is stronger for the P-wave levels. Thus, a significant enhancement of the P-wave level yields is expected in the hard-soft factorization approach. Another problem concerning the applicability of the factorization is related to the inherent uncertainties from neglecting the binding energy of the charmonium states. Indeed, putting $m_c =m_{J/\psi}/2$ leads to zero binding energy for the S-wave states, while for the excitations under study it is about 500 MeV, which could be quite essential in the numerical estimates.
We test the dependence on the binding energy by varying the charmed-quark mass in the range $1.5$--$1.7$ GeV. We find that this variation brings significant uncertainties into the absolute values of the widths under consideration: about 30--50% for the P-wave charmonia and greater than 100% for the D-waves. Nevertheless, we observe that the ratios of widths presented in Table \[tab1\] are quite stable under such a variation of the charmed-quark mass, with the corresponding uncertainties limited to $5$--$10$%. This fact implies that the theoretical predictions for the ratios of the two-particle widths in the $B_c$ decays are quite reliable. Moreover, these ratios are close to the values obtained in [@chache; @10]. Since the $J/\psi$ meson is detected experimentally with a good efficiency in the decays of $B_c$, we compare the direct $J/\psi$ yield ($B_c^+\to J/\psi \pi^+(\rho^+)$) with the cascade one, produced in the decays with radiative electromagnetic transitions of the excited P-wave states into $J/\psi$. The corresponding branching ratios of the radiative decays are known experimentally [@n6]: $$\mbox{Br}(\chi_{c0}\to J/\psi\gamma)=0.007,\; \mbox{Br}(\chi_{c1}\to J/\psi\gamma)=0.27, \; \mbox{Br}(\chi_{c2}\to J/\psi\gamma)=0.14.$$ Then we obtain $${\Gamma (B_c^+\to\chi_{c0,c1,c2}\pi^+\to J/\psi\pi^+\gamma)\over {\Gamma (B_c^+\to J/\psi\pi^+)}}=0.21,$$ and $${\Gamma (B_c^+\to\chi_{c0,c1,c2}\rho^+\to J/\psi\rho^+\gamma)\over {\Gamma (B_c^+\to J/\psi\rho^+)}}=0.26.$$ Thus, we see that the correction to the $J/\psi$ yield in the hadronic two-particle decays of $B_c$ due to the indirect mechanism through the P-wave charmonia is about 20--25%. In contrast, the analogous contribution in the semileptonic decays of $B_c$ is significantly smaller, the corresponding fraction due to the P-wave charmonia being about 5% [@PMBc].
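The two cascade fractions quoted above follow from the ratios of Table \[tab1\] combined with the radiative branching ratios alone. A minimal sketch (the small offset from the quoted 0.26 in the $\rho$ channel comes from the rounding of the table entries used here):

```python
# Cascade J/psi fraction via chi_cJ radiative decays, from Table 1 ratios
# and the measured Br(chi_cJ -> J/psi gamma).
ratio_pi  = {'chi_c0': 1.6,  'chi_c1': 0.016, 'chi_c2': 1.4}   # G(chi pi)/G(J/psi pi)
rho_to_pi = {'chi_c0': 3.5,  'chi_c1': 51.0,  'chi_c2': 4.0}   # G(chi rho)/G(chi pi)
br_gamma  = {'chi_c0': 0.007, 'chi_c1': 0.27, 'chi_c2': 0.14}

frac_pi = sum(ratio_pi[s] * br_gamma[s] for s in br_gamma)
# 3.9 = Gamma(J/psi rho)/Gamma(J/psi pi), also from Table 1
frac_rho = sum(ratio_pi[s] * rho_to_pi[s] * br_gamma[s] for s in br_gamma) / 3.9
print(f"{frac_pi:.2f} {frac_rho:.2f}")   # -> 0.21 0.27
```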
Supposing $a_1=1.1$, we get $${\rm Br}(B_c^+\to J/\psi\pi^+)+{\rm Br}(B_c^+\to J/\psi\rho^+)\approx 2.8\%,$$ while the correction from the $B_c$ decays to $J/\psi$ through the radiative transitions of $\chi_{c0,c1,c2}$ equals $${\rm Br}(B_c^+\to J/\psi\pi^+\gamma)+{\rm Br}(B_c^+\to J/\psi\rho^+\gamma)=0.64\%.$$

Conclusion
==========

In this paper we have considered the hadronic two-particle decays of the $B_c$ meson with large recoil in the technique of hard-soft factorization of the matrix elements. This factorization is based on the physical separation of the hard rescattering of the constituents composing the heavy quarkonia (the scale of virtualities $\sim 1$--$1.5$ GeV$^2$) from the soft binding of the heavy quarks (the scale of virtualities $\sim 0.3$--$0.45$ GeV$^2$). The hard factors can be calculated in perturbative QCD, while the soft ones are expressed in terms of the wave-functions and their derivatives at the origin for the S, P and D-wave levels of the heavy quarkonia. We have calculated the widths of the $B_c^+\to {c\bar c}[{\scriptstyle ^{2S+1}L_J}]\pi^+(\rho^+)$ decays for the charmonium states in the final state (see Tables \[tab1\], \[tab2\]). We have found that the results for the ratios of widths are quite stable under the variation of the model parameters, while the absolute values carry rather large uncertainties, above all from the variation of the charmed-quark mass, which reflects the main systematic uncertainty of the zero-binding-energy approximation for the charmonium. We have compared our results with the potential model [@chache; @10] operating with the wave-function overlapping. The relative momentum of the charmed quarks inside the charmonium states (especially inside the excited P- and D-wave states, where the relative velocity of the heavy quarks is smaller than for the S-levels) is rather small in comparison with the recoil momentum, so that the overlap is displaced into the exponentially suppressed region, where the wave-function formalism is not reliable.
In this region the hard gluon corrections, which replace the exponential behaviour of the quark-meson form factors by a power-law one, are significant. Thus, we expect sizable yields of excited charmonium states in the two-particle decays of $B_c$. This increase results in the additional 20--25% fraction of the $J/\psi\pi$ inclusive branching ratio due to the contribution caused by the radiative electromagnetic transitions of the P-wave charmonium states.

The authors thank Prof. A.K. Likhoded for fruitful discussions and valuable remarks. This work is supported in part by the Russian Foundation for Basic Research, grants 01-02-99315, 01-02-16585 and 00-15-96645, by the Federal program “State support for the integration of high education and fundamental science”, grant 247 (V.V.K. and O.N.P.), and by the Federal program “University of Russia — Basic Researches”, grant 02.01.03.

Appendix {#appendix .unnumbered}
========

In this appendix we write down the cumbersome formulae for the ratios of the decay widths with the $\rho$ meson to the widths with the pion for the various recoil charmonium states. We introduce $x=m_2/m_1$, where $m_{1,2,3}$ are the masses of the $B_c$, the $c\bar c[{\scriptstyle ^{2s+1}L_J}]$ and the $\rho$, respectively.
Then we get $$\begin{aligned} &&\frac{\Gamma(B_c\to\eta_c\rho)}{\Gamma(B_c\to\eta_c\pi)}= B_{\eta_c}\,\left(1-\frac{m_3^2}{m_1^2}\, \frac{5x^2+2x+3}{x^2\,(1+x)^2\,(3x^2-2x+1)}\right)+{\cal O}\left(\frac{m_3^4}{m_1^4}\right),\\[3mm] &&\frac{\Gamma(B_c\to\ J/\psi\rho)}{\Gamma(B_c\to\ J/\psi\pi)}= B_{J/\psi}\,\left(1+\frac{m_3^2}{2 m_1^2}\frac{2x^6+20x^5-3x^4-16x^3+16x^2-4x+1} {x^6\,(1-x)^2\,(2+x)^2}\right)+{\cal O}\left(\frac{m_3^4}{m_1^4}\right),\\[3mm] &&\frac{\Gamma(B_c\to\ h_c\rho)}{\Gamma(B_c\to\ h_c\pi)}= B_{h_c}\, \left(1-\frac{2 m_3^2}{m_1^2}\,\frac{2x^6-3x^5+6x^4+8x^3+6x^2+31x+18} {(1+x)^2\,(x^3-2x^2+3x+4)^2}\right)+%\\&& {\cal O}\left(\frac{m_3^4}{m_1^4}\right),\\[3mm] &&\frac{\Gamma(B_c\to \chi_{c0}\rho)}{\Gamma(B_c\to \chi_{c0}\pi)}= B_{\chi_{c0}}\,\left(1-\frac{m_3^2}{3 m_1^2}\,\frac{(2x^4-17x^3-5x^2+7x+29)} {(1-x)\,(1+x)^2\,(x^2-2x+3)}\right)+{\cal O}\left(\frac{m_3^4}{m_1^4}\right),\\[3mm] &&\frac{\Gamma(B_c\to \chi_{c1}\rho)}{\Gamma(B_c\to \chi_{c1}\pi)}= B_{\chi_{c1}}\, \Biggl(1+\frac{m_3^2}{2 m_1^2}\, \frac{2}{(1-x)^6\,(1+x)^2} (5x^8-24x^7+91x^6\\&&\hspace*{3cm}-158x^5+183x^4-4x^3-119x^2+130x+40)\Biggr)+ {\cal O}\left(\frac{m_3^4}{m_1^4}\right),\\[3mm] &&\frac{\Gamma(B_c\to \chi_{c2}\rho)}{\Gamma(B_c\to \chi_{c2}\pi)}= B_{\chi_{c2}}\,%\times\\ && \left(1+\frac{m_3^2}{6 m_1^2}\,\frac{x^6-6x^5+32x^4-48x^3-11x^2+78x-10} {(1-x)^2\,(1+x)^2}\right)+{\cal O}\left(\frac{m_3^4}{m_1^4}\right),\\[3mm] &&\frac{\Gamma(B_c\to c\bar c[{\scriptstyle ^1D_2}]\rho)} {\Gamma(B_c\to c\bar c[{\scriptstyle ^1D_2}]\pi)}= B_{c\bar c[{\scriptstyle ^1D_2}]}\times\\ &&\hspace*{3.7cm}\left(1-\frac{2 m_3^2}{m_1^2}\,\frac{3x^6-12x^5+27x^4-16x^3-3x^2+100x+49} {(1+x)^2\,(x^3-3x^2+5x+5)^2}\right)+{\cal O}\left(\frac{m_3^4}{m_1^4}\right),\\[3mm] &&\frac{\Gamma(B_c\to c\bar c[{\scriptstyle ^3D_1}]\rho)} {\Gamma(B_c\to c\bar c[{\scriptstyle ^3D_1}]\pi)}=B_{c\bar c[{\scriptstyle ^3D_1}]}\times\\ &&\hspace*{3.7cm}\Biggl(1+\frac{m_3^2}{2 m_1^2}\, 
\frac{1}{(1-x)^2\,(1+x)^2\,(5x^3-22x^2+41x+8)^2}\, (169x^{10}-\Biggr.\\ &&\hspace*{3.7cm} 1306x^9+\Biggl.5334x^8-13168x^7+22638x^6-25436x^5+10336x^4+\Biggr.\\ &&\hspace*{3.7cm}14672x^3- 8271x^2\Biggl.-1514x+642)\Biggr)+{\cal O}\left(\frac{m_3^4}{m_1^4}\right),\\[5mm] &&\frac{\Gamma(B_c\to c\bar c[{\scriptstyle ^3D_2}]\rho)} {\Gamma(B_c\to c\bar c[{\scriptstyle ^3D_2}]\pi)}=B_{c\bar c[{\scriptstyle ^3D_2}]}\, \Biggl(1+\frac{m_3^2}{24 m_1^2}\, \frac{1} {(1-x)^6\,(1+x)^2} (13x^8-96x^7+433x^6-\\&&\hspace*{3.7cm}822x^5+1201x^4-124x^3-873x^2+1230x+122) \Biggr)+{\cal O}\left(\frac{m_3^4}{m_1^4}\right),\\[3mm] &&\frac{\Gamma(B_c\to c\bar c[{\scriptstyle ^3D_3}]\rho)} {\Gamma(B_c\to c\bar c[{\scriptstyle ^3D_3}]\pi)}= B_{c\bar c[{\scriptstyle ^3D_3}]}\, \Biggl(1+\frac{m_3^2}{12 m_1^2}\, \frac{1}{(1-x)^2\,(1+x)^2}(x^6-8x^5+52x^4-\\&&\hspace*{3.7cm}96x^3-15x^2+152x-22)\Biggr)+{\cal O}\left(\frac{m_3^4}{m_1^4}\right).\end{aligned}$$ The factors $B$ for the various charmonium states tend to the ratio of the leptonic constants of the $\rho$ and $\pi$ mesons if we neglect the $\rho$ meson mass, while in the general case we obtain $$\begin{aligned} &&B_{\eta_c}=\frac{f_\rho^2}{f_\pi^2}\, \frac{x^6\,(1-x)^5} {(1+x)\left((1-x)^2-\displaystyle\frac{m_3^2}{m_1^2}\right)^3} \,\sqrt{(1-x)^2\,(1+x)^2-\frac{2m_3^2}{m_1^2}\,(1+x^2)+\frac{m_3^4}{m_1^4}}, \\[3mm] &&B_{J/\psi}=B_{\eta_c}\,\frac{x^6\,(1-x)^2}{\left((1-x)^2-\displaystyle\frac{m_3^2} {m_1^2}\right)},\\[3mm] &&B_{h_c}=\frac{f_\rho^2}{f_\pi^2}\, \frac{(1-x)^7}{(1+x)\left((1-x)^2-\displaystyle\frac{m_3^2}{m_1^2}\right)^4} \,\sqrt{(1-x)^2\,(1+x)^2-\frac{2m_3^2}{m_1^2}\,(1+x^2)+\frac{m_3^4}{m_1^4}}, \\[3mm] &&B_{\chi_{c0}}=B_{h_c}\,\frac{(1-x)^2}{\left((1-x)^2-\displaystyle\frac{m_3^2} {m_1^2}\right)},\\[3mm] &&B_{\chi_{c1}}=B_{\chi_{c0}},\\[3mm] &&B_{\chi_{c2}}=\frac{f_\rho^2}{f_\pi^2}\, \frac{(1-x)^9}{(1+x)\left((1-x)^2-\displaystyle\frac{m_3^2}{m_1^2}\right)^5} \,\frac{1}{\sqrt{\displaystyle{{(1-x)^2\,(1+x)^2-\frac{2m_3^2}{m_1^2}\,(1+x^2)+
\frac{m_3^4}{m_1^4}}}}},\\[3mm] &&B_{c\bar c[{\scriptstyle ^1D_2}]}=\frac{f_\rho^2}{f_\pi^2}\, \frac{(1-x)^9\,(1+x)}{\left((1-x)^2-\displaystyle\frac{m_3^2}{m_1^2}\right)^4} \,\frac{1}{\sqrt{\displaystyle{{(1-x)^2\,(1+x)^2-\frac{2m_3^2}{m_1^2}\,(1+x^2)+ \frac{m_3^4}{m_1^4}}}}},\\[3mm] &&B_{c\bar c[{\scriptstyle ^3D_1}]}=\frac{f_\rho^2}{f_\pi^2} \frac{(1-x)^{11}}{(1+x)\left((1-x)^2-\displaystyle\frac{m_3^2}{m_1^2}\right)^6} \,\sqrt{(1-x)^2\,(1+x)^2-\frac{2m_3^2}{m_1^2}\,(1+x^2)+\frac{m_3^4}{m_1^4}}, \\[3mm] &&B_{c\bar c[{\scriptstyle ^3D_2}]}= B_{c\bar c[{\scriptstyle ^3D_3}]}= B_{c\bar c[{\scriptstyle ^3D_1}]}.\end{aligned}$$ Finally, we note that the most significant numerical correction due to the nonzero $\rho$ meson mass comes from the phase space in the coefficients $B$. [999]{} F. Abe et al., CDF Collaboration, Phys. Rev. Lett. [**81**]{}, 2432 (1998), Phys. Rev. [**D58**]{}, 112004 (1998). P.Colangelo, G.Nardulli, N.Paver, Z.Phys. [**C57**]{}, 43 (1993);\ E.Bagan et al., Z. Phys. [**C64**]{}, 57 (1994);\ V.V.Kiselev, A.V.Tkabladze, Phys. Rev. [**D48**]{}, 5208 (1993);\ V.V.Kiselev, A.K.Likhoded, O.A.Onishchenko, Nucl. Phys. [**B569**]{}, 473 (2000);\ V.V.Kiselev, A.K.Likhoded, A.E.Kovalsky, Nucl. Phys. [**B585**]{}, 353 (2000), hep-ph/0006104 (2000). M.Lusignoli, M.Masetti, Z. Phys. [**C51**]{}, 549 (1991);\ V.V.Kiselev, Mod. Phys. Lett. [**A10**]{}, 1049 (1995);\ V.V.Kiselev, Int. J. Mod. Phys. [**A9**]{}, 4987 (1994);\ V.V.Kiselev, A.K.Likhoded, A.V.Tkabladze, Phys. Atom. Nucl. [**56**]{}, 643 (1993), Yad. Fiz. [**56**]{}, 128 (1993);\ V.V.Kiselev, A.V.Tkabladze, Yad. Fiz. [**48**]{}, 536 (1988);\ S.S.Gershtein et al., Sov. J. Nucl. Phys. [**48**]{}, 327 (1988), Yad. Fiz. [**48**]{}, 515 (1988);\ G.R.Jibuti, Sh.M.Esakia, Yad. Fiz. [**50**]{}, 1065 (1989), Yad. Fiz. [**51**]{}, 1681 (1990);\ D.Scora, N.Isgur, Phys. Rev. [**D52**]{}, 2783 (1995);\ A.Yu.Anisimov, I.M.Narodetskii, C.Semay, B.Silvestre–Brac, Phys. Lett.
[**B452**]{}, 129 (1999);\ A.Yu.Anisimov, P.Yu.Kulikov, I.M.Narodetsky, K.A.Ter-Martirosian, Phys. Atom. Nucl. [**62**]{}, 1739 (1999), Yad. Fiz. [**62**]{}, 1868 (1999);\ M.A.Ivanov, J.G.Korner, P.Santorelli, Phys. Rev. [**D63**]{}, 074010 (2001);\ P.Colangelo, F.De Fazio, Phys. Rev. [**D61**]{}, 034012 (2000). I.Bigi, Phys. Lett. [**B371**]{}, 105 (1996);\ M.Beneke, G.Buchalla, [Phys. Rev.]{} [**D53**]{}, 4991 (1996);\ A.I.Onishchenko, \[hep-ph/9912424\];\ Ch.-H.Chang, Sh.-L.Chen, T.-F.Feng, X.-Q.Li, Commun. Theor. Phys. [**35**]{}, 51 (2001), Phys. Rev. [**D64**]{}, 014003 (2001). G.T.Bodwin, E.Braaten, G.P.Lepage, Phys. Rev. [**D51**]{}, 1125 (1995) \[Erratum-ibid. [**D55**]{}, 5853 (1995)\];\ T.Mannel, G.A.Schuler, Z. Phys. [**C67**]{}, 159 (1995). G.P.Lepage, S.J.Brodsky, Phys. Rev. [**D23**]{}, 2157 (1980). S.S.Gershtein et al., preprint IHEP 98-22 (1998) \[hep-ph/9803433\];\ V.V.Kiselev, Phys.Lett. [**B372**]{}, 326 (1996), hep-ph/9605451;\ O.N.Pakhomova, V.A.Saleev, Phys. Atom. Nucl. [**63**]{}, 1999 (2000); Yad.Fiz. [**63**]{}, 2091 (2000) \[hep-ph/9911313\];\ V.A.Saleev, Yad. Fiz. 64 (2001) (in press) \[hep-ph/0007352\]. V.V.Anisovich, D.I.Melikhov, V.A.Nikonov, Phys. Rev. [**D55**]{}, 2918 (1997), [**D52**]{}, 5295 (1995), Phys. Atom. Nucl.  [**57**]{}, 490 (1994) \[Yad. Fiz.  [**57**]{}, 520 (1994)\]. M.Dugan and B.Grinstein, Phys. Lett. [**B255**]{}, 583 (1991);\ M.A.Shifman, Nucl. Phys. [**B388**]{}, 346 (1992);\ B.Blok, M.Shifman, Nucl. Phys. [**B389**]{}, 534 (1993). B.Guberina et al., [ Nucl. Phys.]{} [**B174**]{}, 317 (1980);\ M.B.Voloshin, M.A.Shifman, [ Yad. Fiz.]{} [**41**]{}, 187 (1985);\ M.B.Voloshin, M.A.Shifman, [ Sov. Phys. JETP ]{} [**64**]{}, 698 (1986). L.Bergström et al., [Phys. Rev.]{} [**D43**]{}, [2157]{} (1991). E.Eichten, C.Quigg, Phys. Rev. [**D49**]{}, 5845 (1994);\ S.S.Gershtein et al., Phys. Rev. [**D51**]{}, 3613 (1995);\ S.S.Gershtein et al., [Usp. Fiz. Nauk]{} [**165**]{}, [3]{} (1995);\ E.Eichten, C.Quigg, Phys. Rev. 
[**D52**]{}, 1726 (1995);\ V.V.Kiselev, Phys. Part. Nucl.  [**31**]{}, 538 (2000) \[Fiz. Elem. Chast. Atom. Yadra [**31**]{}, 1080 (2000)\];\ V.V.Kiselev, Int. J. Mod. Phys.  [**A11**]{}, 3689 (1996);\ V.V.Kiselev, Nucl. Phys. [**B406**]{}, 340 (1993). Ch.-H.Chang, Y.-Q.Chen, G.-L.Wang, H.-Sh.Zong, hep-ph/0103036 (2001). D.E.Groom et al., [Eur. Phys. J.]{} [**C15**]{}, 1 (2000). C.-H.Chang, Y.-Q.Chen, Phys. Rev. [**D49**]{}, 3399 (1994). A. Abd El-Hady, J.H.Munoz, J.P. Vary, Phys. Rev. [**D62**]{}, 014019 (2000).
--- abstract: 'Falbel, Koseleff and Rouillier computed a large number of boundary unipotent CR representations. Those representations are not always discrete. By experimentally computing their limit sets, one can determine that those with fractal limit sets are discrete. Most of those discrete representations can be classified into $(3,3,n)$ complex hyperbolic triangle groups. By exact computations, we verify the existence of those triangle representations, which have unipotent boundary holonomy. We also show that many representations are redundant: for $n$ fixed, all the $(3,3,n)$ representations encountered are conjugate and only one among them is uniformizable.' author: - 'Raphaël V. [Alexandre]{}[^1]' bibliography: - 'ref.bib' title: Redundancy of hyperbolic triangle groups in spherical CR representations --- Introduction ============ We are interested in the triangle groups $$\Lambda(p,q,r) = \left\langle a,b,c \; ; \; \begin{matrix} &a^2=b^2=c^2=e, \\ &(ab)^p=(bc)^q=(ca)^r=e \end{matrix}\right\rangle$$ and in how they can be represented in the Lie group $\operatorname{PU}(2,1)$ by complex reflections, that is to say, with $a$, $b$ and $c$ all sent to complex reflections with respect to complex geodesic lines in the complex hyperbolic plane $\mathbf H_{\mathbf{C}}^2$. Such a representation is called a *complex hyperbolic triangle group*, denoted by $\Delta(p,q,r)$, and the images of $a$, $b$ and $c$ are often denoted by $I_1$, $I_2$ and $I_3$. An additional hypothesis is that $\frac \pi p + \frac \pi q + \frac \pi r < \pi$; it can be interpreted as the requirement that the triangle lie in the hyperbolic plane. Triangle groups represented in $\operatorname{PO}(2,1)$ (the transformation group of the *real* hyperbolic plane) are fully prescribed by $p,q$ and $r$ (up to conjugation), whereas in $\operatorname{PU}(2,1)$ (the transformation group of the *complex* hyperbolic plane) an additional parameter controls the representation.
This additional parameter can be interpreted as follows. One can always place two vertices of the triangle in the same real plane of $\mathbf H_{\mathbf{C}}^2$. The last vertex then has to be placed at the intersection of the complex geodesic lines through the previous vertices. That intersection is a one-dimensional topological space, and it parametrizes the possible values of the additional parameter. Only one of those values corresponds to the case where the last vertex lies in the same real plane as the others (and therefore to an ${\mathbf{R}}$-Fuchsian representation). This parameter is called the *angular invariant*. Complex hyperbolic triangle groups form a very rich class of representations in $\operatorname{PU}(2,1)$, and one can ask whether they cover many known representations. In particular, a large number of representations in $\operatorname{PU}(2,1)$ of fundamental groups of knot and link complements are known. Falbel, Koseleff and Rouillier [@FKR] explicitly computed those representations with unipotent boundary holonomy for complements described by four or fewer tetrahedra. The additional hypothesis of unipotent boundary holonomy is strong, but it makes this explicit computation possible, which remains a highly complex numerical problem. Those representations come with some delicate questions. *Which representations are discrete? Which are complex hyperbolic triangle groups?* In the early stages of this research, Falbel [@Falbel] constructed the unipotent boundary representations of the figure-eight knot. Among those (essentially) three representations, two are discrete, and they are indeed complex hyperbolic triangle groups. To be more specific, those two representations can be identified with the normal subgroup of even-length words of a complex hyperbolic triangle group $\Delta(3,3,4)$.
When complex hyperbolic triangle groups were presented by Schwartz [@Schwartz] in an ICM talk, he proposed the following conjecture, which relates the two questions emphasized above: if we find a complex hyperbolic triangle group, is it a discrete and injective representation? A complex hyperbolic triangle group $\Delta(p,q,r)$ is a discrete and injective representation if and only if $I_2I_3I_1I_3$ and $I_1I_2I_3$ are both non-elliptic. Note that, in some rare cases, $I_2I_3I_1I_3$ and $I_1I_2I_3$ can have finite order while $\Delta(p,q,r)$ remains discrete (but is then not an injective representation). For example, Thompson [@Thompson] showed that there exists a representation $\Delta(3,3,4)$ with $I_2I_3I_1I_3$ of order $7$ and a representation $\Delta(3,3,5)$ with $I_2I_3I_1I_3$ of order $5$, and that both representations are lattices. A first step toward this conjecture is a result of Grossi [@Grossi]: in the case $(p,q,r)=(3,3,n)$, if $I_2I_3I_1I_3$ is not elliptic, then $I_1I_2I_3$ is not either. A proof of Schwartz's conjecture in the case of $(3,3,n)$ has been given by Parker, Wang and Xie [@ParkerWangXie], and the case of $(3,3,\infty)$ has been studied by Parker and Will [@ParkerWill]. Let $4\leq n \leq \infty$. Let $\Gamma$ be a hyperbolic $(3,3,n)$ triangle group. Then $\Gamma$ is a discrete and faithful representation of $\Delta(3,3,n)$ if and only if $I_2I_3I_1I_3$ is not elliptic. This allows the following study: take a boundary unipotent representation $\rho$; is it discrete? Is it triangular? Both questions are treated here in a systematic and experimental manner. To study the discreteness of a representation, we chose to compute its limit set experimentally. This set is an attractor for the iteration dynamics, and a simple argument proves discreteness: if the limit set is fractal, then the representation is discrete (unfortunately, the converse does not hold).
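In practice, whether a word such as $I_2I_3I_1I_3$ is elliptic, parabolic or loxodromic can be read off its trace alone, using Goldman's discriminant for $\operatorname{SU}(2,1)$: with $\tau=\operatorname{tr} A$, the quantity $\delta(\tau)=|\tau|^4-8\operatorname{Re}(\tau^3)+18|\tau|^2-27$ is negative exactly for regular elliptic elements, positive for loxodromic ones, and zero for parabolic (or boundary elliptic) elements. A minimal sketch; the sample matrices below are illustrative toy elements (not taken from the census), each preserving some Hermitian form of signature $(2,1)$:

```python
import numpy as np

def goldman_delta(tau):
    """Goldman's discriminant for the trace tau of a matrix in SU(2,1)."""
    return abs(tau)**4 - 8 * (tau**3).real + 18 * abs(tau)**2 - 27

def classify(A, tol=1e-9):
    d = goldman_delta(complex(np.trace(A)))
    if d > tol:
        return 'loxodromic'
    if d < -tol:
        return 'regular elliptic'
    return 'parabolic or boundary elliptic'

w = np.exp(2j * np.pi / 3)
elliptic   = np.diag([w, np.conj(w), 1])   # rotation, trace 0, delta = -27
loxodromic = np.diag([2.0, 1.0, 0.5])      # dilation, trace 3.5, delta > 0
unipotent  = np.array([[1, 1, -0.5],       # parabolic translation, trace 3
                       [0, 1, -1],
                       [0, 0, 1]], dtype=complex)

for A in (elliptic, loxodromic, unipotent):
    print(classify(A))
```

Running this prints `regular elliptic`, `loxodromic` and `parabolic or boundary elliptic` in turn; the trace-based test is what makes checking non-ellipticity of $I_2I_3I_1I_3$ cheap in explicit computations.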
The representations with fractal limit sets are then set apart from the others. There are only about two dozen of them (compared to hundreds). They are then examined in order to try to prove that they come from a complex hyperbolic triangle group. We show that many of them are in fact triangle groups with $(p,q,r)=(3,3,n)$ and $n\geq 4$. Since those representations are discrete, $I_2I_3I_1I_3$ is not elliptic. In our examples, we show that it is even parabolic unipotent, and that the conjugacy class can be chosen so that $I_2I_3I_1I_3$ generates the boundary holonomy of the fundamental group. We would like to stress this last phenomenon. When a manifold $M$ has an abstract triangle representation $\rho\colon\pi_1(M)\to\Lambda(p,q,r)$, then this representation has various embeddings $\pi_1(M)\to\Lambda(p,q,r)\to\Delta(p,q,r)\subset \operatorname{PU}(2,1)$ given by the choice of the angular invariant. Only one representation $\Delta(p,q,r)$ is such that $I_2I_3I_1I_3$ is parabolic unipotent. In every example that the author encountered, this choice implied that the boundary holonomy is also parabolic unipotent and is even described by the element $I_2I_3I_1I_3$. If this phenomenon were always true, it would justify searching for triangle representations using only the computation of boundary unipotent CR representations. This is a drastic reduction. For example, the figure-eight knot has a two-dimensional character variety [@FalbelAl] but only three (up to complex conjugation) different unipotent boundary representations. A phenomenon that will also interest us here is redundancy: among all the $(3,3,n)$ triangle group representations (with $n$ fixed) appearing in the census in [@FKR], only one corresponds to a uniformization of the underlying CR structure. There is a delicate relationship between a CR representation of a manifold $M$ and a *uniformizable* CR representation of $M$.
In the latter case, the image group $\Gamma$ completely determines $M$: if $U$ is its discontinuity domain, then $U/\Gamma$ is diffeomorphic to $M$ (that is the definition of being uniformizable). Interestingly, it is very hard to determine whether a CR representation is uniformizable. The algebraic computation of CR representations does not preserve the topological information. Deraux [@Deraux] first showed that m009 and m015 are two manifolds with boundary unipotent CR representations $\Delta(3,3,5)$ which are conjugate, but only the representation of m009 is uniformizable. In fact, once we know that two representations are conjugate, an evident topological argument shows that at most one of them can be uniformizable. The final result of the present work is: In the following table, the manifolds have a $(3,3,n)$ complex hyperbolic triangle group representation, with the normal subgroup of the even-length words for image. Furthermore, all those representations (within a shared column) are the same, up to conjugation and complex conjugation. The starred manifolds correspond to the uniformizable representations.

  $\Delta(3,3,4)$   $\Delta(3,3,5)$   $\Delta(3,3,6)$   $\Delta(3,3,7)$   $\Delta(3,3,\infty)$
  ----------------- ----------------- ----------------- ----------------- ----------------------
  m004\*            m009\*            m023\*            (m039)\*          m129\*
  m022              m015              m032                                m203
  m029              m142              m045
  m034              m146
  m081
  m117

The even-length subgroups of the triangle groups $\Delta(3,3,n)$ with $I_2I_3I_1I_3$ parabolic unipotent were precisely described by a theorem of Acosta. This theorem allows one to identify the manifold at infinity and to check which manifolds have a uniformizable triangle representation. Let $4\leq n \leq \infty$. Let $\Gamma$ be a hyperbolic $(3,3,n)$ triangle group. Suppose that $I_2I_3I_1I_3$ is parabolic unipotent. Let $\Gamma'\subset \Gamma$ be the subgroup of even-length words.
Then the manifold at infinity of $\mathbf H_{\mathbf C}^2 / \Gamma'$ is the Dehn surgery with slope $(1,n-3)$ on any cusp of the Whitehead link complement. In section \[sec-2\], we succinctly recall the elements of complex hyperbolic geometry that are necessary for the subject. In section \[sec-3\], we explain how the numerical experiments were carried out. We also propose visual clues to recognize the limit sets of the various triangle representations $\Delta(3,3,n)$, with $n$ varying and $I_2I_3I_1I_3$ always parabolic unipotent. Finally, in section \[sec-4\], we summarize the various representations with an apparent fractal limit set. We show that a subclass consists only of $\Delta(3,3,n)$ triangle groups. For this we employ only exact computations, so the result is rigorous. We also explain how to determine which representative of each class (for fixed $n$) is uniformizable. #### Note This paper is part of the author’s thesis, in progress under the supervision of Elisha Falbel. #### Acknowledgments The author enjoyed many very fruitful conversations. First of all, of course, I am very thankful to Elisha Falbel. This work could not have been carried out without him. Since the early stages of building the experimental tools, Fabrice Rouillier and Antonin Guilloux have been of precious help in improving and extending my code. Across countries, Mathias Görner has been essential in helping me use the tools provided by SnapPy correctly. Without them, it is clear that many efforts would not have been made to make the program faster, clearer and easier to use in a larger context. The code is open-source and accessible from the webpage of the author. I would also like to thank Pierre Will for taking precious time to help me understand elementary aspects of the theory of complex hyperbolic groups. Finally, I have been pleased to exchange ideas with Miguel Acosta. His understanding of both experimental and theoretical aspects has been very precious to me.
Elements of complex hyperbolic geometry {#sec-2}
=======================================

In this first section, we present the main tools and notions in use. One can compare with [@Will], [@Goldman], [@Pratoussevitch] and [@ChenGreenberg]. We consider the space ${\mathbf{C}}^{2,1}$, that is to say ${\mathbf{C}}^3$ equipped with the Hermitian product $$\langle z,w\rangle = z_1\overline{w_1} + z_2\overline{w_2} - z_3\overline{w_3}.$$ The set of vectors verifying $\langle z,z\rangle <0$ can be projectivized in ${\mathbf{C}}\mathbf P^2$ and is identified with the *complex hyperbolic plane*, $\mathbf H_{\mathbf{C}}^2$. In the affine chart $z_3=1$, one can identify $\mathbf H_{\mathbf{C}}^2$ with the set of vectors verifying $|z_1|^2 + |z_2|^2 < 1$. This description of the complex hyperbolic plane is also known as the Klein model of $\mathbf H_{\mathbf{C}}^2$. Its boundary in the complex projective plane is a differentiable sphere $S^3$, given by $|z_1|^2 + |z_2|^2 = 1$. Those points are in correspondence with the lines spanned by nonzero vectors of ${\mathbf{C}}^{2,1}$ verifying $\langle z,z\rangle = 0$. The unitary group of ${\mathbf{C}}^{2,1}$ is $\operatorname{U}(2,1)$ and its projectivized version is $\operatorname{PU}(2,1)$. Together with the complex conjugation, it generates the group $\widehat{\operatorname{PU}(2,1)}$, which is the transformation group of $\mathbf H_{\mathbf{C}}^2$ and also of its boundary sphere. This last geometrical structure $(\widehat{\operatorname{PU}(2,1)},S^3)$ is the *spherical CR structure* and $(\operatorname{PU}(2,1),S^3)$ is the *orientation-preserving spherical CR structure*. Let $A \in \operatorname{SU}(2,1)$. If $A$ has a fixed point in $\mathbf H_{\mathbf{C}}^2$ then $A$ is *elliptic*. If $\inf \{d(x,A(x))\}>0$, with $d$ the distance function of $\mathbf H_{\mathbf{C}}^2$, then $A$ is *loxodromic* (or *hyperbolic*). Otherwise, $A$ is *parabolic*. One can determine the type of $A$ by looking at its trace.
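This trace test is easy to implement. The sketch below encodes Goldman’s discriminant function $f$ (restated formally in the next paragraph) and the resulting classification; it is a minimal illustration, not part of the author’s program:

```python
# Classify A in SU(2,1) by tau = tr(A), using Goldman's discriminant
# f(tau) = |tau|^4 - 8 Re(tau^3) + 18 |tau|^2 - 27.
def goldman_f(tau):
    return abs(tau)**4 - 8 * (tau**3).real + 18 * abs(tau)**2 - 27

def trace_type(tau, eps=1e-12):
    d = goldman_f(tau)
    if d > eps:
        return 'loxodromic'
    if d < -eps:
        return 'regular elliptic'
    # f(tau) = 0: parabolic unipotent when tau^3 = 27,
    # otherwise elliptic (a reflection) or ellipto-parabolic.
    if abs(tau**3 - 27) < eps:
        return 'parabolic unipotent'
    return 'elliptic or ellipto-parabolic'

# For real tau, f factors as (tau + 1)(tau - 3)^3:
assert abs(goldman_f(4.0) - 5.0) < 1e-9   # (4+1)(4-3)^3 = 5
print(trace_type(4 + 0j))   # loxodromic
print(trace_type(0 + 0j))   # regular elliptic (f = -27)
print(trace_type(3 + 0j))   # parabolic unipotent
```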
We follow Goldman [@Goldman] and let $$f(\tau) = |\tau|^4 - 8 {{\rm Re}}(\tau^3) +18|\tau|^2 -27.$$ If $f(\operatorname{tr}A)> 0$ then $A$ is loxodromic; if $f(\operatorname{tr}A)<0$ then $A$ is elliptic (in fact regular elliptic: all its eigenvalues are distinct). When $f(\operatorname{tr}A)=0$ there are three cases: if $\operatorname{tr}(A)^3=27$ then $A$ is parabolic unipotent (all its eigenvalues are $1$); otherwise it is either elliptic (and therefore a reflection with respect to a point or a complex geodesic) or ellipto-parabolic (a screw transformation along a complex geodesic). Note that when $\tau$ is real: $$f(\tau) = (\tau+1)(\tau-3)^3,$$ and (under the hypothesis that $\operatorname{tr}(A)$ is real) $A$ is therefore regular elliptic if $\operatorname{tr}(A)\in ]-1,3[$, loxodromic if $\operatorname{tr}(A)\not\in [-1,3]$, and parabolic unipotent if $\operatorname{tr}(A)=3$. Let $M$ be a smooth manifold and $\pi_1(M)$ its fundamental group. A representation $\rho\colon \pi_1(M) \to \operatorname{PU}(2,1)$ is a (CR) *uniformization* of $M$ if $U/\rho(\pi_1(M))$ is diffeomorphic to $M$, where $U\subset \partial \mathbf H_{\mathbf{C}}^2$ is the domain of discontinuity of $\rho(\pi_1(M))$. When $\rho$ is discrete, $U = \partial \mathbf H_{\mathbf{C}}^2 - L(\rho(\pi_1(M)))$, where $L(\rho(\pi_1(M)))$ is the *limit set* of $\rho(\pi_1(M))$. The next section will describe this set. A manifold admitting such a representation is said to be (CR) *uniformizable*. Such manifolds are of great interest in the study of spherical CR structures, and they are determined by the algebraic data of $\rho$. In general, $U/\rho(\pi_1(M))$ is a smooth manifold but is very hard to identify. It remains unknown which three-manifolds are CR uniformizable.

Limit sets
----------

Let $\Gamma\subset \operatorname{PU}(2,1)$ be a subgroup.
Its *limit set* $L(\Gamma)$ is given by: $$L(\Gamma) = \overline{\Gamma\cdot p}\cap \partial \mathbf H_{\mathbf{C}}^2,$$ where $p\in \mathbf H_{\mathbf{C}}^2$ is any point ($L(\Gamma)$ is independent of this choice). The main properties of this set are the following. (Compare with [@ChenGreenberg].)

1. The limit set $L(\Gamma)$ is compact and $\Gamma$-invariant.

2. If $A\subset \partial \mathbf H_{\mathbf{C}}^2$ is compact, $\Gamma$-invariant and consists of at least two points, then $A\subset L(\Gamma)$.

3. If $L(\Gamma)=\emptyset$ then $\Gamma$ fixes a point in $\mathbf H_{\mathbf{C}}^2$.

4. If $L(\Gamma)$ has at most two points, then $L(\Gamma)$ is said to be *elementary*; otherwise it has an infinite number of points and is perfect (each point is an accumulation point).

An important result is the following. If $\Gamma$ is not discrete, then $L(\Gamma)$ is either elementary, or equal to $\partial \mathbf H_{\mathbf{C}}^2$, or equal to the boundary of a totally geodesic proper subspace, that is to say a smooth circle. Consequently, if $L(\Gamma)$ is a fractal, then $\Gamma$ is discrete. This is a powerful experimental way to check whether $\Gamma$ is discrete, since no systematic abstract argument settles the question. The self-similarity property of limit sets can be justified as follows. Let $\Gamma \subset \operatorname{PU}(2,1)$ be a discrete subgroup and suppose that $L(\Gamma)$ is not elementary. Let $a\in L(\Gamma)$ be any point and $V$ be any open neighborhood of $a$. Then there exist $\gamma_1,\dots,\gamma_n\in\Gamma$ such that $$L(\Gamma)= \bigcup_i (\gamma_i\cdot V) \cap L(\Gamma).$$ Let $W = \partial \mathbf H_{\mathbf{C}}^2 - \bigcup \Gamma\cdot V$. It is compact and $\Gamma$-invariant. By construction, $W$ cannot have more than one point. If $W=\{b\}$ then $\Gamma$ fixes $b$, and it follows that $L(\Gamma)$ must be elementary since $\Gamma$ is discrete (this relies on an observation on the corresponding Heisenberg transformation group).
Therefore $W=\emptyset$, and it follows that $L(\Gamma)\subset \bigcup \Gamma\cdot V$. By compactness of $L(\Gamma)$, only a finite number of the $\gamma_i\cdot V$ are necessary.

Complex hyperbolic triangle groups
----------------------------------

We will now describe more precisely the *complex hyperbolic triangle groups* $$\Delta(p,q,r) = \left\langle I_1,I_2,I_3 \; ; \; \begin{matrix} &I_1^2=I_2^2=I_3^2=e, \\ &(I_1I_2)^p=(I_2I_3)^q=(I_3I_1)^r=e \end{matrix}\right\rangle \subset \operatorname{PU}(2,1),$$ with $I_1$, $I_2$ and $I_3$ all three being complex reflections. If $p$, $q$ or $r$ is infinite, then the corresponding relation is omitted. Let $\Delta(p,q,r)$ be a non-singular complex hyperbolic triangle group (the geodesic lines corresponding to the reflections are distinct), with $2 \leq p\leq q \leq r\leq \infty$ and $\frac \pi p +\frac \pi q + \frac \pi r <\pi$. Any such triangle group can be represented by a complex hyperbolic triangle in $\overline{\mathbf H_{\mathbf C}^2}\subset {\mathbf{C}}\mathbf P^2$. Let $H_1$, $H_2$ and $H_3$ be the linear hyperplanes of ${\mathbf{C}}^3$ lifting the sides of the triangle in ${\mathbf{C}}\mathbf P^2$. Let $L_1$, $L_2$ and $L_3$ be the dual complex lines of those hyperplanes, defined by $\langle H_i, L_i\rangle = 0$. The group $\Delta(p,q,r)$ is fully described by them. We only need to choose a base vector for each $L_i$ in order to recover those lines. Furthermore, note that such base vectors form a basis of ${\mathbf{C}}^3$ since the triangle group is non-singular. Let $v_k$ be a base vector of $L_k$; then $$I_k(x) = -x + \frac{2\langle v_k,x\rangle}{\langle v_k,v_k\rangle} v_k$$ is a complex reflection (note that it is $\langle v_k,x\rangle$ and not $\langle x,v_k\rangle$ that makes this transformation linear) and verifies $I_k(h_k) = -h_k$ for any $h_k\in H_k$. That is to say, in ${\mathbf{C}}\mathbf P^2$, $I_k(h_k) \equiv h_k$. Therefore $I_k$ indeed defines the reflection fixing $H_k$.
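This reflection formula can be checked numerically. The sketch below is a minimal illustration (with an arbitrarily chosen positive vector $v$); here the Hermitian product is written as linear in its *first* argument, so the coefficient appears as $\langle x,v\rangle/\langle v,v\rangle$, which defines the same linear map:

```python
import numpy as np

# Hermitian product of signature (2,1), linear in the first argument:
# <z,w> = z1*conj(w1) + z2*conj(w2) - z3*conj(w3).
J = np.diag([1.0, 1.0, -1.0])

def herm(z, w):
    return z @ (J @ np.conj(w))

def reflection(v):
    """Complex reflection whose mirror is the hyperplane <., v> = 0.
    Putting x in the linear slot of the product makes the map C-linear."""
    nv = herm(v, v)
    return lambda x: -x + 2 * (herm(x, v) / nv) * v

v = np.array([1.0, 0.5j, 0.2])    # a positive vector: herm(v, v) > 0
assert herm(v, v).real > 0
I = reflection(v)

x = np.array([0.3 + 1.0j, -2.0, 0.7j])   # arbitrary test vector
assert np.allclose(I(I(x)), x)           # involution: I^2 = id
assert np.allclose(I(v), v)              # the polar vector is fixed
h = np.array([0.5j, 1.0, 0.0])           # lies in the mirror: herm(h, v) = 0
assert abs(herm(h, v)) < 1e-12
assert np.allclose(I(h), -h)             # acts as -id on the mirror hyperplane
print("reflection checks passed")
```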
Because $\langle v_k,v_k\rangle >0$, one can normalize $v_k$ so that $\langle v_k,v_k\rangle = 1$. The remaining free parameters are a unit complex number $z_k\in S^1$ for each $v_k$. One can fix $z_1$ and then adjust $z_2$ and $z_3$ so that $\langle v_1,v_2\rangle$ and $\langle v_2,v_3\rangle$ are real and positive. In general, $\langle v_1,v_3\rangle$ is not real, and this defect can be measured by $\arg(\langle v_1,v_3\rangle)$. From an intrinsic point of view, that is to say without choosing the $z_k$’s, the obstruction for the vertices to lie in a common real plane can be measured by $$\theta = -\arg(\langle v_1,v_2\rangle \langle v_2,v_3\rangle\langle v_3,v_1\rangle).$$ The value of $\theta$ is also known under the name of the *angular invariant*. Once the $\langle v_i,v_j\rangle = c_{ij}$ are known, it is easy to write down the matrices of $I_1$, $I_2$ and $I_3$ in the basis $(v_1,v_2,v_3)$. $$\begin{aligned} I_1 &= \begin{pmatrix} 1 & 2c_{12} & 2c_{13}\\ 0 & -1 & 0\\ 0 & 0 & -1 \end{pmatrix}\\ I_2 &= \begin{pmatrix} -1 & 0 & 0\\ 2c_{21} & 1 & 2c_{23} \\ 0 & 0 & -1 \end{pmatrix}\\ I_3 &= \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 2 c_{31} & 2c_{32} & 1 \end{pmatrix}\end{aligned}$$ We still have to see how $p$, $q$, $r$ and $\theta$ determine $\langle v_i,v_j\rangle=c_{ij}$. For the time being, we suppose $r<\infty$. In fact, the matrix given by the $c_{ij}$’s is equal to: $$H= \begin{pmatrix} 1 & \cos \frac \pi p & \cos \frac \pi r e^{{\boldsymbol{i}}\theta} \\ \cos \frac \pi p & 1 & \cos \frac \pi q \\ \cos \frac \pi r e^{-{\boldsymbol{i}}\theta} & \cos \frac \pi q & 1 \end{pmatrix}.$$ Conversely, this shows that the $c_{ij}$’s fully determine $p$, $q$, $r$ and $\theta$. This matrix is a Hermitian form preserved by $I_1$, $I_2$ and $I_3$.
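These matrices can be verified numerically. The sketch below (an illustration, with the arbitrary admissible choice $(p,q,r)=(3,3,4)$ and $\theta = 2\pi/3$) builds $H$ and the three reflections, and checks that each $I_k$ is an involution of determinant $1$ satisfying $I_k^* H I_k = H$, and that $H$ has negative determinant, hence signature $(2,1)$:

```python
import numpy as np

def triangle_group(p, q, r, theta):
    """Matrices of I1, I2, I3 in the basis (v1, v2, v3) of polar vectors,
    together with the Hermitian form H = (<vi, vj>)."""
    a = np.cos(np.pi / p)                       # c12 = c21
    c = np.cos(np.pi / q)                       # c23 = c32
    b = np.cos(np.pi / r) * np.exp(1j * theta)  # c13, with c31 = conj(c13)
    H = np.array([[1, a, b], [a, 1, c], [np.conj(b), c, 1]])
    I1 = np.array([[1, 2*a, 2*b], [0, -1, 0], [0, 0, -1]])
    I2 = np.array([[-1, 0, 0], [2*a, 1, 2*c], [0, 0, -1]])
    I3 = np.array([[-1, 0, 0], [0, -1, 0], [2*np.conj(b), 2*c, 1]])
    return H, I1, I2, I3

# Arbitrary admissible parameters: (3,3,4) requires cos(theta) < 0 here.
H, I1, I2, I3 = triangle_group(3, 3, 4, 2 * np.pi / 3)

assert np.linalg.det(H).real < 0                   # signature (2,1)
for I in (I1, I2, I3):
    assert np.allclose(I @ I, np.eye(3))           # involutions
    assert np.allclose(np.conj(I.T) @ H @ I, H)    # preserve the form H
    assert np.isclose(np.linalg.det(I), 1)         # lie in SU(2,1)
print("triangle group checks passed")
```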
The determinant of this matrix is given by $$1 + 2\cos(\theta)\cos\frac\pi p \cos\frac \pi q \cos\frac \pi r - \cos\left(\frac \pi p\right)^2 - \cos\left(\frac \pi q\right)^2 - \cos\left(\frac \pi r\right)^2.$$ This determinant allows one to decide when $H$ has signature $(2,1)$. Since the trace of $H$ is $3$, at least one eigenvalue is positive. Therefore, the determinant is negative if and only if $H$ has signature $(2,1)$. That is equivalent to: $$\cos(\theta) < \frac{-1 + \cos\left(\frac \pi p\right)^2 + \cos\left(\frac \pi q\right)^2 + \cos\left(\frac \pi r\right)^2 }{2\cos\frac\pi p \cos\frac \pi q \cos\frac \pi r}.$$ This must be the case, since the original Hermitian form $\langle\cdot,\cdot\rangle$ has signature $(2,1)$. If $p=2$ then $c_{12}$ vanishes and one can make both $c_{23}$ and $c_{13}$ real. Therefore, $(2,q,r)$ complex hyperbolic triangle groups are rigid. We now justify the expression of $H$. Up to conjugation, we can suppose that $H_1\cap H_2$ is generated by $(0,0,1)$. This implies that $v_1$ and $v_2$ are both of the form $(x,y,0)$. Therefore, every $c_{ij}$ is given by $v_{i}^1\overline{v_{j}^1} + v_i^2\overline{v_j^2}$, since at least one of $v_i$ or $v_j$ has a vanishing third coordinate. Hence, geometrically speaking, $c_{ij}$ is the cosine of the angle in ${\mathbf{C}}^2$ formed by the complex lines generated by the first two coordinates of $v_i$ and $v_j$. It is the real part of $c_{ij}$ that is equal to the cosine of the angle formed by the vectors given by the first two coordinates of $v_i$ and $v_j$ (see [@Goldman p. 36]). Note that $c_{13}$ is not real in general, but of course $\langle v_1,e^{{\boldsymbol{i}}\theta}v_3\rangle = e^{-{\boldsymbol{i}}\theta}\langle v_1,v_3\rangle = e^{-{\boldsymbol{i}}\theta}c_{13}$ is real. The angle formed by $H_1$ and $H_2$ is equal to $\frac \pi p$ since $(I_1I_2)^p=e$.
By taking the duals $v_1$ and $v_2$, we get $c_{12}=\pm\cos\frac \pi p$, but we made $c_{12}$ positive, therefore $c_{12}=\cos\frac \pi p$. Likewise, $c_{23}=\cos\frac \pi q$ and $e^{-{\boldsymbol{i}}\theta}c_{13}=\cos\frac \pi r$. Finally, one can compute, with $i$, $j$, $k$ pairwise distinct: $$\operatorname{tr}(I_iI_jI_kI_j) = 16 |c_{ij}c_{kj}|^2 - 16{{\rm Re}}(c_{12}c_{23}c_{31}) + 4|c_{ik}|^2 - 1,$$ and note that in our conventions, we have ${{\rm Re}}(c_{12}c_{23}c_{31})=c_{12} c_{23} \cos\frac\pi r \cos \theta$. It shows that $\operatorname{tr}(I_iI_jI_kI_j)$ determines $\pm\theta$ once $(p,q,r)$ is known. Since the complex conjugation changes $\theta$ into $-\theta$, we deduce from our discussion the following results. In the case where $(p,q,r)=(3,3,r)$ we have in fact: $$\operatorname{tr}(I_iI_jI_kI_j) = 4\cos\left(\frac \pi r\right)^2 - 4\cos\frac\pi r \cos\theta.$$ Let $3\leq p \leq q \leq r < \infty$ be such that $\frac \pi p + \frac \pi q + \frac \pi r <\pi$. A representation of the triangle group $$\Delta(p,q,r) = \left\langle I_1,I_2,I_3 \; ; \; \begin{matrix} &I_1^2=I_2^2=I_3^2=e, \\ &(I_1I_2)^p=(I_2I_3)^q=(I_3I_1)^r=e \end{matrix}\right\rangle$$ into $\operatorname{PU}(2,1)$ is determined by $\theta = -\arg(\langle v_1,v_2\rangle\langle v_2,v_3\rangle \langle v_3,v_1\rangle)$ up to conjugation. Up to conjugation and complex conjugation, it is determined by $\operatorname{tr}(I_iI_jI_kI_j)$, with $i$, $j$, $k$ pairwise distinct. Furthermore, $\theta$ verifies $$\cos(\theta) < \frac{-1 + \cos\left(\frac \pi p\right)^2 + \cos\left(\frac \pi q\right)^2 + \cos\left(\frac \pi r\right)^2 }{2\cos\frac\pi p \cos\frac \pi q \cos\frac \pi r}$$ and, reciprocally, this condition suffices to define a representation with that value of $\theta$. The parameter $\theta$ can be taken in $]0,\pi]$ since the complex conjugation exchanges $\theta$ and $2\pi - \theta$. The possible values of $\operatorname{tr}(I_iI_jI_kI_j)$ are constrained by the preceding condition.
For example, when $(p,q,r)=(3,3,r)$, we have $$\cos\theta < \frac{-\frac 12 + \cos\left(\frac \pi r\right)^2 }{\frac 12 \cos\frac \pi r} = \frac{-1 + 2\cos\left(\frac \pi r\right)^2}{\cos\frac \pi r}.$$ If we look at the trace of $I_iI_jI_kI_j$, its maximum is attained for $\cos\theta$ minimal (that is to say $-1$) and its minimum for $\cos\theta$ maximal. $$\begin{aligned} \operatorname{tr}(I_iI_jI_kI_j) &\leq 4\cos\frac\pi r\left(\cos\frac\pi r + 1\right),\\ \operatorname{tr}(I_iI_jI_kI_j) &>4\cos\left(\frac \pi r\right)^2 - 4\cos\frac\pi r \left(\frac{-1 + 2\cos\left(\frac \pi r\right)^2}{\cos\frac \pi r}\right) =4\left(1 - \cos\left(\frac \pi r\right)^2\right)>0.\end{aligned}$$ This computation shows that the range of values of $\operatorname{tr}(I_iI_jI_kI_j)$ is included in ${\mathbf{R}}_+$, and therefore the range for which $\Delta(p,q,r)$ is discrete and faithful is of the form $[3,m]$, with $m$ the maximum stated before. The value $3$ is indeed attainable: the minimum value for $r$ is $4$, since we have to verify $\frac \pi p + \frac \pi q + \frac\pi r< \pi$, and the maximum of the lower bound $4(1 - \cos(\frac \pi r)^2)$ is reached when $r$ is minimal. For $r=4$ this lower bound equals $2$, which is smaller than $3$, while the maximum $m$ exceeds $3$. One can compute the angular invariant required to have $I_iI_jI_kI_j$ parabolic unipotent. It is given by: $$\cos\theta = \cos\frac \pi r - \frac 3{4\cos\frac \pi r}.$$ When one or several of $p$, $q$ and $r$ are infinite, we can get similar results by replacing the undefined $c_{ij}$ with $\cosh(l_{ij}/2)$, where $l_{ij}$ is the distance between the two complex hyperbolic geodesics $H_i$ and $H_j$. See [@Pratoussevitch]. In particular, it is still true that $\cos\theta$ is determined by $\operatorname{tr}(I_iI_jI_kI_j)$.

Experimental approach {#sec-3}
=====================

In this section, we explain how we experimentally computed the limit sets of the representations appearing in the census of Falbel-Koseleff-Rouillier [@FKR].
We also propose a comparative experiment by simulating the limit sets associated to the $(3,3,n)$ triangle groups. It will allow us to propose visual clues in order to distinguish the fractals of the different $(3,3,n)$ triangle groups when $I_2I_3I_1I_3$ is parabolic. The source code of the simulations and most of their results are available on the author’s webpage. The code has been made open-source.

Computing limit sets
--------------------

Let $\Gamma\subset \operatorname{PU}(n,1)$ be a subgroup. Let $\Gamma_L$ denote the set of loxodromic elements of $\Gamma$. Suppose that the limit set $L(\Gamma)$ is non-elementary and that $\Gamma_L\neq \emptyset$. Then the closure of the set of accumulation points of the loxodromic iteration dynamics, $$\overline{\left\{x \;\middle|\; \exists g\in \Gamma_L, \; \lim_{n\to\infty} g^n x_0 = x\right\}},$$ where $x_0\in \mathbf H_{\mathbf C}^2$ is any base point, is equal to the full limit set $L(\Gamma)$. This follows from the remark below. Note that if $g$ is loxodromic, then any conjugate $\gamma g \gamma^{-1}$ is loxodromic too. Now, $\gamma x$ is equal to $\lim (\gamma g \gamma^{-1})^n(\gamma x_0)$. This shows that the described set contains at least two points (the attractive fixed points of $g$ and $g^{-1}$) and is $\Gamma$-invariant. This yields a first good strategy: computing attractive fixed points of loxodromic elements. However, this strategy requires computing a very large number of elements $g\in \Gamma$. This can be done by generating words of length $n$. If $\Gamma$ is described by two generators, then there are approximately $3^n$ reduced words of length $n$. An additional strategy consists in computing two lists of half-length words and combining them at the last moment (in order to save a square root of the memory space). In practice, and this is particularly true with complex hyperbolic triangle groups, it is hard to get *different* points from such a computation. One often sees large concentrations of points in tiny boxes and even many copies of the same point.
This is partly due to unknown relations between words, even at small word length. Instead of only computing words and taking their attractive limits, we used a second strategy in complement. When enough points are acquired, one can apply words to them (loxodromic or not) to get a better picture of the limit set. This method is much more efficient, for it rarely produces redundant images. When nice symmetries are known (for example, with complex hyperbolic triangles one knows the symmetries $I_1$, $I_2$ and $I_3$), this allows a much better result. To symmetrize fully, one can apply each symmetry successively to the set of points. In practice, we first compute the attractive points of $n_1$-length words, then apply given symmetries to the set obtained, then apply $n_2$-length words to them, and again apply symmetries. At each step, we sort and select points in order to eliminate redundancy in the results. At the end, we still have to project the points from $\mathbf {CP}^2$ into ${\mathbf{R}}^3\subset \partial \mathbf H_{\mathbf C}^2 $. This can be done once a Hermitian form determining $\partial \mathbf H_{\mathbf C}^2$ is known. We used a least-squares method to solve the natural system of linear equations associated to such a Hermitian form in order to recover this information.

Complex hyperbolic $(3,3,n)$ triangle groups
--------------------------------------------

To compute the limit sets associated to $\operatorname{PU}(2,1)$-representations of $(3,3,n)$ triangle groups, we used our previous parametrization of the reflections $I_1$, $I_2$, and $I_3$. We already have all the tools necessary to ensure that $\theta$ is admissible and gives a discrete representation. Note that when $\theta=\pi$, the representation is ${\mathbf{R}}$-Fuchsian and the limit set is therefore a (${\mathbf{R}}$-)circle (since the representation preserves a real plane).
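The strategy described above can be sketched in a few lines. The code below is a simplified illustration (not the author’s program): it builds the $(3,3,4)$ triangle group with the angular invariant making $I_2I_3I_1I_3$ parabolic unipotent (the formula derived in section \[sec-2\]), enumerates short words in $I_1, I_2, I_3$, keeps the loxodromic ones using Goldman’s trace function, and records their attractive fixed points as dominant eigenvectors; each computed point is checked to be a null vector of the preserved Hermitian form, i.e. a boundary point:

```python
import itertools
import numpy as np

# (3,3,4) triangle group, with the angular invariant chosen so that
# I2*I3*I1*I3 is parabolic unipotent: cos(theta) = cos(pi/r) - 3/(4 cos(pi/r)).
r = 4
cr = np.cos(np.pi / r)
theta = np.arccos(cr - 3 / (4 * cr))
a, b, c = 0.5, cr * np.exp(1j * theta), 0.5
H = np.array([[1, a, b], [a, 1, c], [np.conj(b), c, 1]])
I1 = np.array([[1, 2*a, 2*b], [0, -1, 0], [0, 0, -1]])
I2 = np.array([[-1, 0, 0], [2*a, 1, 2*c], [0, 0, -1]])
I3 = np.array([[-1, 0, 0], [0, -1, 0], [2*np.conj(b), 2*c, 1]])
assert np.isclose(np.trace(I2 @ I3 @ I1 @ I3), 3)   # parabolic unipotent trace

def goldman_f(tau):                                  # Goldman's discriminant
    return abs(tau)**4 - 8 * (tau**3).real + 18 * abs(tau)**2 - 27

points = []
for length in (3, 4, 5):
    for word in itertools.product((I1, I2, I3), repeat=length):
        g = np.linalg.multi_dot(word)
        if goldman_f(np.trace(g)) > 1e-6:            # keep loxodromic elements
            vals, vecs = np.linalg.eig(g)
            v = vecs[:, np.argmax(np.abs(vals))]     # attractive fixed point
            points.append(v / v[2])                  # normalize in chart z3 = 1
assert points                                        # loxodromic words were found
for p in points:
    assert abs(np.conj(p) @ H @ p) < 1e-5            # null vectors: boundary points
print(len(points), "limit points computed")
```

In the actual experiments one would deduplicate these points, apply further words and the symmetries $I_1, I_2, I_3$ to them, and project to ${\mathbf R}^3$ before plotting.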
For $n\in \{4,5,6,7\}$ we show three projections of the limit set and an additional diagram proposing a visual clue to recognize it (figures \[fig-2-1\], \[fig-2-2\], \[fig-2-3\], and \[fig-2-4\]). This visual clue consists in looking for a pair of symmetric spikes and inspecting the middle region. We count the largest outer holes: when $n=4$ there is none, when $n=5$ there is one, when $n=6$ there are two, and when $n=7$ there are three.

![Hyperbolic triangle group $(3,3,4)$ with $I_2I_3I_1I_3$ parabolic.[]{data-label="fig-2-1"}](3_3_4-tiles.png){width="\textwidth"}

![Hyperbolic triangle group $(3,3,5)$ with $I_2I_3I_1I_3$ parabolic.[]{data-label="fig-2-2"}](3_3_5-tiles.png){width="\textwidth"}

![Hyperbolic triangle group $(3,3,6)$ with $I_2I_3I_1I_3$ parabolic.[]{data-label="fig-2-3"}](3_3_6-tiles.png){width="\textwidth"}

![Hyperbolic triangle group $(3,3,7)$ with $I_2I_3I_1I_3$ parabolic.[]{data-label="fig-2-4"}](3_3_7-tiles.png){width="\textwidth"}

The visual clues are pointed out in the following examples. See figures \[fig-2-6\], \[fig-2-7\], \[fig-2-8\] and \[fig-2-9\].

Redundancy {#sec-4}
==========

From the census of the unipotent boundary representations in [@FKR], we experimentally computed the limit sets. After a visual inspection, we kept the representations that gave fractals. Those representations come in pairs under complex conjugation of the coefficients. For each pair, we wrote down the identifier of one representation and the verified relations (they hold by exact computations) in the following table. Often, those relations allowed us to recover the relation of the fundamental group. Each time, the fundamental group is presented by two generators and one relation. In the next table (table \[table-complete\]), we present all the representations which gave a fractal. Those accompanied by a star will not be studied further.
  id                     $a^n=e$   $b^n=e$   $(a,b)^n=e$
  ---------- ----------- --------- --------- --------------------------------------
  m004-1     \[0,0,0\]   $a^4$     $b^3$     $(Ab)^3$
  m004-3\*   \[0,1,0\]
  m009-1     \[0,1,2\]   $a^5$               $(aaB)^3, (aab)^3$
  m015-2     \[0,1,2\]   $a^3$     $b^5$     $(abb)^3$
  m022-1     \[0,0,0\]   $a^3$     $b^4$     $(ab)^3$
  m023-1     \[0,0,0\]             $b^6$     $(Abb)^3$, $(aB^3)^3$
  m023-7\*   \[0,4,0\]
  m029-1     \[0,0,0\]   $a^3$     $b^4$     $(aB)^3$
  m032-7     \[0,2,4\]   $a^3$     $b^7$     $(ABB)^3$
  m034-1     \[0,0,0\]   $a^4$     $b^3$     $(AB)^3$
  m035-1\*   \[0,0,0\]                       $(Ba)^2$, $(aBAb^3)^2$
  m038-1\*   \[0,0,0\]   $a^4$     $b^4$     $(AB)^3$
  m045-1\*   \[0,0,0\]                       $(aab)^2$, $(a^3bbaa)^2$
  m045-8     \[0,4,2\]   $a^7$     $b^3$     $(aaaB)^3$
  m053-1\*   \[0,0,0\]
  m053-4\*   \[0,1,1\]
  m053-7\*   \[0,2,2\]
  m081-1     \[0,0,0\]   $a^3$     $b^4$     $(aB)^3$
  m117-1     \[0,0,0\]   $a^3$     $b^3$     $(AB)^4$
  m129-1     \[0,0,0\]   $a^3$     $b^3$
  m130-1\*   \[0,1,0\]                       $(ab)^2$, $(a^2b^3)^4$, $(a^2b^4)^2$
  m137-5\*   \[0,2,2\]   $a^4$     $b^5$     $(Abb)^3$
  m142-1     \[0,0,2\]   $a^3$     $b^3$     $(aB)^5$
  m146-3     \[0,3,2\]   $a^3$     $b^5$     $(AB)^3$
  m203-1     \[0,0,0\]   $a^3$     $b^3$

  : Full experimental table.[]{data-label="table-complete"}

#### Additional remarks on the full experimental table

It is already known that m004-1 and m004-3 are related by composition with a symmetry of the figure-eight knot (see [@Falbel]). It might be the same for m045-1 and m045-8, but this remains to be proved. The representation m038-1 presents the characteristics of a $(3,4,4)$ complex hyperbolic triangle group. Indeed, m038 has such a representation according to a preprint of Ma and Xie that the author has been able to consult. The representation m137-5 presents the characteristics of a $(3,4,5)$ complex hyperbolic triangle group. The representations we selected all present characteristics of $(3,3,n)$ complex hyperbolic triangle groups. We will now prove this.
  $\Delta(3,3,4)$   $\Delta(3,3,5)$   $\Delta(3,3,6)$   $\Delta(3,3,7)$   $\Delta(3,3,\infty)$
  ----------------- ----------------- ----------------- ----------------- ----------------------
  m004\*            m009\*            m023\*            (m039)\*          m129\*
  m022              m015              m032                                m203
  m029              m142              m045
  m034              m146
  m081
  m117

  : Manifolds with $(3,3,n)$ complex hyperbolic triangle group representations.[]{data-label="table-restricted"}

\[thm-rest\] For each column of table \[table-restricted\], the manifolds have a $(3,3,n)$ complex hyperbolic triangle group representation, with the normal subgroup of the even-length words for image. Furthermore, all those representations (within a shared column) are the same, up to conjugation and complex conjugation. Consequently, for each column, at most one representation is a uniformization of the corresponding manifold. Deraux [@Deraux] encountered the same phenomenon with the manifolds m009 and m015. In fact, we can complete the picture with the following result of Acosta. Let $4\leq n \leq \infty$. Let $\Gamma$ be a hyperbolic $(3,3,n)$ triangle group. Suppose that $I_2I_3I_1I_3$ is parabolic unipotent. Let $\Gamma'\subset \Gamma$ be the subgroup of even-length words. Then the manifold at infinity of $\mathbf H_{\mathbf C}^2 / \Gamma'$ is the Dehn surgery with slope $(1,n-3)$ on any cusp of the Whitehead link complement. With SnapPy, it is possible to compute Dehn surgeries on the Whitehead link complement. Remember that m129 is the Whitehead link complement. In each column, we can identify a uniformizing representation (which must be unique in the column). (In order to make SnapPy work correctly, one needs to call the manifold $5^2_1$ and fill a cusp with the meridian equal to $n-3$ and the longitude equal to $1$.) The manifolds in table \[table-restricted\] for which the corresponding $(3,3,n)$ representation is a uniformization were marked with a star. Those manifolds are: m004, m009, m023, m039 and m129.
The first uniformization of m129 (the Whitehead link complement) was obtained by Schwartz [@Schwartz2], but the present uniformization, by a $(3,3,\infty)$ triangle group with unipotent boundary, was studied by Parker and Will [@ParkerWill]. #### About m039 This manifold is described with 5 tetrahedra and therefore does not appear in the explicit census of [@FKR]. By the theorem of Acosta, it does have a representation whose image is the even-length subgroup of the $(3,3,7)$ complex hyperbolic triangle group with $I_2I_3I_1I_3$ parabolic. We will construct this representation and show that it has in fact parabolic unipotent boundary. The same phenomenon happens for $s000$, the manifold obtained by Dehn surgery on the complement of the Whitehead link with the slope associated to the triangle group $(3,3,8)$. #### The method In the following subsections, corresponding to the different values of $n$, we recognize the selected representations and reconstruct the triangle group representation in order to establish the result with certainty. This is organized in three steps, and for each one we give a table. 1. The first table gives the information from the census: the way to retrieve the representation in the census (its identifier) and some formally verified relations that suggest the choice of the triangle group. Those relations always imply the fundamental group relation. 2. The second table gives a morphism from the fundamental group of the manifold to the abstract triangle group $\Lambda(3,3,n)$. This is achieved by giving a presentation of the fundamental group (always consisting of two generators and one relation) and the specification of the generators’ images. We check that the chosen morphism verifies the relations given in the preceding table and (therefore) the fundamental group relation (showing that it is indeed a morphism). In this table, we also specify the word corresponding to $I_2I_3I_1I_3$.
This word can be experimentally computed with the census representation, and we can check that it is indeed parabolic unipotent (one can take the trace once it is verified that the matrix lies in $\operatorname{SU}(2,1)$).

3. In the third table, we compute the peripheral holonomy. Equivalences of words are given according to the relations recorded in the first table (which are satisfied by the abstract morphism constructed previously). This peripheral holonomy is computed in terms of $I_2I_3I_1I_3$. This implies that the abstract representation, once embedded in $\operatorname{PU}(2,1)$ with the appropriate angular invariant (the one making $I_2I_3I_1I_3$ parabolic unipotent), must appear in the census. Therefore, the constructed representation is indeed conjugate (or complex conjugate) to the one selected in the census, by uniqueness of the complex hyperbolic triangle group representation with $I_2I_3I_1I_3$ parabolic.

$(3, 3, 4)$ – m004, m022, m029, m034, m081 and m117
---------------------------------------------------

The first table is computed with the help of the following code.
    import snappy

    # Data
    s = 'm022'
    i, j = 0, 0

    M = snappy.Manifold(s)
    G = M.fundamental_group()
    P = M.ptolemy_variety(3, 'all').retrieve_solutions(prefer_rur=True,
                                                       verbose=False)
    S = [[component for component in per_obstruction
          if component.dimension == 0] for per_obstruction in P]
    K = S[i][j]

    def f(x):
        mat_x = K.evaluate_word(x, G)
        return [[z.lift() for z in y] for y in mat_x]

    # Search for the smallest power of a word that gives the identity
    word = 'a'  # to be changed manually
    for n in range(1, 50):
        s = f(word * n)
        if s == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]:
            print(n, s)
            break

  id       obstruction   $a^n=e$   $b^n=e$   $(a,b)^n=e$
  -------- ------------- --------- --------- -------------
  m004-1   \[0,0,0\]     $a^4$     $b^3$     $(Ab)^3$
  m022-1   \[0,0,0\]     $a^3$     $b^4$     $(ab)^3$
  m029-1   \[0,0,0\]     $a^3$     $b^4$     $(aB)^3$
  m034-1   \[0,0,0\]     $a^4$     $b^3$     $(AB)^3$
  m081-1   \[0,0,0\]     $a^3$     $b^4$     $(aB)^3$
  m117-1   \[0,0,0\]     $a^3$     $b^3$     $(AB)^4$

Here, we consider the abstract triangle group $$\Lambda(3,3,4) = \left\langle I_1,I_2,I_3 \; ; \; \begin{matrix} &I_1^2=I_2^2=I_3^2=e, \\ &(I_1I_2)^3=(I_2I_3)^3=(I_3I_1)^4=e \end{matrix}\right\rangle.$$

         $a$        $b$              fundamental group's relation   $I_2I_3I_1I_3 = P$
  ------ ---------- ---------------- ------------------------------ --------------------
  m004   $I_3I_1$   $I_3I_2$         aabABBAba                      $BA$
  m022   $I_3I_2$   $I_1I_3$         abbbbbabAAb                    $Ab$
  m029   $I_2I_1$   $I_3I_1$         aBabbbAAbbb                    $aBB$
  m034   $I_3I_1$   $I_1I_2$         aaabbABAbb                     $BAA$
  m081   $I_2I_1$   $I_3I_1$         abbbaBaaaaB                    $aBB$
  m117   $I_3I_2$   $I_3I_1I_2I_3$   aabbaabbABAbb                  $aB$

         peripheral curves
  ------ ------------------- --------------------------
  m004   $ab$                $aBAbABab \equiv (ab)^3$
         $P^{-1}$            $P^{-3}$
  m022   $Ba$                $AAbabA \equiv Ba$
         $P^{-1}$            $P^{-1}$
  m029   $abb \equiv aBB$    $bAAAbbb \equiv e$
         $P$                 $e$
  m034   $bbaa \equiv BAA$   $AAABBBA \equiv e$
         $P$                 $e$
  m081   $bba \equiv BBa$    $BaaaBa \equiv BBa$
         $I_1P^{-1}I_1$      $I_1P^{-1}I_1$
  m117   $bA$                $BAAABA \equiv bA$
         $P^{-1}$            $P^{-1}$

$(3, 3, 5)$ – m009, m015, m142 and m146
---------------------------------------

We iterate the same process.
  id       obstruction   $a^n=e$   $b^n=e$   $(a,b)^n=e$
  -------- ------------- --------- --------- -------------------
  m009-1   \[0,1,2\]     $a^5$               $(aaB)^3,(aab)^3$
  m015-2   \[0,1,2\]     $a^3$     $b^5$     $(abb)^3$
  m142-1   \[0,0,2\]     $a^3$     $b^3$     $(aB)^5$
  m146-3   \[0,3,2\]     $a^3$     $b^5$     $(AB)^3$

This time, the triangle group is $$\Lambda(3,3,5) = \left\langle I_1,I_2,I_3 \; ; \; \begin{matrix} &I_1^2=I_2^2=I_3^2=e, \\ &(I_1I_2)^3=(I_2I_3)^3=(I_3I_1)^5=e \end{matrix}\right\rangle.$$

         $a$              $b$                    fundamental group's relation   $I_2I_3I_1I_3=P$
  ------ ---------------- ---------------------- ------------------------------ ------------------
  m009   $I_1I_3$         $I_3I_1I_3I_1I_3I_2$   aabABaaBAb                     $BA$
  m015   $I_2I_1$         $I_3I_1I_3I_1$         abbAAbbaBBB                    $aB$
  m142   $I_3I_1I_2I_3$   $I_2I_3$               abbaBabbaBaaaaB                $BA$
  m146   $I_2I_3$         $I_3I_1$               aabbaaabbaaBAB                 $aB$

         peripheral curves
  ------ ------------------- ----------------------------
  m009   $ab$                $ABaaaBAb \equiv (ab)^2$
         $P^{-1}$            $P^{-1}$
  m015   $bA$                $abbAAAbb \equiv (bA)^{-1}$
         $P^{-1}$            $P$
  m142   $BA$                $bAAAbA \equiv BA$
         $P$                 $P$
  m146   $baa \equiv bA$     $BABABB \equiv aB$
         $P^{-1}$            $P$

$(3, 3, 6)$ – m023
------------------

  id       obstruction   $a^n=e$   $b^n=e$   $(a,b)^n=e$
  -------- ------------- --------- --------- --------------------
  m023-1   \[0,0,0\]               $b^6$     $(Abb)^3,(aB^3)^3$

The triangle group is $$\Lambda(3,3,6) = \left\langle I_1,I_2,I_3 \; ; \; \begin{matrix} &I_1^2=I_2^2=I_3^2=e, \\ &(I_1I_2)^3=(I_2I_3)^3=(I_3I_1)^6=e \end{matrix}\right\rangle.$$

         $a$                          $b$        fundamental group's relation   $I_2I_3I_1I_3=P$
  ------ ---------------------------- ---------- ------------------------------ ------------------
  m023   $I_3I_1I_3I_1I_3I_2I_3I_1$   $I_1I_3$   aBAbbABabbb                    $BAB$

         peripheral curves
  ------ ------------------- ----------------------------
  m023   $bab$               $bbaBABabb \equiv (BAB)^2$
         $P^{-1}$            $P^2$

$(3, 3, 7)$ – m032, m045 and m039
---------------------------------

  id       obstruction   $a^n=e$   $b^n=e$   $(a,b)^n=e$
  -------- ------------- --------- --------- -------------
  m032-7   \[0,2,4\]     $a^3$     $b^7$     $(ABB)^3$
  m045-8   \[0,4,2\]     $a^7$     $b^3$     $(aaaB)^3$

The triangle group is $$\Lambda(3,3,7) = \left\langle I_1,I_2,I_3 \; ; \; \begin{matrix} &I_1^2=I_2^2=I_3^2=e, \\ &(I_1I_2)^3=(I_2I_3)^3=(I_3I_1)^7=e \end{matrix}\right\rangle.$$

         $a$              $b$                    fundamental group's relation   $I_2I_3I_1I_3=P$
  ------ ---------------- ---------------------- ------------------------------ ------------------
  m032   $I_1I_2$         $I_1I_3I_1I_3I_1I_3$   aaBBAbbbbbABB                  $Abbb$
  m045   $I_1I_3I_1I_3$   $I_2I_1$               aaabbaaaBAAAAB                 $ba$

         peripheral curves
  ------ ------------------- -------------------------
  m032   $BBBa$              $bbAAAbba \equiv BBBa$
         $P^{-1}$            $P^{-1}$
  m045   $AB$                $bAAABBBAAA \equiv ba$
         $P^{-1}$            $P$

We additionally construct a representation for m039, so as to complete the picture.

         $a$        $b$                    fundamental group's relation
  ------ ---------- ---------------------- ------------------------------
  m039   $I_1I_3$   $I_3I_1I_3I_1I_2I_1$   aabABaaaaBAb

This representation satisfies $a^7 = (aab)^3 = (Baaaa)^3 = e$, and this implies the fundamental group relation. The peripheral holonomy is generated by $U=AB$ and $V=abABAbaaba$. The images are respectively $\rho(U)=321313$ and $\rho(V)=312132312123$ (where we write $1,2,3$ for $I_1,I_2,I_3$). We can check that $\rho(U)=\rho(V)$ by computing $\rho(U)^{-1}\rho(V)$. Once the angular invariant $\theta$ is fixed so as to obtain a triangle representation with $I_2I_3I_1I_3$ parabolic unipotent, $\rho(V)$ has trace equal to $3$, and this representation therefore has parabolic unipotent boundary.

$(3, 3, \infty)$ – m129 and m203
--------------------------------

  id       obstruction   $a^n=e$   $b^n=e$   $(a,b)^n=e$
  -------- ------------- --------- --------- -------------
  m129-1   \[0,0,0\]     $a^3$     $b^3$
  m203-1   \[0,0,0\]     $a^3$     $b^3$

The triangle group is $$\Lambda(3,3,\infty) = \left\langle I_1,I_2,I_3 \; ; \; \begin{matrix} &I_1^2=I_2^2=I_3^2=e, \\ &(I_1I_2)^3=(I_2I_3)^3=e \end{matrix}\right\rangle.$$

This time, an additional parabolic unipotent element is given by $I_1I_3$.
         $a$              $b$        fundamental group's relation   $I_2I_3I_1I_3=P$
  ------ ---------------- ---------- ------------------------------ ------------------
  m129   $I_3I_1I_2I_3$   $I_2I_3$   aaaBBabAAAbbAB                 $BA$
  m203   $I_3I_1I_2I_3$   $I_2I_3$   aaabbaaBAAABBAAb               $BA$

         peripheral curves
  ------ ------------------------------- --------------------------------------
  m129   $AAb \equiv ab$                 $AAAbbA \equiv BA$
         $P^{-1}$                        $P$
         $Ab$                            $bAAAba \equiv Ba$
         $(I_3I_2)I_1I_3(I_3I_2)^{-1}$   $(I_3I_2)(I_1I_3)^{-1}(I_3I_2)^{-1}$
  m203   $aab \equiv Ab$                 $BBAAAB \equiv e$
         $(I_3I_2)I_1I_3(I_3I_2)^{-1}$   $e$
         $ab$                            $BAAABBAAA \equiv e$
         $P^{-1}$                        $e$

[^1]: Institut de Mathématiques de Jussieu-Paris Rive Gauche, Sorbonne Université, 4 place Jussieu, 75252 Paris Cédex, France. ACG, OURAGAN (IMJ-PRG, INRIA Paris, Sorbonne Université, Université de Paris, CNRS). Email address: [raphael.alexandre@imj-prg.fr]{}.
---
abstract: |
  Ac susceptibility and static magnetization measurements were performed on the optimally doped SmFeAsO$_{0.8}$F$_{0.2}$ superconductor. The field-temperature phase diagram of the superconducting state was drawn and, in particular, the features of the flux line lattice were derived. The dependence of the intra-grain depinning energy on the magnetic field intensity was derived within the thermally-activated flux creep framework, evidencing a typical $1/H$ dependence in the high-field regime. The intra-grain critical current density was extrapolated in the zero-temperature and zero-magnetic-field limit, yielding a remarkably high value $J_{c0}(0) \sim 2 \cdot 10^{7}$ A/cm$^{2}$, which makes this material rather interesting for potential future technological applications.
address:
- '$^{1}$Department of Physics “A. Volta,” University of Pavia-CNISM, I-27100 Pavia, Italy'
- '$^{2}$Department of Physics “E. Amaldi,” University of Roma Tre-CNISM, I-00146 Roma, Italy'
- '$^{3}$Department of Physics, University of Parma-CNISM, I-43124 Parma, Italy'
- '$^{4}$CNR-INFM-LAMIA and University of Genova, I-16146 Genova, Italy'
author:
- 'G. Prando,$^{1,2}$ P. Carretta,$^{1}$ R. De Renzi,$^{3}$ S. Sanna,$^{1}$ A. Palenzona,$^{4}$ M. Putti,$^{4}$ M. Tropeano$^{4}$'
title: |
  Vortex dynamics and irreversibility line in optimally doped SmFeAsO$_{0.8}$F$_{0.2}$\
  from ac susceptibility and magnetization measurements
---

Introduction
============

Several properties of iron-based 1111 oxy-pnictide superconductors, like the high crystallographic and superconducting anisotropy and the large penetration depths, make them similar to cuprate high-$T_{c}$ materials.
Since their discovery,[@Kam06; @Kam08] several studies have been performed in order to clarify their intrinsic microscopic properties.[@Lum10; @Joh10] The attention has been mainly focussed on the bosonic coupling mechanism of the superconducting electrons, on the features of the spin density wave magnetic phase characterizing the parent and lightly doped compounds and its possible coexistence with superconductivity,[@Lue09; @San09; @San10] and on the interaction between the localized magnetism of the RE ions and the itinerant electrons on the FeAs bands.[@Pra10; @Pou08; @Sun10] A common hope is that a full comprehension of these topics in oxy-pnictide superconductors could also allow one to answer several open questions on the cuprates. At the same time, other analogies with cuprates characterize 1111 oxy-pnictides as interesting materials for technological applications, like the small coherence lengths (and, correspondingly, the high values of the upper critical fields) besides the high values of the superconducting critical temperature $T_{c}$. In this respect, studies of macroscopic properties like critical fields and critical depinning currents are of the utmost importance. In particular, the investigation of the dynamical features of the flux line lattice and of the so-called irreversibility line, typically carried out by means of both resistance and ac susceptibility measurements, is in order. Those measurements allow one to further check the validity of the theories used to model the mixed state of cuprate materials and, in particular, the vortex motion and its relationship with the possible pinning mechanisms.[@Bla94] Several works reporting magnetoresistance,[@Mol10; @Lee10; @Sha10] modulated microwave absorption[@Pan10] and dc magnetization[@Jo09; @Vdb10; @Yam08; @Ahm09] measurements examining the flux line lattice dynamics in 1111 oxy-pnictides have already been published in the last two years.
To our knowledge, no study of the flux line lattice by means of ac susceptibility measurements has been published yet. This paper deals with the field, temperature and frequency dependences of the ac susceptibility in optimally doped SmFeAsO$_{0.8}$F$_{0.2}$, which is one of the compounds showing the highest $T_{c}$ among all the iron-based superconductors. Although no sufficiently large single crystals are available and our data refer to unoriented powder samples, the power of the ac susceptibility technique allowed us to deduce several intrinsic features of the mixed state of the superconductor. The magnetic field ($H$) behaviour of the irreversibility line was obtained, allowing us to draw (together with dc magnetization data) a detailed phase diagram of the flux line lattice. The $H$ dependence of the intra-grain effective depinning energy ($U_{0}$) was investigated, evidencing the characteristic crossover from single-vortex dynamics to collective dynamics ($U_{0} \propto 1/H$) at a field $H \sim 0.5$ T. An estimate of the intra-grain critical depinning current density in the limit of vanishing temperature and magnetic field was also deduced, giving the remarkably high value of $J_{c0}(0) \sim 2 \cdot 10^{7}$ A/cm$^{2}$. This result is of great importance in characterizing SmFeAsO$_{0.8}$F$_{0.2}$ as a superconductor suitable for technological applications.

Experimental details and main results {#SectExp}
=====================================

SmFeAsO$_{0.8}$F$_{0.2}$ was prepared by solid state reaction at ambient pressure from Sm, As, Fe, Fe$_{2}$O$_{3}$ and FeF$_{2}$. SmAs was first synthesized from the pure elements in an evacuated, sealed glass tube at a maximum temperature of 550°C. The final sample was synthesized by mixing SmAs, Fe, Fe$_{2}$O$_{3}$ and FeF$_{2}$ powders in stoichiometric proportions, using uniaxial pressing to make the powders into a pellet and then heat treating the pellet in an evacuated, sealed quartz tube at 1000°C for 24 hours, followed by furnace cooling.
The sample was analyzed by powder X-ray diffraction in a Guinier camera, with Si as internal standard. The powder pattern showed the sample to be single phase, with two weak extra lines at low angle from the spurious SmOF phase. The lattice parameters were $a = 3.930(1)$ Å and $c = 8.468(2)$ Å. Static magnetization $M_{dc}$ measurements were performed by means of a Quantum Design MPMS-XL7 SQUID magnetometer. The temperature ($T$) dependence of $M_{dc}$ upon field-cooling (FC) the sample was monitored at different applied magnetic fields up to 7 T. Representative raw $M_{dc}$ vs. $T$ curves are shown in Fig. \[FigDC1v5T\]. The superconducting (SC) response, with onset around $T_{c} \simeq 52$ K, is found to be superimposed on a paramagnetic contribution associated with the Sm$^{3+}$ ions. Clear kinks in the magnetization curves can be observed at $T_{N} \simeq 4$ K, evidencing the antiferromagnetic transition of the Sm$^{3+}$ magnetic moments.[@Rya09] The field dependence of the SC transition temperature $T_{c}(H)$ was deduced by first subtracting the linear extrapolation of the Sm$^{3+}$ paramagnetic contribution in a few-K region around the SC onset from the raw data. The transition temperatures were then estimated from the intersection of two linear fits of the resulting curves above and below the onset (see Fig. \[FigDC1v5T\], inset). The $T_{c}(H)$ behaviour was also deduced by means of magnetoresistance measurements upon the application of magnetic fields up to 9 T, showing a behaviour analogous to what was observed in SmFeAsO$_{1-x}$F$_{x}$ compounds from the same batch but with lower F$^{-}$ concentrations $x$.[@Pal09] The onset of the diamagnetic contribution and its dependence on the applied external magnetic field was also investigated by means of a Quantum Design MPMS-XL5 SQUID ac susceptometer. Measurements were performed with an alternating field in the range $H_{ac} =$ ($0.0675$ – $1.5$) $\cdot 10^{-4}$ T parallel to the static field $H$, which varied up to 5 T.
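The onset-extraction procedure just described amounts to intersecting two straight-line fits of $M_{dc}(T)$ performed above and below the transition; a minimal sketch (the fit coefficients below are illustrative, not measured values):

```python
def onset_temperature(m1, b1, m2, b2):
    """Crossing point of two linear fits M = m*T + b, performed
    above (m1, b1) and below (m2, b2) the diamagnetic onset."""
    if m1 == m2:
        raise ValueError("parallel fits do not intersect")
    return (b2 - b1) / (m1 - m2)

# Illustrative coefficients: a flat paramagnetic baseline above the onset
# and a steep diamagnetic drop below it, crossing near T_c = 52 K.
print(onset_temperature(0.0, 0.0, -2.0e-3, 0.104))  # ~52 K
```

The same intersection rule applies to the diamagnetic onset in $\chi^{\prime}$ discussed below.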
The ac field frequency $\nu_{m}$ ranged from $37$ to $1488$ Hz. The diamagnetic onset temperature was estimated from the $\chi^{\prime}$ vs. $T$ curves by means of the same procedure shown in the inset of Fig. \[FigDC1v5T\]. An accurate examination of the ac susceptibility data as a function of $\nu_{m}$ allowed us to obtain further information on the dynamical properties of the flux line lattice (FLL). It is well known, in fact, that a peak in the $\chi^{\prime\prime}$ vs. $T$ curves, associated with a maximum of the energy dissipation inside the sample, appears at a temperature $T_{p}$ slightly lower than the diamagnetic onset temperature in $\chi^{\prime}$. Correspondingly, at the same temperature $T_{p}$ the $\chi^{\prime}$ vs. $T$ curve displays a maximum in its first derivative[@Kes89] (see exemplifying raw data in Fig. \[FigAC\], lower panel). Several works in the past decades have tried to clarify the origin of the $\chi^{\prime\prime}$ peak. One of the possible interpretations relies on Bean's critical state model[@Bea62; @Bea64] and associates the peak in $\chi^{\prime\prime}$ with the flux front reaching the centre of the sample. In this case, the peak temperature $T_{p}$ should not depend on the measurement frequency $\nu_{m}$, while strong dependences on the sample dimensions and on the ac field amplitude $H_{ac}$ are predicted.[@Fri95] Another interpretation relies on the modification of the skin depth, due to the superconductor resistivity in the thermally-assisted flux flow (TAFF) regime, with respect to the London penetration depth.
In this case $T_{p}$ should strongly depend on the measurement frequency $\nu_{m}$, while no dependence on the ac field amplitude $H_{ac}$ is predicted.[@Kes89; @Fri95; @Cle91; @Ges91; @Bra93; @Gom97] Considering the frequency dependence of the $\chi^{\prime\prime}$ peak, another interesting interpretation associates it with a resonant absorption of energy obtained when the inverse of $\nu_{m}$ matches the characteristic relaxation time $\tau_{c}$ of the FLL at $T_{p}$,[@Pal90; @Tin91] namely $$2 \pi \nu_{m} \left.\tau_{c}\right|_{T = T_{p}} = 1.$$ In this case, the underlying theory is the more general framework of the thermally activated creep of flux lines between different metastable minima of the pinning potentials.[@And64; @Tin96] At temperatures lower than $T_{p}$, other broader contributions to both $\chi^{\prime}$ and $\chi^{\prime\prime}$ were found and interpreted as arising from the granularity of the powder sample and, in particular, from intergranular Josephson weak links between different grains.[@Gom97; @Nik89; @Nik95] In cuprate materials, the depinning energy barrier associated with grain boundaries was extracted from the analysis of the low-temperature peak in $\chi^{\prime\prime}$ and, in particular, of its frequency dependence.[@Nik89; @Kum95; @Mul90] Strong granularity has been observed also in iron-based pnictide materials. On samples prepared with the same procedure, a small (though not negligible) intergranular critical current density has been evaluated by a remanent magnetization analysis.[@Yam11] However, it has been determined that the main contribution to the magnetization curve comes from intragranular currents. By considering our data just in the $T$ region close enough to the diamagnetic onset, then, we will be focussing only on the intra-grain intrinsic dynamical properties. The stronger signal amplitude, moreover, made it reasonable to investigate the peak in the $\chi^{\prime}$ derivative instead of the maximum in $\chi^{\prime\prime}$. Fig.
\[FigACder\] shows the temperature evolution of the normalized first derivative of the real component $\chi^{\prime}$ of the ac susceptibility upon the application of different values of the static magnetic field $H$. In these measurements, both the alternating field and the working frequency were kept fixed, at $H_{ac} = 1.5 \cdot 10^{-4}$ T and $\nu_{m} = 37$ Hz respectively. The effect of increasing $H$ is a clear shift of $T_{p}$ towards lower values. At each applied $H$ a clear dependence of $T_{p}$ on $\nu_{m}$ was evidenced, as discussed later on. Some scans with $H_{ac}$ values in the range ($0.0675$ – $1.5) \cdot 10^{-4}$ T were also performed at the representative values $H = 0.025, 0.25$ and $5$ T (data not shown). Within the experimental error, no dependence of $T_{p}$ on $H_{ac}$ was evidenced.

Analysis and discussion
=======================

The raw $M_{dc}$ vs. $T$ data reported in Fig. \[FigDC1v5T\] were fitted by the function (see dashed curves in Fig. \[FigDC1v5T\]) $$\begin{aligned} \label{EqScTrans} M_{dc}(T,H) &=& M_{sc} \left[1 - \left(\frac{T}{T_{c}}\right)^{\alpha}\right]^{\beta} +\nonumber\\ && + C_{cw}\frac{H}{T - T_{N}} + M_{0}(H)\end{aligned}$$ where the first term is the diamagnetic Meissner response (empirically represented by a two-exponent mean-field function) and the second one is the Curie-Weiss paramagnetic contribution. The last term accounts for all the sources of $T$-independent magnetism, ranging from Pauli- and Van-Vleck-like paramagnetism to a small contribution from magnetic impurities (e.g. Fe$_{2}$As).[@Cim09] A detailed analysis of the results of the fitting procedure according to Eq. \[EqScTrans\] will be presented in another work.[@Pra11] Results from both SQUID magnetometry and ac susceptibility are summarized in the phase diagram shown in Fig. \[FigPhDiaSubm\]. From the $T_{c}(H)$ data obtained from the $M_{dc}$ vs. $T$ curves (see Fig.
\[FigDC1v5T\]) it was possible to derive the temperature dependence of the upper critical field $H_{c2}$ (see open circles in Fig. \[FigPhDiaSubm\]). A linear fit of the $H_{c2}$ vs. $T$ data deduced from magnetization data gives a slope $d H_{c2}/d T = 7.47 \pm 0.15$ T/K, in agreement with what was found in compounds of the same family from magnetoresistivity data,[@Pal09; @Jar08] even if much lower slope values were reported from calorimetric measurements on single crystals of Nd-based 1111 superconductors.[@Wel08] Then, in the simplified assumption of single-band superconductivity, through the Werthamer, Helfand and Hohenberg relation[@Wer66] $$\label{EqWHH} H_{c2}(T = 0\ \mathrm{K}) \simeq 0.693 \cdot T_{c} \left|\frac{d H_{c2}}{d T}\right|_{T \simeq T_{c}}$$ it is possible to estimate $H_{c2} \simeq 270$ T in the limit of vanishing $T$. A comparison between the diamagnetic onset temperature obtained from ac susceptibility at different frequencies (see exemplifying raw data in Fig. \[FigAC\]) and the one obtained from static magnetization is plotted in the inset of Fig. \[FigPhDiaSubm\]. The onset in the ac data was systematically found at lower temperatures than the corresponding dc diamagnetic onset. Dashed lines represent the empirical power-law fitting function $$\label{EqPowLaw} H \propto (1 - T/T_{c})^{\beta}, \; \beta = 3/2,$$ which describes the experimental data well at each value of the ac field frequency $\nu_{m}$. Such a functional form, characterized by $\beta = \frac{4}{3}$ – $\frac{3}{2}$, is known to describe the irreversibility line in the $H$–$T$ phase diagram of cuprates.[@Mal88] Plus signs in the phase diagram in Fig. \[FigPhDiaSubm\] refer to the points $T = T_{p}$ of maximum slope of the $\chi^{\prime}$ vs. $T$ curves ($H_{ac} = 1.5 \cdot 10^{-4}$ T and $\nu_{m} = 37$ Hz; see Fig. \[FigACder\]), corresponding, as already explained in Sect. \[SectExp\], to the maximum of $\chi^{\prime\prime}$ vs. $T$ associated with intrinsic intra-grain losses.
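As a quick numerical check of the WHH estimate above, one may plug in the quoted $T_{c} \simeq 52$ K and the fitted slope $7.47$ T/K (a sketch; $0.693$ is the standard single-band WHH prefactor):

```python
def whh_hc2_zero(Tc, slope):
    """Werthamer-Helfand-Hohenberg single-band estimate of the upper
    critical field at T = 0 from T_c and |dH_c2/dT| near T_c."""
    return 0.693 * Tc * slope

print(whh_hc2_zero(52.0, 7.47))  # ~269 T, i.e. H_c2(0) ~ 270 T as quoted
```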
These data divide the phase diagram into two main regions, following the interpretation of the $\chi^{\prime\prime}$ peak in terms of resonant absorption of energy in a thermally activated flux creep model.[@Pal90] In the high-$T$ and high-$H$ region the flux lines are in a reversible state, that is, they are responding to the external ac perturbation (liquid FLL). On the other hand, in the low-$T$ and low-$H$ region the vortices are arranged in a glassy-like frozen FLL that gives rise to a non-reversible response and to dissipation, linked to the non-zero values assumed by $\chi^{\prime\prime}$. The $T_{p}$ vs. $H$ points associated with the lowest accessible frequency $\nu_{m}$ are thus expected to belong to the irreversibility line (or de Almeida-Thouless line) of the FLL phase diagram. As in the case of the diamagnetic onset in $\chi^{\prime}$ (see Fig. \[FigPhDiaSubm\], inset), Eq. \[EqPowLaw\] nicely fits the field dependence of $T_{p}$ in the $H > 0.8$ T regime (see the dotted line in Fig. \[FigPhDiaSubm\]). A logarithmic dependence of $1/T_{p}$ on $\nu_{m}$ at every fixed $H$ is evidenced over the explored frequency range (see, for example, the $H = 1.5$ T data in the inset of Fig. \[FigDepBarr\]). Data can then be fitted within a thermally-activated framework by the formula (red dashed line in the inset of Fig. \[FigDepBarr\]) $$\label{EqLogDep} \frac{\nu_{m}}{\nu_{0}} = \exp\left(-\frac{U_{0}(H)}{k_{B} \left.T_{p}\right|_{\nu = \nu_{m}}}\right)$$ from which it can be observed that the logarithmic behaviour of $1/T_{p}$ is mainly controlled by the parameter $U_{0}$, playing the role of an effective depinning energy barrier in a thermally-activated flux creep model. The parameter $\nu_{0}$ takes the meaning of an intra-valley characteristic frequency associated with the motion of the vortices around their equilibrium position in the pinning centers.
The advantage of extracting the value of $U_{0}$ from ac susceptibility data, if compared for instance to magnetoresistance data, is that the former is an almost isothermal estimate. Temperature, in fact, varies at most by $1$ K as a function of $\nu_{m}$ at the highest applied $H$ (see Fig. \[FigDepBarr\], inset), so that it is possible to determine $U_{0}$ at a temperature $T^{*}(H)$ with a maximum uncertainty of $0.5$ K. This fact will be of interest when deriving the critical current density value, as will be shown later on. Data at different static magnetic fields can be fitted according to this model, giving the results reported in Fig. \[FigDepBarr\], where the $H$ dependence of the effective depinning energy barrier $U_{0}$ is shown. Beyond an overall sizeable reduction of $U_{0}$ with increasing $H$, a crossover between two different regimes can be clearly observed at $H \gtrsim 0.5$ T. At high fields the data are well described by a $1/H$ dependence, a well-known result in high-$T_{c}$ cuprate superconductors, observed by means of several techniques, ranging from NMR[@Rig98] to ac magnetometry[@Emm91] and magnetoresistivity[@Pal90]. A naive explanation of this behaviour can be obtained in terms of the balance between the Gibbs free energy of the system and the energy required for the motion of a flux-line bundle.[@Tin88] In this framework, the crossover between the two different trends of $U_{0}$ vs. $H$ shown in Fig. \[FigDepBarr\] can be interpreted as the transition from a basically single-flux-line response at low $H$ values to a collective response of the vortices for $H > 0.5$ T.
A similar phenomenology was recently reported from magnetoresistivity data on a single crystal of O-deficient SmFeAsO$_{0.85}$ and on powder samples of La-based and Ce-based 1111 superconductors.[@Lee10; @Sha10] The crossover between the different regimes was observed at $H \simeq 1$ T in La-based samples and at much higher magnetic fields in Ce-based samples and in SmFeAsO$_{0.85}$ ($H \simeq 3$ T). However, the $U_{0}$ values are typically one order of magnitude lower in La- and Ce-based superconductors. Comparing the sets of data for $U_{0}$ derived from magnetoresistivity and here from ac susceptibility in Sm-based samples, the numerical agreement is very good for $H \gtrsim 1$ T. In the $1/H$ regime, Tinkham[@Tin88] extended previous works by Yeshurun and Malozemoff,[@Mal88; @Yes88] showing that the relation for the normalized effective depinning energy barrier $$\label{EqDep} \frac{U_{0}(t,H)}{t} \simeq \frac{K J_{c0}(0)}{H} g(t)$$ holds also for granular samples. Here $U_{0}$ is expressed in kelvin, $t$ is the reduced temperature $t \equiv T/T_{c}$, $g(t) \equiv 4 \left(1-t\right)^{3/2}$, $J_{c0}(t)$ quantifies the critical current density at $H = 0$ and $T = t T_{c}$, while the constant $K$ is defined as $K \equiv 3 \sqrt{3} \Phi_{0}^{2} \beta/2 c$, $\Phi_{0}$ being the flux quantum, $c$ the speed of light and $\beta$ a numerical constant close to unity. Eq. \[EqDep\] is derived in the simplified assumption of a two-fluid model.[@Tin88] By assuming that this empirical scenario can also describe the system under investigation, from ac susceptibility it is possible to directly extrapolate the value of $J_{c0}(0)$. $U_{0}(H)$, as already noticed above, is almost isothermally estimated and can then be expressed as $U_{0}(t^{*},H)$. By now plotting $U_{0}/t^{*} g(t^{*})$ as a function of $1/H$ (see Fig. \[FigJc\]), from a linear fit of the data it is possible to extract from Eq.
\[EqDep\] the value $J_{c0}(0) = \left(2.25 \pm 0.05\right) \cdot 10^{7}$ A/cm$^{2}$, having assumed $\beta = 1$. This rather high value is in agreement with estimates of the critical current density $J_{c}$ evaluated by inductive measurements in similar Sm-based samples[@Yam11] and also with the direct measurement of this quantity in a Sm-based 1111 single crystal in the $T \rightarrow 0$ K and $H \rightarrow 0$ T limit.[@Mol10]

Conclusions
===========

The $H$ – $T$ phase diagram of the flux line lattice in a powder sample of optimally-doped SmFeAsO$_{0.8}$F$_{0.2}$ was investigated by means of both ac and dc susceptibility measurements. The irreversibility line separating a liquid from a glassy phase was deduced, and the activation depinning energy $U_{0}$ as a function of the external magnetic field was derived in the framework of a thermally-activated flux creep theory. A $1/H$ dependence of $U_{0}$ for $H \gtrsim 0.5$ T, typical of the collective motion of flux lines, has been evidenced. From the $U_{0}$ vs. $H$ behaviour a value of $J_{c0}(0) \sim 2 \cdot 10^{7}$ A/cm$^{2}$ has been extrapolated for the critical depinning current at both zero field and zero temperature. From this result we confirm that high intrinsic critical depinning current density values seem to be a peculiar feature of these systems, possibly making them good candidates for technological applications.

M. J. Graf, A. Rigamonti and L. Romanò are gratefully acknowledged for stimulating discussions. We thank C. Pernechele and D. Zola for useful help and suggestions about the ac susceptibility measurements.

Y. Kamihara, H. Hiramatsu, M. Hirano, R. Kawamura, H. Yanagi, T. Kamiya, H. Hosono, [*J. Am. Chem. Soc.*]{} [**128**]{}, 10012 (2006)

Y. Kamihara, T. Watanabe, M. Hirano, H. Hosono, [*J. Am. Chem. Soc.*]{} [**130**]{}, 3296 (2008)

H. D. Lumsden and A. D. Christianson, [*J. Phys.: Cond. Matt.*]{} [**22**]{}, 203203 (2010)

D. C. Johnston, [*Adv. Phys.*]{} [**59**]{}, 803 (2010)

H. Luetkens, H.-H.
Klauss, M. Kraken, F. J. Litterst, T. Dellmann, R. Klingeler, C. Hess, R. Khasanov, A. Amato, C. Baines, M. Kosmala, O. J. Schumann, M. Braden, J. Hamann-Borrero, N. Leps, A. Kondrat, G. Behr, J. Werner and B. Büchner, [*Nature Mat.*]{} [**8**]{}, 305 (2009)

S. Sanna, R. De Renzi, G. Lamura, C. Ferdeghini, A. Palenzona, M. Putti, M. Tropeano, T. Shiroka, [*Phys. Rev.*]{} [**B 80**]{}, 052503 (2009)

S. Sanna, R. De Renzi, T. Shiroka, G. Lamura, G. Prando, P. Carretta, M. Putti, A. Martinelli, M. R. Cimberle, M. Tropeano, A. Palenzona, [*Phys. Rev.*]{} [**B 82**]{}, 060508(R) (2010)

G. Prando, P. Carretta, A. Rigamonti, S. Sanna, A. Palenzona, M. Putti, M. Tropeano, [*Phys. Rev.*]{} [**B 81**]{}, 100508(R) (2010)

L. Pourovskii, V. Vildosola, S. Biermann, A. Georges, [*Europhys. Lett.*]{} [**84**]{}, 37006 (2008)

L. Sun, X. Dai, C. Zhang, W. Yi, G. Chen, N. Wang, L. Zheng, Z. Jiang, X. Wei, Y. Huang, J. Yang, Z. Ren, W. Lu, X. Dong, G. Che, Q. Wu, H. Ding, J. Liu, T. Hu, Z. Zhao, [*Europhys. Lett.*]{} [**91**]{}, 57008 (2010)

G. Blatter, M. V. Feigel’man, V. B. Geshkenbein, A. I. Larkin, V. M. Vinokur, [*Rev. Mod. Phys.*]{} [**66**]{}, 1125 (1994)

P. J. W. Moll, R. Puzniak, F. Balakirev, K. Rogacki, J. Karpinski, N. D. Zhigadlo, B. Batlogg, [*Nature Mat.*]{} [**9**]{}, 628 (2010)

H.-S. Lee, M. Bartkowiak, J. S. Kim, H.-J. Lee, [*Phys. Rev.*]{} [**B 82**]{}, 104523 (2010)

M. Shahbazi, X. L. Wang, C. Shekhar, O. N. Srivastava, S. X. Dou, [*Supercond. Sci. Technol.*]{} [**23**]{}, 105008 (2010)

N. Y. Panarina, Y. I. Talanov, T. S. Shaposhnikova, N. R. Beysengulov, E. Vavilova, G. Behr, A. Kondrat, C. Hess, N. Leps, S. Wurmehl, R. Klingeler, V. Kataev, B. Buchner, [*Phys. Rev.*]{} [**B 81**]{}, 224509 (2010)

Y. J. Jo, J. Jaroszynski, A. Yamamoto, A. Gurevich, S. C. Riggs, G. S. Boebinger, D. Larbalestier, H. H. Wen, N. D. Zhigadlo, S. Katrych, Z. Bukowski, J. Karpinski, R. H. Liu, H. Chen, X. H. Chen, L. Balicas, [*Physica*]{} [**C 469**]{}, 566 (2009)

C. J. van der Beek, G.
Rizza, M. Konczykowski, P. Fertey, I. Monnet, T. Klein, R. Okazaki, M. Ishikado, H. Kito, A. Iyo, H. Eisaki, S. Shamoto, M. E. Tillman, S. L. Bud’ko, P. C. Canfield, T. Shibauchi, Y. Matsuda, [*Phys. Rev.*]{} [**B 81**]{}, 174517 (2010) A. Yamamoto, A. A. Polyanskii, J. Jiang, F. Kametani, C. Tarantini, F. Hunte, J. Jaroszynski, E. E. Hellstrom, P. J. Lee, A. Gurevich, D. C. Larbalestier, Z. A. Ren, J. Yang, X. L. Dong, W. Lu, Z. X. Zhao, [*Supercond. Sci. Technol.*]{} [**21**]{}, 095008 (2008) D. Ahmad, I. Park, G. C. Kim, J. H. Lee, Z.-A. Ren, Y. C. Kim, [*Physica*]{} [**C 469**]{}, 1052 (2009) D. H. Ryan, J. M. Cadogan, C. Ritter, F. Canepa, A. Palenzona, M. Putti, [*Phys. Rev.*]{} [**B 80**]{}, 220503(R) (2009) I. Pallecchi, C. Fanciulli, M. Tropeano, A. Palenzona, M. Ferretti, A. Malagoli, A. Martinelli, I. Sheikin, M. Putti, C. Ferdeghini, [*Phys. Rev.*]{} [**B 79**]{}, 104515 (2009) P. H. Kes, J. Aarts, J. van der Beek, J. A. Mydosh, [*Supercond. Sci. Technol.*]{} [**1**]{}, 242 (1989) C. P. Bean, [*Phys. Rev. Lett.*]{} [**8**]{}, 250 (1962) C. P. Bean, [*Rev. Mod. Phys.*]{} [**36**]{}, 31 (1964) M. C. Frischherz, F. M. Sauerzopf, H. W. Weber, M. Murakami, G. A. Emel’chenko, [*Supercond. Sci. Technol.*]{} [**8**]{}, 485 (1997) J. R. Clem, [*Magnetic Susceptibility of Superconductors and other Spin Systems*]{}, Ed. R. A. Hein, T. L. Francavilla, D. H. Liebenberg, Plenum Press - New York (1991), p. 177 V. B. Geshkenbein, V. M. Vinokur, R. Fehrenbacher, [*Phys. Rev.*]{} [**B 43**]{}, 3748(R) (1991) E. H. Brandt, [*The Vortex State*]{}, Ed. N. Bontemps, Y. Bruynseraede, G. Deutscher, A. Kapitulnik, Kluwer Academic Publishers (1994), p. 125 F. Gömöry, [*Supercond. Sci. Technol.*]{} [**10**]{}, 523 (1997) T. T. M. Palstra, B. Batlogg, R. B. van Dover, L. F. Schneemeyer, J. V. Waszczak, [*Phys. Rev.*]{} [**B 41**]{}, 6621 (1990) M. Tinkham, [*Physica*]{} [**B 169**]{}, 66 (1991) P. W. Anderson, Y. B. Kim, [*Rev. Mod. Phys.*]{} [**36**]{}, 39 (1964) M. 
Tinkham, [*Introduction to Superconductivity*]{}, McGraw-Hill Book Co. (1996) M. Nikolo, R. B. Goldfarb, [*Phys. Rev.*]{} [**B 39**]{}, 6615 (1989) M. Nikolo, [*Am. J. Phys.*]{} [**63**]{}, 57 (1995) B. V. Kumaraswamy, R. Lal, A. V. Narlikar, [*Phys. Rev.*]{} [**B 52**]{}, 1320 (1995) K.-H. Müller, [*Physica*]{} [**B 168**]{}, 585 (1990) A. Yamamoto, J. Jiang, F. Kametani, A. Polyanskii, E. Hellstrom, D. Larbalestier, A. Martinelli, A. Palenzona, M. Tropeano, M. Putti, arXiv:1011.2547 M. R. Cimberle, F. Canepa, M. Ferretti, A. Martinelli, A. Palenzona, A. S. Siri, C. Tarantini, M. Tropeano, C. Ferdeghini, [*Journ. Magn. Magn. Mat.*]{} [**321**]{}, 3024 (2009) G. Prando, P. Carretta, A. Lascialfari, A. Rigamonti, S. Sanna, A. Palenzona, M. Putti, M. Tropeano, to be published J. Jaroszynski, S. C. Riggs, F. Hunte, A. Gurevich, D. C. Larbalestier, G. S. Boebinger, F. F. Balakirev, A. Migliori, Z. A. Ren, W. Lu, J. Yang, X. L. Shen, X. L. Dong, Z. X. Zhao, R. Jin, A. S. Sefat, M. A. McGuire, B. C. Sales, D. K. Christen, D. Mandrus, [*Phys. Rev.*]{} [**B 78**]{}, 064511 (2008) U. Welp, R. Xie, A. E. Koshelev, W. K. Kwok, P. Cheng, L. Fang, H.-H. Wen, [*Phys. Rev.*]{} [**B 78**]{}, 140510(R) (2008) N. R. Werthamer, E. Helfand, P. C. Hohenberg, [*Phys. Rev.*]{} [**147**]{}, 295 (1966) A. P. Malozemoff, T. K. Worthington, Y. Yeshurun, F. Holtzberg, P. H. Kes, [*Phys. Rev.*]{} [**B 38**]{}, 7203(R) (1988) A. Rigamonti, F. Borsa, P. Carretta, [*Rep. Prog. Phys.*]{} [**61**]{}, 1367 (1998) J. H. P. M. Emmen, V. A. M. Brabers, W. J. M. de Jonge, [*Physica*]{} [**C 176**]{}, 137 (1991) M. Tinkham, [*Phys. Rev. Lett.*]{} [**61**]{}, 1658 (1988) Y. Yeshurun, A. P. Malozemoff, [*Phys. Rev. Lett.*]{} [**60**]{}, 2202 (1988)
---
abstract: 'We present and analyse a numerical method for the low-inertia dynamics of an open, inextensible viscoelastic rod, a long and thin three dimensional object representing the body of a slender microswimmer. Our model allows for elastic and viscous, bending and twisting deformations and describes the evolution of the midline curve of the rod as well as an orthonormal frame which fully determines the rod’s three dimensional geometry. The numerical method combines piecewise linear and piecewise constant functions, based on a novel rewriting of the model equations. We derive a stability estimate for the semi-discrete scheme and show that, at the fully discrete level, we have good control over the length element and preserve the frame orthonormality conditions up to machine precision. Numerical experiments demonstrate both the good properties of the method and its applicability to simulating undulatory locomotion in the low-inertia regime.'
---

**Keywords.**[ Kirchhoff rod; viscoelastic materials; finite element methods; biomechanics; undulatory locomotion ]{}

**Subject class.**[ 74S05, 74K10, 74D05, 65M60, 74L15, 92C10 ]{}

Introduction
============

Background
----------

The dynamics of active microswimmers has long captured the interest of physicists, mathematicians and engineers, not to mention researchers in various biological fields. Undulation, passing bending waves down the body, is a common strategy used across many orders of magnitude [@Gazzola2014] ranging from bacteria [@Lig76; @Lauga2016] and spermatozoa [@Fauci2006] to larger fish and whales [@Fish2006]. It is especially common in the low inertia regime, where viscous forces of the surrounding media dominate over inertial forces and the scallop theorem prohibits self-propelled locomotion for time-reversal symmetric sequences of body postures [@Pur77; @Tay52].
For additional reading, we refer the reader to a large body of excellent reviews on animal locomotion [@Gra68; @Lauga2009; @Cicconofri2019], fluid dynamics at low Reynolds number [@Childress1981; @Lighthill1975; @Lig76; @Brennen1977; @Pur77; @Yates1986; @Fauci2006], and the biophysics and biology of cell motility [@Berg2004; @Holwill696; @Jahn1972; @Blum1979; @Berg2000; @Bray2001]. Some computational studies in this area have focussed on solving a fully detailed three dimensional problem, capturing many aspects including the fluid dynamics and its interaction with the body (e.g. [@Szigeti2014; @Palyanov2016; @Spagnolie2013; @Elmi2017; @Mujika2017; @Simons2015]). These large scale computational studies provide many interesting results, giving insights into both complex fluids and animal locomotion (see, e.g., the review [@Lauga2009] and references therein). An alternative is to model the body as an elastic or viscoelastic slender body [@Guo2008; @BerBoyTas09] and apply slender body theory [@Lig76; @Keller1976] to capture the effect of the surrounding fluid through linear drag coefficients. This allows orders of magnitude faster simulations by considering only a one dimensional problem, significantly reducing the number of computational degrees of freedom. In this work we develop, analyse and show experiments using a novel computational approach to the evolution of an open viscoelastic rod, representing the body of a long, thin microswimmer: a three dimensional object with one axis much longer than the other two. We assume the body is embedded in three dimensions and can undergo bending and twisting deformations. The resulting problem can be described as a system of one dimensional partial differential equations for a midline curve and an orthonormal frame which describes the conformation of cross sections relative to the midline. The model is a natural generalisation of [@Guo2008] to three spatial dimensions.
The elastic terms we consider arise from the classical Kirchhoff-Love model for an elastic rod [@Kirchhoff1859] and are combined with viscous terms in a simple linear Voigt viscoelastic model [@Antman1996; @LanLif75]. Our model avoids considering shearing or extensional deformations from the full Cosserat model. In general this nonlinear system cannot be solved analytically and requires computational methods. The reduction to a one dimensional object significantly reduces the complexity of the resulting mathematical model, in particular the computational effort required to solve the problem. Our new method tackles three key challenges. First, when considering active locomotion, one must be able to map directly between the anatomical detail of the organism under consideration and the geometry defined by the model. For example, we should be able to identify clearly where the muscles are and in which direction they apply force. Our approach to capture this results in equations for the midline of the rod coupled to equations for the orientation of the cross section of the rod (see the discussion in [@Langer1996] and \[sec:application\] for more details). These equations must be carefully coupled to ensure that we have an accurate and robust representation of the geometry which allows us to apply the biological forces appropriately. Second, we have a moving geometry which is a priori unknown. Our scheme captures this with a parametrisation which is equivalent to a moving mesh. As is typical in this type of problem, we must ensure that the moving mesh does not become too distorted (see, for example, [@Elliott2016] and references therein). Finally, in many biological systems various parameters and problem data may be unknown or poorly characterized. Any computational method should be cheap enough that parametrization studies may be carried out, and robust to input parameters so that a wide variety of behaviours can be observed.
(See [@Stuart2010; @kaipio2006statistical] for example). We will demonstrate through analysis of our method and numerical experiments that we can address all three challenges. Similar models to those considered in this paper arise in many areas of natural science and engineering. For example, similar models have been considered for elastic ribbons and filaments [@bergou2008discrete; @Bergou2010; @Ftterer2012], tangled hair and fibres [@Bertails2006; @Ward2007; @Daviet2011; @Durville2005], plants [@Goriely1998; @Gerbode2012], and woven cloth [@Kaldor2008]. A historical overview of the model used in this paper is given in [@dill1992kirchhoff]. We are particularly interested in the locomotion of low inertia microswimmers. We show the applicability of our method through a case study of the 1mm long nematode *Caenorhabditis elegans*. This animal has become a model organism studied by physicists, computer scientists and engineers, partly due to its simple, undulating, periodic gait, but also due to its experimental tractability and simple neuroanatomy [@Cohen2014]. In its natural environment, *C. elegans* grows mostly in rotten vegetation (a highly complex, heterogeneous three dimensional environment). In laboratories it is cultured extensively on the surface of agar gels. On agar, *C. elegans* moves by propagating bending waves from head to tail to generate forward thrust [@GraLis64; @Cro70; @PieCheMun08; @BerBoyTas09; @FanWyaXie10; @Lebois2012], where the bending arises from body wall muscles which line the sides of the body. The two dimensional viscoelastic model [@Guo2008] has previously been applied to *C. elegans* [@SznPurKra10; @FanWyaXie10; @CohRan17-pp; @Denham2018]. However, previous formulations have either linearised the equations, so that although postures are recovered, trajectories are not, or have neglected viscosity, which limits the applicability of the model in less resistive environments. Recent experiments [@Bilbao2018; @Shaw2018] have shown that *C.
elegans* achieves a different gait in three dimensions. Here we demonstrate that our computational tool is capable of accurately and robustly capturing such behaviours. The simulation results are meant to be indicative of the capabilities of the method rather than demonstrating any properties of the underlying model, which is left to future work. The numerical method used in this section builds on the unpublished work [@CohRan17-pp] (see also [@Denham2018]).

Model
-----

Our model can be seen as a simplification of the model presented in [@Lang2010; @Lang2012] or a three dimensional version of [@Guo2008]. In our model, the rod is described through a smooth, time-dependent parametrisation of the midline $\vec{x} \colon [0,1] \times [0,T] \to \R^3$ and an orthonormal frame $\vec{e}^0, \vec{e}^1, \vec{e}^2 \colon [0,1] \times [0,T] \to \R^3$ up to some final time $T>0$. We call $u \in [0,1]$ the material parameter. Since we do not allow shear deformations, we set $\vec{e}^0 \equiv \vec\tau := \vec{x}_u / \abs{ \vec{x}_u }$, the unit tangent to the midline. The other two vectors $\vec{e}^1$ and $\vec{e}^2$ form an orthonormal basis of the cross section of the rod. See \[fig:geometry\]. We can form the skew system: $$\begin{aligned} \frac{1}{\abs{\vec{x}_u}} \vec{e}^{0}_u = \alpha \vec{e}^{1} + \beta \vec{e}^{2}, \qquad % \frac{1}{\abs{\vec{x}_u}} \vec{e}^{1}_u = - \alpha \vec{e}^{0} + \gamma \vec{e}^{2}, \qquad % \frac{1}{\abs{\vec{x}_u}} \vec{e}^{2}_u = - \beta \vec{e}^{0} - \gamma \vec{e}^{1},\end{aligned}$$ where $\alpha$ and $\beta$ are smooth fields denoting the curvature of the midline in directions $\vec{e}^{1}$ and $\vec{e}^{2}$ and $\gamma$ is a smooth field denoting the twist of the orthonormal frame about the midline. At the core of the model is a moment which is the sum of elastic and viscous contributions.
The elastic terms are proportional to differences between the fields $\alpha, \beta, \gamma$ and some desired values $\alpha^0, \beta^0, \gamma^0$. The viscous contribution is proportional to the time derivatives $\alpha_t, \beta_t, \gamma_t$. Inertial terms are ignored and external forces are simply modelled as linear drag terms [@Keller1976]. This model can be seen as a quasi-static or low inertia approximation of the full rod dynamics. Full details of the model are given in \[sec:model\].

Computational method
--------------------

Our approach is to discretize an appropriate formulation of the continuous equations directly. This allows us to use the structure of the equations to recover a robust numerical scheme through careful discretization choices. The discretization extends the approach of [@Lin2004] to the case of non-constant twist and open curves. Key to our numerical method is a well suited formulation of the continuous partial differential equation system. We start from the balance laws for linear and angular momentum and the linear viscoelastic constitutive law. This is combined with geometric constraints so that the solution variables are the position of the midline, line tension (a Lagrange multiplier for enforcing the length constraint), the curvature of the midline, the twist and angular velocity of the frame and two auxiliary variables which describe the bending and twisting moments. The continuous system of equations is discretised in space by a mixed finite element method where we use a mix of piecewise linear and piecewise constant approximation spaces. We use a Lagrange multiplier to enforce no stretching (i.e. that the length of the curve is locally fixed) in an approach similar to [@Bar13]. Finally, we discretize in time using a semi-implicit method which results in a linear system of equations to solve at each time step, inspired by the approach in [@Dzi90; @DziKuwSch02; @Lin2004], together with an update formula for the frame.
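The frame update formula just mentioned is realised via a Rodrigues rotation, described in more detail below. As a minimal illustrative sketch (our own hypothetical names, not the authors' implementation), rotating a frame vector about the angular velocity axis over one time step might look like:

```python
import numpy as np

def rodrigues_update(e, omega, dt):
    """Rotate a frame vector e about the axis of omega by angle |omega|*dt.

    Rodrigues formula: R v = v cos(t) + (k x v) sin(t) + k (k.v)(1 - cos(t)),
    with unit axis k = omega/|omega| and angle t = |omega|*dt.
    Hypothetical sketch; in the scheme, omega is derived from the
    solution variables.
    """
    speed = np.linalg.norm(omega)
    theta = speed * dt
    if theta < 1e-14:          # negligible rotation: return vector unchanged
        return e.copy()
    k = omega / speed
    return (e * np.cos(theta)
            + np.cross(k, e) * np.sin(theta)
            + k * np.dot(k, e) * (1.0 - np.cos(theta)))
```

Because the formula is an exact rotation, it preserves lengths and angles, so a frame updated this way stays orthonormal to machine precision regardless of the step size.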
The use of lower order finite element approximations, along with mass lumping [@Tho84], allows us to derive identities for various geometric quantities at the nodes. The frame is updated using a Rodrigues formula, where the rotation is specified by the frame’s angular velocity, which can be derived from the solution variables. We use the angular velocity and the Rodrigues formula as an alternative to Euler angles [@Schwab2006] or quaternions [@Lang2010; @Lang2012]. Although direct use of the Rodrigues formula is not recommended for numerical applications in general [@Bauchau2003], we will show that our formulation avoids such problems for arbitrary angles. More details of the numerical scheme are given in \[sec:method\]. We will demonstrate three key properties of our scheme:

- a semi-discrete stability result (coupled with computational evidence of fully discrete stability) which shows that we recover a discrete Lyapunov functional for our scheme;

- control over the length element which ensures that vertices in the moving mesh do not collide;

- preservation of the frame orthogonality conditions to ensure that the frame really does remain orthonormal over time.

We see these three properties as allowing our method to provide a computational tool both for the understanding of viscoelastic rods and also for domain experts working on undulatory locomotion in the low inertia regime. Further work is required to link this model for undulatory locomotion with a more detailed model of the surrounding fluid, which would allow a greater variety of behaviours to be captured. Where such links are made, a balance must be struck between the computational efficiency of a one dimensional approach and the greater detail provided by a fully three dimensional model. In our previous unpublished work [@CohRan17-pp], we presented a similar scheme (also used in [@Denham2018]).
This paper builds on that work, extending the scheme to three dimensions, including twisting as well as bending contributions, and further generalising the material law to include viscous terms. There are several very successful methods from discrete differential geometry which solve either the problem we consider or a generalisation. In this approach the rod is first discretised and then equations are derived to evolve those discrete quantities. We mention in particular the approach of [@bergou2008discrete] (extended to viscous threads in [@Bergou2010; @Audoly2013]) which uses a discrete set of vertices (in our notation, a piecewise linear curve) with a material frame on each edge between the vertices (piecewise constant in our notation) to represent a rod, stored using a reduced curve-angle formulation [@Langer1996]. Starting from energies defined using curvature and twist as integrated quantities based at vertices for a Kirchhoff-Love model, the approach uses discrete parallel transport and variation of holonomy to derive the update equations. This work was generalised to Cosserat rods by [@Gazzola2018], who move away from the reduced curvature-angle formulation, storing the full frame. The super-helix/super-clothoid approach [@Bertails2006; @Casati2013] uses a piecewise constant or piecewise linear approximation of the generalised curvature of a rod and then recovers the geometry of the rod (position of midline and frame) using analytic expressions. The scheme results in a smooth curve with well defined curvatures and twist. In the context of these schemes, our model can be seen as using a set of vertices with a frame at each vertex to define the discrete geometry of the rod. The equations are a direct discretization of the continuous equations using a finite element approach, with discrete equations to define curvature and relate twist with tangential angular velocity.
It is our particular choice of both geometric discretization and direct discretization of the equations that allows us to demonstrate the properties of our scheme.

Outline
-------

In \[sec:model\] we present the continuous model; the discretization is described in \[sec:method\]. Numerical experiments to demonstrate the efficacy of the method are shown in \[sec:results\]. The restriction of our scheme to a two dimensional problem is given in \[sec:2d-problem\].

Governing equations {#sec:model}
===================

Geometry
--------

We consider a smooth, inextensible, unshearable rod embedded in $\R^3$ over a time interval $[0,T]$ for some $0 < T < \infty$. The rod can be described by a (non-arc length) parametrisation of the centre line $\vec{x} \colon [0,1] \times [0,T] \to \mathbb{R}^3$ and an oriented frame of reference $\vec{Q} \colon [0,1] \times [0,T] \to SO(3)$. For a discussion of rod representations see [@Langer1996]. We will write derivatives with respect to the first coordinate, the material coordinate $u$, and the second coordinate, time $t$, with subscripts $(\cdot)_u$ and $(\cdot)_t$, respectively. Our assumptions imply the length of the midline is fixed so that $\abs{ \vec{x}_u }_t = 0$. We call the length of the midline curve $L$. Rather than use the tensor $\vec{Q}$, we will use the equivalent orthonormal triad of unit vectors $\vec{Q} = \{ \vec{e}^0, \vec{e}^1, \vec{e}^2 \}$. We will use the convention for an unshearable rod that $\vec{e}^0 \equiv \vec\tau$, the unit tangent to the centre line given by $\vec\tau = \vec{x}_u / | \vec{x}_u |$. See \[fig:geometry\] for an example rod conformation. ![An illustration of a conformation of a rod. The image shows a (green) shaded three dimensional region which can be parametrized by a midline curve $\vec{x}$ (black) and an orthogonal triad of vectors $\vec{e}^{0}$ (yellow), $\vec{e}^{1}$ (red), and $\vec{e}^{2}$ (blue).
The fields $\alpha$ and $\beta$ represent the variation (along the curve) of the tangent vector $\vec\tau = \vec{e}^{0}$ and $\gamma$ represents the rotation of the pair $\vec{e}^{1}, \vec{e}^{2}$ about the tangent vector $\vec\tau = \vec{e}^{0}$.[]{data-label="fig:geometry"}](figs/3d-geometry.png){width="50.00000%"} We can recover the generalised curvature (sometimes called the Darboux vector) $\vec\Omega \colon [0,1] \times [0,T] \to \R^3$ which satisfies $$\label{eq:darboux} \frac{1}{\abs{\vec{x}_u}} \vec\tau_u = \vec\Omega \times \vec\tau, \qquad % \frac{1}{\abs{\vec{x}_u}} \vec{e}^{1}_u = \vec\Omega \times \vec{e}^{1}, \qquad % \frac{1}{\abs{\vec{x}_u}} \vec{e}^{2}_u = \vec\Omega \times \vec{e}^{2}.$$ We decompose $\vec\Omega = \vec\tau \times \vec{\kappa} + \gamma \vec\tau$. We call $\vec{\kappa} = \vec\tau_u/\abs{\vec{x}_u}$ the vector curvature with decomposition $\vec{\kappa} = \alpha \vec{e}^{1} + \beta \vec{e}^{2}$ for fields $\alpha = \vec{\kappa} \cdot \vec{e}^{1}, \beta = \vec{\kappa} \cdot \vec{e}^{2}$ and call $\gamma$ the twist. Further, if the orthonormal triad transforms smoothly in time, then we recover the angular velocity of the frame $\vec\omega$ given by $$\label{eq:angular-velocity} \vec\tau_t = \vec\omega \times \vec\tau, \qquad \vec{e}^{1}_{t} = \vec\omega \times \vec{e}^{1}, \qquad \vec{e}^{2}_{t} = \vec\omega \times \vec{e}^{2}.$$ Again, we decompose $\vec\omega = \vec\tau \times \vec\tau_t + m \vec\tau$. The field $m$ denotes the angular velocity of the frame about the tangent vector field, which must be computed separately from the centre line velocity. Since the $u$ and $t$ derivatives commute, using the inextensibility of the parametrisation (i.e.
$\abs{\vec{x}_u}_t=0$), we can compute that $$\frac{1}{\abs{\vec{x}_u}} \vec\omega_u - \vec{\Omega}_t = \vec\omega \times \vec\Omega.$$ In particular, we see that: $$\label{eq:equation-gamma} \frac{m_u}{\abs{\vec{x}_u}} = \gamma_t - \frac{\vec{x}_{tu}}{\abs{\vec{x}_u}} \cdot \vec{\tau} \times \vec{\kappa}.$$ This final equation is important for providing a closing relation between the angular velocity and twist of the frame.

Model derivation
----------------

For an inextensible, unshearable rod we can write down the conservation of linear and angular momentum as [@LanLif75]: $$\label{eq:balance} \vec{K} + \frac{1}{\abs{\vec{x}_u}} \vec{F}_u = 0, \qquad \vec{K}^\rot + \vec\tau \times \vec{F} + \frac{1}{\abs{\vec{x}_u}} \vec{M}_u = 0,$$ where $\vec{K}$ is the external force, $\vec{F}$ is the internal force resultant, $\vec{K}^\rot$ is the external moment and $\vec{M}$ is the internal moment. We assume that the rod is viscoelastic with preferred curvatures and preferred twist so that the internal moment is given by a linear Voigt model: $$\label{eq:moment} \vec{M} = \vec\tau \times \Bigl\{ A \bigl( ( \alpha - \alpha^0 ) \vec{e}^{1} + ( \beta - \beta^0 ) \vec{e}^{2} \bigr) + B ( \alpha_t \vec{e}^{1} + \beta_t \vec{e}^{2} ) \Bigr\} + C ( \gamma - \gamma^0 ) \vec\tau + D \gamma_t \vec\tau.$$ Here $\alpha^0, \beta^0$ and $\gamma^0$ are given fields which we allow to depend on the parameter $u$ and time $t$. We call $\alpha^0$ and $\beta^0$ preferred curvatures and $\gamma^0$ a preferred twist. The material parameters, which we allow to depend on the parameter $u$ but not time $t$, are $A$ the bending modulus, $C$ the twisting modulus, $B$ the bending viscosity and $D$ the twisting viscosity. Note that the material parameters will depend on the precise geometry of the cross section of the rod [@Lang2012]. We assume that $A = A(u) \ge A_0 > 0$, $B = B(u) \ge 0$, $C = C(u) \ge C_0 > 0$ and $D = D(u) \ge 0$.
We introduce the variables $\vec{y}$, the bending moment, and $z$, the twisting moment, given by \[eq:moment-split\] $$\begin{aligned} \vec{y} & = A \bigl( ( \alpha - \alpha^0 ) \vec{e}^{1} + ( \beta - \beta^0 ) \vec{e}^{2} \bigr) + B ( \alpha_t \vec{e}^{1} + \beta_t \vec{e}^{2} ) \\ z & = C ( \gamma - \gamma^0 ) + D \gamma_t, \end{aligned}$$ so that the moment is given by $$\label{eq:moment-simple} \vec{M} = \vec\tau \times \vec{y} + z \vec{\tau}.$$ We assume that the tangential forces, $p \vec\tau$, act as a Lagrange multiplier to enforce the inextensibility constraint: $$\label{eq:inextensibility} \abs{ \vec{x}_u }_t = \vec\tau \cdot \vec{x}_{tu} = 0.$$ We may consider $p$ as a pressure or line tension. We assume a linear drag response of the environment on the rod given by $$\label{eq:drag} \vec{K} = \K \vec{x}_t, \qquad \vec{K}^\rot = - K^\rot m \vec{\tau},$$ with a strictly positive definite matrix $\K$ and a strictly positive scalar coefficient $K^\rot$. Our model of drag is inspired by resistive force theory [@Keller1976]. We decompose the internal force resultant into tangential and normal components by $\vec{F} = p \vec{\tau} + \vec{f}$. We call $p$ the pressure and $\vec{f}$ the normal force resultant. Basic manipulation of the above equations results in the system: \[eq:model\] $$\begin{aligned} \label{eq:model-x} \K \vec{x}_t + \frac{1}{| \vec{x}_u |} \bigl( p \vec\tau \bigr)_u + \frac{1}{|\vec{x}_u|} \bigl( ( \id - \vec\tau \otimes \vec\tau) \frac{ \vec{y}_u }{| \vec{x}_u |} + z \vec\tau \times \vec{\kappa} \bigr)_u & = \vec{0} \\ % \label{eq:model-m} - K^\rot m + \frac{ z_u }{| \vec{x}_u |} + \vec{y} \cdot ( \vec\tau \times \vec{\kappa} ) & = 0 \\ \label{eq:model-p} \vec\tau \cdot \vec{x}_{tu} & = 0. \end{aligned}$$ Here $\id$ is the $3 \times 3$ identity matrix and $\otimes$ is the outer product given by $( \vec{a} \otimes \vec{b} )_{ij} = \vec{a}_i \vec{b}_j$ for $i,j=1,2,3$, $\vec{a}, \vec{b} \in \R^3$.
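Pointwise, the splitting \[eq:moment-split\]–\[eq:moment-simple\] can be evaluated as in the following sketch. All names and the constant material parameters are placeholders, and in practice the time derivatives of $\alpha, \beta, \gamma$ would come from the time discretization:

```python
import numpy as np

def voigt_moment(tau, e1, e2, alpha, beta, gamma,
                 alpha0, beta0, gamma0,
                 dalpha_dt, dbeta_dt, dgamma_dt,
                 A=1.0, B=0.1, C=1.0, D=0.1):
    """Pointwise bending moment y, twisting moment z and total moment M.

    Implements  y = A((alpha - alpha0) e1 + (beta - beta0) e2)
                    + B(alpha_t e1 + beta_t e2),
                z = C (gamma - gamma0) + D gamma_t,
                M = tau x y + z tau.
    The material constants A, B, C, D are placeholder values.
    """
    y = (A * ((alpha - alpha0) * e1 + (beta - beta0) * e2)
         + B * (dalpha_dt * e1 + dbeta_dt * e2))
    z = C * (gamma - gamma0) + D * dgamma_dt
    M = np.cross(tau, y) + z * tau
    return y, z, M
```

Setting the viscosities $B = D = 0$ recovers the purely elastic Kirchhoff-Love moment, while setting the moduli $A = C = 0$ leaves a purely viscous response.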
For boundary conditions we assume that each end of the rod is free, so we enforce zero force and zero moment at $u=0,1$. That is, $$\begin{aligned} \label{eq:model-bc0} p \vec\tau + ( \id - \vec\tau \otimes \vec\tau) \frac{\vec{y}_u}{| \vec{x}_u |} + z \vec\tau \times \vec{\kappa} & = \vec{0} && \mbox{ at } u=0,1\\ \label{eq:model-bc1} \vec\tau \times \vec{y} + z \vec\tau & = \vec{0} && \mbox{ at } u=0,1. \end{aligned}$$

Weak form
---------

We will write down a weak form which we will use for our finite element method in the next section. When writing down the weak formulation, we combine equations for the conservation laws \[eq:model-x\] and \[eq:model-m\], the constitutive laws \[eq:moment-split\], the geometric relation \[eq:equation-gamma\], and the inextensibility constraint \[eq:model-p\]. When writing down the constitutive equation we add the Laplace-Beltrami identity for the vector curvature $\vec{\kappa}$: $$\label{eq:model-w} \vec{\kappa} = \frac{1}{\abs{\vec{x}_u}} \vec{\tau}_u = \frac{1}{\abs{\vec{x}_u}} \left( \frac{ \vec{x}_u }{ \abs{ \vec{x}_u }} \right)_u.$$ We must also impose boundary conditions for $\vec{\kappa}$, since the curvature is not well defined at $u=0,1$, which we set to be equal to the prescribed curvatures here: $$\label{eq:model-w-b} \vec{\kappa} = \alpha^0 \vec{e}^{1} + \beta^0 \vec{e}^{2} \qquad \mbox{ at } u = 0,1.$$ We also note that for the bending viscosity terms we can write $$\alpha_t \vec{e}^{1} + \beta_t \vec{e}^{2} = ( \id - \vec\tau \otimes \vec\tau ) \vec{\kappa}_t - m \vec\tau \times \vec{\kappa}.$$ We derive the weak form of the problem by multiplying by appropriate test functions and integrating over the centre line. We let $Q = L^2( 0, 1 )$ denote the space of square integrable functions on $(0,1)$, $V = H^1( 0, 1 )$ the Sobolev space of functions in $L^2(0,1)$ with a weak derivative in $L^2(0,1)$ and $V_0$ the space of functions in $V$ with zero trace [@Evans2010].
Unless otherwise stated, integrals are with respect to the measure $\mathrm{d} u$.

Given preferred curvatures, $\alpha^0, \beta^0$, a preferred twist, $\gamma^0$, and initial conditions for $\vec{x}, \gamma$ (which imply compatible initial conditions of $\vec{\kappa}, \vec{e}^1$ and $\vec{e}^2$, up to a fixed rotation), find $\vec{x} , \vec{y}, \vec{\kappa} \colon [0,1] \times [0,T] \to \R^3$ (with the conditions \[eq:model-bc1,eq:model-w-b\] at the boundaries), $m, z, \gamma \colon [0,1] \times [0,T] \to \R$, and $\vec{e}^{1}, \vec{e}^{2} \colon [0,1] \times [0,T] \to \R^3$ such that \[eq:weak\] $$\begin{aligned} \label{eq:weak-x} \int_0^1 \K \vec{x}_t \cdot \vec{\phi} | \vec{x}_u | - \int_0^1 p \vec\tau \cdot \vec{\phi}_u - \int_0^1 \bigl( ( \id - \vec\tau \otimes \vec\tau ) \frac{1}{| \vec{x}_u |} \vec{y}_u + z \vec\tau \times \vec{\kappa} \bigr) \cdot \vec{\phi}_u & = 0 \\ % \label{eq:weak-y} \int_0^1 \bigl( \vec{y} - A ( \vec{\kappa} - \alpha^0 \vec{e}^{1} - \beta^0 \vec{e}^{2} ) - B \bigl( ( \id - {\vec\tau} \otimes {\vec\tau} ) \vec{\kappa}_{t} - m {\vec\tau} \times \vec{\kappa} \bigr) \bigr) \abs{ \vec{x}_{u} } & = 0 \\ \label{eq:weak-w} \int_0^1 \vec{\kappa} \cdot \vec\psi | \vec{x}_u | + \frac{\vec{x}_u}{ | \vec{x}_u | } \cdot \vec\psi_u & = 0\end{aligned}$$ for all $\vec\phi \in V^3, \vec\psi \in V_0^3$, $$\begin{aligned} \label{eq:weak-m} \int_0^1 - K^\rot m v | \vec{x}_u | + \int_0^1 \vec{y} \cdot ( \vec\tau \times \vec{\kappa} ) v | \vec{x}_u | - \int_0^1 z v_u & = 0, \\ % \label{eq:weak-z} \int_0^1 ( z - C ( \gamma - \gamma^0 ) - D \gamma_t ) q | \vec{x}_u | & = 0, \\ % \label{eq:weak-gamma} \int_0^1 \gamma_t q | \vec{x}_u | - \int_0^1 m_u q + \int_0^1 \vec\tau \times \vec{\kappa} \cdot \vec{x}_{tu} q & = 0\end{aligned}$$ for all $q \in Q$ and $v \in V$, $$\begin{aligned} \label{eq:weak-p} \int_0^1 q \vec\tau \cdot \vec{x}_{tu} & = 0,\end{aligned}$$ for all $q \in Q$, and $$\begin{aligned} \int_0^1 \bigl( \vec{e}^{j}_{t} - \bigl( \vec\tau \times
\vec\tau_t + m \vec\tau \bigr) \times \vec{e}^{j} \bigr) \cdot \vec{\phi} \abs{ \vec{x}_{u} } = 0, \quad \mbox{ for } j = 1,2,\end{aligned}$$ for all $\vec{\phi} \in V^3$.

Numerical method {#sec:method}
================

Finite element spaces
---------------------

We take a partition of $[0,1]$ by $N$ points $u_1 = 0 < u_2 < \ldots < u_{N} = 1$. We will use a combination of piecewise linear and piecewise constant functions. We introduce the spaces: $$\begin{aligned} V_h & := \{ v_h \in C([0,1]) : v_h|_{[u_i,u_{i+1}]} \text{ is affine, for } i = 1, \ldots, N-1 \} \\ Q_h & := \{ q_h \in L^2(0,1) : q_h|_{[u_i,u_{i+1}]} \text{ is constant, for } i = 1, \ldots, N-1 \}.\end{aligned}$$ We will denote by $V_{h,0}$ the space of finite element functions $v_h \in V_h$ such that $v_h(0) = v_h(1) = 0$. We will use the subscript $h$, defined to be the maximum mesh spacing, to denote discrete quantities. Temporal and spatial derivatives of discrete functions will be denoted by subscripts $u$ and $t$, with a comma separating them from the subscript $h$: e.g. $\eta_{h,u}$ denotes the spatial derivative of $\eta_h$. For a discrete parametrisation $\vec{x}_h \in V_h^3$, we introduce two different tangent vector fields. First, $\vec{\tau}_h \in Q_h^3$ as the piecewise constant normalised derivative of $\vec{x}_h$: $$\label{eq:tauh-1} \vec{\tau}_h = \frac{\vec{x}_{h,u}}{ |\vec{x}_{h,u}| }.$$ We will also require a piecewise linear approximation of $\vec{\tau}_h$ written $\tilde{\vec{\tau}}_h \in V_h^3$ with vertex values given by $$\begin{aligned} \label{eq:tauh-2} \tilde{\vec{\tau}}_h( u_i, \cdot ) & = \frac{ \vec{\tau}_{h}( u_i^-, \cdot) + \vec{\tau}_h( u_i^+, \cdot ) }{| \vec{\tau}_{h}( u_i^-, \cdot) + \vec{\tau}_h( u_i^+, \cdot ) |} \quad \mbox{ for } i = 1, \ldots, N, \end{aligned}$$ where $\vec\tau_h( u_i^\pm, \cdot )$ is $\vec{\tau}_h$ evaluated on the element to the left (respectively right) of the vertex $u_i$.
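A minimal sketch of the two discrete tangent fields defined above (hypothetical names; we assume that at the two end vertices, where only one neighbouring element exists, the single element tangent is used):

```python
import numpy as np

def discrete_tangents(x):
    """Piecewise constant tangent tau_h (one per element) and its
    vertex-averaged, renormalised piecewise linear counterpart tilde_tau_h.

    x: nodal positions of the discrete midline, shape (N, 3).
    Returns tau of shape (N-1, 3) and tilde_tau of shape (N, 3).
    Illustrative only; boundary treatment is an assumption.
    """
    edges = np.diff(x, axis=0)                       # x_{h,u} per element
    tau = edges / np.linalg.norm(edges, axis=1, keepdims=True)
    # Vertex values: sum of neighbouring element tangents, renormalised.
    acc = np.empty_like(x)
    acc[0], acc[-1] = tau[0], tau[-1]                # single-sided at ends
    acc[1:-1] = tau[:-1] + tau[1:]
    tilde_tau = acc / np.linalg.norm(acc, axis=1, keepdims=True)
    return tau, tilde_tau
```

Note that the renormalisation in the vertex average keeps $\tilde{\vec{\tau}}_h$ a field of unit vectors, which is what allows the projection $\id - \tilde{\vec\tau}_h \otimes \tilde{\vec\tau}_h$ to be used at the nodes.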
We will apply mass lumping [@Tho84] using the notations: $$( f )_h := I_h( f ) \quad \mbox{ and } \quad \abs{ f }_{h} := \abs{ I_h( f ) } \mbox{ for } f \in C([0,1]),$$ where $I_h$ is the Lagrangian interpolation operator $C([0,1]) \to V_h$. Finally we denote by $V_{h,0}^3 + \vec{\kappa}_{b}(\cdot,t)$ the space of finite element functions $\vec{v}_h \in V_h^3$ which match the boundary conditions for $\vec{\kappa}_h$: $$\vec{v}_{h} |_{u=0,1} = \vec{\kappa}_b := \alpha^0 \vec{e}^{1}_{h} + \beta^0 \vec{e}^{2}_{h},$$ where $\vec{e}^1_h, \vec{e}^2_h \in V_h^3$ will denote components of the orthonormal frame that we will solve for as part of the method. The space $V_{h,0}^3 + \vec{\kappa}_b$ will in general be time dependent. Semi-discrete problem --------------------- We directly discretize the weak form \[eq:weak\]. The choice of piecewise linear or piecewise constant approximation spaces for the different functions is determined by the properties we will show in \[lem:stability\]. At this stage of discretization, the choices are which discrete function space each solution variable should live in and how to implement boundary conditions. We choose piecewise linear approximations of the position $\vec{x}$, bending moment $\vec{y}$, curvature $\vec{\kappa}$, angular velocity $m$ and frame vectors $\vec{e}^{1}$ and $\vec{e}^2$, and piecewise constant approximations of the twisting moment $z$, twist $\gamma$ and the Lagrange multiplier $p$. We choose to enforce boundary conditions for the bending moment $\vec{y}$ and curvature $\vec{\kappa}$ in the function spaces, whereas boundary conditions for the twisting moment $z$ and twist $\gamma$ arise as natural boundary conditions. We will see that these choices naturally lead to the key properties of our scheme. A summary of discretization choices is given in \[tab:variables\].
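The effect of mass lumping is that integrals of the form $\int_0^1 (f)_h$ reduce to the composite trapezoidal rule in the vertex values of $f$. A small illustrative sketch (the function names are ours, not from the paper):

```python
import numpy as np

def lumped_integral(f_vals, u):
    """Mass-lumped integral of a function given by its vertex values f_vals,
    i.e. the exact integral of its piecewise linear interpolant I_h(f)
    over the partition u_1 < ... < u_N (composite trapezoidal rule)."""
    h = np.diff(u)                   # element lengths
    w = np.zeros_like(u)             # vertex weights (h_{i-1} + h_i) / 2
    w[:-1] += h / 2
    w[1:] += h / 2
    return np.dot(w, f_vals)

u = np.linspace(0.0, 1.0, 9)
f = u
# lumping integrates products of piecewise linears inexactly but stably:
print(lumped_integral(f * f, u))   # ~ 0.3359, trapezoidal value of ∫ u² du = 1/3
```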
  Variable                      Label               Discrete space
  ----------------------------- ------------------- ------------------------------
  Position                      $\vec{x}_h$         $V_h^3$
  Lagrange multiplier           $p_h$               $Q_h$
  Vector curvature              $\vec{\kappa}_h$    $V_{h,0}^3 + \vec{\kappa}_b$
  Twist                         $\gamma_h$          $Q_h$
  Tangential angular velocity   $m_h$               $V_h$
  Normal moment                 $\vec{y}_h$         $V_{h,0}^3$
  Tangential moment             $z_h$               $Q_h$
  Frame vectors ($j=1,2$)       $\vec{e}^{j}_{h}$   $V_h^3$

  : Summary of discretization choices for terms in the model. Recall that $V_h$ is the space of piecewise linear functions and $Q_h$ is the space of piecewise constant functions.[]{data-label="tab:variables"}

Given preferred curvatures $\alpha^0, \beta^0$, a preferred twist $\gamma^0$, and initial conditions for $\vec{x}_h, \gamma_h$ (which imply compatible initial conditions for $\vec{\kappa}_h$, $\vec{e}^{1}_{h}$ and $\vec{e}^{2}_{h}$ up to a fixed rotation), for $t \in [0,T]$, find $\vec{x}_h( \cdot, t) \in V_h^3, \vec{y}_h( \cdot, t) \in V_{h,0}^3, \vec{\kappa}_h( \cdot, t ) \in V_{h,0}^3 + \vec{\kappa}_b(\cdot,t)$, $m_h( \cdot, t ) \in V_h, z_h( \cdot, t ), \gamma_h( \cdot, t), p_h( \cdot, t ) \in Q_h$, $\vec{e}^{1}_{h}( \cdot, t ), \vec{e}^{2}_{h}( \cdot, t ) \in V_h^3$ such that $$\begin{aligned} \label{eq:fem-x} \int_0^1 \K \vec{x}_{h,t} \cdot \vec{\phi}_h | \vec{x}_{h,u} | - \int_0^1 p_h \vec\tau_h \cdot \vec{\phi}_{h,u} \qquad\qquad\qquad\qquad\qquad\qquad & \\ \nonumber - \int_0^1 \bigl( ( \id - \vec\tau_h \otimes \vec\tau_h ) \frac{\vec{y}_{h,u}}{| \vec{x}_{h,u} |} + z_h \vec\tau_h \times \vec{\kappa}_h \bigr) \cdot \vec{\phi}_{h,u} & = 0 \\ \label{eq:fem-y} \int_0^1 \Bigl( \bigl( \vec{y}_h - A ( \vec{\kappa}_h - \alpha^0 \vec{e}^{1}_{h} - \beta^0 \vec{e}^{2}_{h} ) \qquad\qquad\qquad\qquad\qquad\qquad & \\ \nonumber - B ( ( \id - \tilde{\vec\tau}_h \otimes \tilde{\vec\tau}_h ) \vec{\kappa}_{h,t} - m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h ) \bigr) \cdot \vec{\psi}_h \Bigr)_h \abs{ \vec{x}_{h,u} } & = 0 \\ \label{eq:fem-w} \int_0^1 ( \vec{\kappa}_h \cdot
\vec\psi_h )_h | \vec{x}_{h,u} | + \int_0^1 \frac{\vec{x}_{h,u}}{ | \vec{x}_{h,u} | } \cdot \vec\psi_{h,u} & = 0\end{aligned}$$ for all $\vec\phi_h \in V_h^3$, $\vec\psi_h \in V_{h,0}^3$, $$\begin{aligned} \label{eq:fem-gamma} \int_0^1 - ( K^\rot m_h v_h )_h | \vec{x}_{h,u} | - \int_0^1 z_h v_{h,u} + \int_0^1 ( \vec{y}_h \cdot ( \tilde{\vec\tau}_h \times \vec{\kappa}_h ) v_h )_h | \vec{x}_{h,u} | & = 0, \\ \label{eq:fem-z} \int_0^1 ( z_h - C ( \gamma_h - \gamma^0 ) - D \gamma_{h,t} ) q_h | \vec{x}_{h,u} | & = 0, \\ \label{eq:fem-m} \int_0^1 \gamma_{h,t} q_h | \vec{x}_{h,u} | - \int_0^1 m_{h,u} q_h + \int_0^1 ( \vec\tau_h \times \vec{\kappa}_h ) \cdot \vec{x}_{h,tu} q_h & = 0\end{aligned}$$ for all $q_h \in Q_h$ and $v_h \in V_h$, $$\begin{aligned} \label{eq:fem-p} \int_0^1 q_h \vec\tau_h \cdot \vec{x}_{h,tu} & = 0,\end{aligned}$$ for all $q_h \in Q_h$, and $$\label{eq:fem-frame} \int_0^1 \Bigl( \bigl( \vec{e}^{j}_{h,t} - \bigl( \tilde{\vec{\tau}}_h \times \tilde{\vec{\tau}}_{h,t} + m_h \tilde{\vec{\tau}}_h \bigr) \times \vec{e}^{j}_{h} \bigr) \cdot \vec{\phi}_h \Bigr)_h \abs{\vec{x}_{h,u}} = 0, \mbox{ for } j = 1,2,$$ for all $\vec{\phi}_h\in V_h^3$. Decoupling variables for the curvature $\vec{\kappa}_h$ and position $\vec{x}_h$ should be interpreted as a tool for solving the partial differential equation model. This approach is widely used when solving geometric partial differential equations (see e.g. [@DziKuwSch02], with convergence results in [@DecDzi08] and the review [@DecDziEll05]). Since we compute with piecewise linear curves, we cannot formulate an exact curvature of the discrete curve. This is a key difference between the approach presented here and super-helix/super-clothoid approaches [@Bertails2006; @Casati2013]. Using $\vec{e}^{0}_{h} \equiv \tilde{\vec\tau}_h$, we will see that $\{ \vec{e}^{0}_{h}, \vec{e}^{1}_{h}, \vec{e}^{2}_{h} \}$ is a vertex-wise orthonormal frame.
Indeed, we note that \[eq:fem-frame\] implies that we recover the vertex-wise relations $$\label{eq:fem-frame-vertex} \vec{e}^{j}_{h,t} = \vec\omega_h \times \vec{e}^{j}_{h}, \quad \mbox{ for } j=0,1,2,$$ where $\vec\omega_h = \tilde{\vec\tau}_h \times \tilde{\vec\tau}_{h,t} + m_h \tilde{\vec\tau}_h \in V_h^3$. This implies that the following vertex-wise identities hold: $$\label{eq:e-ttauh} \vec{e}^{1}_{h} \cdot \vec{e}^{2}_{h} = \vec{e}^{1}_{h} \cdot \tilde{\vec{\tau}}_h = \vec{e}^{2}_{h} \cdot \tilde{\vec{\tau}}_h = 0 \qquad \mbox{ and } \qquad | \tilde{\vec{\tau}}_h | = | \vec{e}^{1}_{h} | = | \vec{e}^{2}_{h} | = 1,$$ so long as the initial values satisfy corresponding versions of these identities. In other words, $(\tilde{\vec{\tau}}_h, \vec{e}^{1}_{h}, \vec{e}^{2}_{h})$ forms an orthonormal frame at each vertex. Next, we note that we can write \[eq:fem-w\] as: $$\vec{\kappa}_{h}( u_i, \cdot ) = \frac{ \vec{\tau}_h( u_i^+, \cdot ) - \vec{\tau}_h( u_i^-, \cdot ) }{ \frac{1}{2} ( \abs{ \vec{x}_{h,u}( u_i^+, \cdot ) } + \abs{ \vec{x}_{h,u}( u_i^-, \cdot ) } ) } \qquad \mbox{ for } i = 2, \ldots, N-1.$$ Thus, using the fact that $\abs{ \vec{\tau}_h } = 1$, we can infer that $\vec{\kappa}_h$ and $\tilde{\vec\tau}_h$ are orthogonal at the vertices.
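This orthogonality can also be checked numerically. The sketch below, assuming a uniform parameter grid and using illustrative names of our own, evaluates the vertex curvature formula above and verifies $\vec{\kappa}_h \cdot \tilde{\vec{\tau}}_h = 0$ at the interior vertices:

```python
import numpy as np

def vertex_curvature(x, h):
    """Discrete vector curvature kappa_h at interior vertices of a polygonal
    curve with uniform parameter spacing h.

    x : (N, 3) array of vertex positions x_h(u_i)."""
    d = np.diff(x, axis=0) / h             # x_{h,u} per element
    speed = np.linalg.norm(d, axis=1)      # |x_{h,u}| per element
    tau = d / speed[:, None]               # unit tangent per element
    # kappa_h(u_i) = (tau^+ - tau^-) / ( (|x_u^+| + |x_u^-|) / 2 )
    kappa = (tau[1:] - tau[:-1]) / (0.5 * (speed[1:] + speed[:-1]))[:, None]
    return kappa, tau

# a non-planar sample curve
u = np.linspace(0.0, np.pi, 7)
x = np.stack([np.cos(u), np.sin(u), 0.1 * u], axis=1)
kappa, tau = vertex_curvature(x, h=u[1] - u[0])
# vertex-averaged unit tangent at interior vertices
tilde = tau[:-1] + tau[1:]
tilde /= np.linalg.norm(tilde, axis=1)[:, None]
# kappa . tilde_tau vanishes identically since |tau| = 1 element-wise
assert np.allclose(np.einsum('ij,ij->i', kappa, tilde), 0.0)
```

The orthogonality holds to machine precision for any polygonal curve, since $(\vec\tau^+ - \vec\tau^-)\cdot(\vec\tau^+ + \vec\tau^-) = |\vec\tau^+|^2 - |\vec\tau^-|^2 = 0$.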
Indeed, for all $i= 2, \ldots, N-1$, we have $$\begin{aligned} \vec{\kappa}_{h}( u_i, \cdot ) \cdot \tilde{\vec\tau}_h( u_i, \cdot ) & = \frac{ \vec{\tau}_h( u_i^+, \cdot ) - \vec{\tau}_h( u_i^-, \cdot ) }{ \frac{1}{2} ( \abs{ \vec{x}_{h,u}( u_i^+, \cdot ) } + \abs{ \vec{x}_{h,u}( u_i^-, \cdot ) } ) } \cdot \frac{ \vec{\tau}_{h}( u_i^-, \cdot) + \vec{\tau}_h( u_i^+, \cdot ) }{| \vec{\tau}_{h}( u_i^-, \cdot) + \vec{\tau}_h( u_i^+, \cdot ) |} \\ & = \frac{ \vec{\tau}_h( u_i^+, \cdot ) \cdot \vec{\tau}_h( u_i^+, \cdot ) - \vec{\tau}_h( u_i^-, \cdot) \cdot \vec{\tau}_h( u_i^-, \cdot ) }{ { \frac{1}{2} ( \abs{ \vec{x}_{h,u}( u_i^+, \cdot ) } + \abs{ \vec{x}_{h,u}( u_i^-, \cdot ) } ) }{| \vec{\tau}_{h}( u_i^-, \cdot) + \vec{\tau}_h( u_i^+, \cdot ) |} } \\ & = 0. \end{aligned}$$ Further, at $i=1$ and $i=N$, the boundary conditions directly give us that $\vec{\kappa}_h$ and $\tilde{\vec\tau}_h$ are orthogonal. This implies that we can decompose $\vec{\kappa}_h$, at the vertices, into fields $\alpha_h, \beta_h \in V_h$ given by $$\label{eq:fem-w-decomp} \vec{\kappa}_h( u_i, \cdot ) = \alpha_h( u_i, \cdot ) \vec{e}^{1}_{h}( u_i, \cdot ) + \beta_h( u_i, \cdot ) \vec{e}^{2}_{h}( u_i, \cdot ) \qquad \mbox{ for } i = 1, \ldots, N.$$ Similarly it can be shown that $\vec{y}_h( u_i, \cdot ) \cdot \tilde{\vec\tau}_h( u_i, \cdot ) = 0$ for $i = 1, \ldots, N$. \[lem:stability\] If $\alpha^0, \beta^0, \gamma^0$ are independent of time, any solution to the above problem satisfies: $$\begin{gathered} \int_0^1 ( \K \vec{x}_{h,t} \cdot \vec{x}_{h,t} + K^\rot m_h^2 ) | \vec{x}_{h,u} | \\ + \frac{1}{2} \ddt \int_0^1 \bigl( A \bigl( ( \alpha_h - \alpha^0 )^2 + ( \beta_h - \beta^0 )^2 \bigr)_h + C ( \gamma_h - \gamma^0 )^2 \bigr) | \vec{x}_{h,u} | \\ + \int_0^1 \bigl( ( B ( \alpha_{h,t}^2 + \beta_{h,t}^2 ) )_h + D \gamma_{h,t}^2 \bigr) \abs{ \vec{x}_{h,u}} = 0. \end{gathered}$$ First we see that \[eq:fem-p\] implies that $\abs{ \vec{x}_{h,u} }_t = 0$.
We test \[eq:fem-x\] with $\vec{x}_{h,t}$, \[eq:fem-gamma\] with $-m_h$ and \[eq:fem-p\] with $p_h$. Adding the resulting equations gives $$\begin{gathered} \label{eq:stab-1} \int_0^1 ( \K \vec{x}_{h,t} \cdot \vec{x}_{h,t} + K^\rot m_h^2 ) \abs{ \vec{x}_{h,u} } - \int_0^1 \big( ( \id - \vec\tau_h \otimes \vec\tau_h ) \frac{\vec{y}_{h,u}}{ \abs{\vec{x}_{h,u}} } + z_h \vec{\tau}_h \times \vec{\kappa}_h \big) \cdot \vec{x}_{h,tu} \\ + \int_0^1 z_h m_{h,u} - \int_0^1 \big( \vec{y}_{h} \cdot ( \tilde{\vec\tau}_h \times \vec{\kappa}_h ) m_h \big)_h \abs{ \vec{x}_{h,u} } = 0. \end{gathered}$$ Next, we take a time derivative of \[eq:fem-w\] and test the result with $\vec{y}_h$: $$\int_0^1 ( \vec{\kappa}_{h,t} \cdot \vec{y}_h )_h \abs{ \vec{x}_{h,u} } + \int_0^1 ( \id - \vec\tau_h \otimes \vec\tau_h ) \frac{ \vec{y}_{h,u} }{ \abs{ \vec{x}_{h,u} } } \cdot \vec{x}_{h,tu} = 0.$$ From the result we subtract \[eq:fem-y\] tested with $\vec{\kappa}_{h,t}$ to see $$\begin{gathered} \label{eq:stab-2} \int_0^1 ( \id - \vec\tau_h \otimes \vec\tau_h ) \frac{ \vec{y}_{h,u} }{ \abs{ \vec{x}_{h,u} } } \cdot \vec{x}_{h,tu} + \int_0^1 \Big( A ( \vec{\kappa}_h - \alpha^0 \vec{e}^{1}_{h} - \beta^0 \vec{e}^{2}_{h} ) \cdot \vec{\kappa}_{h,t} \Big)_h \abs{ \vec{x}_{h,u} } \\ + \int_0^1 \Big( B \big( ( \id - \tilde{\vec\tau}_h \otimes \tilde{\vec\tau}_h ) \vec{\kappa}_{h,t} - m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h \big) \cdot \vec{\kappa}_{h,t} \Big)_h \abs{ \vec{x}_{h,u} } = 0.
\end{gathered}$$ Using \[eq:fem-w-decomp\], the frame equations \[eq:fem-frame-vertex\] and some simple vector identities gives $$\label{eq:stab-3a} \begin{aligned} & \int_0^1 \Big( A ( \vec{\kappa}_h - \alpha^0 \vec{e}^{1}_{h} - \beta^0 \vec{e}^{2}_{h} ) \cdot \vec{\kappa}_{h,t} \Big)_h \abs{ \vec{x}_{h,u} } \\ & = \frac{1}{2} \frac{d}{dt} \int_0^1 \Bigl( A \bigl( ( \alpha_h - \alpha^0 )^2 + ( \beta_h - \beta^0 )^2 \bigr) \Bigr)_h | \vec{x}_{h,u} | \\ & \qquad + \int_0^1 \bigl( A ( \vec{\kappa}_{h} - \alpha^0 \vec{e}^{1}_{h} - \beta^0 \vec{e}^{2}_{h} ) \cdot \vec{\omega}_h \times \vec{\kappa}_h \bigr)_h \abs{ \vec{x}_{h,u} }. \end{aligned}$$ We can further reduce the right hand side using the definition of $\vec{\omega}_h$ and the fact that $\vec{\kappa}_h \cdot \tilde{\vec\tau}_h = 0$: $$\begin{gathered} \label{eq:stab-3} \int_0^1 \Big( A ( \vec{\kappa}_h - \alpha^0 \vec{e}^{1}_{h} - \beta^0 \vec{e}^{2}_{h} ) \cdot \vec\omega_h \times \vec{\kappa}_h \Big)_h \abs{ \vec{x}_{h,u} } \\ = \int_0^1 \Big( A ( \vec{\kappa}_h - \alpha^0 \vec{e}^{1}_{h} - \beta^0 \vec{e}^{2}_{h} ) \cdot ( \tilde{\vec\tau}_h \times \tilde{\vec\tau}_{h,t} + m_h \tilde{\vec\tau}_h ) \times \vec{\kappa}_h \Big)_h \abs{ \vec{x}_{h,u} }\\ = \int_0^1 \Big( {-} A \bigl( ( \vec{\kappa}_h - \alpha^0 \vec{e}^{1}_{h} - \beta^0 \vec{e}^{2}_{h} ) \cdot \tilde{\vec\tau}_h \bigr) \bigl( \tilde{\vec\tau}_{h,t} \cdot \vec{\kappa}_h \bigr) + A ( \vec{\kappa}_h - \alpha^0 \vec{e}^{1}_{h} - \beta^0 \vec{e}^{2}_{h} ) \cdot ( m_h \tilde{\vec\tau}_h ) \times \vec{\kappa}_h \Big)_h \abs{ \vec{x}_{h,u} }\\ = \int_0^1 \Big( A ( \vec{\kappa}_h - \alpha^0 \vec{e}^{1}_{h} - \beta^0 \vec{e}^{2}_{h} ) \cdot ( m_h \tilde{\vec\tau}_h ) \times \vec{\kappa}_h\Big)_h \abs{ \vec{x}_{h,u} }.
\end{gathered}$$ Similarly, we see that $$\begin{gathered} \label{eq:stab-4} \int_0^1 \bigl( B \bigl( ( \id - \tilde{\vec\tau}_h \otimes \tilde{\vec\tau}_h ) \vec{\kappa}_{h,t} - m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h \bigr) \cdot \vec{\kappa}_{h,t} \bigr)_h \abs{ \vec{x}_{h,u} } \\ = \int_0^1 \bigl( B \abs{ ( \id - \tilde{\vec\tau}_h \otimes \tilde{\vec\tau}_h ) \vec{\kappa}_{h,t} - m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h }^2 \bigr)_h \abs{ \vec{x}_{h,u} } \\ \qquad + \int_0^1 \bigl( B \bigl( ( \id - \tilde{\vec\tau}_h \otimes \tilde{\vec\tau}_h ) \vec{\kappa}_{h,t} - m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h \bigr) \cdot ( m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h ) \bigr)_h \abs{ \vec{x}_{h,u} } \\ = \int_0^1 \bigl( B ( \alpha_{h,t}^2 + \beta_{h,t}^2 ) \bigr)_h \abs{ \vec{x}_{h,u} } \\ \qquad + \int_0^1 \bigl( B \bigl( ( \id - \tilde{\vec\tau}_h \otimes \tilde{\vec\tau}_h ) \vec{\kappa}_{h,t} - m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h \bigr) \cdot ( m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h ) \bigr)_h \abs{ \vec{x}_{h,u} }. \end{gathered}$$ Combining \[eq:stab-3a,eq:stab-3,eq:stab-4\] with \[eq:stab-2\] gives $$\begin{aligned} & \frac{1}{2} \ddt \int_0^1 \Bigl( A \bigl( ( \alpha_h - \alpha^0 )^2 + ( \beta_h - \beta^0 )^2 \bigr) \Bigr)_h | \vec{x}_{h,u} | + \int_0^1 \bigl( B ( \alpha_{h,t}^2 + \beta_{h,t}^2 ) \bigr)_h \abs{ \vec{x}_{h,u} } \\ & + \int_0^1 ( \id - \vec\tau_h \otimes \vec\tau_h ) \frac{ \vec{y}_{h,u} }{ \abs{ \vec{x}_{h,u} } } \cdot \vec{x}_{h,tu} \\ & + \int_0^1 \bigl( A ( \vec{\kappa}_{h} - \alpha^0 \vec{e}^{1}_{h} - \beta^0 \vec{e}^{2}_{h} ) \cdot (m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h) \bigr)_h \abs{ \vec{x}_{h,u} } \\ & + \int_0^1 \bigl( B \bigl( ( \id - \tilde{\vec\tau}_h \otimes \tilde{\vec\tau}_h ) \vec{\kappa}_{h,t} - m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h \bigr) \cdot ( m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h ) \bigr)_h \abs{ \vec{x}_{h,u} } = 0.
\end{aligned}$$ We identify the last two terms on the left-hand side with terms in \[eq:fem-y\] tested with the test function $\vec\psi_h$ given by $$\vec\psi_h( u_i ) = \begin{cases} 0 & \mbox{ for } i = 1, N, \\ m_h(u_i) \tilde{\vec\tau}_h(u_i) \times \vec{\kappa}_h(u_i) & \mbox{ for } 2 \le i \le N-1. \end{cases}$$ Noting that $\vec{y}_h \cdot \vec\psi_h = \vec{y}_h \cdot ( m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h )$, we infer that: $$\begin{gathered} \label{eq:stab-5} \frac{1}{2} \ddt \int_0^1 \Bigl( A \bigl( ( \alpha_h - \alpha^0 )^2 + ( \beta_h - \beta^0 )^2 \bigr) \Bigr)_h | \vec{x}_{h,u} | + \int_0^1 \bigl( B ( \alpha_{h,t}^2 + \beta_{h,t}^2 ) \bigr)_h \abs{ \vec{x}_{h,u} } \\ + \int_0^1 ( \id - \vec\tau_h \otimes \vec\tau_h ) \frac{ \vec{y}_{h,u} }{ \abs{ \vec{x}_{h,u} } } \cdot \vec{x}_{h,tu} + \int_0^1 \bigl( \vec{y}_h \cdot (m_h \tilde{\vec\tau}_h \times \vec{\kappa}_h) \bigr)_h \abs{ \vec{x}_{h,u} } = 0. \end{gathered}$$ We sum the result of testing \[eq:fem-z\] with $-\gamma_{h,t}$ and \[eq:fem-m\] with $z_h$ and rearrange: $$\begin{gathered} \label{eq:stab-6} \frac{1}{2} \ddt \int_0^1 C ( \gamma_h - \gamma^0 )^2 \abs{ \vec{x}_{h,u} } + \int_0^1 D \gamma_{h,t}^2 \abs{ \vec{x}_{h,u} } \\ - \int_0^1 m_{h,u} z_h + \int_0^1 ( \vec\tau_h \times \vec{\kappa}_{h} ) \cdot \vec{x}_{h,tu} z_h = 0. \end{gathered}$$ Adding \[eq:stab-1,eq:stab-5,eq:stab-6\] gives the desired result. Fully discrete problem ---------------------- To discretize in time we use a uniform partition of the time interval $[0,T]$ into time steps $0 = t_0 < t_1 < \ldots < t_M = T$ where $t_n = n \Delta t$ for $n=0,\ldots,M$. We denote discrete variables at a time step $t_n$ with a superscript $n$; for example, the frame vectors will be denoted by $\vec{e}^{1,n}_h$. Our approach is a first order semi-implicit scheme which results in a linear problem to solve at each time step.
In the numerical experiments, we demonstrate that, by choosing to take certain terms implicitly, we recover the semi-discrete stability result. For a variable $\eta$ defined at each time step $n = 0, \ldots, M$, we denote the backward difference $\bar\partial \eta^n := ( \eta^n - \eta^{n-1} ) / \Delta t$ for $n \ge 1$. We define $\vec{\kappa}_{b}^n$ by $$\vec{\kappa}_{b}^n := \alpha^0( \cdot, t^n ) \vec{e}^{1,n-1}_{h} + \beta^0( \cdot, t^n ) \vec{e}^{2,n-1}_{h}.$$ As well as choosing whether to take terms implicitly or explicitly, we also integrate the constraint equation \[eq:fem-p\] forwards in time and write the frame update equation as an algebraic relation which preserves the nature of the angular velocity vector $\vec{\omega}_h$. Given preferred curvatures $\alpha^0, \beta^0$, a preferred twist $\gamma^0$, and initial conditions for $\vec{x}_h^0, \gamma_h^0$ (which imply compatible initial conditions for $\vec{\kappa}_h^0$, $\vec{e}^{1,0}_h$, $\vec{e}^{2,0}_h$ up to a fixed rotation), for $n = 1, \ldots, M$ find $\vec{x}_h^n \in V_h^3, \vec{y}_h^n \in V_{h,0}^3, \vec{\kappa}_h^n \in V_{h,0}^3 + \vec{\kappa}_{b}^n$, $m_h^n \in V_h, z_h^n, \gamma_h^n, p_h^n \in Q_h$, $\vec{e}^{1,n}_{h}, \vec{e}^{2,n}_{h} \in V_h^3$ such that \[eq:discrete\] $$\begin{aligned} \label{eq:discrete-x} \int_0^1 \K \bar\partial \vec{x}_{h}^n \cdot \vec{\phi}_h | \vec{x}_{h,u}^{n-1} | - \int_0^1 p_h^n \vec\tau_h^{n-1} \cdot \vec{\phi}_{h,u} \qquad\qquad\qquad\qquad\qquad\qquad \\ \nonumber - \int_0^1 \bigl( ( \id - \vec\tau_h^{n-1} \otimes \vec\tau_h^{n-1} ) \frac{1}{| \vec{x}_{h,u}^{n-1} |} \vec{y}_{h,u}^n + z_h^n \vec\tau_h^{n-1} \times \vec{\kappa}_h^{n-1} \bigr) \cdot \vec{\phi}_{h,u} & = 0 \\ \label{eq:discrete-y} \int_0^1 \bigl( \vec{y}_h^{n} - A ( \vec{\kappa}_h^n - \alpha^0( \cdot, t^n ) \vec{e}^{1,n-1}_h - \beta^0( \cdot, t^n ) \vec{e}^{2,n-1}_h ) \qquad\qquad\qquad\qquad \\ \nonumber - B \bigl( ( \id - \tilde{\vec\tau}_h^{n-1} \otimes \tilde{\vec\tau}_h^{n-1} ) \bar\partial \vec{\kappa}_{h}^n -
m_h^{n-1} \tilde{\vec\tau}_h^{n-1} \times \vec{\kappa}_h^n \bigr)_h \bigr) \cdot \vec\psi_h | \vec{x}_{h,u}^{n-1} | & = 0 \\ \label{eq:discrete-w} \int_0^1 \vec{\kappa}_h^{n} \cdot \vec\psi_h | \vec{x}_{h,u}^{n-1} | + \int_0^1 \frac{1}{ | \vec{x}_{h,u}^{n-1} | } \vec{x}_{h,u}^n \cdot \vec\psi_{h,u} & = 0\end{aligned}$$ for all $\vec\phi_h \in V_h^3$, $\vec\psi_h \in V_{h,0}^3$, $$\begin{aligned} \label{eq:discrete-gamma} \int_0^1 - K^\rot m_h^n v_h | \vec{x}_{h,u}^{n-1} | - \int_0^1 z_h^n v_{h,u} + \int_0^1 \vec{y}_h^{n-1} \cdot ( \tilde{\vec\tau}_h^{n-1} \times \vec{\kappa}_h^{n-1} ) v_h | \vec{x}_{h,u}^{n-1} | & = 0, \\ \label{eq:discrete-z} \int_0^1 ( z_h^n - C ( \gamma_h^n - \gamma^0( \cdot, t^n) ) - D \bar\partial \gamma_{h}^n ) q_h | \vec{x}_{h,u}^{n-1} | & = 0, \\ \label{eq:discrete-m} \int_0^1 \bar\partial \gamma_{h}^n q_h | \vec{x}_{h,u}^{n-1} | - \int_0^1 m_{h,u}^n q_h + \int_0^1 ( \vec\tau_h^{n-1} \times \vec{\kappa}_h^{n-1} ) \cdot \bar\partial \vec{x}_{h,u}^n q_h & = 0\end{aligned}$$ for all $q_h \in Q_h$ and $v_h \in V_h$, $$\begin{aligned} \label{eq:discrete-p} \int_0^1 q_h \vec\tau_h^{n-1} \cdot \vec{x}_{h,u}^n & = \int_0^1 | \vec{x}_{h,u}^0 | q_h,\end{aligned}$$ for all $q_h \in Q_h$.
Using the abbreviations: $$\begin{aligned} \vec{k}_i^n & = \tilde{\vec\tau}_h^{n-1}( u_i ) \times \tilde{\vec\tau}_h^n( u_i ), &\vec{l}_i^n & = \tilde{\vec{\tau}}^n_h( u_i ), & \varphi_i^n & = \Delta t \, m_h^n( u_i ),\end{aligned}$$ we apply the Rodrigues formula twice: $$\begin{aligned} \label{eq:discrete-e1-R1} \tilde{\vec{e}}^{j,n}_{h}( u_i ) & = \vec{e}^{j,n-1}_{h}( u_i ) ( \tilde{\vec\tau}_h^{n-1}( u_i ) \cdot \tilde{\vec\tau}_h^{n}( u_i )) + \vec{k}_i^n \times \vec{e}^{j,n-1}_{h}( u_i ) \\ \nonumber & \qquad + \bigl( \vec{e}^{j,n-1}_{h}( u_i ) \cdot \vec{k}_i^n \bigr) \vec{k}_i^n \frac{1}{1 + \tilde{\vec\tau}_h^{n-1}( u_i ) \cdot \tilde{\vec\tau}_h^{n}( u_i )} && j=1,2 \\ \label{eq:discrete-e1-R2} \vec{e}^{j,n}_{h}( u_i ) & = \tilde{\vec{e}}^{j,n}_{h}( u_i ) \cos( \varphi_i^n ) + \vec{l}_i^n \times \tilde{\vec{e}}^{j,n}_{h}( u_i ) \sin( \varphi_i^n ) \\ \nonumber & \qquad + ( \tilde{\vec{e}}^{j,n}_{h}( u_i ) \cdot \vec{l}_i^n )\vec{l}_i^n ( 1 - \cos( \varphi_i^n ) ) && j=1,2.\end{aligned}$$ The scheme results in a linear system of equations at each time step, followed by an algebraic update formula for the frame. The linear system can be solved using a direct sparse solver; in this work, the numerical results are computed using [@umfpack]. In addition to the usual time discretization, we have chosen to integrate the constraint equation forwards in time. This gives us more control over the length element $|\vec{x}_{h,u}^n|$ as shown in the following Lemma. \[lem:length\] If there exists a solution such that $\abs{ \vec{\tau}_h^{n} - \vec{\tau}_h^{n-1} }^2 < 2$, then $$\begin{aligned} \abs{ \vec{x}_{h,u}^0 } \le \abs{ \vec{x}_{h,u}^{n} } = \frac{ \abs{ \vec{x}_{h,u}^0} }{ 1 - \frac{1}{2} \abs{ \vec{\tau}_h^{n} - \vec{\tau}_h^{n-1} }^2 }.
\end{aligned}$$ Testing equation \[eq:discrete-p\] with $q_h = \chi_{[u_i, u_{i+1}]}$, the characteristic function of the interval $[u_i,u_{i+1}]$, gives the element-wise identity $$\begin{aligned} \vec{\tau}_h^{n-1} \cdot \vec{x}_{h,u}^{n} = \abs{ \vec{x}_{h,u}^0 }.\end{aligned}$$ Then we have $$\begin{aligned} \abs{ \vec{x}_{h,u}^n } = \abs{ \vec{x}_{h,u}^n } ( 1 - \vec{\tau}^{n-1}_h \cdot \vec\tau^n_h ) + \abs{ \vec{x}_{h,u}^0 } = \frac{1}{2} \abs{ \vec{x}_{h,u}^n } \abs{ \vec{\tau}_h^{n} - \vec{\tau}_{h}^{n-1} }^2 + \abs{ \vec{x}_{h,u}^0 }.\end{aligned}$$ Since $\frac{1}{2} \abs{ \vec{x}_{h,u}^n } \abs{ \vec{\tau}_h^{n} - \vec{\tau}_{h}^{n-1} }^2\ge 0$, we have $\abs{\vec{x}_{h,u}^n} \ge \abs{ \vec{x}_{h,u}^0 }$. Furthermore, if it holds that $\abs{ \vec{\tau}_h^{n} - \vec{\tau}_h^{n-1} }^2 < 2$, this equation can be rearranged to give the desired result. The first rotation, \[eq:discrete-e1-R1\], maps $\tilde{\vec{\tau}}_h^{n-1}$ to $\tilde{\vec\tau}_h^n$, and the second rotation, \[eq:discrete-e1-R2\], rotates the frame about the new $\tilde{\vec\tau}_h^n$ (leaving $\tilde{\vec\tau}_h^n$ unaffected). Since we apply the same rotations to the two frame vectors, and these rotations map $\tilde{\vec\tau}_h^{n-1}$ to $\tilde{\vec\tau}_h^n$, this update procedure preserves the orthogonality of the frame vectors at each vertex. In practical computations we will see an accumulation of floating point errors (see \[fig:model-3d-length-frame-mismatch\], for example). If the errors become too large we may renormalise the frame. Results {#sec:results} ======= We provide three test cases for our numerical scheme. In the first we relax a straight rod to one with prescribed curvatures and twist, and in the other two we demonstrate the applicability of the method to nematode locomotion. Relaxation test --------------- We take as initial configuration of the rod a unit length straight midline curve and constant frame.
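The two-stage frame update \[eq:discrete-e1-R1,eq:discrete-e1-R2\] at a single vertex can be sketched as follows (illustrative code, not the authors' implementation); the assertions confirm that orthonormality is preserved up to rounding:

```python
import numpy as np

def update_frame(e, tau_old, tau_new, phi):
    """Two-stage Rodrigues update of one frame vector e at a vertex:
    (R1) the minimal rotation taking tau_old to tau_new;
    (R2) rotation by angle phi about the new tangent tau_new."""
    c = np.dot(tau_old, tau_new)         # cosine of the rotation angle
    k = np.cross(tau_old, tau_new)       # axis scaled by sine of the angle
    # R1: e * cos + k x e + (e . k) k / (1 + cos)
    e1 = e * c + np.cross(k, e) + np.dot(e, k) * k / (1.0 + c)
    # R2: standard Rodrigues rotation about the unit axis tau_new
    l = tau_new
    return (e1 * np.cos(phi) + np.cross(l, e1) * np.sin(phi)
            + np.dot(e1, l) * l * (1.0 - np.cos(phi)))

# an orthonormal frame at the old tangent, updated to a new tangent
tau0 = np.array([0.0, 0.0, 1.0])
e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
tau1 = np.array([0.1, -0.2, 1.0]); tau1 /= np.linalg.norm(tau1)
f1 = update_frame(e1, tau0, tau1, 0.3)
f2 = update_frame(e2, tau0, tau1, 0.3)
assert abs(np.dot(f1, f2)) < 1e-12 and abs(np.dot(f1, tau1)) < 1e-12
assert abs(np.linalg.norm(f1) - 1.0) < 1e-12
```

Since both stages are rotations, they preserve inner products exactly, which is the content of the discussion above; only floating point rounding accumulates.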
We then simulate to $T=25$ with $$\alpha^0 = 2 \sin( 3 \pi u / 2 ), \quad \beta^0 = 3 \cos( 3 \pi u / 2 ), \quad \gamma^0 = 5 \cos( 2 \pi u ).$$ We take material parameters all equal to 1: $L = K^\rot = A = B = C = D = 1$, $\K = \id$. An example of the final configuration is shown in \[fig:relaxation-configuration\]. We will use this example to show how the stability result (\[lem:stability\]) translates to the discrete case. We also explore the errors in the length element (\[lem:length\]) and the failure to preserve exact orthogonality of the frame. To show the properties of the scheme, we first simulate with $\Delta t = 1$ and $N=16$, and repeat with the time step $\Delta t$ reduced by a factor of four and $N$ doubled: we simulate with $\Delta t = 4^{-l}$ and $N=2^{4+l}$ for refinement levels $l = 0,1,\ldots,5$. ![Configuration of the relaxation test with $\Delta t = 10^{-3}$ and $N=128$ at time $T=25$ showing the midline and a sample of frame vectors. The colouring is the same as \[fig:geometry\] except that $\vec{e}^{0}_{h}$ is not shown. A video of this simulation is presented in \[sec:videos\].[]{data-label="fig:relaxation-configuration"}](figs/relaxation-configuration.png){width="50.00000%"} To investigate the fully discrete stability of the scheme, the elastic energy $\mathcal{E}(t^n)$ is shown at each time step $t^n$ in \[fig:relaxation-stability\]. We define $\mathcal{E}(t^n)$ by $$\begin{aligned} \mathcal{E}(t^n) := \int_0^1 \Bigl( A \abs{ \vec{\kappa}_h^n - \alpha^0 \vec{e}^{1,n}_{h} - \beta^0 \vec{e}^{2,n}_{h} }^2 + C( \gamma_h^n - \gamma^0 )^2 \Bigr)_h \abs{ \vec{x}_{h,u}^n }.\end{aligned}$$ We see that the energy decreases monotonically across all our simulation results. ![Elastic energy $\mathcal{E}(t^n)$ over time for varying discretization parameters for relaxation test.[]{data-label="fig:relaxation-stability"}](plots/relaxation-stability.pdf) Next, we look at the error in the length element.
We follow the refinement procedure detailed above and show the results in \[tab:relaxation-length\] and \[fig:relaxation-length-frame-mismatch\]. The error shown is $$\mathcal{F}_1(t^n) := \abs{ \int_0^1 \abs{ \vec{x}_{h,u}^n } - L }.$$ The experimental order of convergence ($eoc(\Delta t)$) is $$eoc( \Delta t ) = \log\Bigl( \max_n \mathcal{F}_1(t^n)_{l} / \max_n \mathcal{F}_1(t^n)_{l-1} \Bigr) / \log\Bigl( \Delta t_l / \Delta t_{l-1} \Bigr).$$ We observe that the errors decrease in time after an initial increase from the first time step, at which the length is exact. This matches the analysis of \[lem:length\]: the error only depends on the change in tangent from one time step to the next. Since the scheme converges to a stable solution, the change in the tangent vector reduces in time, which results in the reduction of the error. Moreover, we see that the error decreases at second order in the time step, which is an order higher than the expected accuracy of the scheme overall. ![Error of local length constraint $\mathcal{F}_1(t^n)$ and frame orthogonality constraint $\mathcal{F}_2(t^n)$ over time for varying discretization parameters for relaxation test.[]{data-label="fig:relaxation-length-frame-mismatch"}](plots/relaxation-length-frame-mismatch.pdf) Finally, we test for errors in the frame orthogonality conditions, with results shown in \[tab:relaxation-frame-mismatch\] and \[fig:relaxation-length-frame-mismatch\]. Here, we look at the $L^2$-norm of the errors in the orthogonality conditions: $$\mathcal{F}_2(t^n) := \left( \sum_{0 \le j_1 \le j_2 \le 2} \int_0^1 \abs{ \vec{e}^{j_1,n}_h \cdot \vec{e}^{j_2,n}_h - \delta_{j_1, j_2} }^2 \abs{ \vec{x}_{h,u}^n } \right)^{{1}/{2}}.$$ We observe that the errors are very small across all simulations, although the error does increase as we refine in space and time. Furthermore, we see that the errors increase over time, which we attribute to the accumulation of rounding errors.
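The experimental order of convergence can be computed directly from the maximum errors at successive refinement levels; a minimal sketch (names are illustrative):

```python
import numpy as np

def eoc(errors, dts):
    """Experimental orders of convergence between successive refinement
    levels: eoc_l = log(E_l / E_{l-1}) / log(dt_l / dt_{l-1})."""
    e, dt = np.asarray(errors), np.asarray(dts)
    return np.log(e[1:] / e[:-1]) / np.log(dt[1:] / dt[:-1])

# a second order method: halving dt reduces the error by a factor of 4
print(eoc([1e-2, 2.5e-3, 6.25e-4], [0.1, 0.05, 0.025]))  # ~ [2., 2.]
```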
The final column in \[tab:relaxation-frame-mismatch\] shows the maximum change in this error over time. This value is close to machine epsilon, which indicates that the increase in errors, both in time and as we refine the time step, is due to an increase in the number of time steps. Application to nematode locomotion in two and three spatial dimensions {#sec:application} ---------------------------------------------------------------------- We augment the method detailed above by changing the linear drag term to a more general resistive force term [@Keller1976]: $$\K = ( \vec\tau \otimes \vec\tau ) + K ( \id - \vec\tau \otimes \vec\tau ),$$ which we approximate in the fully discrete scheme by $$\K = ( \vec\tau_h^{n-1} \otimes \vec{\tau}_h^{n-1} ) + K ( \id - \vec\tau_h^{n-1} \otimes \vec\tau_h^{n-1} ).$$ We demonstrate that we can use the method to simulate *C. elegans* locomotion in two and three dimensions. We set $L=1$ and restrict our considerations to a stiff environment with $K=40$, $K^\rot=1$, which should correspond to a crawling behaviour. We model the *C. elegans* body as an elastic tapered cylinder. We assume that the internal viscous forces do not play a role in the stiff environment [@BerBoyTas09], so we set $B = D = 0$. For material parameters, we take $A = C = 8 ( ( \varepsilon + u ) ( \varepsilon + 1 - u ) )^{3/2} / ( 1 + 2 \varepsilon )^3$, for $\varepsilon > 0$ small, which corresponds to a uniform elasticity across the shell of a tapered body shape. We assume that the frame directions correspond to physically meaningful directions within the worm: $\vec{e}^0$ follows the midline of the body pointing from head to tail, $\vec{e}^1$ points in the ventral-dorsal direction (the usual bending direction when considering two dimensional locomotion), and $\vec{e}^2$ points in the left-right direction. Muscle contractions generate bending in either the $\vec{e}^1$ or $\vec{e}^2$ directions.
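For a given unit tangent, the resistive force tensor above splits the drag into a tangential part with unit coefficient and a transverse part with coefficient $K$; a small sketch (names are illustrative):

```python
import numpy as np

def drag_matrix(tau, K):
    """Resistive-force drag tensor tau⊗tau + K (I - tau⊗tau):
    unit drag along the unit tangent tau, coefficient K transverse to it."""
    P = np.outer(tau, tau)               # projection onto the tangent
    return P + K * (np.eye(3) - P)       # tangential + scaled normal parts

tau = np.array([1.0, 0.0, 0.0])
Kmat = drag_matrix(tau, 40.0)
# tangential velocities are damped with coefficient 1, normal ones with 40
assert np.allclose(Kmat @ tau, tau)
assert np.allclose(Kmat @ np.array([0.0, 1.0, 0.0]), [0.0, 40.0, 0.0])
```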
In the usual two dimensional scenario, *C. elegans* generates bending waves in the dorsal-ventral plane. This will be our first test case: $$\alpha^0( u, t ) = ( 10 u + 8 ( 1-u ) ) \sin\left( {2 \pi u}/{0.65} - 0.6 \pi t \right), \qquad \beta^0( u, t ) = 0, \qquad \gamma^0( u, t ) = 0.$$ We have previously seen [@CohRan17-pp] that this forcing recreates realistic-looking *C. elegans* locomotion postures. We explore here how well our numerical method captures this behaviour. Further results for this test case are shown in \[sec:2d-problem\]. It is assumed that *C. elegans* generates undulations in the dorsal-ventral plane due to symmetries in its neural control. However, these symmetries do not exist in the head and neck regions (see the discussion in [@Bilbao2018]). Therefore, we propose an alternative three dimensional control strategy. Using the notation $\chi_{[0,1/3]}$ for the characteristic function of the interval $[0,1/3]$, we simulate with $$\begin{gathered} \alpha^0( u, t ) = ( 10 u + 8 ( 1-u ) ) \sin\left( {2 \pi u}/{0.65} - 0.6 \pi t \right), \hfill \beta^0( u, t ) = 6 \chi_{[0,1/3]}, \hfill \gamma^0( u, t ) = 0.\end{gathered}$$ To construct an initial condition, we start with an initially straight rod with constant frame, simulate with $\alpha^0( u, 0 ), \beta^0( u, 0 ), \gamma^0( u, 0)$ until $t = 5$, and use the resulting curve as the initial condition for our simulation. This means we have an initial condition where the curvatures and twist match the initial preferred curvatures and twist exactly; however, the frame orthogonality conditions will not hold exactly. We show some characteristic body positions in \[fig:worm-configurations\]. We note that, in the two dimensional case, the third component of the position and the twist are exactly zero (\[fig:worm-2d-kymograms\]), whereas there is some twist in the three dimensional scenario even though the preferred twist is zero (\[fig:worm-3d-kymograms\]).
Further, the three dimensional test case demonstrates a non-planar body position and trajectory (see \[sec:videos\]). We check the errors in the length element and frame orthogonality conditions, with results shown in \[fig:model-2d-length-frame-mismatch,tab:model-2d-length,tab:model-2d-frame-mismatch\] for the two dimensional case and \[fig:model-3d-length-frame-mismatch,tab:model-3d-length,tab:model-3d-frame-mismatch\] for the three dimensional case. We observe similar results to the relaxation case. We see the same second order convergence in the error in the length element, although now this error increases and decreases periodically, following the periodic undulations. The error is higher overall since the midline continues to move throughout the simulation. We see that the frame mismatch is again very small across all simulations. This error is initially higher since the initial conditions are derived from simulations. (i)\ ![Simulations of *C. elegans* locomotion in two and three dimensions.](figs/2d-recovery-configuration.png "fig:"){width="\textwidth"} (ii)\ ![Simulations of *C. elegans* locomotion in two and three dimensions.](figs/3d-head-twitch-configuration.png "fig:"){width="\textwidth"} ![Simulations of *C. elegans* locomotion in two and three dimensions.](plots/model-2d-kymograms.pdf){width="90.00000%"} ![Simulations of *C.
elegans* locomotion in two and three dimensions.](plots/model-3d-kymograms.pdf){width="90.00000%"} ![Error of local length constraint $\mathcal{F}_1(t^n)$ and frame orthogonality constraint $\mathcal{F}_2(t^n)$ over time for varying discretization parameters for the two dimensional test case.[]{data-label="fig:model-2d-length-frame-mismatch"}](plots/model-2d-length-frame-mismatch.pdf) ![Error of local length constraint $\mathcal{F}_1(t^n)$ and frame orthogonality constraint $\mathcal{F}_2(t^n)$ over time for varying discretization parameters for the three dimensional test case.[]{data-label="fig:model-3d-length-frame-mismatch"}](plots/model-3d-length-frame-mismatch.pdf) Acknowledgements {#acknowledgements .unnumbered} ================ The author would like to thank Netta Cohen and Felix Salfelder for discussions which have improved the writing of the manuscript and the computer implementation of the method.
--- abstract: 'In this paper, we propose and compare two spectral angle based approaches for spatial-spectral classification. Our methods use the spectral angle to generate unary energies in a grid-structured Markov random field defined over the pixel labels of a hyperspectral image. The first approach is to use the exponential spectral angle mapper (ESAM) kernel/covariance function, a spectral angle based function, with the support vector machine and the Gaussian process classifier. The second approach is to directly use the minimum spectral angle between the test pixel and the training pixels as the unary energy. We compare the proposed methods with the state-of-the-art Markov random field methods that use support vector machines and Gaussian processes with the squared exponential kernel/covariance function. In our experiments with two datasets, using the minimum spectral angle as the unary energy produces results that are better than or comparable to those of the existing methods, at a smaller running time.' address: | Chester F. Carlson Center for Imaging Science^1^, Dept. of Electrical Engineering^2^\ Rochester Institute of Technology, Rochester, NY\ ubg9540@rit.edu bibliography: - 'whispers2.bib' title: 'Spectral Angle Based Unary Energy Functions for Spatial-Spectral Hyperspectral Classification using Markov Random Fields' --- Hyperspectral classification, Spatial-Spectral classification, Spectral Angle Mapper, Markov Random Fields, Support Vector Machines, Gaussian Processes Introduction {#sec:intro} ============ Hyperspectral classification is the process of identifying the material present under each pixel in a hyperspectral image. This is possible because the fraction of incident light reflected by a material at different wavelengths (the spectrum), captured at each pixel of a hyperspectral image, depends on the chemical structure of the material. Statistical methods have been successful in predicting the material class from the spectrum [@lu2007].
Traditionally, pixel-wise classifiers were trained to predict the material under a pixel using only the spectrum captured at that pixel. However, since the materials in a scene are typically distributed in homogeneous regions and the presence of one material can influence the likelihood of another material being present in its vicinity, it has been seen that the classification performance can be significantly improved by utilizing the spatial information along with the spectral information [@fauvel2013]. There have been essentially two approaches to building spatial-spectral hyperspectral classifiers. One is to use spatial-spectral features [@benediktsson2005; @chen2015], and the other is to use Markov random fields [@tarabalka2010; @liao2015]. In this paper, we explore the use of Markov random fields for spatial-spectral classification. Currently, the common classifiers used with Markov random fields are logistic regression [@li2012], probabilistic support vector machines [@tarabalka2010] and Gaussian processes [@liao2015]. In this paper, we experiment with using the exponential spectral angle mapper kernel/covariance function with the support vector machine and the Gaussian process in these methods, and also experiment with combining the spectral angle mapper, possibly the simplest pixel-wise classifier, with the Markov random field. Background {#sec:bg} ========== Markov Random Fields -------------------- Markov random fields (MRFs) can be used to exploit the strong dependencies between the neighboring pixels in a hyperspectral image to improve the classification performance.
MRFs define a joint probability distribution over all the pixel labels in an image as $$p(\mathbf{y}) = \frac{1}{Z} \exp \left( -E\left(\mathbf{y}\right) \right),$$ where $\mathbf{y} = \left[y_1,...,y_N\right]^T$ is a vector containing all the $N$ pixel labels in an image, $E\left(\mathbf{y}\right)$ is the total energy of the pixel labels and $Z$ is a normalization constant, $Z = \sum_{\mathbf{y}} \exp \left( -E\left(\mathbf{y}\right) \right)$. The inference about the pixel labels, $\mathbf{y}$, is performed by maximum a posteriori (MAP) estimation, which is equivalent to minimizing the total energy, $E\left(\mathbf{y}\right)$. The energy minimization can be performed by methods like graph cuts [@boykov2001]. The total energy of the grid-structured Markov random field used for image classification consists of two parts, $$E\left(\mathbf{y}\right) = \sum_{i \in V} E_i\left(y_i\right) + \sum_{(i,j) \in D} E_{ij}\left(y_i,y_j\right),$$ where $E_i\left(y_i\right)$ is the unary energy of the i^th^ pixel with label $y_i$ and $E_{ij}\left(y_i,y_j\right)$ is the pairwise energy between the two neighboring i^th^ and j^th^ pixels having labels $y_i$ and $y_j$ respectively. $V$ is the set of all the pixels and $D$ is the set of all the edges between 4-neighboring pixels in the image. The unary energy incorporates the spectral information, while the pairwise energy incorporates the spatial information. The unary energy at a pixel i when $y_i=c$ can be defined to be the negative logarithm of the probability that the pixel belongs to the class c, $E_i\left(y_i=c\right) = - \ln \left( P\left( y_i=c \mid \mathbf{x}_i \right) \right)$. The MRFs with the logistic regression, the support vector machines and the Gaussian process use this energy function in our experiments. We introduce the unary energy function used with the spectral angle mapper in Section \[sec:sam\_mrf\]. The Potts model was utilized as the pairwise energy function in this paper.
It is defined as $$E_{ij}\left(y_i,y_j\right) = \begin{cases} 0,& \text{if } y_i = y_j\\ \beta, & \text{otherwise}, \end{cases}$$ where $E_{ij}\left(y_i,y_j\right)$ is the energy of the edge i-j, when $y_i$ and $y_j$ are the labels of the i^th^ and the j^th^ pixels respectively. $\beta$ is a parameter that represents the cost of the labels $y_i$ and $y_j$ being different, and its value can be learned using cross-validation. Exponential Spectral Angle Mapper (ESAM) kernel/covariance function {#sec:esam} ------------------------------------------------------------------- The ESAM kernel/covariance function for two inputs $\mathbf{x}_1$ and $\mathbf{x}_2$ is defined as $$k_\text{ESAM}(\mathbf{x}_1,\mathbf{x}_2) = \sigma_0^2 \exp( - \alpha(\mathbf{x}_1,\mathbf{x}_2) / \sigma_1^2 ), \label{eq:ESAM0}$$ where $$\alpha(\mathbf{x}_1,\mathbf{x}_2) = {\cos}^\text{-1} \left( \frac{\mathbf{x}_1 \cdot \mathbf{x}_2 }{\|\mathbf{x}_1\|\,\|\mathbf{x}_2\| } \right), \label{eq:ESAM1}$$ and, $\sigma_0^2$ and $\sigma_1^2$ are the gain and the scale parameters respectively. $\alpha(.,.) $ is the spectral angle mapper. The parameters are learned from the data while training the models. We introduced this function for biochemical prediction from hyperspectral data with the Gaussian processes in [@gewali2016]. A function similar to the ESAM function has been previously used for hyperspectral classification using the support vector machines in [@mercier2003]. Spectral Angle Mapper-Markov Random Field (SAM-MRF) {#sec:sam_mrf} =================================================== The proposed Spectral Angle Mapper-Markov Random Field (SAM-MRF) combines the spectral angle mapper metric and the Markov random field. In this method, the unary potential function at each pixel is defined as the minimum spectral angle between the test pixel and the training spectra belonging to each class. 
The unary energy at pixel i, when the label $y_i$ is c, is given by $$E_i\left(y_i=c\right) = \min_{\mathbf{x}_{tc} \in \substack{\text{ training spectra}\\\text{of class c}}} \alpha\left(\mathbf{x}_i, \mathbf{x}_{tc}\right),$$ where $\mathbf{x}_i$ is the spectrum of pixel i and $\alpha(.,.)$ is the spectral angle mapper from (\[eq:ESAM1\]). Intuitively, this model introduces a new decision method for determining the class of the pixel from the spectral angle. Unlike the previous methods that only consider the test pixel and make a decision by thresholding the spectral angle or by choosing the class with the minimum angle, our approach jointly minimizes the spectral angle and promotes spatial homogeneity across the image. Recently, the study [@tang2015] by Tang et al. combined SAM and MRF using a multi-center model and Gaussian normalization, but our method is different in that it directly uses the minimum spectral angle as the unary energy. Experiments {#sec:experiments} =========== We experiment with two publicly available classification datasets: the Indian Pines [@indian_pines2015] and the University of Pavia[^1]. The Indian Pines dataset contains a 145 $\times$ 145 hyperspectral image of a 2 $\times$ 2 mile area, covering agricultural land and forest, in Northwest Tippecanoe County, Indiana collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The pixel diameter is around and each pixel contains 220 spectral bands, with wavelengths ranging from to . Twenty water absorption bands were removed from the image as pre-processing. In our experiments, only the 14 material classes that were present at 150 or more pixel locations were used. The University of Pavia dataset was collected by the Reflective Optics System Imaging Spectrometer (ROSIS) over the city of Pavia in northern Italy. It contains 103 bands in the visible and near-infrared ( to ). The image is 610 $\times$ 340 pixels in size, with each pixel having a diameter of .
There are nine material classes for this image. Full ground truth material cover maps are available for both images. Neither image is atmospherically compensated, with the pixels measured in units of spectral radiance. The spectral radiance in each band of each image was normalized to have a mean of zero and a standard deviation of one. The pixels in the image were randomly divided into a training set and a testing set. The testing set contained 50 pixels from each class, while the size of the training set was varied from 10 to 70 pixels per class in increments of 10. 70% of the training data was used to train the models generating the unary energies, and the remaining 30% of the training data was used to choose the value of the parameter ($\beta$) in the Potts pairwise energies via cross-validation. The value of $\beta$ was chosen from {0.01,0.1,1,10,100} by maximizing the overall accuracy. The unary energies were generated using the logistic regression (LR), the support vector machine (SVM), the Gaussian process (GP) and the spectral angle mapper (SAM). The implementations used are the L2-regularized multivariate logistic regression from the LIBLINEAR library [@liblinear], the probabilistic multi-class support vector machine from the LIBSVM library [@libSVM], and the Gaussian process classifiers from the GPML library [@rasmussen2010]. The slack variable and the kernel scale in the SVM were chosen from {0.001,0.01,0.1,1,10,100,1000} by training the SVM on 90% of the classifier's training data and validating over the remaining 10%. The gain of ESAM was set to one while using it with the SVM. The GPML library does not contain multi-class classifiers, so binary classifiers were trained in a one-vs-one setup and the multi-class probabilities were estimated using the method [@wu2004] by Wu et al. The error function (probit) likelihood was used with the GP classifier, and inference was done using the Laplace approximation.
The hyper-parameters of the covariance function were learned by maximizing the likelihood. The final output labels were produced by Markov random field energy minimization, performed using graph cuts with the expansion-move algorithm via the software [@szeliski2008] by Szeliski et al. Overall accuracy over the testing set was used to measure the performance. This procedure was repeated 30 times to produce the mean and the standard deviation of the overall accuracy as the final performance metric.

Table \[table\_indian\]: Overall accuracy (%, mean $\pm$ standard deviation over 30 repetitions) on the Indian Pines image for 10–70 training pixels per class. The best result in each column is shown in bold.

| Method       | 10 | 20 | 30 | 40 | 50 | 60 | 70 |
|--------------|----|----|----|----|----|----|----|
| LR           | 53.87$\pm$2.3 | 61.62$\pm$2.0 | 64.88$\pm$2.2 | 67.53$\pm$2.5 | 69.28$\pm$2.3 | 70.81$\pm$2.0 | 71.62$\pm$1.8 |
| LR-MRF       | **69.69**$\pm$**6.2** | **79.95**$\pm$**2.9** | 82.53$\pm$2.9 | 83.88$\pm$2.4 | 84.42$\pm$2.6 | 85.10$\pm$2.8 | 85.69$\pm$2.3 |
| SVM-SE       | 50.38$\pm$7.3 | 63.04$\pm$5.4 | 69.42$\pm$3.9 | 72.37$\pm$2.7 | 73.41$\pm$4.1 | 76.79$\pm$2.0 | 78.39$\pm$1.7 |
| SVM-SE-MRF   | 57.94$\pm$10.9 | 77.19$\pm$6.8 | 82.73$\pm$4.0 | 86.18$\pm$2.4 | 86.61$\pm$4.1 | 89.37$\pm$2.3 | 90.61$\pm$1.9 |
| SVM-ESAM     | 47.42$\pm$4.3 | 57.49$\pm$7.4 | 65.64$\pm$4.1 | 68.91$\pm$3.2 | 71.75$\pm$1.9 | 73.71$\pm$2.1 | 74.85$\pm$2.3 |
| SVM-ESAM-MRF | 55.24$\pm$8.2 | 70.55$\pm$10.7 | 81.28$\pm$3.5 | 83.74$\pm$2.6 | 85.57$\pm$2.2 | 87.21$\pm$2.4 | 87.95$\pm$2.3 |
| GP-SE        | 51.55$\pm$3.0 | 61.10$\pm$2.2 | 66.79$\pm$2.2 | 70.01$\pm$1.8 | 72.59$\pm$1.9 | 75.60$\pm$2.1 | 77.19$\pm$1.9 |
| GP-SE-MRF    | 60.61$\pm$6.8 | 75.89$\pm$4.7 | 81.49$\pm$3.6 | 85.24$\pm$2.2 | 87.31$\pm$2.3 | 88.49$\pm$2.3 | 90.23$\pm$2.2 |
| GP-ESAM      | 48.99$\pm$2.7 | 55.96$\pm$2.2 | 60.84$\pm$2.0 | 64.19$\pm$2.6 | 66.43$\pm$2.0 | 69.48$\pm$2.2 | 71.01$\pm$1.8 |
| GP-ESAM-MRF  | 59.83$\pm$6.8 | 75.24$\pm$5.6 | 80.53$\pm$2.7 | 82.84$\pm$1.9 | 84.16$\pm$2.0 | 85.99$\pm$2.1 | 86.61$\pm$2.3 |
| SAM          | 50.74$\pm$2.2 | 57.46$\pm$2.2 | 60.15$\pm$2.2 | 61.04$\pm$2.1 | 62.97$\pm$2.0 | 63.92$\pm$1.9 | 64.73$\pm$1.8 |
| SAM-MRF      | 65.46$\pm$4.7 | 77.97$\pm$3.1 | **85.22**$\pm$**3.3** | **87.02**$\pm$**2.4** | **89.28**$\pm$**2.2** | **90.88**$\pm$**1.9** | **92.00**$\pm$**1.8** |

Table \[table\_pavia\]: Overall accuracy (%, mean $\pm$ standard deviation over 30 repetitions) on the University of Pavia image for 10–70 training pixels per class. The best result in each column is shown in bold.

| Method       | 10 | 20 | 30 | 40 | 50 | 60 | 70 |
|--------------|----|----|----|----|----|----|----|
| LR           | 64.21$\pm$2.8 | 68.24$\pm$1.7 | 70.10$\pm$2.1 | 71.16$\pm$2.3 | 71.72$\pm$2.0 | 72.23$\pm$2.2 | 72.13$\pm$1.6 |
| LR-MRF       | 66.01$\pm$2.7 | 71.13$\pm$2.5 | 72.24$\pm$2.8 | 73.41$\pm$2.0 | 74.17$\pm$2.2 | 74.67$\pm$2.2 | 74.53$\pm$2.4 |
| SVM-SE       | 69.16$\pm$4.6 | 75.89$\pm$5.9 | 79.16$\pm$5.3 | 83.07$\pm$2.8 | 83.72$\pm$2.4 | 85.85$\pm$2.7 | 86.79$\pm$2.0 |
| SVM-SE-MRF   | 68.84$\pm$5.4 | 76.28$\pm$5.5 | **80.10**$\pm$**5.8** | **84.19**$\pm$**2.9** | **85.81**$\pm$**2.3** | **88.01**$\pm$**2.3** | **88.90**$\pm$**2.1** |
| SVM-ESAM     | 66.81$\pm$7.5 | 75.54$\pm$3.6 | 78.01$\pm$4.4 | 79.73$\pm$3.2 | 80.33$\pm$2.9 | 82.47$\pm$2.9 | 83.51$\pm$2.4 |
| SVM-ESAM-MRF | 66.70$\pm$8.0 | 76.19$\pm$3.6 | 79.27$\pm$6.0 | 82.21$\pm$4.1 | 83.73$\pm$3.0 | 85.47$\pm$2.9 | 87.01$\pm$2.5 |
| GP-SE        | 73.07$\pm$3.3 | 76.31$\pm$2.2 | 79.18$\pm$2.9 | 81.88$\pm$2.4 | 83.44$\pm$2.2 | 86.03$\pm$1.9 | 87.37$\pm$1.8 |
| GP-SE-MRF    | 73.95$\pm$3.6 | 76.91$\pm$2.5 | 79.61$\pm$3.0 | 82.68$\pm$2.5 | 84.30$\pm$2.4 | 87.28$\pm$1.7 | 88.23$\pm$1.8 |
| GP-ESAM      | 71.99$\pm$2.3 | 75.79$\pm$2.2 | 77.93$\pm$2.1 | 78.93$\pm$2.1 | 80.14$\pm$1.7 | 81.32$\pm$2.0 | 82.27$\pm$2.1 |
| GP-ESAM-MRF  | 72.56$\pm$2.3 | 76.45$\pm$2.5 | 78.81$\pm$2.4 | 79.81$\pm$2.4 | 81.03$\pm$1.9 | 82.50$\pm$2.0 | 83.25$\pm$2.2 |
| SAM          | 71.90$\pm$2.7 | 74.60$\pm$2.4 | 76.34$\pm$2.0 | 77.11$\pm$2.2 | 77.47$\pm$2.0 | 78.50$\pm$2.0 | 78.91$\pm$2.1 |
| SAM-MRF      | **74.40**$\pm$**3.2** | **77.33**$\pm$**2.4** | 78.34$\pm$2.5 | 79.80$\pm$2.0 | 79.93$\pm$2.0 | 80.66$\pm$1.2 | 81.53$\pm$2.4 |

Results {#sec:results} ======= Tables \[table\_indian\] and \[table\_pavia\] compare the performance of all the methods on the Indian Pines image and the University of Pavia image respectively. The logistic regression, the support vector machine, the Gaussian process, and the spectral angle mapper are denoted as LR, SVM, GP, and SAM respectively.
The abbreviation of the kernel/covariance function used with the SVM and the GP has been appended to the method's name. The kernel/covariance functions used are the squared exponential function (SE) and the exponential spectral angle mapper (ESAM). Methods which use Markov random field energy minimization have MRF appended to their name. Figure \[fig:indian\_pines\_images\] shows one of the classification maps produced by the proposed SAM-MRF when the number of training pixels per class was 50 on the Indian Pines image. When there were 50 samples per class in the training set, SAM-MRF was the most accurate and took 3.07$\pm$0.2 seconds to compute the classification map, much less than the 44.8$\pm$1.7 seconds taken by the second most accurate method, GP-SE-MRF. The second fastest method was LR-MRF, taking 4$\pm$0.3 seconds. Discussion ========== Compared to the state-of-the-art methods, the SAM-MRF method produced superior accuracies on the Indian Pines image and comparable accuracies on the University of Pavia image. This could be due to two major differences between these datasets. The Indian Pines image contains many large homogeneous areas and has less distinct material classes, e.g., most classes are different types of vegetation. Hence, when SAM-MRF is applied to this image, the minimum angles for the various classes are likely to be comparable in magnitude. The Markov random field can then choose an appropriate label from among the labels having comparably low spectral angles by considering the neighbors of the pixel, which are highly informative for this image, rather than simply choosing the label with the smallest of these roughly equal angles. The University of Pavia image, on the other hand, has fewer, smaller homogeneous areas and more distinct classes, such as asphalt, trees, gravel, shadows and paint.
Hence, the accuracy after applying the MRF to the University of Pavia image is not improved much and remains highly dependent on the pixel-wise classification accuracy, which is poor in the case of SAM; as a result, SAM-MRF performs poorly on this image. This also explains why applying the MRF to the University of Pavia image, in general, only increased the accuracy by about 2% for all the methods. One possible way to improve the classification performance on the University of Pavia dataset could be to use spatial-spectral features, such as the extended morphological features [@benediktsson2005], with the proposed methods. It was seen that using the ESAM kernel/covariance function did not improve the performance over using the squared exponential kernel/covariance function for either the SVM or the GP, indicating that spectral angle based functions are not necessarily better for classification when used with these classifiers. The SVM based methods and the GP based methods did not show a significant difference in performance; however, the SVM based ones were significantly faster. In the experiments, even the naive implementation of SAM-MRF was faster than the robust implementations of the other methods. This is due to the simplicity of SAM-MRF. SAM-MRF could be made even faster and scalable to very large datasets by using heuristics, e.g., k-d tree nearest neighbor search [@muja2014], to find the approximate minimum angle. \[sec:discussion\] [^1]: both obtained from <http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes>
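To make the SAM-MRF pipeline concrete, the following sketch minimizes the total energy $E(\mathbf{y})$ built from unary energies and the Potts pairwise term. The paper's experiments use graph cuts with expansion moves; as a hedged stand-in (our own choice, not the paper's software) this sketch uses a simple iterated conditional modes (ICM) sweep, which finds only a local minimum but illustrates how the spatial term overrides noisy pixel-wise decisions:

```python
import numpy as np

def icm_potts(unary, beta, n_iters=10):
    """Approximately minimize E(y) = sum_i E_i(y_i) + beta * sum_(i,j) [y_i != y_j]
    over a 4-connected grid by iterated conditional modes (a simple stand-in
    for the expansion-move graph cuts used in the paper).

    unary: (H, W, C) array with unary[i, j, c] = E_i(y_i = c)."""
    H, W, C = unary.shape
    labels = unary.argmin(axis=2)  # pixel-wise initialization
    for _ in range(n_iters):
        changed = False
        for i in range(H):
            for j in range(W):
                cost = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts penalty for disagreeing with each 4-neighbor
                        cost += beta * (np.arange(C) != labels[ni, nj])
                new = int(cost.argmin())
                if new != labels[i, j]:
                    labels[i, j] = new
                    changed = True
        if not changed:
            break
    return labels
```

With a sufficiently large $\beta$, an isolated pixel whose unary energy weakly prefers the wrong class is flipped to agree with its neighbors, which is the behavior described above for the Indian Pines image.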
--- abstract: 'Methods based on Bayesian decision tree ensembles have proven valuable in constructing high-quality predictions, and are particularly attractive in certain settings because they encourage low-order interaction effects. Despite adapting to the presence of low-order interactions for prediction purposes, we show that Bayesian decision tree ensembles are generally anti-conservative when used for interaction detection. We address this problem by introducing Dirichlet process forests (DP-Forests), which leverage the presence of low-order interactions by clustering the trees so that trees within the same cluster focus on detecting a specific interaction. We show on both simulated and benchmark data that DP-Forests perform well relative to existing interaction detection techniques for detecting low-order interactions, attaining very low false-positive and false-negative rates while maintaining the same predictive performance at a comparable computational budget.' bibliography: - 'mybib.bib' --- INTRODUCTION ============ In many scientific problems, a primary goal is to discover structures which allow the problem to be described parsimoniously. For example, one may wish to find a small subset of candidate variables that are predictive of a response of interest; this structure is referred to as *sparsity*. Another structure is *interaction* (or *additive*) structure. An extreme case of additive structure is a generalized additive model (see, e.g., [@hastie2017generalized]), where the effects of the predictors combine additively without any interactions. Teasing out additive structures can be valuable because it can substantially simplify the interpretation of a model. For example, if a given predictor does not interact with other predictors then it can be interpreted in isolation without reference to the values of other predictors.
When predictors do interact, interpretation of the interactions is typically simplified whenever the interactions are of low order. We consider the nonparametric regression problem $Y_i = f_0(X_i) + \epsilon_i$, $\epsilon_i \sim {\operatorname{Normal}}(0,\sigma^2)$, where $Y_i$ is a response of interest and $X_i \in {\mathbb R}^P$ is a vector of predictors; the methods we develop here, however, can be easily extended to many other settings. The variables $x_j$ and $x_k$ are said to *interact* if $f_0(x)$ cannot be written as $f_0(x) = f_{0\backslash j}(x) + f_{0\backslash k}(x)$, where $f_{0\backslash j}$ and $f_{0\backslash k}$ do not depend on $x_j$ and $x_k$ respectively. One can define higher-order interactions similarly: a group of $K$ variables is said to have a $K$-way interaction if $f_0(x)$ cannot be decomposed as a sum of $K$ or fewer functions, each of which depends on fewer than $K$ of the variables. Methods which estimate $f_0(x)$ using an ensemble of Bayesian decision trees have proven useful in a number of statistical problems. Beginning with the seminal work of @chipman2010bart, Bayesian additive regression trees (BART) have been successfully applied in a diverse range of settings, including survival analysis [@sparapani2016nonparametric], causal inference [@hahn2017bayesian], variable selection in high-dimensional settings [@linero2016bayesian; @bleich2014variable], log-linear models [@murray2017log], and the analysis of functional data [@starling2018functional]. A key motivating factor for the use of BART is precisely that it is designed to take advantage of low-order interactions in the data. Indeed, @linero2017abayesian and @rockova2017posterior illustrate theoretically that the presence of low-order interactions is precisely the type of structure which BART excels at capturing. Hence BART appears to be an ideal tool for extracting low-order and potentially non-linear interactions.
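The definition of a pairwise interaction above has a direct numerical counterpart: for smooth $f_0$, the mixed second difference in $x_j$ and $x_k$ vanishes identically when $f_0$ decomposes as $f_{0\backslash j} + f_{0\backslash k}$, and otherwise approximates $\partial^2 f_0 / \partial x_j \partial x_k$. A small illustrative sketch of ours (not from the paper):

```python
def mixed_difference(f, x, j, k, h=1e-3):
    """Mixed second difference of f in coordinates j and k at the point x.
    It is exactly zero whenever f(x) = g(x) + w(x) with g free of x_j and
    w free of x_k, i.e. when x_j and x_k do not interact."""
    def shifted(x, idx, d):
        y = list(x)
        y[idx] += d
        return y
    return (f(shifted(shifted(x, j, h), k, h)) - f(shifted(x, j, h))
            - f(shifted(x, k, h)) + f(x)) / h**2
```

For the additive part of a function the four terms cancel exactly, so only genuine interactions register; in practice such checks are applied to an estimate of $f_0$ rather than $f_0$ itself.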
Surprisingly, we show that, despite the ability of BART to capture low-order interactions for *prediction* purposes, it is nonetheless not suitable for conducting fully-Bayesian inference for the *selection* task of interaction detection. We show empirically that, when taken at face value as a Bayesian model, BART generally leads to the detection of spurious interaction effects. This is not contradictory because optimal prediction accuracy is generally *not* sufficient to guarantee consistency in variable selection (see, e.g., [@wang2007tuning]). We discuss the general problem which leads to the detection of spurious interactions; while this development is couched in the BART framework, we believe that the fundamental issues also occur for other decision tree ensembling methods. Specifically, the problem is that there is no penalty associated with including spurious interaction terms in the model. We then introduce a suitable modification to the BART framework which addresses this problem and allows BART to detect interactions in a fully-Bayesian fashion. We accomplish this by clustering the trees into non-overlapping groups. Intuitively, the shallow trees comprising each cluster work together to learn a single low-order interaction. To bypass the need to specify the number of clusters, we induce the clustering through a Dirichlet process prior [@ferguson1973]. We refer to the ensemble constructed in this fashion as a Dirichlet Process Forest (DP-Forest). A Simple Example {#sec:a-simple-example} ---------------- ![The interaction structure detected in the example from Section \[sec:a-simple-example\]. “Truth” denotes the true interaction structure in the example.[]{data-label="fig:interaction-graph"}](./figure/Rplot06){width=".45\textwidth"} To motivate the problem, we consider a simulated data example of @vo2016sparse.
This example takes $P = 100$, $N = 100$, $X_i \sim {\operatorname{Normal}}({\bm 0}, 0.02\,{\mathrm I})$, and $f_0(x) = x_1 + x_2^2 + x_3 + x_4^2 + x_5 + x_1x_2 + x_2 x_3 + x_3x_4$. We compare the DP-Forest we propose to a variant of BART referred to as SBART [@linero2017abayesian] which can accommodate sparsity in variable selection. We also consider the recently proposed iterative random forests algorithm of @basu2018iterative, selecting interactions whose stability score is higher than 0.5. In Figure \[fig:interaction-graph\] we display the interaction structure detected by each method on this data; while we considered only one iteration of this experiment here, these results are typical of replications of the experiment. Here, SBART detects a spurious edge between $x_2$ and $x_4$. This occurs because BART, despite its fundamentally additive nature, does not include any penalization which discourages unnecessary interactions from being included. On the contrary, BART *expects* interactions to occur between relevant predictors: for a draw from the BART prior in which $x_2$ and $x_4$ are both included in the model, an interaction between these variables is a-priori likely. Adapting Bayesian decision tree ensembles to interaction detection then requires a prior which discourages the inclusion of weak interactions. The iRF similarly detects two spurious interactions and misses a relevant interaction between $x_3$ and $x_4$. Related Work {#sec:related-work} ------------ Recent work has studied the theoretical properties of BART. @linero2017abayesian and @rockova2017posterior show that certain variants of BART are capable of adaptively attaining near-minimax-optimal rates of posterior concentration when $f_0$ can be expressed as a sum of low-order interaction terms $f_0(x) = \sum_{v = 1}^V f_{0v}(x)$ with each $f_{0v}(x)$ depending on a small subset $\mathcal S_v$ of the predictors. In view of this, one might conclude that no modification to BART is needed.
This is true if one cares only about the mean integrated squared error $\int (f_0(x) - f(x))^2 \, F_0(dx)$ where $X_i {\stackrel{\text{iid}}{\sim}}F_0$. Optimal prediction performance, however, does not imply that variable selection and interaction detection are being performed adequately. If $\mathcal S_0$ is the true interaction structure of the data and $\mathcal S$ is an estimate of $\mathcal S_0$, then attaining the minimax estimation rate for $f_0$ in terms of prediction error typically only guarantees that $\mathcal S_0 \subseteq \mathcal S$ (not $\mathcal S \subseteq \mathcal S_0$). Several other methods have been recently proposed in the literature specifically for the task of interaction detection. We offer a non-comprehensive overview here; for a recent review, see @bien2013lasso. @lim2015learning proposed a hierarchical group-lasso which enforces the constraint that the presence of a given interaction implies the presence of the associated main effects; a similar approach is given by @bien2013lasso. A potential shortcoming of these approaches is that they focus on linear models and allow only pairwise interactions. @radchenko2010variable propose the VANISH algorithm, which allows for nonlinear effects through the use of basis function expansions, but is again limited to pairwise interactions. Several decision-tree based methods have also been proposed. The additive groves procedure of @sorokina2008detecting uses an adaptive boosting-type algorithm to sequentially test for the presence of interactions between variables after performing a variable screening step. @basu2018iterative propose the iterative random forest (iRF) algorithm which flags “stable” interaction effects as those which appear consistently in many trees in a certain random forest.
BAYESIAN TREE ENSEMBLES {#sec:review} ======================= The BART Prior -------------- Our starting point is the Bayesian additive regression trees (BART) framework of @chipman2010bart, which treats the function $f_0(\cdot)$ as the realization of a sum of random decision trees $$\begin{aligned} f(x) = \sum_{t = 1}^T g(x ; {\mathcal T}_t, {\mathcal M}_t),\end{aligned}$$ where ${\mathcal T}_t$ denotes the tree structure (including the decision rules) of the $t^{\text{th}}$ tree and ${\mathcal M}_t = \{\mu_{t\ell} : \ell \in {\mathcal L}_t\}$ denotes the parameters associated to the leaf nodes; here, ${\mathcal L}_t$ denotes the collection of leaf nodes of ${\mathcal T}_t$. Let $[x \leadsto (t,\ell)]$ denote the event that the point $x$ is associated to leaf $\ell$ in tree $t$. The function $g(x; {\mathcal T}_t, {\mathcal M}_t)$ then returns $\mu_{t\ell}$ whenever $[x \leadsto (t,\ell)]$ occurs. We follow @chipman2010bart and specify a branching process prior for the tree structure ${\mathcal T}_t$. A sample from the prior for ${\mathcal T}_t$ is generated iteratively, starting from a tree with a single node of depth $d = 0$; this node is made a branch with two children with probability $q(d) = \gamma (1 + d)^{-\beta}$, and is made a leaf node otherwise. We repeat this process independently for all nodes of depth $d = 1, 2, \ldots$ until all nodes at depth $d$ are leaves. After the structure of the tree is generated, each branch $b$ is associated with a decision rule of the form $[x_j \le C_b]$. The coordinate $j$ used to construct the decision rule is sampled with probability $s_j$, where $s = (s_1, \ldots, s_P)$ is a probability vector. The splitting proportions $s$ will play a key role later as an avenue for inducing sparsity in the regression function. Finally, we generate $C_b \sim {\operatorname{Uniform}}(L_j, U_j)$ where $(L_1, U_1) \times \cdots \times (L_P, U_P)$ is the hyper-rectangle corresponding to the values of $x$ that lead to branch $b$.
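A prior draw of a tree structure can be sketched as follows. This is an illustrative sketch rather than the authors' implementation: the names are our own, and we use the depth penalty in the form $q(d) = \gamma(1+d)^{-\beta}$ of @chipman2010bart with the defaults $\gamma = 0.95$ and $\beta = 2$ used in this paper:

```python
import random

def sample_tree(depth=0, q=lambda d: 0.95 * (1 + d) ** (-2.0),
                s=(0.5, 0.5), bounds=((0.0, 1.0), (0.0, 1.0))):
    """Sample a tree from the branching-process prior.

    With probability q(depth) the node becomes a branch: a split
    coordinate j is drawn with probability s[j], a cutpoint C_b is
    drawn uniformly on the current hyper-rectangle, and the children
    are sampled recursively on the two restricted rectangles."""
    if random.random() >= q(depth):
        return {"leaf": True}
    j = random.choices(range(len(s)), weights=s)[0]
    c = random.uniform(*bounds[j])
    left = [list(b) for b in bounds]
    right = [list(b) for b in bounds]
    left[j][1] = c    # left child corresponds to x_j <= C_b
    right[j][0] = c   # right child corresponds to x_j > C_b
    return {"leaf": False, "var": j, "cut": c,
            "left": sample_tree(depth + 1, q, s, left),
            "right": sample_tree(depth + 1, q, s, right)}

random.seed(0)
tree = sample_tree()
```

Because $q(d)$ decays quadratically in depth, draws from this prior are shallow with high probability, which is what makes a large ensemble of such trees fundamentally additive.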
We remark that this choice for $C_b$ differs from the scheme used by other BART implementations; we adopt it to simplify the full conditionals we derive in Section \[sec:dirichlet\]. For the prior on ${\mathcal M}_t$ we set $\mu_{t\ell} {\stackrel{\text{iid}}{\sim}}{\operatorname{Normal}}(0, \sigma^2_\mu / T)$ conditional on ${\mathcal T}_t$ and $\sigma^2_\mu$. By taking the variance to be $\sigma_\mu^2/T$ we ensure that the prior level of signal is constant as $T$ increases. The normal prior is selected for its conjugacy; we note, however, that any prior for $\mu_{t\ell}$ with mean $0$ and variance $\sigma^2_\mu/T$ leads to the approximation $f(x) \sim {\operatorname{Normal}}(0, \sigma^2_\mu)$ by the central limit theorem. We fix $\beta = 2$ and $\gamma = 0.95$; we refer readers to @linero2017abayesian for further details regarding prior specification, and to @chipman2013bayesian and @linero2017review for detailed reviews of Bayesian decision tree methods. Leveraging Structural Information --------------------------------- Several recent developments have extended the BART methodology to take advantage of structural information. @linero2016bayesian noted that sparsity in $f_0(x)$ can be accommodated automatically by setting $s \sim {\operatorname{Dirichlet}}(\alpha / P, \ldots, \alpha / P)$. Recall here that $s_j$ denotes the prior probability that, for a fixed branch, coordinate $j$ will be used to construct a split at that branch. Hence, if $s$ is nearly-sparse with $d$ non-sparse entries, the prior will encourage realizations from the prior to include only the $d$ predictors with non-sparse entries. @linero2017abayesian showed that this prior for $s$ induces highly desirable posterior concentration properties; in particular, the posterior of $f(x)$ concentrates at close to the oracle minimax rate we could attain if we had known the relevant predictors beforehand.
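The scaling $\sigma^2_\mu/T$ can be checked by direct simulation: a prior draw of $f(x)$ at a fixed point $x$ is a sum of $T$ independent leaf values with variance $\sigma^2_\mu/T$, so its variance is $\sigma^2_\mu$ regardless of $T$. A quick Monte Carlo sketch (function names are our own):

```python
import random
import statistics

def f_at_x(T, sigma_mu=1.0):
    """One prior draw of f(x) at a fixed point: each of the T trees
    contributes a single leaf value mu ~ Normal(0, sigma_mu^2 / T)."""
    return sum(random.gauss(0.0, sigma_mu / T ** 0.5) for _ in range(T))

random.seed(0)
for T in (1, 10, 200):
    draws = [f_at_x(T) for _ in range(20000)]
    print(T, round(statistics.variance(draws), 2))  # close to 1.0 for each T
```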
![image](./figure/illustration-1){width=".7\textwidth"} @linero2017abayesian also introduce the SBART model, which uses soft decision trees [@irsoy2012soft], effectively replacing the hard decision boundaries of BART with smooth sigmoid functions. This allows the SBART model to adapt to the smoothness level of $f(x)$; consequently, if $f_0(x)$ is assumed to be $\alpha$-Hölder, the posterior for the SBART model concentrates around $f_0(x)$ at close to the oracle minimax rate obtainable when the smoothness level is known a-priori. While the methodology we develop applies to the usual BART models, we will use the SBART model with the sparsity-inducing Dirichlet prior in all of our illustrations. DP-FORESTS {#sec:dirichlet} ========== The distribution of $({\mathcal T}_t, {\mathcal M}_t)$ in the BART model is parameterized by the splitting proportions $s$, leaf variance $\sigma^2_\mu$, and tree topology parameters $(\gamma, \beta)$. To encourage a small number of low-order interactions, we specify a prior which clusters the trees into non-overlapping groups such that each cluster constructs splits using different subsets of the predictors. A schematic is given in Figure \[fig:illustration-1\] with $T = 4$. In this figure we see that the first two trees are dedicated to learning a main effect for $x_1$ while the second two trees are dedicated to learning an interaction between $x_2$ and $x_3$. We induce a clustering by using tree-specific splitting proportions $s{^{(t)}} \sim G$ and placing a Dirichlet process prior on $G$ [@ferguson1973]. Specifically, we let $s{^{(t)}} {\stackrel{\text{iid}}{\sim}}G$ conditional on $G$ and let $G \sim {\operatorname{DP}}(\omega G_0)$ where $G_0$ is a ${\operatorname{Dirichlet}}(\alpha w_1, \ldots, \alpha w_P)$ distribution and $\omega$ denotes the precision parameter of the Dirichlet process.
Using the latent-cluster interpretation of the Dirichlet process (see, e.g., [@teh2006hierarchical]), this can be approximated by the following generative model: 1. Draw $\pi \sim {\operatorname{Dirichlet}}(\omega / K, \ldots, \omega / K)$ for large $K$. 2. Draw $Z_1, \ldots, Z_T {\stackrel{\text{ind}}{\sim}}{\operatorname{Categorical}}(\pi)$. 3. Draw $s{^{(1)}}, \ldots, s{^{(K)}} {\stackrel{\text{ind}}{\sim}}{\operatorname{Dirichlet}}(\alpha w_1, \ldots, \alpha w_P)$ where $\sum_{p=1}^P w_p = 1, w_p \ge 0$. 4. For $t = 1, \ldots, T$, draw $({\mathcal T}_t, {\mathcal M}_t)$ as described in Section \[sec:review\] with $s = s{^{(Z_t)}}$. The $Z_t$’s cluster the trees such that the trees within each group capture a single low-order interaction. Note that the use of the sparsity-inducing prior in step 3 above ensures that each $s{^{(k)}}$ will be nearly-sparse, and hence the trees with $Z_t = k$ will split on only a small subset of the predictors. The role played by the weight vector $w$ is to encourage a subset of the predictors to appear in multiple *different* interactions. For example, if there are interactions $(X_1, X_2)$ and $(X_2, X_3)$ we do not want to encourage an additional $(X_1, X_3)$ interaction. A large value of $w_2$ allows for this by encouraging $X_2$ to appear in several interactions. Properties of the Prior ----------------------- The degree of sparsity within each cluster of trees, as well as the overall number of clusters used, are determined by the hyperparameters $\alpha$ and $\omega$. These hyperparameters are key in determining the interaction structures that the prior favors. To help anchor intuition we first consider several special cases of the DP-Forests model. First, we consider the behavior of the prior as $\alpha \to 0$ with $\omega$ fixed. In this case, with high probability each $s{^{(t)}}$ will have only one non-sparse entry. Consequently, each tree in the ensemble will split on at most one predictor.
Because the trees are composed additively, this implies that none of the variables interact, and hence the prior concentrates on a sparse generalized additive model (SPAM, [@ravikumar2007spam]). On the other hand, as $\alpha \to \infty$ we see that $s{^{(t)}} \to (w_1, \ldots, w_P)$, so that the prior reverts to the original BART model with splitting proportions given by $(w_1, \ldots, w_P)$ described by @bleich2014variable. We can conduct a similar analysis with $\alpha$ fixed as $\omega$ varies, taking $K \to \infty$. As $\omega \to \infty$, each tree will be associated to a unique $s{^{(t)}}$. As $\omega \to 0$, on the other hand, all of the trees share the same $s{^{(t)}}$, so that the model collapses to the Dirichlet additive regression trees model described by @linero2016bayesian. The key difference between BART and a DP-Forest is that, once two variables are included, BART does not penalize interactions. Let $A_i$ and $A_j$ denote the events that variables $i$ and $j$ are included in the model, let $A_{ij}$ denote the event that variables $i$ and $j$ interact, and let $\Pi_{\alpha,\omega}$ denote the joint prior distribution for ${\mathcal T}_1, \ldots, {\mathcal T}_T$. We study the prior on the interaction structure by examining the probabilities $ \Lambda(\alpha, \omega) = \Pi_{\alpha, \omega}(A_{ij} \mid A_i \cap A_j), $ and $ \Xi(\alpha,\omega) = \Pi_{\alpha,\omega}(A_{ik} \mid A_{ij} \cap A_{kj}).$ In words, $\Lambda$ is the probability that $(i,j)$ interact given that both variables are relevant, while $\Xi$ represents the probability that $(i,k)$ interact given that $(i,j)$ and $(k,j)$ interact. Additionally, we examine the relationship between the average number of two-way interactions included in the model and the number of variables included. ![Plots of various quantities for $\omega = 0$ (solid, corresponding to SBART) and $\omega = 1$ (dashed) with $P = 5$ and $T = 50$. Left: plot of $\alpha$ against $\Lambda$. Middle: Plot of $\alpha$ against $\Xi$.
Right: plot of the number of variables included in the model against the number of interactions. []{data-label="fig:testing-prior"}](./figure/testing-prior-1){width=".45\textwidth"} Figure \[fig:testing-prior\] shows several relationships between these quantities as $\alpha$ varies for both SBART and DP-Forests. We see that $\Lambda$ is quite large for all values of $\alpha$ with SBART, implying that the prior expects any variables included in the model to interact; the trend is decreasing in $\alpha$ only because a larger number of predictors will be included in the model, causing variables to compete for branches in the ensemble. DP-Forests do not encourage the inclusion of interactions, particularly when $\alpha$ is small. Next, we see that $\Xi$ is also uniformly large for SBART. This implies that the prior does not encourage interaction structures like the truth from Figure \[fig:interaction-graph\], while a DP-Forest with a small choice of $\alpha$ does. Default Prior Settings ---------------------- A benefit of the BART framework is the existence of default priors which require minimal tuning from users. Where applicable, we do not stray from the defaults recommended in Section \[sec:review\]. Specific to DP-Forests, the key parameter controlling the behavior of the model is $\alpha$. On the basis of Figure \[fig:testing-prior\] we recommend choosing $\alpha$ to be small; we have found setting $\alpha \sim {\operatorname{Exponential}}$ with mean $0.1$ to work well. By contrast, in our illustrations the results for the DP-Forest model do not depend strongly on $\omega$, and we set $\omega \sim {\operatorname{Exponential}}(1)$. This leaves the weight vector $w = (w_1, \ldots, w_P)$ to be specified. In our illustrations, we first run a screening step which removes irrelevant predictors.
In principle, any method can be used for screening; in our illustrations, we use SBART to screen out variables which have posterior inclusion probability below $50\%$, and set $w_j \propto I(\text{$j$ is not screened})$. A more principled alternative is to use another sparsity-inducing prior on $w$, but we do not pursue this strategy here. Computation and Inference {#sec:computation} ------------------------- Inference for the DP-Forest model can be carried out using a Gibbs sampler with the Bayesian backfitting approach of @chipman2010bart. The Gibbs sampler operates on the state space $(\{{\mathcal T}_t, {\mathcal M}_t, Z_t\}_{t = 1}^T, \{s{^{(k)}}, \pi_k\}_{k=1}^K, \alpha, \omega, \sigma^2_\mu, \sigma^2)$. We use standard Metropolis-within-Gibbs proposals to update ${\mathcal T}_t$ and ${\mathcal M}_t$; see @kapelner2014bartmachine and @pratola2016efficient for details. The parameters $\alpha$, $\omega$, $\sigma^2_\mu$, and $\sigma^2$ can all be updated easily using the slice sampling algorithm of @slicesampling. Finally, $Z_t$, $s{^{(k)}}$, and $\pi$ all have conjugate full-conditional distributions: **Full conditional for $\pi$:** Note that $\pi$ is conditionally independent of all other parameters given $(\omega, Z)$. By conjugacy of the Dirichlet distribution to multinomial sampling we have the full conditional $\pi \sim {\operatorname{Dirichlet}}(\omega/K + m_1, \ldots, \omega/K + m_K)$ where $m_k = \sum_t I(Z_t = k)$. **Full conditional for $s{^{(k)}}$:** The conjugacy of the Dirichlet prior to multinomial sampling implies a Dirichlet full-conditional when a single $s$ is used. To account for the clustering, we only consider the branches associated to trees with $Z_t = k$, giving the full conditional $s{^{(k)}} \sim {\operatorname{Dirichlet}}(\alpha w_1 + c_1{^{(k)}}, \ldots, \alpha w_P + c_P{^{(k)}})$ where $c_j{^{(k)}}$ is the number of branches associated to cluster $k$ which split on predictor $j$.
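The conjugate update for $s{^{(k)}}$ can be sketched directly; here the Dirichlet draw is produced from normalized Gamma variates, and the function names and example counts are hypothetical:

```python
import random

def sample_dirichlet(params):
    """Draw from Dirichlet(params) by normalizing Gamma(a, 1) variates."""
    g = [random.gammavariate(a, 1.0) for a in params]
    total = sum(g)
    return [x / total for x in g]

def update_s_k(alpha, w, branch_vars):
    """Full conditional for s^(k): Dirichlet(alpha * w_j + c_j^(k)),
    where c_j^(k) counts the branches in cluster k splitting on j."""
    counts = [0] * len(w)
    for j in branch_vars:   # split coordinates of cluster-k branches
        counts[j] += 1
    return sample_dirichlet([alpha * wj + cj for wj, cj in zip(w, counts)])

random.seed(0)
# hypothetical cluster whose five branches split only on predictors 0 and 1
s_k = update_s_k(alpha=1.0, w=[0.25] * 4, branch_vars=[0, 1, 0, 0, 1])
```

Because the counts enter the Dirichlet parameters directly, predictors never used by cluster $k$ keep only the small prior weight $\alpha w_j$, so $s{^{(k)}}$ remains nearly-sparse.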
**Full conditional for $Z_t$:** Let $p(k)$ denote the full conditional probability that $Z_t = k$. The event $[Z_t = k]$ enters only through the factors $\pi_k$ (the prior probability of $Z_t = k$) and $\prod_{j = 1}^P s_j^{(k)c_{tj}}$, where $c_{tj}$ is the number of branches of tree $t$ which split on predictor $j$ (the likelihood of tree $t$ having split on the predictors that it has, given $Z_t = k$). Hence $p(k) \propto \pi_k \prod_{j = 1}^P s_j^{(k)c_{tj}}$. Putting these pieces together, we arrive at Algorithm \[alg:bayesian-backfitting\], which describes a single iteration of the Gibbs sampler: 1. Update $({\mathcal T}_t, {\mathcal M}_t)$ via Metropolis-Hastings. 2. Sample $Z_t \sim p(k), k = 1, \ldots, K$, where $ p(k) \propto \pi_k \prod_{j = 1}^P s_j^{(k)c_{tj}} $ and $c_{tj}$ is the number of branches associated to tree $t$ which split on predictor $j$. 3. Sample $s{^{(k)}} \sim {\operatorname{Dirichlet}}(\alpha w_1 + c{^{(k)}}_1, \ldots, \alpha w_P + c{^{(k)}}_P)$ where $c{^{(k)}}_j$ is the number of branches associated to cluster $k$ which split on predictor $j$. 4. Sample $\pi \sim {\operatorname{Dirichlet}}(\omega / K + m_1, \ldots, \omega/K + m_K)$ where $m_k = \sum_{t = 1}^T I(Z_t = k)$. 5. Sample $(\sigma, \sigma_{\mu}, \alpha, \omega)$ using slice sampling. EXPERIMENTS =========== We now compare DP-Forests to existing methods on a number of synthetic datasets. We consider the following methods in addition to DP-Forests and SBART. **Additive groves**: The additive groves procedure of @sorokina2008detecting. Because tuning of the additive groves algorithm is compute-intensive, we ran several pilot studies to choose appropriate tuning parameters which perform well for the given simulation settings. **Hierarchical group lasso**: The hierarchical group lasso proposed by @lim2015learning for interaction detection; we abbreviate this method by HL. This procedure was designed with linearity of $f_0(x)$ in mind. Tuning parameters are selected by cross-validation.
**Hierarchical group lasso, least squares**: HL is used to *select* the interactions and main effects, while the coefficients are estimated by least squares; we abbreviate this method by HL-LS. Tuning parameters are selected by cross-validation. **Iterative random forests**: The iterative random forests (iRF) procedure proposed by @basu2018iterative, as implemented in the `iRF` package on `CRAN`. We use the default $T = 500$ trees and 10 iterations of the iRF algorithm. ![Barplot of results for interaction detection. The top row gives the average $F_1$ score for each method for detecting interactions. The second row gives the average number of false positive interactions detected. The bottom row gives the average number of false negatives detected. The average for each method is given on each bar.[]{data-label="fig:sim_results_1_1"}](figure/sim_results_1-1.pdf){width="50.00000%"} Our simulation settings are borrowed from several existing works; we do not compare our methods to these other works due to a lack of publicly available software. 1. (S1) [@radchenko2010variable] We generate $X_i \sim {\operatorname{Uniform}}([0,1]^P)$ where $P = 50$, $N = 300$, and $\sigma^2 = 1$. We let $f_0(x)$ be $$\begin{aligned} \sqrt{0.5} \bigg[\sum_{v = 1}^{5} f_v(x) + f_1(x) f_2(x) + f_1(x) f_3(x)\bigg] \end{aligned}$$ where $f_1(x) = x_1$, $f_2(x) = (1+x_2)^{-1}$, $f_3(x) = \sin(x_3)$, $f_4(x) = e^{x_4}$, and $f_5(x) = x_5^2$. Each $f_v(x)$ is further centered and scaled so that $E(f_v(X_i)) = 0$ and ${\operatorname{Var}}(f_v(X_i)) = 1$. 2. (S2) [@vo2016sparse] We generate $X_i \sim {\operatorname{Normal}}({\bm 0}, {\mathrm I})$ with $N = 100$, $P = 100$, and $\sigma = 0.14$. We let $ f_0(x) = x_1 + x_2^2 + x_3 + x_4^2 + x_5 + x_1x_2 + x_2x_3 + x_3x_4. $ 3. (S3) Same as (S2), but without the interaction effects. 4. (S4) [@friedman1991multivariate] A common test case for BART; we generate $X_i \sim {\operatorname{Uniform}}([0,1]^P)$ with $P = 250, N = 250$, and $\sigma^2 = 1$.
We set $ f_0(x) = 10 \sin(x_1x_2) + 20(x_3 - 0.5)^2 + 10 x_4 + 5 x_5. $ ![Barplot of results for detecting main effects.[]{data-label="fig:sim_results_1_2"}](figure/sim_results_1-2.pdf){width="50.00000%"} Each of these scenarios was replicated $100$ times. We evaluate each method according to the average number of false positives (FPs), false negatives (FNs), $F_1$ score, and integrated root mean-squared error $\|f_0 - \widehat f\|_2$. The $F_1$ score is a commonly used measure of overall accuracy that balances false positives against false negatives in variable selection tasks; see, for example, @zhang2015cross. Results for interaction detection are given in Figure \[fig:sim\_results\_1\_1\]. We omit the results for HL because HL-LS performs uniformly better. Under all simulation settings, DP-Forests perform better than all other methods according to $F_1$ score. SBART is also competitive with the other procedures on many of the datasets. As expected, the primary problem with SBART is that it has a relatively large number of false positives, i.e., it is susceptible to detecting spurious interactions. This issue is most pronounced on (S2) and (S3), with SBART detecting between 1.5 and 2 spurious interactions on average. Additive groves and iterative random forests generally perform worse than SBART. In addition to having a larger false positive rate, these procedures are also prone to false negatives under simulation (S2). With the exception of (S1), the hierarchical group-lasso (HL-LS) performs worse than the other methods. Under (S1), HL-LS has reasonable performance as each component of $f_0(x)$ can be reasonably well-approximated by the assumed linear model. HL-LS also appears to perform well under (S3); this, however, is due to the fact that HL-LS typically misses several main effects, which is a substantially worse outcome than detecting a spurious interaction. The nonlinearities under (S2) and (S4) also create problems for HL-LS.
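For concreteness, the $F_1$ score used in these comparisons is the harmonic mean of precision and recall computed on the selected set of interactions. A minimal sketch (the edge sets below are hypothetical, mimicking one spurious detection under (S2)):

```python
def f1_score(selected, truth):
    """F1 = 2PR/(P+R), with precision P and recall R computed on
    the selected and true sets of interaction edges."""
    tp = len(selected & truth)
    if tp == 0:
        return 0.0
    precision = tp / len(selected)
    recall = tp / len(truth)
    return 2 * precision * recall / (precision + recall)

truth = {(1, 2), (2, 3), (3, 4)}             # true interaction edges
selected = {(1, 2), (2, 3), (3, 4), (2, 4)}  # one spurious edge added
print(f1_score(selected, truth))  # 6/7 ≈ 0.857
```

A method that recovers every true edge but adds one spurious edge is thus penalized through precision rather than recall.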
All methods perform better for detecting the main effects. SBART and DP-Forests give identical results for the main effects due to the use of SBART in screening for DP-Forests. (S1) is the easiest setting, with all methods having very few false negatives and HL-LS the only method having non-negligible false positives. Under (S2), the non-Bayesian procedures all have non-negligible false negatives, and iRF and HL-LS are additionally prone to false positives; the story is similar under (S3), with HL-LS performing better in terms of false positives but worse in terms of false negatives. All methods perform well in terms of false positives under (S4); however, iRF and HL-LS also suffer from many false negatives. ![Boxplots giving the distribution of integrated root mean-squared error for each method for each simulation setting.[]{data-label="fig:sim_results_1_rmse"}](figure/sim_results_1_rmse-1.pdf){width=".5\textwidth"} Results for assessing prediction performance in terms of integrated root mean-squared error (RMSE) are given in Figure \[fig:sim\_results\_1\_rmse\]. SBART and DP-Forests perform very similarly in terms of RMSE. All other methods perform substantially worse under all settings. This is likely due to a multitude of factors. First, any false negatives will contribute to poor predictive performance. Second, SBART and DP-Forests are able to take advantage of underlying smoothness in the response function which additive groves and iterative random forests cannot, while HL and HL-LS suffer from an incorrect model specification. SBART and DP-Forests are competitive in terms of runtime. For example, on a single replicate of (S4), SBART and DP-Forests took 118 seconds and 241 seconds respectively to obtain 40,000 samples from the posterior. By comparison, iRF took 279 seconds, HL-LS took 91 seconds, and additive groves took 4966 seconds. Additive groves was by far the slowest procedure, due to its use of recursive feature elimination.
We conclude that, under these settings, DP-Forests outperform all competitors at a competitive computational budget. We also consider the publicly available Boston housing dataset of @harrison1978hedonic. Analysis of the interaction structures present in this dataset was previously undertaken by @radchenko2010variable and @vo2016sparse. This dataset consists of $P = 13$ predictors and $N = 506$ neighborhoods, and a continuous response corresponding to the median house value in a given neighborhood.

  Method            RMSE
  ----------------- ------
  DP-Forests        1.00
  iRF               1.22
  HL                1.18
  Additive Groves   1.16

  : Cross-validation estimate of root mean-squared prediction error on the Boston housing dataset, normalized by the RMSE of the DP-Forest.[]{data-label="tab:boston-rmspe"}

We compare the methods in terms of goodness-of-fit, which is evaluated using a 5-fold cross-validated estimate of root mean-squared prediction error. Results are given in Table \[tab:boston-rmspe\]. For prediction, the DP-Forest and SBART outperform the competing methods. The DP-Forest includes most of the predictors in the model. This can be contrasted with the fit of a sparse additive model (SPAM) [@ravikumar2007spam] and the fit of the VANISH model reported by @radchenko2010variable, both of which include only a small number of predictors. Like the VANISH algorithm, the DP-Forest selects one interaction: there is strong evidence of an interaction between `DIS` (distance to an employment center in Boston) and `LSTAT` (the proportion of individuals in a neighborhood who are lower-status). This interaction was highly stable, and was selected by every fit to the data during cross-validation; additionally, this interaction was selected by additive groves in 4 out of 5 folds during cross-validation. Interestingly, this interaction was reportedly *not* selected by VANISH, which instead selects an interaction between the variables `NOX` (nitrous-oxide concentration) and `LSTAT`.
Figure \[fig:boston\_results\] gives a visualization of the `LSTAT`-`DIS` interaction. To summarize the interaction we use a “fit-the-fit” strategy and fit a generalized additive model to the fitted values of the DP-Forest with a thin plate spline term for the interaction [@wood2003thin]. The plot then displays the `LSTAT`-specific effect of `DIS` for the $10^{\text{th}}, 20^{\text{th}}, \ldots, 90^{\text{th}}$ quantiles of `LSTAT`. This GAM nearly reproduces the fitted values from the DP-Forest and is easier to visualize. We see in Figure \[fig:boston\_results\] a clear interaction between `DIS` and `LSTAT`. Intuitively, one expects that the closer a neighborhood is to an employment center, the more expensive the housing will be. This is correct for areas with fewer lower-status individuals; however, this trend does not hold when there is a higher percentage of lower-status individuals. We remark also that the data is well supported near $0$ for all values of `LSTAT`, so that this behavior is unlikely to be due to extrapolation, though extrapolation may be an issue for large values of both `LSTAT` and `DIS`. DISCUSSION {#sec:discussion} ========== We have introduced Dirichlet process forests (DP-Forests) and applied them to the problem of interaction detection. We demonstrated on both synthetic and real data that DP-Forests lead to improved interaction detection. Additionally, we demonstrated that DP-Forests are highly competitive with commonly used machine learning techniques for detecting low-order interactions. There are a number of modifications one might make to improve performance further. One possibility is to allow $\sigma_{\mu}$ to also vary by mixture component. This would allow different mixture components to have different signal levels; for example, under simulation (S4), we would expect that a smaller value of $\sigma^2_{\mu}$ is appropriate for the mixture component responsible for $x_5$ relative to $x_4$.
The proposed DP-Forests model captures this feature only indirectly through the number of trees assigned to each mixture component. Additionally, it would be interesting to quantify the improvement in performance of DP-Forests over SBART theoretically. It is unknown whether SBART is variable-selection consistent, and establishing theoretically that DP-Forests are consistent for interaction detection while SBART is not remains an open problem.
--- abstract: 'We introduce six new algebraic invariants for rational difference equations. We use these invariants to perform a reduction of order in each case. This reduction of order allows us to find forbidden sets in each case. These six cases include two linear fractional rational difference equations of order greater than one. In all six cases, we give a closed form solution for all initial conditions which are not in the forbidden set. In all six cases, the initial conditions and parameters are assumed to be arbitrary complex numbers.' address: 'Department of Mathematics, University of Rhode Island, Kingston, RI 02881-0816, USA;' author: - 'Frank J. Palladino' date: 'April 1, 2010' title: On Invariants and Forbidden Sets --- Introduction ============ Invariants have been used in several cases to provide insight into the behavior of rational difference equations, see \[1\]-\[4\], \[6\]-\[10\], \[12\]-\[14\], \[16\]-\[22\], \[24\], \[25\], and \[38\]-\[40\]. In this article, we introduce six new algebraic invariants for six second order rational difference equations. Two of the six second order rational difference equations are linear fractional rational difference equations. We use these invariants to perform a reduction of order in each case. This reduction of order allows us to find forbidden sets in each case. In each case, the second order rational difference equation is transformed, via the invariant, to a family of first order linear fractional rational difference equations. Since both the forbidden set and the closed form solution are known for first order linear fractional rational difference equations, we use this information to find the forbidden set and closed form solution for the equations given in Section 4. For the reader’s convenience, the forbidden set and closed form solution for first order linear fractional rational difference equations are presented in Section 3.
In all six cases, we give a closed form solution for all initial conditions which are not in the forbidden set. In all six cases, the initial conditions and parameters are assumed to be arbitrary complex numbers. In Section 2, we give some background information on some well known examples of invariants and forbidden sets for rational difference equations. Background on invariants and forbidden sets for rational difference equations ============================================================================= When building an understanding of invariants for rational difference equations it is helpful to use the following equations as prototypes: $$x_{n+1}=\frac{\alpha + x_n}{x_{n-1}},\quad n=0,1,2,\dots,$$ $$x_{n+1}=\frac{\alpha + x_n +x_{n-1} }{x_{n-2}},\quad n=0,1,2,\dots,$$ with $\alpha>0$ and positive initial conditions. Equation (1) is known by the cognomen Lyness’ equation. Equation (2) is known by the cognomen Todd’s equation and has also been referred to as the third order Lyness’ recurrence, see for example [@gsm3]. Equations (1) and (2) are discussed in the following references: \[2\], \[4\], \[6\]-\[8\], \[12\], \[13\], \[15\]-\[26\], and \[38\]-\[40\]. In [@klr] it is first shown for Equations (1) and (2) that we have an algebraic invariant in each case; in particular, for Equation (1) we have: $$\left(\alpha + x_{n-1}+ x_{n}\right)\left(1+\frac{1}{x_{n-1}}\right)\left(1+\frac{1}{x_{n}}\right)= constant.$$ For Equation (2) we have the algebraic invariant, $$\left(\alpha + x_{n-2} + x_{n-1} + x_{n}\right)\left(1+\frac{1}{x_{n}}\right)\left(1+\frac{1}{x_{n-1}}\right)\left(1+\frac{1}{x_{n-2}}\right)= constant.$$ Now, in the case of Equation (1), it has been shown in the unpublished paper of Zeeman [@z] that the map induced by Equation (1) is conjugate to a rotation. Moreover, in [@gsm3], it has been shown that the three dimensional case given by Equation (2) is also conjugate to a rotation.
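The invariance of the quantity in (3) along orbits of Equation (1) is easy to verify numerically; the sketch below iterates Lyness' equation from positive initial conditions and confirms that the quantity is constant up to rounding error:

```python
def lyness_invariant(x_prev, x_curr, alpha):
    """The invariant (3): (alpha + x_{n-1} + x_n)(1 + 1/x_{n-1})(1 + 1/x_n)."""
    return ((alpha + x_prev + x_curr)
            * (1 + 1 / x_prev) * (1 + 1 / x_curr))

alpha, x_prev, x_curr = 2.0, 1.0, 3.0
values = []
for _ in range(20):
    values.append(lyness_invariant(x_prev, x_curr, alpha))
    x_prev, x_curr = x_curr, (alpha + x_curr) / x_prev  # Equation (1)
print(max(values) - min(values))  # ~0 up to floating-point rounding
```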
It has been shown, see [@z] and [@gsm3], that in both Equations (1) and (2), the phase space of the associated dynamical system is foliated by invariant curves. These invariant curves, which comprise the leaves of the foliation, are algebraic curves which degenerate into isolated points in some places. The Lyness invariants can be generalized for the following $k^{th}$ order rational difference equation, sometimes called the $k^{th}$ order Lyness equation: $$x_{n+1}=\frac{\alpha + \sum^{k-1}_{i=0}x_{n-i} }{x_{n-k}},\quad n=0,1,2,\dots,$$ with $\alpha\geq 0$ and positive initial conditions. With this generalization we obtain the following algebraic invariant: $$\left(\prod^{k}_{i=0}\left( \frac{1}{x_{n-i}} +1 \right)\right)\left( \alpha + \sum^{k}_{i=0}x_{n-i} \right)= constant.$$ Significant work has been done on this equation, see for example [@br2] and [@gsm5]. Further invariants are given in [@gki] for $k$ sufficiently large. Recently, geometric objects have been used effectively to obtain information about rational difference equations with invariants. A good example of this is the use of the Lie symmetry of the associated map by A. Cima, A. Gasull, and V. Mañosa in [@gsm3] and [@gsm4]. Another technique, which makes use of algebraic geometry, can be found in [@ag]. The forbidden set of a rational difference equation is the set of initial conditions which eventually map to a singularity. Finding such sets has become a topic of recent interest in the literature, see for example \[5\], \[11\], \[27\]-\[37\]. Few techniques for determining forbidden sets are known. One such technique is the use of a semiconjugate factorization, see \[29\], \[30\], \[32\], and \[34\]-\[37\] for more on this topic. In Section 4, we will find invariants for the given second order difference equations which allow us to perform a reduction of order in each case. 
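The generalized invariant above can be verified numerically along orbits of the $k^{th}$ order Lyness equation. A minimal sketch (the value of $k$, the parameter $\alpha$, and the initial conditions are chosen only for illustration):

```python
def lyness_orbit(alpha, init, steps):
    """Iterate x_{n+1} = (alpha + x_n + ... + x_{n-k+1}) / x_{n-k},
    where init = (x_{-k}, ..., x_0)."""
    k = len(init) - 1
    x = list(init)
    for _ in range(steps):
        x.append((alpha + sum(x[-k:])) / x[-k - 1])
    return x

def lyness_invariant(alpha, window):
    """The generalized invariant evaluated on k + 1 consecutive terms."""
    prod = 1.0
    for t in window:
        prod *= 1 / t + 1
    return prod * (alpha + sum(window))

alpha, k = 0.5, 3
x = lyness_orbit(alpha, (1.0, 2.0, 0.5, 3.0), 30)
c = lyness_invariant(alpha, x[:k + 1])
for n in range(len(x) - k):
    assert abs(lyness_invariant(alpha, x[n:n + k + 1]) - c) < 1e-8 * abs(c)
```

For $k = 1$ and $k = 2$ this reduces to the corresponding checks for Equations (1) and (2).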
In each case, the second order rational difference equation is transformed, via the invariant, to a family of first order linear fractional rational difference equations. Since the forbidden set and closed form solution are known for first order linear fractional rational difference equations, we use this information to find the forbidden set and closed form solution for the equations given in Section 4. For the reader’s convenience, the forbidden set and closed form solution for first order linear fractional rational difference equations are presented in Section 3. The Riccati Difference Equation =============================== In this section, we briefly summarize the known results on the Riccati difference equation, see [@gks], [@riccati] and [@kulenovicladas]. The branch cut for the complex square root will be taken to be the negative real numbers for the remainder of the article. Consider the rational difference equation $$x_{n+1}=\frac{\alpha + \beta x_{n}}{A+ Bx_{n}},\quad n=0,1,\dots,$$ where the parameters $\alpha , \beta , A, B$ and the initial condition $x_{0}$ are complex numbers. There are seven possibilities. 1. Suppose $A=B=0$, then $\mathfrak{F}=\mathbb{C}$. 2. Suppose $B=0$ and $A\neq 0$, then $x_{n+1}=\frac{\alpha + \beta x_{n}}{A}$. So $\mathfrak{F}=\emptyset$ and $x_{n}=\frac{\beta^{n}x_{0}}{A^{n}}+\sum^{n}_{i=1}\frac{\alpha \beta^{i-1}}{A^{i}}$ for all $n\geq 1$. 3. Suppose $B\neq 0$ and $\alpha B - \beta A = 0$, then the forbidden set, $\mathfrak{F}= \{\frac{-A}{B}\}$ and $x_{n}=\frac{\beta}{B}$ for all $n\geq 1$ whenever $x_{0}\not\in \{\frac{-A}{B}\}$. 4. Suppose $B\neq 0$, $\alpha B - \beta A \neq 0$, and $\beta +A =0$, then the forbidden set, $\mathfrak{F}= \{\frac{-A}{B}\}$. Furthermore, $x_{2n+1}=\frac{\alpha + \beta x_{0}}{A+ B x_{0}}$ for all $n\geq 0$, and $x_{2n}=x_{0}$ for all $n\geq 0$, whenever $x_{0}\not\in \{\frac{-A}{B}\}$. 5. 
Suppose $B\neq 0$, $\alpha B - \beta A \neq 0$, $\beta +A \neq 0$, and $\frac{\beta A - \alpha B}{(\beta + A)^{2}}\in \mathbb{C}\setminus [\frac{1}{4},\infty)$, and let $$w_{-}=\frac{1-\sqrt{1-4\left(\frac{\beta A - \alpha B}{(\beta + A)^{2}}\right)}}{2}\quad and \quad w_{+}=\frac{1+\sqrt{1-4\left(\frac{\beta A - \alpha B}{(\beta + A)^{2}}\right)}}{2},$$ then the forbidden set, $\mathfrak{F}= \left\{ \frac{\beta +A}{B}\left(\frac{w^{n-1}_{+}-w^{n-1}_{-}}{w^{n}_{+}-w^{n}_{-}}\right)w_{+}w_{-} - \frac{A}{B}|n\in\mathbb{N}\right\}$. Furthermore $$x_{n}=\frac{\beta +A}{B}\left(\frac{(\frac{Bx_{0}+A}{\beta + A}-w_{-})w^{n+1}_{+}+(w_{+}-\frac{Bx_{0}+A}{\beta + A})w^{n+1}_{-}}{(\frac{Bx_{0}+A}{\beta + A}-w_{-})w^{n}_{+}+(w_{+}-\frac{Bx_{0}+A}{\beta + A})w^{n}_{-}}\right) - \frac{A}{B},\quad for\; all\; n\in\mathbb{N},$$ whenever $x_{0}\not\in \left\{ \frac{\beta +A}{B}\left(\frac{w^{n-1}_{+}-w^{n-1}_{-}}{w^{n}_{+}-w^{n}_{-}}\right)w_{+}w_{-} - \frac{A}{B}|n\in\mathbb{N}\right\}$. 6. Suppose $B\neq 0$, $\alpha B - \beta A \neq 0$, $\beta +A \neq 0$, and $\frac{\beta A - \alpha B}{(\beta + A)^{2}}=\frac{1}{4}$, then the forbidden set, $\mathfrak{F}= \left\{ \frac{\beta +A}{B}\left(\frac{n-1}{2n}\right) - \frac{A}{B}|n\in\mathbb{N}\right\}$. Furthermore $$x_{n}=\frac{\beta +A}{B}\left(\frac{1+(\frac{2Bx_{0}+2A}{\beta + A}-1)(n+1)}{2+2(\frac{2Bx_{0}+2A}{\beta + A}-1)n}\right) - \frac{A}{B},\quad for\; all\; n\in\mathbb{N},$$ whenever $x_{0}\not\in \left\{ \frac{\beta +A}{B}\left(\frac{n-1}{2n}\right) - \frac{A}{B}|n\in\mathbb{N}\right\}$. 7. Suppose $B\neq 0$, $\alpha B - \beta A \neq 0$, $\beta +A \neq 0$, and $R=\frac{\beta A - \alpha B}{(\beta + A)^{2}}\in (\frac{1}{4},\infty)$, let $\phi = \arccos{\left(\frac{1}{2}\sqrt{\frac{1}{R}}\right)}$, then the forbidden set, $\mathfrak{F}= \left\{ \frac{\beta +A}{2B}\left(1-\sqrt{4R-1}\cot{\left(n\phi\right)}\right) - \frac{A}{B}|n\in\mathbb{N}\right\}$. 
Furthermore, for all $n\in\mathbb{N}$, $$x_{n}=\frac{\beta +A}{B}\left(\sqrt{R}\right)\left(\frac{\sqrt{4R-1}\cos{\left((n+1)\phi\right)}+(\frac{2Bx_{0}+2A}{\beta + A}-1)\sin{\left((n+1)\phi\right)}}{\sqrt{4R-1}\cos{\left(n\phi\right)}+(\frac{2Bx_{0}+2A}{\beta + A}-1)\sin{\left(n\phi\right)}}\right) - \frac{A}{B},$$ whenever $x_{0}\not\in \left\{ \frac{\beta +A}{2B}\left(1-\sqrt{4R-1}\cot{\left(n\phi\right)}\right) - \frac{A}{B}|n\in\mathbb{N}\right\}$. Using invariants to find forbidden sets ======================================= Here we present six rational difference equations, each of order 2, which possess algebraic invariants. The invariants allow for a reduction of order in each case so that the dynamics of the equation can be described by either a family of Riccati equations, or a family of linear equations. The examples here have nice algebraic properties which are conducive to this type of approach. A remaining question of importance which is left to further work is whether this approach can be adapted to yield forbidden sets and explicit closed form solutions for other rational equations. In the remainder of the article we make the notational convention of representing the set of initial conditions as a set in $\mathbb{C}^{2}$ with coordinates $(z_{0},z_{-1})$; this convention is needed to accurately describe the forbidden sets. Moreover, to accommodate the large formulae necessary to give a complete description of the forbidden sets, the forbidden sets are included in Figures 1 and 2. Consider the rational difference equation, $$z_{n+1}=\frac{z_{n}}{1+B z_{n-1}-B z_{n}},\quad n=0,1,\dots,$$ with $B\in \mathbb{C}\setminus \{0\}$ and with initial conditions $z_{0},z_{-1}\in \mathbb{C}$. Then the forbidden set, $\mathfrak{F}=S_{1}$, where $S_{1}$ is given in Figure 1. Also, given $(z_{0},z_{-1})\notin \mathfrak{F}$, there are four possibilities: i. $z_{0}=0$, in which case $z_{n}=0$ for all $n\geq 0$. ii. 
$z_{0}\neq 0$ and $z_{-1}=\frac{-1}{B}$, in which case $z_{2n+1}=\frac{-1}{B}$ for all $n\geq 0$ and $z_{2n}=\frac{-1}{2B+B^{2}z_{2n-2}}$ for all $n\geq 1$. Thus $$z_{2n}=\frac{n+2+nBz_{0}+Bz_{0}}{nB+B+nB^{2}z_{0}}-\frac{2}{B},\quad n\geq 0.$$ iii. $z_{0}=\frac{-1}{B}$, in which case $z_{2n}=\frac{-1}{B}$ for all $n\geq 0$ and $z_{2n+1}=\frac{-1}{2B+B^{2}z_{2n-1}}$ for all $n\geq 0$. Thus $$z_{2n+1}=\frac{n+3+nBz_{-1}+2Bz_{-1}}{nB+2B+nB^{2}z_{-1}+B^{2}z_{-1}}-\frac{2}{B},\quad n\geq -1.$$ iv. $z_{0}\neq 0, \frac{-1}{B}$ and $z_{-1}\neq \frac{-1}{B}$, in which case $z_{n+1}=\frac{1+Bz_{n}}{C-B-B^{2}z_{n}}$ for all $n\geq 0$, where $C=\left(\frac{1}{z_{0}} + B \right)\left(1+ B z_{-1} \right)$. This implies the following: a. If $\frac{B}{C}\in \mathbb{C}\setminus \left[\frac{1}{4},\infty\right)$, then call $C-B-B^{2}z_{0}-C\lambda_{1}=M_{1}$ and $C\lambda_{2}+B+B^{2}z_{0}-C=M_{2}$, and $$z_{n}=\frac{-C}{B^{2}}\left(\frac{M_{2}\lambda^{n+1}_{1}+M_{1}\lambda^{n+1}_{2}}{M_{2}\lambda^{n}_{1}+M_{1}\lambda^{n}_{2}}\right)+\frac{C}{B^{2}}-\frac{1}{B},\quad n\geq 0.$$ Where $$\lambda_{1}=\frac{1-\sqrt{1-\frac{4B}{C}}}{2},\quad and \quad \lambda_{2}=\frac{1+\sqrt{1-\frac{4B}{C}}}{2}.$$ b. If $C=4B$, then $$z_{n}=\frac{4+(n+1)\left(2-2Bz_{0} \right)}{nB^{2}z_{0}-2B-nB}+\frac{3}{B},\quad n\geq 0.$$ c. If $\frac{C}{B}\in (0,4)$, then call $\arccos\left(\sqrt{\frac{C}{4B}}\right)=\rho$, and for $n\geq 0$ we have, $$z_{n}=\frac{-\sqrt{\frac{C}{B}}}{B}\left(\frac{\left(\sqrt{\frac{4B}{C}-1}\right)\cos\left((n+1)\rho\right)+(2w_{0}-1)\sin\left((n+1)\rho\right)}{\left(\sqrt{\frac{4B}{C}-1}\right)\cos\left(n\rho\right)+(2w_{0}-1)\sin\left(n\rho\right)}\right)+\frac{C-B}{B^{2}}.$$ Where $$w_{0}=\frac{-B^{2}z_{0}+C-B}{C}.$$ Let us begin by finding the forbidden set for our Equation (4). Let $\mathfrak{F}_{1}$ be the forbidden set with $z_{0}=0$. Let $\mathfrak{F}_{2}$ be the forbidden set with $z_{0}\neq 0$ and $z_{-1}=\frac{-1}{B}$. 
Let $\mathfrak{F}_{3}$ be the forbidden set with $z_{0}=\frac{-1}{B}$. Let $\mathfrak{F}_{4}$ be the forbidden set with $z_{0}\neq 0,\frac{-1}{B}$ and $z_{-1}\neq\frac{-1}{B}$. Then the forbidden set $\mathfrak{F}=\bigcup^{4}_{i=1}\mathfrak{F}_{i}$. So we must find all $\mathfrak{F}_{i}$ with $1\leq i\leq 4$. Let us begin with $\mathfrak{F}_{1}$. We have assumed $z_{0}=0$. Let us further assume that $z_{-1}\neq \frac{-1}{B}$, then $z_{1}$ is well defined and equal to $0$. Whenever $(z_{n},z_{n-1})=(0,0)$, then $z_{n+1}$ is well defined and equal to $0$. Thus, by induction, $z_{n}$ is well defined and equal to $0$ for all $n\in\mathbb{N}$. Thus, if $z_{-1}\neq \frac{-1}{B}$, then $(0,z_{-1})\not\in \mathfrak{F}_{1}$. On the other hand, assume $z_{0}=0$ and $z_{-1}=\frac{-1}{B}$, then $z_{1}$ is not well defined and so $(0,\frac{-1}{B})\in \mathfrak{F}_{1}$. Thus, $\mathfrak{F}_{1}=\{(0,\frac{-1}{B})\}$. Now, let us find $\mathfrak{F}_{2}$. In this case we have assumed $z_{0}\neq 0$ and $z_{-1}=\frac{-1}{B}$. Assume $z_{2n}$ is well defined for $0\leq n\leq N$, then we may make an induction argument with $z_{-1}$ as the base case. Assume that $z_{2k+1}=\frac{-1}{B}$ for $k<N$, then, by our earlier assumption, $z_{2k+2}$ is well defined and $$z_{2k+2}= \frac{-1}{2B+B^{2}z_{2k}}\neq 0.$$ So, $$z_{2k+3}=\frac{z_{2k+2}}{1+B z_{2k+1}-B z_{2k+2}}= \frac{z_{2k+2}}{-B z_{2k+2}}= \frac{-1}{B}.$$ By this induction argument, $z_{2n+1}=\frac{-1}{B}$ and assuming $z_{2N+2}$ is well defined, $$z_{2n+2}= \frac{-1}{2B+B^{2}z_{2n}},$$ for $0\leq n\leq N$. 
Call the forbidden set of the following first order difference equation $\hat{\mathfrak{F}}$, $$x_{n+1}= \frac{-1}{2B+B^{2}x_{n}}, \quad n=0,1,2,\dots.$$ Now, suppose $z_{0}\neq 0$, $z_{-1}=\frac{-1}{B}$, and $z_{0}\not\in \hat{\mathfrak{F}}$, and assume that $z_{2n}$ is well defined for $n\leq N$, then $z_{2N+1}=\frac{-1}{B}$ and $$1+Bz_{2N}-Bz_{2N+1}=2 + Bz_{2N}=\frac{2B + B^{2}z_{2N}}{B}\neq 0,$$ since $z_{0}\not\in \hat{\mathfrak{F}}$. Thus, $z_{2n}$ is well defined for $n\leq N+1$. By induction, $z_{n}$ is well defined for all $n\in\mathbb{N}$. Thus, $(z_{0},\frac{-1}{B})\not\in \mathfrak{F}_{2}$. Now, suppose $z_{0}\neq 0$, $z_{-1}=\frac{-1}{B}$, and $z_{0}\in \hat{\mathfrak{F}}$. Further assume for the sake of contradiction that $(z_{0},\frac{-1}{B})\not\in \mathfrak{F}_{2}$. Then, since $(z_{0},\frac{-1}{B})\not\in \mathfrak{F}_{2}$, $z_{n}$ is well defined for all $n\in\mathbb{N}$, but also $2B+B^{2}z_{2N}=0$ for some $N\in\mathbb{N}$, since $z_{0}\in \hat{\mathfrak{F}}$. But then $$1+Bz_{2N}-Bz_{2N+1}=2 + Bz_{2N}=\frac{2B + B^{2}z_{2N}}{B}= 0.$$ This is a contradiction. Thus, $(z_{0},\frac{-1}{B})\in \mathfrak{F}_{2}$. So $\mathfrak{F}_{2}=\left(\hat{\mathfrak{F}}\setminus \{0\}\right)\times \{\frac{-1}{B}\}$. Next, let us find $\mathfrak{F}_{3}$. In this case we have assumed $z_{0}=\frac{-1}{B}$. Assume $z_{2n-1}$ is well defined for $1\leq n\leq N$, then we may make an induction argument with $z_{0}$ as the base case. Assume that $z_{2k}=\frac{-1}{B}$ for $k<N$, then, by our earlier assumption, $z_{2k+1}$ is well defined and $$z_{2k+1}= \frac{-1}{2B+B^{2}z_{2k}}\neq 0.$$ So, $$z_{2k+2}=\frac{z_{2k+1}}{1+B z_{2k}-B z_{2k+1}}= \frac{z_{2k+1}}{-B z_{2k+1}}= \frac{-1}{B}.$$ By this induction argument, $z_{2n}=\frac{-1}{B}$ and assuming $z_{2N+1}$ is well defined, $$z_{2n+1}= \frac{-1}{2B+B^{2}z_{2n}},$$ for $0\leq n\leq N$. 
Call the forbidden set of the following first order difference equation $\hat{\mathfrak{F}}$, $$x_{n+1}= \frac{-1}{2B+B^{2}x_{n}}, \quad n=0,1,2,\dots.$$ Now, suppose $z_{0}=\frac{-1}{B}$ and $z_{-1}\not\in \hat{\mathfrak{F}}$, and assume that $z_{2n-1}$ is well defined for $n\leq N$, then $z_{2N}=\frac{-1}{B}$ and $$1+Bz_{2N-1}-Bz_{2N}=2 + Bz_{2N-1}=\frac{2B + B^{2}z_{2N-1}}{B}\neq 0,$$ since $z_{-1}\not\in \hat{\mathfrak{F}}$. Thus, $z_{2n-1}$ is well defined for $n\leq N+1$. By induction, $z_{n}$ is well defined for all $n\in\mathbb{N}$. Thus, $(\frac{-1}{B},z_{-1})\not\in \mathfrak{F}_{3}$. Now, suppose $z_{0}=\frac{-1}{B}$ and $z_{-1}\in \hat{\mathfrak{F}}$. Further assume for the sake of contradiction that $(\frac{-1}{B},z_{-1})\not\in \mathfrak{F}_{3}$. Then, since $(\frac{-1}{B},z_{-1})\not\in \mathfrak{F}_{3}$, $z_{n}$ is well defined for all $n\in\mathbb{N}$, but also $2B+B^{2}z_{2N-1}=0$ for some $N\in\mathbb{N}$, since $z_{-1}\in \hat{\mathfrak{F}}$. But then $$1+Bz_{2N-1}-Bz_{2N}=2 + Bz_{2N-1}=\frac{2B + B^{2}z_{2N-1}}{B}= 0.$$ This is a contradiction. Thus, $(\frac{-1}{B},z_{-1})\in \mathfrak{F}_{3}$. So, $\mathfrak{F}_{3}= \{\frac{-1}{B}\} \times \hat{\mathfrak{F}}$. Finally, let us find $\mathfrak{F}_{4}$. In this case we have assumed $z_{0}\neq 0,\frac{-1}{B}$ and $z_{-1}\neq \frac{-1}{B}$. Assume $z_{n}$ is well defined for $n\leq N$, then a simple induction argument shows $z_{n}\neq 0$ for $1\leq n\leq N$. Using this we get that for $n<N$, $$\left(\frac{1}{z_{n}} + B \right)\left(1+ B z_{n-1} \right)= \left(\frac{1+B z_{n-1}}{z_{n}}\right) \left(1+ B z_{n} \right)=$$ $$\left(\frac{1+B z_{n-1}-Bz_{n}+Bz_{n}}{z_{n}}\right) \left(1+ B z_{n} \right)=$$ $$\left(\frac{1+B z_{n-1}-Bz_{n}}{z_{n}} + B\right) \left(1+ B z_{n} \right)=\left(\frac{1}{z_{n+1}} + B \right)\left(1+ B z_{n} \right).$$ So, $$\left(\frac{1}{z_{n}} + B \right)\left(1+ B z_{n-1} \right)=constant,$$ for $0\leq n\leq N$. 
Thus, $z_{n}\neq \frac{-1}{B}$ for $n\leq N$ and assuming $z_{N+1}$ is well defined, $$z_{n+1}= \frac{1+Bz_{n}}{C-B-B^{2}z_{n}},$$ for $0\leq n\leq N$. Where $C=\left(\frac{1}{z_{0}} + B \right)\left(1+ B z_{-1} \right)$. Call the forbidden set of the following first order difference equation $\mathfrak{F}_{C}$, $$x_{n+1}= \frac{1+Bx_{n}}{C-B-B^{2}x_{n}}, \quad n=0,1,2,\dots.$$ Note that the set $\mathfrak{F}_{C}$ changes depending on the value of $C$. Now, suppose $z_{0}\neq 0,\frac{-1}{B}$, $z_{-1}\neq \frac{-1}{B}$, $C=\left(\frac{1}{z_{0}} + B \right)\left(1+ B z_{-1} \right)$, and $z_{0}\not\in \mathfrak{F}_{C}$, and assume that $z_{n}$ is well defined for $n\leq N$. Recall that we have shown that this implies $z_{n}\neq \frac{-1}{B}$ for $n\leq N$. Then $$1+Bz_{N-1}-Bz_{N}=\frac{C}{B+\frac{1}{z_{N}}}-Bz_{N}=\frac{C-B - B^{2}z_{N}}{B+\frac{1}{z_{N}}}\neq 0,$$ since $z_{0}\not\in \mathfrak{F}_{C}$. Thus, $z_{n}$ is well defined for $n\leq N+1$. By induction, $z_{n}$ is well defined for all $n\in\mathbb{N}$. Thus, $(z_{0},z_{-1})\not\in \mathfrak{F}_{4}$. Now, suppose $z_{0}\neq 0,\frac{-1}{B}$, $z_{-1}\neq \frac{-1}{B}$, $C=\left(\frac{1}{z_{0}} + B \right)\left(1+ B z_{-1} \right)$, and $z_{0}\in \mathfrak{F}_{C}$. Further assume for the sake of contradiction that $(z_{0},z_{-1})\not\in \mathfrak{F}_{4}$. Then, since $(z_{0},z_{-1})\not\in \mathfrak{F}_{4}$, $z_{n}$ is well defined for all $n\in\mathbb{N}$, but also $C-B - B^{2}z_{N}=0$ for some $N\in\mathbb{N}$, since $z_{0}\in \mathfrak{F}_{C}$. But then $$1+Bz_{N-1}-Bz_{N}=\frac{C}{B+\frac{1}{z_{N}}}-Bz_{N}=\frac{C-B - B^{2}z_{N}}{B+\frac{1}{z_{N}}}= 0.$$ This is a contradiction. Thus, $(z_{0},z_{-1})\in \mathfrak{F}_{4}$. 
So, $$\mathfrak{F}_{4}=\bigcup_{b\neq \frac{-1}{B}} \bigcup_{C\neq 0} \left(\left(\left(\mathfrak{F}_{C}\setminus\left\{0,\frac{-1}{B}\right\}\right)\times \{b\}\right)\cap \left\{(a,b)|C=\left(\frac{1}{a} + B \right)\left(1+ B b \right)\right\}\right).$$ Reducing the above expression, we get $$\mathfrak{F}_{4}= \bigcup_{C\neq 0} \left\{ \left(a,\frac{Ca-Ba-1}{B^{2}a+B}\right)| a\in\mathfrak{F}_{C}\setminus\left\{0,\frac{-1}{B}\right\}\right\}.$$ From the above characterization of $\mathfrak{F}_{1},\dots ,\mathfrak{F}_{4} $ and from the facts about the forbidden sets of the Riccati difference equation in Section 3, we get $\mathfrak{F}=S_{1}$, where $S_{1}$ is given in Figure 1. Now, let us describe the behavior when $(z_{0},z_{-1})\notin \mathfrak{F}$. Our analysis of this case will be broken into four subcases as shown in the statement of Theorem 2. Let us first consider case (i). In this case $z_{0}=0$ and also, since $(z_{0},z_{-1})\notin \mathfrak{F}$, there will never be division by zero in our solution. It is immediately clear from these two facts and from a basic induction argument that $z_{n}=0$ for all $n\geq 0$. Now, let us consider case (ii). In this case $z_{0}\neq 0$ and $z_{-1}=\frac{-1}{B}$. Also, since $(z_{0},z_{-1})\notin \mathfrak{F}$, there will never be division by zero in our solution. This allows us to prove by induction that $z_{n}\neq 0$ for all $n\geq 0$. The induction argument for this piece is straightforward and is omitted. Now, we will show by induction that $z_{2n+1}=\frac{-1}{B}$ for all $n\geq 0$. Since $z_{0}\neq 0$, we have $$z_{1}= \frac{z_{0}}{1+B z_{-1}-B z_{0}}= \frac{z_{0}}{-B z_{0}}= \frac{-1}{B}.$$ This provides the base case for our induction argument. Now, suppose $z_{2n-1}=\frac{-1}{B}$, then since $z_{n}\neq 0$ for all $n\geq 0$, we have $$z_{2n+1}=\frac{z_{2n}}{1+B z_{2n-1}-B z_{2n}}= \frac{z_{2n}}{-B z_{2n}}= \frac{-1}{B}.$$ Thus, we have shown that $z_{2n+1}=\frac{-1}{B}$ for all $n\geq 0$. 
This fact and Equation (4) immediately yields, $$z_{2n}=\frac{z_{2n-1}}{1+B z_{2n-2}-B z_{2n-1}}= \frac{-1}{2B+B^{2}z_{2n-2}}, \quad n=1,2,\dots.$$ Thus, the even terms are defined recursively by the above equation. Notice that since we have only rewritten the recursive Equation (4) we cannot have division by zero in this equation with our choice of initial conditions. In other words, $z_{2n}\neq \frac{-2}{B}$ for any $n\geq 0$. Notice that this is a Riccati equation after a change of variables. Since we already know the closed form solution for any Riccati equation, we may obtain a closed form solution for $z_{2n}$, and thus a closed form solution for $z_{n}$ in this case. We use the known results for Riccati equations restated in Section 3 to obtain the closed form solutions in the statement of the theorem. Now, let us consider case (iii). In this case, $z_{0}=\frac{-1}{B}$. Also, since $(z_{0},z_{-1})\notin \mathfrak{F}$, there will never be division by zero in our solution. This allows us to prove by induction that $z_{n}\neq 0$ for all $n\geq 0$. The induction argument for this piece is straightforward and is omitted. Now, we will show by induction that $z_{2n}=\frac{-1}{B}$ for all $n\geq 0$. Since $z_{1}\neq 0$, we have $$z_{2}= \frac{z_{1}}{1+B z_{0}-B z_{1}}= \frac{z_{1}}{-B z_{1}}= \frac{-1}{B}.$$ This provides the base case for our induction argument. Now, suppose $z_{2n-2}=\frac{-1}{B}$, then since $z_{n}\neq 0$ for all $n\geq 0$, we have $$z_{2n}=\frac{z_{2n-1}}{1+B z_{2n-2}-B z_{2n-1}}= \frac{z_{2n-1}}{-B z_{2n-1}}= \frac{-1}{B}.$$ Thus, we have shown that $z_{2n}=\frac{-1}{B}$ for all $n\geq 0$. This fact and Equation (4) immediately yields, $$z_{2n+1}=\frac{z_{2n}}{1+B z_{2n-1}-B z_{2n}}= \frac{-1}{2B+B^{2}z_{2n-1}}, \quad n=0,1,2,\dots.$$ Thus, the odd terms are defined recursively by the above equation. 
Notice that since we have only rewritten the recursive Equation (4) we cannot have division by zero in this equation with our choice of initial conditions. In other words, $z_{2n-1}\neq \frac{-2}{B}$ for any $n\geq 0$. Notice that this is a Riccati equation after a change of variables. Since we already know the closed form solution for any Riccati equation, we may obtain a closed form solution for $z_{2n+1}$, and thus a closed form solution for $z_{n}$ in this case. We use the known results for Riccati equations restated in Section 3 to obtain the closed form solutions in the statement of the theorem. Let us finally consider case (iv). In this case $z_{0}\neq 0, \frac{-1}{B}$ and $z_{-1}\neq \frac{-1}{B}$. Also, since $(z_{0},z_{-1})\notin \mathfrak{F}$, there will never be division by zero in our solution. This allows us to prove by induction that $z_{n}\neq 0$ for all $n\geq 0$. The induction argument for this piece is straightforward and is omitted. Since there is never division by zero in our solution, and since $z_{n}\neq 0$ for all $n\geq 0$, the following algebraic computation is well defined. $$\left(\frac{1}{z_{n}} + B \right)\left(1+ B z_{n-1} \right)= \left(\frac{1+B z_{n-1}}{z_{n}}\right) \left(1+ B z_{n} \right)=$$ $$\left(\frac{1+B z_{n-1}-Bz_{n}+Bz_{n}}{z_{n}}\right) \left(1+ B z_{n} \right)=$$ $$\left(\frac{1+B z_{n-1}-Bz_{n}}{z_{n}} + B\right) \left(1+ B z_{n} \right)=\left(\frac{1}{z_{n+1}} + B \right)\left(1+ B z_{n} \right).$$ Thus, we have the following algebraic invariant: $$\left(\frac{1}{z_{n}} + B \right)\left(1+ B z_{n-1} \right)=constant.$$ For our fixed but arbitrary initial conditions with $z_{0}\neq 0, \frac{-1}{B}$ and $z_{-1}\neq \frac{-1}{B}$ let us denote $C=\left(\frac{1}{z_{0}} + B \right)\left(1+ B z_{-1} \right)$. Since $z_{0}\neq 0$, $C$ is well defined, and since $z_{0},z_{-1}\neq \frac{-1}{B}$, $C\neq 0$. Since $C\neq 0$, this forces $z_{n}\neq \frac{-1}{B}$ for all $n\geq 0$. 
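The invariant just derived, and the reduction it produces, can be confirmed numerically: along a generic orbit of Equation (4) the quantity $(\frac{1}{z_{n}} + B)(1+ B z_{n-1})$ is constant, and the resulting Riccati equation reproduces the orbit. A minimal sketch (the complex value of $B$ and the initial conditions are arbitrary illustrative choices):

```python
B = 0.5 + 0.25j                 # illustrative nonzero parameter
z = [0.3 - 0.1j, 1.2 + 0.4j]    # (z_{-1}, z_0), generic initial conditions

# iterate Equation (4): z_{n+1} = z_n / (1 + B z_{n-1} - B z_n)
for _ in range(12):
    z.append(z[-1] / (1 + B * z[-2] - B * z[-1]))

# the invariant (1/z_n + B)(1 + B z_{n-1}) is constant along the orbit
C = (1 / z[1] + B) * (1 + B * z[0])
for n in range(1, len(z)):
    assert abs((1 / z[n] + B) * (1 + B * z[n - 1]) - C) < 1e-9

# the invariant reduces Equation (4) to the Riccati equation
# z_{n+1} = (1 + B z_n) / (C - B - B^2 z_n), which reproduces the orbit
w = z[1]
for n in range(2, len(z)):
    w = (1 + B * w) / (C - B - B * B * w)
    assert abs(w - z[n]) < 1e-9
```

The analogous check for Equation (5) uses the invariant $(\frac{1}{z_{n}} + B)(\frac{1}{z_{n-1}})$ in place of the one above.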
Now, we claim that since $(z_{0},z_{-1})\notin \mathfrak{F}$, $z_{n}\neq \frac{C-B}{B^{2}}$ for all $n\geq 0$. In the case where $C=B$ it follows from the fact that $z_{n}\neq 0$ for all $n\geq 0$. In the remaining case, suppose there were such an $N$, then: $$\left(\frac{B^{2}}{C-B} + B \right)\left(1+ B z_{N-1} \right)=C.$$ This implies that $z_{N}=\frac{C-B}{B^{2}}$ and $z_{N-1}= \frac{C-2B}{B^{2}}$, but then $1+Bz_{N-1}-Bz_{N}= 0$. However, this contradicts the fact that $(z_{0},z_{-1})\notin \mathfrak{F}$. Thus, we have that $z_{n}\neq \frac{C-B}{B^{2}}$ for all $n\geq 0$. Algebraic manipulations of our invariant yield the following: $$z_{n}=\frac{1+Bz_{n-1}}{C-B-B^{2}z_{n-1}},\quad n\geq 1.$$ Since $z_{n}\neq \frac{C-B}{B^{2}}$ for all $n\geq 0$, this equation is well-defined for all $n\geq 1$. Thus the dynamics of $\{z_{n}\}^{\infty}_{n=-1}$ are given by a Riccati equation in this case. Since we already know the closed form solution for any Riccati equation, we may obtain a closed form solution for $\{z_{n}\}^{\infty}_{n=-1}$ in this case. We use the known results for Riccati equations restated in Section 3 to obtain the closed form solutions in the statement of the theorem. Consider the rational difference equation, $$z_{n+1}=\frac{z_{n-1}}{1+B z_{n}-B z_{n-1}},\quad n=0,1,\dots,$$ with $B\in \mathbb{C}\setminus \{0\}$ and with initial conditions $z_{0},z_{-1}\in \mathbb{C}$. Then the forbidden set, $\mathfrak{F}=S_{2}$, where $S_{2}$ is given in Figure 1. Also, given $(z_{0},z_{-1})\notin \mathfrak{F}$, there are four possibilities: i. $z_{0}=0$, in which case $z_{2n}=0$ for all $n\geq 0$ and $z_{2n+1}=\frac{z_{2n-1}}{1-Bz_{2n-1}}$ for all $n\geq 0$. This implies $$z_{2n+1}=\frac{1}{-B}\left(\frac{1-(n+2)Bz_{-1}}{1-(n+1)Bz_{-1}}\right)+\frac{1}{B},\quad n\geq -1.$$ ii. $z_{-1}=0$, in which case $z_{2n+1}=0$ for all $n\geq 0$ and $z_{2n}=\frac{z_{2n-2}}{1-Bz_{2n-2}}$ for all $n\geq 1$. 
This implies $$z_{2n}=\frac{1}{-B}\left(\frac{1-(n+1)Bz_{0}}{1-nBz_{0}}\right)+\frac{1}{B},\quad n\geq 0.$$ iii. $z_{0}=\frac{-1}{B}$, in which case $z_{n}=\frac{-1}{B}$ for all $n\geq 0$. iv. $z_{0}\neq 0, \frac{-1}{B}$ and $z_{-1}\neq 0$, in which case $z_{n+1}=\frac{1}{Cz_{n}-B}$ for all $n\geq 0$, where $C=\left(\frac{1}{z_{0}} + B \right)\left(\frac{1}{z_{-1}} \right)$. This implies the following: a. If $\frac{-C}{B^{2}}\in \mathbb{C}\setminus \left[\frac{1}{4},\infty\right)$, then $$z_{n}=\frac{-B}{C}\left(\frac{(B\lambda_{2}+Cz_{0}-B)\lambda^{n+1}_{1}+(B-Cz_{0}-B\lambda_{1})\lambda^{n+1}_{2}}{(B\lambda_{2}+Cz_{0}-B)\lambda^{n}_{1}+(B-Cz_{0}-B\lambda_{1})\lambda^{n}_{2}}\right)+\frac{B}{C},\quad n\geq 0.$$ Where $$\lambda_{1}=\frac{1-\sqrt{1+\frac{4C}{B^{2}}}}{2},\quad and \quad \lambda_{2}=\frac{1+\sqrt{1+\frac{4C}{B^{2}}}}{2}.$$ b. If $\frac{-C}{B^{2}}=\frac{1}{4}$, then $$z_{n}=\frac{-B}{C}\left(\frac{-B+(n+1)\left(2Cz_{0}-B \right)}{-2B+4nCz_{0}-2nB}\right)+\frac{B}{C},\quad n\geq 0.$$ c. If $\frac{-C}{B^{2}}\in \left(\frac{1}{4},\infty\right)$, then call $B\sqrt{\frac{-4C}{B^{2}}-1}=D$ and call $\arccos\left(\sqrt{\frac{-B^{2}}{4C}}\right)=\rho$ and for $n\geq 0$, we have $$z_{n}=\sqrt{\frac{-B^{2}}{C}}\left(\frac{D\cos\left((n+1)\rho\right)+(B-2Cz_{0})\sin\left((n+1)\rho\right)}{BD\cos\left(n\rho\right)+(B^{2}-2CBz_{0})\sin\left(n\rho\right)}\right)+\frac{B}{C}.$$ We begin by finding the forbidden set for our Equation (5). Let $\mathfrak{F}_{1}$ be the forbidden set with $z_{0}=0$. Let $\mathfrak{F}_{2}$ be the forbidden set with $z_{-1}=0$. Let $\mathfrak{F}_{3}$ be the forbidden set with $z_{0}=\frac{-1}{B}$. Let $\mathfrak{F}_{4}$ be the forbidden set with $z_{0}\neq 0,\frac{-1}{B}$ and $z_{-1}\neq 0$. Then the forbidden set $\mathfrak{F}=\bigcup^{4}_{i=1}\mathfrak{F}_{i}$. So we must find all $\mathfrak{F}_{i}$ with $1\leq i\leq 4$. Let us begin with $\mathfrak{F}_{1}$. We have assumed $z_{0}=0$. 
Assume $z_{n}$ is well defined for $0\leq n\leq 2N$, then by a simple induction argument $z_{2n}=0$ for $0\leq n\leq N$ and so assuming $z_{2N+1}$ is well defined, $$z_{2n+1}= \frac{z_{2n-1}}{1-Bz_{2n-1}},$$ for $0\leq n\leq N$. Call the forbidden set of the following first order difference equation $\hat{\mathfrak{F}}$, $$x_{n+1}= \frac{x_{n}}{1-Bx_{n}}, \quad n=0,1,2,\dots.$$ Now, suppose $z_{0}=0$ and $z_{-1}\not\in \hat{\mathfrak{F}}$, and assume that $z_{n}$ is well defined for $n\leq 2N$, then $z_{2N}=0$ and $$1+Bz_{2N}-Bz_{2N-1}=1 - Bz_{2N-1}\neq 0,$$ since $z_{-1}\not\in \hat{\mathfrak{F}}$. Thus, $z_{n}$ is well defined for $n\leq 2N+1$ and $$z_{2N+1}= \frac{z_{2N-1}}{1-Bz_{2N-1}}.$$ So, $$1+Bz_{2N+1}-Bz_{2N}=1 + Bz_{2N+1}= \frac{1-Bz_{2N-1} + Bz_{2N-1}}{1-Bz_{2N-1}}= \frac{1}{1-Bz_{2N-1}}\neq 0,$$ since $1 - Bz_{2N-1}\neq 0$. Thus $z_{n}$ is well defined for $n\leq 2N+2$. By induction $z_{n}$ is well defined for all $n\in\mathbb{N}$. Thus $(0,z_{-1})\not\in \mathfrak{F}_{1}$. Now suppose $z_{0}=0$ and $z_{-1}\in \hat{\mathfrak{F}}$. Further assume for the sake of contradiction that $(0,z_{-1})\not\in \mathfrak{F}_{1}$. Then, since $(0,z_{-1})\not\in \mathfrak{F}_{1}$, $z_{n}$ is well defined for all $n\in\mathbb{N}$, but also $1-Bz_{2N-1}=0$ for some $N\in\mathbb{N}$ since $z_{-1}\in \hat{\mathfrak{F}}$. But then $$1+Bz_{2N}-Bz_{2N-1}=1 - Bz_{2N-1}= 0.$$ This is a contradiction. Thus $(0,z_{-1})\in \mathfrak{F}_{1}$. So $\mathfrak{F}_{1}=\{0\}\times \hat{\mathfrak{F}} $. Next, let us find $\mathfrak{F}_{2}$. In this case, we have assumed $z_{-1}=0$. Assume $z_{n}$ is well defined for $0\leq n\leq 2N+1$, then by a simple induction argument $z_{2n+1}=0$ for $0\leq n\leq N$ and so assuming $z_{2N+2}$ is well defined, $$z_{2n+2}= \frac{z_{2n}}{1-Bz_{2n}},$$ for $0\leq n\leq N$. 
Call the forbidden set of the following first order difference equation $\hat{\mathfrak{F}}$, $$x_{n+1}= \frac{x_{n}}{1-Bx_{n}}, \quad n=0,1,2,\dots.$$ Now, suppose $z_{-1}=0$ and $z_{0}\not\in \hat{\mathfrak{F}}$ and assume that $z_{n}$ is well defined for $n\leq 2N+1$, then $z_{2N+1}=0$ and $$1+Bz_{2N+1}-Bz_{2N}=1 - Bz_{2N}\neq 0,$$ since $z_{0}\not\in \hat{\mathfrak{F}}$. Thus, $z_{n}$ is well defined for $n\leq 2N+2$ and $$z_{2N+2}= \frac{z_{2N}}{1-Bz_{2N}}.$$ So, $$1+Bz_{2N+2}-Bz_{2N+1}=1 + Bz_{2N+2}= \frac{1-Bz_{2N} + Bz_{2N}}{1-Bz_{2N}}= \frac{1}{1-Bz_{2N}}\neq 0,$$ since $1 - Bz_{2N}\neq 0$. Thus, $z_{n}$ is well defined for $n\leq 2N+3$. By induction $z_{n}$ is well defined for all $n\in\mathbb{N}$. Thus, $(z_{0},0)\not\in \mathfrak{F}_{2}$. Now, suppose $z_{-1}=0$ and $z_{0}\in \hat{\mathfrak{F}}$. Further assume for the sake of contradiction that $(z_{0},0)\not\in \mathfrak{F}_{2}$. Then, since $(z_{0},0)\not\in \mathfrak{F}_{2}$, $z_{n}$ is well defined for all $n\in\mathbb{N}$, but also $1-Bz_{2N}=0$ for some $N\in\mathbb{N}$, since $z_{0}\in \hat{\mathfrak{F}}$. But then $$1+Bz_{2N+1}-Bz_{2N}=1 - Bz_{2N}= 0.$$ This is a contradiction. Thus, $(z_{0},0)\in \mathfrak{F}_{2}$. So $\mathfrak{F}_{2}= \hat{\mathfrak{F}}\times \{0\} $. Now, let us find $\mathfrak{F}_{3}$. We have assumed $z_{0}=\frac{-1}{B}$. Let us further assume that $z_{-1}\neq 0$, then $z_{1}$ is well defined and equal to $\frac{-1}{B}$. Whenever $(z_{n},z_{n-1})=(\frac{-1}{B},\frac{-1}{B})$ then $z_{n+1}$ is well defined and equal to $\frac{-1}{B}$. Thus, by induction, $z_{n}$ is well defined and equal to $\frac{-1}{B}$ for all $n\in\mathbb{N}$. Thus, if $z_{-1}\neq 0$, then $(\frac{-1}{B},z_{-1})\not\in \mathfrak{F}_{3}$. On the other hand, assume $z_{0}=\frac{-1}{B}$ and $z_{-1}=0$. Then $z_{1}$ is not well defined, and so $(\frac{-1}{B},0)\in \mathfrak{F}_{3}$. Thus, $\mathfrak{F}_{3}=\{(\frac{-1}{B},0)\}$. Finally, let us find $\mathfrak{F}_{4}$. 
In this case we have assumed $z_{0}\neq 0,\frac{-1}{B}$ and $z_{-1}\neq 0$. Assume $z_{n}$ is well defined for $n\leq N$, then a simple induction argument shows $z_{n}\neq 0$ for $n\leq N$, and so for $n < N$, $$\left(\frac{1}{z_{n+1}} + B \right)\left(\frac{1}{z_{n}} \right)= \left(\frac{1+ B z_{n} - B z_{n-1}}{z_{n-1}} + B \right) \left(\frac{1}{ z_{n}} \right)=$$ $$\left(\frac{1+ B z_{n}}{z_{n-1}} \right) \left(\frac{1}{ z_{n}} \right)=\left(\frac{1+ B z_{n}}{z_{n}} \right) \left(\frac{1}{ z_{n-1}} \right)=\left(\frac{1}{z_{n}} + B \right)\left(\frac{1}{z_{n-1}} \right).$$ So, $$\left(\frac{1}{z_{n}} + B \right)\left(\frac{1}{z_{n-1}} \right)=constant,$$ for $0\leq n\leq N$. Thus, $z_{n}\neq \frac{-1}{B}$ for $n\leq N$ and assuming $z_{N+1}$ is well defined, $$z_{n+1}= \frac{1}{Cz_{n}-B},$$ for $0\leq n\leq N$. Where $C=\left(\frac{1}{z_{0}} + B \right)\left(\frac{1}{z_{-1}} \right)$. Call the forbidden set of the following first order difference equation $\mathfrak{F}_{C}$, $$x_{n+1}= \frac{1}{Cx_{n}-B}, \quad n=0,1,2,\dots.$$ Note that the set $\mathfrak{F}_{C}$ changes depending on the value of $C$. Now, suppose $z_{0}\neq 0,\frac{-1}{B}$, $z_{-1}\neq 0$, $C=\left(\frac{1}{z_{0}} + B \right)\left(\frac{1}{z_{-1}} \right)$, and $z_{0}\not\in \mathfrak{F}_{C}$, and assume that $z_{n}$ is well defined for $n\leq N$. Then, notice that $$z_{N-1}=\frac{1+Bz_{N}}{Cz_{N}}.$$ Thus $$1+Bz_{N}-Bz_{N-1}=1+Bz_{N}-\left(\frac{B+B^{2}z_{N}}{Cz_{N}}\right)=\frac{Cz_{N}+BCz^{2}_{N}-B-B^{2}z_{N}}{Cz_{N}}$$$$=\frac{(Cz_{N}-B)(Bz_{N}+1)}{Cz_{N}}\neq 0,$$ since $z_{0}\not\in \mathfrak{F}_{C}$ and $z_{n}\neq 0,\frac{-1}{B}$ for all $n\leq N$. Thus, $z_{n}$ is well defined for $n\leq N+1$. By induction, $z_{n}$ is well defined for all $n\in\mathbb{N}$. Thus, $(z_{0},z_{-1})\not\in \mathfrak{F}_{4}$. Now, suppose $z_{0}\neq 0,\frac{-1}{B}$, $z_{-1}\neq 0$, $C=\left(\frac{1}{z_{0}} + B \right)\left(\frac{1}{z_{-1}} \right)$, and $z_{0}\in \mathfrak{F}_{C}$. 
Further assume, for the sake of contradiction, that $(z_{0},z_{-1})\not\in \mathfrak{F}_{4}$. Then $z_{n}$ is well defined for all $n\in\mathbb{N}$, but also $Cz_{N}-B=0$ for some $N\in\mathbb{N}$, since $z_{0}\in \mathfrak{F}_{C}$. But then $$1+Bz_{N}-Bz_{N-1}=1+Bz_{N}-\left(\frac{B+B^{2}z_{N}}{Cz_{N}}\right)=\frac{(Cz_{N}-B)(Bz_{N}+1)}{Cz_{N}}= 0.$$ This is a contradiction. Thus, $(z_{0},z_{-1})\in \mathfrak{F}_{4}$. So, $$\mathfrak{F}_{4}=\bigcup_{b\neq 0} \bigcup_{C\neq 0} \left(\left(\left(\mathfrak{F}_{C}\setminus\left\{0,\frac{-1}{B}\right\}\right)\times \{b\}\right)\cap \left\{(a,b)|C=\left(\frac{1}{a} + B \right)\left(\frac{1}{b} \right)\right\}\right).$$ Reducing the above expression, we get $$\mathfrak{F}_{4}= \bigcup_{C\neq 0} \left\{ \left(a,\frac{Ba+1}{Ca}\right)| a\in\mathfrak{F}_{C}\setminus\left\{0,\frac{-1}{B}\right\}\right\}.$$ From the above characterization of $\mathfrak{F}_{1},\dots ,\mathfrak{F}_{4}$ and from the facts about the forbidden sets of the Riccati difference equation in Section 3, we get $\mathfrak{F}=S_{2}$, where $S_{2}$ is given in Figure 1. Now, we will describe the behavior when $(z_{0},z_{-1})\notin \mathfrak{F}$. Our analysis of this case will be broken into four subcases, as shown in the statement of Theorem 3. Let us first consider case (i). In this case, $z_{0}=0$, and since $(z_{0},z_{-1})\notin \mathfrak{F}$ there will never be division by zero in our solution. It is immediately clear from these two facts and from a basic induction argument that $z_{2n}=0$ for all $n\geq 0$. This fact and Equation (5) immediately yield $$z_{2n+1}=\frac{z_{2n-1}}{1+Bz_{2n}-Bz_{2n-1}}=\frac{z_{2n-1}}{1-Bz_{2n-1}}, \quad n=0,1,2,\dots.$$ Thus, the odd terms are defined recursively by the above equation. Notice that since we have only rewritten the recursive Equation (5), we cannot have division by zero in this equation with our choice of initial conditions.
In other words, $z_{2n+1}\neq \frac{1}{B}$ for any $n\geq 0$. Notice that this is a Riccati equation after a change of variables. Since we already know the closed form solution for any Riccati equation, we may obtain a closed form solution for $z_{2n+1}$, and thus a closed form solution for $z_{n}$ in this case. We use the known results for Riccati equations restated in Section 3 to obtain the closed form solutions in the statement of the theorem. Now, let us consider case (ii). In this case $z_{-1}=0$, and since $(z_{0},z_{-1})\notin \mathfrak{F}$ there will never be division by zero in our solution. It is immediately clear from these two facts and from a basic induction argument that $z_{2n+1}=0$ for all $n\geq 0$. This fact and Equation (5) immediately yield $$z_{2n}=\frac{z_{2n-2}}{1+Bz_{2n-1}-Bz_{2n-2}}=\frac{z_{2n-2}}{1-Bz_{2n-2}}, \quad n=1,2,\dots.$$ Thus, the even terms are defined recursively by the above equation. Notice that since we have only rewritten the recursive Equation (5), we cannot have division by zero in this equation with our choice of initial conditions. In other words, $z_{2n}\neq \frac{1}{B}$ for any $n\geq 0$. Notice that this is a Riccati equation after a change of variables. Since we already know the closed form solution for any Riccati equation, we may obtain a closed form solution for $z_{2n}$, and thus a closed form solution for $z_{n}$ in this case. We use the known results for Riccati equations restated in Section 3 to obtain the closed form solutions in the statement of the theorem. Now, let us consider case (iii). In this case, $z_{0}=\frac{-1}{B}$. Also, since $(z_{0},z_{-1})\notin \mathfrak{F}$, there will never be division by zero in our solution. This allows us to prove by induction that $z_{n} = \frac{-1}{B}$ for all $n\geq 0$. The case $z_{0}= \frac{-1}{B}$ provides the base case.
Assume that $z_{n}= \frac{-1}{B}$. Since there is never division by zero, Equation (5) gives $$z_{n+1}=\frac{z_{n-1}}{1+B z_{n}-B z_{n-1}}= \frac{z_{n-1}}{-Bz_{n-1}}= \frac{-1}{B}.$$ Thus we have shown that $z_{n} = \frac{-1}{B}$ for all $n\geq 0$. Let us finally consider case (iv). In this case, $z_{0}\neq 0, \frac{-1}{B}$ and $z_{-1}\neq 0$. Also, since $(z_{0},z_{-1})\notin \mathfrak{F}$, there will never be division by zero in our solution. This allows us to prove by induction that $z_{n}\neq 0$ for all $n\geq 0$. The induction argument for this piece is straightforward and is omitted. Since there is never division by zero in our solution and since $z_{n}\neq 0$ for all $n\geq 0$, the following algebraic computation is well defined: $$\left(\frac{1}{z_{n+1}} + B \right)\left(\frac{1}{z_{n}} \right)= \left(\frac{1+ B z_{n} - B z_{n-1}}{z_{n-1}} + B \right) \left(\frac{1}{ z_{n}} \right)= \left(\frac{1+ B z_{n}}{z_{n-1}} \right) \left(\frac{1}{ z_{n}} \right)=$$ $$\left(\frac{1+ B z_{n}}{z_{n}} \right) \left(\frac{1}{ z_{n-1}} \right)=\left(\frac{1}{z_{n}} + B \right)\left(\frac{1}{z_{n-1}} \right).$$ Thus, we have the following algebraic invariant: $$\left(\frac{1}{z_{n}} + B \right)\left(\frac{1}{z_{n-1}} \right)=constant.$$ For our fixed but arbitrary initial conditions with $z_{0}\neq 0, \frac{-1}{B}$ and $z_{-1}\neq 0$, let us denote $C=\left(\frac{1}{z_{0}} + B \right)\left(\frac{1}{ z_{-1}} \right)$. Since $z_{0},z_{-1}\neq 0$, $C$ is well defined, and since $z_{0}\neq \frac{-1}{B}$, $C\neq 0$. Since $C\neq 0$, this forces $z_{n}\neq \frac{-1}{B}$ for all $n\geq 0$. Now, we claim that since $(z_{0},z_{-1})\notin \mathfrak{F}$, $z_{n}\neq \frac{B}{C}$ for all $n\geq 0$. In the case where $C=-B^{2}$, the claim follows from the fact that $\frac{B}{C}=\frac{-1}{B}$ and $z_{n}\neq \frac{-1}{B}$ for all $n\geq 0$.
In the remaining case, suppose there were some $N\geq 0$ with $z_{N}=\frac{B}{C}$; then $$\left(\frac{C}{B} + B \right)\left(\frac{1}{z_{N-1}} \right)=C.$$ This implies that $z_{N-1}= \frac{B^{2}+C}{CB}$, but then $1+Bz_{N}-Bz_{N-1}= 0$. However, this contradicts the fact that $(z_{0},z_{-1})\notin \mathfrak{F}$. Thus, we have that $z_{n}\neq \frac{B}{C}$ for all $n\geq 0$. Algebraic manipulations of our invariant yield the following: $$z_{n}=\frac{1}{Cz_{n-1}- B},\quad n\geq 1.$$ Since $z_{n}\neq \frac{B}{C}$ for all $n\geq 0$, this equation is well defined for all $n\geq 1$. Thus, the dynamics of $\{z_{n}\}^{\infty}_{n=-1}$ are given by a Riccati equation in this case. Since we already know the closed form solution for any Riccati equation, we may obtain a closed form solution for $\{z_{n}\}^{\infty}_{n=-1}$ in this case. We use the known results for Riccati equations restated in Section 3 to obtain the closed form solutions in the statement of the theorem.

\textbf{Theorem 4.} Consider the rational difference equation, $$z_{n+1}=\frac{z^{2}_{n}+Bz_{n} - Bz_{n-1}}{z_{n-1}},\quad n=0,1,\dots,$$ with $B\in \mathbb{C}$ and with initial conditions $z_{0},z_{-1}\in \mathbb{C}$. Then the forbidden set is $\mathfrak{F}=S_{3}$, where $S_{3}$ is given in Figure 2. Also, given $(z_{0},z_{-1})\notin \mathfrak{F}$, $z_{n+1}=Cz_{n}-B$ for all $n\geq 0$, where $C=\frac{z_{0} + B }{z_{-1}}$. This implies $$z_{n}=C^{n}z_{0}-\sum^{n}_{k=1}C^{k-1}B,\quad n\geq 1.$$

\textbf{Proof.} Let us first consider the case where $(z_{0},z_{-1})\notin \mathfrak{F}$. In this case clearly $z_{n}\neq 0$ for $n\geq -1$, or else we would have division by zero.
Since $z_{n}\neq 0$ for all $n\geq -1$, the following algebraic computation is well defined: $$\frac{z_{n+1}+B}{z_{n}}=\frac{\frac{z^{2}_{n}+Bz_{n} - Bz_{n-1}}{z_{n-1}}+B}{z_{n}}= \frac{z^{2}_{n}+Bz_{n}}{z_{n}z_{n-1}}= \frac{z_{n}+B}{z_{n-1}}.$$ Thus, we have the following algebraic invariant: $$\frac{z_{n}+B}{z_{n-1}}=constant.$$ For our fixed but arbitrary initial conditions, let us denote $C=\frac{z_{0} + B }{z_{-1}}$. Since $z_{-1}\neq 0$, $C$ is well defined. Algebraic manipulations of our invariant yield the following: $$z_{n}=Cz_{n-1}-B,\quad n\geq 1.$$ Thus, the dynamics of $\{z_{n}\}^{\infty}_{n=-1}$ are given by a linear equation in this case. Since we already know the closed form solution for any linear equation, we may obtain a closed form solution for $\{z_{n}\}^{\infty}_{n=-1}$ in this case. The resulting closed form solution is $$z_{n}=C^{n}z_{0}-\sum^{n}_{k=1}C^{k-1}B,\quad n\geq 1.$$ Now, we must find the forbidden set for our Equation (6). Let $\mathfrak{F}$ be the forbidden set. Suppose $z_{0},z_{-1}\neq 0$, $C=\frac{z_{0} + B }{z_{-1}}\neq 1$, and $z_{0}\not\in \left\{\frac{B-BC^{n}}{C^{n}-C^{n+1}}|n\in\mathbb{N}\right\}$, and assume that $z_{n}$ is well defined for $n\leq N$. Then $$z_{N-1}\neq 0,$$ since $z_{0}\not\in \{\frac{B-BC^{n}}{C^{n}-C^{n+1}}|n\in\mathbb{N}\}$. Thus, $z_{n}$ is well defined for $n\leq N+1$. By induction, $z_{n}$ is well defined for all $n\in\mathbb{N}$. Thus, $(z_{0},z_{-1})\not\in \mathfrak{F}$. Now, suppose $z_{0},z_{-1}\neq 0$, $C=\frac{z_{0} + B }{z_{-1}}\neq 1$, and $z_{0}\in \left\{\frac{B-BC^{n}}{C^{n}-C^{n+1}}|n\in\mathbb{N}\right\}$. Further assume, for the sake of contradiction, that $(z_{0},z_{-1})\not\in \mathfrak{F}$. Then $z_{n}$ is well defined for all $n\in\mathbb{N}$, but also $z_{N}=0$ for some $N\in\mathbb{N}$, since $z_{0}\in \left\{\frac{B-BC^{n}}{C^{n}-C^{n+1}}|n\in\mathbb{N}\right\}$. This is a contradiction. Thus, $(z_{0},z_{-1})\in \mathfrak{F}$.
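The $C\neq 1$ case just treated can be exercised numerically. In the sketch below (Python is assumed; $B=3$, $z_{-1}=1$, $z_{0}=2$ are arbitrary admissible choices, giving $C=5\neq 1$), direct iteration of Equation (6) is compared term by term against the closed form $z_{n}=C^{n}z_{0}-\sum_{k=1}^{n}C^{k-1}B$:

```python
from fractions import Fraction

def iterate_eq6(z0, zm1, B, steps):
    """Iterate z_{n+1} = (z_n^2 + B*z_n - B*z_{n-1}) / z_{n-1}  (Equation (6))."""
    zs = [zm1, z0]
    for _ in range(steps):
        prev, cur = zs[-2], zs[-1]
        zs.append((cur * cur + B * cur - B * prev) / prev)
    return zs

B, z0, zm1 = Fraction(3), Fraction(2), Fraction(1)
C = (z0 + B) / zm1          # value of the invariant (z_n + B)/z_{n-1}
zs = iterate_eq6(z0, zm1, B, 8)

def closed_form(n):
    return C**n * z0 - sum(C**(k - 1) * B for k in range(1, n + 1))

# zs[n + 1] holds z_n; the closed form matches the iteration term by term.
assert all(zs[n + 1] == closed_form(n) for n in range(9))
```

With these initial conditions every $z_{n}$ is nonzero, so the orbit never meets the forbidden set and the comparison is well defined at every step.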
Suppose $z_{0},z_{-1}\neq 0$, $C=\frac{z_{0} + B }{z_{-1}}= 1$, and $z_{0}\not\in \left\{nB|n\in\mathbb{N}\right\}$, and assume that $z_{n}$ is well defined for $n\leq N$. Then $$z_{N-1}\neq 0,$$ since $z_{0}\not\in \{nB|n\in\mathbb{N}\}$. Thus, $z_{n}$ is well defined for $n\leq N+1$. By induction, $z_{n}$ is well defined for all $n\in\mathbb{N}$. Thus, $(z_{0},z_{-1})\not\in \mathfrak{F}$. Now, suppose $z_{0},z_{-1}\neq 0$, $C=\frac{z_{0} + B }{z_{-1}}=1$, and $z_{0}\in \left\{nB|n\in\mathbb{N}\right\}$. Further assume, for the sake of contradiction, that $(z_{0},z_{-1})\not\in \mathfrak{F}$. Then $z_{n}$ is well defined for all $n\in\mathbb{N}$, but also $z_{N}=0$ for some $N\in\mathbb{N}$, since $z_{0}\in \left\{nB|n\in\mathbb{N}\right\}$. This is a contradiction. Thus, $(z_{0},z_{-1})\in \mathfrak{F}$. Finally, suppose either $z_{0}=0$ or $z_{-1}=0$; then $(z_{0},z_{-1})\in \mathfrak{F}$. Thus, $\mathfrak{F}=S_{3}$, where $S_{3}$ is given in Figure 2.

\textbf{Theorem 5.} Consider the rational difference equation, $$z_{n+1}=\frac{z^{2}_{n} + B z_{n}}{ z_{n-1} + B},\quad n=0,1,\dots,$$ with $B\in \mathbb{C}\setminus \{0\}$ and with initial conditions $z_{0},z_{-1}\in \mathbb{C}$. Then the forbidden set is $\mathfrak{F}=S_{4}$, where $S_{4}$ is given in Figure 2. Also, given $(z_{0},z_{-1})\notin \mathfrak{F}$ there are two possibilities: i. $z_{0}=0$, in which case $z_{n}=0$ for all $n\geq 0$; ii. $z_{0}\neq 0$, in which case $z_{n+1}=\frac{z_{n}+B}{C}$ for all $n\geq 0$, where $C=\frac{z_{-1}+B}{z_{0}}$. This implies $$z_{n}=\frac{z_{0}}{C^{n}}+\sum^{n}_{k=1}\frac{B}{C^{k}},\quad n\geq 1.$$

\textbf{Proof.} Let us first consider the case where $(z_{0},z_{-1})\notin \mathfrak{F}$. Our analysis of this case will be broken into two subcases, as shown in the statement of Theorem 5. Let us first consider case (i). In this case, $z_{0}=0$, and since $(z_{0},z_{-1})\notin \mathfrak{F}$, there will never be division by zero in our solution.
It is immediately clear from these two facts and from a basic induction argument that $z_{n}=0$ for all $n\geq 0$. Now, let us consider case (ii). In this case $z_{0}\neq 0$. Also, since $(z_{0},z_{-1})\notin \mathfrak{F}$, there will never be division by zero in our solution. Thus, $z_{n}\neq -B$ for all $n\geq -1$, or else we would have division by zero. This allows us to prove by induction that $z_{n}\neq 0$ for all $n\geq 0$. The induction argument for this piece is straightforward and is omitted. Since there is never division by zero in our solution, and since $z_{n}\neq 0$ for all $n\geq 0$, the following algebraic computation is well defined: $$\frac{z_{n}+B}{z_{n+1}}= \frac{(z_{n}+B)(z_{n-1}+B)}{z^{2}_{n}+Bz_{n}}= \frac{z_{n-1}+B}{z_{n}}.$$ Thus, we have the following algebraic invariant: $$\frac{z_{n-1}+B}{z_{n}}=constant.$$ For our fixed but arbitrary initial conditions with $z_{0}\neq 0$, let us denote $C=\frac{z_{-1}+B}{z_{0}}$. Since $z_{0}\neq 0$, $C$ is well defined, and since $z_{-1}\neq -B$, $C\neq 0$. Algebraic manipulations of our invariant yield the following: $$z_{n}=\frac{z_{n-1}+B}{C},\quad n\geq 1.$$ Thus, the dynamics of $\{z_{n}\}^{\infty}_{n=-1}$ are given by a linear equation in this case. Since we already know the closed form solution for any linear equation, we may obtain a closed form solution for $\{z_{n}\}^{\infty}_{n=-1}$ in this case. The resulting closed form solution is $$z_{n}=\frac{z_{0}}{C^{n}}+\sum^{n}_{k=1}\frac{B}{C^{k}},\quad n\geq 1.$$ Now, we must find the forbidden set for our Equation (7). Let $\mathfrak{F}$ be the forbidden set. Suppose $z_{0},z_{-1}\neq -B$, $C=\frac{z_{-1} + B }{z_{0}}\neq 1$, and $z_{0}\not\in \left\{\frac{B-BC^{n+1}}{C-1}|n\in\mathbb{N}\right\}$, and assume that $z_{n}$ is well defined for $n\leq N$. Then $$z_{N-1}\neq -B,$$ since $z_{0}\not\in \left\{\frac{B-BC^{n+1}}{C-1}|n\in\mathbb{N}\right\}$. Thus, $z_{n}$ is well defined for $n\leq N+1$.
By induction, $z_{n}$ is well defined for all $n\in\mathbb{N}$. Thus, $(z_{0},z_{-1})\not\in \mathfrak{F}$. Now, suppose $z_{0},z_{-1}\neq -B$, $C=\frac{z_{-1} + B }{z_{0}}\neq 1$, and $z_{0}\in \left\{\frac{B-BC^{n+1}}{C-1}|n\in\mathbb{N}\right\}$. Further assume, for the sake of contradiction, that $(z_{0},z_{-1})\not\in \mathfrak{F}$. Then $z_{n}$ is well defined for all $n\in\mathbb{N}$, but also $z_{N}=-B$ for some $N\in\mathbb{N}$, since $z_{0}\in \left\{\frac{B-BC^{n+1}}{C-1}|n\in\mathbb{N}\right\}$. This is a contradiction. Thus, $(z_{0},z_{-1})\in \mathfrak{F}$. Suppose $z_{0},z_{-1}\neq -B$, $C=\frac{z_{-1} + B }{z_{0}}= 1$, and $z_{0}\not\in \left\{-nB-B|n\in\mathbb{N}\right\}$, and assume that $z_{n}$ is well defined for $n\leq N$. Then $$z_{N-1}\neq -B,$$ since $z_{0}\not\in \{-nB-B|n\in\mathbb{N}\}$. Thus, $z_{n}$ is well defined for $n\leq N+1$. By induction, $z_{n}$ is well defined for all $n\in\mathbb{N}$. Thus, $(z_{0},z_{-1})\not\in \mathfrak{F}$. Now, suppose $z_{0},z_{-1}\neq -B$, $C=\frac{z_{-1} + B }{z_{0}}=1$, and $z_{0}\in \left\{-nB-B|n\in\mathbb{N}\right\}$. Further assume, for the sake of contradiction, that $(z_{0},z_{-1})\not\in \mathfrak{F}$. Then $z_{n}$ is well defined for all $n\in\mathbb{N}$, but also $z_{N}=-B$ for some $N\in\mathbb{N}$, since $z_{0}\in \left\{-nB-B|n\in\mathbb{N}\right\}$. This is a contradiction. Thus, $(z_{0},z_{-1})\in \mathfrak{F}$. Finally, suppose either $z_{0}=-B$ or $z_{-1}=-B$; then $(z_{0},z_{-1})\in \mathfrak{F}$. Thus, $\mathfrak{F}=S_{4}$, where $S_{4}$ is given in Figure 2.

\textbf{Theorem 6.} Consider the rational difference equation, $$z_{n+1}=\frac{z_{n}z_{n-1}+Bz_{n}}{B + z_{n}},\quad n=0,1,\dots,$$ with $B\in \mathbb{C}\setminus \{0\}$ and with initial conditions $z_{0},z_{-1}\in \mathbb{C}$. Then the forbidden set is $\mathfrak{F}=S_{5}$, where $S_{5}$ is given in Figure 2.
Also, given $(z_{0},z_{-1})\notin \mathfrak{F}$, $z_{n+1}=\frac{C}{z_{n}+B}$ for all $n\geq 0$, where $C=z_{0}\left(z_{-1} + B \right)$. This implies the following: a. If $C=0$, then $z_{n}=0$ for all $n\geq 1$. b. If $\frac{-C}{B^{2}}\in \mathbb{C}\setminus \left[\frac{1}{4},\infty\right)$, then $$z_{n}=B\left(\frac{(B\lambda_{2}-z_{0}-B)\lambda^{n+1}_{1}+(z_{0}+B-B\lambda_{1})\lambda^{n+1}_{2}}{(B\lambda_{2}-z_{0}-B)\lambda^{n}_{1}+(z_{0}+B-B\lambda_{1})\lambda^{n}_{2}}\right)-B,\quad n\geq 0,$$ where $$\lambda_{1}=\frac{1-\sqrt{1+\frac{4C}{B^{2}}}}{2} \quad \text{and} \quad \lambda_{2}=\frac{1+\sqrt{1+\frac{4C}{B^{2}}}}{2}.$$ c. If $\frac{-C}{B^{2}}=\frac{1}{4}$, then $$z_{n}=B\left(\frac{B+(n+1)\left(2z_{0}+B \right)}{2B+4nz_{0}+2nB}\right)-B,\quad n\geq 0.$$ d. If $\frac{-C}{B^{2}}\in \left(\frac{1}{4},\infty\right)$, then let $D=B\sqrt{\frac{-4C}{B^{2}}-1}$ and $\rho=\arccos\left(\sqrt{\frac{B^{2}}{-4C}}\right)$; for $n\geq 0$, we have $$z_{n}=B\sqrt{\frac{-C}{B^{2}}}\left(\frac{D\cos\left((n+1)\rho\right)+(B+2z_{0})\sin\left((n+1)\rho\right)}{D\cos\left(n\rho\right)+(B+2z_{0})\sin\left(n\rho\right)} \right) -B.$$

\textbf{Proof.} Let us first consider the case where $(z_{0},z_{-1})\notin \mathfrak{F}$. In this case clearly $z_{n}\neq -B$ for $n\geq 0$, or else we would have division by zero. Since $z_{n}\neq -B$ for all $n\geq 0$, the following algebraic computation is well defined: $$z_{n+1}(z_{n}+B)=\left(\frac{z_{n}z_{n-1}+Bz_{n}}{B + z_{n}}\right)(z_{n}+B)= z_{n}(z_{n-1}+B).$$ Thus we have the following algebraic invariant: $$z_{n}(z_{n-1}+B)=constant.$$ For our fixed but arbitrary initial conditions, let us denote $C=z_{0}(z_{-1}+B)$. Algebraic manipulations of our invariant yield the following: $$z_{n}=\frac{C}{z_{n-1}+ B},\quad n\geq 1.$$ Since $z_{n}\neq -B$ for all $n\geq 0$, this equation is well defined for all $n\geq 1$. Thus the dynamics of $\{z_{n}\}^{\infty}_{n=-1}$ are given by a Riccati equation in this case.
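This invariant and the resulting Riccati reduction can be verified numerically. In the sketch below (Python is assumed; $B=3$, $z_{-1}=1$, $z_{0}=2$ are arbitrary admissible choices), the quantity $z_{n}(z_{n-1}+B)$ is checked to be constant along an orbit of Equation (8), and each term is checked against the reduced map $z_{n}=C/(z_{n-1}+B)$:

```python
from fractions import Fraction

def iterate_eq8(z0, zm1, B, steps):
    """Iterate z_{n+1} = (z_n*z_{n-1} + B*z_n) / (B + z_n)  (Equation (8))."""
    zs = [zm1, z0]
    for _ in range(steps):
        prev, cur = zs[-2], zs[-1]
        zs.append((cur * prev + B * cur) / (B + cur))
    return zs

B, z0, zm1 = Fraction(3), Fraction(2), Fraction(1)
C = z0 * (zm1 + B)          # value of the invariant z_n (z_{n-1} + B)
zs = iterate_eq8(z0, zm1, B, 10)

# The invariant is constant along the orbit ...
assert all(zs[n] * (zs[n - 1] + B) == C for n in range(1, len(zs)))

# ... which is exactly the first-order Riccati reduction z_n = C/(z_{n-1} + B).
assert all(zs[n] == C / (zs[n - 1] + B) for n in range(1, len(zs)))
```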
Since we already know the closed form solution for any Riccati equation, we may obtain a closed form solution for $\{z_{n}\}^{\infty}_{n=-1}$ in this case. We use the known results for Riccati equations restated in Section 3 to obtain the closed form solutions in the statement of the theorem. Now, we must find the forbidden set for our Equation (8). Let $\mathfrak{F}$ be the forbidden set. Assume $z_{n}$ is well defined for $n\leq N$; then $z_{n}\neq -B$ for $1\leq n\leq N-1$. Using this we get that for $n<N$, $$z_{n+1}(z_{n}+B)=\left(\frac{z_{n}z_{n-1}+Bz_{n}}{B + z_{n}}\right)(z_{n}+B)= z_{n}(z_{n-1}+B).$$ So, $$z_{n}(z_{n-1}+B)=constant$$ for $0\leq n\leq N$. Thus, assuming $z_{N+1}$ is well defined, $$z_{n+1}= \frac{C}{z_{n}+B}$$ for $0\leq n\leq N$, where $C=z_{0}(z_{-1}+B)$. Let $\mathfrak{F}_{C}$ denote the forbidden set of the first-order difference equation $$x_{n+1}= \frac{C}{x_{n}+B}, \quad n=0,1,2,\dots.$$ Note that the set $\mathfrak{F}_{C}$ changes depending on the value of $C$. Now, suppose $C=z_{0}(z_{-1}+B)$ and $z_{0}\not\in \mathfrak{F}_{C}$, and assume that $z_{n}$ is well defined for $n\leq N$. Recall that we have shown that this implies $z_{n}\neq -B$ for $n < N$. Then $$B+z_{N}\neq 0,$$ since $z_{0}\not\in \mathfrak{F}_{C}$. Thus, $z_{n}$ is well defined for $n\leq N+1$. By induction, $z_{n}$ is well defined for all $n\in\mathbb{N}$. Thus, $(z_{0},z_{-1})\not\in \mathfrak{F}$. Now, suppose $C=z_{0}(z_{-1}+B)$ and $z_{0}\in \mathfrak{F}_{C}$. Further assume, for the sake of contradiction, that $(z_{0},z_{-1})\not\in \mathfrak{F}$. Then $z_{n}$ is well defined for all $n\in\mathbb{N}$, but also $B+z_{N}=0$ for some $N\in\mathbb{N}$, since $z_{0}\in \mathfrak{F}_{C}$. This is a contradiction. Thus $(z_{0},z_{-1})\in \mathfrak{F}$.
So, $$\mathfrak{F}=\bigcup_{b\in\mathbb{C}} \bigcup_{C\in\mathbb{C}} \left(\left(\mathfrak{F}_{C}\times \{b\}\right)\cap \left\{(a,b)|C=a(b+B)\right\}\right).$$ Notice that $\left(\{0\}\times \mathbb{C}\right)\cap \mathfrak{F}=\emptyset$, since if $z_{0}=0$ then a simple induction argument tells us that $z_{n}=0$ for all $n\in\mathbb{N}$, so there will never be division by zero in such a case. So we may reduce the above expression as follows, $$\mathfrak{F}= \bigcup_{C\in\mathbb{C}} \left\{ \left(a,\frac{C-Ba}{a}\right)| a\in\mathfrak{F}_{C}\setminus\left\{0\right\}\right\}.$$ From the above reduction, and from the facts about the forbidden sets of the Riccati difference equation in Section 3, we get $\mathfrak{F}=S_{5}$, where $S_{5}$ is given in Figure 2.

\textbf{Theorem 7.} Consider the rational difference equation, $$z_{n+1}=\frac{z_{n}z_{n-1}+Bz_{n-1} - Bz_{n}}{z_{n}},\quad n=0,1,\dots,$$ with $B\in \mathbb{C}\setminus\{0\}$ and with initial conditions $z_{0},z_{-1}\in \mathbb{C}$. Then the forbidden set is $\mathfrak{F}=S_{6}$, where $S_{6}$ is given in Figure 2. Also, given $(z_{0},z_{-1})\notin \mathfrak{F}$, $z_{n+1}=\frac{C - B z_{n}}{z_{n}}$ for all $n\geq 0$, where $C=z_{-1}(z_{0}+B)$. This implies the following: a. If $C=0$, then $z_{n}=-B$ for all $n\geq 1$. b. If $\frac{-C}{B^{2}}\in \mathbb{C}\setminus \left[\frac{1}{4},\infty\right)$, then $$z_{n}=-B\left(\frac{(B\lambda_{2}+z_{0})\lambda^{n+1}_{1}-(z_{0}+B\lambda_{1})\lambda^{n+1}_{2}}{(B\lambda_{2}+z_{0})\lambda^{n}_{1}-(z_{0}+B\lambda_{1})\lambda^{n}_{2}}\right),\quad n\geq 0,$$ where $$\lambda_{1}=\frac{1-\sqrt{1+\frac{4C}{B^{2}}}}{2} \quad \text{and} \quad \lambda_{2}=\frac{1+\sqrt{1+\frac{4C}{B^{2}}}}{2}.$$ c. If $\frac{-C}{B^{2}}=\frac{1}{4}$, then $$z_{n}=-B\left(\frac{-B+(n+1)\left(2z_{0}+B \right)}{-2B+4nz_{0}+2nB}\right),\quad n\geq 0.$$ d.
If $\frac{-C}{B^{2}}\in \left(\frac{1}{4},\infty\right)$, then let $D=B\sqrt{\frac{-4C}{B^{2}}-1}$ and $\rho=\arccos\left(\sqrt{\frac{B^{2}}{-4C}}\right)$; for $n\geq 0$, we get $$z_{n}=-B\sqrt{\frac{-C}{B^{2}}}\left(\frac{D\cos\left((n+1)\rho\right)+(-2z_{0}-B)\sin\left((n+1)\rho\right)}{D\cos\left(n\rho\right)+(-2z_{0}-B)\sin\left(n\rho\right)}\right).$$

\textbf{Proof.} Let us first consider the case where $(z_{0},z_{-1})\notin \mathfrak{F}$. In this case clearly $z_{n}\neq 0$ for $n\geq 0$, or else we would have division by zero. Since $z_{n}\neq 0$ for all $n\geq 0$, the following algebraic computation is well defined: $$z_{n}(z_{n+1}+B)=\left(\frac{z_{n}z_{n-1}+Bz_{n-1} - Bz_{n}}{z_{n}}+B\right)(z_{n})= \left(\frac{z_{n}z_{n-1}+Bz_{n-1}}{z_{n}}\right)(z_{n})= z_{n-1}(z_{n}+B).$$ Thus, we have the following algebraic invariant: $$z_{n-1}(z_{n}+B)=constant.$$ For our fixed but arbitrary initial conditions, let us denote $C=z_{-1}(z_{0}+B)$. Algebraic manipulations of our invariant yield the following: $$z_{n}=\frac{C - B z_{n-1}}{z_{n-1}},\quad n\geq 1.$$ Since $z_{n}\neq 0$ for all $n\geq 0$, this equation is well defined for all $n\geq 1$. Thus, the dynamics of $\{z_{n}\}^{\infty}_{n=-1}$ are given by a Riccati equation in this case. Since we already know the closed form solution for any Riccati equation, we may obtain a closed form solution for $\{z_{n}\}^{\infty}_{n=-1}$ in this case. We use the known results for Riccati equations restated in Section 3 to obtain the closed form solutions in the statement of the theorem. Now, we must find the forbidden set for our Equation (9). Let $\mathfrak{F}$ be the forbidden set. Assume $z_{n}$ is well defined for $n\leq N$; then $z_{n}\neq 0$ for $1\leq n\leq N-1$. Using this we get that for $n<N$, $$z_{n}(z_{n+1}+B)=\left(\frac{z_{n}z_{n-1}+Bz_{n-1} - Bz_{n}}{z_{n}}+B\right)(z_{n})=$$$$\left(\frac{z_{n}z_{n-1}+Bz_{n-1}}{z_{n}}\right)(z_{n})= z_{n-1}(z_{n}+B).$$ So, $$z_{n-1}(z_{n}+B)=constant$$ for $0\leq n\leq N$.
Thus, assuming $z_{N+1}$ is well defined, $$z_{n+1}= \frac{C-Bz_{n}}{z_{n}}$$ for $0\leq n\leq N$, where $C=z_{-1}(z_{0}+B)$. Let $\mathfrak{F}_{C}$ denote the forbidden set of the first-order difference equation $$x_{n+1}= \frac{C-Bx_{n}}{x_{n}}, \quad n=0,1,2,\dots.$$ Note that the set $\mathfrak{F}_{C}$ changes depending on the value of $C$. Now, suppose $C=z_{-1}(z_{0}+B)$ and $z_{0}\not\in \mathfrak{F}_{C}$, and assume that $z_{n}$ is well defined for $n\leq N$. Recall that we have shown that this implies $z_{n}\neq 0$ for $n < N$. Then $$z_{N}\neq 0,$$ since $z_{0}\not\in \mathfrak{F}_{C}$. Thus $z_{n}$ is well defined for $n\leq N+1$. By induction, $z_{n}$ is well defined for all $n\in\mathbb{N}$. Thus, $(z_{0},z_{-1})\not\in \mathfrak{F}$. Now, suppose $C=z_{-1}(z_{0}+B)$ and $z_{0}\in \mathfrak{F}_{C}$. Further assume, for the sake of contradiction, that $(z_{0},z_{-1})\not\in \mathfrak{F}$. Then $z_{n}$ is well defined for all $n\in\mathbb{N}$, but also $z_{N}=0$ for some $N\in\mathbb{N}$, since $z_{0}\in \mathfrak{F}_{C}$. This is a contradiction. Thus, $(z_{0},z_{-1})\in \mathfrak{F}$. So $$\mathfrak{F}=\bigcup_{z_{-1}\in\mathbb{C}} \bigcup_{C\in\mathbb{C}} \left(\left(\mathfrak{F}_{C}\times \{z_{-1}\}\right)\cap \left\{(z_{0},z_{-1})|C=z_{-1}(z_{0}+B)\right\}\right).$$ Notice that $\left(\{-B\}\times \mathbb{C}\right)\cap \mathfrak{F}=\emptyset$, since if $z_{0}=-B$, then a simple induction argument tells us that $z_{n}=-B$ for all $n\in\mathbb{N}$, so there will never be division by zero in such a case. So, we may reduce the above expression as follows, $$\mathfrak{F}= \bigcup_{C\in\mathbb{C}} \left\{ \left(a,\frac{C}{a+B}\right)| a\in\mathfrak{F}_{C}\setminus\left\{-B\right\}\right\}.$$ From the above reduction, and from the facts about the forbidden sets of the Riccati difference equation in Section 3, we get $\mathfrak{F}=S_{6}$, where $S_{6}$ is given in Figure 2.
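The reduction just used for Equation (9) can also be confirmed numerically. The sketch below (Python is assumed; $B=3$, $z_{-1}=1$, $z_{0}=2$ are arbitrary admissible choices) checks that $z_{n-1}(z_{n}+B)$ stays constant along an orbit and that the orbit obeys the reduced Riccati map $z_{n}=(C-Bz_{n-1})/z_{n-1}$:

```python
from fractions import Fraction

def iterate_eq9(z0, zm1, B, steps):
    """Iterate z_{n+1} = (z_n*z_{n-1} + B*z_{n-1} - B*z_n) / z_n  (Equation (9))."""
    zs = [zm1, z0]
    for _ in range(steps):
        prev, cur = zs[-2], zs[-1]
        zs.append((cur * prev + B * prev - B * cur) / cur)
    return zs

B, z0, zm1 = Fraction(3), Fraction(2), Fraction(1)
C = zm1 * (z0 + B)          # value of the invariant z_{n-1} (z_n + B)
zs = iterate_eq9(z0, zm1, B, 10)

# The invariant z_{n-1}(z_n + B) is constant along the orbit ...
assert all(zs[n - 1] * (zs[n] + B) == C for n in range(1, len(zs)))

# ... equivalently, the orbit follows z_n = (C - B*z_{n-1}) / z_{n-1}.
assert all(zs[n] == (C - B * zs[n - 1]) / zs[n - 1] for n in range(1, len(zs)))
```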
Conclusion
==========

We have introduced some new invariants for rational difference equations, including invariants for certain cases of the second-order rational difference equation of the form $$x_{n+1}=\frac{\alpha + \beta x_{n} + \gamma x_{n-1}}{A + B x_{n} + C x_{n-1}},\quad n=0,1,2,\dots.$$ This second-order linear fractional rational difference equation has been studied extensively in the case of nonnegative parameters and nonnegative initial conditions; see [@kulenovicladas] for more information about this case. However, the invariants we have found apply only in a region of parameter space where at least one of the parameters cannot be a nonnegative real number. For this reason, these particular examples were overlooked in [@kulenovicladas], as well as in the subsequent literature. What is particularly interesting about the presented cases is that we are able to use the invariants we find to obtain both the forbidden set and a closed form solution for our rational difference equations through reduction of order.
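The reduction-of-order strategy described above can be illustrated end to end on Equation (7). In the sketch below (Python is assumed; $B=3$, $z_{-1}=1$, $z_{0}=2$ are arbitrary admissible choices), the invariant $(z_{n-1}+B)/z_{n}$ is computed from the initial conditions, the second-order equation is collapsed to the linear map $z_{n}=(z_{n-1}+B)/C$, and the closed form $z_{n}=z_{0}/C^{n}+\sum_{k=1}^{n}B/C^{k}$ is checked against direct iteration:

```python
from fractions import Fraction

def iterate_eq7(z0, zm1, B, steps):
    """Iterate z_{n+1} = (z_n^2 + B*z_n) / (z_{n-1} + B)  (Equation (7))."""
    zs = [zm1, z0]
    for _ in range(steps):
        prev, cur = zs[-2], zs[-1]
        zs.append((cur * cur + B * cur) / (prev + B))
    return zs

B, z0, zm1 = Fraction(3), Fraction(2), Fraction(1)
C = (zm1 + B) / z0          # value of the invariant (z_{n-1} + B)/z_n
zs = iterate_eq7(z0, zm1, B, 8)

# Reduction of order: the second-order equation collapses to a linear map ...
assert all(zs[n] == (zs[n - 1] + B) / C for n in range(1, len(zs)))

# ... whose closed form matches direct iteration (zs[n + 1] holds z_n).
def closed_form(n):
    return z0 / C**n + sum(B / C**k for k in range(1, n + 1))

assert all(zs[n + 1] == closed_form(n) for n in range(9))
```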
$$S_{1}=\left\{\left(0,\frac{-1}{B}\right)\right\}\bigcup \left\{\left(\frac{-1}{B}\left(\frac{n+1}{n}\right),\frac{-1}{B}\right)|n\in\mathbb{N}\right\}\bigcup \left\{\left(\frac{-1}{B},\frac{-1}{B}\left(\frac{n+1}{n}\right)\right)|n\in\mathbb{N}\right\}\bigcup$$ $$\bigcup_{D\in\mathbb{C}\setminus [0,4]}\left\{\left(a,\frac{DBa-Ba-1}{B^{2}a+B}\right)\left|a\in\left\{\frac{-2}{B}\left(\frac{\left(1+\sqrt{1-\frac{4}{D}}\right)^{n-1}-\left(1-\sqrt{1-\frac{4}{D}}\right)^{n-1}}{\left(1+\sqrt{1-\frac{4}{D}}\right)^{n}-\left(1-\sqrt{1-\frac{4}{D}}\right)^{n}}\right)+\frac{D-1}{B}\right|n\in\mathbb{N}\right\}\setminus\left\{0,\frac{-1}{B}\right\}\right\}$$ $$\bigcup_{D\in (0,4)} \left\{\left(a,\frac{DBa-Ba-1}{B^{2}a+B}\right)\left|a\in\left\{\frac{-D}{B}\left(1-\sqrt{\frac{4}{D}-1}\cot\left(n\cdot \arccos\left(\frac{\sqrt{D}}{2}\right)\right)\right)+\frac{D-1}{B}\right|n\in\mathbb{N}\right\}\setminus\left\{0,\frac{-1}{B}\right\}\right\}$$ $$\bigcup \left\{\left(a,\frac{3Ba-1}{B^{2}a+B}\right)\left|a\in \left\{\frac{-2}{B}\left(\frac{n-1}{n}\right)+\frac{3}{B}\right|n\in\mathbb{N}\right\}\setminus\left\{0,\frac{-1}{B}\right\}\right\}.$$ $$S_{2}=\left\{\left(\frac{-1}{B},0\right)\right\}\bigcup \left\{\left(0,\frac{1}{Bn}\right)|n\in\mathbb{N}\right\}\bigcup \left\{\left(\frac{1}{Bn},0\right)|n\in\mathbb{N}\right\}\bigcup$$ $$\bigcup_{D\in\mathbb{C}\setminus \left((-\infty,\frac{-1}{4}]\cup\{0\}\right)}\left\{\left(a,\frac{Ba+1}{DB^{2}a}\right)\left|a\in\left\{\frac{2}{B}\left(\frac{\left(1+\sqrt{1+4D}\right)^{n-1}-\left(1-\sqrt{1+4D}\right)^{n-1}}{\left(1+\sqrt{1+4D}\right)^{n}-\left(1-\sqrt{1+4D}\right)^{n}}\right)+\frac{1}{DB}\right|n\in\mathbb{N}\right\}\setminus\left\{0,\frac{-1}{B}\right\}\right\}$$ $$\bigcup_{D\in (-\infty,\frac{-1}{4})} \left\{\left(a,\frac{Ba+1}{DB^{2}a}\right)\left|a\in\left\{\frac{-1}{2DB}\left(-1-\sqrt{-4D-1}\cot\left(n\cdot
\arccos\left(\frac{1}{2\sqrt{-D}}\right)\right)\right)\right|n\in\mathbb{N}\right\}\setminus\left\{0,\frac{-1}{B}\right\}\right\}$$ $$\bigcup \left\{ \left(a,\frac{-4Ba-4}{B^{2}a}\right)\left| a\in\left\{\frac{-2n-2}{Bn}\right|n\in\mathbb{N}\right\}\setminus\left\{0,\frac{-1}{B}\right\}\right\}.$$ Figure 1. $$S_{3}=\bigcup_{C\neq 1}\left\{\left(\frac{B-BC^{n}}{C^{n}-C^{n+1}},\frac{B-BC^{n+1}}{C^{n+1}-C^{n+2}}\right)|n\in\mathbb{N}\right\}\bigcup\left(\{0\}\times \mathbb{C}\right)\bigcup\left(\mathbb{C}\times \{0\}\right)\bigcup\left\{\left(nB,nB+B\right)|n\in\mathbb{N}\right\}.$$ $$S_{4}=\bigcup_{C\neq 1}\left\{\left(\frac{B-BC^{n+1}}{C-1},\frac{B-BC^{n+2}}{C-1}\right)|n\in\mathbb{N}\right\}\bigcup\left(\{-B\}\times \mathbb{C}\right)\bigcup\left(\mathbb{C}\times \{-B\}\right)\bigcup\left\{\left(-nB-B,-nB-2B\right)|n\in\mathbb{N}\right\}.$$ $$S_{5}= \{(-B,-B)\}\bigcup \left\{ \left(a,\frac{B^{2}}{-4a}-B\right)\left| a\in\left\{\frac{-Bn-B}{2n}\right|n\in\mathbb{N}\right\}\right\}\bigcup$$ $$\bigcup_{D\in\mathbb{C}\setminus \left((-\infty,\frac{-1}{4}]\cup\{0\}\right)}\left\{\left(a,\frac{DB^{2}}{a}-B\right)\left|a\in\left\{-2DB\left(\frac{\left(1+\sqrt{1+4D}\right)^{n-1}-\left(1-\sqrt{1+4D}\right)^{n-1}}{\left(1+\sqrt{1+4D}\right)^{n}-\left(1-\sqrt{1+4D}\right)^{n}}\right)-B\right|n\in\mathbb{N}\right\}\setminus\left\{0\right\}\right\}$$ $$\bigcup_{D\in (-\infty,\frac{-1}{4})} \left\{\left(a,\frac{DB^{2}}{a}-B\right)\left|a\in\left\{\frac{B}{2}\left(-1-\sqrt{-4D-1}\cot\left(n\cdot \arccos\left(\frac{1}{2\sqrt{-D}}\right)\right)\right)\right|n\in\mathbb{N}\right\}\setminus\left\{0\right\}\right\}.$$ $$S_{6}= \{(0,0)\}\bigcup \left\{ \left(a,\frac{-B^{2}}{4a+4B}\right)\left| a\in\left\{\frac{-Bn+B}{2n}\right|n\in\mathbb{N}\right\}\right\}\bigcup$$ $$\bigcup_{D\in\mathbb{C}\setminus
\left((-\infty,\frac{-1}{4}]\cup\{0\}\right)}\left\{\left(a,\frac{DB^{2}}{a+B}\right)\left|a\in\left\{2DB\left(\frac{\left(1+\sqrt{1+4D}\right)^{n-1}-\left(1-\sqrt{1+4D}\right)^{n-1}}{\left(1+\sqrt{1+4D}\right)^{n}-\left(1-\sqrt{1+4D}\right)^{n}}\right)\right|n\in\mathbb{N}\right\}\setminus\left\{-B\right\}\right\}$$ $$\bigcup_{D\in (-\infty,\frac{-1}{4})} \left\{\left(a,\frac{DB^{2}}{a+B}\right)\left|a\in\left\{\frac{-B}{2}\left(1-\sqrt{-4D-1}\cot\left(n\cdot \arccos\left(\frac{1}{2\sqrt{-D}}\right)\right)\right)\right|n\in\mathbb{N}\right\}\setminus\left\{-B\right\}\right\}.$$ Figure 2. E. Barbeau, B. Gelford, and S. Tanny, Periodicities of solutions of the generalized Lyness equation, *J. Difference Equ. Appl.* **1** (1995), 291-306. G. Bastien and M. Rogalski, Global behavior of the solutions of Lyness’ difference equation $u_{n+2}u_{n}=u_{n+1}+a$, *J. Difference Equ. Appl.* **11** (2004), 997-1003. G. Bastien and M. Rogalski, Global behavior of the solutions of the $k$-lacunary order $2k$ Lyness’ difference equation $u_{n}=\frac{u_{n-k}+a}{u_{n-2k}}$ in $R_{*}^{+}$ and of other more general equations, *J. Difference Equ. Appl.* **13** (2007), 79-88. E. Camouzis and G. Ladas, *Dynamics of Third-Order Rational Difference Equations with Open Problems and Conjectures*, Chapman & Hall/CRC Press, Boca Raton, 2007. E. Camouzis and R. DeVault, The forbidden set of $x_{n+1}=p+{x_{n-1}\over x_n}$, Special Session of the American Mathematical Society Meeting, Part II, San Diego, 2002. A. Cima, A. Gasull, and V. Mañosa, Dynamics of rational discrete dynamical systems via first integrals, *Int. J. Bifurcations and Chaos in Appl. Sci. Engrg.* **16** (2006), 631-645. A. Cima, A. Gasull, and V. Mañosa, Global periodicity and complete integrability of discrete dynamical systems, *J. Difference Equ. Appl.* **12** (2006), 697-716. A. Cima, A. Gasull, and V. Mañosa, Dynamics of the third order Lyness’ difference equations, *J. Difference Equ. Appl.* **13** (2007), 855-884. A. Cima, A.
Gasull, and V. Mañosa, Studying discrete dynamical systems through differential equations, *J. Differential Equations* **244** (2008), 630-648. A. Cima, A. Gasull, and V. Mañosa, Some properties of the k-dimensional Lyness’ map, *J. Phys. A: Math. Theor.* **41** (2008), 285-205. M. Dehghan, C.M. Kent, R. Mazrooei-Sebdani, N.L. Ortiz, and H. Sedaghat, Dynamics of rational difference equations containing quadratic terms, *J. Difference Equ. Appl.* **14** (2008), 191-208. M. Gao, Y. Kato, and M. Ito, Some invariants for the kth-order Lyness equation, *Appl. Math. Lett.* **17** (2004), 1183-1189. E.A. Grove, E.J. Janowski, C.M. Kent, and G. Ladas, On the rational recursive sequence $x_{n+1}=\frac{\alpha x_{n} + \beta}{(\gamma x_{n} + \delta)x_{n-1}}$, *Commun. Appl. Nonlinear Anal.* **1** (1994), 61-72. E.A. Grove, Y. Kostrov, and S.W. Schultz, On Riccati difference equations with complex coefficients, Difference Equations and Applications, 195-202, Uğur-Bahçeşehir Univ. Publ. Co., Istanbul, 2009. E.A. Grove and G. Ladas, *Periodicities in Nonlinear Difference Equations*, Chapman & Hall/CRC Press, Boca Raton, 2005. E.A. Grove, G. Ladas, L.C. McGrath, and C.T. Teixeira, Existence and behavior of solutions of a rational system, *Commun. Appl. Nonlinear Anal.* **8** (2001), 1-25. D. Jogia, J.A.G. Roberts, and F. Vivaldi, An algebraic geometric approach to integrable maps of the plane, *J. Phys. A: Math. Gen.* **39** (2006), 1133-1149. V.L. Kocic and G. Ladas, *Global Behavior of Nonlinear Difference Equations of Higher Order with Applications*, Kluwer Academic Publishers, Dordrecht, 1993. V.L. Kocic, G. Ladas, and I.W. Rodrigues, On rational recursive sequences, *J. Math. Anal. Appl.* **173** (1993), 127-157. M.R.S. Kulenović, Invariants and related Liapunov functions for difference equations, *Appl. Math. Lett.* **13** (2000), 1-8. M.R.S. Kulenović and G. Ladas, *Dynamics of Second Order Rational Difference Equations*, Chapman & Hall/CRC Press, Boca Raton, 2002. G.
Ladas, On the recursive sequence $x_{n+1}=\frac{\alpha + \beta x_{n} + \gamma x_{n-1}}{A + Bx_{n} + Cx_{n-1}}$,*J. Difference Equ. Appl.***1**(1995), 317-321. G. Ladas, G. Tzanetopoulos, and E. Thomas, On the stability of Lyness’s Equation,*Dynamics of Continuous, Discrete Impulsive Syst.***1**(1995), 245-254. G. Lugo and F.J. Palladino, Unboundedness for some classes of rational difference equations,*Int. J. Difference Equ.***4**(2009), 97-113. R.C. Lyness, Note 1581,*Math. Gaz.***26**(1942), 62. R.C. Lyness, Note 1847,*Math. Gaz.***29**(1945), 231. J. Rubió-Massegú, Global periodicity and openness of the set of solutions for discrete dynamical systems,*J. Difference Equ. Appl.***15**(2009), 569-578. J. Rubió-Massegú, On the existence of solutions for difference equations,*J. Difference Equ. Appl.***13**(2007), 655-664. H. Sedaghat, A note: all homogeneous second order difference equations of degree one have semiconjugate factorizations,*J. Difference Equ. Appl.***13**(2007), 453-456. H. Sedaghat, A note: every homogeneous difference equation of degree one admits a reduction in order,*J. Difference Equ. Appl.***15**(2009), 621-624. H. Sedaghat, Existence of solutions for certain singular difference equations,*J. Differ. Equations Appl.***6**(2000), 535-561. H. Sedaghat, Global behaviours of rational difference equations of orders two and three with quadratic terms,*J. Difference Equ. Appl.***15**(2009), 215-224. H. Sedaghat, On third-order rational difference equations with quadratic terms,*J. Difference Equ. Appl.***14**(2008), 889-897. H. Sedaghat, Reduction of order in difference equations by semiconjugate factorization,*Int. J. Pure Appl. Math.***53**(2009), no. 3, 377-384. H. Sedaghat, Reduction of order of separable second order difference equations with form symmetries,*Int. J. Pure Appl. Math.***47**(2008), no. 2, 155-163. H. Sedaghat, Semiconjugate factorization of non-autonomous higher order difference equations,*Int. J. Pure Appl. 
Math.***62**(2010), no. 2, 233-245. H. Sedaghat, Semiconjugates of one-dimensional maps,*J. Difference Equ. Appl.***8**(2002), 649-666. W.S. Sizer, Periodicity in the Lyness equation,*Math. Sci. Res. J.***7**(2003), 366-372. W.S. Sizer, Some periodic solutions of the Lyness equation,*Proceedings of the Fifth International Conference on Difference Equations and Applications* January 3-7, 2000, Temuca, Chile, Gordon and Breach Science Publishers. E.C. Zeeman, Geometric unfolding of a difference equation,*http://www.math.utsa.edu/ecz/gu.html* Unpublished paper. Hertford College, Oxford,(1996).
--- abstract: | The H II region W40 harbors a small group of young, hot stars behind roughly 9 magnitudes of visual extinction. We have detected gaseous carbon monoxide (CO) and diatomic carbon (C$_2$) in absorption toward the star W40 IRS 1a. The 2-0 R0, R1, and R2 lines of $^{12}$CO at 2.3 $\mu$m were measured using the CSHELL on the NASA IR Telescope Facility (with upper limits placed on R3, R4, and R5) yielding an $N_{CO}$ of $(1.1 \pm 0.2) \times 10^{18}$ cm$^{-2}$. Excitation analysis indicates $T_{kin} > 7$ K. The Phillips system of C$_2$ transitions near 8775 Å was measured using the Kitt Peak 4-m telescope and echelle spectrometer. Radiative pumping models indicate a total C$_2$ column density of $(7.0 \pm 0.4) \times 10^{14}$ cm$^{-2}$, two excitation temperatures (39 and 126 K), and a total gas density of $n \sim 250$ cm$^{-3}$. The CO ice band at 4.7 $\mu$m was not detected, placing an upper limit on the CO depletion of $\delta < 1\%$. We postulate that the sightline has multiple translucent components and is associated with the W40 molecular cloud. Our data for [W40 IRS 1a]{}, coupled with other sightlines, show that the ratio of CO/[C$_2$]{} increases from diffuse through translucent environs. Finally, we show that the hydrogen-to-dust ratio seems to remain constant from diffuse to dense environments, while the CO-to-dust ratio apparently does not. author: - 'R. Young Shuping and Theodore P. Snow' - Richard Crutcher - 'Barry L. Lutz' title: 'CO and C$_2$ Absorption Toward W40 IRS 1a' --- Introduction ============ Molecular species are useful diagnostics for understanding the physics and chemistry of the interstellar medium (ISM), and have been studied in various ways for many years. Most molecules are observed in dense molecular clouds via rotational emission lines in the radio band.
These emission studies are indispensable to our understanding of galactic ecology and the ISM: They yield maps of dense regions, have very high spectral (velocity) resolution, and allow us to study complex molecules not otherwise observable. There are some drawbacks to molecular emission studies, however. First, analysis of the line excitation is very model-dependent and can lead to significant systematic errors. Second, since spatial resolution is typically low (and varies from species to species), it is sometimes hard to discern whether emission from different species comes from the same location within a cloud. This makes comparisons among species difficult. Molecular absorption line observations have some important advantages over emission line studies. First, absorbing species observed along the same line of sight are more likely to coexist spatially, thus reducing geometrical ambiguities and allowing more reliable inter-species comparisons. Second, because the unexcited molecules are observed directly, column density determinations are not as model-dependent as for emission studies, and hence more accurate. Third, some molecular species (notably H$_2$ and [C$_2$]{}) are essentially unobservable in the radio due to lack of a dipole moment. These species can sometimes be observed via electronic transitions in the ultraviolet (e.g., [@sb82]), or through rotational-vibrational transitions in the visible or infrared (e.g. [@fetal94]; [@letal94]). And finally, molecular abundances derived from absorption measures can easily be compared to line-of-sight bulk properties (e.g., extinction, polarization, and atomic abundances) which are also derived from absorption measures. CO and [C$_2$]{} have transitions at 2.3 $\mu$m and 8775 Å, respectively, which can be observed in absorption. The primary drawback to studying molecular absorption lines in the infrared (IR) is finding a suitable background star.
The target must be bright in the IR yet fortuitously placed behind a substantial amount of absorbing material. In our effort to find background sources for absorption line studies in regions also accessible to emission-line studies, we have selected the target [W40 IRS 1a]{}, an OB star embedded within the radio source W40. This source appears well suited for our purposes: though dim in the visible ($V = 15.0$), it is bright in the IR ($K = 5.6$), and lies behind about 9 magnitudes of extinction in $V$ ([$A_V$]{}). W40 itself is a blister-type H II region breaking out of a local molecular cloud, about 400 pc away towards the galactic center ([@zl78]; [@cc82]). A group of hot stars (most likely B-class) is bathing the region with ionizing radiation; W40 IRS 2a (OS 2a) seems to be producing most of the energy ([@setal85]). IRS 1a is brighter than 2a in the optical and near-IR indicating somewhat greater extinction toward IRS 2a. Crutcher and Chu (1982) mapped $^{12}$CO and $^{13}$CO, HCO$^+$, HCN and H$\alpha$ emission toward W40, generating a comprehensive kinematic interpretation of the region. Vallée et al. (1987, 1991, 1992, and 1994) have done extensive studies on the properties of the molecular cloud and its interaction with the region using recombination lines, radio continuum emission, and CO emission. The existence of circumstellar dust shells around W40 IRS 1a, 2a and 3a ($T \sim 250-350$ K and $M < 0.1 M_\odot$) has been suggested based on broad-band IR imaging and continuum measurements in the millimeter and sub-millimeter ([@setal85]; [@vm94]). Both CO and [C$_2$]{} are important interstellar molecules. CO is the most abundant molecule in the ISM after molecular hydrogen, and has a number of important roles: The CO emission lines at 2.6 mm and shorter are not only responsible for cooling molecular clouds but also serve as tracers for H$_2$ and dense molecular gas (e.g., [@mo95] and references therein).
[C$_2$]{} is much less abundant than CO and has been observed primarily in diffuse and translucent clouds (e.g. [@vdb89]; [@lsf95]). Like H$_2$, C$_2$ has no dipole moment and hence no pure rotational spectrum. It is a very useful diagnostic of cloud physical conditions such as kinetic temperature, density, and radiation field intensity ([@vdb82]; [@vd84]). Both CO and [C$_2$]{} are important to carbon chemistry everywhere in the ISM. CO absorption at 2.3 $\mu$m ($v = 2 - 0$) has not been observed as frequently as the $v = 1 - 0$ band at 4.7 $\mu$m, most likely owing to the smaller transition probabilities. It is, however, easier to observe at 2.3 $\mu$m since the thermal background from the Earth’s atmosphere is much lower than at 4.7 $\mu$m. Black and Willner (1984) and Black et al. (1990) used the lines at 2.3 $\mu$m to study the physical conditions and chemistry toward NGC 2024, NGC 2264 and AFGL 2591. Recently, Lacy et al. (1994) were able to observe both CO and H$_2$ absorption near 2.3 $\mu$m toward NGC 2024 IRS 2 yielding for the first time a direct measure of the H$_2$/CO ratio in a molecular cloud. Absorption lines of [C$_2$]{} have been used to address a number of problems in the ISM, including diffuse interstellar cloud chemistry (e.g. [@lsf95]), carbon chemistry in translucent clouds (e.g. [@vdb89]), molecular cloud envelopes (e.g. [@fetal94]), and the line of sight structure toward $\zeta$ Oph (e.g. [@crawford97]). Federman et al. (1994) provide a good compilation of [C$_2$]{} measurements up to 1992. In this paper we report on absorption-line studies of CO and [C$_2$]{} toward [W40 IRS 1a]{}. The line of sight is discussed in the next section. In Section 3 we describe our observations and the results, and in Section 4 we discuss their implications for the physical state of the material in this line of sight. The final section contains a brief summary of our conclusions.
Line Of Sight ============= Very little is known about the interstellar sightline to [W40 IRS 1a]{}. Both Crutcher and Chu (1982) and Smith et al. (1985) infer $A_V \sim 9$. The average extinction due to diffuse material over 400 pc is about 0.6 mag (see [@s78], p. 155). Therefore the obscuring material is most likely local to the W40 region and more dense than the diffuse ISM. In addition, Crutcher and Chu (1982) found a molecular [$^{13}$CO]{} component in front of the W40 region with $V_{LSR} = 8$ km s$^{-1}$, $N_{CO} \simeq 1.2 \times 10^{18}$ cm$^{-2}$, and $\log{(nT)} \sim 4$, further suggesting that the sightline passes through cold material, most likely associated with the neighboring molecular cloud. The line of sight intersects at least three distinct physical regimes. As noted above there appears to be some cold foreground material, perhaps associated with the nearby molecular cloud. Closer to [W40 IRS 1a]{} there must be a photon-dominated region (PDR), and closer still, the W40 H II region itself. If [W40 IRS 1a]{} has a circumstellar dust shell, it is not clear what its effect on the CO and [C$_2$]{} spectra might be. A warm dust shell might be associated with an elevated gas temperature, and hence the appearance of high-$J$ molecular rotational lines. The H II region is roughly 0.9 pc across and has an electron density of 200 cm$^{-3}$ (based on results in Crutcher and Chu \[1982\] and a distance of 400 pc). The column density of ionized hydrogen associated with the H II region should be about $3 \times 10^{20}$ cm$^{-2}$, assuming spherical geometry and that all hydrogen is ionized. Using the relation of total hydrogen column density to extinction found in Bohlin, Savage, & Drake (1978), the H II region should contribute $A_V \sim 0.1$ mag to the line-of-sight extinction. This, of course, assumes a standard gas-to-dust ratio, which may not apply to H II regions in general.
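The order of magnitude of this estimate is easy to verify. The sketch below assumes a mean chord of 2/3 the diameter through a uniform sphere and a standard gas-to-dust ratio $N_{\rm H}/A_V \approx 1.9 \times 10^{21}$ cm$^{-2}$ mag$^{-1}$; both are our assumptions, not values taken from the text.

```python
# Back-of-the-envelope column and extinction for the W40 H II region.
# The mean-chord geometry factor and the gas-to-dust ratio are assumptions.
PC_CM = 3.086e18                     # parsec in cm

n_e = 200.0                          # electron density, cm^-3
diameter_pc = 0.9                    # size of the H II region, pc

# Mean chord through a uniform sphere is 2/3 of its diameter.
path_cm = (2.0 / 3.0) * diameter_pc * PC_CM
N_Hplus = n_e * path_cm              # ionized hydrogen column, cm^-2

N_H_PER_AV = 1.9e21                  # cm^-2 mag^-1, assumed standard ratio
A_V = N_Hplus / N_H_PER_AV
print(f"N(H+) ~ {N_Hplus:.1e} cm^-2, A_V ~ {A_V:.1f} mag")
```

Both numbers come out within a factor of two of the values quoted in the text, which is all this estimate is meant to support.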
The contribution to [$A_V$]{} could increase or decrease depending primarily on the nature of any grain-destroying shocks which may have passed through the region. In either case, the H II region probably does not strongly contribute to the total extinction on the line of sight to [W40 IRS 1a]{}. In addition, molecules like CO and [C$_2$]{} cannot survive in the harsh environment, so the H II region should not contribute to the line-absorption for these molecules either. There is almost surely a PDR on the line of sight to [W40 IRS 1a]{}, which could account for as much as $A_V \sim 10$, nearly the entire measured visual extinction (see [@hollenbach90] for a good discussion). Hydrogen is expected to be neutral or molecular throughout the PDR. CO and perhaps [C$_2$]{} cannot survive in the regions of the PDR closest to the H II region. The effect of dust in PDRs is poorly understood, and so a quantitative treatment of these regions in general is difficult. In summary, CO and [C$_2$]{} absorption lines should sample the cold material in the foreground, at least part of the PDR, and none of the H II region. A circumstellar dust shell, if it exists, would most likely only contribute to high-$J$ molecular absorption. The visual extinction ($A_V \sim 9$) should arise almost entirely in the foreground material (which may be part of the local molecular cloud) and the PDR, though we note that the PDR could in theory produce all of the observed extinction. Data and Analysis for [$^{12}$CO]{} and [C$_2$]{} toward [W40 IRS 1a]{} ======================================================================= C$_2$ Observations at 8775 Å ---------------------------- The observations of [W40 IRS 1a]{} were obtained with the Cassegrain echelle spectrograph and the RCA CCD camera on the 4-m Mayall telescope at Kitt Peak National Observatory on the night of UT 16 June 1983.
Two spectra, each of one hour’s duration, were recorded in the region of the 2-0 band of the Phillips system (A$^1 \Pi_u$ – X$^1 \Sigma_g^+$) of [C$_2$]{}. The entire 2-0 band was contained in a single order with its center near 8775 Å and with a nominal reciprocal dispersion of 0.074 Å per pixel at the face of the CCD chip. Quartz lamp and thorium-argon spectra were obtained for flat-fielding and wavelength calibration. An 84 $\mu$m wide slit was employed, providing a nominal resolution of 0.15 Å which in turn corresponds to approximately 2 pixels on the chip. The two consecutive one-hour frames were co-added and averaged. Similarly, two separate flat-field frames were co-added and averaged, and the result was divided into the averaged stellar frame to produce the final photometric spectrum. Both flat-field and stellar frames were bias-corrected. The average of fifteen bias frames was first subtracted from the flat-field and stellar frames, after which a second-order bias correction was accomplished by subtracting from the average stellar and flat-field frames the mean row biases obtained from the masked bias columns of each. The final spectrum was extracted by collapsing the three columns along the slit direction which contained the maximum signal. This spectrum is shown in Figure 1. Spikes are due to cosmic ray hits and possibly OH airglow lines. Rotational lines in all three branches (P, Q and R) were identified, with the rotational quantum number $J$ reaching as high as 12 for the Q-branch. Unfortunately, there were problems with the wavelength calibration at the telescope and we were not able to derive a precise measure for the radial velocities of the [C$_2$]{} lines. Equivalent widths for these lines were determined using standard spectral reduction procedures in the NOAO/IRAF package, and column densities for each of the rotational levels were calculated from the rotational lines using a simple Gaussian curve-of-growth (cf. [@s78]).
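In the optically thin limit of that curve of growth, the column density in a rotational level follows directly from the measured equivalent width, $N = 1.13 \times 10^{20}\, W_\lambda / (f \lambda^2)$ with $W_\lambda$ and $\lambda$ in Å. A minimal sketch; the 5 mÅ width and the per-line $f$-value below are illustrative placeholders, not our measurements:

```python
# Optically thin column density from an equivalent width (linear part of
# the curve of growth); W and lambda in Angstroms, N in cm^-2.
def column_density(W_A, f, lam_A):
    return 1.13e20 * W_A / (f * lam_A**2)

# Hypothetical 5 mA C2 Phillips line near 8775 A; this per-line f-value
# is a placeholder, not a Honl-London-weighted measured value.
N = column_density(5.0e-3, 4.0e-4, 8775.0)
print(f"N_J ~ {N:.1e} cm^-2")
```

Saturated lines fall below this linear relation, which is why the Doppler constant $b$ has to be estimated before the stronger lines can be inverted.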
Initially, the $f$-value derived by Erman et al. (1982) was used. Recently, the Phillips system transition probabilities have been refined (see [@lsf95] and references therein) and the column densities have been adjusted to reflect the new $f$-value derived by Lambert, Sheffer, and Federman (1995), $(1.23 \pm 0.16) \times 10^{-3}$. The data for each line and the column densities for each $J$ are shown in Tables 1 and 2. Several of the lines exhibited effects of saturation, and an estimate of the Doppler constant ($b$) was derived by requiring that rotational lines originating from the same rotational energy level yield the same rotational population for that level. This method is a simple extension of the doublet ratio method used in the analysis of saturated atomic species. The best value for the Doppler constant was found to be 1.25 km s$^{-1}$. The $J = 12$ data are not included due to high uncertainty: Only the Q12 line was observed and it is very weak. [$^{12}$CO]{} Observations at 2.3 $\mu$m ---------------------------------------- The $v=2-0$ rotational-vibrational lines for CO fall near 2.3 $\mu$m, within the K band. Since the oscillator strengths are smaller, the $v=2-0$ lines are not as saturated as their $v=1-0$ cousins at 4.7 $\mu$m. In addition, it is somewhat easier to observe at 2.3 $\mu$m, as the sky emission at 4.7 $\mu$m is much greater and more problematic. Contamination from stellar CO absorption should be negligible since [W40 IRS 1a]{} appears to be a hot O or B star, which would not allow stellar CO to survive ([@cc82]; [@setal85]). Circumstellar gas and dust, if it exists, could affect the high-$J$ molecular levels. Rotational-vibrational transitions at 2.3 $\mu$m for [$^{12}$CO]{} were observed UT 11 June 1994 and UT 21 July 1997 using the CSHELL IR spectrometer at NASA’s Infrared Telescope Facility (IRTF) on Mauna Kea. The IRTF is a 3 m primary, off-axis Cassegrain telescope yielding f/13.67 at the spectrometer slit.
The CSHELL is a cryogenically cooled echelle spectrometer with a $256 \times 256$ SBRC InSb detector array ([@greene93]). We used the 0.5$''$ slit, which gives $R \simeq 43,000$. Only the R0, 1, and 2 transitions were detected (Figure 3) with upper limits placed on R3, 4 and 5. A summary of the observations is shown in Table 3. The data were obtained and reduced following typical IR observing procedures which we briefly summarize below. Dark frames were coadded and subtracted from the flat-field image for each wavelength setting. [W40 IRS 1a]{} and the standard stars were observed in both the “A” and “B” beams (each beam places the spectrum in a different spatial location on the detector). Beam differences (A-B and B-A) were calculated to eliminate sky emission, then coadded and normalized using the flat field for the appropriate wavelength setting. Wavelength calibration was achieved using Ar, Kr, and Xe lamps with 3 lines per wavelength setting. One dimensional spectra for [W40 IRS 1a]{}, the standard stars, and the calibration lamps were then extracted from the detector images using the APALL task in IRAF. Once the wavelength solution was applied, telluric absorption features were identified in both the standard stars and [W40 IRS 1a]{} spectra. Telluric lines were eliminated from the [W40 IRS 1a]{} spectra using the IRAF task TELLURIC, which shifts and scales the standard star spectrum before dividing into the object spectrum. It is important to note that merely dividing the standard star spectrum into the object spectrum does not produce the [*true*]{} object spectrum without telluric features. To properly remove telluric features, one must generate an [*expected*]{} object spectrum, convolve it with the standard star spectrum, and compare to the actual observed object spectrum. Lacy et al. (1994) give a good discussion of this technique. We opted to merely divide, however, as our data quality did not warrant more sophisticated techniques.
The R0, 1, and 2 lines are shown in Figure 3. These lines imply $v_{lsr} = 2 \pm 2$ km s$^{-1}$, in contrast with the 8 km s$^{-1}$ foreground molecular material seen in emission ([@cc82]). Since emission line data sample a large, beam-averaged area, and absorption lines just a pencil beam, we do not necessarily expect velocities derived from both to agree. We merely note that they are not wildly different. $R = 43,000$ corresponds to a resolution of $\Delta v \simeq 7$ km s$^{-1}$. The R0, 1, and 2 lines all have FWHM $\sim 8$ km s$^{-1}$ and hence are not well-resolved. After continuum normalization, each line was directly integrated for equivalent width. Errors reflect continuum-placement ambiguity and are quoted at the $2\sigma$ confidence level. All the widths (along with transition information and errors) are given in Table 4. Widths for the R3, R4 and R5 lines are 2$\sigma$ upper limits based on the noise in the continuum at the expected line position. The equivalent width of any absorption line is dependent on the column density of the species and, if saturated, the velocity parameter [*b*]{} (cf. [@s78]). We generated a model curve of growth (COG) using $b = 1.25$ km s$^{-1}$, as determined from the C$_2$ lines. Each [$^{12}$CO]{} equivalent width was fit to the COG independently and the column density for each rotational level ($N_J$) is given in Table 5. The R2 through R5 lines were clearly optically thin while the R0 and R1 lines were more optically thick, falling on the transition from the linear portion to the saturated part of the COG. If the $b$-value for CO is greater than 1.25 km s$^{-1}$, then the R0 and R1 lines become less saturated and their abundances drop by $2 \times 10^{17}$ and $0.5 \times 10^{17}$ cm$^{-2}$, respectively. Summing the abundances for each line gives a total column density of $N_{CO} = (1.1 \pm 0.2) \times 10^{18}$ cm$^{-2}$ (assuming $^{12}$CO to be the dominant isotope).
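A Gaussian-profile curve of growth of this kind is straightforward to generate numerically. The sketch below shows how saturation sets in near central optical depth $\tau_0 \sim 1$ for $b = 1.25$ km s$^{-1}$; the oscillator strength and columns are illustrative placeholders, not the 2-0 R-line values:

```python
import numpy as np

# Equivalent width from a Gaussian (Doppler) curve of growth.
# tau0 = 1.497e-15 * N * f * lambda(A) / b(km/s) is the central optical depth.
def equiv_width_mA(N, f, lam_A, b_kms):
    c_kms = 2.998e5
    tau0 = 1.497e-15 * N * f * lam_A / b_kms
    v = np.linspace(-8.0 * b_kms, 8.0 * b_kms, 4001)   # km/s offsets
    tau = tau0 * np.exp(-(v / b_kms) ** 2)
    W_A = (lam_A / c_kms) * np.sum(1.0 - np.exp(-tau)) * (v[1] - v[0])
    return 1e3 * W_A, tau0                             # W in mA

W1, t1 = equiv_width_mA(1e17, 3e-7, 23000.0, 1.25)     # marginally saturated
W4, t4 = equiv_width_mA(4e17, 3e-7, 23000.0, 1.25)     # clearly saturated
# Quadrupling the column increases W by much less than a factor of 4.
print(f"tau0={t1:.2f}: W={W1:.0f} mA;  tau0={t4:.2f}: W={W4:.0f} mA")
```

In this regime the inferred $N_J$ depends on $b$, which is why the R0 and R1 columns shift if $b > 1.25$ km s$^{-1}$.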
Depending on the excitation of the higher $J$ levels, this value could be slightly too small, and if $b > 1.25$ km s$^{-1}$, then it would be too high. Our column density for CO is nearly identical to that inferred by Crutcher & Chu (1982) for the foreground, $N_{CO} \simeq 1.2 \times 10^{18}$ cm$^{-2}$. CO Ice Band at 4.7 $\mu$m ------------------- In an effort to detect the CO ice band at 4.7 $\mu$m we made observations with moderate spectral resolution at the United Kingdom IR Telescope (UKIRT) using the CGS4 on 19 August 1998. All observations were carried out while nodding along the slit to remove atmospheric emission. The total integration time on [W40 IRS 1a]{} was 11.2 minutes at an average airmass of 1.45. The spectrum for [W40 IRS 1a]{} was ratioed by BS 7236 (B9V) to remove telluric absorption features. The S/N is $\sim 44$ at 4.67 $\mu$m and no CO ice feature is apparent. An upper limit for the optical depth of the band is $\tau_{4.67} < 0.02$ (2$\sigma$), implying $N_{CO}(Ice) < 10^{16}$ cm$^{-2}$ ([@ta87]). The depletion of CO into icy mantles for the [W40 IRS 1a]{} sightline must be less than 1%. Discussion ========== Excitation Conditions and the Foreground Cloud Physical Properties ------------------------------------------------------------------ We have three basic diagnostics for the temperature and density toward [W40 IRS 1a]{}. The $^{13}$CO emission data imply $\log{(nT)} \sim 4$, much lower than in the nearby molecular cloud, $\log{(nT)} \sim 6.5$ ([@cc82]). Excitation conditions can also be derived from the rotational populations of CO and [C$_2$]{}. The rotational level populations of the [C$_2$]{} Phillips system ($v = 0,J$) are attained via radiative pumping. Lifetimes of these levels are so long that collisions and upward electronic transitions are the most important depopulation mechanisms. Hence the rotational populations reflect the competition between pumping and collisions and are non-thermal in general ([@vdb82]).
For densities greater than $100$ cm$^{-3}$ the $J = 0$ and $J = 2$ levels are very nearly thermal. Otherwise, the populations depend on the thermal temperature, $T$, and the radiation parameter, $\frac{n_c \sigma}{I_R}$, where $n_c$ is the collision partner density (usually $n($H$) + n($H$_2)$), $\sigma$ is the effective cross section for collisional de-excitation, and $I_R$ is the scaling factor for the radiation field in the far-red ([@vdb82]). Total molecular abundance and excitation temperatures were calculated from models which fit a Maxwell-Boltzmann population distribution to the observed rotational level abundances. As had been found for other relatively dense clouds ([@lc83]), the rotational abundances could not be fit with a single temperature distribution, presumably as a result of radiative pumping ([@chaffee80]; [@vdb82]). Consequently, we fit these data with a two-temperature model used by Lutz and Crutcher (1983): J-levels 4 through 10 were fit with an excitation temperature ($T_{ex}$) of 126 K, which we associate with the effects of radiative pumping. After correcting the populations in J = 0 and 2 for the contributions from the 126 K population distribution, we derived a J=2/J=0 excitation temperature of 39 K, which we associate with the thermal distribution of the gas. The resulting total column density of [C$_2$]{} towards [W40 IRS 1a]{} is $(7.0 \pm 0.4) \times 10^{14}$ cm$^{-2}$, based on a Boltzmann distribution for both temperatures. This is the highest column density of [C$_2$]{} yet seen in absorption. Figure 2 shows the final fits. In comparing the results for [W40 IRS 1a]{} to the radiative pumping models calculated by van Dishoeck (1984), we found that they are best characterized by her model with a kinetic temperature of 40 K and a radiation parameter, $\frac{n_c \sigma}{I_R}$ of $5.27 \times 10^{-14}$. 
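The thermal ($J=2$/$J=0$) excitation temperature quoted above follows from inverting the Boltzmann ratio for the two lowest even-$J$ levels of [C$_2$]{}. A sketch, with $B(\mathrm{C}_2) = 1.82$ cm$^{-1}$ and an illustrative population ratio rather than our measured columns:

```python
import math

# T_ex from the C2 J=2/J=0 ratio: N2/N0 = (g2/g0) exp(-E2/kT), so
# T = (E2/k) / ln(g2 N0 / (g0 N2)).  E_J = B J(J+1), with B = 1.82 cm^-1.
B_CM = 1.82
E2_K = 6.0 * B_CM * 1.4388            # E(J=2)/k in Kelvin

def t_ex(N0, N2, g0=1.0, g2=5.0):
    return E2_K / math.log(g2 * N0 / (g0 * N2))

# An illustrative ratio N2/N0 = 3.35 reproduces a thermal value near 39 K.
T_thermal = t_ex(1.0, 3.35)
print(f"T_ex = {T_thermal:.0f} K")
```

The same inversion applied to the pumped high-$J$ levels gives the second, higher temperature component.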
Assuming $I_R = 1$, $\sigma \sim 2 \times 10^{-16}$ cm$^2$ ([@vdb89]), and that hydrogen (H + H$_2$) is the only important collision partner, we get $n_H \sim 250$ cm$^{-3}$ for the [W40 IRS 1a]{} sightline. Since the value of $\sigma$ is not known to better than a factor of 2 ([@vdb89]), our value for $n_H$ is equally imprecise. In addition, for an enhanced (or depleted) radiation field, the estimated collision (hydrogen) density would scale accordingly. Rotational transitions of CO are primarily excited by collisions with hydrogen. De-excitation can occur via collisions with hydrogen or by spontaneous line emission. The critical density at which collisions begin to overtake spontaneous emission is around 3000 cm$^{-3}$. More detailed studies of the excitation, photodissociation, and chemistry have been conducted by van Dishoeck & Black (1988) and Warin, Benayoun, & Viala (1996). In general it is found that the rotational populations of CO are sub-thermal except for the first few levels in dense cases ([@wbv96]). Hence the assumption of LTE will almost always produce excitation temperatures which underestimate the actual thermal temperature ($T_{ex} < T_{kin}$). Recent work by Wannier, Penprase, & Andersson (1997) suggests that the dominant form of CO excitation in diffuse and translucent clouds can be line emission from nearby molecular clouds, if the clouds have similar velocity vectors. As a start, we constructed a Boltzmann plot for the [$^{12}$CO]{} data (Figure 4). Fitting the temperature to all lines gives $T_{ex} = 7$ K. This value is similar to that found by Crutcher & Chu (1982), but it is important to note that the excitation temperatures for [$^{13}$CO]{} and [$^{12}$CO]{} are different in general ([@wbv96]). Since the thermal temperature is greater than the CO rotational temperature in general, $T_{kin} > 7$ K, which is consistent with the [C$_2$]{} analysis.
Warin, Benayoun, & Viala (1996) have constructed models of CO excitation for the dense, translucent, and diffuse cloud regimes, including UV photodissociation. These models show very clearly that the excitation temperature is much lower than the thermal temperature for diffuse and translucent clouds. The excitation of CO in dense clouds is nearly thermal and $T_{ex} = T_{kin}$. If we assume that the cloud(s) toward [W40 IRS 1a]{} are dense, then $T_{kin} \simeq 7$ K. For diffuse and translucent clouds, the level populations observed can be scaled (see Section 4.2 of Warin, Benayoun, & Viala \[1996\]) to derive “pseudo-LTE” rotational populations, i.e. representative LTE populations for the actual thermal temperature of the gas. A slope can be fitted to these populations and a thermal temperature inferred. If the cloud(s) toward [W40 IRS 1a]{} are translucent, then the R0 population is reduced by $\sim 1.75$ (changes in R1 and R2 happen to be negligible). The resulting populations imply $T_{kin} \simeq 9$ K (see Figure 4), which still does not agree with the [C$_2$]{} analysis. It is not clear, however, that the models constructed by Warin, Benayoun, & Viala (1996) apply to our line of sight: The UV field is probably higher than normal near W40, and it seems very likely that there are multiple clouds on the sightline. Since the cloud(s) on the [W40 IRS 1a]{} line of sight are very near the W40 molecular cloud, and the LSR velocities of both are similar (5 and 8 km s$^{-1}$), the excitation for [$^{12}$CO]{} may be at least partially radiative. Comparing to models constructed by Wannier, Penprase, & Andersson (1997) it is apparent that our $T_{ex} = 7$ K is degenerate: It can be accounted for by many combinations of radiative excitation, collision partner density, and kinetic temperature. The [C$_2$]{} excitation conditions can help constrain those for [$^{12}$CO]{}. 
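The Boltzmann-plot fit and the pseudo-LTE rescaling described above can be sketched numerically. Here $B(\mathrm{CO}) = 1.93$ cm$^{-1}$, and the level columns are illustrative round numbers consistent with $T_{ex} \sim 7$ K, not our measured values:

```python
import numpy as np

# Boltzmann plot for CO J = 0, 1, 2: the slope of ln(N_J/g_J) versus
# E_J/k is -1/T_ex.  Scaling the J=0 population down by 1.75 is the
# pseudo-LTE correction for a translucent cloud.
B_K = 1.93 * 1.4388                        # rotational constant in Kelvin
J = np.array([0, 1, 2])
E_K = B_K * J * (J + 1)                    # level energies E_J/k
g = 2 * J + 1
N_J = np.array([5.0e17, 6.8e17, 2.3e17])   # illustrative level columns

def t_fit(N):
    slope, _ = np.polyfit(E_K, np.log(N / g), 1)
    return -1.0 / slope

N_scaled = N_J.copy()
N_scaled[0] /= 1.75                        # R0 correction; R1, R2 negligible
print(f"T_ex = {t_fit(N_J):.1f} K -> pseudo-LTE T_kin = {t_fit(N_scaled):.1f} K")
```

With these inputs the fit returns roughly 7 K before, and roughly 9 K after, the correction, mirroring the two temperatures discussed in the text.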
If we assume $T_{kin} \simeq 40$ K, the density of the gas can range from 0 to 450 cm$^{-3}$, depending on the efficiency of radiative excitation. Collisional excitation becomes dominant as $n$ approaches 450 cm$^{-3}$ (which we use as an upper limit). If we further assume $n \sim 250$ cm$^{-3}$ (as indicated by the [C$_2$]{} data), then the excitation of CO can only be explained by collisional and radiative processes combined. A summary of the [W40 IRS 1a]{} sightline cloud physical properties based on the CO and [C$_2$]{} data in this paper as well as the CO emission study by Crutcher and Chu (1982) is given in Table 6. As discussed in Section 2, we assume that nearly all of the absorption on this line of sight arises in one cloud complex (single or multiple components) local to the W40 region, since it is only 400 pc distant (i.e., the cloud(s) cannot be diffuse). The physical conditions are best determined by [C$_2$]{}: $n \sim 250$ cm$^{-3}$ and $T \simeq 40$ K. The CO emission and absorption values and limits agree very well. The cloud(s) are clearly not dense enough to be considered molecular, but could be considered translucent. Translucent lines of sight typically have $A_V = 2$--$5$ and can be studied via both absorption lines (UV, optical, and/or IR), and radio emission lines. In addition, translucent material is expected in the outer envelopes of molecular clouds ([@vdb89]). We postulate that the [W40 IRS 1a]{} sightline is composed of multiple translucent components associated with the W40 molecular cloud. Molecular Abundances -------------------- CO and [C$_2$]{} abundances have been determined jointly for a number of sightlines, a sample of which is shown in Table 7. [W40 IRS 1a]{} is one of the few sightlines allowing a direct comparison of CO and [C$_2$]{} in [*absorption*]{}. The abundance ratio we derive is CO/[C$_2$]{} $= 1600 \pm 600$ (2$\sigma$).
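The quoted ratio follows from the two total column densities; propagating the quoted uncertainties in quadrature (treating them as independent, a simplification) gives the error bar:

```python
import math

# CO/C2 abundance ratio with simple quadrature error propagation.
N_CO, dN_CO = 1.1e18, 0.2e18          # cm^-2
N_C2, dN_C2 = 7.0e14, 0.4e14          # cm^-2

ratio = N_CO / N_C2
d_ratio = ratio * math.hypot(dN_CO / N_CO, dN_C2 / N_C2)
print(f"CO/C2 = {ratio:.0f} +/- {d_ratio:.0f}")
```

Doubling the propagated error recovers the $\pm 600$ figure quoted above, consistent with the inputs being $1\sigma$ uncertainties.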
The depletion of CO onto dust is negligible ($< 1 \%$), in view of our failure to detect CO ice, but the [C$_2$]{} depletion has not been assessed. Note that the CO/[C$_2$]{} ratio increases from diffuse to translucent and molecular regimes indicating (to first order) that the formation/destruction rates favor CO over [C$_2$]{} as cloud type changes. This may merely be due to self-shielding of CO, but other processes may also be at work: The data are not yet precise enough to tell. Assuming all hydrogen is molecular, we can estimate the amount of H$_2$ on the [W40 IRS 1a]{} line of sight from $N_{CO}$. Using an H$_2$/CO ratio of $3700_{-2700}^{+3100}$ (based on the direct comparison of H$_2$ and CO IR absorption lines toward NGC 2024 IRS 2 ($A_V = 21.5 \pm 5$), [@letal94]), we find $N_{H_2} = 4.4_{-3.2}^{+3.8} \times 10^{21}$ cm$^{-2}$. The H$_2$/CO ratio is derived from a direct comparison of weak absorption lines and hence should be reliable, but is based on only one data point: NGC 2024 IRS 2. If this sightline is abnormal in any way, then the ratio does not necessarily apply to other lines of sight such as [W40 IRS 1a]{}. Dust Indicators --------------- The [$A_V$]{} calculated by Crutcher and Chu (1982) and Smith et al. (1985) assumes a “normal” extinction law ($R_V \sim 3$), which may be incorrect. The ratio of visual to selective extinction, $R_V$, is a grain size distribution indicator ([@ccm89]): $R_V < 3$ implies an abundance of small grains compared to the typical size distribution, whereas $R_V > 3$ implies that larger grains dominate the extinction. As interstellar clouds collapse, the dust grains tend to agglomerate, eliminating the smaller particles ([@j80]). If the line of sight toward [W40 IRS 1a]{} does indeed sample the edge of a molecular cloud, then we would expect a population of larger grains and $3 < R_V < 5$. Smith et al. (1985) found $(B-V) = 2.2$ for [W40 IRS 1a]{} and, assuming it is an OB star, $E_{B-V} \simeq 2.5$.
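The extinction bracket this color excess implies, and the CO column it predicts through the Frerking, Langer & Wilson (1982) relation used in this subsection, are simple arithmetic on numbers quoted in the text ($A_V = R_V E_{B-V}$, $N(\mathrm{C^{18}O}) = 1.7 \times 10^{14} (A_V - 1.3)$, and $N(\mathrm{CO})/N(\mathrm{C^{18}O}) = 490$):

```python
# A_V range for R_V = 3-5 at E(B-V) = 2.5, and the gas-phase CO column
# predicted from the Frerking, Langer & Wilson (1982) C18O relation.
E_BV = 2.5
predictions = {}
for R_V in (3.0, 5.0):
    A_V = R_V * E_BV                     # A_V = R_V * E(B-V)
    N_CO = 490.0 * 1.7e14 * (A_V - 1.3)  # predicted N(CO), cm^-2
    predictions[R_V] = (A_V, N_CO)
    print(f"R_V = {R_V}: A_V = {A_V:.1f}, N_CO ~ {N_CO:.1e} cm^-2")
```

The bracket spans roughly $(0.5$--$1.0) \times 10^{18}$ cm$^{-2}$, close to the measured total.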
For $R_V = 3$--$5$, we get a range in [$A_V$]{} of 7.5 to 12.5, which agrees with the previous work by Crutcher and Chu (1982) and Smith et al. (1985). Our calculated value of [$A_V$]{} can be used to predict the amount of CO expected toward [W40 IRS 1a]{}. As discussed in Section 2, however, the extinction and CO absorption may not be well correlated in the PDR, leading us to slightly [*overestimate*]{} $N_{CO}$ based on [$A_V$]{}. Conversely, most studies of the CO/[$A_V$]{} ratio have not addressed the depletion of CO onto ice mantles (which may be as high as 40%, [@cetal95]). Since CO is almost entirely in the gas phase for the [W40 IRS 1a]{} line of sight, these studies will [*underestimate*]{} the predicted CO column density. Using radio maps, spectroscopic data, and star counts in the Taurus and $\rho$ Oph clouds, Frerking, Langer, and Wilson (1982) found: $$N(C^{18}O) = 1.7 \times 10^{14} (A_V - 1.3)$$ for $N$ in cm$^{-2}$ and $4 < A_V < 21$. This relation predicts $N_{CO} = (0.5 - 1.0) \times 10^{18}$ cm$^{-2}$ (with $N($CO$)/N($C$^{18}$O$) = 490$) for our line of sight, very nearly the same as we have measured. Interestingly, it may be that the PDR effect nearly balances the depletion effect. There is no strong correlation between [C$_2$]{} and $E_{B-V}$ ([@vdb89]), though other dust indicators have not been investigated. The lack of correlation may be due to the fact that many of the stars included in [C$_2$]{} absorption studies to date have been distant supergiants, where some of the sightline extinction is caused by diffuse clouds, which lack abundant [C$_2$]{}. Comparing the [W40 IRS 1a]{} sightline to others in Table 7, it is apparent that CO and [C$_2$]{} generally increase with [$A_V$]{}, but the data are too scattered to draw any strong conclusions. The measurement of CO and [C$_2$]{} absorption toward Cyg OB2 No. 12, the classic diffuse cloud line of sight with $A_V \simeq 10$, is quite interesting.
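The extinction range and the predicted CO column quoted above can be reproduced directly; the following sketch uses only values stated in the text:

```python
# A_V = R_V * E(B-V), with E(B-V) ~ 2.5 from Smith et al. (1985), then the
# Frerking, Langer & Wilson (1982) relation for N(C18O), scaled by the
# adopted isotopic ratio N(CO)/N(C18O) = 490.
E_BV = 2.5
CO_to_C18O = 490.0

for R_V in (3.0, 5.0):
    A_V = R_V * E_BV
    N_C18O = 1.7e14 * (A_V - 1.3)        # cm^-2, valid for 4 < A_V < 21
    N_CO = CO_to_C18O * N_C18O           # predicted 12CO column (cm^-2)
    print(f"R_V = {R_V:.0f}: A_V = {A_V:.1f}, N_CO = {N_CO:.1e} cm^-2")
```

The two endpoints span the $(0.5-1.0)\times10^{18}$ cm$^{-2}$ range quoted above.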
Lutz and Crutcher (1983) found $N_{C_2} = (3.0 \pm 0.2) \times 10^{14}$ cm$^{-2}$ (adjusted for the new Phillips system $f$-values in Lambert, Sheffer, and Federman (1995)), about half of the abundance on the [W40 IRS 1a]{} sightline. Recently, McCall et al. (1998) measured CO IR absorption toward Cyg OB2 No. 12, yielding $N_{CO} = 2 \times 10^{16}$ cm$^{-2}$, a factor of 60 less than what we find for [W40 IRS 1a]{} over nearly the same total visual extinction. The 4.7 $\mu$m CO ice feature is not seen on this line of sight; therefore, depletion of CO onto ice mantles cannot readily explain the discrepancy. This clearly shows that the gaseous CO-to-dust ratio changes from diffuse to denser environments. As an interesting side note, we have compared the total hydrogen column density (assuming all of it to be molecular) for both [W40 IRS 1a]{} and NGC 2024 IRS 2 to the hydrogen/reddening correlations found by Bohlin, Savage, and Drake (1978) and Dickman (1978) (Figure 5). The $E_{B-V}$ for NGC 2024 IRS 2 assumes $A_V = 21$ ([@letal94]; [@jpl84]) and $R_V$ ranging from 3 to 5. The correlations (“intercloud” and “cloud”) from Bohlin, Savage, and Drake (1978) are based on Ly$\alpha$ absorption for lightly reddened sightlines with $E_{B-V} < 0.6$, while the relation from Dickman (1978), derived from CO emission, is good to $E_{B-V} \sim 3$. In Figure 5 we have extrapolated out to $E_{B-V} = 10$. The data from [W40 IRS 1a]{} and NGC 2024 IRS 2 agree with the extrapolated relations surprisingly well, within factors of 2 or so.

Conclusions
===========

We have used [$^{12}$CO]{} and [C$_2$]{} IR and visible absorption lines to investigate the line of sight toward [W40 IRS 1a]{}. The [C$_2$]{} data were obtained at the 4 m Mayall Telescope at KPNO and the [$^{12}$CO]{} data with the CSHELL on the IRTF. The CO ice band at 4.7 $\mu$m was not detected.
This sightline is clearly denser than diffuse, based on $A_V \sim 9$ and a distance of only 400 pc (we calculate a range in [$A_V$]{} of 7.5 to 12.5, assuming a population of large grains). Our [$^{12}$CO]{} and [C$_2$]{} data show:

1\. The [C$_2$]{} excitation conditions, $T \simeq 40$ K and $n \sim 250$ cm$^{-3}$, agree with limits determined from CO emission and absorption. Since the [$A_V$]{} is large and both absorption and emission measures are available, we postulate that the [W40 IRS 1a]{} sightline has multiple translucent components.

2\. The non-detection of CO ice indicates a CO depletion of $\delta < 1\%$.

3\. Comparing to other sightlines in Table 7, we find an overall increase in $N_{CO}$ and $N_{C_2}$ with increasing [$A_V$]{}. The data, however, are too scattered to draw any further conclusions at this point. The ratio of CO to [C$_2$]{} appears to increase from diffuse to translucent and molecular sightlines, probably due to the self-shielding of CO.

4\. The column density of CO toward [W40 IRS 1a]{} is $\sim 60$ times that found for Cyg OB2 No. 12, the classic diffuse line of sight ([@mccall98]), despite very similar [$A_V$]{} values. This is not a depletion effect, and suggests that the CO-to-dust ratio changes from diffuse to dense environments.

5\. Finally, the relationship of hydrogen column density to interstellar reddening ([@bsd78]; [@dickman78]) was found to be roughly consistent with recent data out to $E_{B-V} \sim 10$ (Figure 5). Contrary to the CO-to-dust ratio, the hydrogen-to-dust ratio appears to be valid from diffuse to dense regimes.

This research was supported by a NASA Graduate Student Research Program grant (NGT5-50032) to R. Y. Shuping and NASA grant NAG5-4184 to T. P. Snow. We would like to thank the staff and operators at KPNO, the IRTF (J. Rayner), and the UKIRT (T. Kerr) for their help and tireless duty through the night. Many thanks to J. Black and the referee S.
Federman, whose comments and suggestions greatly improved the content and quality of this paper. B. L. Lutz would like to thank K. Sheth, who carried out the curve-of-growth analysis and temperature fits for the [C$_2$]{} lines. R. Y. Shuping would like to thank B-G Andersson, J. H. Lacy, and D. Jansen for their helpful input, and also D. E. Schutz for useful conversations and support.

Black, J. H. & Willner, S. P. 1984, , 279, 673
Black, J. H., van Dishoeck, E. F., Willner, S. P., & Woods, R. C. 1990, , 358, 459
Bohlin, R. C., Savage, B. D., & Drake, J. F. 1978, , 224, 132
Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, , 345, 245
Chackerian, C. & Tipping, R. H. 1983, J. Molec. Spectrosc., 99, 431
Chaffee, F. H., Jr., Lutz, B. L., Black, J. H., Vanden Bout, P. A., & Snell, R. L. 1980, , 236, 474
Chiar, J. E., Adamson, A. J., Kerr, T. H., & Whittet, D. C. B. 1995, , 455, 234
Crawford, I. A. 1997, , 290, 41
Crutcher, R. M. & Chu, Y. 1982, in Regions of Recent Star Formation, eds. R. S. Roger & P. E. Dewdney, 53
Dickman, R. L. 1978, , 37, 407
Erman, P., Lambert, D. L., Larsson, M., & Mannfors, B. 1982, , 253, 983
Federman, S. R., Strom, C. J., Lambert, D. L., Cardelli, J. A., Smith, V. V., & Joseph, C. L. 1994, , 424, 772
Federman, S. R., Lambert, D. L., van Dishoeck, E. F., Andersson, B-G, & Zsargo, J. 1999, in prep.
Frerking, M. A., Langer, W. D., & Wilson, R. W. 1982, , 262, 590
Gredel, R., van Dishoeck, E. F., de Vries, C. P., & Black, J. H. 1992, , 257, 245
Gredel, R., van Dishoeck, E. F., & Black, J. H. 1994, , 285, 300
Greene, T. P., Tokunaga, A. T., Toomey, D. W., & Carr, J. S. 1993, Proc. SPIE, 1946, 313
Hollenbach, D. J. 1990, in ASP Conf. Ser. 12, The Evolution of the Interstellar Medium, ed. L. Blitz, 167
Jiang, D. R., Perrier, C., & Lena, P. 1984, , 135, 249
Jura, M. 1980, , 235, 63
Lacy, J. H., Knacke, R., Geballe, T. R., & Tokunaga, A. T. 1994, , 428, L69
Lambert, D. L., Sheffer, Y., & Federman, S. R. 1995, , 438, 740
Lutz, B. L. & Crutcher, R. M. 1983, , 271, L101
Magnani, L. & Onello, J. S. 1995, , 443, 169
Mantz, A. W. & Maillard, J.-P. 1974, J. Molec. Spectrosc., 53, 466
McCall, B. J., Geballe, T. R., Hinkle, K. H., & Oka, T. 1998, Science, 279, 1910
Shull, J. M. & Beckwith, S. 1982, , 20, 163
Smith, J., Bentley, A., Castelaz, M., Gehrz, R. D., Grasdalen, G. L., & Hackwell, J. A. 1985, , 291, 571
Spitzer, L., Jr. 1978, Physical Processes in the Interstellar Medium (New York: Wiley)
Tielens, A. G. G. M. & Allamandola, L. J. 1987, in Physical Processes in Interstellar Clouds, eds. G. E. Morfill & M. Scholer (Dordrecht: Reidel), 333
Vallée, J. P. 1987, , 178, 237
Vallée, J. P. & MacLeod, J. M. 1991, , 250, 143
Vallée, J. P., Guilloteau, S., & MacLeod, J. M. 1992, , 266, 520
Vallée, J. P. & MacLeod, J. M. 1994, , 108, 998
van Dishoeck, E. F. 1984, PhD Thesis, Leiden University
van Dishoeck, E. F. & Black, J. H. 1982, , 258, 533
van Dishoeck, E. F. & Black, J. H. 1988, , 334, 771
van Dishoeck, E. F. & Black, J. H. 1989, , 340, 273
Wannier, P., Penprase, B. E., & Andersson, B-G 1997, , 487, L165
Warin, S., Benayoun, J. J., & Viala, Y. P. 1996, , 308, 535
Zeilik, M. & Lada, C. J. 1978, , 222, 896

[ccc]{}
R0 & 8757.7 & $37.0 \pm 3.0$
R2 & 8753.9 & $52.5 \pm 1.5$
Q2 & 8761.2 & $55.0 \pm 5.0$
P2 & 8766.0 & $18.0 \pm 3.0$
R4 & 8751.6 & $42.5 \pm 7.5$
Q4 & 8763.7 & $36.0 \pm 3.0$
P4 + Q8 & 8773.3 & $43.0 \pm 4.0$
R6 & 8750.8 & $22.8 \pm 3.0$
Q6 & 8767.7 & $31.5 \pm 4.5$
P6 & 8782.3 & $12.5 \pm 2.5$
P8 & 8792.6 & $9.5 \pm 2.0$
Q10 & 8780.1 & $17.5 \pm 2.5$
Q12 & 8788.5 & $6.5 \pm 2.0$

[ccc]{}
0 & 0.000 & $5.4 \pm 0.8$
2 & 15.635 & $19.8 \pm 2.0$
4 & 52.114 & $11.2 \pm 1.6$
6 & 109.430 & $9.2 \pm 0.7$
8 & 187.572 & $5.6 \pm 0.7$
10 & 286.526 & $4.1 \pm 0.4$
Total & & $70 \pm 4$

[ccccc]{}
2.334-2.340 & R3, R4 and R5 & 25 & 90 & BS 6714, 7110 and 5 Aql
2.340-2.346 & R0, 1, and 2 & 25 & 90 & Spica

[cccc]{}
R0 & 2.34530523 & 8.78 & $130 \pm 20$
R1 & 2.34326929 & 5.89 & $100 \pm 10$
R2 & 2.34127497 & 5.33 & $60 \pm 17$
R3 & 2.33932336 & 5.11 & $< 20$
R4 & 2.33741273 & 5.00 & $< 20$
R5 & 2.33554435 & 4.94 & $< 20$

[ccc]{}
0 & 0 & $5.0 \pm 2.0$
1 & 5.532 & $4.0 \pm 1.5$
2 & 16.60 & $2.2 \pm 0.7$
3 & 33.19 & $< 0.8$
4 & 55.32 & $< 0.8$
5 & 82.98 & $< 0.8$
Total & & $11 \pm 2$

[ccccccc]{}
[C$_2$]{} & ... & 1.25 & $7.0 \pm 0.4 \times 10^{14}$ & 39 and 126 & $\sim 250$ & ...
$^{13}$CO Em. & 8 & ... & $1.2 \times 10^{18}$ & ... & ... & $\sim 4$
$^{12}$CO & $2 \pm 2$ & ... & $1.1 \pm 0.2 \times 10^{18}$ & $> 7$ & $< 450$ & ...
[ccccccc]{}
W40 IRS 1a & $1.1 \pm 0.2$(18) & $7.0 \pm 0.4$(14) & $1600 \pm 600$ & T & 7.5 – 12.5 & This Work
Cyg OB2 \#12 & 2(16) & $3.0 \pm 0.2$(14) & $67 \pm 5$ & D & $\sim 10$ & 1, 2
HD94413 & 0.7–1.2(16) & 2.8(13) & 250–430 & T & 2.4 & 3
HD154368 & 0.6–1.5(16) & 4.6(13) & 130–330 & T & 2.5 & 3
HD169454 & 0.55–1.8(16) & 5.6(13) & 100–320 & T & 3.3 & 3
$o$ Per & 1.1(15) & 1.8(13) & 60 & D & 0.9 & 4
HD27778 & 2.5(16) & 3.0(13) & 830 & T & 1.2 & 4
$\rho$ Oph A & 1.9(15) & 2.1(13) & 90 & T & 1.4 & 4,7
$\zeta$ Oph & 2.3(15) & $1.79 \pm 0.06$(13) & 130 & D/T & 0.96 & 4
20 Aql & 3(15) & 4.2(13) & 71 & D/T & 0.99 & 4
HD207198 & 2.6(15) & 2.4(13) & 110 & T & 1.9 & 4
HD21483 & 1.0(18) & 7.4(13) & 100 - $10^5$ & T & 1.7 & 4
$\zeta$ Per & 1.2(15) & 2.8(13) & 43 & D/T & 1.0 & 4
X Per & 5.0(15) & 4.2(13) & 120 & T & 1.4 & 4
HD 26571 & 6.0(16) & 8.8(13) & 680 & D/T & 0.9 & 4
AE Aur & 1.3(15) & 4.6(13) & 28 & T & 1.6 & 4
HD110432 & 1.0(15) & 2.4(13) & 42 & T & 1.2 & 4
$\pi$ Sco & 1.0(12) & $<1.0(12)$ & $>1$ & D & 0.19 & 5
$\beta^1$ Sco & 1.2(13) & $<1.0(12)$ & $>12$ & D & 0.62 & 5
$\omega^1$ Sco & 4.0(13) & $<2.0(12)$ & $>23$ & D & 0.68 & 5
$\chi$ Oph & 3.0(14) & 2.8(13) & 11 & T & 1.2 & 4
9 Cep & 1.7(13) & 1.0(13) & 2 & D & 1.5 & 4
HD 210121 & 3.0(15) & 5.2(13) & 58 & T (High Lat.) & $\sim 1$ & 6
$\lambda$ Cep & 1.4(15) & 1.4(13) & 100 & D & 1.8 & 4
---
author:
- 'F. Jiménez Forteza'
- 'J. Betancort Rijo'
date: 'Received \*, \*\*; accepted \*, \*\*'
title: Some remarks on the old problem of recombination
---

[In the seminal works of @Zeldovich1968 and @Peebles1968, a procedure was outlined to obtain the equation of evolution of the hydrogen fraction without an explicit use of the radiative transfer equation. This procedure is based, explicitly or implicitly, on the concept of escape probability and, using the Sobolev approximation for this problem [@Sobolev1960], has been used extensively since then in developing refined approximations.]{} [To derive in a simple, rigorous and general manner the above-mentioned procedure, and to obtain exact analytical expressions for the spectral density of radiation generated in one-photon and two-photon recombination transitions. These expressions are used to estimate the implications of several interesting effects.]{} [Some slight re-elaborations of basic principles of transport theory.]{} [We have obtained the expressions searched for and used them in several explicit computations. We have found that the relative change in the electronic fraction due to the absorption by the two-photon line, $1s\rightarrow 2s$, of photons that have escaped from the line $2p\rightarrow 1s$ is $0.67\%$, a result $12\%$ higher than the previous ones obtained using some approximations [@Kholupenko2006; @Hirata2008]. The photons generated by the transition $2s\rightarrow 1s$ and later absorbed by the same transition (in combination with a photon more energetic than its original partner) imply a $0.05\%$ maximum variation of the electronic fraction. This problem has not been treated analytically before, although a numerical estimate has recently been carried out in @Chluba2011.]{}

Introduction.
=============

The initial aim of this work was to give an account of the basic recombination picture, through the escape of photons generated by hydrogen transitions, in as rigorous and simplified a manner as possible. To this end, we have computed the hydrogen formation rate (in the ground state) due to the escape of Lyman $\alpha$ photons by means of the escape probability. This has been done by a number of authors [@Zeldovich1968; @Peebles1968; @Seager2000] in slightly different presentations using the Sobolev escape probability. To deal with the hydrogen destruction through absorption of CMB photons, we advanced an expression for the net hydrogen generation rate, which is simply equal to the generation rate through escaping Lyman $\alpha$ photons times one minus the ratio of the actual hydrogen number density to that given by equilibrium. The expression satisfies the necessary requirement that in equilibrium the net rate is zero, but it is not easy to prove that this expression is exact in intermediate cases, where equilibrium does not hold but where the correction factor still differs substantially from one.

#### {#section .unnumbered}

It could seem that the expression is obviously true: if all Lyman $\alpha$ photons escaped, the net hydrogen generation rate would be equal to the difference between the number of transitions $2p\rightarrow1s$ and $1s\rightarrow2p$ per unit time and unit volume; if the escape probability is smaller than one, the net rate should be modulated by this probability. However, on closer scrutiny, this turns out to be not obvious at all. In particular, the meaning of the escape probability multiplying the hydrogen destruction rate is not transparent. Fortunately, we have found a way to frame the problem that allows that expression to be proven in a rigorous and simple manner. In fact, in @Peebles1968 a derivation for this is provided, but it is not as simple and transparent as the one given here.
On top of that, we have found that our approach makes possible an exact treatment of the problem by means of a single differential equation, even when lines with frequencies higher than Lyman $\alpha$ are considered. Carrying out an exact computation with our procedure is beyond the scope of this work; our aim here is to illustrate the relevance of the correcting terms with respect to similar but not exact treatments [@Seager2000]: we consider in detail examples of two processes. One of the processes is the absorption in the Lyman $\alpha$ line of photons generated by transitions leading to more energetic photons, which are then redshifted into the Lyman $\alpha$. The other processes concern the two-photon transition $(2s\rightarrow 1s)$, which, as we have noted, cannot properly be treated as a single line, as is usually done in the standard scenario. When properly treated, stimulated emission plays a non-negligible role. We have found that this point has also been treated by @Chluba2006, [@Kholupenko2006] and @Hirata2008. The other two-photon processes that we consider are: the absorption by the line $2s\rightarrow1s$ of a photon generated by the transition $2p\rightarrow1s$ [@Kholupenko2006; @Hirata2008] with frequency $\nu_\alpha$ and redshifted to $\nu$, combined with a CMB photon with frequency $\nu'$ such that $\nu+\nu'=\nu_\alpha$; and transitions $1s\rightarrow 2s$ generated by one of the two photons emitted in an earlier $2s\rightarrow 1s$ transition, which escaped from that line through redshifting, combined with a CMB photon so that the sum of the frequencies is equal to $\nu_\alpha$.

Rigorous derivation of the basic equation.
==========================================

If all the hydrogen is formed through a Lyman $\alpha$ transition, it is clear that the time derivative of the hydrogen comoving density in the ground state is given by the following effective equation: $$\begin{aligned} \frac{dn_{H_{1s}}}{dt}&=p_{2p,1s}n_{H_{2p}}A_{2p,1s}\left(1+ (e^{\frac{h \nu_\alpha}{k T}}-1)^{-1}\right) \\ & -(1-e^{-\tau_{2p,1s}})H(z)\nu_{\alpha}{\frac{a^3(t) B(\nu_{\alpha})}{h \nu_\alpha } } \end{aligned} \label{nsrate}$$ where $n_{H_{2p}}$ is also comoving, $\frac{a^3(t) B(\nu_\alpha)}{h \nu_\alpha}$ is the comoving density of photons in a unit frequency interval, $a(t)$ is the scale factor, $p_{2p,1s}$ is the escape probability, and where we have taken into account the spontaneous and stimulated emission. The rationale for this expression is that a hydrogen atom is created for every photon escaping from the line towards smaller frequencies (the first term on the right in equation ), while a hydrogen atom is destroyed for every photon redshifted onto the line. The number of transitions $1s\rightarrow2p$ per unit time and frequency interval is given by the number of photons being redshifted into the line per unit of time and volume: $$\left(H(z)\nu_\alpha\right) \left(\frac{B(\nu_\alpha,T)}{h\nu_\alpha}\right) \label{eq:bb}$$ multiplied by the probability for a photon to be absorbed while crossing the entire line (from the blue side to the red one), which, in terms of the optical depth $\tau$, can be written in the form: $$1-e^{-\tau_{2p,1s}}$$ Notice that expression is simply the product of the photons’ “speed” in $\nu$-space (first parenthesis) by the $\nu$-space photon number density per unit volume (second parenthesis). We have written expression in terms of the general concepts denoted by $p$ and $\tau$, so that it can account for the fact that the emission and absorption profiles are not identical [@Chluba2009].
To compute $\tau$ one must use the real absorption profile (excluding coherent scattering), so that $$1-e^{-\tau}$$ is the “kill probability”. On computing $p$ one must take into account that when an “absorbed” photon (including coherent scattering) is “reemitted”, there is a superposition of real absorption and emission, with a frequency change distribution corresponding to the intrinsic width of the line plus the thermal widening, and of coherent scattering, for which the widening is essentially thermal. So, the “absorption” and “emission” profiles are not equal, although for real absorption and coherent scattering separately they are. Furthermore, two-photon transitions from the $2p$ state to higher levels also contribute to widening the emission profile; this turns out to be the main effect producing the asymmetry between emission and absorption profiles [@Chluba2009_2]. In what follows, however, we shall not consider this fact, which is negligible as long as the thermal width is much larger than the intrinsic one, and we use the Sobolev expressions [@Seager2000] for $p_{(Sob)}$ and $\tau_{(Sob)}$, where the subscript $(Sob)$ refers to Sobolev values for probability and opacity. From this point on, we use $p\equiv p_{(Sob)}$ and $\tau\equiv\tau_{(Sob)}$ for simplicity of notation.

#### {#section-1 .unnumbered}

Let us comment on the relationship between expression and the equation for the evolution of the number density of hydrogen (in the ground state) in a straightforward approach: $$\label{eq:nsrate2} \frac{dn_{H_{1s}}}{dt}=n_{H_{2p}}(A_{2p,1s}+B_{2p,1s}J(\nu_\alpha,T))- n_{H_{1s}} B_{1s,2p} J(\nu_\alpha,T)$$ where $n_{H_{1s}}$, $n_{H_{2p}}$ are the comoving number densities and $J(\nu,T)$ is the spectral energy density of radiation, which is made up of the initial black-body radiation and that coming from recombination lines, and whose evolution is given by the corresponding radiative transfer equation.
#### {#section-2 .unnumbered}

In expression we make a different grouping of the terms than in . In we divide all photons in the line $\nu_\alpha$ into two categories: those generated by previous $2p\rightarrow1s$ transitions, and those generated much earlier (when most CMB photons were last generated, before starting a merely passive evolution) and passively redshifted into $\nu_\alpha$. The transitions involving the first category are accounted for by the first term in , while those in the second are accounted for by the second term. To compute this last term, it is essential to assume that the new photons getting trapped within the line (those already trapped are already included in the first term) are only those in the preexisting (before recombination) black-body CMB (those due to higher frequency recombination lines will be considered later) being redshifted into the line. If the CMB were not evolving passively but there were processes generating new photons, an additional term should be included in . After some algebra, equation may be written in the form: $$\label{eq:nsrate3} \begin{aligned} \frac{dn_{H_{1s}}}{dt}&=\frac{1-e^{-\tau_{2p,1s}}}{\tau_{2p,1s}}(n_{H_{2p}}A_{2p,1s}\left(1+(e^{\frac{h \nu_\alpha}{k T}}-1)^{-1}\right)\\ &-n_{H_{1s}}B_{1s,2p}B(\nu_\alpha)) \end{aligned}$$ where $n_{H_{1s}}$, $n_{H_{2p}}$ are comoving densities. If neutral hydrogen were formed only through the two-photon transition $2s\rightarrow1s$, an equation similar to would hold, inasmuch as the transition can be described by a one-photon model. The same holds for any other line if it were the only one generating neutral hydrogen. When all lines contribute simultaneously, the equation of evolution for $n_{H_{1s}}$, in a first approximation, has on the right-hand side a sum of terms like that in (one for each line).
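The escape factor $(1-e^{-\tau})/\tau$ appearing above has the familiar Sobolev limits; a minimal numerical sketch (the series branch guards against floating-point cancellation at small $\tau$):

```python
import math

def sobolev_escape(tau):
    """Escape probability p = (1 - exp(-tau)) / tau."""
    if tau < 1e-8:
        return 1.0 - 0.5 * tau          # Taylor expansion for small tau
    return -math.expm1(-tau) / tau      # expm1 avoids cancellation

# Optically thin limit: p -> 1; optically thick limit: p -> 1/tau
print(sobolev_escape(1e-12))        # ~1.0
print(sobolev_escape(1e6) * 1e6)    # ~1.0
```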
$$\label{eq:n1s4} \begin{split} \frac{dn_{H_{1s}}}{dt}&=\sum_i \bigg(p_{i,1s}n_{H_{i}}A_{i,1s}\left(1+ (e^{\frac{h \nu_i}{k T}}-1)^{-1}\right)\\ & -p_{i,1s}n_{H_{1s}}B_{1s,i}B(\nu_i)\bigg) \end{split}$$ where for formal simplicity in the forthcoming equations we use the following net rate definition: $$C_i \equiv p_{i,1s}\left(n_{H_{i}}A_{i,1s}\left(1+(e^{\frac{h \nu_i}{k T}}-1)^{-1}\right)-n_{H_{1s}}B_{1s,i}B(\nu_i)\right)$$ To gauge the relevance of the various terms in our formally exact formalism, we use an evolution equation for $n_{H_{1s}}$ with just the transitions $2p\rightarrow1s$ and $2s\rightarrow1s$: $$\begin{aligned} \frac{dn_{H_{1s}}}{dt}&= p_{2p,1s}n_{H_{2p}}A_{2p,1s}\left(1+(e^{\frac{h \nu_\alpha}{k T}}-1)^{-1}\right)\\ &-p_{2p,1s}n_{H_{1s}}B_{1s,2p}B(\nu_\alpha)\\ &+n_{H_{2s}}A_{2s,1s}\left(1+(e^{\frac{h \nu_\alpha}{k T}}-1)^{-1}\right)\\ &-n_{H_{1s}}B_{1s,2s}B(\nu_\alpha)\\ \end{aligned} \label{eq:nsrate4}$$ The parentheses multiplying the positive terms include a term corresponding to stimulated emission. This inclusion is merely formal because, in practice, stimulated emission is completely negligible for one-photon lines, but it will be of some relevance when the rigorous treatment of the two-photon transition is carried out. Notice that we have set $p_{2s,1s}\equiv1$ for the transition $1s\rightarrow2s$. This was done by @Peebles1968, based on the fact that a single escaped photon cannot by itself generate a transition $1s\rightarrow2s$. In a forthcoming work we will deal with the details of this problem and show that, although Peebles’ assumption is not exactly true, it is in practice perfectly valid. As long as other lines are not considered, the emission and absorption profiles are taken to be equal, and two-photon effects are neglected, expression is the right equation to use, because the photons escaping from either of the two transitions cannot generate the other transition.
![image](./eps/fig1.eps){width="1.3\columnwidth"}

#### {#section-3 .unnumbered}

In Figure \[fig:fig1\] we compare our results, obtained integrating equation (assuming that all but the fundamental level are in thermal equilibrium and a common temperature for matter and radiation), with the results of the RECFAST code [@Seager2000]. The differences in the evolution come from two causes: first, the general separation between the two curves (beginning at high redshift) is due to the detailed study done in RECFAST, where a 300-level atom is represented by adjusting the fudge factor in the hydrogen equation; and second, our hypothesis of equilibrium in the excited states increases the separation at low redshift. The maximum of the visibility function, i.e., the redshift at which recombination occurs, is at $z_{ls} = 1081$.

Absorption of “escaped photons” at lower-frequency resonances.
================================================================

Expression assumes that the photons being redshifted into the lines are only those preexisting in the Planckian background and not remnants from previous recombinations. If such photons were present, they could only come from transitions with frequency larger than $\nu_\alpha$.

#### {#section-4 .unnumbered}

This problem has been exhaustively studied in [@Chluba2007]. Here we present a simple and rigorous procedure to deal with it.

#### {#section-5 .unnumbered}

Due to the strongly suppressing Boltzmann factor, all levels with $n>2$ have negligible occupation numbers ($n=2$ itself is negligible with respect to $n_{H_{1s}}$, but, being much more abundant than the higher levels, it is the dominant channel for generating $n_{H_{1s}}$); therefore, the dominant process generating photons with frequency larger than $\nu_\alpha$ is recombination to the fundamental state.
However, this results in photons whose frequency exceeds $\nu_\alpha$ by a non-negligible amount (recombination to the fundamental state yields photons above the Lyman limit, corresponding to an energy of $13.59 \; eV$), and, therefore, it takes a long time before they are redshifted below $\nu_\alpha$. Thus, even though the cross section for the absorption of these photons by $n_{H_{1s}}$ is not resonant, almost all of them will be absorbed. The generation of photons with $\nu>\nu_\alpha$ is very small for all the potential processes, and it is not clear without detailed computations which one dominates. However, for completeness, we give here the modification of equation needed when only one extra line (other than $2p\rightarrow1s$ or $2s\rightarrow1s$) is considered. The generalization to the case of more than one line is obvious. The extra terms to be included in expression when there is just one line with $\nu_i>\nu_\alpha$ are: $$C_i(z) -(1-e^{-\tau_{2p,1s}})H(z)\nu_{\alpha}{\frac{C_i(z_i)}{\nu_i H(z_i)}\frac{1+z_i}{1+z} } \\ \label{eq:nsrate6}$$ $$1+z_i=(1+z)\frac{\nu_i}{\nu_\alpha}$$ where $\tau_{2p,1s}$ is as defined in equation and $C_i(z)$ is the term corresponding to the $i$-th line in expression . The first term is simply that corresponding to the line $i$ in expression (here we have assumed a transition to the fundamental state, but this is immaterial); the last term corresponds to those photons, generated in previous transitions (line $i$), which have not been included in the negative terms in . To obtain those terms we assumed that only black-body photons were being redshifted into $\nu_\alpha$; now we are including (in subtractive mode) those photons generated at $\nu_i$ and redshifted into $\nu_\alpha$. Note that in the second term in we have assumed that the photons being redshifted onto $\nu_i$ are merely black-body. If lines with $\nu>\nu_i$ were relevant, correcting terms should be added, standing in the same relationship to this expression as it stands to the original equation.
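A small numerical sketch of the factor $H(z)/H(z_i)$ that multiplies $C_i$ for photons emitted at $\nu_i$ and redshifted onto $\nu_\alpha$. It assumes a matter-dominated era, $H \propto (1+z)^{3/2}$, which is a reasonable approximation around recombination (the radiation contribution is ignored); the Lyman $\beta$ example is illustrative only:

```python
# From 1 + z_i = (1 + z) * nu_i / nu_alpha and H ∝ (1+z)^{3/2} (assumed
# matter domination), the ratio H(z)/H(z_i) depends only on the frequencies.

def hubble_ratio(nu_ratio):
    """H(z)/H(z_i) for nu_ratio = nu_i / nu_alpha > 1."""
    return nu_ratio ** -1.5

# Example: a line at the Lyman-beta frequency,
# nu_beta / nu_alpha = (1 - 1/9) / (1 - 1/4) = 32/27
print(round(hubble_ratio(32.0 / 27.0), 3))  # 0.775
```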
#### {#section-6 .unnumbered}

In , $z_i$ is the redshift at which a photon with frequency $\nu_i$ was generated, such that at redshift $z$ it has been redshifted to $\nu_\alpha$. The reason for the last term in is as follows: $C_i(z_i)$ photons (with frequency $\nu_i$) are generated per unit time in a unit comoving volume. Due to the expansion of the universe, the frequency of the photons generated at the beginning of the unit of time has redshifted by an amount $\Delta\nu=-\nu_i H(z_i)$ during that unit of time. Therefore, those photons are spread over an interval in frequency $\mid\Delta\nu\mid$, and their spectral density (per unit of comoving volume) is $C_i(z_i)/(\nu_i H(z_i))$. Now, the rate at which these photons cross the line $\nu_\alpha$ is equal to the spectral density at $\nu_\alpha$ at the time that $\nu_i$ has been redshifted to $\nu_\alpha$ (i.e., at redshift $z$), multiplied by the “velocity” of the photons in $\nu$-space (i.e., $\nu_\alpha H(z)$). But the spectral density at that time is equal to that at generation multiplied by $(1+z_i)/(1+z)$, because the interval $\Delta\nu$ has decreased by the factor $(1+z_i)/(1+z)$. It only remains to multiply this flux per unit time and unit comoving volume by the factor $(1- e^{-\tau_{2p,1s}})$ to obtain the rate at which $n_{H_{1s}}$ is being destroyed by photons generated at $\nu_i$. Noting that $\nu_\alpha/\nu_i=(1+z)/(1+z_i)$, the last term in simplifies to: $$\label{eq:n1srate7} (1-e^{-\tau_{2p,1s}})\frac{H(z)}{H(z_i)}C_i$$ Using expressions and and its generalization, $n_{H_{1s}}$ can be obtained exactly by integrating a single differential equation, as long as the occupation of the other levels is given by equilibrium and two-photon effects are neglected.

Proper treatment of the two-photon transition.
==============================================

In expression we have assumed that the transition $2s\rightarrow1s$ could be described by a one-photon model.
This means that to obtain the rate of transitions $2s\rightarrow1s$ and $1s\rightarrow2s$, expressions formally equal to those corresponding to a one-photon line with frequency $\nu_\alpha$ (with the standard relationship between absorption and emission coefficients) are used, although, when determining the escape probability for pairs of photons generated in $2s\rightarrow1s$ transitions, we have not used the one-photon model (which gives probabilities around $1/2$ in the relevant range of $z$ values), but the generally accepted value of one. In a forthcoming work we will rigorously justify that this value represents a very good approximation and deal in detail with some issues concerning the two-photon transition. Here we simply present the results that are of some relevance in determining the recombination history.

#### {#section-7 .unnumbered}

The first question that we consider is the change in equation implied by the correct treatment of the two-photon transition, keeping the assumption that the background photons are only those of a passively evolving primordial Planckian. The transition $2s\rightarrow1s$ is characterized by the probability per unit time and unit frequency interval $A_{2s,1s}(\nu/\nu_\alpha)$: $$\label{eq:A2} \frac{1}{2}\int_0^{\nu_\alpha}A_{2s,1s}(\frac{\nu}{\nu_\alpha})d\nu=A_{2s,1s}$$ A sufficient approximation for $A_{2s,1s}(\nu/\nu_\alpha)$ is given in @Nussbaumer1984: $$\label{eq:A22} A_{2s,1s}(\frac{\nu}{\nu_{\alpha}})=\frac{C}{\nu_{\alpha}}\left(w\left(1-(4w)^{\gamma}\right) +\alpha w^{\beta}(4w)^{\gamma}\right)$$ where $C=201.96\;s^{-1}$, $\alpha=0.88$, $\beta=1.53$, $\gamma=0.8$ and $w=\frac{\nu}{\nu_{\alpha}}(1-\nu/\nu_{\alpha})$. The factor $1/2$ accounts for the fact that two photons are emitted in each transition.
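As a quick numerical check, integrating the Nussbaumer & Schmutz (1984) fitting formula, $w(1-(4w)^\gamma)+\alpha w^\beta(4w)^\gamma$ with $w=(\nu/\nu_\alpha)(1-\nu/\nu_\alpha)$ and the constants quoted above, the normalization $(1/2)\int_0^{\nu_\alpha}A_{2s,1s}(\nu/\nu_\alpha)\,d\nu$ recovers the total two-photon rate $A_{2s,1s}\approx 8.22\;\mathrm{s^{-1}}$ (a sketch in units where $\nu_\alpha = 1$):

```python
C, alpha, beta, gamma = 201.96, 0.88, 1.53, 0.8

def A_profile(y):
    """Nussbaumer & Schmutz (1984) fit; y = nu / nu_alpha, with nu_alpha = 1."""
    w = y * (1.0 - y)
    return C * (w * (1.0 - (4.0 * w) ** gamma) + alpha * w ** beta * (4.0 * w) ** gamma)

# Midpoint-rule integration over (0, 1); the factor 1/2 counts each photon pair once.
n = 100000
total = sum(A_profile((k + 0.5) / n) for k in range(n)) / n
print(round(0.5 * total, 2))  # ~8.22 s^-1
```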
The absorption coefficient $B_{1s,2s}(\nu/\nu_\alpha)$ is given by: $$\label{eq:B2} B_{1s,2s}(\frac{\nu}{\nu_\alpha})=A_{2s,1s}(\frac{\nu}{\nu_\alpha})\bigg(\frac{8\pi h\nu^3}{c^3}\frac{8\pi h \nu'^3}{c^3}\bigg)^{-1}$$ $$N_{H_{2s}}=n_{H_{1s}}B_{1s,2s}(\frac{\nu}{\nu_\alpha})J(\nu)J(\nu')$$ where $\nu'\equiv \nu_\alpha-\nu$, $c$ is the speed of light and $N_{H_{2s}}$ is the number of absorptions per unit time, unit volume and unit frequency interval produced by a couple of photons with frequencies $\nu$ and $\nu'$ through the $1s\rightarrow2s$ line, while $J(\nu)$ and $J(\nu')$ are the energy densities per unit frequency interval. In terms of the occupation number $\phi(\nu)$: $$\label{eq:fnu} \phi(\nu)=\frac{J(\nu)}{h \nu}\left(\frac{c^3}{8\pi \nu^2}\right)$$ where the first factor is the number of photons per unit volume and unit frequency interval and the second is the inverse of the number of modes per unit volume and unit frequency interval. The same formal dependence is also valid for $J(\nu')$. So, the net generation of ground-state hydrogen $n_{H_{1s}}$ through the line $2s\rightarrow1s$ is given by the balance between the generation terms (spontaneous and stimulated emission) and the destruction term: $$\label{eq:n2s} \begin{aligned} &n_{H_{2s}}\frac{1}{2}\int_0^{\nu_\alpha}A_{2s,1s}(\frac{\nu}{\nu_\alpha})(1+\phi(\nu))(1+\phi(\nu'))d\nu \\ & - n_{H_{1s}}\frac{1}{2}\int_0^{\nu_\alpha}A_{2s,1s}(\frac{\nu}{\nu_\alpha})\phi(\nu)\phi(\nu')d\nu \end{aligned}$$ It is clear that the contribution of the spontaneous and stimulated emission is in the first integral, while the absorption appears in the second one. In fact, the contribution of the “one” in the cross products of the first integral is exactly the one-photon term defined earlier, whereas the other terms are the stimulated-emission contribution.
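The relation between energy density and occupation number can be checked explicitly: for a Planckian field $J(\nu)=B(\nu,T)$, the definition above must reduce $\phi(\nu)$ to the Bose-Einstein occupation $1/(e^{h\nu/k_BT}-1)$. A minimal sketch:

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m / s
kB = 1.380649e-23    # Boltzmann constant, J / K

def planck_energy_density(nu, T):
    """Blackbody energy density per unit volume and frequency, B(nu, T)."""
    return (8 * np.pi * h * nu**3 / c**3) / np.expm1(h * nu / (kB * T))

def occupation(J, nu):
    """phi(nu) = [J(nu)/(h nu)] * [c^3 / (8 pi nu^2)], as defined above."""
    return (J / (h * nu)) * (c**3 / (8 * np.pi * nu**2))

T = 3000.0                       # K, a typical recombination-era temperature
nu = np.logspace(13, 15.5, 50)   # Hz
phi = occupation(planck_energy_density(nu, T), nu)

# For a Planckian J(nu) the occupation reduces to Bose-Einstein form.
assert np.allclose(phi, 1.0 / np.expm1(h * nu / (kB * T)))
```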
Finally, substituting this net two-photon rate in place of the $2s\rightarrow1s$ rate treated as one effective line, we find the results plotted in Figure \[fig:fig2\], which show that recombination occurs faster than in the standard treatment due to the dominant effect of the stimulated emission, with a maximum relative difference of $-1.3\%$ in the electronic fraction $x_e$ around $z\sim 1100$, as in @Chluba2006. ![image](./eps/fig2.eps){width="1.3\columnwidth"} Absorption of escaped $2p\rightarrow1s$ photons. ================================================ In the conventional calculations, once a photon has escaped from the high-opacity line it can travel freely without any further Lyman $\alpha$ line interaction. However, the existence of the two-photon line opens a new channel to destroy ground-state hydrogen: the reabsorption of those “already escaped” photons from the $2p\rightarrow1s$ line, redshifted to a frequency $\nu$, in combination with either remnant CMB photons that evolve passively with the expansion, or other redshifted photons coming from the $2p\rightarrow1s$ line, such that $\nu +\nu' = \nu_\alpha$. As mentioned above, this term has been treated explicitly in @Kholupenko2006 with some approximations. In this work, we present a different derivation of this effect without any relevant simplifications. It corrects the standard calculations performed by reference codes such as RECFAST [@Seager2000] and by @Chluba2006, where other effects of the $2s\rightarrow1s$ transition have been accurately dealt with. #### {#section-8 .unnumbered} This excess of Lyman $\alpha$ photons modifies the radiation field profile (initially a black body) as follows: $$\label{eq:j} J(\nu) = B(\nu) + F_{2p}(\nu)$$ where $F_{2p}(\nu)$ represents that excess of Lyman $\alpha$ photons, i.e., the net number of photons per unit volume and frequency interval escaped from the $2p\rightarrow1s$ line that have been redshifted to a frequency $\nu$.
In other words, we follow the history of a photon emitted at redshift $z'$ with frequency $\nu_\alpha$ that is absorbed through the $1s\rightarrow2s$ transition at redshift $z$, together with a photon of frequency $\nu'$. With the considerations given in section \[3\], we have for $F_{2p}(\nu)$: $$\label{eq:netrate} F_{2p}(\nu)=h \nu \frac{C_{2p,1s}(z')}{a^3(t)\nu H(z')}$$ $$\frac{\nu}{\nu_\alpha}=\frac{1+z}{1+z'}$$ $$z'\geq z$$ where $C_{2p,1s}(z')$ is the term corresponding to $2p\rightarrow1s$ in the net-rate equation above, which gives the net number of transitions per unit comoving volume and time at redshift $z'$, and $a^3(t)$ converts it to physical units. Taking the above into account, this corrective term takes the form: #### {#section-9 .unnumbered} $$\begin{aligned} \frac{d\Delta n^{(2p)}_{H_{1s}}}{dt}& = n_{H_{1s}}(z) \frac{1}{2}\int_0^{\nu_\alpha}\bigg(B_{1s,2s}(\frac{\nu}{\nu_\alpha})J(\nu)J(\nu') \\ &- B_{1s,2s}(\frac{\nu}{\nu_\alpha})B(\nu,T)B(\nu',T)\bigg)d\nu \label{eq:rate2spert} \end{aligned}$$ #### {#section-10 .unnumbered} where the superscript $2p$ indicates that the variation $\Delta n_{H_{1s}}$ is produced by the reabsorption of redshifted Lyman $\alpha$ photons. The rationale for the negative term is that this contribution has already been accounted for in the equilibrium treatment above. We show our results in Figure \[fig:fig3\], where we find differences as large as $\Delta x_e/x_e = 2\%$ at $z\sim 1000$ and a shift of one unit in $z_{ls}=1080$. In Figure \[fig:fig5\] we show the total fractional difference obtained when the two $2s\rightarrow 1s$ corrections that we have dealt with are included together, in order to compare with the results obtained by other authors [@Kholupenko2006; @Hirata2008]. We find a total fractional variation of $0.67\%$ at $z\sim 900$, which differs by $12\%$ from the $0.6\%$ presented in the aforementioned papers.
![[]{data-label="fig:fig3"}](./eps/fig3.eps){width="\columnwidth"} ![[]{data-label="fig:fig5"}](./eps/fig5.eps){width="\columnwidth"} Absorption of escaped $2s\rightarrow1s$ photons. ================================================ For the remnant $2s\rightarrow1s$ photons, i.e., those escaped from the two-photon line that increase the number density of photons with respect to the equilibrium distribution, we present an analysis analogous to that carried out in the previous section. We need to evaluate the probability that at $z'$ a two-photon transition occurs with a certain frequency distribution, track the photons until $z$, and evaluate the probability that each of them interacts, together with a complementary photon (such that the two frequencies sum to $\nu_\alpha$), with a hydrogen atom in the ground state $n_{H_{1s}}$. To evaluate the effect of these processes we use an equation analogous to the previous one. #### {#section-11 .unnumbered} $$\begin{aligned} \frac{d\Delta n^{(2s)}_{H_{1s}}}{dt}&= n_{H_{1s}}(z) \frac{1}{2}\int_0^{\nu_\alpha}B_{1s,2s}(\frac{\nu}{\nu_\alpha}) \bigg( \big(B(\nu,T)+F_{2s}(\nu)\big)\big(B(\nu',T)+F_{2s}(\nu')\big) \\ &- B(\nu,T)B(\nu',T)\bigg)d\nu \end{aligned} \label{eq:rate2s_pertdob}$$ where $\Delta n^{(2s)}_{H_{1s}}$ is the change implied for the comoving density of ground-state ($1s$) hydrogen by the present processes. #### {#section-12 .unnumbered} Now, we cannot obtain $F_{2s}(\nu)$ in the simple manner used above, which corresponds to a one-photon transition.
But we can use that expression to obtain the number of couples of photons with global frequency $\bar{\nu}$ (the sum of the two redshifted frequencies) per unit volume and unit global-frequency interval, which we denote by $g(\bar{\nu})$: $$\label{eq:g} g(\bar{\nu})=h \bar{\nu} \frac{C_{2s,1s}(z')}{a^3(t)\bar{\nu} H(z')}$$ $$z'=\frac{1+z}{\frac{\bar{\nu}}{\nu_{\alpha}}}-1$$ $$z'\geq z$$ Let us denote by $G_{2s}(\nu)$ the spectral density of photons corresponding to $F_{2s}(\nu)$: $$G_{2s}(\nu)\equiv \frac{F_{2s}(\nu)}{h \nu}$$ This quantity can be related to $g(\bar{\nu})$ through the following expression: $$\label{eq:gint} \int^{\infty}_{\nu}G_{2s}(\nu')d\nu'=\int^{\nu_{\alpha}}_{\nu} g(\bar{\nu}) \left(P_{1}(\nu \mid \bar{\nu})+P_{2}(\nu \mid \bar{\nu})\right)d\bar{\nu}$$ where $P_{1}(\nu \mid \bar{\nu})$ is the probability that a couple of photons with global frequency $\bar{\nu}$ contains a photon with frequency larger than $\nu$, whereas $P_{2}(\nu \mid \bar{\nu})$ is the probability that it contains two. #### {#section-13 .unnumbered} This relationship expresses the fact that the comoving density of the relevant photons (those coming from $2s\rightarrow1s$ transitions) with frequency larger than $\nu$ at redshift $z$ (the dependence on $z$ is implicit everywhere) is obtained by weighting the comoving density of couples of global frequency $\bar{\nu}$ by the probability that a couple contains a photon with frequency above $\nu$, with any couple containing two such photons contributing an extra photon.
So, for $P_1$ and $P_2$ we have: $$P_{1}(\nu \mid \bar{\nu}) = \frac{\int^{\nu_\alpha}_\nu A_{2s,1s}(\frac{\nu'}{\bar{\nu}}) \frac{d\nu'}{\bar{\nu}} } {\frac{A_{2s,1s}}{\nu_\alpha}}$$ $$P_{2}(\nu \mid \bar{\nu}) = \frac{\int^{\bar{\nu}-\nu}_{\nu/2} A_{2s,1s}(\frac{\nu'}{\bar{\nu}}) \frac{d\nu'}{\bar{\nu}} } {\frac{A_{2s,1s}}{\nu_\alpha}}$$ These expressions follow immediately by obtaining, from the profile equations above, the probability distribution of the ratio $\nu/\nu_\alpha$ for the “largest” photon in the couple ($\frac{\nu}{\nu_\alpha} \in [\frac{1}{2},1]$) and noting that $\nu'/\bar{\nu}$ follows the same distribution, since all frequencies have been redshifted by the same factor. #### {#section-14 .unnumbered} Differentiating with respect to $\nu$, one readily obtains an explicit expression for $G_{2s}(\nu)$: $$G_{2s}(\nu) = \frac{d}{d\nu}\int^{\nu_{\alpha}}_\nu g(\bar{\nu})P_{1}(\nu \mid \bar{\nu})d\bar{\nu} = \frac{\int^{\nu_\alpha}_{\nu} g(\bar{\nu}) A_{2s,1s}(\frac{\nu}{\bar{\nu}}) \frac{d\bar{\nu}}{\bar{\nu}} } {\frac{A_{2s,1s}}{\nu_\alpha}}$$ If we compare the relative fractional variation $\Delta x_e/x_e$ with the values of $x_e$ obtained previously, we find a maximum difference of $0.05\%$ at $z\sim 1025$, as can be seen in Figure \[fig:fig4\]. This effect has not previously been treated explicitly and separately in analytic form, for lack of an appropriate formalism; it has, however, been treated numerically in @Chluba2011. ![[]{data-label="fig:fig4"}](./eps/fig4.eps){width="\columnwidth"} Conclusions. ============ We have studied the recombination process with a somewhat different approach, showing how several particularly interesting small effects can be dealt with in a simple manner within it.
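The meaning of $P_1$ and $P_2$ can be illustrated with a small Monte Carlo sketch. The profile used below, $p(x)\propto x(1-x)$, is only a symmetric stand-in for the true two-photon distribution (an assumption made purely for brevity); the checks rely only on the facts that the two photons of a couple satisfy $x + x' = 1$ in units of $\bar{\nu}$, and that $P_1 + P_2$ counts the mean number of photons of a couple above a given frequency:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_x(n):
    """Rejection-sample x from the toy profile p(x) ∝ x(1-x) on [0, 1]."""
    out = []
    while len(out) < n:
        x = rng.random(n)
        keep = rng.random(n) < 4.0 * x * (1.0 - x)  # 4 x (1-x) <= 1
        out.extend(x[keep].tolist())
    return np.array(out[:n])

x = sample_x(200000)                       # one photon of each couple
lo, hi = np.minimum(x, 1 - x), np.maximum(x, 1 - x)

def P1(t):  # couple contains at least one photon with frequency > t
    return np.mean(hi > t)

def P2(t):  # couple contains two photons with frequency > t
    return np.mean(lo > t)

# Every couple contains a photon above t = 0 -- indeed two of them.
assert P1(0.0) == 1.0 and P2(0.0) == 1.0
# Both photons cannot exceed nu_bar / 2 simultaneously.
assert P2(0.5) == 0.0
# P1 + P2 equals the mean number of photons of a couple above t,
# which is exactly what weights g(nu_bar) in the integral relation above.
t = 0.7
count = np.mean((x > t).astype(float) + (1.0 - x > t))
assert abs(P1(t) + P2(t) - count) < 1e-12
```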
From a conceptual point of view our main goal has been the simplicity of the approach and the rigour of the corresponding formalism, while from a pragmatic point of view we have concentrated on assessing the implications of some outstanding effects, one of which had not previously been treated, using that formalism and an “unperturbed” recombination sequence that is basically that given by Peebles (1968). It has not been our goal to carry out highly accurate computations, but to point out simplifications that could be implemented, or effects that could be included, in the existing accurate codes. - [We have developed a formally exact approach to the recombination problem without explicit use of the radiative transfer equation. Our simple and rigorous framing of the problem hinges on two points: first, using the rate equation above for the recombinations associated with each relevant transition, under the assumption that the background radiation is purely Planckian; and, secondly, a procedure for computing the spectral energy density of the radiation generated at transitions, whose implementation leads to the expressions given above. Using our approach we have shown how to account exactly for the absorption at resonances of photons that have escaped from higher-frequency lines.]{} #### {#section-15 .unnumbered} - [We have made an appropriate treatment of the $2s\rightarrow 1s$ line, avoiding its treatment as one effective Lyman $\alpha$ line, as is commonly done in recombination studies. We have found that the stimulated emission, which affects the small fraction of photons generated at this transition with frequencies much smaller than $\nu_{\alpha}$, accelerates recombination, yielding a maximum relative difference for the electronic fraction of $-1.3\%$.
This result is in good agreement with previous ones [@Chluba2006; @Kholupenko2006; @Hirata2008].]{} #### {#section-16 .unnumbered} - [We have treated explicitly and separately the reabsorption of photons which have escaped from the $2p\rightarrow 1s$ line through the transition $1s\rightarrow 2s$, conjointly with another photon. Accounting for this effect implies a maximum difference of $2\%$ in the electronic fraction at $z\sim 1000$. Adding the effect treated in the previous point to the present one, we find a maximum difference of $0.67\%$. Previously, a maximum difference of roughly $0.6\%$ in the electronic fraction had been found using some approximations [@Kholupenko2006; @Hirata2008].]{} #### {#section-17 .unnumbered} - [We have carried out the analogous study for the photons generated in $2s\rightarrow 1s$ transitions which are later absorbed at this transition in combination with a photon different from their original partner. Accounting for this effect leads to a maximum difference of $0.05\%$ in the electronic fraction at $z\sim 1025$, compared with the correction mentioned in the first point of these conclusions. This result can be compared with [@Chluba2011]. Although this effect is included in the codes which integrate the radiative transfer equation and make an appropriate treatment of the two-photon lines, no explicit and separate study of it seems to have been done previously. It is worth commenting on the fact that the effect discussed in this point is so much smaller than that discussed in the previous one. The net rate of $2s\rightarrow 1s$ transitions is roughly one and a half times that of $2p\rightarrow1s$, and the number of photons generated by the former is around three times the number generated by the latter (two photons are generated in each $2s\rightarrow 1s$ transition).
But the photons generated in a $2s\rightarrow1s$ transition have to combine with a primordial photon to be reabsorbed, and the occupation number of the primordial radiation is non-negligible only for frequencies much smaller than $\nu_\alpha$. Thus, the photons under consideration can be absorbed through the transition $1s\rightarrow2s$ only when their frequencies are very close to $\nu_\alpha$. The probability for the photons generated at the transition $2s\rightarrow 1s$ to be within this narrow range of frequencies is small (around $0.05\%$), which explains the small relevance of the effect.]{} #### {#section-18 .unnumbered} The points mentioned above illustrate the potential of our approach to provide an accurate and relatively simple evaluation of various effects that might be relevant to the recombination process. Our approach can handle all the physical details that could be relevant to the process; for example, it can be used to account exactly for the fact that the emission and absorption profiles are not equal [@Chluba2009], or even for the effect of Raman scattering [@Hirata2008]. We acknowledge the use of the RECFAST software package. We want to thank Jose Alberto Rubiño for stimulating discussions and help on the topic. F.J. is grateful for the support of the European Union FEDER funds, the Spanish Ministry of Economy and Competitiveness (Project No. FPA2010-16495), the “Conselleria d’Educació, Cultura i Universitats” and the “Conselleria d’Economia i Competitivitat” of the Govern de les Illes Balears. J.B. is grateful for the support of the Spanish Ministry of Economy and Competitiveness (Project No. AYA2010-21231-C02-02). Chluba, J., & Sunyaev, R. A. 2006, A&A, 446, 39 Chluba, J., & Sunyaev, R. A. 2007, A&A, 475, 109 Chluba, J., & Sunyaev, R. A. 2010, A&A, 512, A53 Chluba, J., & Sunyaev, R. A. 2009, A&A, 496, 619 Chluba, J., & Thomas, R. M. 2011, MNRAS, 412, 748 Hirata, C. M. 2008, Phys. Rev. D, 78, 023001 Kholupenko, E. E., & Ivanchik, A. V. 2006, Astronomy Letters, 32, 795 Nussbaumer, H., & Schmutz, W. 1984, A&A, 138, 495 Peebles, P. J. E. 1968, ApJ, 153, 1 Rubiño-Martín, J. A., Chluba, J., & Sunyaev, R. A. 2008, A&A, 485, 377 Seager, S., Sasselov, D. D., & Scott, D. 2000, ApJS, 128, 407 Sobolev, V. V. 1960, Moving Envelopes of Stars (Cambridge: Harvard University Press) Zeldovich, Y. B., Kurt, V. G., & Sunyaev, R. A. 1968, ZhETF, 55, 278
--- abstract: 'We study the non-equilibrium steady state (NESS) of a driven dissipative one-dimensional system near a critical point, and explore how the quantum correlations compare to the known critical behavior in the ground state. The model we study corresponds to a cavity array driven parametrically at a two photon resonance, equivalent in a rotating frame to a transverse field anisotropic XY model \[C. E. Bardyn and A. Imamoğlu, Phys. Rev. Lett [**109**]{} 253606 (2012)\]. Depending on the sign of the transverse field, the steady state of the open system can be related either to the ground state or to the maximum energy state. In both cases, many properties of the entanglement are similar to the ground state, although no critical behavior occurs. As one varies from the Ising limit to the isotropic XY limit, the entanglement range grows. The isotropic limit of the NESS is however singular, with simultaneously diverging range and vanishing magnitude of entanglement. This singular limiting behavior is quite distinct from the ground state behavior; it can, however, be understood analytically within spin-wave theory.' author: - Chaitanya Joshi - Felix Nissen - Jonathan Keeling title: 'Quantum correlations in the 1-D driven dissipative transverse field XY model.' --- Introduction ============ A central feature of critical behavior in any non-mean-field phase transition is the existence of a diverging correlation length [@Sachdev2011; @kadanoff2000statistical]. Such divergences explain why universal theories, controlled only by symmetries of the problem, apply in the vicinity of a critical point. They also lead to scaling behavior [@kadanoff2000statistical] of correlation functions. More recently, it has been noted that measures of specifically *quantum* correlation, e.g. entanglement [@nielsen2010quantum], also show scaling behavior [@Osterloh2002; @Osborne2002; @Vidal2003a; @Amico2008].
Entanglement is one of the characteristic traits of quantum mechanics [@nielsen2010quantum] and is of practical significance as it captures quantum correlations which can be a resource for quantum cryptography, quantum teleportation, and dense coding [@Horodecki2009a]. Despite the diverging correlation length at critical points, entanglement generally has a finite range [@Osterloh2002; @Osborne2002; @Amico2008]; critical scaling is instead seen in the magnitude of the entanglement. In a dissipative system, coupling to an external environment [@Breuer2002] leads to dephasing, and consequent degradation of quantum correlations, ultimately reducing the system to a classical description [@Zurek2003; @LoFranco2013]. Nonetheless, in a coherently driven dissipative system, i.e. pumped by an external coherent drive, non-trivial steady states can be found [@Dimer2007; @Hartmann2010a; @Baumann2010; @Nagy2010; @Diehl2010; @Ferretti2010; @Lee2011a; @Marcos2012; @Murch2012; @Lee2012; @Grujic2012; @Torre2011; @Torre2012; @LeBoite2013a; @Leib2013; @Lee2013; @Genway2013; @Hu]. In an extended interacting driven dissipative system, such as an array of coupled nonlinear cavities as discussed below, this enables non-local quantum correlations to exist in the non-equilibrium steady state. Such systems allow one to study quantum correlations out of equilibrium, and to study whether dissipation has particular significance for distinctively quantum correlations such as entanglement. The aim of this paper is to explore the range and scaling of quantum correlations in the non-equilibrium steady state (NESS) near a critical point of the corresponding equilibrium system. A natural system in which to address such questions is an array of coupled cavities [@Hartmann2006; @Greentree2006; @Angelakis2007a; @Hartmann2008; @Schmidt2013b]. Such systems allow for tunable coupling and nonlinearity, and inevitably have dissipation, as light escapes from the cavities.
Recently @Bardyn2012 have shown that such systems can in certain limits map to dissipative spin chain models, as explained below. Their proposed configuration allows tuning of both the anisotropy of the spin-spin coupling, and of a transverse field. We study the non-equilibrium steady state, i.e. the long time behavior, in the presence of dissipation. Within this scenario, we determine the dependence of quantum correlations on both of these parameters, exploring the range from the transverse field Ising model to the transverse field XY model. The transverse field Ising model is a paradigmatic example of quantum critical behavior [@Sachdev2011], and so the scaling of entanglement in the equilibrium Ising model (or anisotropic XY model) was one of the first examples studied [@Osterloh2002; @Osborne2002; @Vidal2003a]. As noted above, while the magnitude of entanglement shows critical scaling, the range over which non-zero entanglement exists does not [@Osterloh2002; @Osborne2002]. This finite range behavior persists for all models in the Ising universality class [@Osborne2002; @Maziero2010; @StelmachoviPeter2004]. Following these early studies, there have been many subsequent explorations of critical entanglement, including the spin-boson system [@Hur2008] which can be viewed as a phase transition of a dissipative quantum system. For a review, see Ref. [@Amico2008]. A major difficulty in understanding a many body quantum system is the exponential growth of Hilbert space dimension with the system size. One method to overcome this difficulty is to use a matrix product state (MPS) approach [@Vidal2003; @Vidal2004]. Such methods make use of the fact that many physically relevant states have entanglement which is either constant or grows at most polynomially with system size [@Zurek2003]; an MPS can efficiently represent such a state. 
The MPS representation of a state is the concept underlying the Density Matrix Renormalization Group (DMRG) [@White:DMRG; @White:Algorithms] approach. While the DMRG was originally used as a method to determine ground states of interacting systems, it was later extended to study dynamics [@Cazalilla2002; @White2004; @Daley2004; @Micheli2004; @Clark2004; @Cai2013], by an approach known as time evolving block decimation (TEBD). All these approaches ultimately rely on the fact that an efficient MPS representation of the relevant states of the system exists; for a discussion of this see e.g. Ref. [@Schollwock2011a]. These approaches have also been extended to open systems (mixed states), by introducing matrix product operators (MPO) [@Zwolak2004; @Orus2008; @Prosen2009]. This allows one to efficiently time evolve the density matrix equations of motion for one-dimensional open systems, and thus find the non-equilibrium steady state. The remainder of this paper is organized as follows. Section \[sec:driv-diss-ising\] reviews the basis of our calculations. In particular, sections \[sec:effect-hamilt\],\[sec:dissipation\] introduce the effective Hamiltonian we study and its coupling to an external environment; section \[sec:meas-quant-corr\] reviews the measures of quantum correlations we calculate; section \[sec:matrix-product-state\] outlines the MPO method we use to find the steady state. Section \[sec:scal-quant-corr\] then presents the results of our numerical calculation. After reviewing the nature of the steady state in section \[sec:nature-non-equil\], and comparing these results to the mean-field theory of our model in section \[sec:mft\], sections \[sec:open-syst-corr\],\[sec:open-syst-corr-1\],\[sec:corr-vs-decay\] discuss the dependence of quantum correlations on each of the model parameters in turn. Finally, section \[sec:asymptotic-delta-to\] discusses analytic calculations which can reproduce the behavior seen for weak driving.
In section \[sec:conclusions\] we summarize our findings. Driven-dissipative model and observables {#sec:driv-diss-ising} ======================================== Effective Hamiltonian {#sec:effect-hamilt} --------------------- ![(Color online) Cartoon illustrating coupled cavity array with hopping $J$, two-cavity pumping $\Omega$ and loss rate $\kappa$.[]{data-label="fig:cca-cartoon"}](cca){width="3.2in"} We consider a coupled cavity array realization of the transverse field anisotropic XY model, as introduced in Ref. [@Bardyn2012]. For completeness, we briefly summarize the nature of such a model here. As illustrated in Fig. \[fig:cca-cartoon\] the model consists of a 1-D array of optical cavities, supporting photon modes, described by bosonic operators $c_j$ with hopping amplitude $J$ between the cavities so that $H = \sum h_j - J \sum_{j} [ c^\dagger_j c^{}_{j+1} + \text{H.c.} ]$. The on-site Hamiltonian, $h_j= \omega_c c^\dagger_j c^{}_j + U c^\dagger_j c^\dagger_j c^{}_j c^{}_j $, incorporates an optical nonlinearity $U$. Physically this can be induced by coupling each cavity to a saturable optical absorber [@Hartmann2006; @Hartmann2008; @LeBoite2013a]. In addition to these elements, which would lead to a Bose-Hubbard model [@Fisher1989], we include a two-photon driving term as proposed in Ref. [@Bardyn2012]. Specifically, we consider a drive $\Omega \cos(2 \omega_p t)$ near two-photon resonance, i.e. $\omega_p \simeq \omega_c$, and we work in the limit of strong optical nonlinearity. In this limit, the problem simplifies, as one may truncate each site to occupations $0$ or $1$. Furthermore, this implies that the two-photon pump is only resonant for the creation of pairs of photons on adjacent cavities. When restricted to the $0,1$ occupation subspace, one may replace each cavity mode with a spin $1/2$, i.e. replace bosonic operators by Pauli matrices $(c^{}_j, c^\dagger_j) \to (\sigma^-_j, \sigma^+_j)$. 
Here $\sigma^{\pm}_j = (\sigma^x_j \pm i \sigma^y_j)/2$ in terms of regular Pauli matrices. In this notation, the Hamiltonian becomes: $$\begin{gathered} \label{eq:1} \hat{H}_0 = \sum_j \frac{\omega_c}{2} \sigma_j^z - J \sum_j \left(\sigma^+_j \sigma^-_{j+1} + \sigma^-_j \sigma^+_{j+1} \right) \\ - \Omega \sum_j \left(\sigma^+_j \sigma^+_{j+1} e^{- 2 i \omega_p t} + \sigma^-_j \sigma^-_{j+1} e^{2 i \omega_p t} \right).\end{gathered}$$ The explicit time dependence appearing here can be removed by a transformation to a rotating frame. In such a frame the Hamiltonian is given by: $$\begin{gathered} \label{eqnham2} \hat{H} = -J \sum_{j} \left[ g \sigma_{j}^{z} + (\sigma_{j}^{+}\sigma_{j+1}^{-}+\sigma_{j+1}^{+}\sigma_{j}^{-}) \right. \\ \left. + \Delta (\sigma_{j}^{+}\sigma_{j+1}^{+}+\sigma_{j+1}^{-}\sigma_{j}^{-}) \right],\end{gathered}$$ where we have introduced dimensionless parameters $g = (\omega_p - \omega_c)/2J$ and $\Delta=\Omega/J$. This can also be written in the canonical form of the Ising model [@mattis2006theory]: $$\label{eq:2} \hat{H} = - J \sum_j \left[ g \sigma_{j}^{z} + \frac{1+\Delta}{2} \sigma_{j}^{x}\sigma_{j+1}^{x} + \frac{1-\Delta}{2} \sigma_{j}^{y}\sigma_{j+1}^{y}\right].$$ The parameter $\Delta$ describes the anisotropy of the interaction: $\Delta=0$ corresponds to the isotropic $XY$ model, $\Delta=1$ to the Ising model. For $0 < |\Delta| \le 1 $ the Hamiltonian is in the Ising universality class. In the ground state, changing the transverse field $g$ will induce a quantum phase transition [@Sachdev2011] at $|g|=1$, between a phase with $\langle \sigma_x \rangle \neq 0$ for $|g|<1$, and a phase with vanishing $\langle \sigma_x \rangle$ for $|g|>1$. Dissipation {#sec:dissipation} ----------- In addition to the terms described so far, each cavity is also assumed to couple to a continuum of radiation modes describing irreversible loss into the environment [@Breuer2002]. 
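The equivalence between the $\sigma^\pm$ form of the rotating-frame Hamiltonian and the canonical XY form can be verified by brute force on a short chain. A minimal sketch (open boundary conditions are assumed here purely for simplicity):

```python
import numpy as np

# Pauli matrices and ladder operators.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp, sm = (sx + 1j * sy) / 2, (sx - 1j * sy) / 2
I2 = np.eye(2)

def op(site_ops, N):
    """Tensor product over N sites; site_ops maps site index -> 2x2 matrix."""
    out = np.eye(1)
    for j in range(N):
        out = np.kron(out, site_ops.get(j, I2))
    return out

N, J, g, Delta = 4, 1.0, 0.7, 0.3
H1 = np.zeros((2**N, 2**N), dtype=complex)   # sigma^± form
H2 = np.zeros_like(H1)                        # canonical XY form
for j in range(N):
    H1 -= J * g * op({j: sz}, N)
    H2 -= J * g * op({j: sz}, N)
for j in range(N - 1):
    H1 -= J * (op({j: sp, j+1: sm}, N) + op({j: sm, j+1: sp}, N)
               + Delta * (op({j: sp, j+1: sp}, N) + op({j: sm, j+1: sm}, N)))
    H2 -= J * ((1 + Delta) / 2 * op({j: sx, j+1: sx}, N)
               + (1 - Delta) / 2 * op({j: sy, j+1: sy}, N))

# The two forms agree as matrices, using sp sp + sm sm = (sx sx - sy sy)/2
# and sp sm + sm sp = (sx sx + sy sy)/2.
assert np.allclose(H1, H2)
```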
At optical cavity and pump frequencies, one may eliminate such modes via the Born-Markov approximation [@scully97; @Breuer2002], producing the master equation: $$\label{equationfin} \frac{d}{d t}\rho =-i[\hat{H},\rho] +\kappa \sum_{j} [2 \sigma_{j}^{-}{\rho} \sigma_{j}^{+}-\sigma_{j}^{+}\sigma_{j}^{-}\rho- \rho\sigma_{j}^{+}\sigma_{j}^{-}].$$ The dissipation described in Eq. (\[equationfin\]) corresponds to independent incoherent loss from each cavity. In the spin language, this corresponds to a process that flips the spin from up to down. Such dissipation corresponds to a zero-temperature bath; this is appropriate when considering optical frequency systems, as the characteristic energy scales are much larger than temperature. In the following we introduce the dimensionless ${\tilde{\kappa}}=\kappa/J$, and consider the steady state of the system as a function of the parameters $(g, \Delta, {\tilde{\kappa}})$. In the remainder of the manuscript all energies are thus given in units of $J$. It is important to note that the form of Eq. (\[equationfin\]) can only follow from an originally time-dependent, i.e. pumped, system. For a time-independent system coupled to an external bath, a correct treatment of the bath [@Cresser1992a] must lead to a master equation which drives the system toward its thermal state. Such behavior is clearly required to be consistent with equilibrium statistical mechanics. The same is not however true of a time-dependent Hamiltonian — in the rotating frame, coupling to the bath need not satisfy detailed balance due to the “extra” time dependence induced by the pump frequency [@Joshi2013]. The crossover between these limits as one varies $\omega_c, \omega_p$ while keeping $g$ constant is an interesting question for future work.
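For a single cavity, the master equation above can be vectorized and its steady state found as the null vector of the Liouvillian; as expected for a zero-temperature bath without the two-site drive, the spin relaxes to the down state. A minimal sketch (row-major vectorization, so $A\rho B \to (A\otimes B^T)\,\mathrm{vec}(\rho)$):

```python
import numpy as np

sz = np.diag([1.0, -1.0]).astype(complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sm = sp.conj().T                                  # sigma^-
I2 = np.eye(2)

Jg, kappa = 0.7, 0.2   # illustrative values of J*g and kappa
H = -Jg * sz           # single-site field term of the Hamiltonian

# Liouvillian: -i[H, rho] + kappa (2 s- rho s+ - s+ s- rho - rho s+ s-).
L = (-1j * (np.kron(H, I2) - np.kron(I2, H.T))
     + kappa * (2 * np.kron(sm, sp.T)
                - np.kron(sp @ sm, I2) - np.kron(I2, (sp @ sm).T)))

# Steady state: eigenvector of L with eigenvalue zero.
vals, vecs = np.linalg.eig(L)
rho = vecs[:, np.argmin(np.abs(vals))].reshape(2, 2)
rho /= np.trace(rho)

assert np.allclose(rho, np.diag([0.0, 1.0]))   # pure spin-down state
```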
Measures of quantum correlations {#sec:meas-quant-corr} -------------------------------- To quantify the *quantum* correlations between different sites requires some care, since a given pair of sites will in general be entangled both with other sites and with the external environment. As such, it is important to use a measure of the quantum correlation between a specific pair of sites. The measure of pairwise entanglement we will use is the negativity, $\mathcal{N}$, defined as: $$\label{eq:4} \mathcal N= \text{max}(0,\sum_{i=1}^{4} |\lambda_{i}|-1),$$ where $\lambda_{i}$ are the eigenvalues of the partially transposed two-qubit density matrix $\rho_{AB}^{T_B}$ [@nielsen2010quantum], with $T_B$ indicating transposition of system $B$. According to the Peres-Horodecki criterion [@Peres1996; @Horodecki1996] a (mixed) state of a bipartite system is separable if the negativity is zero. For any separable state, the density matrix remains positive under a partial transpose; in an entangled state a partial transpose may produce a non-positive density matrix [@Lupo2005]. The negativity as defined in Eq. (\[eq:4\]) is a measure of whether the partial transpose produces negative eigenvalues. A non-zero value of negativity serves both as a necessary and sufficient condition for the inseparability of a general two-qubit state [@Peres1996; @Horodecki1996]. For pure states, entanglement is a sufficient measure of quantum correlations and quantifies the ability of a state to act as a resource for quantum computational speed-up [@Josza2003]. For mixed states, separability (vanishing entanglement) does not in general imply classicality [@Ollivier2001; @Henderson2001; @Modi2012b] — computational speed-up for mixed state quantum computing can occur without entanglement [@Knill1998a].
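A minimal implementation of the negativity for two qubits, checked on a Bell state ($\mathcal{N}=1$) and on a product state ($\mathcal{N}=0$):

```python
import numpy as np

def negativity(rho):
    """N = max(0, sum_i |lambda_i| - 1), with lambda_i the eigenvalues
    of the partial transpose of rho over the second qubit."""
    r = rho.reshape(2, 2, 2, 2)               # indices (a, b, a', b')
    rho_pt = r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap b <-> b'
    lam = np.linalg.eigvalsh(rho_pt)          # partial transpose is Hermitian
    return max(0.0, np.sum(np.abs(lam)) - 1.0)

# Maximally entangled Bell state (|00> + |11>)/sqrt(2): negativity 1.
bell = np.zeros(4)
bell[[0, 3]] = 1 / np.sqrt(2)
assert np.isclose(negativity(np.outer(bell, bell)), 1.0)

# Separable product state |00><00|: negativity 0.
up = np.array([1.0, 0.0])
prod = np.outer(np.kron(up, up), np.kron(up, up))
assert np.isclose(negativity(prod), 0.0)
```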
Such speed-up has been attributed to the presence of non-zero quantum discord [@Ollivier2001; @Henderson2001; @Modi2012b; @Knill1998a; @Dakic2012a], $\mathcal D$, defined [@Modi2012b] as follows: Consider a bipartite system AB in a state $\rho$, and a local measurement performed on subsystem B with its result ignored. Such a measurement will cause a disturbance of subsystem $A$ for almost all states. There is however a class, $\Omega$, of states that is unchanged by such a measurement. For such states $\chi\in\Omega$, known as “classical-quantum” states, one may write: $\chi=\sum_{i} p_{i} \rho_{A i} \otimes |i \rangle_B {}_B\langle i |$, where $p_{i}$ is a probability distribution, $\rho_{A i}$ is the marginal density matrix of A, and the states $|i\rangle_B$ form an orthonormal set. Geometric discord $\mathcal D$ is the distance between the state $\rho$ and the closest classical-quantum state $\chi\in\Omega$. Explicitly, for an arbitrary mixed state $\rho$ of a $d \otimes d$ quantum system it is $\mathcal D(\rho)=\frac{d}{d-1}\text{min}_{\chi \in \Omega} ||\rho-\chi||^{2}$, where $||M||=\sqrt{\sum_{i} m_{i}^{2}}$ is the Hilbert-Schmidt norm of the operator $M$ with eigenvalues $m_{i}$. In the specific case of two-level systems (qubits), a closed form for $\mathcal{D}$ exists [@Modi2012b; @Dakic2010]. Writing the state of two qubits as: $$\rho=\frac{1}{4} \sum_{i,j=0}^{3} R_{ij}\sigma_{i} \otimes \sigma_{j}, \qquad \mathbf{R} = \left( \begin{array}{cc} 1 & \mathbf{y}^T \\ \mathbf{x} & \mathbf{t} \end{array} \right)$$ where $\sigma^{0,1,2,3}_j = \left\{\mathbbm{1}_j, \sigma^x_j, \sigma^y_j, \sigma^z_j \right\}$, and $\mathbf{R}$ is given in block structure above, one may then construct the $3 \times 3$ matrix $S=(1/4)(\mathbf{x}\mathbf{x}^T + \mathbf{t}\mathbf{t}^{T})$ from which $$\mathcal D=2 \text{Tr}[S]-2 \lambda_\text{max}(S),$$ where $\lambda_\text{max}(S)$ is the largest eigenvalue of the matrix $S$.
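The closed form above is straightforward to implement; the sketch below checks it on a classical-quantum product state ($\mathcal D = 0$) and on a Bell state ($\mathcal D = 1$):

```python
import numpy as np

paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def geometric_discord(rho):
    """Two-qubit closed form: D = 2 Tr[S] - 2 lambda_max(S),
    with S = (x x^T + t t^T)/4 built from R_ij = Tr[rho sigma_i x sigma_j]."""
    R = np.array([[np.trace(rho @ np.kron(si, sj)).real
                   for sj in paulis] for si in paulis])
    x = R[1:, 0]      # Bloch vector of subsystem A (first column block)
    t = R[1:, 1:]     # 3x3 correlation matrix
    S = (np.outer(x, x) + t @ t.T) / 4.0
    lam = np.linalg.eigvalsh(S)
    return 2 * np.trace(S) - 2 * lam[-1]

# Classical-quantum product state |00><00|: zero discord.
rho00 = np.zeros((4, 4))
rho00[0, 0] = 1.0
assert np.isclose(geometric_discord(rho00), 0.0)

# Bell state (|00> + |11>)/sqrt(2): D = 1.
bell = np.zeros(4)
bell[[0, 3]] = 1 / np.sqrt(2)
assert np.isclose(geometric_discord(np.outer(bell, bell)), 1.0)
```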
Matrix product state evolution {#sec:matrix-product-state} ------------------------------ As noted above, to find the non-equilibrium steady state, we time evolve Eq. (\[equationfin\]) using a matrix product operator approach [@Zwolak2004; @Orus2008; @Prosen2009]. Here we briefly summarize the method used in our calculation. Further details of our specific implementation can be found in Ref. [@Nissen2013a]. Our problem requires time-evolving the density matrix of a chain of $N$ two-level systems. This density matrix may be written in the form: $$\label{supr} \rho= \sum_{\{ i_1, i_2, \ldots, i_N\}} c_{i_1, i_2, \ldots, i_N} \bigotimes_{j=1}^N \sigma_j^{i_j}$$ with $\sigma^{0,1,2,3}_j$ as given earlier forming a basis for the density matrix on each site. The central point of the MPO approach is to write the coefficients $c_{ i_1, i_2, \ldots, i_N}$ in terms of matrices $\Gamma^{[j]i_j}$ and vectors $\lambda^{[j]}$ as follows: $$\begin{gathered} c_{i_{1},i_{2},\ldots i_{N}}= \sum_{\{\alpha_{j}\}}\Gamma_{1,\alpha_{1}}^{[1]i_{1}}\lambda_{\alpha_{1}}^{[1]}\Gamma_{\alpha_{1},\alpha_{2}}^{[2]i_{2}}\ldots \\ \Gamma_{\alpha_{j-2},\alpha_{j-1}}^{[j-1]i_{j-1}}\lambda_{\alpha_{j-1}}^{[j-1]}\Gamma_{\alpha_{j-1},\alpha_{j}}^{[j]i_{j}}\ldots\Gamma_{\alpha_{N-2},\alpha_{N-1}}^{[N-1]i_{N-1}}\lambda_{\alpha_{N-1}}^{[N-1]}\Gamma_{\alpha_{N-1},1}^{[N]i_{N}}.\end{gathered}$$ The matrix $\Gamma^{[j]i_{j}}$, corresponding to basis component $i_j$ on site $j$, is a $\chi_{j-1} \times \chi_j$ matrix, and $\lambda^{[j]}$ is a set of $\chi_j$ coefficients associated with the bond between site $j$ and site $j+1$. Here $\chi_j$ is the (integer) bond dimension of the matrix associated with bond $j$. If all $\chi_j=1$ then one has an entirely separable density matrix, i.e. $\rho = \bigotimes \rho_j$, equivalent to a mean-field approximation.
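To make the role of the $\Gamma$ and $\lambda$ tensors concrete, here is a minimal Python sketch that contracts the MPO data back into the coefficient tensor. The tensor shapes and the $\chi_j=1$ product-state example are our own illustrative choices, not the implementation of Ref. [@Nissen2013a]:

```python
import numpy as np

def contract_mpo(gammas, lambdas):
    """Rebuild the coefficient tensor c_{i1...iN} from MPO data.
    gammas[j] has shape (4, chi_{j-1}, chi_j); lambdas[j] is the vector
    of weights on the bond following site j+1 (0-based lists).
    The first and last bond dimensions are 1, as in the text."""
    c = gammas[0][:, 0, :]                              # shape (4, chi_1)
    for j in range(1, len(gammas)):
        c = np.einsum('pa,a,qab->pqb', c, lambdas[j - 1], gammas[j])
        c = c.reshape(-1, c.shape[-1])                  # (4**(j+1), chi)
    return c[:, 0].reshape((4,) * len(gammas))

# All chi_j = 1: a product (mean-field) density matrix. A single site in
# its ground state, rho_j = (sigma^0 - sigma^3)/2, has coefficients
# (1/2, 0, 0, -1/2) in the basis {1, sigma^x, sigma^y, sigma^z}.
site = np.array([0.5, 0.0, 0.0, -0.5]).reshape(4, 1, 1)
N = 3
c = contract_mpo([site] * N, [np.ones(1)] * (N - 1))
v = site[:, 0, 0]
assert np.allclose(c, np.einsum('i,j,k->ijk', v, v, v))
```

With all bond dimensions equal to one, the reconstructed coefficient tensor factorizes into an outer product of single-site coefficients, exactly the separable case noted above.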
If the $\chi_j$ are sufficiently large, any state can be written in the above form — the required size for our two-level-system density matrix is $\chi_j = \text{min}(4^j, 4^{N-j})$. To simulate such a system efficiently we restrict $\chi_j \le \chi_{\text{max}}$. For a fixed $\chi_{\text{max}}$, the size of the computation scales linearly with chain length. Despite this, the representation is able to accurately describe the full quantum dynamics in many problems. To time-evolve the state, we follow the algorithm described in Refs. [@Vidal2004; @Orus2008]. The Master equation may be written in a superoperator form, with the density matrix as a vector $\rho \to |\rho\rangle$, so that $\partial_t |\rho\rangle = M |\rho \rangle$. The superoperator $M$ can be decomposed as $M = \sum_j M^{\text{pair}}_{j,j+1} $, with the one-site operations split between the appropriate pair operators. Evolution by a time step $\delta t$ corresponds to propagating the coefficients $\Gamma^{[j]i_{j}}, \lambda^{[j]}$ under the operator $\exp(M^{\text{pair}}_{j,j+1} \delta t)$. This is done by converting the MPO representation for a given pair of sites into its explicit form, evolving the pair, and then performing a singular value decomposition (SVD) [@nielsen2010quantum] to return the final form to the MPO representation. The rank $\chi_j$ after such an update will generally have increased, but can be restored to $\chi_j \le \chi_{\text{max}}$ by keeping only the largest $\chi_{\text{max}}$ singular values in the SVD. To extend from a single pair to many, the overall superoperator $M$ is divided into parts acting on odd and even bonds, and the evolution performed using the Suzuki-Trotter expansion $e^{M \delta t} = e^{M^{\text{odd}} \delta t/2} e^{M^{\text{even}} \delta t} e^{M^{\text{odd}} \delta t/2} + \mathcal{O}(\delta t^3)$. Since $M^{\text{odd}}$ involves a sum of pair operations which mutually commute, all the updates in $M^{\text{odd}}$ can be performed in parallel. The same applies to $M^{\text{even}}$.
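The truncation at the heart of the pair update can be sketched as follows; this is a generic SVD-and-truncate helper, with shapes and the rank-3 test block chosen purely for illustration:

```python
import numpy as np

def split_and_truncate(theta, chi_max):
    """SVD a two-site block theta (a chi_L*d x d*chi_R matrix) and keep
    at most chi_max singular values, as in the pair update above.
    Returns the factors and the discarded (truncation) weight."""
    U, s, Vh = np.linalg.svd(theta, full_matrices=False)
    chi = min(chi_max, len(s))
    err = np.sum(s[chi:] ** 2) / np.sum(s ** 2)
    return U[:, :chi], s[:chi], Vh[:chi, :], err

# A block of exact rank 3 is reproduced exactly once chi_max >= 3.
rng = np.random.default_rng(2)
theta = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 8))
U, s, Vh, err = split_and_truncate(theta, chi_max=3)
assert np.allclose((U * s) @ Vh, theta) and err < 1e-12
```

The discarded weight `err` gives a convenient running estimate of the error introduced at each bond by the truncation.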
![(Color online) Negativity $\mathcal N$ vs transverse field $g$, comparing MPO numerical solution for $\chi_{\text{max}}=20$ (blue-solid) to exact diagonalization (red-dashed) for a four-site Ising model. Parameters (in units of $J$): $\Delta=1$, ${\tilde{\kappa}}=0.5$.[]{data-label="fig:compare-exact-mps"}](4sitesg.pdf){width="3.0in"} To demonstrate the accuracy of our implementation [@Nissen2013a], Fig. \[fig:compare-exact-mps\] shows a comparison between exact diagonalization of the four-site Master equation and the open-system MPO code with $\chi_{\text{max}}=20$, showing close agreement. These results, as all results in our paper, are calculated for a chain with open boundary conditions. For longer chains, comparison with exact solutions is not feasible, so we instead check for convergence of numerical results with matrix rank $\chi_{\text{max}}$. Efficient simulation depends on whether convergence is achieved for sufficiently small values of the matrix rank $\chi_{\text{max}}$. If correlation lengths diverge, such as at critical points, strong long-range correlations exist. In such cases convergence would only occur at large $\chi_{\text{max}}$, and evolution becomes computationally expensive. In our system, we will see that the dissipation $\tilde{\kappa}$ suppresses such long-range correlations; for small values of $\tilde{\kappa}$ the computational cost would increase, particularly near the equilibrium critical points $|g|=1$. It is important also to note that in this paper we are only concerned with convergence of the steady-state properties. If one is also interested in the short-time dynamics, the required matrix rank may be much larger [@Hartmann2009], due to transient correlations arising before dissipation has time to act. In addition to convergence with matrix rank, we also check that properties near the middle of the chain converge with increasing chain length.
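As an aside, the $\mathcal{O}(\delta t^3)$ local accuracy of the symmetric Trotter splitting quoted above is easy to verify on small random generators. This standalone check (matrix sizes and step sizes are arbitrary choices, independent of the MPO machinery):

```python
import numpy as np
from scipy.linalg import expm

def trotter_error(A, B, dt):
    """Norm of the difference between exp((A+B) dt) and the symmetric
    splitting exp(A dt/2) exp(B dt) exp(A dt/2)."""
    exact = expm((A + B) * dt)
    split = expm(A * dt / 2) @ expm(B * dt) @ expm(A * dt / 2)
    return np.linalg.norm(exact - split)

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6))
e1 = trotter_error(A, B, 0.02)
e2 = trotter_error(A, B, 0.01)
# Halving dt reduces the error by roughly 2**3 = 8, as expected for a
# local error of order dt**3.
assert 6 < e1 / e2 < 10
```

The error ratio approaches $2^3=8$ as $\delta t \to 0$, confirming the third-order local error of the symmetric splitting.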
Scaling of quantum correlations in Non-equilibrium steady states {#sec:scal-quant-corr} ================================================================ Nature of the Non-Equilibrium Steady state {#sec:nature-non-equil} ------------------------------------------ Before discussing the quantum correlations in the non-equilibrium steady state of Eq. (\[equationfin\]), we first discuss the nature of the steady state itself. The dissipation term on its own would drive the system to a state with all spins pointing down. In the following we denote this state as the trivial empty state. In general (unless $\Delta=0$), this trivial state is not an eigenstate of the Hamiltonian, so it is not the steady state. An observable that gives a clear indication of the nature of the steady state is the correlation function $\langle \sigma^x_{j} \sigma^x_{j+l} \rangle$. This is plotted in Fig. \[fig:isingorder\] for $\Delta=1$, for sites near the center of the chain, hence avoiding edge effects. ![(Color online) Panel (a) shows spin-spin correlations $\langle \sigma^x_{j} \sigma^x_{j+l} \rangle$ as a function of transverse field. The different lines correspond to different separations. Panel (b) shows the decay of spin-spin correlations $\langle \sigma^x_{j} \sigma^x_{j+l} \rangle$ as a function of the separation $l$ between the spin sites. Both panels are plotted for the Ising limit ($\Delta=1$). It is clearly seen that the NESS exhibits FM and AFM ordering near $g=-1$ and $g=+1$ respectively. Inset (panel (c)) shows short-range incommensurate order for smaller values of the transverse field ($g=\pm 0.1$). The axes in the inset are the same as in the main plot. Other parameters (in units of $J$): ${\tilde{\kappa}}=0.5$ and MPO calculation performed for an $N=40$ site chain, with $\chi_{\text{max}}=20$.
[]{data-label="fig:isingorder"}](corrnsising.pdf "fig:"){width="3.2in"}\ ![[]{data-label="fig:isingorder"}](corrndelta1.pdf "fig:"){width="3.2in"} As is clear from Fig. \[fig:isingorder\], in the NESS, the $x$ components of spin show (short-range) ferromagnetic order for transverse fields around $g\simeq -1$ and antiferromagnetic order for fields around $g\simeq 1$. In comparison, in the ground state of the Ising model there are ferromagnetic correlations for $|g|<1$, regardless of the sign of $g$. As will be proven below, there is a direct relation between the NESS for positive and negative $g$, corresponding to a $\pi$ rotation around the $z$ axis on every second lattice site. This duality implies that if (short-range) ferromagnetic correlations are seen for a given $g$, anti-ferromagnetic correlations will exist for $g \to -g$. As well as this formal duality, we will next discuss a more intuitive picture for the different behaviors at positive and negative $g$, a picture substantiated by the analytic mean-field results given in Sec. \[sec:mft\].
For large negative $g$, the ground state is compatible with the dissipation terms: both favor spins pointing in the $-z$ direction. For weak decay ($\kappa \to 0$), steady states of the collective dynamics generally correspond to stationary points of the closed-system dynamics. Such stationary points correspond to extrema of the energy. The ferromagnetic correlations seen for $g<0$ clearly reflect the properties of the ground state, including a peak in correlations near $g=-1$, where the ground state undergoes a quantum phase transition. In contrast, for large positive $g$ the ground state is incompatible with the dissipation. However, the maximum energy state, which is also a stationary point of the dynamics, is compatible with the dissipation. The behavior of the correlations seen in Fig. \[fig:isingorder\] suggests that for $g<0$, the attractor of the dynamics is related to the ground state, while for $g>0$ the attractor is instead related to the state of maximum energy. Similar behavior has been seen in the dynamics of the Dicke model, where duality under change of sign of the cavity-pump detuning leads to an inverted normal state [@Bhaseen:Noneqdicke]. The proof of the duality under change of the sign of $g$ follows by considering transformations of the density matrix that relate its steady state for $g$ to that for $-g$. We consider dividing the chain into sublattices of odd and even sites. The switch from ferro- to antiferromagnetic order is equivalent to the statement that correlations between sites on different (the same) sublattices are odd (even) functions of the field $g$. Two dualities are required to show this. The first is a duality under $\hat{H} \to - \hat{H}$, $\rho \to \rho^\ast$. This follows from taking the complex (*not* Hermitian) conjugate of the equation of motion. Since both $\hat{H}$ and all the loss terms are real, this complex conjugation shows that $\hat{H} \to - \hat{H}$ is equivalent to $\rho \to \rho^\ast$.
The second duality concerns a rotation around the $z$ axis on one sublattice, $\rho \to \hat{R}_{\text{odd}} \rho \hat{R}_{\text{odd}}$, where $\hat{R}_{\text{odd}} =\prod_{j=1,3,5\ldots} \sigma^z_j$. This has the effect of modifying Eq. (\[eq:2\]) by changing the sign of the inter-site couplings, which is equivalent to the combination $H \to -H, g \to -g$. Combining this duality with complex conjugation, one finds that interchanging $g \to -g$ alone is equivalent to $\rho \to \hat{R}_{\text{odd}} \rho^\ast \hat{R}_{\text{odd}}$. This transformation swaps the sign of correlations between the two sublattices as required. The dualities involved make clear the rôle of the inversion $H \to -H$ in relating the steady states for $g \to -g$, corroborating the statement that the $g>0$ steady state is related to the maximum energy state. As can be seen in Fig. \[fig:isingorder\](c), for small values of $g$, correlations become small, and vanish at $g = 0$. In the small-$g$ regime these short-range correlations are neither strictly ferromagnetic nor anti-ferromagnetic, but instead show an incommensurate ordering. Such behavior occurs in a regime where mean-field theory would predict the trivial state. (Note that in other models, mean-field theory can also predict incommensurate orderings [@Lee2013].) As expected, the spin-spin correlation functions always respect the sublattice dualities discussed above. The appearance of the trivial state as an attractor at $g\to 0$ cannot be simply related to minimum or maximum energy states as in the earlier discussion. Note also that the above dualities do not explain why the same-sublattice correlators, which are even functions of $g$, should vanish at $g=0$. The state at $g=0$ can nonetheless be directly understood: at $g=0, \Delta=1$, the effective magnetic field seen by any site points purely in the $x$ direction, and so the evolution combines precession around the $x$ axis with decay.
Consequently, the $x$ component of all spins vanishes at this point. The correlators $\langle \sigma^y_{j} \sigma^y_{j+l} \rangle$ (not shown) do not generally vanish at $g=0$, but still show the odd–even symmetry discussed above. For $\Delta<1$ the $\langle \sigma^x_{j} \sigma^x_{j+l} \rangle$ do not vanish at $g=0$ either; this is discussed further in Sec. \[sec:open-syst-corr-1\]. Comparison with the mean-field theory {#sec:mft} ------------------------------------- To further understand the differences between the NESS and the ground state, we next discuss the mean-field prediction for the NESS. While mean-field theory incorrectly predicts long-range order in low dimensions, the nature of the order predicted is reflected by the full MPO numerics. Within mean-field theory it is possible to give closed-form expressions for the phase boundary, and for the nature of the order anticipated for given values of $g, \Delta, \kappa$. This provides further intuition for the differences between the NESS and the ground state. In mean-field theory, the full density matrix is approximated as a product state (i.e. equivalent to restricting $\chi_{\text{max}}=1$ in an MPO simulation). The equations of motion then reduce to the following set of non-linear Bloch equations: $$\begin{aligned} \label{eq:5} \frac{d}{d t}\langle \hat{\sigma}_{j}^{x}\rangle &=\! -{\tilde{\kappa}}\langle \hat{\sigma}_{j}^{x}\rangle \!+\!2g\langle \hat{\sigma}_{j}^{y}\rangle \!-\!(1-\Delta)\langle \hat{\sigma}_{j}^{z}\rangle (\langle \hat{\sigma}_{j-1}^{y}\rangle+\langle \hat{\sigma}_{j+1}^{y}\rangle) \nonumber\\ \frac{d}{d t}\langle \hat{\sigma}_{j}^{y}\rangle &=\! -{\tilde{\kappa}}\langle \hat{\sigma}_{j}^{y}\rangle \!-\!2g\langle \hat{\sigma}_{j}^{x}\rangle \!+\!(1+\Delta)\langle \hat{\sigma}_{j}^{z}\rangle (\langle \hat{\sigma}_{j-1}^{x}\rangle+\langle \hat{\sigma}_{j+1}^{x}\rangle) \nonumber\\ \frac{d}{d t}\langle \hat{\sigma}_{j}^{z}\rangle &=\! 
-2{\tilde{\kappa}}(\langle \hat{\sigma}_{j}^{z}\rangle+1)-(1+\Delta)\langle \hat{\sigma}_{j}^{y}\rangle (\langle \hat{\sigma}_{j-1}^{x}\rangle+\langle \hat{\sigma}_{j+1}^{x}\rangle) \nonumber\\ & \! +(1-\Delta)\langle \hat{\sigma}_{j}^{x}\rangle (\langle \hat{\sigma}_{j-1}^{y}\rangle+\langle \hat{\sigma}_{j+1}^{y}\rangle). \end{aligned}$$ One may either directly time-evolve these equations to determine steady states, or attempt to solve them analytically in cases where the spatial dependence is relatively simple. Below we first present the analytical approach, and then discuss direct numerical evolution. It is clear from Eq. (\[eq:5\]) that the trivial state $\langle \hat{\sigma}_{j}^{x}\rangle=\langle \hat{\sigma}_{j}^{y}\rangle=0, \langle \hat{\sigma}_{j}^{z}\rangle=-1$ is always a fixed point, i.e. a steady state. This trivial state does not break the $\mathbbm{Z}_{2}$ symmetry of Eq. (\[equationfin\]) and so can also be referred to as a paramagnetic state [@Lee2013]. While such a steady state always exists, this state need not always be stable to small fluctuations. To test linear stability, one may linearize the equations of motion around the steady state, and consider plane-wave fluctuations of the form: $$\left(\begin{array}{c} \langle \hat{\sigma}_{j}^{x}\rangle \\ \langle \hat{\sigma}_{j}^{y}\rangle \\ \langle \hat{\sigma}_{j}^{z}\rangle \end{array} \right) = - \left( \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right) + \sum_{k} \left( \begin{array}{c} x_k \\ y_k \\ z_k \end{array} \right) e^{-i \nu_k t - i j k}.$$ The equations of motion then yield a secular equation for the frequencies $\nu_k$, with solutions $$\label{kappaeqn} \nu_k = - i \tilde{\kappa} \pm 2\sqrt{g^2+2 g \cos (k)+(1-\Delta^2) \cos^2(k)}$$ and $\nu_k=-2i\tilde{\kappa}$. The steady state is stable to such a plane-wave fluctuation at wavevector $k$ if $\Im[\nu_k] <0$, meaning that such fluctuations decay exponentially.
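This stability condition is straightforward to evaluate numerically. A minimal Python sketch (grid resolution and parameter values are illustrative), consistent with the Ising-limit boundary $\tilde{\kappa} = 2\sqrt{1-(|g|-1)^2}$:

```python
import numpy as np

def trivial_state_growth_rate(g, delta, kappa, nk=4001):
    """Largest growth rate max_k Im[nu_k] of fluctuations about the
    trivial state; a negative value means the state is linearly stable."""
    k = np.linspace(-np.pi, np.pi, nk)
    disc = g**2 + 2 * g * np.cos(k) + (1 - delta**2) * np.cos(k)**2
    # Where disc < 0 the square root contributes +/- 2 sqrt(|disc|) i,
    # on top of the uniform -i*kappa damping.
    growth = 2 * np.sqrt(np.clip(-disc, 0.0, None))
    return np.max(growth) - kappa

# Ising limit: instability threshold at kappa_c = 2 sqrt(1 - (|g|-1)^2).
g = 0.8
kappa_c = 2 * np.sqrt(1 - (abs(g) - 1) ** 2)
assert trivial_state_growth_rate(g, 1.0, kappa_c + 0.01) < 0  # stable
assert trivial_state_growth_rate(g, 1.0, kappa_c - 0.01) > 0  # unstable
```

The sign change of the maximal growth rate across $\tilde{\kappa}=\tilde{\kappa}_c$ reproduces the closed-form boundary at $\Delta=1$.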
It is clear that for $|\Delta|<1$, the trivial state is stable at both $g \to 0$ and $g \to \infty$. The trivial state can be unstable at intermediate $g$. For positive $g$, the most unstable fluctuations have $\cos(k)=-1$, i.e. AFM fluctuations, whereas for negative $g$, FM fluctuations, $\cos(k)=1$, are the most unstable. In the Ising limit $\Delta=1$, one can write a simple expression for the phase boundary, $\tilde{\kappa} = 2\sqrt{1-(g\pm1)^2}$, indicating that for small enough $\tilde{\kappa}$ the normal state is unstable near $g=\pm1$. In addition to the trivial state one may consider the FM ansatz $\langle \hat{\sigma}_{j}^{x}\rangle=X, \langle \hat{\sigma}_{j}^{y}\rangle=Y, \langle \hat{\sigma}_{j}^{z}\rangle=Z$, or the AFM ansatz $\langle \hat{\sigma}_{j}^{x}\rangle=(-1)^j X, \langle \hat{\sigma}_{j}^{y}\rangle=(-1)^{j}Y, \langle \hat{\sigma}_{j}^{z}\rangle=Z$, and then find $X,Y,Z$ by substituting these forms into Eq. (\[eq:5\]) and solving the resulting cubic equation. One finds that for negative $g$, there is a non-trivial FM solution ($X,Y \neq 0$), which exists only when the trivial state is unstable. (When the trivial state is stable, the cubic equation has only one real root, corresponding to $X=Y=0, Z=-1$.) For $g>0$ the same statements apply to the AFM ansatz. Whenever these non-trivial solutions exist they can be shown to be stable. This analysis predicts a simple phase diagram, corroborated by direct numerical time evolution of Eq. (\[eq:5\]). There are three phases: trivial, FM and AFM. The boundaries between these are given by the surfaces $\nu_\pi=0, \nu_0=0$ with $\nu_k$ from Eq. (\[kappaeqn\]). This phase diagram is shown in Fig. \[fig:meanfield\] as a function of the parameters $g,\Delta, {\tilde{\kappa}}$. It is clear that for a fixed ${\tilde{\kappa}}$, with decreasing $\Delta$ the range of transverse field strength $g$ over which the FM and AFM phases exist decreases.
As $\Delta \to 0$, for finite $\kappa$, the trivial state always occurs regardless of the value of $g$. ![(Color online) Mean-field phase diagram for the non-equilibrium steady state of Eq. (\[equationfin\]) as a function of the dimensionless parameters $g, \Delta, {\tilde{\kappa}}$.[]{data-label="fig:meanfield"}](Mean_field_kappavs_g_delta_3d.pdf){width="3.2in"} To compare the predictions of mean-field theory and the full numerics, Fig. \[fig:comparemft\] compares their predictions for the correlation function $\langle \hat{\sigma}_{j}^{x} \hat{\sigma}_{j+1}^{x}\rangle$ as a function of transverse field strength $g$. In the trivial state, MFT predicts this correlation to vanish, while in the ordered states it predicts $+X^2$ ($-X^2$) for the FM (AFM) state respectively. As can be seen, MFT does predict the kind of order that is seen, but predicts sharp phase boundaries that are not seen in the full numerics. ![(Color online) Spin-spin correlations $\langle \sigma^x_{j} \sigma^x_{j+1} \rangle$ as a function of transverse field strength $g$. Parameters (in units of $J$): $\Delta=1, {\tilde{\kappa}}=0.5$ and MPO calculation performed for an $N=40$ site chain, with $\chi_{\text{max}}=20$. []{data-label="fig:comparemft"}](Mean_field_comparenumerics.pdf){width="3.2in"} As noted above, direct time evolution of Eq. (\[eq:5\]) corroborates the above phase diagram. However, the steady state found does depend on the initial conditions used. Specifically, considering small periodic perturbations around the trivial state and time evolving Eq. (\[eq:5\]) yields the AFM, trivial and FM states exactly as discussed above. In contrast, if evolved from a random initial configuration, domains of FM/AFM order can exist, separated by defect sites (domain walls). The dynamics of such domain walls becomes frozen within the mean-field numerics.
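Such direct time evolution can be sketched in a few lines. The chain length, initial perturbation and parameters below are illustrative, chosen deep in the trivial phase, where the chain should relax back to the empty state:

```python
import numpy as np
from scipy.integrate import solve_ivp

def bloch_rhs(t, s, g, delta, kappa, N):
    """Right-hand side of the mean-field Bloch equations, Eq. (5),
    for an open chain of N sites; s = [sx, sy, sz] concatenated."""
    sx, sy, sz = s[:N], s[N:2 * N], s[2 * N:]
    def nb(a):                      # sum over nearest neighbours
        out = np.zeros(N)
        out[1:] += a[:-1]
        out[:-1] += a[1:]
        return out
    dsx = -kappa * sx + 2 * g * sy - (1 - delta) * sz * nb(sy)
    dsy = -kappa * sy - 2 * g * sx + (1 + delta) * sz * nb(sx)
    dsz = (-2 * kappa * (sz + 1) - (1 + delta) * sy * nb(sx)
           + (1 - delta) * sx * nb(sy))
    return np.concatenate([dsx, dsy, dsz])

# Deep in the trivial phase (large |g|), a weakly perturbed chain
# relaxes back to the empty state, all spins down.
N, g, delta, kappa = 10, 3.0, 1.0, 0.5
rng = np.random.default_rng(0)
s0 = np.concatenate([0.05 * rng.standard_normal(2 * N), -np.ones(N)])
sol = solve_ivp(bloch_rhs, (0, 200), s0, args=(g, delta, kappa, N),
                rtol=1e-8, atol=1e-10)
sx, sy, sz = np.split(sol.y[:, -1], 3)
assert max(np.abs(sx).max(), np.abs(sy).max(), np.abs(sz + 1).max()) < 1e-6
```

Starting instead from an ordered or random initial configuration, with parameters inside the unstable region, the same routine relaxes to the FM/AFM solutions or frozen domain configurations described above.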
The absence of long-range order seen in the full MPO numerics can be considered as the effect of a superposition of many different configurations of domain walls. Correlations vs transverse field in the Ising limit {#sec:open-syst-corr} --------------------------------------------------- We now turn to the properties of quantum correlations at $\Delta=1$ (the Ising model). For comparison, we summarize here the ground state properties, as studied in [@Osterloh2002; @Osborne2002; @Vidal2003a]. In the Ising model entanglement is short-ranged: only nearest and next-nearest neighboring spins are entangled. The magnitude of the nearest-neighbor entanglement however shows critical scaling. At the critical point $|g|=1$, Ref. [@Osterloh2002] showed that $d \mathcal{C} / d g$ (where $\mathcal{C}$ is concurrence, another measure of entanglement) scaled as a power of the system size. Consequently, the peak value of $\mathcal{C}(g)$ actually occurs for $|g|>1$, rather than at the critical point. In the ground state, nearest-neighbor entanglement vanishes only at $g \to 0, |g| \to \infty$ [@Osterloh2002]. Quantum discord for the same model was studied in Ref. [@Maziero2010]. Discord is not restricted to nearest neighbors, and is peaked near $|g|=1$. ![(Color online) Evolution of quantum correlations with transverse field $g$ in the Ising limit. Panel (a) shows negativity $\mathcal N$, panel (b) geometric quantum discord $\mathcal D$. In addition the integrated susceptibility $S^{xx}_{\text{int}} = \sum_j \langle \sigma_{i}^x \sigma_{j}^x \rangle$ is shown in (c); at the equilibrium critical point this would show a power law divergence with system size. Note that only one line is shown in panel (a) because entanglement vanishes beyond nearest neighbors at $\Delta=1$. Parameters as in Fig. \[fig:isingorder\]. []{data-label="fig:ising"}](negising.pdf "fig:"){width="3.2in"}\ ![
[]{data-label="fig:ising"}](disising.pdf "fig:"){width="3.2in"}\ ![[]{data-label="fig:ising"}](totalising.pdf "fig:"){width="3.2in"} Figure \[fig:ising\] shows the evolution of quantum correlations (negativity [^1] and geometric quantum discord) with transverse field $g$ in the non-equilibrium steady state at $\Delta=1, {\tilde{\kappa}}=0.5$. In addition, the integrated susceptibility $S^{xx}_{\text{int}} = \sum_j \langle \sigma_{i}^x \sigma_{j}^x \rangle$ (static spin structure factor) is shown. This quantity serves both as an example of a correlation function that does not require specifically quantum correlations, and as one which would diverge (as a power law of system size) at the ground-state critical point — such a divergence reflects the appearance of quasi-long-range order in the spin-spin correlator. The asymmetry of this correlation function seen in Fig.
\[fig:ising\](c) reflects the switch from ferromagnetic to antiferromagnetic order. Despite the switch between ferro- and antiferromagnetic order with the sign of $g$, which is absent in the ground state, several features of the quantum correlations match closely the ground state behavior. Entanglement has a short range, existing now only between nearest neighbors as shown in Fig. \[fig:ising\](a), while discord extends to greater separations, Fig. \[fig:ising\](b). Negativity also peaks at a value $|g|>1$. These features exist for both signs of $g$; this is because the entanglement measures are not affected by the sublattice sign-changes induced by the duality discussed above. As discussed in Ref. [@Ma2011], two-mode squeezing is a sufficient condition for pairwise entanglement. We have confirmed that in the range of $g$ for which bipartite entanglement vanishes, the two-mode spin squeezing parameter is identically zero. In contrast to the ground state, there is however no critical behavior: the entanglement is an analytic function of $g$ with no singular behavior at $|g|=1$. Similarly, the integrated susceptibility does not diverge with increasing system size but instead saturates. This reflects exponential spatial decay of correlations, i.e. a finite correlation length, as anticipated due to the dissipation. The absence of critical behavior and the presence of only short-range correlations suggest the NESS of this 1D system does not undergo any phase transition. Such a result is to be expected, since any finite temperature leads to short-range order for a 1D system with short-ranged interactions. Although we consider dissipation due to an empty (i.e. zero-temperature) bath, the situation is non-equilibrium; as has been discussed elsewhere, see e.g. [@Torre2011; @Torre2012], this leads to a non-zero low-energy effective temperature. Also in contrast to the ground state behavior, for small $|g|$ entanglement vanishes entirely.
The nature of this disappearance, i.e. the sharp threshold seen in Fig. \[fig:ising\](a), is a general feature of entanglement in a dissipative system [@Yu2009] — finite amounts of dissipation can make a state become separable. Discord however remains non-zero between nearest neighbors at $g=0$. Correlations vs anisotropy (pump-strength) $\Delta$ {#sec:open-syst-corr-1} --------------------------------------------------- In the ground state, the range of entanglement was found to grow as one moves away from the Ising limit ($\Delta=1$), toward the isotropic XY limit ($\Delta=0$) [@Osborne2002]. We therefore next explore how the pump strength $\Delta$ affects the scale and range of correlations. Since the anisotropy parameter $\Delta$ is also the strength of the pumping, the isotropic limit corresponds to a vanishing pump, a consequence of this double rôle of $\Delta$. ![(Color online) Evolution of quantum correlations with transverse field $g$ near the isotropic limit, $\Delta=0.05$. Panels (a),(b) show negativity $\mathcal N$ and geometric quantum discord $\mathcal D$ as in Fig. \[fig:ising\]. Panel (c) shows spin-spin correlation $\langle \sigma^x_{j} \sigma^x_{j+l} \rangle$ as in Fig. \[fig:isingorder\]. Panel (d) shows spatial dependence of correlations for the anisotropic X-Y model. Parameters (in units of $J$): ${\tilde{\kappa}}=0.5$ and MPO calculation performed for an $N=40$ site chain, with $\chi_{\text{max}}=20$. []{data-label="fig:xy"}](xyneg1.pdf "fig:"){width="3.2in"}\ ![
[]{data-label="fig:xy"}](xydis.pdf "fig:"){width="3.2in"}\ ![[]{data-label="fig:xy"}](xyord.pdf "fig:"){width="3.2in"}\ ![[]{data-label="fig:xy"}](corrndeltapntzero5.pdf "fig:"){width="3.2in"} We first consider how Fig. \[fig:ising\] is modified when $\Delta<1$. Figure \[fig:xy\] shows the behavior of entanglement, discord and correlation functions for $\Delta=0.05$, close to the isotropic limit. As discussed above, the $\langle \sigma^x_{N/2} \sigma^x_{N/2+l} \rangle$ still show the odd–even symmetry, but the vanishing of all correlations at $g=0$ no longer occurs — the precession axis now lies within the $xy$ plane, and so the $x$ component of spin need not decay to zero.
When $\Delta<1$, as in the ground state, entanglement extends over a larger range, i.e. not only between nearest neighbors. In addition, the peak entanglement now occurs near $g=0$, rather than at $|g|>1$, i.e. quantum correlations attain their maximal value away from the equilibrium quantum critical point [@Latorre2004a]. In addition, the peak value of entanglement (and of all correlations) is significantly smaller than that seen at $\Delta=1$. From Fig. \[fig:xy\](c) it is clear that at large negative $g$ there is again short-range ferromagnetic order, and antiferromagnetic order at large positive $g$. At smaller $g$, just as seen at $\Delta=1$, the short-range ordering is incommensurate \[see Fig. \[fig:xy\](d)\]. However, the value of $|g|$ required to see FM/AFM order is larger for $\Delta=0.05$ than it was for $\Delta=1$, so that $g=\pm1$ now shows incommensurate order. The correlations do still respect the sublattice duality discussed earlier. In contrast to the behavior at $\Delta=1$, the correlations always have a small magnitude \[compare the scale of Fig. \[fig:xy\](c,d) with that of Fig. \[fig:isingorder\]\]. This is consistent with the observation that for $\Delta=0.05, \tilde{\kappa}=0.5$, mean-field theory would predict the trivial state independent of the value of $g$ (see Fig. \[fig:meanfield\]). ![ (Color online) Evolution of quantum correlations with anisotropy $\Delta$. Panel (a) shows negativity $\mathcal N$ and (b) geometric quantum discord $\mathcal D$. Parameters (in units of $J$): $g=-1, {\tilde{\kappa}}=0.5$ and MPO calculation performed for an $N=40$ site chain, with $\chi_{\text{max}}=20$. []{data-label="fig:vs-delta"}](negdelta.pdf "fig:"){width="3.2in"}\ ![
Parameters (in units of $J$): $g=-1, {\tilde{\kappa}}=0.5$ and MPO calculation performed for $N=40$ site chain, with $\chi_{\text{max}}=20$. []{data-label="fig:vs-delta"}](disdelta.pdf "fig:"){width="3.2in"} While $\Delta=0.05$ leads to a longer range of entanglement, the symmetry of the problem remains Ising-like for all $0<\Delta \le 1$. In the ground state, the combination of this fact and universality together imply that the range of entanglement must remain finite as long as $\Delta$ is non-zero [@Maziero2010]. The same behavior is indeed seen in the non-equilibrium steady state: For any non-zero $\Delta$, entanglement only extends over a finite range, this range grows as $\Delta$ shrinks and diverges at $\Delta \to 0$. This can be seen in Fig. \[fig:vs-delta\] which shows the evolution of entanglement and discord as a function of $\Delta$ for various different separations between sites. As anticipated above, the limit $\Delta \to 0$ is special, since $\Delta$ corresponds to pumping strength. Specifically, as $\Delta \to 0$, the range over which entanglement exists continues to grow, but the magnitude of the entanglement for any pair of sites ultimately vanishes. Thus the limit $\Delta\to 0$ is singular, with diverging range of correlations, but vanishing magnitude. The vanishing of negativity, and in fact of all correlations, at $\Delta=0$ can be easily understood from the equation of motion: at $\Delta=0$, the Hamiltonian conserves numbers of excited two-level systems, while the dissipation reduces this number, so the steady state must be the trivial empty state, which is a product state and thus uncorrelated. The origin of growing range of negativity can be found by examining the structure and scaling of the two-site density matrix. 
We first note that this density matrix has a simple structure: $$\label{eqnden} \rho_{ij}= \left( \begin{array}{cccc} p_{11} & 0 & 0 & x_{4} \\ 0 & p_{10} & x_{5} & 0 \\ 0 & x_{5}^{\ast} & p_{01} & 0 \\ x_{4}^{\ast} & 0 & 0 & p_{00} \end{array}\right).$$ This structure is due to a symmetry of the equation of motion under the transformation $\rho \to \hat{R} \rho \hat{R}$ with $\hat{R} = \prod_j \sigma^z_j$. The consequences of such a symmetry for the Hamiltonian were previously discussed [@Osborne2002]; the decay terms we consider also respect this symmetry. Consequently the steady state density matrix should satisfy $[\hat{R}, \rho]=0$. Tracing over all but two sites gives $[\sigma^z_{i}\sigma^z_{j},\rho_{ij}]=0$, which imposes the structure discussed above. A state of the form (\[eqnden\]) is entangled iff either $p_{10}p_{01} <|x_4|^{2}$ or $p_{00} p_{11} <|x_5|^{2}$. In the limit of small $\Delta$ the excited-state populations scale as $p_{11}, p_{01}, p_{10} \sim \Delta^2$, and so $p_{00} \sim 1$. The off-diagonal matrix elements scale as $|x_{4}| \sim \Delta, |x_{5}| \sim \Delta^{2}$. All of these expressions have prefactors that depend on the separation between sites. However, regardless of these prefactors, the scaling of $p_{01}, p_{10}, x_4$ with $\Delta$ implies that as $\Delta \to 0$, the first of the two criteria above will always be satisfied, i.e. for any pair of sites, there exists a $\Delta_c$ such that for $0<\Delta<\Delta_c$ they will be entangled. Furthermore, as discussed in section \[sec:asymptotic-delta-to\], this behavior can be derived analytically within a spin-wave approximation. Correlations vs decay rate {#sec:corr-vs-decay} -------------------------- ![(Color online) Evolution of quantum correlations with ${\tilde{\kappa}}$. Panel (a) shows negativity $\mathcal N$, and panel (b) shows geometric quantum discord $\mathcal D$, for both Ising limit and small anisotropy limit.
Parameters (in units of $J$): $g=-1$ and MPO calculation performed for $N=40$ site chain, with $\chi_{\text{max}}=20$. []{data-label="fig:vs-decay"}](negdecay.pdf "fig:"){width="3.2in"}\ ![](disdecay.pdf "fig:"){width="3.2in"} Having explored the dependence on the parameters $\Delta, g$, we conclude our discussion of numerical results by presenting the dependence of quantum correlations on the decay rate ${\tilde{\kappa}}=\kappa/J$. Figure \[fig:vs-decay\] shows the evolution with decay rate at $g=-1$, for the two values of $\Delta$ discussed in detail above. Whereas the discord decreases monotonically with decay rate, the behavior of the negativity depends on anisotropy. In particular, in the Ising limit, there is a non-monotonic dependence, exhibiting a separable but non-classical state for sufficiently small ${\tilde{\kappa}}$. The appearance of non-zero entanglement with increasing ${\tilde{\kappa}}$ corresponds to the condition $p_{01} p_{10}=|x_4|^2$: on increasing ${\tilde{\kappa}}$, the probabilities $p_{01}\equiv p_{10}$ decrease, while $|x_4|$ varies little at small ${\tilde{\kappa}}$. Non-monotonic dependence of entanglement on decay rate has also been seen in other contexts [@Joshi2013]. Note that the decay terms remain important even as ${\tilde{\kappa}}\to 0$. In this limit the steady state is only attained at long times; the state that is finally attained is still determined by the open-system dynamics.
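As an illustrative cross-check of the X-state entanglement criteria quoted above, the sketch below builds a density matrix of the form (\[eqnden\]) with hypothetical matrix elements obeying the small-$\Delta$ scalings $p_{11},p_{10},p_{01}\sim\Delta^2$, $|x_4|\sim\Delta$, $|x_5|\sim\Delta^2$ (the order-one prefactors are invented for illustration) and evaluates the negativity via the partial transpose.

```python
import numpy as np

def x_state(p11, p10, p01, p00, x4, x5):
    """Two-site density matrix with the X structure of Eq. (eqnden)."""
    return np.array([[p11, 0, 0, x4],
                     [0, p10, x5, 0],
                     [0, np.conj(x5), p01, 0],
                     [np.conj(x4), 0, 0, p00]], dtype=complex)

def negativity(rho):
    """Sum of |negative eigenvalues| of the partial transpose."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return -sum(e for e in np.linalg.eigvalsh(pt) if e < 0)

# Hypothetical small-Delta values: p ~ Delta^2, |x4| ~ Delta, |x5| ~ Delta^2.
Delta = 0.05
p11 = p10 = p01 = Delta**2
x4, x5 = 0.8 * Delta, Delta**2       # prefactor 0.8 keeps rho positive
p00 = 1 - p11 - p10 - p01
rho = x_state(p11, p10, p01, p00, x4, x5)

assert min(np.linalg.eigvalsh(rho)) > -1e-12   # a valid state
assert p10 * p01 < abs(x4)**2                  # first criterion holds...
assert negativity(rho) > 0                     # ...so the state is entangled
```

With $p_{10}=p_{01}$, the single negative eigenvalue of the partial transpose is $p_{10}-|x_4|$, so the negativity here equals $|x_4|-p_{10}$, in line with the claim that the first criterion always wins at small $\Delta$.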
Asymptotic $\Delta \to 0$ behavior and spin-wave approximation {#sec:asymptotic-delta-to} ============================================================== Spin-wave calculation of negativity {#sec:spin-wave-calc} ----------------------------------- As noted above, for $\Delta=0$, the NESS of our model corresponds to an empty state. This suggests that for small $\Delta$ an approximation based on a low density of excited two-level systems can be used: a bosonic spin-wave approach [@mattis2006theory]. This corresponds to reverting from two-level systems (hard-core bosons) to bosonic fields $\sigma_{j}^{-} \rightarrow \hat{b}_{j}$. Equation (\[eqnham2\]) thus becomes: $$\begin{gathered} \label{eq:heff_boson} H_\text{eff} =- \sum_{j} \left[ g(2\hat{b}_{j}^{\dagger}\hat{b}_{j}-1) + \left( \hat{b}_{j}^{\dagger}\hat{b}_{j+1}+\hat{b}_{j+1}^{\dagger}\hat{b}_{j} \right) \right.\\\left. {}+\Delta (\hat{b}_{j}^{\dagger}\hat{b}_{j+1}^{\dagger}+\hat{b}_{j+1}\hat{b}_{j}) \right].\end{gathered}$$ This approximation is valid as long as double occupancy of a site can be ignored. Fourier transforming both this and the loss term, the master equation can be written as: $$\begin{aligned} \label{harcorebosmas} \frac{d\rho}{dt}&=-i[ \sum_{k} h_k, \rho ] + {\tilde{\kappa}}\! \sum_{k} [2 \hat{b}_{k}{\rho}\hat{b}_{k}^{\dagger}-\hat{b}_{k}^{\dagger}\hat{b}_{k}\rho- \rho\hat{b}_{k}^{\dagger}\hat{b}_{k}] \\ h_{k}&=- \left(\begin{array}{cc} \hat{b}^{\dagger}_{k}& \hat{b}_{-k} \end{array}\right) \left( \begin{array}{cc} g + \cos(k) & \Delta \cos(k) \\ \Delta \cos(k) & g + \cos(k) \end{array} \right) \left( \begin{array}{c} \hat{b}_k\\ \hat{b}^{\dagger}_{-k} \end{array} \right) \nonumber,\end{aligned}$$ so that each pair of modes $k, -k$ forms a closed subsystem. To find steady state correlations, we replace the density matrix equation of motion, Eq. (\[harcorebosmas\]), by equivalent Heisenberg-Langevin equations [@scully97].
The Heisenberg-Langevin equations can be derived by writing the Heisenberg equations for the system operators coupled to a Markovian bath. After eliminating the dynamics of the bath operators, one finds equations for the system operators of the form: $$\label{sec:spin-wave-calc-2} \frac{d}{dt} \hat{b}_{k}=i[h_{k}+h_{-k},\hat{b}_{k}]-{\tilde{\kappa}}\hat{b}_{k}+\sqrt{2 {\tilde{\kappa}}} \hat{b}_{k}^{in}(t).$$ The Markovian bath has two effects: it causes decay of the system operator $\hat{b}_{k}$ at the rate ${\tilde{\kappa}}$, and it introduces an “input noise” term $\hat{b}_{k}^{in}(t)$. Since we consider decay into a zero-temperature (i.e. empty) bath, there is only vacuum quantum noise: the only non-zero noise correlation function is $\langle \hat{b}_{k}^{in}(t) \hat{b}_{k^{\prime}}^{\dagger in}(t^{\prime}) \rangle=\delta_{k,k^{\prime}}\delta(t-t^{\prime})$. Because of the anomalous (pumping) terms in Eq. (\[eq:heff\_boson\]), the equation for $\hat{b}_k$ couples to that for $\hat{b}_{-k}^{\dagger}$ and vice versa. The coupled equations for operators $\hat{b}_{k}$ and $\hat{b}_{-k}^{\dagger}$ can be written in a matrix form, $$\begin{aligned} \label{eqndif} \dot{\hat{f}}(t) &=\mathcal M \hat{f}(t)+ \hat{m}(t), \end{aligned}$$ in which $\hat{f}(t)$ is the column vector comprising operators $\hat{b}_{k}(t)$ and $\hat{b}_{-k}^{\dagger}(t)$, and $\hat{m}(t)$ is the column vector containing the noise operators: $$\begin{aligned} \hat{f}(t) &=\! \left(\! \begin{array}{cc} \hat{b}_{k}(t), & \hat{b}_{-k}^{\dagger}(t) \end{array}\!\!\right)^T \nonumber \\ \hat{m}(t) &=\! \left(\! \begin{array}{cc} \sqrt{2{\tilde{\kappa}}}\hat{b}_{k}^{in}(t), & \sqrt{2{\tilde{\kappa}}}\hat{b}_{-k}^{\dagger in}(t) \end{array}\!\!\right)^T \nonumber; \end{aligned}$$ and the matrix $\mathcal M$ is given by $$\begin{aligned} \mathcal{M}= \left(\!
\begin{array}{cc} -{\tilde{\kappa}}+ 2i(g+\cos(k)) & 2i\Delta\cos(k) \\ -2i\Delta\cos(k) & -{\tilde{\kappa}}-2i(g+\cos(k)) \end{array}\!\!\right).\nonumber \end{aligned}$$ The solution of Eq. (\[eqndif\]) is $ \hat{f}(t)=e^{\mathcal M t}\hat{f}(0)+\int_{0}^{t}e^{\mathcal M (t-t')}\hat{m}(t')dt' $. Since the real parts of the eigenvalues of $\mathcal M$ are negative, the first of these terms vanishes in the long-time limit $t \to \infty$. In this limit one then finds: $$\begin{aligned} \frac{\hat{b}_{k}(t)}{\sqrt{2 {\tilde{\kappa}}}} &=\! \int_{0}^{t} \!\! dt^\prime \left[ \mathcal G_{1} (t-t^\prime)\hat{b}_{k}^{in}(t^\prime) +\mathcal G_{2}(t-t^\prime) \hat{b}_{-k}^{\dagger in}(t^\prime) \right] \nonumber\\ \frac{\hat{b}_{-k}^{\dagger}(t)}{\sqrt{2 {\tilde{\kappa}}}} &=\! \int_{0}^{t} \!\! dt^\prime \left[ \mathcal G_{1}^{\ast} (t-t^\prime)\hat{b}_{-k}^{\dagger in}(t^\prime) + \mathcal G_{2}^{\ast}(t-t^\prime) \hat{b}_{k}^{in}(t^\prime) \right] \label{eq:3}\end{aligned}$$ where the propagators $\mathcal G_{1,2}(\tau)$ are matrix elements of $\exp(\mathcal M \tau)$. By introducing the dispersions $ \epsilon_{k}=2(g+\cos(k)), \eta_{k}=2\Delta\cos(k), \xi_k = \sqrt{\epsilon_k^2 - \eta_k^2},$ the propagators can be written as: $$\begin{aligned} \mathcal G_{1}(\tau)&=e^{-{\tilde{\kappa}}\tau} \left[ \cos(\tau \xi_k)+i \epsilon_{k} \frac{\sin \left(\tau \xi_k\right)}{\xi_k} \right]\\ \mathcal G_{2}(\tau)&= i \eta_{k} e^{-{\tilde{\kappa}}\tau} \sin(\tau \xi_k)/\xi_k.\end{aligned}$$ To find the quantum correlations of the state, we first note that since the problem involves non-interacting bosons, the steady state is Gaussian, i.e. it can be fully characterized by the covariance matrix $V_{j,k}$ as given below. Introducing $\hat{x}_j=\hat{b}^{}_j + \hat{b}^\dagger_j, \hat{p}_j =(\hat{b}^{}_j - \hat{b}^\dagger_j)/i$ we have $$\label{eqndcov} V_{j,k}= \!\left(\!
\begin{array}{cc} {\bf A}_{j} & {\bf C}_{jk}\\ {\bf C}_{jk}^{T} & {\bf A}_{k} \end{array}\!\!\right)\!, \quad {\bf C}_{jk}=\! \left(\! \begin{array}{cc} \langle x_j x_k \rangle_s & \langle x_j p_k \rangle_s \\ \langle x_k p_j \rangle_s & \langle p_j p_k \rangle_s \end{array}\!\!\right),$$ and ${\bf A}_j = {\bf C}_{jj}$ where $\langle xp\rangle_s = \langle xp +px\rangle/2$. To find these correlators, it is sufficient to find $\langle \hat{b}_{j}^{\dagger}\hat{b}_{j+l} \rangle$ and $\langle \hat{b}_{j}\hat{b}_{j+l} \rangle$. In real space the correlator $\langle \hat{b}_{j}^{\dagger}\hat{b}_{j+l} \rangle$ can be expressed as $$\label{realspce} \langle \hat{b}_{j}^{\dagger} \hat{b}^{}_{j+l} \rangle =\frac{1}{N} \sum_{k,k'} \langle \hat{b}_{k}^{\dagger} \hat{b}^{}_{k'} \rangle e^{i(k'-k)j}e^{ik' l} .$$ Using Eq. (\[eq:3\]) one finds that for $N \to \infty$: $$\label{ctrint} \langle \hat{b}_{j}^{\dagger} \hat{b}_{j+l} \rangle = \frac{1}{4\pi}\int_{-\pi}^{\pi} \frac{\eta_k^2}{\xi_k^2+{\tilde{\kappa}}^{2}}\, e^{i k l}\, dk$$ and a similar expression for $ \langle \hat{b}_{j}^{} \hat{b}_{j+l} \rangle$. By substituting $e^{ik} \to z$, the integral becomes a contour integral around the unit circle $|z|=1$, so its value depends on the residues at those poles $z=Z$ with $|Z|<1$. The four poles come in complex conjugate pairs and can be found in closed form $Z=\zeta\pm \sqrt{\zeta^2-1}$ where $\zeta=\left[ -g \pm \sqrt{g^2 \Delta^2 - {\tilde{\kappa}}^2(1-\Delta^2)/4} \right]/(1-\Delta^2)$. Two of these poles, which we denote as $Z_0, Z_0^\ast$, lie within the unit circle, and in terms of these one finds: $$\begin{aligned} \langle \hat{b}_{j}^{\dagger} \hat{b}_{j+l} \rangle &= \Delta^{2} \left[\alpha (Z_0)^{l-1}+\alpha^{*} (Z_0^{*})^{l-1} \right] \\ \langle \hat{b}_{j} \hat{b}_{j+l} \rangle &=\Delta \beta (Z_0^{*})^{l-1}\end{aligned}$$ where $\alpha,\beta$ are complex functions of ${\tilde{\kappa}},\Delta,g$.
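The pole structure can be checked numerically. The sketch below (with illustrative parameters, not values taken from any figure) solves $\xi_k^2+{\tilde{\kappa}}^2=0$ in $z=e^{ik}$ by writing $z+z^{-1}=2\zeta$ with $\zeta$ a root of the resulting quadratic, verifies that exactly two of the four roots lie inside the unit circle, and confirms that the integral (\[ctrint\]) decays with separation.

```python
import numpy as np

g, Delta, kappa = -1.0, 0.05, 0.5     # illustrative parameters (units of J)

def correlator(l, nk=4096):
    """<b_j^dag b_{j+l}> from the k-integral (ctrint), by Riemann sum."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    eps = 2 * (g + np.cos(k))
    eta = 2 * Delta * np.cos(k)
    integrand = eta**2 / (eps**2 - eta**2 + kappa**2) * np.exp(1j * k * l)
    return integrand.sum().real * (2 * np.pi / nk) / (4 * np.pi)

def poles():
    """The four roots z of xi_k^2 + kappa^2 = 0 under z = e^{ik}."""
    disc = complex(g**2 * Delta**2 - kappa**2 * (1 - Delta**2) / 4) ** 0.5
    zs = []
    for s1 in (1, -1):
        zeta = (-g + s1 * disc) / (1 - Delta**2)   # zeta = cos(k) at the pole
        for s2 in (1, -1):
            zs.append(zeta + s2 * (zeta**2 - 1) ** 0.5)
    return zs

for Z in poles():                      # each root kills the denominator
    c = (Z + 1 / Z) / 2
    assert abs(4 * (g + c)**2 - 4 * Delta**2 * c**2 + kappa**2) < 1e-9

inner = [Z for Z in poles() if abs(Z) < 1]
assert len(inner) == 2                 # Z_0 and its complex conjugate
assert abs(correlator(10)) < abs(correlator(0))   # correlations decay
```

The closed subsystem of modes $k,-k$ guarantees the denominator depends on $k$ only through $\cos(k)$, which is why the quartic in $z$ reduces to a quadratic in $\zeta$.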
We have factored out the asymptotic scaling with $\Delta$ at $\Delta \to 0$. Since $|Z_0|<1$, all correlations decay exponentially with separation $l$. The definition of negativity given earlier, Eq. (\[eq:4\]), is specific to qubits, i.e. two-level systems. For a Gaussian state an alternate definition of negativity can be found in terms of the symplectic eigenvalue $\tilde{\nu}_{-}^{2}=(\tau-\sqrt{\tau^{2}-4 {\rm Det}[V_{j,j+l}]})/2$, where $\tau={\rm Det}[A_{j}]+{\rm Det}[A_{j+l}]-2{\rm Det}[C_{j,j+l}]$. The state is separable if $\tilde{\nu}_{-} > 1$, and so negativity for such states may be defined as $\mathcal N=\text{max}( 0, 1-\tilde{\nu}_{-})$. Using the asymptotic scaling of the elements of the covariance matrix with $\Delta$, we find that in the $\Delta \to 0$ limit $$\label{eqn:nega} \tilde{\nu}_{-} \simeq\sqrt{1-4|\langle \hat{b}_{j} \hat{b}_{j+l} \rangle|^2}, \quad \mathcal N \simeq 2|\langle \hat{b}_{j} \hat{b}_{j+l} \rangle|.$$ Within this limit, it is thus clear that $\mathcal N > 0$ for all pairs of sites, but $\mathcal N \propto \Delta$ and so $\mathcal N$ vanishes as $\Delta \to 0$, reproducing the singular behavior found numerically in the previous section. Comparing spin-wave approximation to numerics {#sec:validity-spin-wave} --------------------------------------------- The spin-wave theory relies on neglecting effects of possible double occupation of a given site. While the probability of such an event is small for $\Delta \to 0$, it is not a priori clear whether its effects are negligible, since the pair-creation term creates excitations on adjacent sites, so hopping can easily create a doubly occupied site within the bosonic approximation. For this reason, we compare the results of the MPO numerics and the spin-wave theory in the limit $\Delta \to 0$. We focus on the correlation function $\langle\sigma_{j}^{-}\sigma_{j+l}^{-} \rangle$, or its equivalent bosonic form, which according to Eq.
(\[eqn:nega\]) determines the asymptotic negativity as $\Delta \to 0$. Both MPO and spin-wave results show that this correlation function decays exponentially with separation $l$ (neglecting edge effects). Consequently this correlation function can be characterized by its value for nearest neighbors $l=1$ (NB the $l=0$ case vanishes by definition), and by its correlation length $\xi_c$, defined as $|\langle\sigma_{j}^{-}\sigma_{j+l}^{-} \rangle| \propto e^{-l/\xi_c}$. In the spin-wave theory $\xi_c=-1/\ln|Z_0|$. These two characteristic quantities are shown in Fig. \[fig:compare-mps-sw\], focusing on the limiting behavior at $\Delta \to 0$. ![ (Color online) Panel (a) correlation length $\xi_{c}$ at $\Delta = 0.005$ vs decay rate ${\tilde{\kappa}}$. Panel (b) Nearest neighbor correlations $|\langle\sigma_{j}^{-}\sigma_{j+1}^{-} \rangle|$ vs anisotropy $\Delta$ for several values of ${\tilde{\kappa}}$. In both panels MPO numerics (points) are compared to spin-wave theory (lines). Parameters (in units of $J$): $g=-1$, MPO calculation is performed for $N = 40$ site chain with $\chi_{\text{max}}=20$. []{data-label="fig:compare-mps-sw"}](corrndlength.pdf "fig:"){width="3.2in"}\ ![](cmprecorr.pdf "fig:"){width="3.2in"}\ The correlation length comparison in Fig. \[fig:compare-mps-sw\](a) shows that the spin-wave theory accurately reproduces the results of the numerics, with both showing a diverging correlation length ($|Z_0| \to 1$) in the limit ${\tilde{\kappa}}\to 0$.
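The divergence of $\xi_c$ as ${\tilde{\kappa}}\to 0$ can be reproduced from the poles alone. The sketch below solves $\xi_k^2+{\tilde{\kappa}}^2=0$ for $z=e^{ik}$ and evaluates $\xi_c=-1/\ln|Z_0|$ over a sequence of decay rates ($g=-1$ and $\Delta=0.005$ as in the comparison; the specific list of decay rates is ours, for illustration).

```python
import math

g, Delta = -1.0, 0.005   # parameters as in the spin-wave comparison

def xi_c(kappa):
    """Correlation length -1/ln|Z_0| from the pole inside the unit circle."""
    disc = complex(g * g * Delta * Delta
                   - kappa * kappa * (1 - Delta * Delta) / 4) ** 0.5
    mods = []
    for s1 in (1, -1):
        zeta = (-g + s1 * disc) / (1 - Delta * Delta)
        for s2 in (1, -1):
            Z = zeta + s2 * (zeta * zeta - 1) ** 0.5
            if abs(Z) < 1 - 1e-12:   # keep only poles inside |z| = 1
                mods.append(abs(Z))
    return -1.0 / math.log(max(mods))

# Decay rates from large to small: the correlation length grows monotonically,
# consistent with |Z_0| -> 1 as the decay rate is reduced.
lengths = [xi_c(k) for k in (2.0, 1.0, 0.5, 0.2, 0.1)]
assert all(a < b for a, b in zip(lengths, lengths[1:]))
```

Note that the linearized theory is only stable while ${\tilde{\kappa}}$ exceeds the pairing strength at the soft mode (here ${\tilde{\kappa}}>2\Delta$ at $g=-1$), so the decay rates above stay in that regime.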
In contrast, the magnitude of correlations (i.e. the prefactors of the exponential decay) does not match well except at ${\tilde{\kappa}}\gg 1$. This can be explained as follows: at small ${\tilde{\kappa}}$, excitations created on adjacent sites can easily hop to create doubly occupied sites, rendering the bosonic approximation inaccurate. For ${\tilde{\kappa}}\gg 1$, excitations on adjacent sites are lost before hopping can create doubly excited sites. Conclusions {#sec:conclusions} =========== In the present work we have studied the non-equilibrium steady state of a parametrically driven 1-D coupled cavity array. Making use of an MPO representation to determine the open-system evolution, we obtain the non-equilibrium steady state of a dissipative transverse field Ising model. The steady state can be related to the ground state configuration for transverse field $g<0$, and to the maximum energy configuration for $g>0$. Consequently, for either sign of $g$, many features of the quantum correlations behave similarly to those in the ground state Ising model. The most significant difference is that dissipation destroys the phase transition, and so no critical behavior occurs at $|g|=1$, with correlation lengths remaining finite. We have also compared the results of the MPO numerics with the predictions of the mean-field theory. Mean-field theory erroneously predicts long-range ordered phases, but the nature of the ordering predicted is reflected by the MPO numerics. We have identified a singular limit, that of weak driving, where the range of quantum correlations diverges, but the magnitude of the correlations vanishes. This limiting behavior can be recovered analytically from a spin-wave theory, which accurately recovers the correlation length in this limit. CJ and JK acknowledge support from EPSRC programme “TOPNES” (EP/I031014/1), and EPSRC (EP/G004714/2). FN acknowledges support from the EPSRC grant “A Pragmatic Approach to Adiabatic Quantum Computation” (EP/K02163X/1).
JK acknowledges helpful suggestions from R. Fazio and hospitality from MPI-PKS, Dresden. We acknowledge discussions with A. G. Green, C. A. Hooley and S. H. Simon.

[^1]: NB, the measure that we use, negativity, and that used in Refs. [@Osterloh2002; @Osborne2002], concurrence, are not identical. We have checked that plotting concurrence instead of negativity does not affect any of the conclusions discussed here.
--- abstract: 'In an earlier paper, we described a decentralized, distributed, deterministic, asynchronous Dykstra’s algorithm that allows for time-varying graphs. In this paper, we show how to incorporate subdifferentiable functions into the framework using a step similar to the bundle method. We point out that our algorithm also allows for partial data communications. We discuss a standard step for treating the composition of a convex and a linear function.' author: - 'C.H. Jeffrey Pang' bibliography: - '../refs.bib' title: 'Subdifferentiable functions and partial data communication in a distributed deterministic asynchronous Dykstra’s algorithm' --- [^1] Introduction ============ Consider a connected graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ where a closed convex function $f_{i}:\mathbb{R}^{m}\to\mathbb{R}\cup\{\infty\}$ is defined on each vertex $i\in\mathcal{V}$. A problem of interest, arising when the data are too large to be stored in a single location, is to minimize the sum $$\begin{array}{c} \underset{x\in\mathbb{R}^{m}}{\min}\underset{i\in\mathcal{V}}{\sum}f_{i}(x)\end{array}\label{eq:distrib-primal}$$ in a distributed manner so that the communications of data occur only along the edges of the graph. In our earlier paper [@Pang_Dist_Dyk], we considered the regularized problem $$\begin{array}{c} \underset{x\in\mathbb{R}^{m}}{\min}\underset{i\in\mathcal{V}}{\sum}[f_{i}(x)+\frac{1}{2}\|x-[\bar{\mathbf{{x}}}]_{i}\|^{2}]\end{array}\label{eq:distrib-dyk-primal}$$ instead, where $\bar{\mathbf{{x}}}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$. \[subsec:Distrib-algs\]Distributed optimization algorithms ---------------------------------------------------------- Since this paper builds on [@Pang_Dist_Dyk], we shall give only a brief introduction. Our algorithm is for the case when the edges are undirected, but we first remark on some notable papers treating the directed case.
A notable paper based on the directed case using the subgradient algorithm is [@EXTRA_Shi_Ling_Wu_Yin], and surveys are [@Nedich_survey] and [@Nedich_talk_2017]. The papers [@Nedich_Olshevsky] and [@Nedich_Olshevsky_Shi] further touch on the case of time-varying graphs. The algorithm in [@Notarstefano_gang_Newton_2017] uses a Newton-Raphson method to design a distributed algorithm for directed graphs. Naturally, the communication requirements for directed graphs need to be more stringent than the requirements for undirected graphs. From here on, we discuss only algorithms for undirected graphs. A product-space formulation of the ADMM leads to a distributed algorithm [@Boyd_Eckstein_ADMM_review Chapter 7]. Such an algorithm is decentralized and distributed, but is not asynchronous and so can get slowed down by slow vertices. An approach based on [@Eckstein_Combettes_MAPR] allows for asynchronous operation, but is not decentralized. Moving beyond deterministic algorithms, distributed decentralized asynchronous algorithms have been proposed, but many of them involve some sort of randomization. For example, the works [@Iutzeler_Bianchi_Ciblat_Hachem_1st_paper_dist; @Bianchi_Hachem_Iutzeler_2nd_paper_dist] and the generalization [@AROCK_Peng_Xu_Yan_Yin] are based on monotone operator theory (see for example the textbook [@BauschkeCombettes11]), and require the computations in the nodes to follow specific probability distributions. We now look at asynchronous distributed algorithms with deterministic convergence (rather than probabilistic convergence). Other than subgradient methods, we mention that the paper [@Aytekin_F_Johansson_2016] gives an algorithm for strongly convex problems that is primal in nature, and so cannot handle constraint sets as is. The method in [@Aybat_Hamedani_2016] may arguably be considered to have these properties.
\[subsec:Dyk-method\]Dykstra’s algorithm and the corresponding distributed algorithm ------------------------------------------------------------------------------------ Again, we shall be brief here, and defer to [@Pang_Dist_Dyk] for a more detailed introduction. Dykstra’s algorithm was first studied in [@Dykstra83] for projecting a point onto the intersection of a number of closed convex sets. The convergence proof without the existence of dual solutions was established in [@BD86] and rewritten in terms of duality in [@Gaffke_Mathar], and is sometimes called the Boyle-Dykstra theorem. Dykstra’s algorithm was independently noted in [@Han88] to be block coordinate minimization on the dual problem, but their proof depends on the existence of a dual solution. (For an example of a problem without dual solutions, see [@Han88 page 9], where two circles in $\mathbb{R}^{2}$ intersect at only one point.) We pointed out in [@Pang_Dyk_spl] that the Boyle-Dykstra theorem can be extended to the case of minimization problems of the form $\min_{x}\frac{1}{2}\|x-\bar{x}\|^{2}+\sum_{i=1}^{k}f_{i}(x)$. For more on the background on Dykstra’s algorithm, we refer to [@BauschkeCombettes11; @BB96_survey; @Deustch01; @EsRa11]. Dykstra’s algorithm was extended to a distributed algorithm in [@Borkar_distrib_dyk], and they highlight the works [@Aybat_Hamedani_2016; @LeeNedich2013; @Nedic_Ram_Veeravalli_2010; @Ozdaglar_Nedich_Parrilo] on distributed optimization. The work in [@Borkar_distrib_dyk] is vastly different from how Dykstra’s algorithm is studied in [@BD86] and [@Gaffke_Mathar]. It turns out that [@Notars_asyn_distrib_2015] also makes use of the same Dykstra’s algorithm setting, but they solve it with a randomized dual proximal gradient method. The differences between their setup and ours are detailed in [@Pang_Dist_Dyk].
In [@Pang_Dist_Dyk], we rewrote the regularized problem in a form similar to (\[eq:Dyk-primal\]) (see Remark \[rem:partial-comms-change\] for an explanation of the differences) and applied an extended Dykstra’s algorithm. We list the features of the distributed Dykstra’s algorithm: 1. distributed (with communications occurring only between adjacent agents $i$ and $j$ connected by an edge), 2. decentralized (i.e., there is no central node coordinating calculations), 3. asynchronous (contrast this to synchronous algorithms, where the faster agents would wait for slower agents before they can perform their next calculations), 4. able to allow for time-varying graphs in the sense of [@Nedich_Olshevsky; @Nedich_Olshevsky_Shi] (to be robust against failures of communication between two agents), 5. deterministic (i.e., not using any probabilistic methods, like stochastic gradient methods), 6. able to allow for constrained optimization, where the feasible region is the intersection of several sets (this largely rules out primal-only methods), 7. able to incorporate proximable functions naturally. Since Dykstra’s algorithm is also dual block coordinate ascent, the following property is obtained: 8. choosing a larger number of dual variables to be maximized over gives a greedier increase of the dual objective value. Also, the distributed Dykstra’s algorithm does not require the existence of a dual minimizer provided that the functions $f_{i}(\cdot)$ are proximable. Moreover, if some of the $f_{i}(\cdot)$ were defined to be the indicator functions of closed convex sets, then a greedy step for dual ascent [@Pang_DBAP] is possible. For the rest of this paper, we shall just refer to the algorithm in [@Pang_Dist_Dyk] as the distributed Dykstra’s algorithm. Main contribution of this paper ------------------------------- This paper builds on [@Pang_Dist_Dyk]. We now describe the main contribution of this paper without assuming any prior knowledge of [@Pang_Dist_Dyk].
For each node $i\in\mathcal{V}$, recall the function $f_{i}:\mathbb{R}^{m}\to\mathbb{R}$ in (\[eq:distrib-primal\]). Let $\mathbf{f}_{i}:[\mathbb{R}^{m}]^{|\mathcal{V}|}\to\bar{\mathbb{R}}$ be defined by $\mathbf{f}_{i}(\mathbf{{x}})=f_{i}(x_{i})$ (i.e., $\mathbf{f}_{i}$ depends only on the $i$-th variable, where $i\in\mathcal{V}$). Recall the graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$. Let the set $\bar{\mathcal{E}}$ be defined to be $$\bar{\mathcal{E}}:=\mathcal{E}\times\{1,\dots,m\}.$$ For $\mathbf{{x}}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$, we write $\mathbf{{x}}_{i}\in\mathbb{R}^{m}$ for its $i$-th component, and we let $[\mathbf{{x}}_{i}]_{\bar{k}}\in\mathbb{R}$ be the $\bar{k}$-th component of $\mathbf{{x}}_{i}\in\mathbb{R}^{m}$. For each $((i,j),\bar{k})\in\bar{\mathcal{E}}$, the hyperplane $H_{((i,j),\bar{k})}\subset[\mathbb{R}^{m}]^{|\mathcal{V}|}$ is defined to be $$H_{((i,j),\bar{k})}:=\{\mathbf{{x}}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}:[\mathbf{{x}}_{i}]_{\bar{k}}=[\mathbf{{x}}_{j}]_{\bar{k}}\}.\label{eq:def-halfspaces}$$ We can see that the regularized problem (\[eq:distrib-dyk-primal\]) is equivalent to $$\min_{\mathbf{{x}}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}}\frac{1}{2}\|\mathbf{{x}}-\bar{\mathbf{{x}}}\|^{2}+\sum_{((i,j),\bar{k})\in\bar{\mathcal{E}}}\underbrace{\delta_{H_{((i,j),\bar{k})}}(\mathbf{{x}})}_{\mathbf{f}_{((i,j),\bar{k})}(\mathbf{{x}})}+\sum_{i\in\mathcal{V}}\mathbf{{f}}_{i}(\mathbf{{x}}).\label{eq:Dyk-primal}$$ We let the functions $\mathbf{f}_{\alpha}:[\mathbb{R}^{m}]^{|\mathcal{V}|}\to\bar{\mathbb{R}}$ be as defined in (\[eq:Dyk-primal\]) for all $\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}$.
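To illustrate the role of the hyperplanes $H_{((i,j),\bar{k})}$: the nearest-point projection onto such a hyperplane simply averages the two tied scalar coordinates and leaves everything else unchanged. A minimal sketch (storing $\mathbf{x}$ as a $|\mathcal{V}|\times m$ array; the helper name is ours):

```python
import numpy as np

def project_H(x, i, j, kbar):
    """Projection onto H_{((i,j),kbar)}: average the two tied coordinates."""
    y = x.copy()
    avg = 0.5 * (x[i, kbar] + x[j, kbar])
    y[i, kbar] = avg
    y[j, kbar] = avg
    return y

x = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])            # |V| = 3 agents, m = 2
y = project_H(x, 0, 1, 1)
assert y[0, 1] == y[1, 1] == 3.0      # constrained coordinates now agree
assert np.all(y[2] == x[2])           # all other coordinates untouched
```

This locality is what makes the constraints compatible with the distributed setting: projecting onto $H_{((i,j),\bar{k})}$ requires communication only between the adjacent agents $i$ and $j$.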
The (Fenchel) dual of  is $$\max_{\mathbf{{z}}_{\alpha}\in[\mathbb{R}^{m}]^{|\mathcal{V}|},\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}F(\{\mathbf{{z}}_{\alpha}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}),\label{eq:dual-fn}$$ where $$\begin{array}{c} F(\{\mathbf{{z}}_{\alpha}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}):=-\frac{1}{2}\bigg\|\bar{\mathbf{{x}}}-\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{{z}}_{\alpha}\bigg\|^{2}+\frac{1}{2}\|\bar{\mathbf{{x}}}\|^{2}-\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{f}_{\alpha}^{*}(\mathbf{{z}}_{\alpha}).\end{array}\label{eq:Dykstra-dual-defn}$$ To give further insight into the problems -, we note that if $\mathbf{{f}}_{i}\equiv0$, then the problem reduces to the averaged consensus algorithm in [@Boyd_distrib_averaging; @Distrib_averaging_Dimakis_Kar_Moura_Rabbat_Scaglione], where the primal variable $\mathbf{{x}}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$ converges to the vector $\mathbf{{x}}^{*}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$ (where $\mathbf{{x}}^{*}$ is defined so that each $\mathbf{{x}}_{i}^{*}\in\mathbb{R}^{m}$ is the average $\frac{1}{|\mathcal{V}|}\sum_{i\in\mathcal{V}}\bar{\mathbf{{x}}}_{i}$) at a linear rate dependent on the properties of the graph $(\mathcal{V},\mathcal{E})$. In [@Pang_Dist_Dyk], we applied the techniques of [@Gaffke_Mathar; @Hundal-Deutsch-97] to prove that a block coordinate optimization applied to  leads the objective value $F(\cdot)$ in  to increase to its maximal value, which is also the objective value of , since strong duality can also be proven. Further work in [@Pang_Dist_Dyk] shows that the algorithm has properties (1)-(5). We now note that block coordinate optimization applied to  can be easily carried out only if the functions $\mathbf{f}_{i}(\cdot)$ are proximable.
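To make the averaged-consensus special case concrete: when every $\mathbf{f}_{i}\equiv0$, the primal problem is just the projection of $\bar{\mathbf{x}}$ onto the consensus set, and every block of the minimizer equals the block average. A small sketch (the helper name is ours):

```python
def consensus_solution(xbar):
    """If every f_i = 0, the minimizer of (1/2)||x - xbar||^2 subject to
    x_1 = ... = x_|V| sets every block to the average of the blocks of
    xbar (the projection onto the diagonal set)."""
    n, m = len(xbar), len(xbar[0])
    avg = [sum(block[k] for block in xbar) / n for k in range(m)]
    return [avg[:] for _ in range(n)]
```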
For illustration, we suppose that we only minimize  with respect to $\mathbf{{z}}_{i^{*}}$ for some $i^{*}\in\mathcal{V}$, but leave all other $\{\mathbf{{z}}_{\alpha}\}_{\alpha\in[\bar{\mathcal{E}}\cup\mathcal{V}]\backslash\{i^{*}\}}$ fixed. We showed in [@Pang_Dist_Dyk] that $\mathbf{{z}}_{i^{*}}$ is sparse, with $[\mathbf{{z}}_{i^{*}}]_{j}=0$ whenever $i^{*}\neq j$ (see Proposition \[prop:sparsity\]), so employing techniques in [@Pang_Dist_Dyk] (see Proposition \[prop:subproblems\]) shows that the primal problem to be solved is $$\mathbf{{x}}_{i^{*}}=\arg\min_{\mathbf{{x}}'_{i^{*}}\in\mathbb{R}^{m}}\frac{1}{2}\|\mathbf{{x}}'_{i^{*}}-[\bar{\mathbf{{x}}}-\sum_{\alpha\in[\bar{\mathcal{E}}\cup\mathcal{V}]\backslash\{i^{*}\}}\mathbf{{z}}_{\alpha}]_{i^{*}}\|^{2}+f_{i^{*}}(\mathbf{{x}}'_{i^{*}}),$$ and $[\mathbf{{z}}_{i^{*}}]_{i^{*}}$ is the corresponding dual variable. This means that $f_{i^{*}}(\cdot)$ has to be proximable. As it stands, the algorithm in [@Pang_Dist_Dyk] does not handle the case when the $f_{i}(\cdot)$ are smooth for all $i\in\mathcal{V}$. Given an affine minorant of $f_{i^{*}}(\cdot)$, say $\tilde{f}_{i^{*}}(\cdot)$, the conjugate $\tilde{f}_{i^{*}}^{*}(\cdot)$ satisfies $\tilde{f}_{i^{*}}^{*}(\cdot)\geq f_{i^{*}}^{*}(\cdot)$. The main contribution of this paper is to show that for the dual function , if the $f_{i}^{*}(\cdot)$ are replaced by $\tilde{f}_{i}^{*}(\cdot)$ whenever $f_{i}(\cdot)$ is a subdifferentiable function and $\tilde{f}_{i}(\cdot)$ is defined as an affine minorant of $f_{i}(\cdot)$, then the values of the minorized dual functions ascend and converge to the optimal objective value of the dual problem . This extends the algorithm in [@Pang_Dist_Dyk] to give an algorithm with properties (1)-(8) that also incorporates subdifferentiable $f_{i}(\cdot)$ naturally.
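One reason affine minorants keep the subproblems tractable is that the prox of an affine function is available in closed form: for $\tilde{f}(x)=\langle s,x\rangle+c$, the minimizer of $\tilde{f}(x)+\frac{1}{2}\|x-\bar{x}\|^{2}$ is simply $\bar{x}-s$. A one-line sketch (names ours):

```python
def prox_affine(s, xbar):
    """argmin_x <s, x> + c + (1/2)||x - xbar||^2  =  xbar - s.
    The gradient s + x - xbar vanishes at x = xbar - s; the constant c
    does not move the minimizer."""
    return [xb - sk for xb, sk in zip(xbar, s)]
```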
(A more traditional method of majorizing $f_{i^{*}}^{*}(\cdot)$ through $f_{i^{*}}^{*}([\mathbf{z}_{i^{*}}]_{i^{*}})+\langle\mathbf{x}_{i},z-[\mathbf{z}_{i^{*}}]_{i^{*}}\rangle+\frac{\sigma}{2}\|z-[\mathbf{z}_{i^{*}}]_{i^{*}}\|^{2}$ would be problematic because a strong convexity modulus $\sigma$ of $f_{i^{*}}^{*}(\cdot)$ may not even exist, which is the case when $f_{i^{*}}(\cdot)$ is affine.) As far as we are aware, distributed algorithms for subdifferentiable functions include methods based on the subgradient algorithm as mentioned earlier, as well as [@Wang_Bertsekas_incremental_unpub_2013]. (Since the problems we treat in this paper are strongly convex, it would be unfair to point out that subgradient methods are slow for problems that are not strongly convex due to the need to use diminishing stepsizes. Still, our dual approach has other advantages over the subgradient algorithm, since not all of properties (1)-(8) are satisfied by the subgradient algorithm.) In Section \[sec:First-alg\], we first show that this procedure is sound for the sum of one subdifferentiable function and a regularizing quadratic, with convergence rates compatible with standard first order methods. In Section \[sec:main-alg\], we integrate this algorithm into our distributed Dykstra’s algorithm.

Other contributions of this paper
---------------------------------

In [@Pang_Dist_Dyk], we had used the hyperplanes $H_{(i,j)}:=\{\mathbf{x}\in\mathbf{X}:[\mathbf{x}]_{i}=[\mathbf{x}]_{j}\}$ for all $(i,j)\in\mathcal{E}$ instead of . We point out that using $H_{((i,j),k)}$ instead of $H_{(i,j)}$ allows convergence to the optimal solution even when only part of the data is communicated at each time step, which in turn means that computation will not be held back by communications between nodes. See Subsection \[subsec:Partial-comm-prelim\] and Example \[exa:partial-comms\] for more details.
Finally, in Subsection \[subsec:composition-lin-op\], we point out that a standard step allows us to reduce matrix operations whenever the function $f_{i}(\cdot)$ is of the form $\tilde{f}_{i}\circ A_{i}$ for some closed convex function $\tilde{f}_{i}(\cdot)$ and linear map $A_{i}$, although such a step now introduces additional regularizing functions.

Notation
--------

For much of the paper, we will be looking at functions with domain either $\mathbb{R}^{m}$ or $[\mathbb{R}^{m}]^{|\mathcal{V}|}$. We reserve bold letters for functions with domain $[\mathbb{R}^{m}]^{|\mathcal{V}|}$ (for example, ), and we usually use non-bold letters for functions with domain $\mathbb{R}^{m}$ (for example, ). For a vector $\mathbf{{z}}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$, $\mathbf{{z}}_{i}\in\mathbb{R}^{m}$ and $[\mathbf{{z}}]_{i}\in\mathbb{R}^{m}$ are both understood to be the $i$-th component of $\mathbf{{z}}$, where $i\in\mathcal{V}$. Furthermore, $[\mathbf{{z}}_{i}]_{\bar{k}}$ and $[[\mathbf{{z}}]_{i}]_{\bar{k}}\in\mathbb{R}$ are both understood to be the $\bar{k}$-th component of $[\mathbf{{z}}]_{i}$. We say that $f(\cdot)$ is proximable if the problem $\arg\min_{x}f(x)+\frac{1}{2}\|x-\bar{x}\|^{2}$ is easy to solve for any $\bar{x}$. For a closed convex set $C$, the indicator function is denoted by $\delta_{C}(\cdot)$. All other notation is standard.

\[sec:First-alg\]The algorithm for one function
===============================================

In this section, we consider the problem $$\begin{array}{c} \underset{x\in\mathbb{R}^{m}}{\min}f(x)+\frac{1}{2}\|x-\bar{x}\|^{2},\end{array}\label{eq:small-pblm}$$ where $f:\mathbb{R}^{m}\to\mathbb{R}$ is a subdifferentiable convex function such that ${\mbox{\rm dom}}(f)=\mathbb{R}^{m}$. We define our first dual ascent algorithm to solve  before we show how to integrate it into the distributed Dykstra’s algorithm for solving  by increasing the dual objective value in -.
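As a concrete instance of a proximable (but nonsmooth) $f(\cdot)$, take $f(x)=\lambda\|x\|_{1}$: the regularized problem above then has the well-known closed-form soft-thresholding solution. A sketch (our own example, not taken from the paper):

```python
def prox_l1(xbar, lam=1.0):
    """Solves argmin_x lam*||x||_1 + (1/2)||x - xbar||^2 coordinatewise
    by soft-thresholding: shrink each coordinate toward 0 by lam."""
    return [max(abs(v) - lam, 0.0) * (1.0 if v > 0 else -1.0) for v in xbar]
```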
Consider Algorithm \[alg:basic-dual-ascent\], which is somewhat like the bundle method.

\[alg:basic-dual-ascent\]In this algorithm, we want to solve . Let $h_{0}:\mathbb{R}^{m}\to\mathbb{R}$ be an affine function such that $h_{0}(\cdot)\leq f(\cdot)$, defined by the parameters $(\tilde{x}_{0},\tilde{f}_{0},\tilde{y}_{0})\in\mathbb{R}^{m}\times\mathbb{R}\times\mathbb{R}^{m}$, where for all $w\geq0$, $h_{w}:\mathbb{R}^{m}\to\mathbb{R}$ is defined through $(\tilde{x}_{w},\tilde{f}_{w},\tilde{y}_{w})\in\mathbb{R}^{m}\times\mathbb{R}\times\mathbb{R}^{m}$ by $$h_{w}(x)=\tilde{y}_{w}^{T}(x-\tilde{x}_{w})+\tilde{f}_{w}.\label{eq:def-q}$$ Without loss of generality, let $\tilde{x}_{0}$ be the minimizer to $\min_{x}h_{0}(x)+\frac{1}{2}\|x-\bar{x}\|^{2}$.

01 For $w=0,\dots$
02 $\quad$Recall $\tilde{x}_{w}$ is the minimizer to $\min_{x}h_{w}(x)+\frac{1}{2}\|x-\bar{x}\|^{2}$.
03 $\quad$Evaluate $f(\tilde{x}_{w})$ and find a subgradient $\tilde{s}_{w}\in\partial f(\tilde{x}_{w})$.
04 $\quad$Construct the affine function $\tilde{h}_{w}:\mathbb{R}^{m}\to\mathbb{R}$ to be $$\tilde{h}_{w}(x)=\tilde{s}_{w}^{T}(x-\tilde{x}_{w})+f(\tilde{x}_{w}).\label{eq:def-h-tilde-w}$$
05 $\quad$Consider $$\begin{array}{c} \underset{x}{\min}[\max\{\tilde{h}_{w},h_{w}\}(x)+\frac{1}{2}\|x-\bar{x}\|^{2}].\end{array}\label{eq:max-quad}$$
06 $\quad$Let $\tilde{x}_{w+1}$ be the minimizer of .
07 $\quad$Let $\tilde{f}_{w+1}=\max\{\tilde{h}_{w},h_{w}\}(\tilde{x}_{w+1})$.
08 $\quad$Let $\tilde{y}_{w+1}$ be $\bar{x}-\tilde{x}_{w+1}$.
09 $\quad$Define $h_{w+1}(\cdot)$ through $(\tilde{x}_{w+1},\tilde{f}_{w+1},\tilde{y}_{w+1})$ and .
10 End for

We shall prove that each function of the form  is a lower approximation of $f(\cdot)$ in Lemma \[lem:h-w-leq-f\]. With a sequence of such lower approximations, as in the bundle method, we can then solve . We prove some lemmas before proving the convergence of Algorithm \[alg:basic-dual-ascent\].
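To illustrate, here is a minimal one-dimensional sketch of Algorithm \[alg:basic-dual-ascent\] (the function and subgradient-oracle names are ours; the subproblem in line 05 is solved by comparing the three candidate minimizers discussed in Remark \[rem:min-max-of-2-quads\]):

```python
def dual_ascent_1d(f, subgrad, xbar, iters=30):
    """1-D sketch of the cutting-plane dual ascent for
    min_x f(x) + (1/2)(x - xbar)^2, with affine minorants stored as
    slope/intercept pairs h(z) = g*z + a."""
    g = subgrad(xbar)                       # h_0: cut at xbar
    a = f(xbar) - g * xbar
    x = xbar - g                            # minimizer of h_0 + quadratic
    for _ in range(iters):
        s = subgrad(x)                      # line 03
        gt, at = s, f(x) - s * x            # line 04: cut tilde h_w
        cands = [xbar - g, xbar - gt]       # unconstrained minimizers
        if g != gt:
            cands.append((at - a) / (g - gt))   # kink where the cuts cross
        def obj(z):                         # objective of line 05
            return max(g * z + a, gt * z + at) + 0.5 * (z - xbar) ** 2
        x = min(cands, key=obj)             # line 06
        fval = max(g * x + a, gt * x + at)  # line 07
        y = xbar - x                        # line 08
        g, a = y, fval - y * x              # line 09: h_{w+1}(z) = y*(z-x)+fval
    return x
```

For $f=|\cdot|$ this converges to the soft-threshold of $\bar{x}$: $\bar{x}=2$ gives $x^{*}=1$, and $\bar{x}=0.3$ gives $x^{*}=0$.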
\[lem:h-w-leq-f\]In Algorithm \[alg:basic-dual-ascent\], the functions $h_{w}(\cdot)$ are such that $h_{w}(\cdot)\leq f(\cdot)$. We prove our result by induction. Note that $h_{0}(\cdot)$ was defined so that $h_{0}(\cdot)\leq f(\cdot)$. It is also clear from the definition of $\tilde{h}_{w}(\cdot)$ in that $\tilde{h}_{w}(\cdot)\leq f(\cdot)$ for all $w\geq0$. Suppose that $h_{w}(\cdot)\leq f(\cdot)$. We now show that $h_{w+1}(\cdot)\leq f(\cdot)$. We have $$\max\{\tilde{h}_{w},h_{w}\}(\cdot)\leq f(\cdot).\label{eq:max-leq-f}$$ The functions $\tilde{h}_{w}(\cdot)$ and $h_{w}(\cdot)$ are convex, so $\max\{\tilde{h}_{w},h_{w}\}(\cdot)$ is convex. Since the minimum of $\max\{\tilde{h}_{w},h_{w}\}(\cdot)+\frac{1}{2}\|\cdot-\bar{x}\|^{2}$ is attained at $\tilde{x}_{w+1}$, it follows that $0\in\partial[\max\{\tilde{h}_{w},h_{w}\}](\tilde{x}_{w+1})+\tilde{x}_{w+1}-\bar{x}$, or that $\bar{x}-\tilde{x}_{w+1}\in\partial[\max\{\tilde{h}_{w},h_{w}\}](\tilde{x}_{w+1})$. The construction of $h_{w+1}(\cdot)$ implies that $h_{w+1}(\tilde{x}_{w+1})=\max\{\tilde{h}_{w},h_{w}\}(\tilde{x}_{w+1})$ and $h_{w+1}(\cdot)\leq\max\{\tilde{h}_{w},h_{w}\}(\cdot)$. Together with , this implies $h_{w+1}(\cdot)\leq f(\cdot)$, which completes the proof. Let $$\bar{\alpha}_{w}:=f(\tilde{x}_{w})-h_{w}(\tilde{x}_{w}).\label{eq:bar-alpha-w}$$ Let the minimizer of be $x^{*}$. 
We have $$\begin{aligned} \begin{array}{c} h_{w}(\tilde{x}_{w})+\frac{1}{2}\|\tilde{x}_{w}-\bar{x}\|^{2}\end{array} & \overset{\scriptsize{\mbox{Lem }\ref{lem:h-w-leq-f}\mbox{, Alg \ref{alg:basic-dual-ascent} line 2}}}{\leq} & \begin{array}{c} f(x^{*})+\frac{1}{2}\|x^{*}-\bar{x}\|^{2}\end{array}\label{eq:value-chain}\\ & \overset{\scriptsize{x^{*}\mbox{ solves \eqref{eq:small-pblm}}}}{\leq} & \begin{array}{c} f(\tilde{x}_{w})+\frac{1}{2}\|\tilde{x}_{w}-\bar{x}\|^{2}.\end{array}\nonumber \end{aligned}$$ Let the real number $\alpha_{w}$ be $$\begin{array}{c} \alpha_{w}:=\left[f(x^{*})+\frac{1}{2}\|x^{*}-\bar{x}\|^{2}\right]-\left[h_{w}(\tilde{x}_{w})+\frac{1}{2}\|\tilde{x}_{w}-\bar{x}\|^{2}\right].\end{array}\label{eq:alpha-w}$$ It is easy to see that  translates to $0\leq\alpha_{w}\leq\bar{\alpha}_{w}$. \[lem:alpha-recurrs\]Recall the definitions of $\alpha_{w}$ and $\bar{\alpha}_{w}$ in and . 1. We have $\alpha_{w+1}\leq\alpha_{w}-\frac{1}{2}t^{2}$, where $t\geq0$ satisfies $\frac{1}{2}t^{2}+t\|\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\|=\bar{\alpha}_{w}$. 2. Next, $$\begin{array}{c} \frac{1}{2\|\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\|^{2}}\alpha_{w+1}^{2}+\alpha_{w+1}\leq\alpha_{w}.\end{array}\label{eq:target-quad}$$ Since the function $x\mapsto\tilde{h}_{w}(x)+\frac{1}{2}\|x-\bar{x}\|^{2}$ is convex, it is bounded from below by its linearization at $\tilde{x}_{w}$ using $\tilde{s}_{w}\in\partial f(\tilde{x}_{w})$ via . In other words, for all $x\in\mathbb{R}^{m}$, we have $$\begin{array}{c} \tilde{h}_{w}(x)+\frac{1}{2}\|x-\bar{x}\|^{2}\overset{\eqref{eq:def-h-tilde-w}}{\geq}\underbrace{(\tilde{s}_{w}+\tilde{x}_{w}-\bar{x})^{T}(x-\tilde{x}_{w})+f(\tilde{x}_{w})+\frac{1}{2}\|\tilde{x}_{w}-\bar{x}\|^{2}}_{l_{(\tilde{x}_{w},\tilde{s}_{w})}(x)},\end{array}\label{eq:q-hat-lower-bdd}$$ where $l_{(\tilde{x}_{w},\tilde{s}_{w})}(\cdot)$ as defined above is the affine function derived from taking a subgradient of $\tilde{h}_{w}(\cdot)+\frac{1}{2}\|\cdot-\bar{x}\|^{2}$ at $\tilde{x}_{w}$.
Consider the problem $$\begin{array}{c} \underset{x}{\min}\underbrace{\max\left\{ h_{w}(x)+\frac{1}{2}\|x-\bar{x}\|^{2},l_{(\tilde{x}_{w},\tilde{s}_{w})}(x)\right\} }_{h_{w}^{+}(x)},\end{array}\label{eq:def-h-plus-w}$$ where $h_{w}^{+}(\cdot)$ is as defined above. We first look at the case when $\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}=0$. We have $h_{w}(\tilde{x}_{w})\leq f(\tilde{x}_{w})$ by Lemma \[lem:h-w-leq-f\]. So $$\begin{aligned} & f(\tilde{x}_{w})+\frac{1}{2}\|\tilde{x}_{w}-\bar{x}\|^{2}\overset{\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}=0}{=}\min_{x}l_{(\tilde{x}_{w},\tilde{s}_{w})}(x)\overset{\eqref{eq:def-h-plus-w}}{\leq}\min_{x}h_{w}^{+}(x)\\ \leq & h_{w}^{+}(\tilde{x}_{w})\overset{\scriptsize{\text{Lemma \ref{lem:h-w-leq-f}}}}{\leq}f(\tilde{x}_{w})+\frac{1}{2}\|\tilde{x}_{w}-\bar{x}\|^{2}.\end{aligned}$$ We then have $h_{w}(\tilde{x}_{w})=f(\tilde{x}_{w})$, or $\alpha_{w}=0$. Also, $$\begin{array}{c} 0=\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\in\partial f(\tilde{x}_{w})+\partial(\frac{1}{2}\|\cdot-\bar{x}\|^{2})(\tilde{x}_{w}),\end{array}$$ so $\tilde{x}_{w}=x^{*}$. The remaining techniques in this proof show that $\alpha_{w'}=0$ for all $w'\geq w$, which implies the claims in this lemma. Thus, we assume $\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\neq0$. Since $l_{(\tilde{x}_{w},\tilde{s}_{w})}(\cdot)$ is affine and $h_{w}(\cdot)+\frac{1}{2}\|\cdot-\bar{x}\|^{2}$ is a quadratic with minimizer $\tilde{x}_{w}$ and Hessian $I$, some elementary calculations will show that the minimizer of  is of the form $\tilde{x}_{w}-t\frac{\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}}{\|\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\|}$ for some $t\geq0$. Let $\tilde{d}_{w}:=\frac{\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}}{\|\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\|}$, and let this minimizer be $\tilde{x}_{w}-t\tilde{d}_{w}$. We can see that $t>0$, because if $t=0$, then $\tilde{x}_{w}-t\tilde{d}_{w}=\tilde{x}_{w}$, and $\tilde{x}_{w}$ would once again be the minimizer of $f(\cdot)+\frac{1}{2}\|\cdot-\bar{x}\|^{2}$.
With $t>0$, the two function values in  are equal at the minimizer, which gives $$\begin{array}{c} h_{w}(\tilde{x}_{w}-t\tilde{d}_{w})+\frac{1}{2}\|(\tilde{x}_{w}-t\tilde{d}_{w})-\bar{x}\|^{2}\overset{\scriptsize{\mbox{Alg \ref{alg:basic-dual-ascent}, line 2}}}{=}h_{w}(\tilde{x}_{w})+\frac{1}{2}\|\tilde{x}_{w}-\bar{x}\|^{2}+\frac{1}{2}t^{2},\end{array}\label{eq:t-square-term}$$ and $$\begin{array}{c} l_{(\tilde{x}_{w},\tilde{s}_{w})}(\tilde{x}_{w}-t\tilde{d}_{w})\overset{\eqref{eq:bar-alpha-w},\eqref{eq:q-hat-lower-bdd}}{=}h_{w}(\tilde{x}_{w})+\frac{1}{2}\|\tilde{x}_{w}-\bar{x}\|^{2}+\bar{\alpha}_{w}-t\|\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\|.\end{array}$$ Equating the last two formulas gives $$\begin{aligned} & \begin{array}{c} \frac{1}{2}t^{2}+t\|\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\|=\bar{\alpha}_{w}.\end{array}\label{eq:end-of-bar-alpha}\end{aligned}$$ Next, we have $$\begin{aligned} \begin{array}{c} h_{w+1}(\tilde{x}_{w+1})+\frac{1}{2}\|\tilde{x}_{w+1}-\bar{x}\|^{2}\end{array} & \overset{\scriptsize{\mbox{Alg \ref{alg:basic-dual-ascent} line 9}}}{=} & \begin{array}{c} \max\{h_{w},\tilde{h}_{w}\}(\tilde{x}_{w+1})+\frac{1}{2}\|\tilde{x}_{w+1}-\bar{x}\|^{2}\end{array}\nonumber \\ & \overset{\eqref{eq:q-hat-lower-bdd},\eqref{eq:def-h-plus-w}}{\geq} & \begin{array}{c} h_{w}^{+}(\tilde{x}_{w+1})\end{array}\label{eq:alpha-w-smaller}\\ & \geq & \begin{array}{c} h_{w}^{+}(\tilde{x}_{w}-t\tilde{d}_{w})\end{array}\nonumber \\ & \overset{\eqref{eq:def-h-plus-w}}{=} & \begin{array}{c} h_{w}(\tilde{x}_{w}-t\tilde{d}_{w})+\frac{1}{2}\|\tilde{x}_{w}-t\tilde{d}_{w}-\bar{x}\|^{2}\end{array}\nonumber \\ & \overset{\eqref{eq:t-square-term}}{=} & \begin{array}{c} h_{w}(\tilde{x}_{w})+\frac{1}{2}\|\tilde{x}_{w}-\bar{x}\|^{2}+\frac{1}{2}t^{2}.\end{array}\nonumber \end{aligned}$$ The formulas  and  imply the first part of our lemma.
Next, let $t_{2}$ be the positive root of $$\begin{array}{c} \frac{1}{2}t_{2}^{2}+t_{2}\|\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\|=\alpha_{w}.\end{array}\label{eq:quad-form}$$ Since $\alpha_{w}\leq\bar{\alpha}_{w}$, we have $t_{2}\leq t$. Recalling the definition of $\alpha_{w}$ in , we have $$\begin{array}{c} \alpha_{w+1}\overset{\eqref{eq:alpha-w},\eqref{eq:alpha-w-smaller}}{\leq}\alpha_{w}-\frac{1}{2}t^{2}\leq\alpha_{w}-\frac{1}{2}t_{2}^{2}\overset{\eqref{eq:quad-form}}{=}t_{2}\|\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\|,\end{array}$$ or $\frac{\alpha_{w+1}}{\|\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\|}\leq t_{2}$. Substituting this into  gives  as needed. It is clear that $h_{0}(\cdot)$ in Algorithm \[alg:basic-dual-ascent\] can be defined as an affine function based on the evaluation of $f(x)$ and a subgradient in $\partial f(x)$ for some point $x$. In line 9, instead of the $h_{w+1}(\cdot)$ defined there, one can use the maximum of a number of affine functions, as in the bundle method. For pedagogical reasons, we shall limit ourselves to the easy case of using one affine function to model $h_{w+1}(\cdot)$. We need the following result proved in [@Beck_Tetruashvili_2013] and [@Beck_alt_min_SIOPT_2015]. \[lem:seq-conv-rate\](Sequence convergence rate) Let $\alpha>0$. Suppose the sequence of nonnegative numbers $\{a_{k}\}_{k=0}^{\infty}$ is such that $$a_{k}\geq a_{k+1}+\alpha a_{k+1}^{2}\mbox{ for all }k\in\{1,2,\dots\}.$$ 1. [@Beck_Tetruashvili_2013 Lemma 6.2] If furthermore, $\begin{array}{c} a_{1}\leq\frac{1.5}{\alpha}\mbox{ and }a_{2}\leq\frac{1.5}{2\alpha}\end{array}$, then $$\begin{array}{c} a_{k}\leq\frac{1.5}{\alpha k}\mbox{ for all }k\in\{1,2,\dots\}.\end{array}$$ 2.
[@Beck_alt_min_SIOPT_2015 Lemma 3.8] For any $k\geq2$, $$\begin{array}{c} a_{k}\leq\max\left\{ \left(\frac{1}{2}\right)^{(k-1)/2}a_{0},\frac{4}{\alpha(k-1)}\right\} .\end{array}$$ In addition, for any $\epsilon>0$, if $$\begin{array}{c} \begin{array}{c} k\geq\max\left\{ \frac{2}{\ln(2)}[\ln(a_{0})+\ln(1/\epsilon)],\frac{4}{\alpha\epsilon}\right\} +1,\end{array}\end{array}$$ then $a_{k}\leq\epsilon$. Theorem \[thm:basic-conv-rate\] shows that Algorithm \[alg:basic-dual-ascent\] has convergence rates consistent with standard first order methods. \[thm:basic-conv-rate\](Convergence rate) Suppose Algorithm \[alg:basic-dual-ascent\] is used to solve , and ${\mbox{\rm dom}}(f)=\mathbb{R}^{m}$. 1. There is an $O(1/w)$ convergence rate. 2. If in addition, $\nabla f(\cdot)$ is Lipschitz with constant $L_{1}$, then there is a linear rate of convergence. Recall that $h_{w}(\cdot)\leq f(\cdot)$ from Lemma \[lem:h-w-leq-f\], so $$\begin{aligned} \begin{array}{c} f(x^{*})+\frac{1}{2}\|x^{*}-\bar{x}\|^{2}\end{array} & \geq & \begin{array}{c} h_{w}(x^{*})+\frac{1}{2}\|x^{*}-\bar{x}\|^{2}\end{array}\\ & \overset{\scriptsize{\mbox{Alg \ref{alg:basic-dual-ascent} line 2}}}{\geq} & \begin{array}{c} h_{w}(\tilde{x}_{w})+\frac{1}{2}\|\tilde{x}_{w}-\bar{x}\|^{2}+\frac{1}{2}\|x^{*}-\tilde{x}_{w}\|^{2}.\end{array}\\ \begin{array}{c} \Rightarrow\alpha_{w}\end{array} & \overset{\eqref{eq:alpha-w}}{\geq} & \begin{array}{c} \frac{1}{2}\|x^{*}-\tilde{x}_{w}\|^{2}.\end{array}\end{aligned}$$ Therefore, $\|\tilde{x}_{w}-x^{*}\|\leq\sqrt{2\alpha_{w}}$. We have $0\in\partial[f(\cdot)+\frac{1}{2}\|\cdot-\bar{x}\|^{2}](x^{*})$ and $\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\in\partial[f(\cdot)+\frac{1}{2}\|\cdot-\bar{x}\|^{2}](\tilde{x}_{w})$. Since ${\mbox{\rm dom}}(f)=\mathbb{R}^{m}$ and $\tilde{x}_{w}$ lies in a compact set, there is some constant $L>0$ such that $\|\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\|\leq L$.
Hence $$\begin{array}{c} \frac{1}{2L^{2}}\alpha_{w+1}^{2}+\alpha_{w+1}\leq\frac{1}{2\|\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\|^{2}}\alpha_{w+1}^{2}+\alpha_{w+1}\overset{\eqref{eq:target-quad}}{\leq}\alpha_{w}.\end{array}$$ This recurrence together with Lemma \[lem:seq-conv-rate\] gives us the $O(1/w)$ convergence rate we need. Let $L_{1}$ be a Lipschitz constant for the gradient of $f(\cdot)+\frac{1}{2}\|\cdot-\bar{x}\|^{2}$. We then have $\|\tilde{s}_{w}+\tilde{x}_{w}-\bar{x}\|\leq L_{1}\|\tilde{x}_{w}-x^{*}\|\leq L_{1}\sqrt{2\alpha_{w}}$. Using the formula  gives us $\frac{1}{4L_{1}^{2}\alpha_{w}}\alpha_{w+1}^{2}+\alpha_{w+1}\leq\alpha_{w}$, or $$\begin{array}{c} \frac{1}{4L_{1}^{2}}\left(\frac{\alpha_{w+1}}{\alpha_{w}}\right)^{2}+\frac{\alpha_{w+1}}{\alpha_{w}}\leq1.\end{array}$$ This gives us the linear convergence as needed. \[rem:min-max-of-2-quads\](Minimizing ) The quadratic program  can be solved easily by noting that the minimizer must be a minimizer of one of the problems $$\begin{aligned} & & \begin{array}{c} \underset{x}{\min}\,\tilde{h}_{w}(x)+\frac{1}{2}\|x-\bar{x}\|^{2},\text{ }\underset{x}{\min}\,h_{w}(x)+\frac{1}{2}\|x-\bar{x}\|^{2},\end{array}\\ & \text{ or} & \begin{array}{c} \underset{x}{\min}\{h_{w}(x)+\frac{1}{2}\|x-\bar{x}\|^{2}:\tilde{h}_{w}(x)=h_{w}(x)\},\end{array}\end{aligned}$$ all of which are rather easy to solve. We now state a proposition that will be useful for the proof of convergence. \[prop:P-D-of-one-block\]Consider the problem $$\begin{array}{c} \underset{x\in\mathbb{R}^{m}}{\min}\frac{1}{2}\|\bar{x}-x\|^{2}+f(x),\end{array}$$ and its corresponding (Fenchel) dual $$\begin{array}{c} \underset{y\in\mathbb{R}^{m}}{\max}-\frac{1}{2}\|\bar{x}-y\|^{2}+\frac{1}{2}\|\bar{x}\|^{2}-f^{*}(y).\end{array}$$ The optimal solutions $x^{*}$ and $y^{*}$ are related by $x^{*}+y^{*}=\bar{x}$, and strong duality holds. This result can be seen to be Moreau’s decomposition theorem.
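As a quick sanity check of Proposition \[prop:P-D-of-one-block\], take $f=|\cdot|$ in one dimension: the primal solution is the soft-threshold of $\bar{x}$, the dual solution is the projection of $\bar{x}$ onto $[-1,1]$ (since $f^{*}$ is the indicator of $[-1,1]$), and the two always sum to $\bar{x}$ (our own numerical illustration):

```python
def soft_threshold(v, lam=1.0):
    """Primal solution x* = argmin_x lam*|x| + (1/2)(x - v)^2."""
    return max(abs(v) - lam, 0.0) * (1.0 if v >= 0 else -1.0)

def project_interval(v, lam=1.0):
    """Dual solution y*: since f*(y) is the indicator of [-lam, lam],
    the dual maximizer is the projection of v onto [-lam, lam]."""
    return min(max(v, -lam), lam)

# Moreau decomposition: x* + y* = xbar for every xbar
for v in (-2.5, -0.4, 0.0, 0.7, 3.0):
    assert abs(soft_threshold(v) + project_interval(v) - v) < 1e-12
```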
In view of Proposition \[prop:P-D-of-one-block\], we now explain that Algorithm \[alg:basic-dual-ascent\] can be interpreted as a dual ascent algorithm. We can see that Algorithm \[alg:basic-dual-ascent\] finds $h_{w}(\cdot)$ for $w=0,1,\dots$ such that $h_{w}^{*}(\cdot)\geq f^{*}(\cdot)$ and dual iterates $\{\tilde{y}_{w}\}_{w=0}^{\infty}$ so that $\{-h_{w}^{*}(\tilde{y}_{w})+\frac{1}{2}\|\bar{x}\|^{2}-\frac{1}{2}\|\tilde{y}_{w}-\bar{x}\|^{2}\}_{w=0}^{\infty}$ is a monotonically nondecreasing sequence that converges to $-f^{*}(y^{*})+\frac{1}{2}\|\bar{x}\|^{2}-\frac{1}{2}\|y^{*}-\bar{x}\|^{2}$, where $y^{*}$ is the optimal dual variable for . This interpretation shall be exploited in our subdifferentiable distributed Dykstra’s algorithm. \[sec:main-alg\]Deterministic Distributed Asynchronous Dykstra Algorithm ======================================================================== We now proceed to integrate the dual ascent algorithm in Section \[sec:First-alg\] into the distributed Dykstra’s algorithm for the problem . We partition the vertex set $\mathcal{V}$ as the disjoint union *$\mathcal{V}=\mathcal{V}_{1}\cup\mathcal{V}_{2}\cup\mathcal{V}_{3}\cup\mathcal{V}_{4}$* so that - $f_{i}(\cdot)$ are proximable functions for all $i\in\mathcal{V}_{1}$. - $f_{i}(\cdot)$ are indicator functions of closed convex sets for all $i\in\mathcal{V}_{2}$. - $f_{i}(\cdot)$ are proximable functions such that ${\mbox{\rm dom}}(f_{i})=\mathbb{R}^{m}$ for all $i\in\mathcal{V}_{3}$. - *$f_{i}(\cdot)$* are subdifferentiable functions (i.e., a subgradient is easy to obtain) such that ${\mbox{\rm dom}}(f_{i})=\mathbb{R}^{m}$ for all $i\in\mathcal{V}_{4}$. We had the 3 sets $\mathcal{V}_{1}$, $\mathcal{V}_{2}$ and $\mathcal{V}_{3}$ in [@Pang_Dyk_spl]. In principle, the vertices in $\mathcal{V}_{2}$ and $\mathcal{V}_{3}$ can be placed into $\mathcal{V}_{1}$. 
As explained in [@Pang_Dyk_spl], the advantage of separating $\mathcal{V}_{3}$ is that more than one function can be minimized at a time for vertices in this set without affecting the proof of convergence (see Proposition \[prop:control-growth\]), and the advantage of separating $\mathcal{V}_{2}$ is that one can apply a greedy SHQP step in [@Pang_DBAP]. The set $\mathcal{V}_{4}$ contains subdifferentiable functions, which is the subject of this paper. To simplify calculations, we let $\mathbf{{v}}_{H}$, $\mathbf{{v}}_{A}$ and $\mathbf{{x}}$ be defined by \[eq\_m:all\_acronyms\] $$\begin{aligned} \mathbf{{v}}_{H} & = & \sum_{((i,j),\bar{k})\in\bar{\mathcal{E}}}\mathbf{{z}}_{((i,j),\bar{k})}\label{eq:v-H-def}\\ \mathbf{{v}}_{A} & = & \mathbf{{v}}_{H}+\sum_{i\in\mathcal{V}}\mathbf{{z}}_{i}\label{eq:from-10}\\ \mathbf{{x}} & = & \bar{\mathbf{{x}}}-\mathbf{{v}}_{A}.\label{eq:x-from-v-A}\end{aligned}$$ Intuitively, $\mathbf{{v}}_{H}$ describes the sum of the dual variables due to $H_{((i,j),\bar{k})}$ for all $((i,j),\bar{k})\in\bar{\mathcal{E}}$, $\mathbf{{v}}_{A}$ is the sum of all dual variables, and $\mathbf{{x}}$ is the estimate of the primal variable.

\[subsec:Partial-comm-prelim\]Partial communication of data
-----------------------------------------------------------

One insight that we point out in this paper is that Algorithm \[alg:Ext-Dyk\] supports the partial communication of data. We lay down the foundations of the parts of Algorithm \[alg:Ext-Dyk\] relevant to this insight.
Let $D\subset[\mathbb{R}^{m}]^{|\mathcal{V}|}$ be the diagonal set defined by $$D:=\{\mathbf{{x}}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}:\mathbf{{x}}_{1}=\mathbf{{x}}_{2}=\cdots=\mathbf{{x}}_{|\mathcal{V}|}\}.\label{eq:diagonal-set}$$ With the definition of the hyperplanes $H_{((i,j),\bar{k})}$ in and $\mathcal{G}=(\mathcal{V},\mathcal{E})$ being a connected graph, we have $$\bigcap_{((i,j),\bar{k})\in\bar{\mathcal{E}}}H_{((i,j),\bar{k})}=D\mbox{ and }\sum_{((i,j),\bar{k})\in\bar{\mathcal{E}}}H_{((i,j),\bar{k})}^{\perp}=D^{\perp}=\left\{ \mathbf{{z}}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}:\sum_{i\in\mathcal{V}}\mathbf{{z}}_{i}=0\right\} .\label{eq:D-and-D-perp}$$ \[prop:E-connects-V\]Suppose $\mathcal{G}=(\mathcal{V},\mathcal{E})$ is a connected graph. Let $H_{((i,j),\bar{k})}$ be the hyperplane . Let $\bar{\mathcal{E}}'$ be a subset of $\bar{\mathcal{E}}$. The following conditions are equivalent: 1. $\cap_{((i,j),\bar{k})\in\bar{\mathcal{E}}'}H_{((i,j),\bar{k})}=D$ 2. $\sum_{((i,j),\bar{k})\in\mathcal{\bar{E}}'}H_{((i,j),\bar{k})}^{\perp}=D^{\perp}.$ 3. For each $\bar{k}\in\{1,\dots,m\}$, the graph $\mathcal{G}'=(\mathcal{V},\mathcal{E}_{\bar{k}}')$ is connected, where $\mathcal{E}_{\bar{k}}':=\{(i,j)\in\mathcal{E}:((i,j),\bar{k})\in\bar{\mathcal{E}}'\}$. The equivalence between (1) and (3) is easy, and the equivalence between (1) and (2) is simple linear algebra. \[def:E-connects-V\]We say that *$\bar{\mathcal{E}}'\subset\bar{\mathcal{E}}$ connects $\mathcal{V}$* if any of the equivalent properties in Proposition \[prop:E-connects-V\] is satisfied. \[rem:partial-comms-change\](Change from [@Pang_Dist_Dyk]) The change in this paper from [@Pang_Dist_Dyk] is that each hyperplane $H_{((i,j),\bar{k})}$ is now of codimension 1. In [@Pang_Dist_Dyk], we defined the hyperplanes $H_{(i,j)}:=\{\mathbf{{x}}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}:\mathbf{{x}}_{i}=\mathbf{{x}}_{j}\}$ of codimension $m$ which are indexed by $(i,j)\in\mathcal{E}$ instead. 
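Condition (3) of Proposition \[prop:E-connects-V\] is easy to check computationally: for each coordinate, the selected edges must form a connected graph on the whole vertex set. A small sketch (our own helper, with $0$-indexed vertices and coordinates):

```python
def connects_V(num_vertices, E_bar_prime, m):
    """Check condition (3) of Proposition [prop:E-connects-V]: for each
    coordinate k, the edges {(i,j) : ((i,j),k) in E_bar_prime} must form
    a connected graph on all num_vertices vertices."""
    for k in range(m):
        adj = {v: [] for v in range(num_vertices)}
        for ((i, j), kk) in E_bar_prime:
            if kk == k:
                adj[i].append(j)
                adj[j].append(i)
        seen, stack = {0}, [0]
        while stack:                      # depth-first search from vertex 0
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if len(seen) < num_vertices:      # coordinate-k graph disconnected
            return False
    return True
```

For a path graph on three vertices with $m=2$, dropping the pair $((1,2),1)$ disconnects the coordinate-$1$ graph, so the reduced set no longer connects $\mathcal{V}$.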
The advantage of introducing the additional variables is that we can have a partial transfer of the data between two vertices rather than a full transfer. This will be elaborated in Example \[exa:partial-comms\]. \[lem:express-v-as-sum\](Expressing $\mathbf{{v}}$ as a sum) Recall the definitions of $D$ and $H_{((i,j),\bar{k})}$ in  and . There is a $C_{1}>0$ such that for all $\mathbf{{v}}\in D^{\perp}$ and $\bar{\mathcal{E}}'\subset\bar{\mathcal{E}}$ such that $\bar{\mathcal{E}}'$ connects $\mathcal{V}$, we can find $\mathbf{{z}}_{((i,j),\bar{k})}\in H_{((i,j),\bar{k})}^{\perp}$ for all $((i,j),\bar{k})\in\bar{\mathcal{E}}'$ such that $\sum_{((i,j),\bar{k})\in\bar{\mathcal{E}}'}\mathbf{{z}}_{((i,j),\bar{k})}=\mathbf{{v}}$ and $\|\mathbf{{z}}_{((i,j),\bar{k})}\|\leq C_{1}\|\mathbf{{v}}\|$ for all $((i,j),\bar{k})\in\bar{\mathcal{E}}'$. This is elementary linear algebra. We refer to [@Pang_Dist_Dyk] for a proof of a similar result.

Algorithm description and preliminaries
---------------------------------------

In this subsection, we present Algorithm \[alg:Ext-Dyk\] below and recall some of the results presented in [@Pang_Dist_Dyk] that are necessary for further discussions. Recall that in the one-node case in Section \[sec:First-alg\], the subdifferentiable function $f_{i}(\cdot)$ is handled using lower approximations.
In addition to , we need to consider the function $$\begin{array}{c} F_{n,w}(\{\mathbf{{z}}_{\alpha}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}):=-\frac{1}{2}\bigg\|\bar{\mathbf{{x}}}-\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{{z}}_{\alpha}\bigg\|^{2}+\frac{1}{2}\|\bar{\mathbf{{x}}}\|^{2}-\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{f}_{\alpha,n,w}^{*}(\mathbf{{z}}_{\alpha}),\end{array}\label{eq:Dykstra-dual-defn-1}$$ where for all $n\geq1$ and $w\in\{1,\dots,\bar{w}\}$, $\mathbf{f}_{\alpha,n,w}:[\mathbb{R}^{m}]^{|\mathcal{V}|}\to\mathbb{R}$ satisfies \[eq\_m:h-a-n-w\] $$\begin{aligned} \mathbf{f}_{\alpha,n,w}(\cdot) & = & \mathbf{f}_{\alpha}(\cdot)\mbox{ for all }\alpha\in[\bar{\mathcal{E}}\cup\mathcal{V}]\backslash\mathcal{V}_{4}\label{eq:h-a-n-w-eq-h-a}\\ \mbox{ and }\mathbf{f}_{\alpha,n,w}(\cdot) & \leq & \mathbf{f}_{\alpha}(\cdot)\mbox{ for all }\alpha\in\mathcal{V}_{4}.\label{eq:h-a-n-w-lesser}\end{aligned}$$ So $F_{n,w}(\cdot)\leq F(\cdot)$. We now present Algorithm \[alg:Ext-Dyk\]. \[alg:Ext-Dyk\](Distributed Dykstra’s algorithm) Consider the problem  along with the associated dual problem . Let $\bar{w}$ be a positive integer. Let $C_{1}>0$ satisfy Lemma \[lem:express-v-as-sum\]. For each $\alpha\in[\bar{\mathcal{E}}\cup\mathcal{V}]\backslash\mathcal{V}_{4}$, $n\geq1$ and $w\in\{1,\dots,\bar{w}\}$, let $\mathbf{f}_{\alpha,n,w}:[\mathbb{R}^{m}]^{|\mathcal{V}|}\to\mathbb{R}$ be as defined in . Our distributed Dykstra’s algorithm is as follows:

01$\quad$Let

- $\mathbf{{z}}_{i}^{1,0}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$ be a starting dual vector for $\mathbf{f}_{i}(\cdot)$ for each $i\in\mathcal{V}$ so that $[\mathbf{{z}}_{i}^{1,0}]_{j}=0\in\mathbb{R}^{m}$ for all $j\in\mathcal{V}\backslash\{i\}$.
- $\mathbf{{v}}_{H}^{1,0}\in D^{\perp}$ be a starting dual vector for .
- Note: $\{\mathbf{{z}}_{((i,j),\bar{k})}^{n,0}\}_{((i,j),\bar{k})\in\bar{\mathcal{E}}}$ is defined through $\mathbf{{v}}_{H}^{n,0}$ in .
- Let $\mathbf{{x}}^{1,0}$ be $\mathbf{{x}}^{1,0}=\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{1,0}-\sum_{i\in\mathcal{V}}\mathbf{{z}}_{i}^{1,0}$.

02$\quad$For each $i\in\mathcal{V}_{4}$, let $\mathbf{f}_{i,1,0}:[\mathbb{R}^{m}]^{|\mathcal{V}|}\to\mathbb{R}$ be a function such that $\mathbf{f}_{i,1,0}(\cdot)\leq\mathbf{f}_{i}(\cdot)$
03$\quad$For $n=1,2,\dots$
04$\quad$$\quad$
05$\quad$$\quad$ $\quad$$\quad$
06$\quad$$\quad$For $w=1,2,\dots,\bar{w}$
07$\quad$$\quad$$\quad$Choose a set $S_{n,w}\subset\bar{\mathcal{E}}_{n}\cup\mathcal{V}$ such that $S_{n,w}\neq\emptyset$.
08$\quad$$\quad$$\quad$If $S_{n,w}\subset\mathcal{V}_{4}$, then
09$\quad$$\quad$$\quad$$\quad$Apply Algorithm \[alg:subdiff-subalg\].
10$\quad$$\quad$$\quad$else
11$\quad$$\quad$$\quad$$\quad$Set $\mathbf{f}_{i,n,w}(\cdot):=\mathbf{f}_{i,n,w-1}(\cdot)$ for all $i\in\mathcal{V}_{4}$.
12$\quad$$\quad$$\quad$$\quad$Define $\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in S_{n,w}}$ by $$\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in S_{n,w}}=\underset{z_{\alpha},\alpha\in S_{n,w}}{\arg\min}\frac{1}{2}\left\Vert \bar{\mathbf{{x}}}-\sum_{\alpha\notin S_{n,w}}\mathbf{{z}}_{\alpha}^{n,w-1}-\sum_{\alpha\in S_{n,w}}\mathbf{{z}}_{\alpha}\right\Vert ^{2}+\sum_{\alpha\in S_{n,w}}\mathbf{f}_{\alpha,n,w}^{*}(\mathbf{{z}}_{\alpha}).\label{eq:Dykstra-min-subpblm}$$
13$\quad$$\quad$$\quad$end if
14$\quad$$\quad$$\quad$Set $\mathbf{{z}}_{\alpha}^{n,w}:=\mathbf{{z}}_{\alpha}^{n,w-1}$ for all $\alpha\notin S_{n,w}$.
15$\quad$$\quad$End For
16$\quad$$\quad$Let $\mathbf{{z}}_{i}^{n+1,0}=\mathbf{{z}}_{i}^{n,\bar{w}}$ for all $i\in\mathcal{V}$ and $\mathbf{{v}}_{H}^{n+1,0}=\mathbf{{v}}_{H}^{n,\bar{w}}=\sum_{((i,j),\bar{k})\in\bar{\mathcal{E}}}\mathbf{{z}}_{((i,j),\bar{k})}^{n,\bar{w}}$.
17$\quad$$\quad$Let $\mathbf{f}_{i,n+1,0}(\cdot)=\mathbf{f}_{i,n,\bar{w}}(\cdot)$ for all $i\in\mathcal{V}_{4}$.
18$\quad$End For Even though Algorithm \[alg:Ext-Dyk\] is described so that each node $i\in\mathcal{V}$ and $((i,j),\bar{k})\in\bar{\mathcal{E}}$ is associated with a dual variable $\mathbf{{z}}_{\alpha}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$, we point out that the size of the dual variable $\mathbf{{z}}_{\alpha}$ that needs to be stored in each node and edge is small due to sparsity. \[prop:sparsity\](Sparsity of $\mathbf{{z}}_{\alpha}$) We have $[\mathbf{{z}}_{i}^{n,w}]_{j}=0$ for all $j\in\mathcal{V}\backslash\{i\}$, $n\geq1$ and $w\in\{0,1,\dots,\bar{w}\}$. Similarly, for all $n\geq1$, $w\in\{0,1,\dots,\bar{w}\}$ and $(e,\bar{k})\in\bar{\mathcal{E}}$, the vector $\mathbf{{z}}_{(e,\bar{k})}^{n,w}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$ satisfies $[[\mathbf{{z}}_{(e,\bar{k})}^{n,w}]_{i'}]_{k'}=0$ unless $\bar{k}=k'$ and $i'$ is an endpoint of $e$. The proof of this result is similar to the corresponding result in [@Pang_Dist_Dyk]. The claim for $\mathbf{{z}}_{i}^{n,w}$ relies on the fact that $\mathbf{f}_{i,n,w}(\cdot)$ depends only on the $i$-th component, and the claim for $\mathbf{{z}}_{(e,\bar{k})}^{n,w}$ relies on the fact that $\mathbf{f}_{(e,\bar{k})}(\cdot)=\delta_{H_{(e,\bar{k})}^{\perp}}(\cdot)$, with $H_{(e,\bar{k})}^{\perp}$ containing vectors that are zero in all but two coordinates. Dykstra's algorithm is traditionally written in terms of solving for the primal variable $\mathbf{{x}}$. For completeness, we show the equivalence between and the primal minimization problem. The proof is easily extended from [@Pang_Dyk_spl].
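Blockwise, this primal-dual equivalence is Moreau's identity: the primal prox output $x^{*}$ and the dual prox output $z^{*}$ for the same center $c$ satisfy $x^{*}+z^{*}=c$. A minimal scalar sketch, using the illustrative choice $f(x)=|x|$ (not one of the paper's $\mathbf{f}_{\alpha}$):

```python
# One-block illustration of the primal-dual relation: the minimizer x*
# of (1/2)(c - x)^2 + f(x) and the minimizer z* of (1/2)(c - z)^2 + f*(z)
# satisfy x* + z* = c (Moreau's identity).  Here f(x) = |x|, so that
#   prox_f(c)    = soft-threshold(c, 1)           (primal update)
#   prox_{f*}(c) = clip(c, -1, 1)                 (dual update; f* is the
#                                                  indicator of [-1, 1])

def prox_abs(c):
    """Minimizer of (1/2)(c - x)^2 + |x| (soft-thresholding)."""
    if c > 1.0:
        return c - 1.0
    if c < -1.0:
        return c + 1.0
    return 0.0

def prox_conj_abs(c):
    """Minimizer of (1/2)(c - z)^2 + f*(z), f* = indicator of [-1, 1]."""
    return max(-1.0, min(1.0, c))

for c in [-3.0, -0.4, 0.0, 0.7, 2.5]:
    x_star = prox_abs(c)
    z_star = prox_conj_abs(c)
    assert abs(x_star + z_star - c) < 1e-12  # Moreau: x* + z* = c
```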
\[prop:subproblems\](On solving ) If a minimizer $\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in S_{n,w}}$ for exists, then the $\mathbf{{x}}^{n,w}$ in satisfies $$\begin{array}{c} \mathbf{{x}}^{n,w}=\underset{\mathbf{{x}}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}}{\arg\min}\underset{\alpha\in S_{n,w}}{\sum}\mathbf{f}_{\alpha,n,w}(\mathbf{{x}})+\frac{1}{2}\left\Vert \mathbf{{x}}-\left(\bar{\mathbf{{x}}}-\underset{\alpha\notin S_{n,w}}{\sum}\mathbf{{z}}_{\alpha}^{n,w}\right)\right\Vert ^{2}.\end{array}\label{eq:primal-subpblm}$$ Conversely, if $\mathbf{{x}}^{n,w}$ solves with the dual variables $\{\tilde{\mathbf{{z}}}_{\alpha}^{n,w}\}_{\alpha\in S_{n,w}}$ satisfying $$\begin{array}{c} \tilde{\mathbf{{z}}}_{\alpha}^{n,w}\in\partial\mathbf{f}_{\alpha,n,w}(\mathbf{{x}}^{n,w})\mbox{ and }\mathbf{{x}}^{n,w}-\bar{\mathbf{{x}}}+\underset{\alpha\notin S_{n,w}}{\sum}\mathbf{{z}}_{\alpha}^{n,w}+\underset{\alpha\in S_{n,w}}{\sum}\tilde{\mathbf{{z}}}_{\alpha}^{n,w}=0,\end{array}\label{eq:primal-optim-cond}$$ then $\{\tilde{\mathbf{{z}}}_{\alpha}^{n,w}\}_{\alpha\in S_{n,w}}$ solves . \[rem:Irrelevance-of-z\](Irrelevance of $\mathbf{{z}}_{((i,j),\bar{k})}^{n,w}$) In [@Pang_Dist_Dyk], we explained that each node $i\in\mathcal{V}$ needs to keep track of just $[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}\in\mathbb{R}^{m}$ and $[\mathbf{{z}}_{i}^{n,w}]_{i}\in\mathbb{R}^{m}$, and does not have to keep track of any part of the vectors $\mathbf{{z}}_{((i,j),\bar{k})}^{n,w}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$ for $((i,j),\bar{k})\in\bar{\mathcal{E}}$. The same is true for Algorithms \[alg:Ext-Dyk\] and \[alg:subdiff-subalg\] here. The reason for introducing $\mathbf{{z}}_{((i,j),\bar{k})}^{n,w}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$ is that the proof of the convergence result in Theorem \[thm:convergence\] needs , which in turn needs the variables $\mathbf{{z}}_{((i,j),\bar{k})}^{n,w}$. \[exa:partial-comms\](Partial communication of data) Fix some $(i,j)\in\mathcal{E}$ and some set $\bar{K}\subset\{1,\dots,m\}$.
Suppose the set $S_{n,w}$ is chosen to be $\{((i,j),\bar{k}):\bar{k}\in\bar{K}\}$. Then $\mathbf{{x}}^{n,w}$ is obtained from , which tells us that $\mathbf{{x}}^{n,w}$ is the projection of $[\bar{\mathbf{{x}}}-\sum_{\alpha\notin S_{n,w}}\mathbf{{z}}_{\alpha}^{n,w-1}]$ onto $\cap_{((i,j),\bar{k}):\bar{k}\in\bar{K}}H_{((i,j),\bar{k})}$. Since $H_{((i,j),\bar{k})}$ are all affine spaces with normals $\mathbf{{z}}_{((i,j),\bar{k})}^{n,w-1}$, $\mathbf{{x}}^{n,w}$ is also the projection of $$\begin{array}{c} \bar{\mathbf{{x}}}-\underset{\alpha\notin S_{n,w}}{\sum}\mathbf{{z}}_{\alpha}^{n,w-1}-\underset{\alpha\in S_{n,w}}{\sum}\mathbf{{z}}_{\alpha}^{n,w-1},\end{array}$$ or $\mathbf{{x}}^{n,w-1}$, onto $\cap_{((i,j),\bar{k}):\bar{k}\in\bar{K}}H_{((i,j),\bar{k})}$. This gives $$[[\mathbf{{x}}^{n,w}]_{i'}]_{k'}=\begin{cases} \frac{1}{2}([[\mathbf{{x}}^{n,w-1}]_{i}]_{k'}+[[\mathbf{{x}}^{n,w-1}]_{j}]_{k'}) & \mbox{ if }i'\in\{i,j\}\mbox{ and }k'\in\bar{K}\\{} [[\mathbf{{x}}^{n,w-1}]_{i'}]_{k'} & \mbox{ otherwise.} \end{cases}$$ As mentioned in Remark \[rem:Irrelevance-of-z\], there is no need to keep track of the dual variables $\mathbf{{z}}_{((i,j),\bar{k})}^{n,w}$ to run Algorithm \[alg:Ext-Dyk\]. So the larger $\bar{K}$ is, the more variables are updated. Thus in Algorithm \[alg:Ext-Dyk\], computations can be performed continuously even when not all the data is communicated. In other words, communications will not be a bottleneck for Algorithm \[alg:Ext-Dyk\]. Subroutine for subdifferentiable functions ------------------------------------------ If $\mathcal{V}_{4}=\emptyset$, then Algorithm \[alg:Ext-Dyk\] largely coincides with the algorithm in [@Pang_Dist_Dyk] because there are no subdifferentiable functions. In this subsection, we present and derive Algorithm \[alg:subdiff-subalg\], which is a subroutine within Algorithm \[alg:Ext-Dyk\] to handle subdifferentiable functions. We state some notation necessary for further discussions.
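The closed-form coordinate-averaging update derived in Example \[exa:partial-comms\] can be sketched as follows; the dictionary-based storage and function names are illustrative only, not the paper's implementation:

```python
# Sketch of the update in Example [exa:partial-comms]: projecting onto
# the intersection of H_((i,j),kbar) for kbar in Kbar averages the
# coordinates in Kbar between nodes i and j, and changes nothing else.
# x is stored as {node index: list of m floats}.

def partial_average(x, i, j, Kbar):
    """Average coordinates in Kbar between nodes i and j, in place."""
    for k in Kbar:
        avg = 0.5 * (x[i][k] + x[j][k])
        x[i][k] = avg
        x[j][k] = avg
    return x

x = {0: [1.0, 4.0, 0.0], 1: [3.0, 2.0, 5.0], 2: [7.0, 7.0, 7.0]}
partial_average(x, 0, 1, Kbar={0, 1})
assert x[0] == [2.0, 3.0, 0.0]   # coordinates 0, 1 averaged with node 1
assert x[1] == [2.0, 3.0, 5.0]   # coordinate 2 untouched
assert x[2] == [7.0, 7.0, 7.0]   # other nodes untouched
```

The larger the set `Kbar` is, the more coordinates this single step updates, matching the remark above that partial communication only reduces how much of the iterate moves, not the correctness of the step.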
For any $\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}$ and $n\in\{1,2,\dots\}$, let $p(n,\alpha)$ be $$p(n,\alpha):=\max\{w':w'\leq\bar{w},\alpha\in S_{n,w'}\}.$$ In other words, $p(n,\alpha)$ is the index $w'$ such that $\alpha\in S_{n,w'}$ but $\alpha\notin S_{n,k}$ for all $k\in\{w'+1,\dots,\bar{w}\}$. It follows from line 14 in Algorithm \[alg:Ext-Dyk\] that $$\mathbf{{z}}_{\alpha}^{n,p(n,\alpha)}=\mathbf{{z}}_{\alpha}^{n,p(n,\alpha)+1}=\cdots=\mathbf{{z}}_{\alpha}^{n,\bar{w}}\mbox{ for all }\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}.\label{eq:stagnant-indices}$$ Moreover, $((i,j),\bar{k})\notin\bar{\mathcal{E}}_{n}$ implies $((i,j),\bar{k})\notin S_{n,w}$ for all $w\in\{1,\dots,\bar{w}\}$, so $$0\overset{\scriptsize\eqref{eq:reset-z-i-j-1}}{=}\mathbf{{z}}_{((i,j),\bar{k})}^{n,0}=\mathbf{{z}}_{((i,j),\bar{k})}^{n,1}=\cdots=\mathbf{{z}}_{((i,j),\bar{k})}^{n,\bar{w}}\mbox{ for all }((i,j),\bar{k})\in\bar{\mathcal{E}}\backslash\bar{\mathcal{E}}_{n}.\label{eq:zero-indices}$$ We present Algorithm . \[alg:subdiff-subalg\](Subalgorithm for subdifferentiable functions) This algorithm is run when line 9 of Algorithm \[alg:Ext-Dyk\] is reached. Suppose $S_{n,w}\subset\mathcal{V}_{4}$ and Assumption \[assu:to-start-subalg\] holds. 
01 For each $i\in S_{n,w}$ 02 $\quad$For $\tilde{f}_{i,n,w-1}(\cdot)$ defined in , consider $$\begin{array}{c} \underset{x\in\mathbb{R}^{m}}{\min}\left[\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}]_{i}-x\|^{2}+\max\{f_{i,n,w-1},\tilde{f}_{i,n,w-1}\}(x)\right],\end{array}\label{eq:alg-primal-subpblm}$$ 03 $\quad$Let the primal and dual solutions of be $x_{i}^{+}$ and $z_{i}^{+}$. 04 $\quad$Define $f_{i,n,w}:\mathbb{R}^{m}\to\mathbb{R}$ to be the affine function $$f_{i,n,w}(x):=f_{i,n,w-1}(x_{i}^{+})+\langle x-x_{i}^{+},[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}]_{i}-x_{i}^{+}\rangle.\label{eq:def-f-i-n-w}$$ 05 $\quad$In other words, $f_{i,n,w}(\cdot)$ is chosen so that the $\qquad\qquad$primal and dual optimizers coincide with those of $$\begin{array}{c} \underset{x\in\mathbb{R}^{m}}{\min}\left[\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}]_{i}-x\|^{2}+f_{i,n,w}(x)\right].\end{array}\label{eq:finw-design}$$ 06 $\quad$Define the function $\mathbf{f}_{i,n,w}:[\mathbb{R}^{m}]^{|\mathcal{V}|}\to\mathbb{R}$ and $\qquad\qquad$the dual vector $\mathbf{{z}}_{i}^{n,w}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$ to be $$\mathbf{f}_{i,n,w}(\mathbf{{x}}):=f_{i,n,w}([\mathbf{{x}}]_{i})\mbox{ and }[\mathbf{{z}}_{i}^{n,w}]_{j}:=\begin{cases} z_{i}^{+} & \mbox{ if }j=i\\ 0 & \mbox{ if }j\neq i. \end{cases}\label{eq:def-z-i}$$ 07 End For 08 For all $i\in\mathcal{V}_{4}\backslash S_{n,w}$, set $\mathbf{f}_{i,n,w}(\cdot)=\mathbf{f}_{i,n,w-1}(\cdot)$. We make three assumptions that will be needed for the proof of convergence of Theorem \[thm:convergence\]. \[assu:to-start-subalg\](Start of Algorithm \[alg:subdiff-subalg\]) Recall that at the start of Algorithm \[alg:subdiff-subalg\], $S_{n,w}\subset\mathcal{V}_{4}$. We make three assumptions. 1. Suppose $(n,w)$ is such that $w>1$ and $S_{n,w}\subset\mathcal{V}_{4}$ so that Algorithm \[alg:subdiff-subalg\] is invoked.
Then for all $i\in S_{n,w}$, $[\mathbf{{z}}_{i}^{n,w-1}]_{i}\in\mathbb{R}^{m}$ is the optimizer of the problem $$\begin{array}{c} \underset{z\in\mathbb{R}^{m}}{\min}\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}]_{i}-z\|^{2}+f_{i,n,w-1}^{*}(z).\end{array}\label{eq:multi-node-start}$$ In other words, let $w_{i}\geq1$ be the largest $w'$ such that $i\in S_{n,w'}$ and $i\notin S_{n,\tilde{w}}$ for all $\tilde{w}\in\{w'+1,w'+2,\dots,w-1\}$. Then for all $\tilde{w}\in\{w_{i}+1,\dots,w-1\}$, $(e,\bar{k})\notin S_{n,\tilde{w}}$ if $i$ is an endpoint of $e$. 2. Suppose that for all $i\in\mathcal{V}_{4}$ and $\tilde{w}\in\{p(n,i)+1,\dots,\bar{w}\}$, $(e,\bar{k})\notin S_{n,\tilde{w}}$ if $i$ is an endpoint of $e$. (This implies $[\mathbf{x}^{n,p(n,i)}]_{i}=[\mathbf{x}^{n,\bar{w}}]_{i}$.) 3. Suppose that $S_{n,1}=\mathcal{V}_{4}$ for all $n>1$. We need Assumption \[assu:to-start-subalg\](1) for Proposition \[prop:quad-dec-case-2\], which is in turn needed for the proof of Theorem \[thm:convergence\](i). We need Assumption \[assu:to-start-subalg\](2) so that the analogue of Lemma \[lem:alpha-recurrs\](1) holds, which in turn is used in the proof of Theorem \[thm:convergence\](iv). Also, Assumption \[assu:to-start-subalg\](1) is satisfied whenever $S_{n,w}\subset\mathcal{V}_{4}$ implies $S_{n,w}\subset S_{n,w-1}$. (On the problem ) Consider first the case where $S_{n,w}=\{i\}$ with $i\in\mathcal{V}_{4}$. If $i$ were in $\mathcal{V}\backslash\mathcal{V}_{4}$ instead, $\mathbf{{z}}_{i}^{n,w}$ would be the minimizer of $$\begin{array}{c} \underset{\mathbf{{z}}_{i}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}}{\min}\frac{1}{2}\bigg\|\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}-\underset{j\in\mathcal{V}\backslash\{i\}}{\sum}\mathbf{{z}}_{j}^{n,w-1}-\mathbf{{z}}_{i}\bigg\|^{2}+\mathbf{f}_{i}^{*}(\mathbf{{z}}_{i}).\end{array}\label{eq:block-dual}$$ When $i\in\mathcal{V}_{4}$, we use $f_{i,n,w-1}(\cdot)$, where $f_{i,n,w-1}(\cdot)\leq f_{i}(\cdot)$, instead of $f_{i}(\cdot)$.
This gives $f_{i,n,w-1}^{*}(\cdot)\geq f_{i}^{*}(\cdot)$. Instead of , we now have $$\begin{array}{c} \underset{\mathbf{{z}}_{i}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}}{\min}\frac{1}{2}\bigg\|\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}-\underset{j\in\mathcal{V}\backslash\{i\}}{\sum}\mathbf{{z}}_{j}^{n,w-1}-\mathbf{{z}}_{i}\bigg\|^{2}+\mathbf{f}_{i,n,w-1}^{*}(\mathbf{{z}}_{i}).\end{array}\label{eq:dual-of-approx}$$ The dual of is (up to a constant independent of $\mathbf{{x}}$) $$\begin{array}{c} \underset{\mathbf{{x}}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}}{\min}\frac{1}{2}\bigg\|\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}-\underset{j\in\mathcal{V}\backslash\{i\}}{\sum}\mathbf{{z}}_{j}^{n,w-1}-\mathbf{{x}}\bigg\|^{2}+\mathbf{f}_{i,n,w-1}(\mathbf{{x}}).\end{array}\label{eq:dual-of-dual-subpblm}$$ Since $\mathbf{{z}}_{i}^{n,w-1}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$ and $\mathbf{{z}}_{i}^{n,w}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$ are such that the components in $\mathcal{V}\backslash\{i\}$ are all zero by Proposition \[prop:sparsity\], the problem reduces to $$\begin{array}{c} \underset{x\in\mathbb{R}^{m}}{\min}\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}]_{i}-x\|^{2}+f_{i,n,w-1}(x).\end{array}\label{eq:unmod-alg-primal}$$ Suppose that the minimizer of is $\mathbf{{z}}_{i}^{n,w-1}$, which is the case when Assumption \[assu:to-start-subalg\](1) holds. Then the minimizer of is $[\mathbf{{x}}^{n,w-1}]_{i}$, which is also $[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}-\mathbf{{z}}_{i}^{n,w-1}]_{i}$ by . Construct $\tilde{f}_{i,n,w-1}:\mathbb{R}^{m}\to\mathbb{R}$ by $$\tilde{f}_{i,n,w-1}(x):=f_{i}([\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}-\mathbf{{z}}_{i}^{n,w-1}]_{i})+\langle s,x-[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}-\mathbf{{z}}_{i}^{n,w-1}]_{i}\rangle,\label{eq:linearize-f-i-n-w}$$ where $s\in\partial f_{i}([\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}-\mathbf{{z}}_{i}^{n,w-1}]_{i})$. The primal problem that we now consider is .
$\hfill\Delta$ (On the condition $S_{n,1}=\mathcal{V}_{4}$) Throughout this paper, we assume $S_{n,1}=\mathcal{V}_{4}$ as in Assumption \[assu:to-start-subalg\]. Algorithm \[alg:Ext-Dyk\] with this condition would not be truly asynchronous, but the condition is relatively easy to enforce. One way to enforce it is to use a global clock. Another way is to use the sparsity of $\mathbf{z}_{\alpha}$ in Proposition \[prop:sparsity\]. Suppose that $\{S_{n,w}\}_{w=1}^{\bar{w}}$ is such that for all $i\in\mathcal{V}_{4}$, $S_{n,w_{i}}=\{i\}$ for some $w_{i}\in\{1,\dots,\bar{w}\}$. Suppose also that for all $i,j\in\mathcal{V}_{4}$ such that $w_{i}<w_{j}$: - ($\star$) There is no $(e,\bar{k})\in\bar{\mathcal{E}}$ such that $i$ and $j$ are the two endpoints of $e$ and $(e,\bar{k})\in S_{n,w'}$ for some $w'$ such that $w_{i}<w'<w_{j}$. If condition ($\star$) holds for some $i,j\in\mathcal{V}_{4}$, then the sparsity of $\mathbf{z}_{\alpha}^{n,w}$ implies that if we changed from $S_{n,w_{i}}=\{i\}$ and $S_{n,w_{j}}=\{j\}$ to $S_{n,w_{i}}=\{i,j\}$ and $S_{n,w_{j}}=\emptyset$, then the iterates $\{\mathbf{x}^{n,w}\}_{w}$ obtained would remain the same. It is possible to ensure ($\star$) for all $i,j\in\mathcal{V}_{4}$ using a signal from a fixed node in $\mathcal{V}$ propagated as the computations in the algorithm are carried out. As mentioned in Remark \[rem:min-max-of-2-quads\], the problem is still easy to solve if $f_{i,n,w-1}(\cdot)$ and $\tilde{f}_{i,n,w-1}(\cdot)$ are affine functions with the known parameters $[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}]_{i}$ and $\mathbf{{z}}_{i}^{n,w-1}$. Next, for the primal optimizer $x_{i}^{+}$ defined in line 3 of Algorithm \[alg:subdiff-subalg\], we can construct the affine function $f_{i,n,w}:\mathbb{R}^{m}\to\mathbb{R}$ to be such that $$\begin{array}{c} \underset{x\in\mathbb{R}^{m}}{\min}\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}]_{i}-x\|^{2}+f_{i,n,w}(x)\end{array}$$ has the same minimizer and objective value as . The function $f_{i,n,w}(\cdot)$ can be checked to be .
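The reason this affine replacement preserves the prox output is elementary: for an affine model with gradient $c-x^{+}$, the minimizer of $\frac{1}{2}\|c-x\|^{2}$ plus the model is $c-(c-x^{+})=x^{+}$. A small sketch with illustrative values:

```python
# Why the affine model f_{i,n,w} in the construction above preserves
# the prox output: for an affine function f(x) = v + <g, x - x_plus>
# with gradient g = c - x_plus, the minimizer of
#   (1/2)||c - x||^2 + f(x)
# is x = c - g, which equals x_plus.  Values below are illustrative.

def prox_affine(c, g, m):
    """Minimizer of (1/2)||c - x||^2 + <g, x> + const, componentwise."""
    return [c[k] - g[k] for k in range(m)]

c = [2.0, -1.0, 0.5]
x_plus = [1.0, 0.0, 0.25]                 # prox output of the max-type model
g = [c[k] - x_plus[k] for k in range(3)]  # gradient of the affine model
assert prox_affine(c, g, 3) == x_plus     # the prox output is preserved
```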
It is easy to see that $f_{i,n,w}(\cdot)\leq\max\{f_{i,n,w-1}(\cdot),\tilde{f}_{i,n,w-1}(\cdot)\}$. Since $f_{i,n,w-1}(\cdot)$ and $\tilde{f}_{i,n,w-1}(\cdot)$ are both by definition lower approximations of $f_{i}(\cdot)$, the function $f_{i,n,w}(\cdot)$ is also a lower approximation of $f_{i}(\cdot)$. The function $\mathbf{f}_{i,n,w}:[\mathbb{R}^{m}]^{|\mathcal{V}|}\to\mathbb{R}$ is constructed to be $$\mathbf{f}_{i,n,w}(\mathbf{{x}})=f_{i,n,w}([\mathbf{{x}}]_{i}).$$ The $\mathbf{{z}}_{i}^{n,w}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$ defined by is then the optimal solution of the dual problem $$\begin{array}{c} \underset{\mathbf{{z}}_{i}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}}{\min}\frac{1}{2}\bigg\|\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}-\underset{j\in\mathcal{V}\backslash\{i\}}{\sum}\mathbf{{z}}_{j}^{n,w-1}-\mathbf{{z}}_{i}\bigg\|^{2}+\mathbf{f}_{i,n,w}^{*}(\mathbf{{z}}_{i}).\end{array}$$ (Similarities to the one node case) Note that the problem corresponds to , the function to , the problem to , and the function $h_{w+1}(\cdot)$ in line 9 of Algorithm \[alg:basic-dual-ascent\] to . One way to understand Proposition \[prop:P-D-of-one-block\] is to see that any change in the primal objective value gives the same change in the dual objective value. We have the following result. \[prop:quad-dec-case-2\]Suppose $(n,w)$ is such that $w>1$ and $S_{n,w}\subset\mathcal{V}_{4}$ so that Algorithm \[alg:subdiff-subalg\] is run, and Assumption \[assu:to-start-subalg\](1) holds.
Then we have $$\begin{array}{c} \frac{1}{2}\|\mathbf{{x}}^{n,w}-\mathbf{{x}}^{n,w-1}\|^{2}\leq F_{n,w}(\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})-F_{n,w-1}(\{\mathbf{{z}}_{\alpha}^{n,w-1}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}).\end{array}\label{eq:quad-dec-case-2}$$ Similarly, if Assumption \[assu:to-start-subalg\](2) and (3) hold, then $$\begin{array}{c} \frac{1}{2}\|\mathbf{{x}}^{n,1}-\mathbf{{x}}^{n,0}\|^{2}\leq F_{n,1}(\{\mathbf{{z}}_{\alpha}^{n,1}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})-F_{n,0}(\{\mathbf{{z}}_{\alpha}^{n,0}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}).\end{array}\label{eq:quad-dec-case-3}$$ Recall Proposition \[prop:sparsity\] on the sparsity of the $\mathbf{{z}}_{i}^{n,w}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$. Recall that in line 3 of Algorithm \[alg:subdiff-subalg\] the primal and dual optimal solutions of are $x_{i}^{+}$ and $z_{i}^{+}$. We can see that $x_{i}^{+}=[\mathbf{{x}}^{n,w}]_{i}$ and $z_{i}^{+}=[\mathbf{{z}}_{i}^{n,w}]_{i}$. Let the dual and primal optimal solutions of be $z_{i}^{\circ}$ and $x_{i}^{\circ}$, which are $z_{i}^{\circ}=[\mathbf{{z}}_{i}^{n,w-1}]_{i}$ and $x_{i}^{\circ}=[\mathbf{{x}}^{n,w-1}]_{i}$ respectively. By Proposition \[prop:P-D-of-one-block\] and the forms of the problems and , we have $x_{i}^{+}+z_{i}^{+}=x_{i}^{\circ}+z_{i}^{\circ}$. Thus $z_{i}^{+}-z_{i}^{\circ}=-(x_{i}^{+}-x_{i}^{\circ})$. In other words, $$[\mathbf{{x}}^{n,w}-\mathbf{{x}}^{n,w-1}]_{i}=-[\mathbf{{z}}_{i}^{n,w}-\mathbf{{z}}_{i}^{n,w-1}]_{i}.\label{eq:for-beta-identity}$$ Note that since $S_{n,w}\cap\bar{\mathcal{E}}=\emptyset$, $\mathbf{{v}}_{H}^{n,w}=\mathbf{{v}}_{H}^{n,w-1}$. We have the following inequality chain, which we explain in a moment. 
$$\begin{aligned} & & \begin{array}{c} f_{i,n,w}([\mathbf{{x}}^{n,w}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}-[\mathbf{{x}}^{n,w}]_{i}\|^{2}\end{array}\label{eq:for-beta-chain}\\ & = & \begin{array}{c} f_{i,n,w-1}([\mathbf{{x}}^{n,w}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}-[\mathbf{{x}}^{n,w}]_{i}\|^{2}\end{array}\nonumber \\ & \geq & \begin{array}{c} f_{i,n,w-1}([\mathbf{{x}}^{n,w-1}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}-[\mathbf{{x}}^{n,w-1}]_{i}\|^{2}+\frac{1}{2}\|[\mathbf{{x}}^{n,w-1}]_{i}-[\mathbf{{x}}^{n,w}]_{i}\|^{2}.\end{array}\nonumber \end{aligned}$$ The equality in holds because the minimizer $[\mathbf{{x}}^{n,w}]_{i}$ of satisfies $f_{i,n,w-1}([\mathbf{{x}}^{n,w}]_{i})=\tilde{f}_{i,n,w-1}([\mathbf{{x}}^{n,w}]_{i})$, and because $f_{i,n,w}(\cdot)$ is designed through so that $f_{i,n,w}([\mathbf{{x}}^{n,w}]_{i})=f_{i,n,w-1}([\mathbf{{x}}^{n,w}]_{i})$. The inequality in follows from the design of $f_{i,n,w-1}(\cdot)$ through , which implies that $[\mathbf{{x}}^{n,w-1}]_{i}$ is the minimizer of the strongly convex function $f_{i,n,w-1}(\cdot)+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}-\cdot\|^{2}$; here we used $\mathbf{{v}}_{H}^{n,w-1}=\mathbf{{v}}_{H}^{n,w}$, which holds since $S_{n,w}\cap\bar{\mathcal{E}}=\emptyset$.
Let $\beta_{i}$ be defined by $$\begin{aligned} \beta_{i} & := & \begin{array}{c} \left(f_{i,n,w-1}^{*}([\mathbf{{z}}_{i}^{n,w-1}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w-1}]_{i}-[\mathbf{{z}}_{i}^{n,w-1}]_{i}\|^{2}\right)\end{array}\label{eq:beta-form}\\ & & \begin{array}{c} -\big(f_{i,n,w}^{*}([\mathbf{{z}}_{i}^{n,w}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}-[\mathbf{{z}}_{i}^{n,w}]_{i}\|^{2}\big).\end{array}\nonumber \end{aligned}$$ Proposition \[prop:P-D-of-one-block\] implies that $$\begin{aligned} & & \begin{array}{c} f_{i,n,w}^{*}([\mathbf{{z}}_{i}^{n,w}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}-[\mathbf{{z}}_{i}^{n,w}]_{i}\|^{2}\end{array}\label{eq:use-prop}\\ & = & \begin{array}{c} -f_{i,n,w}([\mathbf{{x}}^{n,w}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}\|^{2}-\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}-[\mathbf{{x}}^{n,w}]_{i}\|^{2}.\end{array}\nonumber \end{aligned}$$ Substituting and its analogue involving $f_{i,n,w-1}(\cdot)$ into , and using , and the fact that $\mathbf{{v}}_{H}^{n,w}=\mathbf{{v}}_{H}^{n,w-1}$, we obtain $\beta_{i}\geq\frac{1}{2}\|[\mathbf{{z}}_{i}^{n,w-1}-\mathbf{{z}}_{i}^{n,w}]_{i}\|^{2}$. One can easily check from the definitions that $$\begin{array}{c} F_{n,w}(\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})-F_{n,w-1}(\{\mathbf{{z}}_{\alpha}^{n,w-1}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})=\underset{i\in S_{n,w}}{\sum}\beta_{i},\end{array}$$ which leads to our result. The proof of the second statement is exactly the same. We remark on the design of Algorithm \[alg:Ext-Dyk\]. (On improving the affine models) In our design of Algorithm \[alg:Ext-Dyk\], we improve the affine model $f_{i,n,w}(\cdot)$ for $i\in\mathcal{V}_{4}$ only if $S_{n,w}\subset\mathcal{V}_{4}$.
It is easy to see that we can apply the observation in Remark \[rem:min-max-of-2-quads\] to minimize the maximum of two quadratics analytically, but doing so without Assumption \[assu:to-start-subalg\] would affect the convergence proof. Further new steps in convergence proof -------------------------------------- Since the proof of convergence shares many similarities with the original proof in [@Pang_Dist_Dyk], we describe in this subsection the new steps that were not already covered there and defer the rest of the proof to the appendix. Recall the definition of $\mathbf{f}_{\alpha,n,w}(\cdot)$ in . We have the following easy claim. \[claim:Fenchel-duality\]In Algorithm \[alg:Ext-Dyk\], for all $\alpha\in S_{n,w}$, we have 1. $-\mathbf{{x}}^{n,w}+\partial\mathbf{f}_{\alpha,n,w}^{*}(\mathbf{{z}}_{\alpha}^{n,w})\ni0$, 2. $-\mathbf{{z}}_{\alpha}^{n,w}+\partial\mathbf{f}_{\alpha,n,w}(\mathbf{{x}}^{n,w})\ni0$, and 3. $\mathbf{f}_{\alpha,n,w}(\mathbf{{x}}^{n,w})+\mathbf{f}_{\alpha,n,w}^{*}(\mathbf{{z}}_{\alpha}^{n,w})=\langle\mathbf{{x}}^{n,w},\mathbf{{z}}_{\alpha}^{n,w}\rangle$. There are two cases. The first case is when is invoked. By taking the optimality conditions in with respect to $z_{\alpha}$ for $\alpha\in S_{n,w}$ and making use of to get $\mathbf{{x}}^{n,w}=\bar{\mathbf{{x}}}-\sum_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}\mathbf{{z}}_{\alpha}^{n,w}$, we deduce (a). The second case is when Algorithm \[alg:subdiff-subalg\] is invoked, and is similar. The equivalence of (a), (b) and (c) is standard. For all valid $(n,w)$, since $\mathbf{f}_{\alpha,n,w}(\cdot)\leq\mathbf{f}_{\alpha}(\cdot)$ for all $\alpha\in\mathcal{V}_{4}$, we have $\mathbf{f}_{\alpha,n,w}^{*}(\cdot)\geq\mathbf{f}_{\alpha}^{*}(\cdot)$.
Let $D_{\alpha,n}$ and $E_{\alpha,n}$ be defined to be \[eq\_m:error-def\] $$\begin{aligned} D_{\alpha,n} & := & \mathbf{f}_{\alpha,n,p(n,\alpha)}^{*}(\mathbf{{z}}_{\alpha}^{n,p(n,\alpha)})-\mathbf{f}_{\alpha}^{*}(\mathbf{{z}}_{\alpha}^{n,p(n,\alpha)})\geq0\label{eq:D-error-def}\\ \mbox{and }E_{\alpha,n} & := & \mathbf{f}_{\alpha}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n,p(n,\alpha)})-\mathbf{f}_{\alpha,n,p(n,\alpha)}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n,p(n,\alpha)})\geq0.\label{eq:E-error-def}\end{aligned}$$ When $\alpha\in[\bar{\mathcal{E}}\cup\mathcal{V}]\backslash\mathcal{V}_{4}$, we have $E_{\alpha,n}=D_{\alpha,n}=0$ for all $n$. Next, we have $$\begin{aligned} & & \mathbf{f}_{\alpha}^{*}(\mathbf{{z}}_{\alpha}^{n,p(n,\alpha)})+\mathbf{f}_{\alpha}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n,p(n,\alpha)})\label{eq:error-deriv}\\ & \overset{\eqref{eq_m:error-def}}{=} & \mathbf{f}_{\alpha,n,p(n,\alpha)}^{*}(\mathbf{{z}}_{\alpha}^{n,p(n,\alpha)})+\mathbf{f}_{\alpha,n,p(n,\alpha)}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n,p(n,\alpha)})+E_{\alpha,n}-D_{\alpha,n}\nonumber \\ & \overset{\scriptsize{\alpha\in S_{n,p(n,\alpha)},\mbox{ Claim \ref{claim:Fenchel-duality}}}}{=} & \langle\mathbf{{z}}_{\alpha}^{n,p(n,\alpha)},\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n,p(n,\alpha)}\rangle+E_{\alpha,n}-D_{\alpha,n}.\nonumber \end{aligned}$$ We now state the main convergence theorem of this paper. \[thm:convergence\] (Convergence to primal minimizer) Consider Algorithm \[alg:Ext-Dyk\]. Assume that the problem is feasible, and for all $n\geq1$, $\bar{\mathcal{E}}_{n}=[\cup_{w=1}^{\bar{w}}S_{n,w}]\cap\bar{\mathcal{E}}$, and $[\cup_{w=1}^{\bar{w}}S_{n,w}]\supset\mathcal{V}$. Suppose that Assumption \[assu:to-start-subalg\] holds.
For the sequence $\{\mathbf{{z}}_{\alpha}^{n,w}\}_{{1\leq n<\infty\atop 0\leq w\leq\bar{w}}}\subset[\mathbb{R}^{m}]^{|\mathcal{V}|}$ for each $\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}$ generated by Algorithm \[alg:Ext-Dyk\] and the sequences $\{\mathbf{{v}}_{H}^{n,w}\}_{{1\leq n<\infty\atop 0\leq w\leq\bar{w}}}\subset[\mathbb{R}^{m}]^{|\mathcal{V}|}$ and $\{\mathbf{{v}}_{A}^{n,w}\}_{{1\leq n<\infty\atop 0\leq w\leq\bar{w}}}\subset[\mathbb{R}^{m}]^{|\mathcal{V}|}$ thus derived, we have: (i) The sum $\sum_{n=1}^{\infty}\sum_{w=1}^{\bar{w}}\|\mathbf{{v}}_{A}^{n,w}-\mathbf{{v}}_{A}^{n,w-1}\|^{2}$ is finite and $\{F_{n,\bar{w}}(\{\mathbf{{z}}_{\alpha}^{n,\bar{w}}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\}_{n=1}^{\infty}$ is nondecreasing. (ii) There is a constant $C$ such that $\|\mathbf{{v}}_{A}^{n,w}\|^{2}\leq C$ for all $n\in\mathbb{N}$ and $w\in\{1,\dots,\bar{w}\}$. (iii) For all $i\in\mathcal{V}_{3}\cup\mathcal{V}_{4}$, $n\geq1$ and $w\in\{1,\dots,\bar{w}\}$, the vectors $\mathbf{{z}}_{i}^{n,w}$ are bounded. Suppose also that there are constants $A$ and $B$ such that $$\sum_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}\|\mathbf{{z}}_{\alpha}^{n,\bar{w}}\|\leq A\sqrt{n}+B\mbox{ for all }n\geq0.\label{eq:sqrt-growth-sum-z}$$ Then: (iv) For all $\alpha\in[\bar{\mathcal{E}}\cup\mathcal{V}]\backslash\mathcal{V}_{4}$, we have $E_{\alpha,n}=0$. Also, for all $i\in\mathcal{V}_{4}$, we have $\lim_{n\to\infty}E_{i,n}=0$. (v) There exists a subsequence $\{\mathbf{{v}}_{A}^{n_{k},\bar{w}}\}_{k=1}^{\infty}$ of $\{\mathbf{{v}}_{A}^{n,\bar{w}}\}_{n=1}^{\infty}$ which converges to some $\mathbf{{v}}_{A}^{*}\in[\mathbb{R}^{m}]^{|\mathcal{V}|}$ and such that $$\lim_{k\to\infty}\langle\mathbf{{v}}_{A}^{n_{k},\bar{w}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},\alpha)},\mathbf{{z}}_{\alpha}^{n_{k},\bar{w}}\rangle=0\mbox{ for all }\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}.$$ (vi) Let $\mathbf{f}(\cdot)=\sum_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}\mathbf{f}_{\alpha}(\cdot)$.
For the $\mathbf{{v}}_{A}^{*}$ in (v), $\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}$ is the minimizer of the primal problem and $$\lim_{k\to\infty}F_{n_{k},w}(\{\mathbf{{z}}_{\alpha}^{n_{k},w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})=\lim_{k\to\infty}F(\{\mathbf{{z}}_{\alpha}^{n_{k},w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})=\frac{1}{2}\|\mathbf{{v}}_{A}^{*}\|^{2}+\mathbf{f}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}).\label{eq:thm-iv-concl}$$ The properties (i) to (vi) in turn imply that $\lim_{n\to\infty}\mathbf{{x}}^{n,\bar{w}}$ exists and equals $\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}$, which is the primal minimizer of . The proofs of parts (i), (ii), (v) and (vi) are similar to the proof in [@Pang_Dist_Dyk], and (iii) and (iv) are new. We shall prove (iii) and (iv) here and defer the rest of the proof to the appendix. \[Proof of Theorem \[thm:convergence\](iii)\]In view of line 14 in Algorithm \[alg:Ext-Dyk\], it suffices to prove that $\mathbf{{z}}_{i}^{n,w}$ is bounded if $i\in S_{n,w}$. By the sparsity pattern in Proposition \[prop:sparsity\], for each $i\in\mathcal{V}_{3}\cup\mathcal{V}_{4}$, $\mathbf{{z}}_{i}^{n,w}$ is bounded if and only if $[\mathbf{{z}}_{i}^{n,w}]_{i}$ is bounded. Since $\{[\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n,w}]_{i}\}_{{1\leq n<\infty\atop 0\leq w\leq\bar{w}}}$ is bounded by (ii), it is clear that $\{[\mathbf{{z}}_{i}^{n,w}]_{i}\}_{{1\leq n<\infty\atop 0\leq w\leq\bar{w}}}$ is bounded if and only if $\{[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}\}_{{1\leq n<\infty\atop 0\leq w\leq\bar{w}}}$ is bounded. Seeking a contradiction, suppose $\{[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}\}_{{1\leq n<\infty\atop 0\leq w\leq\bar{w}}}$ is unbounded. We look at the problem $$\begin{array}{c} \underset{x\in\mathbb{R}^{m}}{\min}\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}-x\|^{2}+f_{i}(x)\end{array}\label{eq:in-pf-primal-1}$$ and consider two possibilities. Let $\tilde{x}_{i}^{n,w}$ be the primal solution to .
Note that if $i\in\mathcal{V}_{3}$, then $\tilde{x}_{i}^{n,w}$ is $[\mathbf{{x}}^{n,w}]_{i}$. If the $\{\tilde{x}_{i}^{n,w}\}_{n,w}$ are bounded, then the dual solution of is $[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}-\tilde{x}_{i}^{n,w}$, which will be unbounded. A standard compactness argument shows that there is a point $\tilde{x}\in\mathbb{R}^{m}$ for which the set $\partial f_{i}(\tilde{x})$ is unbounded, which contradicts ${\mbox{\rm dom}}(f_{i})=\mathbb{R}^{m}$. If the corresponding primal solutions $\tilde{x}_{i}^{n,w}$ are unbounded, consider $\{\tilde{\mathbf{{z}}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}$, where $$\tilde{\mathbf{{z}}}_{\alpha}^{n,w}=\mathbf{{z}}_{\alpha}^{n,w}\mbox{ if }\alpha\in[\bar{\mathcal{E}}\cup\mathcal{V}]\backslash\{i\}\mbox{ and }[\tilde{\mathbf{{z}}}_{i}^{n,w}]_{j}=\begin{cases} [\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,w}]_{i}-\tilde{x}_{i}^{n,w} & \mbox{ if }j=i\\ 0 & \mbox{ otherwise. } \end{cases}$$ Let $\tilde{F}_{n,w}(\cdot)$ be defined to be $$\begin{array}{c} \tilde{F}_{n,w}(\{\mathbf{{z}}_{\alpha}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}):=-\frac{1}{2}\left\Vert \bar{\mathbf{{x}}}-\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{{z}}_{\alpha}\right\Vert ^{2}+\frac{1}{2}\|\bar{\mathbf{{x}}}\|^{2}-\underset{\alpha\in[\bar{\mathcal{E}}\cup\mathcal{V}]\backslash\{i\}}{\sum}\mathbf{f}_{\alpha,n,w}^{*}(\mathbf{{z}}_{\alpha})-\mathbf{f}_{i}^{*}(\mathbf{{z}}_{i}).\end{array}\label{eq:Dykstra-dual-defn-2}$$ Then $F_{n,w}(\cdot)\leq\tilde{F}_{n,w}(\cdot)\leq F(\cdot)$. Also, Proposition \[prop:P-D-of-one-block\] shows that $[\tilde{\mathbf{{z}}}_{i}^{n,w}]_{i}$ is the dual solution to .
So $$F_{n,w}(\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\leq\tilde{F}_{n,w}(\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\leq\tilde{F}_{n,w}(\{\tilde{\mathbf{{z}}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\leq F(\{\tilde{\mathbf{{z}}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}).\label{eq:big-F-chain}$$ Next, suppose $\mathbf{{x}}^{*}$ is a solution of . Then $$\begin{aligned} & & \begin{array}{c} \frac{1}{2}\|\bar{\mathbf{{x}}}-\mathbf{{x}}^{*}\|^{2}+\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{f}_{\alpha}(\mathbf{{x}}^{*})-F_{n,w}(\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\end{array}\\ & \overset{\eqref{eq:big-F-chain}}{\geq} & \begin{array}{c} \frac{1}{2}\|\bar{\mathbf{{x}}}-\mathbf{{x}}^{*}\|^{2}+\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{f}_{\alpha}(\mathbf{{x}}^{*})-F(\{\tilde{\mathbf{{z}}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\end{array}\\ & \overset{\eqref{eq:From-8}}{\geq} & \begin{array}{c} \frac{1}{2}\left\Vert \bar{\mathbf{{x}}}-\mathbf{{x}}^{*}-\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\tilde{\mathbf{{z}}}_{\alpha}^{n,w}\right\Vert ^{2}\end{array}\\ & \overset{\scriptsize{\mbox{Take }i\mbox{-th component only}}}{\geq} & \begin{array}{c} \frac{1}{2}\left\Vert \tilde{x}_{i}^{n,w}-[\mathbf{{x}}^{*}]_{i}\right\Vert ^{2}.\end{array}\end{aligned}$$ The above inequality and the unboundedness of $\tilde{x}_{i}^{n,w}$ imply that the duality gap would go to infinity, which contradicts part (i). Thus we are done. \[Proof of Theorem \[thm:convergence\](iv)\]The first sentence of this claim is immediate from . We now prove the second sentence. Seeking a contradiction, suppose that $\limsup_{n\to\infty}E_{i,n}>0$.
In Algorithm \[alg:subdiff-subalg\], in view of Assumption \[assu:to-start-subalg\](2), $[\mathbf{{z}}_{i}^{n,\bar{w}}]_{i}$ is the minimizer of the problem $$\begin{array}{c} \underset{z\in\mathbb{R}^{m}}{\min}\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,\bar{w}}]_{i}-z\|^{2}+f_{i,n,\bar{w}}^{*}(z).\end{array}\label{eq:in-pf-dual}$$ The associated primal problem is, up to a constant independent of $x$, $$\begin{array}{c} \underset{x\in\mathbb{R}^{m}}{\min}\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,\bar{w}}]_{i}-x\|^{2}+f_{i,n,\bar{w}}(x).\end{array}\label{eq:in-pf-primal}$$ The primal solution is $[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,\bar{w}}]_{i}-[\mathbf{{z}}_{i}^{n,\bar{w}}]_{i}\overset{\eqref{eq_m:all_acronyms}}{=}[\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n,\bar{w}}]_{i}$. The dual solution is $[\mathbf{{z}}_{i}^{n,\bar{w}}]_{i}$. So $$[\mathbf{{z}}_{i}^{n,\bar{w}}]_{i}\in\partial f_{i,n,\bar{w}}([\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n,\bar{w}}]_{i}).\label{eq:subgrad-birth}$$ Recall Assumption \[assu:to-start-subalg\](3) and $\mathbf{{v}}_{H}^{n,\bar{w}}=\mathbf{{v}}_{H}^{n+1,0}$ by line 16 in Algorithm \[alg:Ext-Dyk\].
We now analyze the increase in the dual objective value of each separate problem: $$\begin{aligned} \Delta_{i,n} & := & \begin{array}{c} \big[f_{i,n+1,0}^{*}([\mathbf{{z}}_{i}^{n+1,0}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n+1,0}]_{i}-[\mathbf{{z}}_{i}^{n+1,0}]_{i}\|^{2}\big]\end{array}\\ & & \begin{array}{c} -\big[f_{i,n+1,1}^{*}([\mathbf{{z}}_{i}^{n+1,1}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n+1,1}]_{i}-[\mathbf{{z}}_{i}^{n+1,1}]_{i}\|^{2}\big].\end{array}\end{aligned}$$ Recall that $E_{i,n}$ is also $f_{i}([\mathbf{{x}}^{n,\bar{w}}]_{i})-f_{i,n,\bar{w}}([\mathbf{{x}}^{n,\bar{w}}]_{i})=f_{i}([\mathbf{{x}}^{n+1,0}]_{i})-f_{i,n+1,0}([\mathbf{{x}}^{n+1,0}]_{i}).$ Proposition \[prop:P-D-of-one-block\] and Assumption \[assu:to-start-subalg\](2) tell us that $$\begin{aligned} & & \begin{array}{c} f_{i,n+1,0}^{*}([\mathbf{{z}}_{i}^{n+1,0}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n+1,0}]_{i}-[\mathbf{{z}}_{i}^{n+1,0}]_{i}\|^{2}\end{array}\\ & = & \begin{array}{c} -f_{i,n+1,0}([\mathbf{{x}}^{n+1,0}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n+1,0}]_{i}\|^{2}-\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n+1,0}]_{i}-[\mathbf{{x}}^{n+1,0}]_{i}\|^{2}.\end{array}\end{aligned}$$ A similar result holds for the problem involving $f_{i,n+1,1}(\cdot)$.
Since $S_{n+1,1}\cap\bar{\mathcal{E}}=\emptyset$ and $\mathbf{{v}}_{H}^{n+1,0}=\mathbf{{v}}_{H}^{n+1,1}$, we have $$\begin{aligned} \Delta_{i,n} & = & \begin{array}{c} \big[f_{i,n+1,1}([\mathbf{{x}}^{n+1,1}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n+1,1}]_{i}-[\mathbf{{x}}^{n+1,1}]_{i}\|^{2}\big]\end{array}\\ & & \begin{array}{c} -\big[f_{i,n+1,0}([\mathbf{{x}}^{n+1,0}]_{i})+\frac{1}{2}\|[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n+1,0}]_{i}-[\mathbf{{x}}^{n+1,0}]_{i}\|^{2}\big].\end{array}\end{aligned}$$ The analogue of Lemma \[lem:alpha-recurrs\](1) tells us that $\Delta_{i,n}\geq\frac{1}{2}t_{i,n}^{2}$, where $t_{i,n}$ is the positive root satisfying $$\begin{array}{c} \frac{1}{2}t_{i,n}^{2}+\|s_{i,n,\bar{w}}+[\mathbf{{x}}^{n+1,0}]_{i}-[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n+1,0}]_{i}\|t_{i,n}=E_{i,n},\end{array}$$ where $s_{i,n,\bar{w}}\in\partial f_{i}([\mathbf{{x}}^{n,\bar{w}}]_{i})$ is the subgradient used to form the linearization of $f_{i}(\cdot)$ at $[\mathbf{{x}}^{n+1,0}]_{i}$. Note that $[\mathbf{{x}}^{n,\bar{w}}]_{i}-[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,\bar{w}}]_{i}=-[\mathbf{{z}}_{i}^{n,\bar{w}}]_{i}$, so the term $\|s_{i,n,\bar{w}}+[\mathbf{{x}}^{n,\bar{w}}]_{i}-[\bar{\mathbf{{x}}}-\mathbf{{v}}_{H}^{n,\bar{w}}]_{i}\|$ becomes $\|s_{i,n,\bar{w}}-[\mathbf{{z}}_{i}^{n,\bar{w}}]_{i}\|$. Since both $s_{i,n,\bar{w}}$ and $[\mathbf{{z}}_{i}^{n,\bar{w}}]_{i}$ are bounded while $\limsup_{n\to\infty}E_{i,n}>0$ by assumption, we have $\limsup_{n\to\infty}t_{i,n}>0$, and so $\limsup_{n\to\infty}\Delta_{i,n}>0$. We can check from the definitions that $$\begin{array}{c} F_{n+1,1}(\{\mathbf{{z}}_{\alpha}^{n+1,1}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})-F_{n+1,0}(\{\mathbf{{z}}_{\alpha}^{n+1,0}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})=\underset{i\in S_{n,\bar{w}}}{\sum}\Delta_{i,n}.\end{array}$$ This means that the dual objective value would increase without bound, which by weak duality implies that the primal problem is infeasible, a contradiction.
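For concreteness, the positive root $t_{i,n}$ above has a closed form from the quadratic formula: writing $b=\|s_{i,n,\bar{w}}-[\mathbf{{z}}_{i}^{n,\bar{w}}]_{i}\|$ and $E=E_{i,n}$, the positive root of $\frac{1}{2}t^{2}+bt=E$ is $t=-b+\sqrt{b^{2}+2E}$, which stays bounded away from zero whenever $b$ is bounded and $E$ is bounded away from zero. A small sketch with hypothetical values of $b$ and $E$ (not taken from the algorithm):

```python
import math

def positive_root(b, E):
    # Positive root t of 0.5*t**2 + b*t = E (for b >= 0, E > 0),
    # via the quadratic formula: t = -b + sqrt(b**2 + 2*E).
    return -b + math.sqrt(b * b + 2.0 * E)

t = positive_root(2.0, 1.0)   # hypothetical b = 2, E = 1
print(t)                      # root is strictly positive
print(0.5 * t * t + 2.0 * t)  # recovers E (up to rounding)
print(0.5 * t * t)            # lower bound on the dual increase
```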
Proposition \[prop:control-growth\] below gives reasonable conditions that guarantee . The ideas of its proof were already present in [@Pang_Dyk_spl; @Pang_Dist_Dyk], so we defer its proof to the appendix. \[prop:control-growth\](Growth of $\sum_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}\|\mathbf{{z}}_{\alpha}^{n,w}\|$) In Algorithm \[alg:Ext-Dyk\], suppose: 1. There are only finitely many $S_{n,w}$ for which $S_{n,w}\cap[\mathcal{V}_{1}\cup\mathcal{V}_{2}]$ contains more than one element. 2. There are constants $M_{1}>0$ and $M_{2}>0$ such that the size of the set $$\big\{(n',w):n'\leq n,\,w\in\{1,\dots,\bar{w}\},\,|S_{n',w}\cap\mathcal{V}|>1\big\}$$ is bounded by $M_{1}\sqrt{n}+M_{2}$ for all $n$. Then condition (1) in Theorem \[thm:convergence\] holds. \[subsec:composition-lin-op\]Composition with a linear operator ---------------------------------------------------------------- Suppose some $f_{i}(\cdot)$ were defined as $f_{i,1}\circ A_{i}(\cdot)$, where $f_{i,1}:Y\to\mathbb{R}$ is a closed convex function, $Y$ is another finite dimensional Hilbert space and $A_{i}:\mathbb{R}^{m}\to Y$ is a linear map. One may still take the proximal mapping of $f_{i}(\cdot)$ directly, but this may involve expensive operations on $A_{i}$. Alternatively, we can write $f_{i,1}\circ A_{i}(x_{i})$ as $$f_{i,1}(y_{i})+\delta_{\{(x,y):A_{i}x=y\}}(x_{i},y_{i}),$$ which splits into the sum of two functions. Note however that since we require the problem to be strongly convex, creating the new variable $y$ adds new regularizing terms to the objective function. Conclusion ========== The main contribution of this paper is to show that the distributed Dykstra's algorithm can be extended to incorporate subdifferentiable functions in a natural manner so that the algorithm converges to the primal minimizer, even if there is no dual minimizer. A natural next question is to establish convergence rates for the algorithm.
The derivation of such rates uses rather different techniques from those of this paper, and requires additional conditions to ensure the existence of a dual minimizer. We defer this to [@Pang_rate_D_Dyk], where we also perform numerical experiments showing that the distributed Dykstra's algorithm is sound. Further proofs ============== In this appendix, we complete the parts of the proofs of Theorem \[thm:convergence\] and Proposition \[prop:control-growth\] that we consider to be too similar to the ones in [@Pang_Dist_Dyk]. The following inequality describes the duality gap between and . $$\begin{aligned} & & \begin{array}{c} \frac{1}{2}\|\bar{\mathbf{{x}}}-\mathbf{{x}}\|^{2}+\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{f}_{\alpha}(\mathbf{{x}})-F(\{\mathbf{{z}}_{\alpha}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\end{array}\label{eq:From-8}\\ & \overset{\eqref{eq:Dykstra-dual-defn}}{=} & \begin{array}{c} \frac{1}{2}\|\bar{\mathbf{{x}}}-\mathbf{{x}}\|^{2}+\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}[\mathbf{f}_{\alpha}(\mathbf{{x}})+\mathbf{f}_{\alpha}^{*}(\mathbf{{z}}_{\alpha})]-\left\langle \bar{\mathbf{{x}}},\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{{z}}_{\alpha}\right\rangle +\frac{1}{2}\left\Vert \underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{{z}}_{\alpha}\right\Vert ^{2}\end{array}\nonumber \\ & \overset{\scriptsize\mbox{Fenchel duality}}{\geq} & \begin{array}{c} \frac{1}{2}\|\bar{\mathbf{{x}}}-\mathbf{{x}}\|^{2}+\left\langle \mathbf{{x}},\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{{z}}_{\alpha}\right\rangle -\left\langle \bar{\mathbf{{x}}},\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{{z}}_{\alpha}\right\rangle +\frac{1}{2}\left\Vert \underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{{z}}_{\alpha}\right\Vert ^{2}\end{array}\nonumber \\ & = & \begin{array}{c} \frac{1}{2}\left\Vert
\bar{\mathbf{{x}}}-\mathbf{{x}}-\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{{z}}_{\alpha}\right\Vert ^{2}\geq0.\end{array}\nonumber \end{aligned}$$ We continue with proving the rest of Theorem \[thm:convergence\]. \[Proof of rest of Theorem \[thm:convergence\]\] We first show that (i) to (vi) imply the final assertion. For all $n\in\mathbb{N}$ we have, from weak duality, $$\begin{array}{c} F(\{\mathbf{{z}}_{\alpha}^{n,\bar{w}}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\leq\frac{1}{2}\|\bar{\mathbf{{x}}}-(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*})\|^{2}+\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\overset{}{\sum}}\mathbf{f}_{\alpha}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}).\end{array}\label{eq:weak-duality}$$ Since the values $\{F_{n,w}(\{\mathbf{{z}}_{\alpha}^{n,\bar{w}}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\}_{n=1}^{\infty}$ are nondecreasing in $n$, we make use of (v) to get $$\begin{array}{rcl} \underset{n\to\infty}{\lim}F_{n,w}(\{\mathbf{{z}}_{\alpha}^{n,\bar{w}}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}) & = & \frac{1}{2}\|\bar{\mathbf{{x}}}-(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*})\|^{2}+\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\overset{}{\sum}}\mathbf{f}_{\alpha}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}).\end{array}$$ Since $F_{n,w}(\cdot)\leq F(\cdot)\leq\frac{1}{2}\|\bar{\mathbf{{x}}}-(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*})\|^{2}+\sum_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}\mathbf{f}_{\alpha}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*})$, we have .
Hence $\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}=\arg\min_{\mathbf{{x}}}\mathbf{f}(\mathbf{{x}})+\frac{1}{2}\|\mathbf{{x}}-\bar{\mathbf{{x}}}\|^{2}$, and (substituting $\mathbf{{x}}=\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}$ in ) $$\begin{aligned} & & \begin{array}{c} \frac{1}{2}\|\bar{\mathbf{{x}}}-(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*})\|^{2}+\mathbf{f}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*})-F(\{\mathbf{{z}}_{\alpha}^{n,\bar{w}}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\end{array}\\ & \overset{\eqref{eq:From-8},\eqref{eq:v-H-def},\eqref{eq:from-10}}{\geq} & \begin{array}{c} \frac{1}{2}\|\bar{\mathbf{{x}}}-(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*})-\mathbf{{v}}_{A}^{n,\bar{w}}\|^{2}\end{array}\\ & \overset{\eqref{eq:x-from-v-A}}{=} & \begin{array}{c} \frac{1}{2}\|\mathbf{{x}}^{n,\bar{w}}-(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*})\|^{2}.\end{array}\end{aligned}$$ Hence $\lim_{n\to\infty}\mathbf{{x}}^{n,\bar{w}}$ is the minimizer in (P). It remains to prove assertions (i) to (vi). **Proof of (i):** We separate into two cases. We first consider the case when $S_{n,w}\not\subset\mathcal{V}_{4}$. From the fact that $\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in S_{n,w}}$ minimizes (which includes the quadratic regularizer) we have $$F_{n,w}(\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\overset{\eqref{eq:Dykstra-min-subpblm}}{\geq}\begin{array}{c} F_{n,w-1}(\{\mathbf{{z}}_{\alpha}^{n,w-1}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})+\frac{1}{2}\|\mathbf{{v}}_{A}^{n,w}-\mathbf{{v}}_{A}^{n,w-1}\|^{2}.\end{array}\label{eq:SHQP-decrease}$$ (The last term in arises from the quadratic term in .) By line 16 of Algorithm \[alg:Ext-Dyk\], $\mathbf{{z}}_{i}^{n+1,0}=\mathbf{{z}}_{i}^{n,\bar{w}}$ for all $i\in\mathcal{V}$ and $\mathbf{{v}}_{H}^{n+1,0}=\mathbf{{v}}_{H}^{n,\bar{w}}$ (even though the decompositions of $\mathbf{{v}}_{H}^{n+1,0}$ and $\mathbf{{v}}_{H}^{n,\bar{w}}$ may be different).
In the second case when $S_{n,w}\subset\mathcal{V}_{4}$, Proposition \[prop:quad-dec-case-2\] and show that the inequality holds. Combining over all $n'\in\{1,\dots,n\}$ and $w\in\{1,\dots,\bar{w}\}$, we have $$\begin{array}{c} F_{1,0}(\{\mathbf{{z}}_{\alpha}^{1,0}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})+\frac{1}{2}\underset{n'=1}{\overset{n}{\sum}}\underset{w=1}{\overset{\bar{w}}{\sum}}\|\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}\|^{2}\overset{\eqref{eq:SHQP-decrease}}{\leq}F_{n,\bar{w}}(\{\mathbf{{z}}_{\alpha}^{n,\bar{w}}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}).\end{array}$$ Next, $F_{n,\bar{w}}(\{\mathbf{{z}}_{\alpha}^{n,\bar{w}}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})$ is bounded from above by weak duality. The proof of the claim is complete. **Proof of (ii):** From part (i) and the fact that $F_{n,w}(\cdot)\leq F(\cdot)$, we have $$-F_{1,0}(\{\mathbf{{z}}_{\alpha}^{1,0}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\overset{\scriptsize\mbox{part (i)}}{\geq}-F_{n,w}(\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\geq-F(\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}).\label{eq:three-Fs}$$ Substituting $\mathbf{{x}}$ in to be the primal minimizer $\mathbf{{x}}^{*}$ and $\{\mathbf{{z}}_{\alpha}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}$ to be $\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}$, we have $$\begin{aligned} & & \begin{array}{c} \frac{1}{2}\|\bar{\mathbf{{x}}}-\mathbf{{x}}^{*}\|^{2}+\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\overset{}{\sum}}\mathbf{f}_{\alpha}(\mathbf{{x}}^{*})-F_{1,0}(\{\mathbf{{z}}_{\alpha}^{1,0}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\end{array}\\ & \overset{\eqref{eq:three-Fs}}{\geq} & \begin{array}{c}
\frac{1}{2}\|\bar{\mathbf{{x}}}-\mathbf{{x}}^{*}\|^{2}+\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\overset{}{\sum}}\mathbf{f}_{\alpha}(\mathbf{{x}}^{*})-F(\{\mathbf{{z}}_{\alpha}^{n,w}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\end{array}\\ & \overset{\eqref{eq:From-8}}{\geq} & \begin{array}{c} \frac{1}{2}\left\Vert \bar{\mathbf{{x}}}-\mathbf{{x}}^{*}-\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\overset{}{\sum}}\mathbf{{z}}_{\alpha}^{n,w}\right\Vert ^{2}\overset{\eqref{eq:from-10}}{=}\frac{1}{2}\|\bar{\mathbf{{x}}}-\mathbf{{x}}^{*}-\mathbf{{v}}_{A}^{n,w}\|^{2}.\end{array}\end{aligned}$$ The conclusion is immediate. **Proof of (v):** We first make use of the technique in [@BauschkeCombettes11 Lemma 29.1] (which is in turn largely attributed to [@BD86]) to show that $$\begin{array}{c} \underset{n\to\infty}{\liminf}\left[\left(\underset{w=1}{\overset{\bar{w}}{\sum}}\|\mathbf{{v}}_{A}^{n,w}-\mathbf{{v}}_{A}^{n,w-1}\|\right)\sqrt{n}\right]=0.\end{array}\label{eq:root-n-dec}$$ Seeking a contradiction, suppose instead that there is an $\epsilon>0$ and $\bar{n}>0$ such that if $n>\bar{n}$, then $\left(\sum_{w=1}^{\bar{w}}\|\mathbf{{v}}_{A}^{n,w}-\mathbf{{v}}_{A}^{n,w-1}\|\right)\sqrt{n}>\epsilon$. By the Cauchy–Schwarz inequality, we have $\begin{array}{c} \frac{\epsilon^{2}}{n}<\left(\underset{w=1}{\overset{\bar{w}}{\sum}}\|\mathbf{{v}}_{A}^{n,w}-\mathbf{{v}}_{A}^{n,w-1}\|\right)^{2}\leq\bar{w}\underset{w=1}{\overset{\bar{w}}{\sum}}\|\mathbf{{v}}_{A}^{n,w}-\mathbf{{v}}_{A}^{n,w-1}\|^{2}.\end{array}$ Summing over all $n>\bar{n}$, the divergence of the harmonic series $\sum_{n>\bar{n}}\frac{\epsilon^{2}}{\bar{w}n}$ then contradicts the earlier claim in (i) that $\sum_{n=1}^{\infty}\sum_{w=1}^{\bar{w}}\|\mathbf{{v}}_{A}^{n,w}-\mathbf{{v}}_{A}^{n,w-1}\|^{2}$ is finite.
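The contradiction above rests on an elementary fact: a sequence with summable squares cannot satisfy $\sqrt{n}\,a_{n}>\epsilon$ for all large $n$. A tiny numerical sketch, with $a_{n}=1/n$ as an illustrative stand-in for $\sum_{w}\|\mathbf{{v}}_{A}^{n,w}-\mathbf{{v}}_{A}^{n,w-1}\|$:

```python
import math

N = 100000
# a_n = 1/n has summable squares: partial sums of 1/n^2 stay bounded
# (they approach pi^2/6 = 1.6449...),
sq_sum = sum(1.0 / (n * n) for n in range(1, N + 1))
# yet sqrt(n) * a_n = 1/sqrt(n) tends to 0, so no eps > 0 can bound
# sqrt(n) * a_n from below for all large n.
scaled = math.sqrt(N) * (1.0 / N)
print(sq_sum, scaled)
```

If instead $\sqrt{n}\,a_{n}>\epsilon$ held eventually, then $a_{n}^{2}>\epsilon^{2}/n$, and the squared sums would inherit the divergence of the harmonic series.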
Through , we find a sequence $\{n_{k}\}_{k=1}^{\infty}$ such that $$\begin{array}{c} \lim_{k\to\infty}\left[\left(\underset{w=1}{\overset{\bar{w}}{\sum}}\|\mathbf{{v}}_{A}^{n_{k},w}-\mathbf{{v}}_{A}^{n_{k},w-1}\|\right)\sqrt{n_{k}}\right]=0.\end{array}\label{eq:subseq-sqrt-limit}$$ Recalling the assumption , we get $$\begin{array}{c} \underset{k\to\infty}{\lim}\left[\left(\underset{w=1}{\overset{\bar{w}}{\sum}}\|\mathbf{{v}}_{A}^{n_{k},w}-\mathbf{{v}}_{A}^{n_{k},w-1}\|\right)\|\mathbf{{z}}_{\alpha}^{n_{k},\bar{w}}\|\right]\overset{\eqref{eq:sqrt-growth-sum-z},\eqref{eq:subseq-sqrt-limit}}{=}0\mbox{ for all }\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}.\end{array}\label{eq:lim-sum-norm-z}$$ Moreover, $$\begin{aligned} |\langle\mathbf{{v}}_{A}^{n_{k},\bar{w}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},\alpha)},\mathbf{{z}}_{\alpha}^{n_{k},\bar{w}}\rangle| & \leq & \begin{array}{c} \|\mathbf{{v}}_{A}^{n_{k},\bar{w}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},\alpha)}\|\|\mathbf{{z}}_{\alpha}^{n_{k},\bar{w}}\|\end{array}\label{eq:inn-pdt-sum-norm}\\ & \leq & \begin{array}{c} \left(\underset{w=1}{\overset{\bar{w}}{\sum}}\|\mathbf{{v}}_{A}^{n_{k},w}-\mathbf{{v}}_{A}^{n_{k},w-1}\|\right)\|\mathbf{{z}}_{\alpha}^{n_{k},\bar{w}}\|.\end{array}\nonumber \end{aligned}$$ By (ii), there exists a further subsequence of $\{\mathbf{{v}}_{A}^{n_{k},\bar{w}}\}_{k=1}^{\infty}$ which converges to some $\mathbf{{v}}_{A}^{*}\in\mathbb{R}^{m}$. Combining and gives (v). 
**Proof of (vi):** From earlier results, we obtain $$\begin{aligned} & & \begin{array}{c} -\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\overset{}{\sum}}\mathbf{f}_{\alpha}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*})\end{array}\label{eq:biggest-formula}\\ & \overset{\eqref{eq:From-8}}{\leq} & \begin{array}{c} \frac{1}{2}\|\bar{\mathbf{{x}}}-(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*})\|^{2}-F(\{\mathbf{{z}}_{\alpha}^{n_{k},\bar{w}}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})\end{array}\nonumber \\ & \overset{\eqref{eq:Dykstra-dual-defn},\eqref{eq:stagnant-indices}}{=} & \begin{array}{c} \frac{1}{2}\|\mathbf{{v}}_{A}^{*}\|^{2}+\underset{\alpha\in\bar{\mathcal{E}}_{n_{k}}\cup\mathcal{V}}{\overset{}{\sum}}\mathbf{f}_{\alpha}^{*}(\mathbf{{z}}_{\alpha}^{n_{k},p(n_{k},\alpha)})\end{array}\nonumber \\ & & \begin{array}{c} +\underset{((i,j),\bar{k})\notin\bar{\mathcal{E}}_{n_{k}}}{\overset{}{\sum}}\mathbf{f}_{((i,j),\bar{k})}^{*}(\mathbf{{z}}_{((i,j),\bar{k})}^{n_{k},\bar{w}})-\langle\bar{\mathbf{{x}}},\mathbf{{v}}_{A}^{n_{k},\bar{w}}\rangle+\frac{1}{2}\|\mathbf{{v}}_{A}^{n_{k},\bar{w}}\|^{2}\end{array}\nonumber \\ & \overset{\scriptsize\eqref{eq:error-deriv},\eqref{eq:zero-indices}}{=} & \begin{array}{c} \frac{1}{2}\|\mathbf{{v}}_{A}^{*}\|^{2}+\underset{\alpha\in\bar{\mathcal{E}}_{n_{k}}\cup\mathcal{V}}{\overset{}{\sum}}\langle\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},\alpha)},\mathbf{{z}}_{\alpha}^{n_{k},p(n_{k},\alpha)}\rangle+\underset{i\in\mathcal{V}_{4}}{\overset{}{\sum}}E_{i,n_{k}}-\underset{i\in\mathcal{V}_{4}}{\overset{}{\sum}}D_{i,n_{k}}\end{array}\nonumber \\ & & \begin{array}{c} -\underset{\alpha\in\bar{\mathcal{E}}_{n_{k}}\cup\mathcal{V}}{\overset{}{\sum}}\mathbf{f}_{\alpha}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},\alpha)})-\langle\bar{\mathbf{{x}}},\mathbf{{v}}_{A}^{n_{k},\bar{w}}\rangle+\frac{1}{2}\|\mathbf{{v}}_{A}^{n_{k},\bar{w}}\|^{2}\end{array}\nonumber \\ & \overset{\eqref{eq:stagnant-indices}}{=} & \begin{array}{c}
\frac{1}{2}\|\mathbf{{v}}_{A}^{*}\|^{2}-\underset{\alpha\in\bar{\mathcal{E}}_{n_{k}}\cup\mathcal{V}}{\overset{}{\sum}}\langle\mathbf{{v}}_{A}^{n_{k},p(n_{k},\alpha)}-\mathbf{{v}}_{A}^{n_{k},\bar{w}},\mathbf{{z}}_{\alpha}^{n_{k},\bar{w}}\rangle\end{array}\nonumber \\ & & \begin{array}{c} -\underset{\alpha\in\bar{\mathcal{E}}_{n_{k}}\cup\mathcal{V}}{\overset{}{\sum}}\mathbf{f}_{\alpha}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},\alpha)})-\langle\bar{\mathbf{{x}}},\mathbf{{v}}_{A}^{n_{k},\bar{w}}\rangle+\underset{i\in\mathcal{V}_{4}}{\overset{}{\sum}}E_{i,n_{k}}-\underset{i\in\mathcal{V}_{4}}{\overset{}{\sum}}D_{i,n_{k}}\end{array}\nonumber \\ & & \begin{array}{c} +\left\langle \bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},\bar{w}},\underset{\alpha\in\bar{\mathcal{E}}_{n_{k}}\cup\mathcal{V}}{\overset{}{\sum}}\mathbf{{z}}_{\alpha}^{n_{k},p(n_{k},\alpha)}\right\rangle +\frac{1}{2}\|\mathbf{{v}}_{A}^{n_{k},\bar{w}}\|^{2}\end{array}\nonumber \\ & \overset{\eqref{eq:from-10},\eqref{eq:zero-indices}}{=} & \begin{array}{c} \frac{1}{2}\|\mathbf{{v}}_{A}^{*}\|^{2}-\frac{1}{2}\|\mathbf{{v}}_{A}^{n_{k},\bar{w}}\|^{2}-\underset{\alpha\in\bar{\mathcal{E}}_{n_{k}}\cup\mathcal{V}}{\overset{}{\sum}}\langle\mathbf{{v}}_{A}^{n_{k},p(n_{k},\alpha)}-\mathbf{{v}}_{A}^{n_{k},\bar{w}},\mathbf{{z}}_{\alpha}^{n_{k},\bar{w}}\rangle\end{array}\nonumber \\ & & \begin{array}{c} -\underset{\alpha\in\bar{\mathcal{E}}_{n_{k}}\cup\mathcal{V}}{\overset{}{\sum}}\mathbf{f}_{\alpha}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},\alpha)})+\underset{i\in\mathcal{V}_{4}}{\overset{}{\sum}}E_{i,n_{k}}-\underset{i\in\mathcal{V}_{4}}{\overset{}{\sum}}D_{i,n_{k}}.\end{array}\nonumber \end{aligned}$$ Since $\lim_{k\to\infty}\mathbf{{v}}_{A}^{n_{k},\bar{w}}=\mathbf{{v}}_{A}^{*}$, we have $\lim_{k\to\infty}\frac{1}{2}\|\mathbf{{v}}_{A}^{*}\|^{2}-\frac{1}{2}\|\mathbf{{v}}_{A}^{n_{k},\bar{w}}\|^{2}=0$. 
The third term in the last group of formulas (i.e., the sum involving the inner products) converges to 0 by (v). The term $\lim_{k\to\infty}\sum_{i\in\mathcal{V}_{4}}E_{i,n_{k}}$ equals 0 by (iii). Next, recall that if $((i,j),\bar{k})\in\bar{\mathcal{E}}_{n_{k}}$, by , we have $\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},((i,j),\bar{k}))}\in H_{((i,j),\bar{k})}$. Note that from Claim \[claim:Fenchel-duality\](b), we have $\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n,p(n,((i,j),\bar{k}))}\in H_{((i,j),\bar{k})}$ for all $((i,j),\bar{k})\in\bar{\mathcal{E}}_{n}$. There is a constant $\kappa_{\bar{\mathcal{E}}_{n_{k}}}>0$ such that $$\begin{aligned} & & d(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},\bar{w}},\cap_{((i,j),\bar{k})\in\bar{\mathcal{E}}}H_{((i,j),\bar{k})})\label{eq:reg-argument}\\ & \overset{\scriptsize{\bar{\mathcal{E}}_{n_{k}}\mbox{ connects }\mathcal{V},\mbox{ Prop \ref{prop:E-connects-V}(1)}}}{=} & d(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},\bar{w}},\cap_{((i,j),\bar{k})\in\bar{\mathcal{E}}_{n_{k}}}H_{((i,j),\bar{k})})\nonumber \\ & \leq & \kappa_{\bar{\mathcal{E}}_{n_{k}}}\max_{((i,j),\bar{k})\in\bar{\mathcal{E}}_{n_{k}}}d(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},\bar{w}},H_{((i,j),\bar{k})})\nonumber \\ & \overset{\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},((i,j),\bar{k}))}\in H_{((i,j),\bar{k})}}{\leq} & \kappa_{\bar{\mathcal{E}}_{n_{k}}}\max_{((i,j),\bar{k})\in\bar{\mathcal{E}}_{n_{k}}}\|\mathbf{{v}}_{A}^{n_{k},\bar{w}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},((i,j),\bar{k}))}\|.\nonumber \end{aligned}$$ Let $\kappa:=\max\{\kappa_{\mathcal{\bar{E}}'}:\bar{\mathcal{E}}'\mbox{ connects }\mathcal{V}\}$. We have $\kappa_{\bar{\mathcal{E}}_{n_{k}}}\leq\kappa$. Taking limits in , the right-hand side converges to zero by (i), so $d(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*},\cap_{((i,j),\bar{k})\in\bar{\mathcal{E}}}H_{((i,j),\bar{k})})=0$, or $\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}\in\cap_{((i,j),\bar{k})\in\bar{\mathcal{E}}}H_{((i,j),\bar{k})}$.
So $\sum_{((i,j),\bar{k})\in\bar{\mathcal{E}}}\mathbf{f}_{((i,j),\bar{k})}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*})=0$. Together with the fact that $\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},((i,j),\bar{k}))}\in H_{((i,j),\bar{k})}$, we have $$\sum_{((i,j),\bar{k})\in\bar{\mathcal{E}}_{n_{k}}}\mathbf{f}_{((i,j),\bar{k})}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},((i,j),\bar{k}))})=0=\underset{((i,j),\bar{k})\in\bar{\mathcal{E}}}{\overset{}{\sum}}\mathbf{f}_{((i,j),\bar{k})}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}).\label{eq:all-indicator-edges-zero}$$ Lastly, by the lower semicontinuity of $\mathbf{f}_{i}(\cdot)$, we have $$\begin{array}{c} -\underset{k\to\infty}{\lim}\underset{i\in\mathcal{V}}{\sum}\mathbf{f}_{i}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},i)})\leq-\underset{i\in\mathcal{V}}{\overset{\phantom{\mathcal{V}}}{\sum}}\mathbf{f}_{i}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}).\end{array}\label{eq:lsc-argument}$$ As mentioned after , taking limits as $k\to\infty$ causes the first three terms and the fifth term of the last formula in to vanish. Hence $$\begin{aligned} & & \begin{array}{c} -\underset{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}{\sum}\mathbf{f}_{\alpha}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*})\end{array}\\ & \overset{\eqref{eq:biggest-formula}}{\leq} & \begin{array}{c} \underset{k\to\infty}{\lim}-\underset{\alpha\in\bar{\mathcal{E}}_{n_{k}}\cup\mathcal{V}}{\sum}\mathbf{f}_{\alpha}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{n_{k},p(n_{k},\alpha)})-\underset{k\to\infty}{\lim}\underset{i\in\mathcal{V}_{4}}{\sum}D_{i,n_{k}}\end{array}\\ & \overset{\eqref{eq:all-indicator-edges-zero},\eqref{eq:lsc-argument},\eqref{eq:D-error-def}}{\leq} & \begin{array}{c} -\underset{\alpha\in\mathcal{\bar{E}}\cup\mathcal{V}}{\sum}\mathbf{f}_{\alpha}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}).\end{array}\end{aligned}$$ So holds with equality in the limit, and $\lim_{k\to\infty}D_{i,n_{k}}=0$ for all $i\in\mathcal{V}_{4}$.
The first two lines of then give $$\begin{array}{c} \underset{k\to\infty}{\lim}F(\{\mathbf{{z}}_{\alpha}^{n_{k},\bar{w}}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})=\frac{1}{2}\|\mathbf{{v}}_{A}^{*}\|^{2}+\underset{i\in\mathcal{V}}{\sum}\mathbf{f}_{i}(\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}),\end{array}$$ which shows that $\bar{\mathbf{{x}}}-\mathbf{{v}}_{A}^{*}$ is the primal minimizer. Recall the definitions of $F_{n,w}(\cdot)$, $F(\cdot)$ and $D_{i,n}$ in , and . We recall . Also, from line 11 of Algorithm \[alg:Ext-Dyk\], we have $\mathbf{f}_{\alpha,n,w}(\cdot)=\mathbf{f}_{\alpha,n,p(n,\alpha)}(\cdot)$. This gives $F_{n_{k},\bar{w}}(\{\mathbf{{z}}_{\alpha}^{n_{k},\bar{w}}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})+\sum_{i\in\mathcal{V}_{4}}D_{i,n_{k}}=F(\{\mathbf{{z}}_{\alpha}^{n_{k},\bar{w}}\}_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}})$, from which we deduce the equation on the left of as well. \[Proof of Proposition \[prop:control-growth\]\] Since this result is used only in the proof of Theorem \[thm:convergence\](v), we can make use of Theorem \[thm:convergence\](i) and (iii) in its proof. To address condition (1), we can assume that $S_{n,w}\cap[\mathcal{V}_{1}\cup\mathcal{V}_{2}]$ always contains at most one element. Define the sets $\bar{S}_{n,1}$ and $\bar{S}_{n,2}\subset\{1,2,\dots\}\times\{1,\dots,\bar{w}\}$ as $$\begin{aligned} \bar{S}_{n,1} & := & \{(n',w):n'\leq n,\,w\in\{1,\dots,\bar{w}\},\,|S_{n',w}\cap\mathcal{V}|\leq1\}\\ \bar{S}_{n,2} & := & \{(n',w):n'\leq n,\,w\in\{1,\dots,\bar{w}\},\,|S_{n',w}\cap\mathcal{V}|>1\}.\end{aligned}$$ Either $S_{n',w}\cap[\mathcal{V}_{1}\cup\mathcal{V}_{2}]=\emptyset$ or $|S_{n',w}\cap[\mathcal{V}_{1}\cup\mathcal{V}_{2}]|=1$. In the second case, let $i^{*}$ be the index such that $i^{*}\in S_{n',w}\cap[\mathcal{V}_{1}\cup\mathcal{V}_{2}]$. Otherwise, in the first case, we let $i^{*}$ be any index in $[\mathcal{V}_{1}\cup\mathcal{V}_{2}]$.
We prove claims based on whether $(n',w)$ lies in $\bar{S}_{n,1}$ or $\bar{S}_{n,2}$. Without loss of generality, we can assume that the constraints indexed by $S_{n',w}\cap\bar{\mathcal{E}}$ are linearly independent. This means that each $\mathbf{{z}}_{((i,j),\bar{k})}^{n',w}-\mathbf{{z}}_{((i,j),\bar{k})}^{n',w-1}$ is determined uniquely by a linear map of $\mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}$ through the relation $$\sum_{\alpha\in\bar{\mathcal{E}}}[\mathbf{{z}}_{\alpha}^{n',w}-\mathbf{{z}}_{\alpha}^{n',w-1}]\overset{\eqref{eq:v-H-def}}{=}\mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}.$$ Therefore for all $\alpha\in S_{n',w}\cap\bar{\mathcal{E}}$, there is a constant $\kappa_{\alpha,S_{n',w}\cap\bar{\mathcal{E}}}>0$ such that $$\|\mathbf{{z}}_{\alpha}^{n',w}-\mathbf{{z}}_{\alpha}^{n',w-1}\|\leq\kappa_{\alpha,S_{n',w}\cap\bar{\mathcal{E}}}\|\mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}\|.\label{eq:basic-bdd-z-i-j}$$ Thus there is a constant $\kappa>0$ such that $$\sum_{\alpha\in\bar{\mathcal{E}}}\|\mathbf{{z}}_{\alpha}^{n',w}-\mathbf{{z}}_{\alpha}^{n',w-1}\|\overset{\scriptsize{\mbox{Alg \ref{alg:Ext-Dyk} line 14}}}{=}\sum_{\alpha\in S_{n',w}\cap\bar{\mathcal{E}}}\|\mathbf{{z}}_{\alpha}^{n',w}-\mathbf{{z}}_{\alpha}^{n',w-1}\|\overset{\eqref{eq:basic-bdd-z-i-j}}{\leq}\kappa\|\mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}\|.\label{eq:bdd-z-i-j}$$ **Claim 1:** For $(n',w)\in\bar{S}_{n,1}$, there is a constant $C_{2}>1$ such that $$\begin{array}{c} \|\mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}\|+\underset{\alpha\in\bar{\mathcal{E}}}{\sum}\|\mathbf{{z}}_{\alpha}^{n',w}-\mathbf{{z}}_{\alpha}^{n',w-1}\|+\underset{i\in\mathcal{V}}{\overset{\phantom{i\in\mathcal{V}}}{\sum}}\|\mathbf{{z}}_{i}^{n',w}-\mathbf{{z}}_{i}^{n',w-1}\|\leq C_{2}\|\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}\|.\end{array}\label{eq:all-3-bdd}$$ We have $$\begin{aligned} \begin{array}{c} \underset{i\in\mathcal{V}}{\sum}[\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}]_{i}\end{array} & \overset{\scriptsize{\eqref{eq_m:all_acronyms}}}{=} &
\begin{array}{c} \underset{i\in\mathcal{V}}{\sum}\underset{\alpha\in S_{n',w}}{\sum}[\mathbf{{z}}_{\alpha}^{n',w}-\mathbf{{z}}_{\alpha}^{n',w-1}]_{i}\end{array}\nonumber \\ & \overset{\mathbf{{z}}_{((i,j),\bar{k})}\in D^{\perp},\eqref{eq:D-and-D-perp}}{=} & \begin{array}{c} \underset{i\in\mathcal{V}}{\sum}[\mathbf{{z}}_{i^{*}}^{n',w}-\mathbf{{z}}_{i^{*}}^{n',w-1}]_{i}\end{array}\nonumber \\ & \overset{\scriptsize{\mbox{Prop \ref{prop:sparsity}}}}{=} & \begin{array}{c} [\mathbf{{z}}_{i^{*}}^{n',w}-\mathbf{{z}}_{i^{*}}^{n',w-1}]_{i^{*}}.\end{array}\label{eq:for-norm-rate}\end{aligned}$$ Recall that the norm $\|\cdot\|$ always refers to the $2$-norm unless stated otherwise. By the equivalence of norms in finite dimensions, we can find a constant $c_{1}$ such that $$\begin{aligned} \begin{array}{c} \|\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}\|\end{array} & \geq & \begin{array}{c} c_{1}\underset{i\in\mathcal{V}}{\sum}\|[\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}]_{i}\|\end{array}\label{eq:bdd-z-i}\\ & \geq & \begin{array}{c} c_{1}\Big\|\underset{i\in\mathcal{V}}{\sum}[\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}]_{i}\Big\|\end{array}\nonumber \\ & \overset{\eqref{eq:for-norm-rate}}{=} & \begin{array}{c} c_{1}\|\mathbf{{z}}_{i^{*}}^{n',w}-\mathbf{{z}}_{i^{*}}^{n',w-1}\|\end{array}\nonumber \\ & \overset{\scriptsize{\mbox{Alg \ref{alg:Ext-Dyk} line 14}}}{=} & \begin{array}{c} c_{1}\underset{i\in\mathcal{V}}{\sum}\|\mathbf{{z}}_{i}^{n',w}-\mathbf{{z}}_{i}^{n',w-1}\|.\end{array}\nonumber \end{aligned}$$ Next, $\mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}\overset{\eqref{eq:from-10}}{=}\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}-(\mathbf{{z}}_{i^{*}}^{n',w}-\mathbf{{z}}_{i^{*}}^{n',w-1})$, so $$\begin{aligned} \begin{array}{c} \|\mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}\|\end{array} & \leq & \begin{array}{c}
\|\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}\|+\|\mathbf{{z}}_{i^{*}}^{n',w}-\mathbf{{z}}_{i^{*}}^{n',w-1}\|\end{array}\label{eq:bdd-v-H}\\ & \overset{\eqref{eq:bdd-z-i}}{\leq} & \begin{array}{c} \left(1+\frac{1}{c_{1}}\right)\|\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}\|.\end{array}\nonumber \end{aligned}$$ We can choose $\{\mathbf{{z}}_{\alpha}^{n',w}\}_{\alpha\in\bar{\mathcal{E}}}$ such that $$\sum_{\alpha\in S_{n',w}\cap\bar{\mathcal{E}}}[\mathbf{{z}}_{\alpha}^{n',w}-\mathbf{{z}}_{\alpha}^{n',w-1}]\overset{\scriptsize{\mbox{Alg \ref{alg:Ext-Dyk} line 14}}}{=}\sum_{\alpha\in\bar{\mathcal{E}}}[\mathbf{{z}}_{\alpha}^{n',w}-\mathbf{{z}}_{\alpha}^{n',w-1}]\overset{\eqref{eq:v-H-def}}{=}\mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}.\label{eq:decomp-v-H}$$ Combining , and together shows that there is a constant $C_{2}>1$ such that holds.$\hfill\triangle$ **Claim 2:** For $(n',w)\in\bar{S}_{n,2}$, there is a constant $C_{5}>0$ such that $$\begin{array}{c} \|\mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}\|+\underset{\alpha\in\bar{\mathcal{E}}}{\sum}\|\mathbf{{z}}_{\alpha}^{n',w}-\mathbf{{z}}_{\alpha}^{n',w-1}\|+\underset{i\in\mathcal{V}}{\overset{\phantom{i\in\mathcal{V}}}{\sum}}\|\mathbf{{z}}_{i}^{n',w}-\mathbf{{z}}_{i}^{n',w-1}\|\leq C_{5}.\end{array}\label{eq:all-3-bdd-2}$$ We have $$\begin{aligned} & & \begin{array}{c} \|\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}\|\end{array}\label{eq:to-bdd-i-star-terms}\\ & \geq & \begin{array}{c} c_{1}\Big\|\underset{i\in\mathcal{V}}{\sum}[\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}]_{i}\Big\|\end{array}\nonumber \\ & \overset{\eqref{eq_m:all_acronyms},\mathbf{{z}}_{((i,j),\bar{k})}\in D^{\perp},\eqref{eq:D-and-D-perp}}{=} & \begin{array}{c} c_{1}\Big\|\underset{i\in\mathcal{V}}{\sum}\underset{j\in\mathcal{V}}{\sum}[\mathbf{{z}}_{j}^{n',w}-\mathbf{{z}}_{j}^{n',w-1}]_{i}\Big\|\end{array}\nonumber \\ & \overset{\scriptsize{\mbox{Prop.
}\ref{prop:sparsity}}}{=} & \begin{array}{c} c_{1}\Big\|\underset{i\in\mathcal{V}}{\sum}[\mathbf{{z}}_{i}^{n',w}-\mathbf{{z}}_{i}^{n',w-1}]_{i}\Big\|\end{array}\nonumber \\ & = & \begin{array}{c} c_{1}\Big\|[\mathbf{{z}}_{i^{*}}^{n',w}-\mathbf{{z}}_{i^{*}}^{n',w-1}]_{i^{*}}+\underset{i\in\mathcal{V}_{3}\cup\mathcal{V}_{4}}{\sum}[\mathbf{{z}}_{i}^{n',w}-\mathbf{{z}}_{i}^{n',w-1}]_{i}\Big\|.\end{array}\nonumber \end{aligned}$$ From Theorem \[thm:convergence\](i), there is a constant $C_{3}>0$ such that $\|\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}\|\leq C_{3}$. By Theorem \[thm:convergence\](iii), there is a constant $C_{4}>0$ such that $$\|\mathbf{{z}}_{i}^{n',w}-\mathbf{{z}}_{i}^{n',w-1}\|\leq C_{4}\mbox{ for all }i\in\mathcal{V}_{3}\cup\mathcal{V}_{4}.\label{eq:final-i-bdd}$$ So $\|\mathbf{{z}}_{i^{*}}^{n',w}-\mathbf{{z}}_{i^{*}}^{n',w-1}\|=\|[\mathbf{{z}}_{i^{*}}^{n',w}-\mathbf{{z}}_{i^{*}}^{n',w-1}]_{i^{*}}\|\overset{\eqref{eq:to-bdd-i-star-terms},\eqref{eq:final-i-bdd}}{\leq}(|\mathcal{V}_{3}|+|\mathcal{V}_{4}|)C_{4}+\frac{1}{c_{1}}C_{3}$, and $$\begin{array}{c} \underset{i\in\mathcal{V}}{\sum}\|\mathbf{{z}}_{i}^{n',w}-\mathbf{{z}}_{i}^{n',w-1}\|\overset{\eqref{eq:to-bdd-i-star-terms},\eqref{eq:final-i-bdd}}{\leq}2(|\mathcal{V}_{3}|+|\mathcal{V}_{4}|)C_{4}+\frac{1}{c_{1}}C_{3}.\end{array}\label{eq:final-star-bdd}$$ Next, from , we have $$\begin{aligned} \begin{array}{c} \mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}\end{array} & = & \begin{array}{c} \mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}+[\mathbf{{z}}_{i^{*}}^{n',w}-\mathbf{{z}}_{i^{*}}^{n',w-1}]+\underset{i\in\mathcal{V}_{3}\cup\mathcal{V}_{4}}{\sum}[\mathbf{{z}}_{i}^{n',w}-\mathbf{{z}}_{i}^{n',w-1}]\end{array}\nonumber \\ \begin{array}{c} \|\mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}\|\end{array} & \leq & \begin{array}{c}
\|\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}\|+\|\mathbf{{z}}_{i^{*}}^{n',w}-\mathbf{{z}}_{i^{*}}^{n',w-1}\|+\underset{i\in\mathcal{V}_{3}\cup\mathcal{V}_{4}}{\sum}\|\mathbf{{z}}_{i}^{n',w}-\mathbf{{z}}_{i}^{n',w-1}\|\end{array}\nonumber \\ & \leq & \begin{array}{c} C_{3}+2(|\mathcal{V}_{3}|+|\mathcal{V}_{4}|)C_{4}+\frac{1}{c_{1}}C_{3}.\end{array}\label{eq:final-H-bdd}\end{aligned}$$ Combining , and , we can show that Claim 2 holds. $\hfill\triangle$ Since $\{\mathbf{{z}}_{\alpha}^{n,0}\}_{\alpha\in\bar{\mathcal{E}}}$ was chosen to satisfy , there is some $M>1$ such that $$\begin{array}{c} \underset{\alpha\in\bar{\mathcal{E}}}{\sum}\|\mathbf{{z}}_{\alpha}^{n,0}\|\overset{\eqref{eq:reset-z-i-j-3}}{\leq}M\|\mathbf{{v}}_{H}^{n,0}\|\overset{\eqref{eq:reset-z-i-j-4}}{\leq}M\left(\|\mathbf{{v}}_{H}^{1,0}\|+\underset{n'=1}{\overset{n-1}{\sum}}\underset{w=1}{\overset{\bar{w}}{\sum}}\|\mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}\|\right)\end{array}\label{eq:z-bdd-for-E}$$ Now for any $n\geq1$, we have $$\begin{aligned} \sum_{\alpha\in\bar{\mathcal{E}}\cup\mathcal{V}}\|\mathbf{{z}}_{\alpha}^{n,\bar{w}}\| & \leq & \sum_{\alpha\in\bar{\mathcal{E}}}\|\mathbf{{z}}_{\alpha}^{n,0}\|+\sum_{w=1}^{\bar{w}}\sum_{\alpha\in\bar{\mathcal{E}}}\|\mathbf{{z}}_{\alpha}^{n,w}-\mathbf{{z}}_{\alpha}^{n,w-1}\|\label{eq:2nd-big-ineq}\\ & & +\sum_{n'=1}^{n}\sum_{w=1}^{\bar{w}}\sum_{\alpha\in\mathcal{V}}\|\mathbf{{z}}_{\alpha}^{n',w}-\mathbf{{z}}_{\alpha}^{n',w-1}\|+\sum_{\alpha\in\mathcal{V}}\|\mathbf{{z}}_{\alpha}^{1,0}\|\nonumber \\ & \overset{\eqref{eq:z-bdd-for-E}}{\leq} & M\|\mathbf{{v}}_{H}^{1,0}\|+\sum_{\alpha\in\mathcal{V}}\|\mathbf{{z}}_{\alpha}^{1,0}\|+\sum_{w=1}^{\bar{w}}\left(\sum_{\alpha\in\bar{\mathcal{E}}}\|\mathbf{{z}}_{\alpha}^{n',w}-\mathbf{{z}}_{\alpha}^{n',w-1}\|\right)\nonumber \\ & & 
+\sum_{n'=1}^{n-1}\sum_{w=1}^{\bar{w}}\left(M\|\mathbf{{v}}_{H}^{n',w}-\mathbf{{v}}_{H}^{n',w-1}\|+\sum_{\alpha\in\mathcal{V}}\|\mathbf{{z}}_{\alpha}^{n',w}-\mathbf{{z}}_{\alpha}^{n',w-1}\|\right)\nonumber \\ & \overset{\eqref{eq:all-3-bdd},\eqref{eq:all-3-bdd-2}}{\leq} & M\|\mathbf{{v}}_{H}^{1,0}\|+\sum_{\alpha\in\mathcal{V}}\|\mathbf{{z}}_{\alpha}^{1,0}\|+MC_{2}\sum_{n'=1}^{n}\sum_{w=1}^{\bar{w}}\|\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}\|\nonumber \\ & & +MC_{5}\left(M_{1}\sqrt{n}+M_{2}\right).\nonumber \end{aligned}$$ By the Cauchy-Schwarz inequality, we have $$\sum_{n'=1}^{n}\sum_{w=1}^{\bar{w}}\|\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}\|\leq\sqrt{n\bar{w}}\sqrt{\sum_{n'=1}^{n}\sum_{w=1}^{\bar{w}}\|\mathbf{{v}}_{A}^{n',w}-\mathbf{{v}}_{A}^{n',w-1}\|^{2}}.\label{eq:sum-bdd-by-sqrt-n}$$ Since the second square root on the right-hand side of is bounded by Theorem \[thm:convergence\](i), we make use of to obtain the conclusion as needed. [^1]: C.H.J. Pang acknowledges grant R-146-000-214-112 from the Faculty of Science, National University of Singapore.
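The Cauchy-Schwarz step in eq.(\[eq:sum-bdd-by-sqrt-n\]) can be illustrated numerically: for any real sequence of length $n\bar{w}$, the sum of absolute values is bounded by $\sqrt{n\bar{w}}$ times the Euclidean norm. A minimal sketch (with arbitrary sample values):

```python
import math
import random

# Numerical illustration of the Cauchy-Schwarz step: for a sequence a of
# length n*w_bar, sum |a_i| <= sqrt(n*w_bar) * sqrt(sum a_i^2).
random.seed(0)
n, w_bar = 50, 7
a = [random.uniform(-1, 1) for _ in range(n * w_bar)]

lhs = sum(abs(x) for x in a)
rhs = math.sqrt(n * w_bar) * math.sqrt(sum(x * x for x in a))
print(lhs <= rhs)  # True
```

Equality would hold only when all $|a_i|$ are equal, which is why the bound degrades to the $\sqrt{n}$ growth seen in the final estimate.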
--- author: - 'Julien Baglio$^*_{}$,' - 'Cédric Weiland$^\dagger_{}$' bibliography: - 'hhh\_inverse-seesaw\_jhep-v2.bib' title: '**The triple Higgs coupling: A new probe of low-scale seesaw models**' --- Introduction ============ The CERN Large Hadron Collider (LHC) was home to one of the biggest discoveries in particle physics with the observation of a Higgs boson with a mass of around 125 GeV in 2012 [@Aad:2012tfa; @Chatrchyan:2012ufa], thanks to the data collected in Run 1 at 7 and 8 TeV. The Higgs boson is the remnant of the electroweak symmetry-breaking (EWSB) mechanism [@Higgs:1964ia; @Englert:1964et; @Higgs:1964pj; @Guralnik:1964eu] that generates the masses of the other fundamental particles and unitarizes the scattering of weak bosons [@Cornwall:1973tb; @LlewellynSmith:1973yud]. The Run 2 data collected in 2015 and 2016 at a center-of-mass energy of 13 TeV still show this Higgs boson to be compatible with the Standard Model (SM) hypothesis; nevertheless, we know that the SM cannot be the ultimate theory. In particular, the observation of neutrino oscillations, confirmed in 1998 at Super-Kamiokande [@Fukuda:1998mi], implies that neutrinos are massive, which cannot be explained in the SM framework and thus calls for an extension of the SM. One of the simplest possibilities to explain the non-zero neutrino masses and mixing is to add fermionic gauge singlets that will play the role of right-handed neutrinos. The addition of these heavy sterile neutrinos leads to the type I seesaw model and its various extensions [@Minkowski:1977sc; @Ramond:1979py; @GellMann:1980vs; @Yanagida:1979as; @Mohapatra:1979ia; @Schechter:1980gr; @Schechter:1981cv; @Mohapatra:1986aw; @Mohapatra:1986bd; @Bernabeu:1987gr; @Pilaftsis:1991ug; @Ilakovac:1994kj; @Akhmedov:1995ip; @Akhmedov:1995vm; @Barr:2003nn; @Malinsky:2005bi]. 
A very recent study summarizes the direct detection possibilities and indirect tests for heavy sterile neutrinos at lepton-lepton, proton-proton and lepton-proton colliders [@Antusch:2016ejd], see also references therein. In a recent article [@Baglio:2016ijw] we have presented the triple Higgs coupling $\lambda_{HHH}^{}$ as a new observable to test neutrino mass generating mechanisms in a regime of mass difficult to probe otherwise. The measurement of $\lambda_{HHH}^{}$ is one of the main goals of the high-luminosity run of the LHC (HL-LHC) as well as of future colliders, such as the electron-positron International Linear Collider (ILC) [@Baer:2013cma] or the Future Circular Collider in hadron-hadron mode (FCC-hh), a potential 100 TeV $pp$ collider (for the Higgs studies see reviews in refs. [@Arkani-Hamed:2015vfh; @Baglio:2015wcg; @Contino:2016spe]). It would be a direct probe of the shape of the scalar potential that triggers EWSB. Any deviation of this coupling from the SM prediction would then be a sign of new physics. In ref. [@Baglio:2016ijw] the study of neutrino effects on $\lambda_{HHH}^{}$ was done in the context of a simplified model with the SM plus one heavy Dirac neutrino. It was found that effects as large as $+30\%$ at one-loop could be obtained, at the limit of the currently foreseen $\sim 35\,\%$ sensitivity that the HL-LHC will have to the SM triple Higgs coupling, when combining ATLAS and CMS data [@CMS-PAS-FTR-15-002], but clearly measurable at the ILC [@Fujii:2015jha] or the FCC [@He:2015spf]. A comprehensive study in a realistic and renormalizable model of neutrino masses was still left to be done. In this article, we fill this gap and present the first analysis of Majorana neutrino effects on $\lambda_{HHH}^{}$. We work within the inverse seesaw (ISS) model [@Mohapatra:1986aw; @Mohapatra:1986bd; @Bernabeu:1987gr], a renormalizable low-scale seesaw model generating neutrino masses. 
After taking into account all relevant constraints, we obtain effects that can be as large as a $\sim +30\%$ increase of $\lambda_{HHH}^{}$, similar to the effects that we found in our previous article [@Baglio:2016ijw] using a simplified model. In the case of the ISS model, more heavy neutrinos are present, enhancing the effects as we expected, but the constraints on the model are stronger, bringing the net effect back to the simplified-model expectations. Such an effect would be clearly measurable at the ILC and at the FCC-hh and is at the limit of the currently foreseen sensitivity of the HL-LHC. This article is organized as follows. In Section \[sec:model\] we present the ISS model as well as the theoretical and experimental constraints that we consider. We give the technical details of our calculation in Section \[sec:calc\] and present the numerical analysis of the ISS one-loop corrections to $\lambda_{HHH}^{}$ in Section \[sec:pheno\]. A short conclusion is given in Section \[sec:conc\]. We present the details of the parameterization adopted for the light neutrino mass matrix in Appendix \[app:NLO\] and the analytical expressions of the one-loop corrections involving the neutrinos are collected in Appendix \[app:HHH\]. Model and constraints {#sec:model} ===================== While our calculation and the analytical results presented in Section \[sec:calc\] are applicable to all models with extra fermionic gauge singlets and Majorana neutrinos like the type I seesaw [@Minkowski:1977sc; @Ramond:1979py; @GellMann:1980vs; @Yanagida:1979as; @Mohapatra:1979ia; @Schechter:1980gr; @Schechter:1981cv] or the linear seesaw [@Akhmedov:1995ip; @Akhmedov:1995vm; @Barr:2003nn; @Malinsky:2005bi], we will focus in this work on the inverse seesaw (ISS) model for illustrative purposes. After introducing the model and the different parameterizations used to reproduce neutrino oscillation data, we will present the theoretical and experimental constraints considered in our study. 
The inverse seesaw model {#sec:ISS} ------------------------ One particular variant of the type I seesaw is the ISS model [@Mohapatra:1986aw; @Mohapatra:1986bd; @Bernabeu:1987gr] which has very interesting characteristics leading to a rich phenomenology. In the ISS model the suppression mechanism that guarantees the smallness of neutrino masses is the introduction of a slight breaking of lepton number in the singlet sector (composed of right-handed neutrinos $\nu_R^{}$ and new gauge singlets $X$ with opposite lepton number), in the form of a small Majorana mass $\mu_X^{}$ for the $X$ singlets, compared to the electroweak scale $v\sim 246$ GeV. This allows for large Yukawa couplings compatible with a low (TeV or even lower) mass for the seesaw mediators, contrary to the seesaw model of type I for example, where the mediators have a mass of the order of the GUT scale or the Yukawa couplings are very small. In the inverse seesaw, the additional terms to the SM Lagrangian are $$\label{LagrangianISS} \mathcal{L}_\mathrm{ISS} = - Y^{ij}_\nu \overline{L_{i}} \widetilde{\Phi} \nu_{Rj} - M_R^{ij} \overline{\nu_{Ri}^C} X_j - \frac{1}{2} \mu_{X}^{ij} \overline{X_{i}^C} X_{j} + h.c.\,,$$ where $\Phi$ is the Higgs field and $\widetilde \Phi=\imath \sigma_2 \Phi^*$, $i,j=1\dots 3$, $Y_\nu$ and $M_R$ are complex matrices and $\mu_{X}$ is a complex symmetric matrix whose norm is taken to be small since lepton number is assumed to be nearly conserved. In this work, we do not consider a possible Majorana mass term for the right-handed neutrinos $\nu_R$ since this extra parameter is not relevant to our study. It would only induce negligible corrections to the heavy neutrino masses and the observable that we consider conserves lepton number. 
Assuming 3 pairs of $\nu_R$ and $X$, the $9\times 9$ neutrino mass matrix reads after electroweak symmetry breaking in the basis $(\nu_L^C\,,\;\nu_R\,,\;X)$, $$\label{ISSmatrix} M_{\mathrm{ISS}} = \left( \begin{array}{c c c} 0 & m_D & 0 \\ m_D^T & 0 & M_R \\ 0 & M_R^T & \mu_X \end{array}\right)\,,$$ with the $3\times3$ Dirac mass matrix given by $m_D=Y_\nu \langle \Phi\rangle$. $M_{\mathrm{ISS}}$ being complex and symmetric, we can use the Takagi factorization to write $$U^T_\nu M_{\mathrm{ISS}} U_\nu = \text{diag}(m_{n_1},\dots,m_{n_9})\,, \label{Unu}$$ where $U_\nu$ is a $9\times 9$ unitary matrix. A specificity of the ISS model is the presence of a nearly conserved lepton number. The light neutrino masses are then suppressed by the small lepton number breaking parameter $\mu_X$ and the heavy Majorana neutrinos, which have nearly degenerate masses, form pseudo-Dirac pairs. This can clearly be seen if we consider only one generation. In the inverse seesaw limit $\mu_X \ll m_D, M_R$, we have one light neutrino $\nu$ and two heavy neutrinos $N_1\,,N_2$ with masses $$\begin{aligned} m_\nu &\simeq \frac{m_{D}^2}{m_{D}^2+M_{R}^2} \mu_X\,\label{mnu},\\ m_{N_1,N_2} &\simeq \sqrt{M_{R}^2+m_{D}^2} \mp \frac{M_{R}^2 \mu_X}{2 (m_{D}^2+M_{R}^2)}\,.\label{mN}\end{aligned}$$ With three generations, $M_{\mathrm{ISS}}$ can be diagonalized by block to give the light neutrino mass matrix, at leading order in the seesaw expansion parameter $m_D M_R^{-1}$, $$\label{MlightLO} M_{\mathrm{light}} \simeq m_D M_R^{T-1} \mu_X M_R^{-1} m_D^T\,.$$ The next order terms are given in Appendix \[app:NLO\]. 
This $3\times 3$ complex symmetric mass matrix is diagonalized by using a unitary matrix identified with the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix $U_{\rm PMNS}$ [@pontecorvo1957mesonium; @Maki:1962mu]: $$\label{mnulight} U_{\rm PMNS}^T M_{\mathrm{light}} U_{\rm PMNS} = \mathrm{diag}(m_{n_1}\,, m_{n_2}\,, m_{n_3})\equiv m_\nu\,,$$ with $m_{n_1}$, $m_{n_2}$ and $m_{n_3}$ the masses of the three light neutrinos. In order to reproduce low-energy neutrino data, different parameterizations can be introduced. Working in the basis where $M_R$ is diagonal with entries $M_i$, neutrino oscillations are generated by off-diagonal terms in $m_D$ and $\mu_X$. In a first parameterization, we can reconstruct $m_D$ as a function of neutrino oscillation data and high energy parameters. This leads to a Casas-Ibarra parameterization [@Casas:2001sr] adapted to the inverse seesaw $$m_D^T= V^\dagger \mathrm{diag}(\sqrt{M_1}\,,\sqrt{M_2}\,,\sqrt{M_3})\; R\; \mathrm{diag}(\sqrt{m_{n_1}}\,, \sqrt{m_{n_2}}\,, \sqrt{m_{n_3}}) U^\dagger_{\rm PMNS}\,, \label{CasasIbarraISS}$$ where $M_1$, $M_2$, $M_3$ are the positive square roots of the eigenvalues of $M M^\dagger$ and $M$ is defined by $$M=M_R \mu_X^{-1} M_R^T\,. \label{CasasIbarraISSM}$$ $V$ is a unitary matrix that diagonalizes $M$ according to $ M=V^\dagger \mathrm{diag}(M_1\,, M_2\,, M_3) V^*$ and $R$ is a complex orthogonal matrix that can be expressed as $$\label{R_Casas} R = \left( \begin{array}{ccc} c_{2} c_{3} & -c_{1} s_{3}-s_1 s_2 c_3& s_{1} s_3- c_1 s_2 c_3\\ c_{2} s_{3} & c_{1} c_{3}-s_{1}s_{2}s_{3} & -s_{1}c_{3}-c_1 s_2 s_3 \\ s_{2} & s_{1} c_{2} & c_{1}c_{2}\end{array} \right) \,,$$ with $c_i= \cos \theta_i$, $s_i = \sin\theta_i$, $\theta_i$ being arbitrary complex angles. The other possibility is to use the $\mu_X$-parameterization that was introduced in ref. 
[@Arganda:2014dta], giving $$\mu_X=M_R^T ~m_D^{-1}~ U_{\rm PMNS}^* m_\nu U_{\rm PMNS}^\dagger~ {m_D^T}^{-1} M_R\,.$$ Both parameterizations are based on eq.(\[MlightLO\]) where only the leading order term in the seesaw expansion is considered. While this is sufficient in most of the parameter space, these formulas fail to reproduce low-energy neutrino data when the active-sterile mixing becomes very large. Indeed, a large active-sterile mixing corresponds to a large seesaw expansion parameter $m_D M_R^{-1}$, which makes the next order terms presented in eq.(\[NLOterms\]) relevant. Including the next order terms in the seesaw expansion in the $\mu_X$-parameterization gives $$\begin{aligned} \label{muXparam} \begin{split} \mu_X \simeq & \left(\mathbf{1}-\frac{1}{2} M_R^{*-1} m_D^\dagger m_D M_R^{T-1} \right)^{-1} M_R^T m_D^{-1} U_{\rm PMNS}^* m_\nu U_{\rm PMNS}^\dagger m_D^{T-1} M_R\, \times\\ & \left(\mathbf{1}-\frac{1}{2} M_R^{-1} m_D^T m_D^* M_R^{\dagger-1}\right)^{-1}\,, \end{split}\end{aligned}$$ which allows one to reproduce neutrino oscillation data more accurately. The complete derivation of this formula is given in Appendix \[app:NLO\]. Finally, we need to specify the couplings between SM particles and the new fields that are relevant for our calculation of the corrections to the triple Higgs coupling $\lambda_{HHH}$. Following ref. [@Ilakovac:1994kj], we introduce the $B$ and $C$ matrices defined as $$\begin{aligned} B_{ij} &= \sum_{k=1}^3 V_{L\,ki}^* U_{\nu\, kj}^{*}\,,\\ C_{ij} &= \sum_{k=1}^3 U_{\nu\, k i} U_{\nu\, kj}^{*}\,,\end{aligned}$$ where $V_{L}$ is the unitary matrix that diagonalizes the charged lepton mass matrix $M_\mathrm{charged}$ according to $$V_L^\dagger\; M_\mathrm{charged}\; V_R = \mathrm{diag}(m_e\,, m_\mu\,, m_\tau)\,,$$ with $V_R$ another unitary matrix. 
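Unitarity of $U_\nu$ fixes simple algebraic identities among the $B$ and $C$ matrices ($BB^\dagger=\mathbf{1}_3$, $B^\dagger B=C$, so $C$ is a Hermitian projector). These can be verified with a randomly generated unitary matrix standing in for $U_\nu$, taking $V_L=\mathbf{1}$ (a charged-lepton mass matrix that is already diagonal) for simplicity:

```python
import numpy as np

# Check the B, C matrix identities that follow from the unitarity of U_nu,
# with V_L = 1 and a random 9x9 unitary matrix as a stand-in for U_nu.
rng = np.random.default_rng(0)
A = rng.normal(size=(9, 9)) + 1j * rng.normal(size=(9, 9))
U_nu, _ = np.linalg.qr(A)                     # QR gives a unitary Q

B = np.conj(U_nu[:3, :])                      # B_ij = U*_{ij} when V_L = 1
C = U_nu[:3, :].T @ np.conj(U_nu[:3, :])      # C_ij = sum_{k<=3} U_ki U*_kj

print(np.allclose(B @ B.conj().T, np.eye(3)),  # B B^dagger = 1_3
      np.allclose(B.conj().T @ B, C),          # B^dagger B = C
      np.allclose(C @ C, C),                   # C is a projector
      np.allclose(C, C.conj().T))              # C is Hermitian
```

These identities are useful consistency checks on any numerical implementation of the neutrino mixing sector.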
In the Feynman-’t Hooft gauge and in the mass basis, the relevant interaction terms in the Lagrangian are $$\begin{aligned} \mathcal{L}_{\rm int}^{Z} &= -\frac{g_2}{4 \cos \theta_W} \bar n_i \slashed{Z} \left[C_{ij} P_L - C_{ij}^* P_R \right] n_j\,,\nonumber\\ \mathcal{L}_{\rm int}^{H} &= -\frac{g_2}{4 m_W} H \bar n_i \left[(C_{ij} m_{n_i} + C_{ij}^* m_{n_j}) P_L + (C_{ij} m_{n_j}+C_{ij}^* m_{n_i}) P_R \right] n_j\,,\nonumber\\ \mathcal{L}_{\rm int}^{G^0} &= \frac{\imath g_2}{4 m_W} G^0 \bar n_i \left[- ( C_{ij} m_{n_i} +C_{ij}^* m_{n_j}) P_L + (C_{ij} m_{n_j}+C_{ij}^* m_{n_i}) P_R \right] n_j\,,\nonumber\\ \mathcal{L}_{\rm int}^{W^{\pm}} &= -\frac{g_2}{\sqrt{2}} \bar{l_i} B_{ij} \slashed{W}^{-} P_L n_j + h.c\,,\nonumber\\ \mathcal{L}_{\rm int}^{G^{\pm}} &= \frac{-g_2}{\sqrt{2} m_W} G^{-}\left[\bar{l_i} B_{ij} (m_{l_i} P_L - m_{n_j} P_R) n_j \right] + h.c\,, \label{eqn:iss-lagrangian}\end{aligned}$$ where $g_2$ is the $\mathrm{SU}(2)$ coupling constant, $\theta_W$ is the weak mixing angle and $P_L$, $P_R$ are respectively $(1-\gamma_5)/2$ and $(1+\gamma_5)/2$. Constraints on the ISS model {#sec:constraints} ---------------------------- Strong experimental and theoretical constraints on the parameter space of the model have to be considered, in particular on the size of the active-sterile mixing. Our use of the modified Casas-Ibarra or $\mu_X$-parameterization allows us to reproduce neutrino oscillation data. In our numerical study, we explicitly check the agreement with the neutrino masses and mixing obtained in the global fit [NuFIT]{} 3.0 [@Esteban:2016qun]. The light neutrino masses are also chosen to agree with the Planck result [@Ade:2015xua] $$\sum_{i=1}^{3} m_{n_i}^{} < 0.23\,\mathrm{eV}\,. 
\label{eq:planck}$$ The mixing between the active and sterile neutrinos will also induce deviations from unitarity in the $3\times 3$ sub-matrix $\tilde U_\mathrm{PMNS}^{}$ of the full $9\times 9$ mixing matrix $U_\nu^{}$, that controls the mixing between the light neutrinos [@Langacker:1988ur; @Antusch:2006vwa]. Using a polar decomposition, this square complex matrix can be expressed as $$\tilde U_\mathrm{PMNS}=(I-\eta)\,U_\mathrm{PMNS}\,,$$ where $\eta$ is a Hermitian matrix that encodes the deviations from unitarity. We have included the following constraints from a recent fit [@Fernandez-Martinez:2016lgt] to electroweak precision observables, tests of CKM unitarity and tests of lepton universality, $$\begin{aligned} \label{EWPOconstraints} \sqrt{2|\eta_{ee}|}&<0.050\,, & \sqrt{2|\eta_{e\mu}|}<0.026\,, \nonumber \\ \sqrt{2|\eta_{\mu\mu}|}&<0.021\,, & \sqrt{2|\eta_{e\tau}|}<0.052\,, \nonumber \\ \sqrt{2|\eta_{\tau\tau}|}&<0.075\,, & \sqrt{2|\eta_{\mu\tau}|}<0.035\,.\end{aligned}$$ In the presence of a large active-sterile mixing, the off-diagonal entries in the neutrino Yukawa couplings $Y_\nu$ might also induce large branching ratios for lepton flavor violating (LFV) decays. We have implemented the analytical expressions from ref. [@Ilakovac:1994kj] for the LFV radiative decays and the LFV three-body decays. The corresponding experimental upper limits on the LFV radiative decays [@TheMEG:2016wtm; @Aubert:2009ag] are $$\begin{aligned} \mathrm{Br}(\mu^+\rightarrow e^+ \gamma) &< 4.2\times10^{-13}\,, \\ \mathrm{Br}(\tau^\pm \rightarrow e^\pm \gamma) &< 3.3\times10^{-8}\,, \\ \mathrm{Br}(\tau^\pm \rightarrow \mu^\pm \gamma) &< 4.4\times10^{-8}\,,\end{aligned}$$ at $90\%$ C.L. 
while the upper limits on LFV three-body decays [@Bellgardt:1987du; @Hayasaka:2010np] are $$\begin{aligned} \mathrm{Br}(\mu^+\rightarrow e^+e^+e^-) &< 1.0\times10^{-12}\,, \\ \mathrm{Br}(\tau^-\rightarrow e^- e^+ e^- ) &< 2.7\times10^{-8} \,, \\ \mathrm{Br}(\tau^-\rightarrow \mu^- \mu^+ \mu^-) &< 2.1\times10^{-8} \,, \\ \mathrm{Br}(\tau^-\rightarrow e^- \mu^+ \mu^-) &< 2.7\times10^{-8} \,, \\ \mathrm{Br}(\tau^-\rightarrow \mu^- e^+ e^-) &< 1.8\times10^{-8} \,, \\ \mathrm{Br}(\tau^-\rightarrow e^+ \mu^- \mu^-) &< 1.7\times10^{-8} \,, \\ \mathrm{Br}(\tau^-\rightarrow \mu^+ e^- e^-) &< 1.5\times10^{-8} \,,\end{aligned}$$ at $90\%$ C.L. We will also require in our study that Yukawa couplings are perturbative since the complex angles of the R matrix in the Casas-Ibarra parameterization or the use of $Y_\nu$ as an input parameter in the $\mu_X$-parameterization can lead to arbitrarily large entries in $Y_\nu$. We will ensure the perturbativity of the Yukawa couplings by requiring $$\frac{|Y_{ij}|^2}{4 \pi} < 1.5\,,$$ for $i,j=1\dots 3$. Since the decay width of heavy neutrinos grows like $m_n^3$ when $m_n \gg m_H$, we also require that their decay width verifies, for $i=4\dots 9$, $$\Gamma_{n_i}<0.6 m_{n_i}\,, \label{eq:neutwith}$$ in order for the quantum state to be a definite particle. The formulae used to calculate the heavy neutrino widths are taken from ref. [@Atre:2009rg]. Framework of the calculation {#sec:calc} ============================ Our calculation is done in the Feynman-’t Hooft gauge and we use the Lagrangian of eq.(\[eqn:iss-lagrangian\]) for the neutrino interactions. 
The SM scalar potential is written as $$\begin{aligned} V(\Phi) = & -\mu_{}^2 |\Phi|_{}^2 + \lambda |\Phi|_{}^4,\end{aligned}$$ with the Higgs field $\Phi$ given by $$\begin{aligned} \Phi = & \frac{1}{\sqrt{2}} \left(\begin{matrix} \sqrt{2} G^+\\v+H+\imath G^0\end{matrix}\right).\end{aligned}$$ $H$ stands for the Higgs boson, $G_{}^0$ the neutral Goldstone boson, $G_{}^\pm$ the charged Goldstone bosons and $v\simeq 246$ GeV is the vacuum expectation value (vev) of the Higgs field. We can define the Higgs tadpole $t_H^{}$, the Higgs mass $M_H^{}$ and the triple Higgs coupling $\lambda_{HHH}^{}$ as follows, $$\begin{aligned} t_H^{} = & -\left\langle \frac{\partial V}{\partial H} \right\rangle,\nonumber\\ M_H^{ 2} = & \phantom{-} \left\langle \frac{\partial_{}^2 V}{\partial H_{}^2} \right\rangle,\\ \lambda_{HHH}^{} = & -\left\langle \frac{\partial_{}^3 V}{\partial H_{}^3} \right\rangle.\nonumber\end{aligned}$$ This helps to redefine the triple Higgs coupling using $t_H^{}$, $M_H^{}$ and $v$ as input parameters, $$\begin{aligned} \lambda_{HHH}^{} = & - \frac{3 M_H^2}{v} \left( 1 + \frac{t_H^{}}{v M_H^2}\right).\end{aligned}$$ At tree-level, $t_H^{}=0$ and we recover the usual definition of the tree-level triple Higgs coupling, $$\begin{aligned} \lambda^{0} = - \frac{3 M_H^2}{v}.\end{aligned}$$ For the one-loop corrections to the triple Higgs coupling, our set of input parameters that need to be renormalized in the on-shell (OS) scheme will be the following: $$\begin{aligned} M_H^{},\ M_W^{},\ M_Z^{},\ e,\ t_H^{}.\end{aligned}$$ We use the following relations to define the Higgs vev $v$ and the weak angle $\theta_W^{}$, $$\begin{aligned} v & = 2\, \frac{M_W^{} \sin\theta_W^{}}{e},\nonumber\\ \sin_{}^2\theta_W^{} & = 1-\frac{M_W^2}{M_Z^2},\end{aligned}$$ as well as $e^2_{}= 4\pi\alpha$. 
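To fix orders of magnitude, these input relations can be evaluated directly; a minimal numerical sketch with the on-shell input values used in the numerical analysis below:

```python
import math

# Derived electroweak quantities and the tree-level triple Higgs coupling
# lambda^0 = -3 M_H^2 / v, from the inputs M_W, M_Z, M_H and alpha(M_Z^2).
M_W, M_Z, M_H = 80.385, 91.1876, 125.0   # GeV
alpha = 1.0 / 127.934                    # alpha(M_Z^2)

s2w = 1.0 - M_W**2 / M_Z**2              # sin^2(theta_W), ~0.223
e = math.sqrt(4.0 * math.pi * alpha)
v = 2.0 * M_W * math.sqrt(s2w) / e       # ~242 GeV with alpha(M_Z^2)
lam0 = -3.0 * M_H**2 / v                 # tree-level coupling, ~-194 GeV

print(s2w, v, lam0)
```

Note that with $\alpha(M_Z^2)$ as input the vev comes out slightly below the frequently quoted $v\simeq 246$ GeV, which is obtained from $G_F$; the difference is a higher-order (scheme) effect.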
We require that we have no tadpoles at one loop: $$t^{(1)}_H + \delta t_H^{} = 0 \Rightarrow \delta t_H^{} = - t^{(1)}_H,$$ with $t^{(1)}_H$ being the one-loop un-renormalized contributions to $t_H^{}$. For the other parameters we introduce their counter-terms as follows, $$\begin{aligned} & M_H^2 \rightarrow M_H^2 + \delta M_H^2,\nonumber\\ & M_W^2 \rightarrow M_W^2 + \delta M_W^2,\nonumber\\ & M_Z^2 \rightarrow M_Z^2 + \delta M_Z^2,\\ & e \rightarrow (1+\delta Z_e^{})\, e,\nonumber\\ & H \rightarrow \sqrt{Z_H^{}} H = \left(1+ \frac12 \delta Z_H^{}\right) H.\nonumber\end{aligned}$$ The full renormalized one-loop triple Higgs coupling is finally $$\lambda^{1r}_{HHH}(q_H^*) = \lambda^0 + \lambda_{HHH}^{(1)}(q_H^*) + \delta\lambda_{HHH}^{}\,,$$ with $$\begin{aligned} \delta\lambda_{HHH}^{} = \lambda_{}^0 & \left[\frac32 \delta Z_H^{} + \delta t_H^{} \frac{e}{2 M_W^{} \sin\theta_W^{} M_H^2} +\delta Z_e^{} + \frac{\delta M_H^2}{M_H^2} - \frac{\delta M_W^2}{2 M_W^2} \right.\nonumber\\ & \left. + \frac12 \frac{\cos_{}^2\theta_W^{}}{\sin_{}^2\theta_W^{}} \left( \frac{\delta M_W^2}{M_W^2}-\frac{\delta M_Z^2}{M_Z^2}\right) \right],\end{aligned}$$ and $\lambda^{(1)}_{HHH}(q_H^*)$ stands for the un-renormalized one-loop contributions to the process $H^*_{}\to H H$ with the momentum $q_H^*$ for the off-shell Higgs boson $H^*_{}$. For the numerical analysis carried out in the next section, we define the deviation induced by the BSM contribution $\Delta^{\rm BSM}_{}$ as $$\begin{aligned} \Delta^{\rm BSM}_{} = \frac{1}{\lambda^{1r, {\rm SM}}_{HHH}} \left(\lambda^{1r}_{HHH} - \lambda^{1r, {\rm SM}}_{HHH} \right),\end{aligned}$$ where $\lambda^{1r, {\rm SM}}_{HHH}$ stands for the renormalized one-loop SM contribution without the light neutrinos. 
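The counterterm combination $\delta\lambda_{HHH}^{}$ above is straightforward to transcribe into code. The sketch below (function and argument names are ours, not part of any published implementation) implements the bracketed expression and checks the trivial limit in which all counterterms vanish:

```python
import math

# Transcription of the counterterm delta_lambda_HHH given in the text;
# the arguments are the tree-level coupling, the counterterms and the
# on-shell inputs (sw2 = sin^2 theta_W).
def delta_lambda_HHH(lam0, dZH, dtH, dZe, dMH2, dMW2, dMZ2,
                     e, MW, MZ, MH, sw2):
    cw2 = 1.0 - sw2
    MW2, MH2 = MW**2, MH**2
    MZ2 = MZ**2
    return lam0 * (1.5 * dZH
                   + dtH * e / (2.0 * MW * math.sqrt(sw2) * MH2)
                   + dZe
                   + dMH2 / MH2
                   - dMW2 / (2.0 * MW2)
                   + 0.5 * (cw2 / sw2) * (dMW2 / MW2 - dMZ2 / MZ2))

# With vanishing counterterms the correction vanishes identically,
# and a 100% Higgs mass counterterm (dMH2 = MH^2) shifts by lam0.
print(delta_lambda_HHH(-190.0, 0, 0, 0, 0, 0, 0, 0.31, 80.4, 91.2, 125.0, 0.223))
```

Such a transcription is convenient for checking the renormalization-scale independence of the final result term by term.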
Introducing the notation $\Sigma_{XY}^{}$ for the self-energy of the process $X\to Y$, we use the usual OS conditions for $M_W^{}$, $M_Z^{}$ and $M_H^{}$, $$\begin{aligned} \delta M_W^2 & = \operatorname{Re}\Sigma^T_{WW}(M_W^2),\nonumber\\ \delta M_Z^2 & = \operatorname{Re}\Sigma^T_{ZZ}(M_Z^2),\nonumber\\ \delta M_H^2 & = \operatorname{Re}\Sigma_{HH}^{}(M_H^2).\end{aligned}$$ For the electric charge $e$ we use the following condition to be independent of the light fermion masses [@Denner:1991kt; @Nhung:2013lpa], $$\begin{aligned} \delta Z_e^{} = \frac{\sin\theta_W^{}}{\cos\theta_W^{}}\frac{\operatorname{Re}\Sigma^T_{\gamma Z}(0)}{M_Z^2} - \frac{\operatorname{Re}\Sigma^T_{\gamma \gamma}(M_Z^2)}{M_Z^2}.\end{aligned}$$ For the Higgs field renormalization we have $$\begin{aligned} \delta Z_H^{} & = -\operatorname{Re}\left.\frac{\partial \Sigma_{HH}^{} (k_{}^2)}{\partial k_{}^2}\right|^{}_{k_{}^2=M_H^2}.\end{aligned}$$ The neutrino interactions induce changes in the $W$ and $Z$ self-energies as well as in the Higgs tadpole, self-energy and self-couplings. We display in Fig. \[fig:feyndiag\] the Feynman diagrams for the neutrino contributions to the $W$, $Z$ and Higgs bosons self-energies, the Higgs tadpole and the one-loop un-renormalized triple Higgs coupling. We also collect in Appendix \[app:HHH\] the analytical expressions of the neutrino contributions to $\delta M_W^{}$, $\delta M_Z^{}$, $\delta t_H^{}$, $\Sigma_{HH}^{}$ and $\lambda_{HHH}^{(1)}$. They were obtained using [FeynArts 2.7]{} [@Hahn:2000kx] and [FormCalc 7.5]{} [@Hahn:1998yk], in which we have implemented our own Model File for the ISS model. The scalar and tensor loop functions [@'tHooft:1978xw; @Passarino:1978jh] have been evaluated with [LoopTools 2.13]{} [@vanOldenborgh:1990yc; @Hahn:1998yk; @Hahn:2006qw]. 
We have checked numerically that the UV divergences cancel in the final result and that the renormalized one-loop triple Higgs coupling does not depend on the choice of the renormalization scale. ![Feynman diagrams for the neutrino contributions to the one-loop $W$ and $Z$ boson self-energies (upper line) and the one-loop Higgs boson self energy, tadpole and triple Higgs coupling (lower line). In all diagrams, the indices i/j/k run from 1 to 9.[]{data-label="fig:feyndiag"}](./figures/feyndiag_iss.pdf){width="\textwidth"} Numerical results {#sec:pheno} ================= We present in this section the phenomenological study of the one-loop corrected triple Higgs coupling and the dependence of the corrections induced by the heavy neutrinos on the relevant input parameters of the ISS model. The SM parameters are taken from the Particle Data Group (PDG) [@Olive:2016xmw] (with the exception of the SM Higgs boson mass) and read as $$\begin{aligned} m_t^{\rm pole} = 173.5~\text{GeV},\ \ & m_b^{\rm pole} = 4.77~\text{GeV},\ \ & m_c^{\rm pole} = 1.42~\text{GeV},\nonumber\\ M_W^{} = 80.385~\text{GeV},\ \ & M_Z^{} = 91.1876~\text{GeV},\ \ & M_H^{} = 125~\text{GeV},\\ m_e^{} = 0.511~\text{MeV},\ \ & m_\mu^{} = 105.7~\text{MeV},\ \ & m_\tau^{} = 1.777~\text{GeV},\nonumber\\ \phantom{m_e^{} = 0.511~\text{MeV},\ \ } & \alpha^{-1}_{}(M_Z^2) = 127.934.\ \ & \phantom{m_\tau^{} = 1.777~\text{GeV},} \nonumber\end{aligned}$$ The up-, down- and strange-quark masses are also taken from the PDG, but their impact on the calculation is negligible so that we do not list them here. The lightest neutrino mass is chosen as $$\begin{aligned} m_{n_1}^{} = 0.01~\text{eV},\end{aligned}$$ to comply with cosmological constraints as stated in eq.(\[eq:planck\]). We have explicitly checked that choosing a smaller mass for $n_1$ does not qualitatively modify our results and would only induce negligible numerical corrections to our final conclusions. 
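With $m_{n_1}^{} = 0.01$ eV, the Planck bound of eq.(\[eq:planck\]) is comfortably satisfied. A minimal check, assuming normal ordering and ballpark mass-squared splittings (the values below are approximate stand-ins for the NuFIT fit used in the text, not the exact fit output):

```python
import math

# Light neutrino mass sum for m_n1 = 0.01 eV, assuming normal ordering
# with approximate splittings dm21^2 ~ 7.5e-5 eV^2, dm31^2 ~ 2.5e-3 eV^2.
m1 = 0.01                        # eV
dm21sq, dm31sq = 7.5e-5, 2.5e-3  # eV^2 (approximate)
m2 = math.sqrt(m1**2 + dm21sq)
m3 = math.sqrt(m1**2 + dm31sq)

total = m1 + m2 + m3
print(total)  # ~0.074 eV, well below the 0.23 eV Planck bound
```

This also illustrates why a smaller $m_{n_1}^{}$ changes little: the sum is dominated by the splitting-driven $m_{n_3}^{}$.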
We choose the normal ordering for the neutrino masses and the light neutrino mixing parameters are taken from NuFIT 3.0 [@Esteban:2016qun], with $\delta_\mathrm{CP}=0$. Since the contributions of the light neutrinos are negligible and flavor constraints do not play an important role in our final conclusion, we do not expect our conclusion to change if we consider the inverted ordering. In our study, we will focus on two choices for the off-shell Higgs momentum, $q^*_H=500$ GeV and $q^*_H=2500$ GeV. These choices follow from the behavior of the BSM corrections that exhibit a similar dependence on $q^*_H$ between the ISS model and the simplified Dirac 3+1 model that was studied in ref. [@Baglio:2016ijw]. In particular, the maximal negative deviation was obtained for $q^*_H=500$ GeV while the maximal positive deviation was obtained for large off-shell Higgs momenta. To facilitate the comparison between the Majorana ISS case and the simplified Dirac case we take the same fixed values of $q^*_H$ as in ref. [@Baglio:2016ijw] in all the scans. Casas-Ibarra parameterization ----------------------------- In order to get an insight into the parameter space of the ISS model we perform a scan in a Casas-Ibarra parameterization, see eq.(\[CasasIbarraISS\]). The goal is to get an idea of the corrections that are obtained in this parameterization and the impact of the constraints on the model. We perform a random scan using a flat prior on the three real rotation angles $\theta_{1/2/3}^{}$ of the orthogonal matrix $R$, and a logarithmic prior on both the lepton-number-violating parameter $\mu_X^{}$, with the Majorana mass matrix taken as $\mu_X^{}\,{\rm I}_3^{}$, and the mass parameter $M_R^{}$, with the matrix taken as $M_R^{}\,{\rm I}_3^{}$. We take all mass and rotation matrices to be real in order to avoid generating CP violation. 
We use 180 000 randomly generated points in the following parameter range, $$\begin{aligned} 0 & \leq \theta_i^{} & \leq 2\pi, \ (i=1\dots 3),\nonumber\\ 0.2~\text{TeV} & \leq M_R^{} & \leq 1000~\text{TeV}, \\ 7.00\times 10^{-4}_{}~\text{eV} & \leq \mu_X^{} & \leq 8.26\times 10^{4}~\text{eV}. \nonumber\end{aligned}$$ The range choice for the parameter $\mu_X^{}$ follows $\displaystyle\mu_X^{\rm min[max]} = \left(M_R^{\rm min[max]}\right)^2_{} \frac{m_{n_1^{}}^{}}{3\pi v^2_{} [2 v^2_{}]}$, see eq.(\[mnu\]). Heavy neutrino masses below 200 GeV are better probed with direct searches at colliders [@Antusch:2006vwa] (see also ref. [@Antusch:2016ejd] and references therein), thus we do not take $M_R^{} < 0.2$ TeV. ![Random scan of the parameter space with 180 000 points in the Casas-Ibarra parameterization as a function of $M_R^{}$ (in TeV) and of $\mu_X^{}$ (in eV). Upper row: Map of the points according to the constraints on the model. The vermilion (solid) line stands for the LFV constraints and the black (dashed) line stands for the constraints coming from neutrino oscillations. All points below these lines are excluded. In green, the points that pass all the constraints; in yellow, the points that are excluded by theory constraints; in blue, the points that are excluded by EWPO; in purple, the points that are excluded both by EWPO and theory constraints. Lower row: Map of $\Delta^{\rm BSM}$ correction (in percent). 
In black: $\Delta^{\rm BSM}< -15\%$; in orange: $-15\% \leq \Delta^{\rm BSM}< -5\%$; in light blue: $-5\% \leq \Delta^{\rm BSM}< 0\%$; in green: $0\% \leq \Delta^{\rm BSM}< 5\%$; in vermilion: $5\% \leq \Delta^{\rm BSM}< 15\%$; in blue: $15\% \leq \Delta^{\rm BSM}< 25\%$; in yellow: $25\% \leq \Delta^{\rm BSM}< 35\%$; in purple: $\Delta^{\rm BSM}> 35\%$.[]{data-label="fig:casas-ibarra"}](./figures/hhh_scan_casas-ibarra.pdf) ![](./figures/hhh_scan_casas-ibarra-deltaBSM.pdf) The result of our scan is displayed in Fig. \[fig:casas-ibarra\] (upper row) in the $M_R^{}-\mu_X^{}$ plane. The top-right corner (in yellow) of the parameter space is excluded by theory constraints, essentially the perturbativity of the neutrino Yukawa couplings. The region in light blue is excluded by EWPO, while the region in purple is excluded both by EWPO and theory constraints. 
The dashed black line displays the limit coming from neutrino oscillations. This comes from a breakdown of the leading-order Casas-Ibarra parameterization when the active-sterile mixing is too large, as is evidenced by the flat behavior in $M_R$. In variants of the type I seesaw, including the inverse seesaw, the active-sterile mixing is proportional to the seesaw expansion parameter $m_D M_R^{-1}$. However, in the Casas-Ibarra parameterization, $m_D$ grows linearly with $M_R$, see eqs.(\[CasasIbarraISS\])-(\[CasasIbarraISSM\]). As a consequence, $m_D M_R^{-1}$ appears constant in $M_R$ but increases when $\mu_X$ decreases according to $$\frac{m_D}{M_R}\sim \sqrt{\frac{m_\nu}{\mu_X}}\,. \label{scalingCI}$$ The breakdown happens for $\mu_X\sim3$ eV, which in turn roughly corresponds to $m_D/M_R\sim0.1$ when taking $m_\nu=m_{n_3}$. It is worth noting that this value can be predicted from eq.(\[NLOterms\]), where next-order corrections to the light neutrino mass matrix appear at $\mathcal{O}(m_D^2/M_R^2)$, and from the current error on $\Delta m^2$ being at the percent level [@Esteban:2016qun]. The most stringent experimental constraint comes from LFV observables, as displayed by the solid vermilion line. The top-left corner (in green) is allowed by all the constraints. This scan has to be compared to the map of $\Delta^{\rm BSM}_{}$ displayed in Fig. \[fig:casas-ibarra\] (lower row), fixing the off-shell Higgs momentum at $q^*_H=2500$ GeV. The parameter space passing all the constraints only contains corrections up to $\sim +1\%$. The most interesting regions are in vermilion, blue, yellow and purple, where $\Delta^{\rm BSM}_{}$ reaches $+15\%$, $+25\%$, $+35\%$ and more than $+35\%$, respectively. In order to enter these regions, one needs to evade the LFV constraints as much as possible. Following ref. [@Arganda:2014dta] we will investigate this region using the $\mu_X^{}$-parameterization and start with the case of degenerate heavy neutrinos. 
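The location of this breakdown can be checked with a two-line numerical estimate. The sketch below inverts eq.(\[scalingCI\]) at the quoted breakdown mixing $m_D/M_R\sim0.1$; the light-neutrino mass $m_\nu \approx 0.05$ eV is an illustrative value of order $m_{n_3}$, not a number taken from the text.

```python
# Rough numerical check of the Casas-Ibarra breakdown estimate,
# inverting m_D/M_R ~ sqrt(m_nu/mu_X) (eq. scalingCI).
m_nu = 0.05     # eV, heaviest light-neutrino mass (illustrative assumption)
mixing = 0.1    # m_D/M_R at which the seesaw expansion breaks down

mu_X_breakdown = m_nu / mixing**2   # eV
print(f"mu_X at breakdown ~ {mu_X_breakdown:.1f} eV")  # ~5 eV
```

This lands at the same order of magnitude as the $\mu_X\sim3$ eV observed in the scan, as expected for an order-of-magnitude inversion.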
Degenerate heavy neutrinos -------------------------- The scan in the Casas-Ibarra parameterization displayed in Fig. \[fig:casas-ibarra\] shows that the most stringent constraints come from LFV observables. In order to maximize the effects on the triple Higgs coupling we want to evade these constraints and we require for example $(Y_\nu^{} Y_{\nu}^\dagger)_{1 2}^{} = 0$, since decays that involve a $\mu-e$ transition usually give the strongest constraints. This leads to either a diagonal Yukawa matrix or a Yukawa texture as defined in ref. [@Arganda:2014dta], with degenerate heavy neutrinos, $M_R^{} \propto {\rm I}_3^{}$. We investigate in this sub-section the case of degenerate heavy neutrinos with the texture $Y_{\tau\mu}^{(1)}$ taken from ref. [@Arganda:2014dta] and defined below, $$\begin{aligned} Y^{(1)}_{\tau\mu} = |Y_\nu^{}| \left( \begin{matrix} 0 & 1 & -1\\ 0.9 & 1 & 1\\ 1 & 1 & 1 \end{matrix}\right). \label{eq:texture}\end{aligned}$$ We display in Fig. \[fig:texture\] (left) the two-dimensional scan in the plane $(M_R^{},|Y_\nu^{}|)$, where $M_R^{}$ represents the common scaling factor of the $3\times 3$ diagonal mass matrix $M_R^{}$. The off-shell Higgs momentum is again fixed at $q^*_H=2500$ GeV. A large part of the parameter space is excluded and a maximum of $\Delta^{\rm BSM} \sim +5\%$ can be reached at $M_R \simeq 13$ TeV. When compared to the Casas-Ibarra scan, this is the expected order of magnitude for the correction when entering the vermilion region, which is excluded by LFV observables only. For large $M_R^{}$ the most important constraint is the neutrino width of eq.(\[eq:neutwith\]). For lower $M_R^{}$ the constraints are driven by the violation of the unitarity of the $3\times 3$ matrix $\tilde{U}_{\rm PMNS}^{}$ controlling the mixing between the light neutrinos. 
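The LFV-motivated requirement $(Y_\nu^{} Y_\nu^\dagger)_{12}^{} = 0$ can be verified directly for this texture. The following numpy sketch (our own check, with flavor rows ordered $e,\mu,\tau$) shows that both the $e$-$\mu$ and $e$-$\tau$ entries of $Y_\nu^{} Y_\nu^\dagger$ vanish, while the $\tau$-$\mu$ entry does not, consistent with the name of the texture.

```python
import numpy as np

# Texture Y^(1)_taumu of eq.(texture); rows/columns ordered (e, mu, tau).
T = np.array([[0.0, 1.0, -1.0],
              [0.9, 1.0,  1.0],
              [1.0, 1.0,  1.0]])

# Y_nu Y_nu^dagger up to the overall factor |Y_nu|^2 (the texture is real).
YY = T @ T.T
print(YY)
# The e-mu and e-tau entries vanish, suppressing mu -> e gamma and
# tau -> e gamma, while the tau-mu entry stays non-zero.
```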
![Left: Contour map of the heavy neutrino correction $\Delta^{\rm BSM}_{}$ to the triple Higgs coupling $\lambda_{HHH}^{}$ (in percent) as a function of the neutrino parameters $M_R^{}$ (in TeV) and $|Y_\nu^{}|$ in the $\mu_X^{}$-parameterization. The Yukawa texture $Y_{\tau\mu}^{(1)}$ defined in eq.(\[eq:texture\]) is used and the off-shell Higgs boson momentum is fixed to $q^*_H = 2500$ GeV. The gray area is excluded by the constraints on the model. The green lines are the approximated contour lines using eq.(\[eq:approx\]) while the black lines correspond to the full calculation. Right: The heavy neutrino correction $\Delta^{\rm BSM}_{}$ (in percent) as a function of the Yukawa scaling parameter $|Y_\nu^{}|$, in the $\mu_X^{}$-parameterization with the texture $Y_{\tau\mu}^{(1)}$. We have fixed the other input parameters for the neutrino sector as $M_R^{} = 10$ TeV and $m_{n_1^{}}=0.01$ eV. The red (solid) curve corresponds to the full calculation, the blue (dashed) curve to the approximate result obtained with eq.(\[eq:approx\]).[]{data-label="fig:texture"}](./figures/hhh_scan_2500_muXparam_texture1.pdf){width="99.00000%"} ![Left: Contour map of the heavy neutrino correction $\Delta^{\rm BSM}_{}$ to the triple Higgs coupling $\lambda_{HHH}^{}$ (in percent) as a function of the neutrino parameters $M_R^{}$ (in TeV) and $|Y_\nu^{}|$ in the $\mu_X^{}$-parameterization. The Yukawa texture $Y_{\tau\mu}^{(1)}$ defined in eq.(\[eq:texture\]) is used and the off-shell Higgs boson momentum is fixed to $q^*_H = 2500$ GeV. The gray area is excluded by the constraints on the model. The green lines are the approximated contour lines using eq.(\[eq:approx\]) while the black lines correspond to the full calculation. Right: The heavy neutrino correction $\Delta^{\rm BSM}_{}$ (in percent) as a function of the Yukawa scaling parameter $|Y_\nu^{}|$, in the $\mu_X^{}$-parameterization with the texture $Y_{\tau\mu}^{(1)}$. 
We have fixed the other input parameters for the neutrino sector as $M_R^{} = 10$ TeV and $m_{n_1^{}}=0.01$ eV. The red (solid) curve corresponds to the full calculation, the blue (dashed) curve to the approximate result obtained with eq.(\[eq:approx\]).[]{data-label="fig:texture"}](./figures/hhh_1D_muXparam_texture1.pdf){width="90.00000%"} To get an insight into the behavior of the contour lines in Fig. \[fig:texture\] (left) we display a one-dimensional plot of the neutrino correction $\Delta^{\rm BSM}_{}$ at a given $M_R^{}=10$ TeV, as a function of the Yukawa scaling factor $|Y_\nu^{}|$, in Fig. \[fig:texture\] (right). The correction is negligible for low Yukawa scaling factors, then rises to a maximum of $\sim +60\%$ at $|Y_{\nu}^{}| \simeq 2.5$ before dropping rapidly and eventually becoming negative for large Yukawa scaling factors. From this behavior we devise the following approximate formula to reproduce $\Delta^{\rm BSM}$ at $M_R^{} > 3$ TeV, $$\begin{aligned} \Delta^{\rm BSM}_{\rm approx} = \frac{(1~\text{TeV})_{}^2}{M_R^2} \left( 8.45\, {\rm Tr} (Y_\nu^{} Y_\nu^\dagger Y_\nu^{} Y_\nu^\dagger) - 0.145\, {\rm Tr} (Y_\nu^{} Y_\nu^\dagger Y_\nu^{} Y_\nu^\dagger Y_\nu^{} Y_\nu^\dagger)\right). \label{eq:approx} \end{aligned}$$ The numerical coefficients are found to be universal in terms of the parameters of the model and depend only on the kinematics of the off-shell Higgs boson, for the case of the three textures of ref. [@Arganda:2014dta] as well as for the case of a diagonal texture. The dependence of the numerical coefficients on the kinematics of the off-shell Higgs boson is expected since, compared to the full calculation, they result from the loop functions, which depend on $q^*_H$, see Appendix \[app:HHH\]. Eq.(\[eq:approx\]) is expected to be valid for the whole class of textures introduced in ref. [@Arganda:2014dta]. 
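As an illustration, eq.(\[eq:approx\]) can be evaluated numerically for the texture $Y^{(1)}_{\tau\mu}$. The sketch below (numpy; the coefficients hold for $q^*_H = 2500$ GeV and $M_R > 3$ TeV) scans the Yukawa scaling factor at $M_R = 10$ TeV and locates a maximum of roughly $+55\%$ near $|Y_\nu| \approx 2.7$, consistent with the $\sim +60\%$ at $|Y_\nu| \simeq 2.5$ quoted from the full calculation.

```python
import numpy as np

# Texture of eq.(texture); Y_nu = |Y_nu| * T (the texture is real).
T = np.array([[0.0, 1.0, -1.0],
              [0.9, 1.0,  1.0],
              [1.0, 1.0,  1.0]])

def delta_bsm_approx(y_abs, M_R_TeV):
    """Approximate Delta_BSM (in percent) from eq.(approx), M_R > 3 TeV."""
    YY = (y_abs**2) * T @ T.T                # Y_nu Y_nu^dagger
    tr4 = np.trace(YY @ YY)                  # Tr[(Y Y^dag)^2]
    tr6 = np.trace(YY @ YY @ YY)             # Tr[(Y Y^dag)^3]
    return (1.0 / M_R_TeV**2) * (8.45 * tr4 - 0.145 * tr6)

y = np.linspace(0.1, 3.5, 500)
delta = np.array([delta_bsm_approx(yi, 10.0) for yi in y])
i = np.argmax(delta)
print(f"max Delta_BSM ~ {delta[i]:.0f}% at |Y_nu| ~ {y[i]:.2f}")
```

The quartic term dominates at small $|Y_\nu|$ and the negative sextic term takes over at large $|Y_\nu|$, reproducing the rise-and-fall of Fig. \[fig:texture\] (right).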
At a given $M_R^{}>3$ TeV, the approximate formula in eq.(\[eq:approx\]) is dominated at low $|Y_{\nu}^{}|$ by the positive quartic term and at high $|Y_{\nu}^{}|$ by the negative sextic term, which eventually overcomes the quartic growth. This reproduces the behavior seen in Fig. \[fig:texture\] (right), where the result of the fit is also displayed. We can also reproduce the contour lines for high $M_R^{}$ in Fig. \[fig:texture\] (left), as seen from the green contour lines coming from the fit, which agree to a very good extent with the full contour lines for $M_R> 3$ TeV. The approximate formula in eq.(\[eq:approx\]) implies that the best way to maximize the neutrino effects on the triple Higgs coupling is to maximize the ratio $\displaystyle \frac{{\rm Tr} (Y_\nu^{} Y_\nu^\dagger Y_\nu^{} Y_\nu^\dagger)}{{\rm Tr} (Y_\nu^{} Y_\nu^\dagger Y_\nu^{} Y_\nu^\dagger Y_\nu^{} Y_\nu^\dagger)}$. The Yukawa couplings being real and limited by perturbativity requirements, this leads to the choice of a diagonal texture, $Y_\nu^{} \propto {\rm I}_3^{}$. This will be considered in the next sub-section, but with the condition of degenerate heavy neutrinos relaxed. In this way the constraints on the non-unitarity of the matrix $\tilde{U}_{\rm PMNS}^{}$ are softened and the blue region of the Casas-Ibarra scan of Fig. \[fig:casas-ibarra\], excluded by EWPO as well as by LFV observables, moves down. Hierarchical heavy neutrinos ---------------------------- The analysis carried out in the previous sub-section has led us to consider a diagonal Yukawa matrix, $Y_\nu^{} = |Y_\nu^{}| {\rm I}_3^{}$. In order to reduce as much as possible the impact of the unitarity constraints on $\eta$ for the matrix $\tilde{U}_{\rm PMNS}^{}$, we choose hierarchical heavy neutrinos with $M_R^{} = {\rm diag}(M_{R^{}_1}^{},M_{R^{}_2}^{},M_{R^{}_3}^{})$ and we still work in the $\mu_X^{}$-parameterization. 
More specifically, for illustrative purposes within this class of parameters, we choose $$\begin{aligned} M_{R^{}_1}^{} = 1.51 M_R^{},\ \ M_{R^{}_2}^{} = 3.59 M_R^{},\ \ M_{R^{}_3}^{} = M_R^{}, \label{eq:hierarchical}\end{aligned}$$ with $M_R^{}$ being a rescaling factor that is varied between 200 GeV and 20 TeV. This ensures that all the diagonal constraints of eq.(\[EWPOconstraints\]) have the same impact on our study. This specific choice maximizes the individual contribution of each heavy neutrino, which in turn maximizes the one-loop correction to the triple Higgs coupling originating from the leptonic sector. Other choices for the heavy neutrino masses would only reduce the maximum allowed value of the deviation of the triple Higgs coupling from the SM. The result of the parameter scan in the $M_R^{}-|Y_\nu^{}|$ plane is displayed in Fig. \[fig:hierarchical\]. On the left-hand side, we display the map of $\Delta^{\rm BSM}_{}$ for an off-shell Higgs momentum $q_H^* = 500$ GeV. As expected from the analysis in the simplified model of ref. [@Baglio:2016ijw], the heavy neutrino corrections are negative, and they reach a minimum of $\sim -8\%$, close to the minimum that was obtained in the simplified model. The most interesting results are displayed on the right-hand side of Fig. \[fig:hierarchical\], for $q_H^*=2500$ GeV. The corrections can now reach a maximum of $\sim +30\%$, similar to what has been obtained in the simplified model. The corrections are generically bigger in the ISS model than in the simplified model, but the constraints are also stronger, reducing the heavy neutrino corrections back to the maximum obtained in the simplified model. This also confirms, in a realistic, renormalizable, low-scale seesaw model, that heavy Majorana neutrinos can induce sizable deviations of the triple Higgs coupling. As a further test of our approximate formula for the heavy neutrino corrections, the green lines in Fig. 
\[fig:hierarchical\] are the approximate contour lines obtained using eq.(\[eq:approx\]) but rescaled with a common factor $\gamma=0.51$. This rescaling was expected, since the heavy neutrino mass matrix is no longer proportional to the identity matrix. Once again we obtain a very good approximation for $M_R^{} > 3$ TeV, and in particular in the region allowed by the constraints. This approximate formula thus describes well the behavior of $\Delta^{\rm BSM}_{}$ in the allowed region of the parameter space, for $q^*_H =2500$ GeV. ![Contour map of the heavy neutrino correction $\Delta^{\rm BSM}_{}$ to the triple Higgs coupling $\lambda_{HHH}^{}$ (in percent) as a function of the neutrino parameters $M_R^{}$ (in TeV) and $|Y_\nu^{}|$ in the $\mu_X^{}$-parameterization, using a diagonal Yukawa texture and a hierarchical heavy neutrino mass matrix with the parameters defined in eq.(\[eq:hierarchical\]). The off-shell Higgs boson momentum is fixed to $q^*_H = 500$ GeV (left) and $q^*_H = 2500$ GeV (right). The gray area is excluded by the constraints on the model and the green lines on the right figure are the approximated contour lines using eq.(\[eq:approx\]) with a common rescaling factor $0.51$, while the black lines correspond to the full calculation.[]{data-label="fig:hierarchical"}](./figures/hhh_scan_500_diagonal.pdf){width="99.00000%"} ![Contour map of the heavy neutrino correction $\Delta^{\rm BSM}_{}$ to the triple Higgs coupling $\lambda_{HHH}^{}$ (in percent) as a function of the neutrino parameters $M_R^{}$ (in TeV) and $|Y_\nu^{}|$ in the $\mu_X^{}$-parameterization, using a diagonal Yukawa texture and a hierarchical heavy neutrino mass matrix with the parameters defined in eq.(\[eq:hierarchical\]). The off-shell Higgs boson momentum is fixed to $q^*_H = 500$ GeV (left) and $q^*_H = 2500$ GeV (right). 
The gray area is excluded by the constraints on the model and the green lines on the right figure are the approximated contour lines using eq.(\[eq:approx\]) with a common rescaling factor $0.51$, while the black lines correspond to the full calculation.[]{data-label="fig:hierarchical"}](./figures/hhh_scan_2500_diagonal.pdf){width="99.00000%"} We end this section with a comparison with the currently expected sensitivity to the triple Higgs coupling at the HL-LHC and at the future planned colliders. The sensitivities to the SM triple Higgs coupling are defined by the precision of its measurement, extracted from the Higgs pair production yields. As stated for example in refs. [@Baglio:2012np; @Frederix:2014hta], a precision of $\sim 50\%$ on the total cross section leads to a precision of $\sim 50\%$ on the SM triple Higgs coupling. The sensitivity for the HL-LHC follows from ref. [@CMS-PAS-FTR-15-002] (see also ref. [@Campana:2016cqm]), scaled by a factor of $1/\sqrt{2}$ to account for both ATLAS and CMS accumulated data, while the sensitivities for the future colliders follow from refs. [@Fujii:2015jha; @He:2015spf]. For the FCC-hh we do the same as for the HL-LHC to account for both ATLAS and CMS accumulated data[^1], as well as for the fact that the analysis in ref. [@He:2015spf] is only done for one search channel; we expect the sensitivity to improve when more search channels are taken into account. We display in Fig. \[fig:sensitivity\] the maximally allowed deviation $\Delta^{\rm BSM}_{}$ (in percent), as a solid black line, as a function of the heavy neutrino rescaling factor $M_R^{}$ (in TeV). 
This is compared to the sensitivities to the SM prediction for $\lambda_{HHH}^{}$ in the case of the HL-LHC with an integrated luminosity of 3 ab$^{-1}_{}$ (dashed black line); the ILC with different center-of-mass energies $\sqrt{s}$ and integrated luminosities $\mathcal{L}$, $\sqrt{s}=500$ GeV and $\mathcal{L} = 4$ ab$^{-1}_{}$ (double dotted blue line), $\sqrt{s}=1$ TeV and $\mathcal{L} =2$ ab$^{-1}_{}$ (dotted purple line), $\sqrt{s}=1$ TeV and $\mathcal{L} = 5$ ab$^{-1}_{}$ (long dash-dotted green line); and the case of the FCC-hh at 100 TeV and with $\mathcal{L} = 3$ ab$^{-1}_{}$ (dash-dotted red line). While the currently foreseen sensitivity of the HL-LHC would not allow one to resolve the effect of the heavy neutrinos, new analysis techniques or the other future colliders would clearly allow these heavy neutrino corrections to be tested. More specifically, using current experimental constraints, the ILC at a center-of-mass energy $\sqrt{s}=500$ GeV could probe heavy neutrino masses in the range $8.5 < M_R^{} < 10.5$ TeV; at 1 TeV with 5 ab$^{-1}_{}$ of data this extends to $5 < M_R^{} < 17.5$ TeV. The FCC-hh collider could extend the reach to the larger range $3.3 < M_R^{} < 20$ TeV. Indirect searches, and in particular EWPO, could probe heavy neutrinos with masses in the multi-TeV range, and future improvements are expected, especially at future $e^+e^-$ colliders [@Antusch:2015mia]. Improved constraints on the EWPO would tend to shift the left-hand part of the black curve in Fig. \[fig:sensitivity\] towards the right. This makes the triple Higgs coupling a new, viable and attractive observable to test low-scale seesaw mechanisms, complementary to improved EWPO measurements. 
![The maximally allowed deviation $\Delta^{\rm BSM}_{\rm max}$ (in percent) as a function of the heavy neutrino mass parameter $M_R^{}$ (in TeV), compared to the currently expected sensitivities for the HL-LHC and the future ILC (with different integrated luminosities and center-of-mass energies $\sqrt{s}$) and FCC-hh colliders. The solid black line displays $\Delta^{\rm BSM}_{\rm max}$, the dashed black line is the HL-LHC sensitivity at 3 ab$^{-1}_{}$, the double-dotted blue line is the ILC sensitivity at 4 ab$^{-1}_{}$ with $\sqrt{s}=500$ GeV, the purple dotted line is the ILC sensitivity at 2 ab$^{-1}_{}$ with $\sqrt{s}=1$ TeV, the green long dash-dotted line is the ILC sensitivity at 5 ab$^{-1}_{}$ with $\sqrt{s}=1$ TeV, and the red dash-dotted line is the FCC-hh sensitivity at 3 ab$^{-1}_{}$.[]{data-label="fig:sensitivity"}](./figures/hhh_iss_2500_diagonal_sensitivity.pdf) Conclusions {#sec:conc} =========== We have investigated in this article the one-loop effects of heavy neutrinos on the triple Higgs coupling in the framework of an inverse seesaw model, that is, a realistic, renormalizable model accounting for the masses and mixings of the light neutrinos. After having presented the model and its constraints, both theoretical and experimental, in Section \[sec:model\], we have given the technical details of the one-loop calculation in Section \[sec:calc\]. We have presented in Section \[sec:pheno\] our numerical investigation of the model. After having performed a scan in the Casas-Ibarra parameterization of the neutrino input parameters we have found that a $\mu_X^{}$-parameterization is more suitable to get the maximal effects on the triple Higgs coupling, and we have obtained a deviation as high as $\sim +30\%$ for the class of parameters in which the $3\times 3$ heavy neutrino mass matrix $M_R^{}$ is diagonal and hierarchical while the $3\times 3$ neutrino Yukawa texture is proportional to the identity matrix. 
This confirms our expectations coming from the simplified model analysis, and establishes the triple Higgs coupling as a viable, new observable to probe heavy neutrino mass regimes that are hard to probe otherwise, as this deviation is at the current limit of the expected sensitivity at the HL-LHC but clearly visible at the ILC and at the FCC-hh. Heavy neutrinos can also give rise to new diagrams that contribute to the complete $HH$ production cross section and need to be evaluated. We leave this for future projects. We warmly thank Nadège Bernard for her logistic support during the last stage of the project. C.W. heartily thanks the University of Tübingen for its hospitality during the final stages of this project. We also acknowledge the discussions with Juraj Streicher and Bhupal Dev as well as with the participants of the *Focus Meeting on Collider Phenomenology*, organized at the IBS CTPU, Daejeon, South Korea. J.B. acknowledges the support from the Institutional Strategy of the University of Tübingen (DFG, ZUK 63) and from the DFG Grant JA 1954/1. C.W. receives financial support from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ERC Grant NuMass Agreement No. 617143 and partial support from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreements No. 690575 and No. 674896. The Feynman diagrams of this article have been drawn with the program [JaxoDraw 2.0]{} [@Binosi:2003yf; @Binosi:2008ig]. Next order corrections in the seesaw expansion parameter to the $\mu_X$-parameterization {#app:NLO} ======================================================================================== Following the method of ref. [@Grimus:2000vj], we can diagonalize $M_{\mathrm{ISS}}$ by block to an arbitrary order in the seesaw expansion parameter $m_D M_R^{-1}$. 
This gives for the $3\times 3$ light neutrino mass matrix $$\begin{aligned} \label{NLOterms} \begin{split} M_{\mathrm{light}} =\, & m_D M_R^{T-1} \mu_X M_R^{-1} m_D^T - \frac{1}{2} m_D M_R^{T-1} M_R^{*-1} m_D^\dagger m_D M_R^{T-1} \mu_X M_R^{-1} m_D^T\\ & -\frac{1}{2} m_D M_R^{T-1} \mu_X M_R^{-1} m_D^T m_D^* M_R^{\dagger-1} M_R^{-1} m_D^T + \mathrm{o}\left( ||M_R^{-1} m_D||^4 \right)\times \mu_X \,, \end{split}\end{aligned}$$ in agreement with previous results [@Hettmansperger:2011bt]. This can be written in a symmetric form $$\begin{aligned} M_{\mathrm{light}} =\, & m_D M_R^{T-1} \left(\mathbf{1}-\frac{1}{2} M_R^{*-1} m_D^\dagger m_D M_R^{T-1} \right) \mu_X \left(\mathbf{1}-\frac{1}{2} M_R^{-1} m_D^T m_D^* M_R^{\dagger-1}\right) M_R^{-1} m_D^T \nonumber\\ & + \mathrm{o}\left( ||M_R^{-1} m_D||^4 \right)\times \mu_X \,. $$ If $m_D$ is invertible, we can then express $\mu_X$ as a function of $M_{\mathrm{light}}$ and the other blocks of $M_{\mathrm{ISS}}$, $$\label{nearlythere} \mu_X^{} \simeq \left(\mathbf{1}-\frac{1}{2} M_R^{*-1} m_D^\dagger m_D^{} M_R^{T-1} \right)^{-1}_{}\!\!\! M_R^T m_D^{-1} M_{\mathrm{light}}^{} m_D^{T-1} M_R^{} \left(\mathbf{1}-\frac{1}{2} M_R^{-1} m_D^T m_D^* M_R^{\dagger-1}\right)^{-1}_{}\,.$$ The light neutrino mass matrix is diagonalized by using the unitary PMNS matrix. Using eq.(\[mnulight\]) to rewrite $M_{\mathrm{light}}$ in eq.(\[nearlythere\]), we get a formula for the $\mu_X$-parameterization that includes the effect of sub-leading terms in the seesaw expansion, $$\begin{aligned} \label{app:muXparam} \begin{split} \mu_X \simeq & \left(\mathbf{1}-\frac{1}{2} M_R^{*-1} m_D^\dagger m_D M_R^{T-1} \right)^{-1} M_R^T m_D^{-1} U_{\rm PMNS}^* m_\nu U_{\rm PMNS}^\dagger m_D^{T-1} M_R\, \times\\ & \left(\mathbf{1}-\frac{1}{2} M_R^{-1} m_D^T m_D^* M_R^{\dagger-1}\right)^{-1}\,. \end{split}\end{aligned}$$ It is easy to see that if we were to consider only the leading order term in the seesaw expansion, we would recover eq.(45) from ref. 
[@Arganda:2014dta]. Interestingly, our results would not be modified by the addition of an extra mass term $\mu_R \overline{\nu_{R}^C} \nu_{R}$. The neutrino mass matrix would then be $$\label{ESSmatrix} M=\left( \begin{array}{c c c} 0 & m_D & 0\\ m_D^T & \mu_R & M_R \\ 0 & M_R^T & \mu_X \end{array}\right)\,,$$ where taking $||\mu_R||\ll ||m_D||,||M_R||$ corresponds to the inverse seesaw limit while taking $||\mu_R||\geq ||M_R||$ leads to the extended seesaw limit. In both cases, the next order corrections to $M_{\mathrm{light}}$ are given by eq.(\[NLOterms\]) in the limit where $||\mu_X M_R^{-1} \mu_R||\ll ||M_R||$. Thus, eq.(\[muXparam\]) would remain unchanged[^2]. Analytic expressions of the new ISS contributions {#app:HHH} ================================================= We give in this appendix all the analytic formulae of the new ISS contributions involved in the calculation of the renormalized one-loop triple Higgs coupling presented in Section \[sec:calc\]. The SM contributions, denoted with an SM label, can be found in ref. [@Denner:1991kt] and will not be reproduced in this appendix. Counter-terms ------------- By convention, all loop integrals in this sub-section are to be understood as their real part only. We use the conventions of [LoopTools 2.13]{} [@vanOldenborgh:1990yc; @Hahn:1998yk; @Hahn:2006qw] for the scalar integrals and the tensor coefficients. 
$$\begin{aligned} \delta M_W^2 = \, & \delta M_W^2\big|_{\rm SM} - \frac{\alpha}{4 \pi s_W^2} \sum_{i=1}^3 \sum_{j=1}^9 \left| B_{i j}\right|^2\bigg(A_0(m_{n_j}^2)+m_{\ell_i}^2 B_{0}(M_W^2,m_{\ell_i}^2,m_{n_j}^2)\nonumber\\ & -2 B_{00}(M_W^2,m_{\ell_i}^2,m_{n_j}^2)+M_W^2 B_{1}(M_W^2,m_{\ell_i}^2,m_{n_j}^2)\bigg)\end{aligned}$$ $$\begin{aligned} \delta M_Z^2 = \, & \delta M_Z^2\big|_{\rm SM}- \frac{3 \alpha}{48\pi c_W^2 s_W^2} \sum_{j=1}^9 \sum_{k=1}^9 \bigg( \Big(C_{j k} C_{k j}^*+C_{j k}^* C_{k j}\Big) m_{n_j} m_{n_k} B_{0}(M_Z^2,m_{n_j}^2,m_{n_k}^2)\nonumber\\ & +\Big(C_{j k} C_{k j}+C_{j k}^* C_{k j}^*\Big) \Big(A_{0}(m_{n_k}^2)+m_{n_j}^2 B_{0}(M_Z^2,m_{n_j}^2,m_{n_k}^2)-2 B_{00}(M_Z^2,m_{n_j}^2,m_{n_k}^2) \nonumber\\ & +M_Z^2 B_{1}(M_Z^2,m_{n_j}^2,m_{n_k}^2)\Big)\bigg)\end{aligned}$$ $$\begin{aligned} \delta t_H =\, & \delta t_H\Big|_{\rm SM} -\frac{\sqrt{2\pi\alpha}}{8\pi^2 M_W s_W} \sum_{j=1}^9 m_{n_j}^2 \operatorname{Re}(C_{ j j}) A_{0}(m_{n_j}^2)\end{aligned}$$ One-loop un-renormalized self energy $\Sigma_{HH}^{}$ and vertex $\lambda_{HHH}^{(1)}$ -------------------------------------------------------------------------------------- The self-energy enters in the calculation of the field renormalization as well as the Higgs mass $M_H^{}$ counter-term. In the one-loop un-renormalized triple Higgs coupling, $q$ is the momentum of the off-shell Higgs boson splitting into two Higgs bosons, $H^*_{}(q)\to H H$. 
$$\begin{aligned} \Sigma_{HH}(p^2) = \, & \Sigma_{HH}^{\rm SM}(p^2) - \frac{\alpha }{16\pi M_W^2 s_W^2} \sum_{j=1}^9 \sum_{k=1}^9 \bigg( \Big(C_{j k} C_{k j}+C_{j k}^* C_{k j}^*\Big) m_{n_j}^2 \Big(A_{0}(m_{n_k}^2)\nonumber\\ & +p^2 B_{1}(p^2,m_{n_j}^2,m_{n_k}^2)+m_{n_j}^2 B_{0}(p^2,m_{n_j}^2,m_{n_k}^2) \Big) +\Big(C_{j k} C_{k j}+C_{j k}^* C_{k j}^*\Big) m_{n_k}^2 \Big(A_{0}(m_{n_k}^2)\nonumber\\ & +p^2 B_{1}(p^2,m_{n_j}^2,m_{n_k}^2) +3 m_{n_j}^2 B_{0}(p^2,m_{n_j}^2,m_{n_k}^2) \Big) + \Big(C_{k j} C_{j k}^*+C_{j k} C_{k j}^*\Big) m_{n_j} m_{n_k}\nonumber\\ & \Big(2 A_{0}(m_{n_k}^2) +2 p^2 B_{1}(p^2,m_{n_j}^2,m_{n_k}^2)+3 m_{n_j}^2 B_{0}(p^2,m_{n_j}^2,m_{n_k}^2) \Big) \nonumber\\ & +\Big(C_{k j} C_{j k}^*+C_{j k} C_{k j}^*\Big) m_{n_j} m_{n_k}^3 B_{0}(p^2,m_{n_j}^2,m_{n_k}^2)\bigg)\end{aligned}$$ $$\begin{aligned} \allowdisplaybreaks \lambda^{(1)}_{HHH}(q) =\, & \lambda^{(1), {\rm SM}}_{HHH}(q) -\frac{\alpha \sqrt{4\pi\alpha}}{32 \pi M_W^3 s_W^3} \sum _{j=1}^9 \sum _{k=1}^9 \sum _{l=1}^9 \bigg[ \Big(C_{j k}^{} C_{k l}^{} C_{l j}^{}+C_{j k}^* C_{k l}^* C_{l j}^*\Big)\nonumber\\ & \bigg(m_{n_j}^2 m_{n_k}^2 \left(4 B_0^{} + 4 M_H^2 C_2^{}+ q^2_{} (C_0^{} + 4 C_1^{} + C_2^{}) + 4 m_{n_j}^2 C_0^{}\right) \nonumber\\ & + m_{n_l}^2 m_{n_k}^2 \Big(4 B_0^{} + 2 M_H^2 C_2^{} + q^2_{} (3 C_1^{} + C_2^{}) \Big) + m_{n_l}^2 m_{n_j}^2 \Big( 4 B_0^{} + \nonumber\\ & 4 (m_{n_j}^2 + 2 m_{n_k}^2) C_0^{} + 2 M_H^2 C_2^{} + q^2_{} \left(C_0^{} + 5 C_1^{} + 2 C_2^{}\right) \Big) \bigg)+\nonumber\\ & m_{n_j} m_{n_l} \Big(C_{j k}^{} C_{k l}^{} C_{j l}^{}+C_{j k}^* C_{k l}^* C_{j l}^*\Big) \bigg( m_{n_l}^2 \Big(2 B_0^{} + q^2_{} \left(2 C_1^{} + C_2^{} \right)\Big) +\nonumber\\ & m_{n_k}^2 \Big(8 B_0^{} + 6 M_H^2 C_2^{} + q^2_{} (C_0^{} + 7 C_1^{} + 2 C_2^{}) + 2 m_{n_l}^2 C_0^{} \Big) + \nonumber\\ & m_{n_j}^2 \Big(2 B_0^{} + 2 M_H^2 C_2^{} + q^2_{} (C_0^{} + 3 C_1^{} + C_2^{}) + 2 (5 m_{n_k}^2 + m_{n_j}^2 + m_{n_l}^2) C_0^{} \Big)\bigg)+\nonumber\\ & m_{n_k} m_{n_l}\Big(C_{j k}^{} C_{l 
k}^{} C_{l j}^{}+C_{j k}^* C_{l k}^* C_{l j}^*\Big) \bigg( m_{n_k}^2 \Big(2 B_0^{} + 2 M_H^2 C_2^{} + q^2_{} C_1^{} \Big) + \nonumber\\ & m_{n_l}^2 \Big(2 B_0^{} + q^2_{} (2 C_1^{} + C_2^{} ) \Big) + m_{n_j}^2 \Big(8 B_0^{} + 6 M_H^2 C_2^{} + q^2_{} (2 C_0^{} + 9 C_1^{} + 3 C_2^{}) +\nonumber\\ & 4 \big(2 m_{n_j}^2 + m_{n_k}^2 + m_{n_l}^2\big) C_0^{} \Big)\bigg)+\nonumber\\ & m_{n_j} m_{n_k}\Big(C_{j k}^{} C_{l k}^{} C_{j l}^{}+C_{j k}^* C_{lk}^* C_{j l}^*\Big) \bigg(m_{n_l}^2 \Big(8 B_0^{} + 4 M_H^2 C_2^{} + q^2_{} (C_0^{} + 8 C_1^{} + 3 C_2^{}) \Big) + \nonumber\\ & m_{n_k}^2 \Big(2 B_0^{} + 2 M_H^2 C_2^{} + q^2_{} C_1^{} + 2 m_{n_l}^2 C_0^{}\Big) + m_{n_j}^2 \Big(2 B_0^{} + 2 M_H^2 C_2^{} + \nonumber\\ & q^2_{} (C_0^{} + 3 C_1^{} + C_2^{}) + 2 C_0^{} (m_{n_j}^2 + m_{n_k}^2 + 5 m_{n_l}^2)\Big)\bigg)\bigg]\end{aligned}$$ In the expression of the un-renormalized vertex $\lambda_{HHH}^{(1)}$ we have used the following abbreviations, $$\begin{aligned} B_0^{} & \equiv B_0^{}\left(M_H^2,m_{n_k}^2,m_{n_l}^2\right)\,,\nonumber\\ C_0^{} & \equiv C_0^{}\left(q_{}^2,M_H^2,M_H^2, m_{n_j}^2,m_{n_k}^2,m_{n_l}^2\right)\,,\nonumber\\ C_{1/2}^{} & \equiv C_{1/2}^{}\left(q_{}^2,M_H^2,M_H^2, m_{n_j}^2,m_{n_k}^2,m_{n_l}^2\right)\, .\end{aligned}$$ [^1]: It shall be mentioned that other analyses give more conservative prospects for the FCC-hh as well as for the HL-LHC, see for example ref. [@Azatov:2015oxa]. However, new techniques in the meantime can be developed to help increasing the sensitivity, as well as a better analysis of possible search channels, see for example the case of the $4b$ final state [@Behr:2015oqq]. [^2]: This conclusion is limited to the next-order term in the seesaw expansion. In general, one-loop corrections proportional to $\mu_R$ should also be included unless $\mu_R\ll\mu_X$ (see ref. [@Dev:2012sg] and references therein).
--- abstract: 'A shear flow of liquid metal (Galinstan) is driven in an annular channel by counter-rotating traveling magnetic fields imposed at the endcaps. When the traveling velocities are large, the flow is turbulent and its azimuthal component displays random reversals. Power spectra of the velocity field exhibit a $1/f^\alpha$ power law over several decades and are related to power-law probability distributions $P(\tau)\sim \tau^{-\beta}$ of the waiting times between successive reversals. This $1/f$ type spectrum is observed only when the Reynolds number is large enough. In addition, the exponents $\alpha$ and $\beta$ are controlled by the symmetry of the system: a continuous transition between two different types of Flicker noise is observed as the equatorial symmetry of the flow is broken, in agreement with theoretical predictions.' author: - 'M. Pereira, C. Gissinger, S. Fauve' bibliography: - 'biblio.bib' title: '1/f noise and long-term memory of coherent structures in a turbulent shear flow' --- A puzzling problem in physics is the ubiquity of ‘1/f’ noise or ‘Flicker’ noise, i.e. the existence of a wide range of frequencies over which the low frequency power spectrum $S(f)$ of a physical quantity follows a power law $S(f)\sim f^{-\alpha}$, with $\alpha$ close to 1 (or more generally $0 < \alpha < 2$). Such behavior is observed in a broad variety of physical systems, ranging from voltage and current fluctuations in vacuum tubes or transistors [@Hooge1981; @Dutta1981] to astrophysical magnetic fields [@Matthaeus1986], and including biological systems [@Gilden1995], climate [@Fraedrich2003] and turbulent flows [@Dmitruk2007; @Dmitruk2014; @Ravelet2008; @Costa2014], to quote a few. Surprisingly, this ubiquity of $1/f$ noise does not seem to rely on a single explanation: although many interesting models have been proposed during the last 80 years, there is currently no universal mechanism for the generation of $1/f$ fluctuations. 
Different levels of theoretical description of $1/f$ noise involve the existence of a continuous distribution of relaxation times in the system [@Bernamont1937; @Vanderziel1950], fractional Brownian motion [@Mandelbrot1968], and low dimensional dynamical systems close to the transition to chaos [@Manneville1980; @Procaccia1983; @Geisel1987]. These systems often display an intermittent regime with bursts occurring after random waiting times $\tau$. For this type of point process, it has been shown that an $f^{-\alpha}$ spectrum is related to a power law distribution $P(\tau) \propto \tau^{-\beta}$ with some relation between $\alpha$ and $\beta$ that depends on the symmetry of the signal [@Lowen1993]. Although most of the early experimental observations of $1/f^{\alpha}$ noise do not display such discrete events in their time recordings, switching events have been observed in small electronic systems [@Ralls1984] and more recently in blinking quantum dots [@Kuno2000; @Shimizu2001; @Pelton2004]. These waiting times, distributed as a power law, reflect the scale-free nature of the statistics, and are associated with the durations spent by the system in two different states (bright or dark state in the case of quantum dots). More recently, statistical analysis of quasi-bidimensional turbulence of an electromagnetically forced flow exhibited similar dynamics, in which a large scale circulation driven by a turbulent flow randomly reverses [@Herault2015]. In this experiment, both a $1/f$ power spectrum and power-law inter-event time probability distribution functions were observed. These results indicate that coherent structures generated in turbulent flows play a crucial role in the occurrence of $1/f$ noise. On the other hand, it is known that such large scale coherent structures can exhibit very different dynamics depending on the level of turbulent fluctuations or the symmetry properties of the system. Whether these properties could affect $1/f$ noise is an open question. 
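The link between power-law waiting times and a low-frequency power-law spectrum can be illustrated with a toy random-telegraph signal (a sketch with illustrative parameters, not the analysis of [@Lowen1993] or of the experiment): sojourn times are drawn from $P(\tau)\propto\tau^{-\beta}$ by inverse-CDF sampling, the signal alternates between two symmetric states, and a power-law exponent is fitted to the low-frequency part of its spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw waiting times with P(tau) ~ tau^(-beta) for tau >= tau_min via
# inverse-CDF (Pareto) sampling; beta and tau_min are illustrative choices.
beta, tau_min, n_events = 1.7, 1.0, 50_000
u = rng.random(n_events)
tau = tau_min * u ** (-1.0 / (beta - 1.0))  # survival function ~ tau^-(beta-1)
tau = np.minimum(tau, 1e4)                  # truncate the heavy tail

# Build a +/-1 telegraph signal on a uniform time grid.
t_switch = np.cumsum(tau)
grid = np.arange(0.0, t_switch[-1], 1.0)
state = 1 - 2 * (np.searchsorted(t_switch, grid) % 2)  # alternating +1/-1

# Periodogram and a power-law fit S(f) ~ f^(-alpha) at low frequency.
spec = np.abs(np.fft.rfft(state)) ** 2
freq = np.fft.rfftfreq(len(state), d=1.0)
keep = (freq > 1e-4) & (freq < 1e-2) & (spec > 0)
alpha = -np.polyfit(np.log(freq[keep]), np.log(spec[keep]), 1)[0]
print(f"fitted low-frequency spectral exponent alpha ~ {alpha:.2f}")
```

The fitted exponent is of order one in the scaling range, illustrating how scale-free switching statistics generate Flicker-type noise; the precise $\alpha$-$\beta$ relation depends on the symmetry of the signal, as discussed in [@Lowen1993].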
By carefully tuning the parameters of the experiment reported here, both the level of turbulence and the symmetry between two states can be independently controlled, allowing for such an investigation: we show how the occurrence of $1/f$ fluctuations is directly related to the power-law PDF of waiting times, but critically depends on the level of turbulence generated in the flow. In addition, the symmetry of the forcing plays a crucial role: different relations are satisfied by $\alpha$ and $\beta$ depending on whether the two opposite states are symmetrical or not. In particular, a continuous transition between the different regimes predicted in [@Lowen1993] can be obtained as a function of the skewness of the velocity PDFs, ultimately controlled by the symmetry of the external driving.\ Fig.\[setup\] shows a schematic picture of the experiment: an annular channel made of Polyvinyl Chloride (PVC), with inner radius $r_i=65$ mm, outer radius $r_o=98$ mm, and vertical height $H=47$ mm, is filled with liquid Galinstan (GaInSn), a eutectic alloy which is liquid at ambient temperature, with kinematic viscosity $\nu=0.37 \cdot10^{-6}$ $m^{2}.s^{-1}$, density $\rho=6.44\cdot10^{3}$ $kg.m^{-3}$ and electrical conductivity $\sigma=3.46\cdot10^{6}$ $S.m^{-1}$. ![Schematic view of the experiment. An annular channel made of PVC of mean radius $R=83$ mm and height $H=47$ mm is filled with a liquid metal (Galinstan). The flow is driven by the Lorentz force due to a traveling magnetic field (TMF) on each side of the experiment, created by $16$ Neodymium magnets placed on independently rotating discs.[]{data-label="setup"}](fig/setup_vkm.png){width="8"} At a distance $h=10$ mm above and below the channel are located two rotating discs, each containing $16$ Neodymium magnets disposed with a regular spacing along a circle of radius $R=83$ mm. These magnets are cylinders of diameter $d_m=20$ mm and height $h_m=10$ mm, generating a magnetic field $B_m^0=0.45$ T at their surface.
They are arranged such that two adjacent magnets, separated by a distance $\delta_m=2\pi R/16=32.5$ mm, are oriented with opposite polarity. The rotating discs therefore generate on each side of the channel a spatially periodic magnetic field traveling in the azimuthal direction with an angular frequency $\Omega_i=2\pi f_i/16$ and a wavenumber $k=\pi/\delta_m$, where $f_i$ is the rotation frequency of disc $i$. The flow is electromagnetically driven by the Lorentz force due to these traveling magnetic fields (TMF) and their related induced electrical currents. The frequencies of the discs $f_1$ and $f_2$ can be changed independently. This leads to the definition of $4$ dimensionless control parameters for the experiment: $F=(f_1-f_2)/(f_1+f_2)$ controls the asymmetry of the forcing provided by the top and bottom discs, and $Re=[(f_1+f_2)/2]H^2/\nu$ is the Reynolds number based on the mean frequency of the discs. In addition, one can define the magnetic Prandtl number $Pm=\nu\mu_0\sigma$, which is of order $Pm\sim 10^{-6}$ for Galinstan, and the dimensionless magnetic field of the magnets, represented by the Hartmann number $Ha^2=B_0^2 \sigma H /(k \rho \nu)$, where $B_0$ is the magnetic field measured in the midplane of the channel and $k$ is the wavenumber. For the experiments reported here, $Ha=90$. The velocity field is measured through Ultrasound Doppler Velocimetry (UDV) using three probes located in three different horizontal planes: $z=0$ (midplane) and $z=\pm 11$ mm. When the two traveling magnetic fields imposed at the top and bottom endcaps rotate in the same direction, a strong azimuthal Lorentz force drives the flow in the same direction as the discs, and the device therefore acts as an induction pump. In that case, the velocity of the flow increases with both the magnitude of the applied field and the rotation rate of the discs.
Note however that the fluid velocity is always smaller than the speed of the discs, and can be much smaller if the magnetic field is expelled outside the channel at large magnetic Reynolds number, $Rm=Re\, Pm$ [@Gissinger2016; @Rodriguez2016]. ![Most probable velocities measured in the midplane as a function of $F$, for $Re=7.1\cdot10^{3}$. The two vertical dashed lines indicate the region of bistability between positive and negative flow velocity. Upper-left inset: time series of the velocity in the bistable regime. Lower-right inset: bimodality of the PDF related to the bistability of the flow. []{data-label="cycle"}](fig/cycle2.png){width="9.5cm"} We focus here on the configuration in which the two discs are counter-rotating. A strong shear flow develops in the channel, due to the opposing Lorentz forces generated at the top and bottom boundaries by the corresponding traveling magnetic fields. Fig. \[cycle\] shows the bifurcation of the most probable velocities measured by UDV in the midplane of the channel, as a function of the asymmetry parameter $F$. Red squares (respectively blue circles) indicate positive (resp. negative) mean velocity, meaning that the fluid in the midplane moves in the same direction as the upper (resp. lower) disc. Close to $F=0$, a bistability between these two states is observed. The upper-left inset shows a typical time series of the velocity field in this regime (here for $F=0$): the instantaneous velocity is strongly fluctuating and exhibits chaotic reversals of its polarity, the fluid following alternately one disc or the other. As a consequence, the corresponding probability density function (PDF) shows a bimodal structure (lower-right inset), characterized by two maxima in the PDF. The two vertical dashed lines delimit the region for which such a bistability between positive and negative velocity is observed (characterized by bimodal PDFs).
Note that the bifurcation diagram should be symmetrical with respect to $F=0$ exactly, for which neither of the two states is favored by the forcing. In practice, the curve is slightly shifted to positive values (symmetrical PDFs obtained for $F\sim 0.05$ for this Reynolds number), which may be due to some imperfections in the experimental setup. Similar reversals of the velocity field have been described in von Kármán swirling flows, in which the shear layer generated by two counter-rotating bladed discs can undergo chaotic jumps from the midplane [@Ravelet2008]. UDV measurements above and below the midplane indicate that a similar large-scale dynamics of the central shear layer occurs in the present experiment. ![Frequency power spectra $S(f)$ of the velocity $V(t)$ for different Reynolds numbers ($Re=4.7\cdot 10^{3}$ ; $Re=1.6\cdot 10^{4}$ and $Re=3.6\cdot 10^{4}$ from bottom to top). For clarity, the spectra have been multiplied by 1, 10 and 1000. Note the $-\frac{5}{3}$ slope at high frequency, and the occurrence of $1/f^\alpha$ noise at low frequency. Inset: exponent $\alpha$ as a function of $Re$.[]{data-label="spectre"}](fig/spectre_chris2.png){width="9cm"} We first study the evolution of the statistical properties of the velocity field in the bistable regime, for $F\sim 0$. In Fig.\[spectre\], we report the frequency power spectra extracted from time series $V(t)$ measured by UDV in the midplane, for different values of the Reynolds number. First note that all power spectra show a $f^{-\frac{5}{3}}$ direct cascade of energy from the injection scale $f_0\sim \frac{U}{H}$, where $U$ is the mean velocity of the flow (measured close to each disc) and $H$ is the height of the channel. We focus here on the behavior of the spectra at frequencies below the injection scale $f_0$.
We first observe that they strongly depend on the Reynolds number: at the lowest $Re$, the spectrum is flat for $f \ll f_0$, but as $Re$ is increased beyond a critical value $Re_c\sim 10^4$, the system shows a build-up of energy toward low frequencies, such that $1/f^{\alpha}$ noise is observed for large Reynolds numbers. The inset of Fig.\[spectre\] shows the dependence of $\alpha$ on $Re$, and suggests that it rapidly converges to values slightly larger than $\alpha=1$ in the limit of large $Re$. We emphasize that the spectra below the injection scale $f_0$ are not related to any turbulent cascade process, since the frequencies are too low to correspond to any spatial scale within the fluid container. In particular, the $1/f$ spectra observed here in the bulk flow are not similar to the $1/f$ spectra observed in turbulent boundary layers, which trace back to $1/k$ spectra through the Taylor hypothesis [@Perry1986]. ![Distribution of the waiting time $P(\tau)$ between two successive reversals of the flow for $Re=7.1\cdot 10^{4}$. For sufficiently large $Re$, the distribution follows a power law $\tau^{-\beta}$. Inset: $\beta$ as a function of $Re$.[]{data-label="dist"}](fig/dist.png){width="8cm"} Since these results have been obtained for $F\sim 0$, all the power spectra shown in Fig.\[spectre\] are related to time series exhibiting chaotic reversals between two symmetrical states. These random reversals can be characterized by the distribution $P(\tau)$ of the waiting time (WT) $\tau$ between two successive transitions, as shown in Fig. \[dist\] for $Re=7.1\cdot10^4$. We observe that the waiting times are distributed according to a power law $P(\tau)\sim \tau^{-\beta}$, in contrast to the exponential distribution generally observed in the case of a memoryless system. The presence of such a power-law PDF therefore suggests a more complex, non-Poissonian physics underlying the occurrence of polarity changes.
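In practice, the exponent $\beta$ of such a heavy-tailed distribution is better estimated by a maximum-likelihood (Hill-type) fit than by fitting a binned histogram. A minimal sketch, with synthetic Pareto samples standing in for the measured waiting times (the function name and parameter values are ours, for illustration only):

```python
import numpy as np

def fit_powerlaw_exponent(taus, tau_min):
    """MLE for beta in P(tau) ~ tau^(-beta), tau >= tau_min.

    For the density p(tau) = (beta-1) tau_min^(beta-1) tau^(-beta),
    the maximum-likelihood estimate is beta = 1 + n / sum(log(tau/tau_min)).
    """
    taus = np.asarray(taus, dtype=float)
    taus = taus[taus >= tau_min]
    n = len(taus)
    return 1.0 + n / np.log(taus / tau_min).sum()

# Synthetic waiting times with true beta = 2 (Pareto shape a = beta - 1 = 1)
rng = np.random.default_rng(1)
taus = 1.0 * (1.0 + rng.pareto(1.0, size=100_000))
beta_hat = fit_powerlaw_exponent(taus, tau_min=1.0)
```

With $10^5$ samples the estimate falls within a percent or so of the true exponent; on experimental data the main practical difficulty is choosing $\tau_{min}$ below which the power law no longer holds.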
Note that, similarly to $\alpha$, the exponent of the power law depends on $Re$, and slowly tends to $\beta=2$ as $Re$ is increased to large values. ![Probability density function of the velocity field for different values of the asymmetry $F$ of the forcing. Note the transition from a Gaussian distribution at large $|F|$ to bimodal behavior for $F\sim 0$.[]{data-label="PDF_F"}](fig/PDF.png){width="9cm"} ![Frequency power spectra $S(f)$ of the velocity $V(t)$ for different values of the asymmetry parameter $F$. For clarity, the spectra have been multiplied by $10$ for each increment of $F$. Inset shows the exponent $\alpha$ as a function of $F$.[]{data-label="spec_F"}](fig/spectre_final.png){width="9cm"} The exponents of the power spectra and of the WT distribution also strongly depend on the asymmetry of the magnetic forcing, controlled by the value of $F$. In Fig. \[PDF\_F\], we report the probability density function (PDF) of the velocity field in the midplane, for various values of $F$ and a fixed value of the Reynolds number $Re=6\cdot10^4$. When $F$ has large negative or positive values, the system is in a non-reversing regime with negative (respectively positive) mean velocity, and the fluid follows the bottom (respectively the top) disc with a Gaussian distribution of the velocity fluctuations. For values of $F$ close to $0$, the distribution is either bimodal and roughly symmetrical with respect to zero (for instance $F=0.09$), or asymmetrical with a non-Gaussian tail (for instance $F=0.14$). Interestingly, this asymmetry in the forcing clearly controls the value of the exponent of the power spectrum at low frequency, as shown by Fig.\[spec\_F\]. For strongly asymmetric forcing ($F=-0.21$ and $F=-0.14$), the spectrum is flat, with $f^0$ behavior over several decades for $f<f_0$. As the flow starts to randomly explore the other polarity, $\alpha$ increases, even when the corresponding PDF is not bimodal (see $F=0$).
The exponent $\alpha$ reaches its maximum value $\alpha=1.1$ for symmetrical PDFs of the velocity, and then decreases again with $F$ as the flow comes back to a non-reversing state. In fact, it has been shown [@Lowen1993; @Niemann2013] that in the presence of a heavy-tailed distribution similar to the one shown in Fig.\[dist\], the exponent $\alpha$ of the power spectrum and the exponent $\beta$ of the WT distribution are related: in the case of a symmetric process (meaning that the two states have similar transition probabilities), one expects the relation $\alpha+\beta=3$, whereas $\beta-\alpha=1$ is predicted for a non-symmetric process (e.g. for random bursts). It has been shown in [@Herault2015b] that one prediction or the other can be observed in different experiments: for instance, pressure fluctuations in 3D turbulence [@Abry1994] follow the $\beta-\alpha=1$ scaling, whereas $\alpha+\beta=3$ is observed for random reversals of a large-scale flow generated by Kolmogorov forcing [@Herault2015b]. We show here that both regimes can be observed in the same experiment and for the same measured quantity, depending only on the asymmetry parameter $F$ and the Reynolds number: Fig. \[exponents\] reports most of our experimental runs (obtained for various values of $F$ and $Re$) in the parameter space $\{\alpha,\beta\}$, in which the dashed line indicates the regime $\beta-\alpha=1$ and the solid line indicates $\alpha+\beta=3$. For each point, we have computed the skewness of the PDFs of the velocity $\theta=\langle [(V(t)-\mu)/\sigma]^3 \rangle$, where $\sigma$ and $\mu$ are respectively the standard deviation and the mean. When the probability density function of the flow exhibits a roughly bimodal distribution ($\theta<0.1$, blue circles), most of the points tend to collapse on the line $\alpha+\beta=3$, while asymmetrical reversals ($\theta>0.1$, red squares) lie along $\beta-\alpha=1$, valid for bursting processes only.
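This classification can be summarized in a few lines: compute the skewness $\theta$ of the velocity signal, and apply the symmetric relation $\alpha+\beta=3$ when $|\theta|$ is small, the bursting relation $\beta-\alpha=1$ otherwise. A schematic sketch (the threshold $\theta=0.1$ is taken from the text; the helper names are hypothetical):

```python
import numpy as np

def skewness(v):
    """theta = <[(V - mu)/sigma]^3>."""
    v = np.asarray(v, dtype=float)
    mu, sigma = v.mean(), v.std()
    return np.mean(((v - mu) / sigma) ** 3)

def predicted_alpha(beta, theta, threshold=0.1):
    """Spectral exponent expected from the waiting-time exponent.

    Symmetric reversals (|theta| < threshold): alpha + beta = 3.
    Asymmetric bursts   (|theta| >= threshold): beta - alpha = 1.
    """
    return 3.0 - beta if abs(theta) < threshold else beta - 1.0

# A symmetric two-state signal has theta = 0, hence alpha = 3 - beta.
v_sym = np.concatenate([np.ones(500), -np.ones(500)])
theta = skewness(v_sym)
alpha = predicted_alpha(beta=2.0, theta=theta)
```

Note that both relations give $\alpha=1$ at $\beta=2$: this is the intersection point of the two regimes, which is relevant to the special role of $\alpha=1$ discussed in the conclusion.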
While these results show that the asymmetry of the forcing controls the type of $1/f$ noise (i.e. the value of the sum or the difference of the exponents) which is observed, exactly what sets the values of the exponents remains unclear. It is also important to note that Fig.\[exponents\] reports results obtained only for sufficiently large Reynolds numbers (in practice $Re\ge5\cdot 10^4$) and $F$ not too large (keeping only non-Gaussian distributions). ![$\alpha$ as a function of $\beta$ for various values of $F$ and $Re$. The dashed line indicates the regime $\beta-\alpha=1$, while the solid line corresponds to $\beta+\alpha=3$. The skewness $\theta$ of the PDFs determines in which regime the system lies ($\theta <0.1$ for blue circles, $\theta>0.1$ for red squares).[]{data-label="exponents"}](fig/final.png){width="7.5cm"} The problem we studied experimentally is related to the general question of the low-frequency behavior of the turbulent velocity spectrum. As seen in Fig. \[spectre\], the power increases at low frequency as the Reynolds number is increased. This could be somewhat surprising, since the phenomenology of three-dimensional turbulence predicts an increase of the inertial range toward the small spatial scales. The increase of power at low frequency results from the instability of the shear layer that develops on the turbulent background. We therefore showed that the low-frequency behavior of turbulent flows is strongly related to the dynamics of coherent structures, here the shear layer, and confirmed observations made in several other flow configurations [@Herault2015b]. As shown in another context, large-scale instabilities of turbulent flows can be modeled by keeping only large-scale modes that obey the truncated Euler equation (TEE) [@Shukla2016]. Numerical simulations of the TEE have displayed $1/f$ spectra [@Shukla2018]. Numerical simulations of this type of model are presently being studied in the case of a turbulent shear layer.
We emphasize that this process for generating $1/f$ noise involves a large number of degrees of freedom, with many triads in nonlinear interaction, and therefore strongly differs from low-dimensional dissipative dynamical systems. Our experimental results also show that although the power-law exponents $\alpha$ and $\beta$ only slightly change with $Re$, they strongly depend on the asymmetry parameter. This second observation is interesting, because the continuous transition from $\alpha+\beta=3$ to $\beta-\alpha=1$ generated as the equatorial symmetry of the flow is broken shows that both regimes can be observed within the same system. In other words, some features of $1/f$ noise can be directly related to the asymmetry of the system. It would be interesting to see if this relation can be used to understand some systems from the characteristics of their $1/f$ fluctuations. For instance, the study of the exponents $\alpha$ and $\beta$ from $1/f$ fluctuations of the solar wind or the luminosity of some stars may help probe their symmetry properties, which are less accessible from observations. From a fundamental viewpoint, it could be argued that we have replaced the problem of finding a mechanism for $1/f$ noise by the problem of providing an explanation for the power-law PDF of waiting times. However, this could be a useful step, since a generic mechanism has been proposed for the latter [@Montroll1982]. Finally, we can understand the particular role played by the value $\alpha = 1$ that is common to the symmetric and asymmetric cases (see Fig.\[exponents\]): $\alpha=1$, $\beta=2$ is the unique solution of both relations $\alpha+\beta=3$ and $\beta-\alpha=1$, i.e. the intersection point of the two regimes. With symmetric forcing, we expect $\alpha = 1$ to be selected by small asymmetric perturbations. F. N. Hooge, T. G. M. Kleinpenning and L. K. J. Vandamme, Rep. Prog. Phys. [**44**]{}, 479 (1981) P. Dutta and P. M. Horn, Rev. Mod. Phys. [**53**]{}, 497 (1981) W. H. Matthaeus and M. L. Goldstein, Phys. Rev. Lett. [**57**]{}, 495 (1986) D. L. Gilden, T. Thornton, M. W. Mallon, Science [**267**]{}, 1837-1839 (1995). K.
Fraedrich and R. Blender, Phys. Rev. Lett. [**90**]{}, 108501 (2003) P. Dmitruk and W. H. Matthaeus, Phys. Rev. E [**76**]{}, 036305 (2007). P. Dmitruk et al., Phys. Rev. E [**90**]{}, 043010 (2014). F. Ravelet, A. Chiffaudel and F. Daviaud, J. Fluid Mech. [**601**]{}, 339 (2008) A. Costa et al., Phys. Rev. Lett. [**113**]{}, 108501 (2014) J. Bernamont, Proc. Physical Soc. [**49**]{}, 138 (1937) A. van der Ziel, Physica [**16**]{}, 359 (1950) B. B. Mandelbrot and J. W. van Ness, SIAM Rev. [**10**]{}, 422 (1968) P. Manneville, J. Physique [**41**]{}, 1235 (1980). I. Procaccia and H. Schuster, Phys. Rev. A [**28**]{}, 1210 (1983) T. Geisel, A. Zacherl and G. Radons, Phys. Rev. Lett. [**59**]{}, 2503 (1987) S. B. Lowen and M. C. Teich, Phys. Rev. E [**47**]{}, 992 (1993) K. S. Ralls et al., Phys. Rev. Lett. [**52**]{}, 228 (1984) M. Kuno et al., J. Chem. Phys. [**112**]{}, 3117 (2000) K. T. Shimizu et al., Phys. Rev. B [**63**]{}, 205316 (2001) M. Pelton, D. G. Grier, and P. Guyot-Sionnest, Appl. Phys. Lett. [**85**]{}, 819 (2004). J. Herault, F. Petrelis, S. Fauve, Europhys. Lett. [**111**]{}, 44002 (2015). C. Gissinger, P. Rodriguez-Imazio, S. Fauve, Phys. Fluids [**28**]{}, 034101 (2016) P. Rodriguez-Imazio and C. Gissinger, Phys. Fluids [**28**]{}, 034102 (2016) A. E. Perry, S. Henbest and M. S. Chong, J. Fluid Mech. [**165**]{}, 163 (1986) and references therein. M. Niemann, H. Kantz and E. Barkai, Phys. Rev. Lett. [**110**]{}, 140603 (2013) J. Herault, F. Petrelis, S. Fauve, J. Stat. Phys. [**161**]{}, 1379 (2015). P. Abry et al., J. Physique II [**4**]{}, 725 (1994) V. Shukla, S. Fauve and M. Brachet, Phys. Rev. E [**94**]{}, 061101 (2016). V. Shukla, S. Fauve and M. Brachet, 1/f noise in Kolmogorov flows, in preparation (2018). E. W. Montroll and M. F. Shlesinger, Proc. Natl. Acad. Sci. USA [**79**]{}, 3380 (1982)
--- abstract: 'The trust region method is an algorithm traditionally used in the field of derivative free optimization. The method works by iteratively constructing surrogate models (often linear or quadratic functions) to approximate the true objective function inside some neighborhood of a current iterate. The neighborhood is called “trust region” in the sense that the model is trusted to be good enough inside the neighborhood. Updated points are found by solving the corresponding trust region subproblems. In this paper, we describe an application of random projections to solving trust region subproblems approximately.' bibliography: - 'jll\_tr.bib' --- [Random projections for trust region subproblems]{} Introduction ============ Derivative free optimization (DFO) (see [@conn2009; @Kramer2011]) is a field of optimization comprising techniques for solving optimization problems in the absence of derivatives. This often occurs in the presence of [*black-box functions*]{}, i.e. functions for which no compact or computable representation is available. It may also occur for computable functions having unwieldy representations (e.g. a worst-case exponential-time or simply inefficient evaluation algorithm). A DFO problem in general form is defined as follows: $$\min \; \{ f(x) \; | \; x \in \mathcal{D}\},$$ where $\mathcal{D}$ is a subset of $ \mathbb{R}^n$ and $f(\cdot)$ is a continuous function such that no derivative information about it is available [@Conn1997]. As an example, consider a simulator $f(p,y)$ where $p$ are parameters and $y$ are state variables indexed by timesteps $t$ (so $y=(y^t\;|\;t\le h)$), including some given boundary conditions $y^0$. The output of the simulator is a vector at each timestep $t\le h$. We denote this output, as a function of parameters and state variables, by $f^t(p,y)$.
We also suppose that we collect some actual measurements from the process being simulated, say $\hat{f}^t$ for some $t\in H$, where $H$ is a proper index subset of $\{1,\ldots,h\}$. We would like to choose $p$ such that the behaviour of the simulator is as close as possible to the observed points $\hat{f}^t$ for $t\in H$. The resulting optimization problem is: $$\min \left\{ \bigoplus\limits_{t\in H} \|\hat{f}^t - f^t(p,y)\| \;\bigg|\; p \in \mathcal{P}\land y\in \mathcal{Y} \right\},$$ where $\mathcal{P}$ and $\mathcal{Y}$ are appropriate domains for $p,y$ respectively, $\|\cdot\|$ is a given norm, and $\oplus$ is either $\sum$ or $\max$. This is known to be a black-box optimization problem whenever $f(\cdot)$ is given as an [*oracle*]{}, i.e. the value of $f$ can be obtained given inputs $p,y$, but no other estimate can be obtained directly. In particular, the derivatives of $f(\cdot)$ w.r.t. $p$ and $y$ are assumed to be impossible to obtain. Note that the lack of derivatives essentially implies that it is hard to define, and therefore to compute, local optima. This obviously makes it even harder to compute global optima. This is why most methods in DFO focus on finding local optima. Trust Region (TR) methods are considered among the most suitable methods for solving DFO problems [@Conn1996; @conn2009; @Marazzi2002]. TR methods involve the construction of surrogate models to approximate the true function (locally) in “small” subsets $D\subset\mathcal{D}$, and rely on those models to search for optimal solutions within $D$. Such subsets are called [*trust regions*]{} because the surrogate models are “trusted” to be “good enough” on $D$. TRs are often chosen to be closed balls $B(c,r)$ (with center $c$ and radius $r$) with respect to some norm. There are several ways to obtain new data points, but the most common way is to find them as minima of the current model over the current trust region.
Formally, one solves a sequence of so-called [*TR subproblems*]{}: $$\begin{aligned} \min \, \{m(x) \; | \; x \in B(c, r) \cap \mathcal{D} \},\end{aligned}$$ where $m(\cdot)$ is a surrogate model of the true objective function $f(\cdot)$, which will then be evaluated at the solution of the TR subproblems. Depending on the discrepancy between the model and the true objective, the balls $B(c,r)$ and the models $m(\cdot)$ are updated: the TRs can change radius and/or center. The iterative application of this idea yields a TR method (we leave the explanation of TR methods somewhat vague since it is not our main focus; see [@conn2009] for more information). In this paper we assume that the TR subproblem is defined by a linear or quadratic model $m(\cdot)$ and that $\mathcal{D}$ is a full-dimensional polyhedron defined by a set of linear inequality constraints. After appropriate scaling, the TR subproblem can be written as: $$\min \{x^{\top} Q x + c^{\top} x \; | \; A x \le b, \; \|x\| \le 1 \}, \label{TR:main-prob}$$ where $Q \in \mathbb{R}^{n \times n}$ ($Q$ is not assumed to be positive semidefinite), $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^{m}$. Here, $\| \cdot \|$ refers to the Euclidean norm, and this notation will be used consistently throughout the paper. This problem has been studied extensively [@Bienstock; @Jeyakumar; @Salahi2016; @Burer2015]. Specifically, it is $\mathbf{NP}$-hard in general [@murty], but many polynomially solvable sub-cases have been discovered. For example, without the linear constraints $Ax \le b$ the TR subproblem can be solved in polynomial time [@Ye:1992]; in particular, it can be rewritten as a Semidefinite Programming (SDP) problem. When one linear constraint, or two parallel linear constraints, are added, the problem remains polynomial-time solvable [@Ye03; @sturm2003cones]; see the introduction to [@Bienstock] for a more comprehensive picture.
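For concreteness, a subproblem of the form Eq. (\[TR:main-prob\]) can be handed directly to a local NLP solver. The sketch below (an illustration on random data, not part of the paper's experiments) uses SciPy's SLSQP on a small indefinite instance; since $Q$ is indefinite, only a local optimum is guaranteed.

```python
import numpy as np
from scipy.optimize import minimize

# Random instance of  min x^T Q x + c^T x  s.t.  A x <= b,  ||x|| <= 1.
rng = np.random.default_rng(0)
n, m = 10, 5
M = rng.standard_normal((n, n))
Q = (M + M.T) / 2                 # symmetric, not necessarily PSD
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.ones(m)                    # x = 0 is strictly feasible

obj = lambda x: x @ Q @ x + c @ x
cons = [
    {"type": "ineq", "fun": lambda x: b - A @ x},    # A x <= b
    {"type": "ineq", "fun": lambda x: 1.0 - x @ x},  # ||x||^2 <= 1
]
res = minimize(obj, np.zeros(n), method="SLSQP", constraints=cons)
x_star = res.x
```

Starting from the strictly feasible point $x=0$, the solver returns a feasible point with objective value no worse than the starting one; the cost of this call grows quickly with $n$, which is precisely the issue discussed next.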
The issue… ---------- In practice, the theoretical complexity of the TR subproblem Eq. (\[TR:main-prob\]) is a moot point, since the time taken by each function evaluation $f(p,y)$ is normally expected to exceed the time taken to solve Eq. (\[TR:main-prob\]). Since the solution sought is not necessarily global, any local Nonlinear Programming (NLP) solver [@snopt; @ipopt] can be deployed on Eq. (\[TR:main-prob\]) to identify a locally optimal solution. The issue we address in this paper arises when the sheer number of variables in Eq. (\[TR:main-prob\]) prevents even local NLP solvers from converging to some local optimum in acceptable solution times. The usefulness of TR methods is severely hampered if the TR subproblem solution phase represents a non-negligible fraction of the total solution time. As a method for addressing this issue, we propose to solve the TR subproblem [*approximately*]{} in exchange for speed. We justify this choice by noting that the general lack of optimality guarantees in DFO and black-box function optimization makes it impossible to evaluate the loss one would incur in trading off local optimality guarantees in the TR subproblem for an approximate solution. As an approximation method, we consider [*random projections*]{}. Random projections are simple but powerful tools for dimension reduction [@Woodruff; @vu2015; @pilanci2014; @jll_ctw16]. They are often constructed as random matrices sampled from some given distribution classes. The simplest examples are matrices sampled componentwise from independent identically distributed (i.i.d.) random variables with Gaussian $\mathcal{N}(0,1)$, uniform $[-1,1]$ or Rademacher $\pm 1$ distributions. Despite their simplicity, random projections are competitive with more popular dimension reduction methods such as Principal Component Analysis (PCA) / Multi-Dimensional Scaling (MDS) [@jolliffe_10], Isomap [@tenenbaum_00] and more.
One of the most important features of a random projection is that it approximately preserves the norm of any given vector with high probability. In particular, let $P \in \mathbb{R}^{d \times n}$ be a random projection (such as those introduced above); then for any $x \in \mathbb{R}^{n}$ and $\varepsilon \in (0,1)$, we have $$\label{RP} \mbox{\sf Prob} \bigg[ (1 - \varepsilon) \|x\|^2 \le \|Px\|^2 \le (1 + \varepsilon) \|x\|^2\bigg] \ge 1-2e^{-\mathcal{C}\varepsilon^2 d},$$ where $\mathcal{C}$ is a [*universal constant*]{} (in fact a more precise statement should be existentially quantified by “there exists a constant $\mathcal{C}$ such that…”). Perhaps the most famous application of random projections is the so-called *Johnson-Lindenstrauss lemma* [@jllemma]. It states that for any $\varepsilon \in (0,1)$ and for any finite set $X \subseteq \mathbb{R}^n$, there is a mapping $F:\mathbb{R}^n \to \mathbb{R}^d$, where $d = O(\frac{\log |X|}{\varepsilon^2})$, such that $$\forall x, y \in X \qquad (1 - \varepsilon) \|x - y\|^2 \le \|F(x) - F(y)\|^2 \le (1 + \varepsilon) \|x - y\|^2 .$$ Such a mapping $F$ can be found as a realization of the random projection $P$ above; the existence of a correct mapping is shown (by the probabilistic method) using the union bound. Moreover, the probability of sampling a correct mapping is also very high, i.e. in practice there is often no need to re-sample $P$. …and how we address it ---------------------- The subject of this paper is the applicability of random projections to the TR subproblem Eq. (\[TR:main-prob\]). Let $P \in \mathbb{R}^{d \times n}$ be a random projection with i.i.d. $\mathcal{N}(0,1)$ entries.
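The norm-preservation property Eq. (\[RP\]) is easy to check empirically. A small sketch (dimensions chosen arbitrarily) using the common scaling $P = G/\sqrt{d}$ for a Gaussian matrix $G$, so that $\mathbb{E}\,\|Px\|^2 = \|x\|^2$:

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, eps = 1000, 500, 0.3
# Scaled Gaussian projection: with P = G / sqrt(d), E ||P x||^2 = ||x||^2.
P = rng.standard_normal((d, n)) / np.sqrt(d)

X = rng.standard_normal((5, n))            # a few test vectors
ratios = np.array([(P @ x) @ (P @ x) / (x @ x) for x in X])
max_distortion = np.abs(ratios - 1.0).max()
```

Each ratio $\|Px\|^2/\|x\|^2$ is a $\chi^2_d/d$ variable with standard deviation $\sqrt{2/d}\approx 0.06$ here, so the observed distortion stays well below $\varepsilon=0.3$ for these few vectors, consistent with the exponential tail bound.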
We want to “project” each vector $x \in \mathbb{R}^n$ to a lower-dimensional vector $Px \in \mathbb{R}^d$ and study the following *projected problem* $$\begin{aligned} \min \, \{x^{\top} (P^\top P Q P^\top P) x + c^{\top} P^\top P x \; | \; A P^\top P x \le b, \; \|Px\| \le 1 \}.\end{aligned}$$ By setting $u = Px, \; \bar{c} = Pc, \; \bar{A} = A P^\top$, we can rewrite it as $$\begin{aligned} \min_{u \in \;{\mbox{\sf\scriptsize Im}}(P)} \, \{u^{\top} (P Q P^\top) u + \bar{c}^{\top} u \; | \; \bar{A} u \le b, \; \|u\| \le 1 \},\end{aligned}$$ where ${\mbox{\sf Im}}(P)$ is the image space generated by $P$. Intuitively, since $P$ is a projection from a (supposedly very high-dimensional) space to a lower-dimensional space, it is very likely to be a surjective mapping. Therefore, we assume it is safe to remove the constraint $u \in \;{\mbox{\sf Im}}(P)$ and study the lower-dimensional problem: $$\begin{aligned} \label{TR:proj-prob} \min_{u \in \mathbb{R}^d} \, \{u^{\top} (P Q P^\top) u + \bar{c}^{\top} u \; | \; \bar{A} u \le b, \; \|u\| \le 1 \},\end{aligned}$$ where $u$ ranges over $\mathbb{R}^d$. As we will show later, Eq. (\[TR:proj-prob\]) yields a good approximate solution of the TR subproblem with very high probability. Random projections for linear and quadratic models ================================================== In this section, we explain the motivation for the study of the projected problem (\[TR:proj-prob\]). We start with the following simple lemma, which says that linear and quadratic models can be approximated well using random projections. Approximation results --------------------- \[jll-approx\] Let $P: \mathbb{R}^n \to \mathbb{R}^d$ be a random projection satisfying Eq. (\[RP\]) and let $0 < \varepsilon < 1$.
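Forming the data of the projected problem Eq. (\[TR:proj-prob\]) and lifting a solution back is plain linear algebra: a feasible $u$ yields the candidate $x = P^\top u$, whose linear constraints transfer exactly since $Ax = AP^\top u = \bar{A}u$, while $\|x\| \approx \|u\|$ when $PP^\top \approx \mathbb{I}_d$. The sketch below is our illustration of this construction on random data (the scaling of $P$ by $1/\sqrt{n}$, which makes $PP^\top$ close to the identity, is an assumption of the sketch), not the paper's computational experiments:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, m = 2000, 50, 8
# Entries ~ N(0, 1/n), so that P P^T is close to the d x d identity.
P = rng.standard_normal((d, n)) / np.sqrt(n)

M = rng.standard_normal((n, n))
Q = (M + M.T) / 2
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.ones(m)

# Data of the projected subproblem: d x d instead of n x n.
Q_bar = P @ Q @ P.T
c_bar = P @ c
A_bar = A @ P.T

# Any feasible u of the projected problem lifts to x = P^T u.
u = rng.standard_normal(d)
u *= 0.9 / np.linalg.norm(u)          # ||u|| = 0.9 <= 1
u /= max(1.0, (A_bar @ u).max())      # rescale so that A_bar u <= b = 1
x = P.T @ u
ratio = np.linalg.norm(x) / np.linalg.norm(u)
```

The linear constraints hold exactly after lifting, and the ratio $\|x\|/\|u\|$ stays close to one, so the ball constraint is only slightly perturbed.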
Then there is a universal constant $\mathcal{C}_0$ such that - For any $x, y \in \mathbb{R}^n$: $${{\color{black}\langle x, y\rangle - \varepsilon \|x\|\,\|y\| \le \langle Px, Py \rangle \le \langle x, y\rangle + \varepsilon \|x\|\,\|y\|}}$$ with probability at least $1 - 4e^{-\mathcal{C}_0\varepsilon^2 d}$. - For any $x\in \mathbb{R}^n$ and $A \in \mathbb{R}^{m \times n}$ whose rows are unit vectors: $${{\color{black} Ax - \varepsilon \|x\| \begin{bmatrix}1 \\ \ldots \\ 1\end{bmatrix} \le AP^{\top}Px \le Ax + \varepsilon \|x\| \begin{bmatrix}1 \\ \ldots \\ 1\end{bmatrix} }}$$ with probability at least $1 - 4me^{-\mathcal{C}_0\varepsilon^2 d}$. - For any two vectors $x, y\in \mathbb{R}^n$ and any square matrix $Q \in \mathbb{R}^{n \times n}$, with probability at least $1 - 8ke^{-\mathcal{C}_0\varepsilon^2 d}$ we have: $${{\color{black} x^{\top} Q y - 3\varepsilon \|x\|\, \|y\|\, \|Q\|_* \le x^{\top}P^{\top}PQP^{\top}Py \le x^{\top} Q y + 3\varepsilon \|x\|\, \|y\|\, \|Q\|_*}},$$ in which $\|Q\|_*$ is the nuclear norm of $Q$ and $k$ is the rank of $Q$. \ (i) Let $\mathcal{C}_0$ be the same universal constant (denoted by $\mathcal{C}$) as in Eq. (\[RP\]). By the property in Eq. (\[RP\]) applied to the two vectors $u+v$ and $u-v$, together with the union bound, we have $$\begin{aligned} |\langle Pu, Pv \rangle - \langle u, v \rangle| & = \frac{1}{4} \big|\|P(u+v)\|^2 - \|P(u-v)\|^2 - \|u+v\|^2 + \|u-v\|^2 \big| \\ & \le \frac{1}{4} \big|\|P(u+v)\|^2 - \|u+v\|^2 \big| + \frac{1}{4}\big|\|P(u-v)\|^2 - \|u-v\|^2\big| \\ & \le \frac{\varepsilon}{4} (\|u+v\|^2 + \|u-v\|^2) = \frac{\varepsilon}{2} (\|u\|^2 + \|v\|^2), \end{aligned}$$ with probability at least $1 - 4 e^{-\mathcal{C}_0\varepsilon^2d}$. Applying this result to $u= \frac{x}{\|x\|}$ and $v = \frac{y}{\|y\|}$, we obtain the desired inequality. \(ii) Let $A_1,\ldots, A_m$ be the (unit) row vectors of $A$.
Then $$AP^\top Px - Ax = \begin{pmatrix} A_1^\top P^\top Px - A_1^\top x\\ \ldots \\ A_m^\top P^\top Px - A_m^\top x\end{pmatrix} = \begin{pmatrix} \langle PA_1, Px \rangle - \langle A_1, x \rangle \\ \ldots \\ \langle PA_m, Px \rangle - \langle A_m, x \rangle\end{pmatrix}.$$ The claim follows by applying Part (i) and the union bound. \(iii) Let $Q = U\Sigma V^{\top}$ be the Singular Value Decomposition (SVD) of $Q$. Here $U, V$ are $(n \times k)$-real matrices with orthonormal column vectors $u_1, \ldots, u_k$ and $v_1,\ldots,v_k$, respectively, and $\Sigma = \mbox{\sf diag} (\sigma_1,\ldots,\sigma_k)$ is a diagonal real matrix with positive entries. Denote by $\textbf{1}_k = (1,\ldots, 1)^\top$ the $k$-dimensional column vector of all $1$ entries. Since $$\begin{aligned} x^{\top}P^{\top}PQP^{\top}Py & = (U^\top P^\top P x)^\top \Sigma (V^\top P^\top P y) \\ & = \big[U^\top x + U^\top (P^\top P - \mathbb{I}_n) x\big]^\top \Sigma \big[V^\top y + V^\top (P^\top P - \mathbb{I}_n) y \big] \end{aligned}$$ the two inequalities $$\begin{aligned} (U^\top x - \varepsilon \|x\| \textbf{1}_k)^\top \; \Sigma \;( V^\top y - \varepsilon \|y\| \textbf{1}_k) \le x^{\top}P^{\top}PQP^{\top}Py \le (U^\top x + \varepsilon \|x\| \textbf{1}_k)^\top \; \Sigma \;( V^\top y + \varepsilon \|y\| \textbf{1}_k)\end{aligned}$$ hold with probability at least $1 - 8ke^{-\mathcal{C}_0\varepsilon^2 d}$ (by applying part (ii) and the union bound).
Moreover $$\begin{aligned} (U^\top x - \varepsilon \|x\| \textbf{1}_k)^\top \; \Sigma \;( V^\top y - \varepsilon \|y\| \textbf{1}_k) = & \; \; x^{\top}Qy - \varepsilon \|x\| (\textbf{1}_k^\top \Sigma V^\top y) - \varepsilon \|y\| (x^\top U \Sigma \textbf{1}_k ) + \varepsilon^2 \|x\|\, \|y\| \sum_{i=1}^k \sigma_i\\ = & \; \; x^{\top}Qy - \varepsilon(\sigma_1, \ldots, \sigma_k) \big( \|x\| V^\top y + \|y\| U^\top x \big) + \varepsilon^2 \|x\|\, \|y\| \sum_{i=1}^k \sigma_i,\end{aligned}$$ and $$\begin{aligned} (U^\top x + \varepsilon \|x\| \textbf{1}_k)^\top \; \Sigma \;( V^\top y + \varepsilon \|y\| \textbf{1}_k) = & \; \; x^{\top}Qy + \varepsilon(\sigma_1, \ldots, \sigma_k) \big( \|x\| V^\top y + \|y\| U^\top x \big) + \varepsilon^2 \|x\|\, \|y\| \sum_{i=1}^k \sigma_i. \end{aligned}$$ Therefore, $$\begin{aligned} | x^{\top}P^{\top}PQP^{\top}Py - x^{\top}Qy| \le &\;\; \|x\|\,\|y\| \left(2 \varepsilon \sqrt{\sum_{i=1}^k \sigma^2_i} + \varepsilon^2 \sum_{i=1}^k \sigma_i\right) \\ \le &\;\; 3 \varepsilon \|x\|\,\|y\| \sum_{i=1}^k \sigma_i = 3 \varepsilon \|x\|\,\|y\|\, \|Q\|_*\end{aligned}$$ (using $\sqrt{\sum_{i=1}^k \sigma_i^2} \le \sum_{i=1}^k \sigma_i$ and $\varepsilon^2 \le \varepsilon$) with probability at least $1 - 8ke^{-\mathcal{C}_0\varepsilon^2 d}$. It is known that singular values of random matrices often concentrate around their expectations. In the case when the random matrix is sampled from Gaussian ensembles, this phenomenon is by now well understood. The following lemma, which is proved in [@Zhang2013], uses this phenomenon to show that when $P \in \mathbb{R}^{d \times n}$ is a Gaussian random matrix (with the number of rows significantly smaller than the number of columns), then $PP^{\top}$ is very close to the identity matrix. \[Zhang\] Let $P \in \mathbb{R}^{d \times n}$ be a random matrix in which each entry is an i.i.d. $\mathcal{N}(0, \frac{1}{\sqrt{n}})$ random variable.
Then for any $\delta > 0$ and $0 < \varepsilon < \frac{1}{2}$, with probability at least $1 - \delta$, we have: $$\|PP^{\top} - I\|_2 \le \varepsilon$$ provided that $$\label{condition1} n \ge \frac{(d+1) \log (\frac{2d}{\delta})} {\mathcal{C}_1 \varepsilon^2},$$ where $\|\cdot\|_2$ is the spectral norm of the matrix and $\mathcal{C}_1 > \frac{1}{4}$ is some universal constant. This lemma also tells us that when we go from low to high dimensions, with high probability the norms of all points suffer only small distortions. Indeed, for any vector $u \in \mathbb{R}^d$, $$\|P^\top u\|^2 - \|u\|^2 = \langle P^\top u,P^\top u\rangle -\langle u,u\rangle = \langle (PP^\top - I) u, u\rangle \in (- \varepsilon \|u\|^2, \varepsilon \|u\|^2),$$ by the Cauchy-Schwarz inequality together with $\|PP^\top - I\|_2 \le \varepsilon$. Moreover, it implies that $\|P\|_2 \le (1+{\varepsilon})$ with probability at least $1 - \delta$. Condition (\[condition1\]) is not difficult to satisfy, since $d$ is often very small compared to $n$. It suffices that $n$ be large enough to dominate the factor $\frac{1}{\varepsilon^2}$. Trust region subproblems with linear models ------------------------------------------- We will first work with a simple case, i.e. when the surrogate models used in TR methods are linear: $$\begin{aligned} \label{TR:main-prob-linear} \min \, \{c^{\top} x \; | \; A x \le b, \; \|x\| \le 1, x \in \mathbb{R}^n \}.\end{aligned}$$ We will establish a relationship between problem (\[TR:main-prob-linear\]) and its corresponding projected problem: $$\begin{aligned} \tag{$P^{-}_{\varepsilon}$} \min \, \{(Pc)^{\top} u \; | \; AP^{\top} u \le b, \; \|u\| \le 1-\varepsilon, u \in \mathbb{R}^d \}.\end{aligned}$$ Note that, in the above problem, we shrink the unit ball by $\varepsilon$. We first obtain the following feasibility result: \[thm:feasibility1\] Let $P \in \mathbb{R}^{d \times n}$ be a random matrix in which each entry is an i.i.d. $\mathcal{N}(0, \frac{1}{\sqrt{n}})$ random variable. Let $\delta \in (0,1)$.
Assume further that $$n \ge \frac{(d+1) \log (\frac{2d}{\delta})} {\mathcal{C}_1 \varepsilon^2},$$ for some universal constant $\mathcal{C}_1 > \frac{1}{4}$. Then with probability at least $1- \delta$, for any feasible solution $u$ of the projected problem ($P^{-}_{\varepsilon}$), $P^\top u$ is also feasible for the original problem (\[TR:main-prob-linear\]). We remark on the universal nature of Theorem \[thm:feasibility1\]: with a fixed probability, feasibility holds simultaneously for all vectors $u$, not just for a specific one. Let $\mathcal{C}_1$ be as in Lemma \[Zhang\]. Let $u$ be any feasible solution for the projected problem ($P^{-}_{\varepsilon}$) and take $\hat{x} = P^\top u$. Then we have $A\hat{x} = AP^\top u \le b$ and $$\|\hat{x}\|^2 = \| P^\top u \|^2 = u^\top P P^\top u = u^\top u + u^\top (P P^\top - I) u \le (1+\varepsilon) \|u\|^2,$$ with probability at least $1 - \delta$ (by Lemma \[Zhang\]). This implies that $\|\hat{x}\| \le (1+\varepsilon/2) \|u\|$, since $\sqrt{1+\varepsilon} \le 1+\varepsilon/2$; and since $\|u\| \le 1 - {\varepsilon}$, we have $$\|\hat{x}\| \le (1+\frac{\varepsilon}{2}) (1-\varepsilon) < 1,$$ with probability at least $1 - \delta$, which proves the theorem. In order to estimate the quality of the objective function value, we define another projected problem, which can be considered as a relaxation of $(P^{-}_{\varepsilon})$ (we enlarge the feasible set by $\varepsilon$): $$\begin{aligned} \tag{$P^{+}_{\varepsilon}$} \min \, \{(Pc)^{\top} u \; | \; AP^{\top} u \le b + \varepsilon, \; \|u\| \le 1 + \varepsilon, u \in \mathbb{R}^d \}.\end{aligned}$$ Intuitively, these two projected problems $(P^{-}_{\varepsilon})$ and $(P^{+}_{\varepsilon})$ are very close to each other when $\varepsilon$ is small enough (under some additional assumptions, such as the “fullness" of the original polyhedron). Moreover, to make our discussions meaningful, we need to assume that they are feasible (in fact, it is enough to assume the feasibility of $(P^{-}_{\varepsilon})$ only).
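The lifting step of Theorem \[thm:feasibility1\] is easy to check numerically. The following sketch (a minimal illustration, not part of the paper: the instance, dimensions and all variable names are our own choices) samples a Gaussian $P$ with i.i.d. $\mathcal{N}(0, \frac{1}{\sqrt{n}})$ entries, picks a feasible point $u$ of the shrunk projected problem, and verifies that $\hat{x} = P^\top u$ is feasible for the original problem and that its norm is close to $\|u\|$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, eps = 10000, 20, 5, 0.1

# Gaussian projection with i.i.d. N(0, 1/n) entries (std 1/sqrt(n)),
# so that P P^T is close to the d x d identity when d << n.
P = rng.normal(0.0, 1.0 / np.sqrt(n), size=(d, n))

# A random instance with unit row vectors A_i and b > 0 (so 0 is feasible).
A = rng.normal(size=(m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)
b = np.abs(rng.normal(size=m)) + 0.5

# A feasible point u of the shrunk projected problem (P^-_eps): a random
# point of norm 1 - eps, halved until A P^T u <= b (rarely needed here).
u = rng.normal(size=d)
u *= (1.0 - eps) / np.linalg.norm(u)
while np.any(A @ (P.T @ u) > b):
    u *= 0.5

# Lifting: x_hat = P^T u is feasible for the original problem, and its
# norm stays close to ||u|| because P^T is a near-isometry (Lemma Zhang).
x_hat = P.T @ u
assert np.all(A @ x_hat <= b)
assert np.linalg.norm(x_hat) <= 1.0
```

Note that feasibility of $\hat{x}$ here holds for every feasible $u$ of ($P^{-}_{\varepsilon}$) simultaneously, exactly as the theorem asserts, since it only depends on the realization of $P$.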
Let $u_{\varepsilon}^{-}$ and $u_{\varepsilon}^{+}$ be optimal solutions for these two problems, respectively. Denote by $x_{\varepsilon}^{-} = P^\top u_\varepsilon^{-}$ and $x_{\varepsilon}^{+} = P^\top u_\varepsilon^{+}$. Let $x^*$ be an optimal solution for the original problem (\[TR:main-prob-linear\]). We will bound $c^\top x^*$ between $c^\top x_{\varepsilon}^{-}$ and $c^\top x_{\varepsilon}^{+}$, two values that are expected to be approximately close to each other. Let $P \in \mathbb{R}^{d \times n}$ be a random matrix in which each entry is an i.i.d. $\mathcal{N}(0, \frac{1}{\sqrt{n}})$ random variable. Let $\delta \in (0,1)$. Then there are universal constants $\mathcal{C}_0 > 1$ and $\mathcal{C}_1 > \frac{1}{4}$ such that if the two following conditions $$d \ge \frac{\log (m/\delta)}{\mathcal{C}_0 {\varepsilon}^2}, \quad \mbox{and } \quad n \ge \frac{(d+1) \log (\frac{2d}{\delta})} {\mathcal{C}_1 \varepsilon^2},$$ are satisfied, we will have:\ (i) With probability at least $1 - \delta $, the solution $x_{\varepsilon}^{-}$ is feasible for the original problem (\[TR:main-prob-linear\]).\ (ii) With probability at least $1 - \delta$, we have: $$c^{\top}x_{\varepsilon}^{-} \ge c^{\top} x^* \ge c^{\top}x_{\varepsilon}^{+} - \varepsilon \|c\|.$$ Select $\mathcal{C}_0$ and $\mathcal{C}_1$ as in Lemma \[jll-approx\] and Lemma \[Zhang\]. Note that the second condition is the requirement for Lemma \[Zhang\] to hold, and the first condition is equivalent to $ m \,e ^{-\mathcal{C}_0 \varepsilon^2 d} \le \delta$. Therefore, we can choose the universal constant $\mathcal{C}_0$ such that $1 - (4m+6) \,e^{-\mathcal{C}_0\varepsilon^2 d} \ge 1 - \delta$.\ (i) From the previous theorem, with probability at least $1 - \delta $, for any feasible point $u$ of the projected problem ($P^{-}_{\varepsilon}$), $P^\top u$ is also feasible for the original problem (\[TR:main-prob-linear\]). Therefore, it must hold also for $x_{\varepsilon}^{-}$.
\(ii) From part (i), with probability at least $1- \delta$, $x_{\varepsilon}^{-}$ is feasible for the original problem (\[TR:main-prob-linear\]). Therefore, we have $$\begin{aligned} c^{\top}x_{\varepsilon}^{-} \ge c^{\top} x^* ,\end{aligned}$$ with probability at least $1- \delta$. Moreover, due to Lemma \[jll-approx\], with probability at least $1 - 4e^{-\mathcal{C}_0\varepsilon^2 d}$, we have $$\begin{aligned} c^{\top} x^* \ge c^{\top} P^{\top} P x^* - \varepsilon \|c\| \,\|x^*\| \ge c^{\top} P^{\top} P x^* - \varepsilon \|c\|,\end{aligned}$$ (the last inequality follows from $\|x^*\| \le 1$). On the other hand, let $\hat{u}= Px^*$; due to Lemma \[jll-approx\], we have $$AP^\top \hat{u} = AP^\top P x^* \le Ax^*+ \varepsilon \|x^*\| \begin{bmatrix}1 \\ \cdots \\ 1\end{bmatrix} \le Ax^* + \varepsilon \begin{bmatrix}1 \\ \cdots \\ 1\end{bmatrix} \le b + {\varepsilon},$$ with probability at least $1 - 4me^{-\mathcal{C}_0\varepsilon^2 d}$, and $$\|\hat{u}\| = \|Px^*\| \le (1+ {\varepsilon}) \|x^*\| \le (1+ {\varepsilon}),$$ with probability at least $1 - 2e^{-\mathcal{C}_0\varepsilon^2 d}$, by Property . Therefore, $\hat{u}$ is a feasible solution for the problem ($P^{+}_{\varepsilon}$) with probability at least $1 - (4m+2)e^{-\mathcal{C}_0\varepsilon^2 d}$. Since $u^+_{\varepsilon}$ is the optimal solution for the problem ($P^{+}_{\varepsilon}$), it follows that $$\begin{aligned} c^{\top} x^* \ge c^{\top} P^{\top} P x^* - \varepsilon \|c\| = c^{\top} P^{\top} \hat{u} - \varepsilon \|c\| \ge c^{\top} P^{\top} u_{\varepsilon}^{+} - \varepsilon \|c\| = c^{\top} x_{\varepsilon}^{+} - \varepsilon \|c\|,\end{aligned}$$ with probability at least $1 - (4m+6)e^{-\mathcal{C}_0\varepsilon^2 d}$, which is at least $1- \delta$ for our chosen universal constant $\mathcal{C}_0$. We have shown that $c^\top x^*$ is bounded between $c^{\top}x_{\varepsilon}^{-}$ and $c^{\top}x_{\varepsilon}^{+} $. Now we will estimate the gap between these two bounds.
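The proof above leans repeatedly on parts (i) and (ii) of Lemma \[jll-approx\], which are easy to check empirically. In the sketch below, the dimensions, the tolerance, and the $\mathcal{N}(0, \frac{1}{d})$ entry variance (under which $\mathbb{E}\|Px\|^2 = \|x\|^2$) are our illustrative assumptions about the scaling that the paper fixes in the equation cited by the lemma.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m, eps = 5000, 500, 20, 0.25

# JL-type scaling (an illustrative assumption): i.i.d. N(0, 1/d) entries,
# so that E||Px||^2 = ||x||^2 for every fixed x.
P = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, n))

# Part (i): <Px, Py> approximates <x, y> up to eps * ||x|| * ||y||.
x = rng.normal(size=n)
y = x + rng.normal(size=n)  # correlated pair, so <x, y> is not trivially small
err_i = abs((P @ x) @ (P @ y) - x @ y)
assert err_i <= eps * np.linalg.norm(x) * np.linalg.norm(y)

# Part (ii): componentwise, A P^T P x stays within eps * ||x|| of A x
# when the rows of A are unit vectors.
A = rng.normal(size=(m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)
err_ii = np.max(np.abs(A @ (P.T @ (P @ x)) - A @ x))
assert err_ii <= eps * np.linalg.norm(x)
```

The lemma's bounds are stated for a fixed tolerance $\varepsilon$; in the sketch the observed errors sit well inside the stated bands, consistent with the exponential concentration in $d$.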
First of all, as stated at the outset, we assume that the feasible set $$S^* = \{ x \in \mathbb{R}^n \; | \; A x \le b, \; \|x\| \le 1\}$$ is full-dimensional. This is a natural assumption since otherwise the polyhedron $\{ x \in \mathbb{R}^n \; | \; A x \le b\}$ is almost surely not full-dimensional either. We associate with each set $S$ a positive number $\mbox{\sc full}(S) > 0$, which we call the fullness measure of $S$, defined as the maximum radius of any closed ball contained in $S$. Now, from our assumption, we have $\mbox{\sc full} (S^*) = r^* > 0$, where $r^\ast$ is the radius of the greatest ball inscribed in $S^\ast$ (see Fig. \[f:fullness\]). ![Fullness of a set.[]{data-label="f:fullness"}](fullness){width="10cm"} The following lemma characterizes the fullness of $S^+_{\varepsilon}$ with respect to $r^*$, where $$S_{\varepsilon}^{+}= \{ u \in \mathbb{R}^d \; | \; AP^{\top} u \le b + \varepsilon, \; \|u\| \le 1 + \varepsilon\},$$ that is, the feasible set of the problem ($P_{\varepsilon}^{+}$). \[fullness-lem\] Let $S^*$ be full-dimensional with $\mbox{\sc full} (S^*)= r^*$. Then with probability at least $1 - 3\delta$, $S^+_{\varepsilon}$ is also full-dimensional with the fullness measure: $${\mbox{\sc full} }(S^+_{\varepsilon}) \ge (1 - {\varepsilon}) r^*.$$ In the proof of this lemma, we will extensively use the fact that, for any row vector $a \in \mathbb{R}^n$, $$\sup_{\|u\| \le r} a^\top u = r\|a\|,$$ which is the equality case of the Cauchy-Schwarz inequality. For each $i \in \{1,\ldots,m\}$, let $A_i$ denote the $i$th row of $A$. Let $B(x_0, r^*)$ be a closed ball contained in $S^*$.
Then for any $x \in \mathbb{R}^n$ with $\|x\| \le r^*$, we have $A(x_0 + x) = Ax_0 + Ax \le b,$ which implies that for any $i \in \{1,\ldots,m\}$, $$\label{eq:1} b_i \ge (Ax_0)_i + \sup_{\|x\| \le r^*} A_ix = (Ax_0)_i + r^* \|A_i\|= (Ax_0)_i + r^*,$$ hence $$b \ge Ax_0 + r^* \quad \mbox{or equivalently, } \quad Ax_0 \le b - r^* .$$ By Lemma \[jll-approx\], with probability at least $1 - \delta$, we have $$AP^\top P x_0 \le Ax_0 + {\varepsilon}\le b - r^* + {\varepsilon}.$$ Let $u \in \mathbb{R}^d$ with $\|u\| \le (1-{\varepsilon})r^*$, then for any $i \in \{1,\ldots,m\}$, by the above inequality we have: $$\begin{aligned} (AP^\top(Px_0+u))_i & = (AP^\top Px_0)_i + (AP^\top u)_i \\ & \le b_i+{\varepsilon}-r^* + (AP^\top)_iu \\ &= b_i+{\varepsilon}-r^* +A_iP^\top u \end{aligned}$$ where $(AP^\top)_i$ denotes the $i$th row of $AP^\top$. Hence, by the Cauchy-Schwarz inequality, $$(AP^\top(Px_0+u))_i\le b_i+{\varepsilon}-r^* + (1-{\varepsilon})r^*\|A_iP^\top\| .$$ Using Eq.  and the union bound, we can see that with probability at least $1-2me^{-\mathcal{C}_0\varepsilon^2 d}\ge 1-\delta$, we have: for all $i \in \{1,\ldots,m\}$, $\|A_iP^\top\|\le (1+{\varepsilon})\|A_i\|=(1+{\varepsilon})$. Hence $$AP^\top(Px_0+u)\le b+{\varepsilon}-r^* + (1-{\varepsilon})r^*(1+{\varepsilon}) \le b+{\varepsilon}$$ with probability at least $1-2\delta$. In other words, with probability at least $1-2\delta$, the closed ball $B^*$ centered at $Px_0$ with radius $(1-{\varepsilon})r^*$ is contained in $\{u\;|\; AP^\top u \le b + {\varepsilon}\}$. Moreover, since $B(x_0, r^*)$ is contained in $S^*$, which is a subset of the unit ball, we have $\|x_0\| \le 1 - r^*$.
With probability at least $1 - \delta$, for all vectors $u$ in $B(Px_0, (1-{\varepsilon})r^*)$, we have $$\|u\|\le \|Px_0\| + (1-{\varepsilon})r^* \le (1+{\varepsilon}) \|x_0\| + (1-{\varepsilon})r^* \le (1 + {\varepsilon})(1-r^*) + (1-{\varepsilon})r^* \le 1 + {\varepsilon}.$$ Therefore, by the definition of $S^+_{\varepsilon}$ we have $$B\big(Px_0, (1-{\varepsilon})r^*\big) \subseteq S^+_{\varepsilon},$$ which implies that the fullness of $S^+_{\varepsilon}$ is at least $(1 - {\varepsilon}) r^*$, with probability at least $1 - 3\delta$. Now we will estimate the gap between the two objective functions of the problems ($P^+_\varepsilon$) and ($P^-_\varepsilon$) using the fullness measure. The theorem states that as long as the fullness of the original polyhedron is large enough, the gap between them is small. \[thm:sandwich1\] With probability at least $1 - 2\delta$, we have $$c^\top x^+_{\varepsilon}\le c^{\top} x^-_{\varepsilon}\le c^{\top} x_{\varepsilon}^{+} + \frac{18{\varepsilon}}{\mbox{\sc full} (S^*) }\|c\|.$$ Let $B(u_0, r_0)$ be a closed ball of maximum radius contained in $S^+_{\varepsilon}$. In order to establish the relation between $u^+_{\varepsilon}$ and $u^-_{\varepsilon}$, our idea is to move $u^+_{\varepsilon}$ closer to $u_0$ (defined above), so that the new point is contained in $S^-_{\varepsilon}$, the feasible set of ($P^{-}_{\varepsilon}$). Therefore, its objective value will be at least that of $u^-_{\varepsilon}$, but quite close to the objective value of $u^+_{\varepsilon}$. ![Idea of the proof](idea-proof){width="8cm"} We define $\hat{u} : = (1-\lambda) u_{\varepsilon}^{+} + \lambda u_0$ for some $\lambda \in (0,1)$ specified later. We want to find $\lambda$ such that $\hat{u}$ is feasible for $P_{\varepsilon}^{-}$ while its corresponding objective value is not so different from $c^{\top}x_{\varepsilon}^{+} $. Since for all $\|u\| \le r_0$: $$AP^{\top} (u_0 + u) = AP^{\top} u_0 + AP^{\top} u \le b + \varepsilon,$$ then, similarly to Eq.
, $$AP^{\top} u_0 \le b + \varepsilon - r_0 \begin{pmatrix} \|A_1P^\top\| \\ \vdots \\ \|A_mP^\top\| \end{pmatrix} .$$ Therefore, we have, with probability at least $1-\delta$, $$AP^{\top} u_0 \le b + \varepsilon - r_0 (1-{\varepsilon}) \begin{pmatrix} \|A_1\| \\ \vdots \\ \|A_m\| \end{pmatrix}= b + \varepsilon - r_0 (1-{\varepsilon}).$$ Hence $$AP^{\top} \hat{u} = (1-\lambda) AP^{\top}u_{\varepsilon}^{+} + \lambda AP^{\top}u_0 \le b + \varepsilon - \lambda r_0 (1-{\varepsilon}) \le b + \varepsilon - \frac{1}{2}\lambda r_0 ,$$ as we can assume w.l.o.g. that ${\varepsilon}\le \frac{1}{2}$. Hence, $AP^{\top} \hat{u} \le b$ if we choose $\varepsilon \le \lambda \frac{r_0}{2} $. Moreover, let $u$ be the vector collinear with $u_0$, pointing in the same direction, with $\|u\|=r_0$, so that $u_0 + u \in B(u_0, r_0)$; we then have $$\|u_0\| = \|u_0 + u\| - r_0 \le 1 + \varepsilon - r_0,$$ so that $$\|\hat{u}\| \le (1-\lambda) \|u_{\varepsilon}^{+} \|+ \lambda \|u_0\| \le (1-\lambda) (1 + {\varepsilon}) + \lambda (1 + \varepsilon - r_0) = 1 + {\varepsilon}- \lambda r_0,$$ which is less than or equal to $1-{\varepsilon}$ for $$\varepsilon \le \frac{1}{2} \lambda r_0.$$ We can now choose $\lambda = 2\frac{{\varepsilon}}{r_0 }$. With this choice, $\hat{u}$ is a feasible point for the problem $P^-_{\varepsilon}$. Therefore, we have $$c^{\top} P^{\top} u^-_{\varepsilon}\le c^{\top} P^{\top} \hat{u} = c^{\top} P^{\top} u_{\varepsilon}^{+} + \lambda c^{\top} P^{\top} (u_0 - u_{\varepsilon}^{+} ) \le c^{\top} P^{\top} u_{\varepsilon}^{+} + \frac{4 (1+{\varepsilon}){\varepsilon}}{r_0 }\|Pc\|.$$ By Lemma \[fullness-lem\], we know that $r_0 \ge (1- {\varepsilon}) r^*$, therefore, we have: $$c^{\top} P^{\top} u^-_{\varepsilon}\le c^{\top} P^{\top} \hat{u} \le c^{\top} P^{\top} u_{\varepsilon}^{+} + \frac{4 (1+{\varepsilon}){\varepsilon}}{r_0 }\|Pc\| \le c^{\top} P^{\top} u_{\varepsilon}^{+} + \frac{4(1+{\varepsilon})^2{\varepsilon}}{(1- {\varepsilon}) r^* }\|c\|,$$ with probability at least $1 - \delta$.
The claim of the theorem now follows, by noticing that, when $0 \le {\varepsilon}\le \frac{1}{2}$, we can simplify: $$\frac{2(1+{\varepsilon})^2}{(1- {\varepsilon})} \le \frac{2(1 + \frac{1}{2})^2}{(1- \frac{1}{2})} = 9.$$ Trust region subproblems with quadratic models ---------------------------------------------- In this subsection, we consider the case when the surrogate models used in TR methods are quadratic, defined as follows: $$\begin{aligned} \label{TR:main-prob-quadratic} \min \, \{x^\top Q x + c^\top x \; | \; A x \le b, \; \|x\| \le 1, x \in \mathbb{R}^n \}.\end{aligned}$$ As in the previous subsection, we study the relations between this and two other problems: $$\begin{aligned} \tag{$Q^{-}_{\varepsilon}$} \min \, \{u^{\top} PQP^\top u + (Pc)^\top u\; | \; AP^{\top} u \le b, \; \|u\| \le 1-\varepsilon, u \in \mathbb{R}^d \} \end{aligned}$$ and $$\begin{aligned} \tag{$Q^{+}_{\varepsilon}$} \min \, \{u^{\top} PQP^\top u + (Pc)^\top u \; | \; AP^{\top} u \le b + \varepsilon, \; \|u\| \le 1 + \varepsilon, u \in \mathbb{R}^d \}. \end{aligned}$$ We will just state the following feasibility result, as the proof is very similar to that of Thm. \[thm:feasibility1\]. \[thm:feasibility2\] Let $P \in \mathbb{R}^{d \times n}$ be a random matrix in which each entry is an i.i.d. $\mathcal{N}(0, \frac{1}{\sqrt{n}})$ random variable. Let $\delta \in (0,1)$. Assume further that $$n \ge \frac{(d+1) \log (\frac{2d}{\delta})} {\mathcal{C}_1 \varepsilon^2},$$ for some universal constant $\mathcal{C}_1> \frac{1}{4}$. Then with probability at least $1- \delta$, for any feasible solution $u$ of the projected problem ($Q^{-}_{\varepsilon}$), $P^\top u$ is also feasible for the original problem (\[TR:main-prob-quadratic\]). Let $u_{\varepsilon}^{-}$ and $u_{\varepsilon}^{+}$ be optimal solutions for these two problems, respectively. Denote by $x_{\varepsilon}^{-} = P^\top u_\varepsilon^{-}$ and $x_{\varepsilon}^{+} = P^\top u_\varepsilon^{+}$.
Let $x^*$ be an optimal solution for the original problem (\[TR:main-prob-quadratic\]). We will bound $x^{*\top} Q x^*+ c^\top x^*$ between $x^{-\top}_\varepsilon Q x^-_\varepsilon + c^\top x^{-}_\varepsilon$ and $x^{+\top}_\varepsilon Q x^+_\varepsilon + c^\top x^{+}_\varepsilon$, two values that are expected to be approximately close to each other. Let $P \in \mathbb{R}^{d \times n}$ be a random matrix in which each entry is an i.i.d. $\mathcal{N}(0, \frac{1}{\sqrt{n}})$ random variable. Let $\delta \in (0,1)$. Let $x^*$ be an optimal solution for the original problem (\[TR:main-prob-quadratic\]). Then there are universal constants $\mathcal{C}_0 > 1$ and $\mathcal{C}_1 > \frac{1}{4}$ such that if the two following conditions $$d \ge \frac{\log (m/\delta)}{\mathcal{C}_0 {\varepsilon}^2}, \quad \mbox{and } \quad n \ge \frac{(d+1) \log (\frac{2d}{\delta})} {\mathcal{C}_1 \varepsilon^2},$$ are satisfied, we will have:\ (i) With probability at least $1 - \delta $, the solution $x_{\varepsilon}^{-}$ is feasible for the original problem (\[TR:main-prob-quadratic\]).\ (ii) With probability at least $1 - \delta$, we have: $$x_{\varepsilon}^{-\top} Q x_{\varepsilon}^{-} + c^\top x_{\varepsilon}^{-} \ge x^{*\top} Q x^{*}+c^\top x^{*} \ge x_{\varepsilon}^{+\top} Q x_{\varepsilon}^{+} + c^\top x_{\varepsilon}^{+} - 3\varepsilon \|Q\|_* - \varepsilon \|c\|.$$ The constants $\mathcal{C}_0$ and $\mathcal{C}_1$ are chosen in the same way as before.\ (i) From the previous theorem, with probability at least $1 - \delta $, for any feasible point $u$ of the projected problem ($Q^{-}_{\varepsilon}$), $P^\top u$ is also feasible for the original problem (\[TR:main-prob-quadratic\]). Therefore, it must hold also for $x_{\varepsilon}^{-}$. \(ii) From part (i), with probability at least $1- \delta$, $x_{\varepsilon}^{-}$ is feasible for the original problem (\[TR:main-prob-quadratic\]).
Therefore, we have $$\begin{aligned} x_{\varepsilon}^{-\top} Q x_{\varepsilon}^{-} + c^\top x_{\varepsilon}^{-}\ge x^{*\top} Q x^{*} +c^\top x^{*}, \end{aligned}$$ with probability at least $1- \delta$. Moreover, due to Lemma \[jll-approx\], with probability at least $1 - 8(k+1)e^{-\mathcal{C}_0\varepsilon^2 d}$, where $k$ is the rank of $Q$, we have $$\begin{aligned} x^{*\top} Q x^{*} \ge x^{*\top} P^\top PQ P^\top P x^{*} - 3 \varepsilon \|x^*\|^2\, \|Q\|_* \ge x^{*\top} P^\top PQ P^\top P x^{*} - 3 \varepsilon \|Q\|_*, \end{aligned}$$ and $$\begin{aligned} c^{\top} x^* \ge c^{\top} P^{\top} P x^* - \varepsilon \|c\|\, \|x^*\| \ge c^{\top} P^{\top} P x^* - \varepsilon \|c\|, \end{aligned}$$ since $\|x^*\| \le 1$. Hence $$\begin{aligned} x^{*\top} Q x^{*} + c^{\top} x^* \ge x^{*\top} P^\top PQ P^\top P x^{*} + c^{\top} P^{\top} P x^* - \varepsilon \|c\| - 3 \varepsilon \|Q\|_*. \end{aligned}$$ On the other hand, let $\hat{u} = Px^*$; due to Lemma \[jll-approx\], we have $$AP^\top \hat{u} = AP^\top P x^* \le Ax^*+ \varepsilon \|x^*\| \begin{bmatrix}1 \\ \ldots \\ 1\end{bmatrix} \le Ax^* + \varepsilon \begin{bmatrix}1 \\ \ldots \\ 1\end{bmatrix} \le b + {\varepsilon},$$ with probability at least $1 - 4me^{-\mathcal{C}_0\varepsilon^2 d}$, and $$\|\hat{u}\| = \|Px^*\| \le (1+ {\varepsilon}) \|x^*\| \le (1+ {\varepsilon}),$$ with probability at least $1 - 2e^{-\mathcal{C}_0\varepsilon^2 d}$ (by Lemma \[jll-approx\]). Therefore, $\hat{u}$ is a feasible solution for the problem ($Q^{+}_{\varepsilon}$) with probability at least $1 - (4m+2)e^{-\mathcal{C}_0\varepsilon^2 d}$.
Due to the optimality of $u^+_{\varepsilon}$ for the problem ($Q^{+}_{\varepsilon}$), it follows that $$\begin{aligned} x^{*\top} Q x^{*} + c^{\top} x^* &\ge x^{*\top} P^\top PQ P^\top P x^{*} + c^{\top} P^{\top} P x^* - \varepsilon \|c\| - 3 \varepsilon \|Q\|_*,\\ &= \hat{u}^{\top} PQ P^\top \hat{u} + c^{\top} P^{\top} \hat{u} - \varepsilon \|c\|- 3 \varepsilon \|Q\|_* \\ &\ge u_\varepsilon^{+\top} PQ P^\top u_\varepsilon^{+} +(Pc)^\top u_\varepsilon^{+} - \varepsilon \|c\| - 3 \varepsilon \|Q\|_* \\ & = x_{\varepsilon}^{+\top} Q x_{\varepsilon}^{+} + c^\top x_{\varepsilon}^{+} - \varepsilon \|c\|- 3 \varepsilon \|Q\|_*, \end{aligned}$$ with probability at least $1 - (4m+6)e^{-\mathcal{C}_0\varepsilon^2 d}$, which is at least $1- \delta$ for the chosen universal constant $\mathcal{C}_0$. Moreover, $$\begin{aligned} c^{\top} x^* \ge c^{\top} P^{\top} P x^* - \varepsilon \|c\| = c^{\top} P^{\top} \hat{u} - \varepsilon \|c\| \ge c^{\top} P^{\top} u_{\varepsilon}^{+} - \varepsilon \|c\| = c^{\top} x_{\varepsilon}^{+} - \varepsilon \|c\|. \end{aligned}$$ Hence $$x^{*\top} Q x^{*}+c^\top x^{*} \ge x_{\varepsilon}^{+\top} Q x_{\varepsilon}^{+} + c^\top x_{\varepsilon}^{+} - 3\varepsilon \|Q\|_* - \varepsilon \|c\|,$$ which concludes the proof. The above result implies that the value of $x^{*\top} Q x^{*}+ c^\top x^* $ lies between $x_{\varepsilon}^{-\top} Q x_{\varepsilon}^{-} +c^\top x_{\varepsilon}^{-} $ and $x_{\varepsilon}^{+\top} Q x_{\varepsilon}^{+} +c^\top x_{\varepsilon}^{+} $. It remains to prove that these two values are not so far from each other. For that, we again use the fullness measure. We have the following result: \[thm:sandwich2\] Let $0 < \varepsilon < 0.1$.
Then with probability at least $1 - 4\delta$, we have $$x_{\varepsilon}^{+\top} Q x_{\varepsilon}^{+}+ c^\top x_{\varepsilon}^{+} \le x_{\varepsilon}^{-\top} Q x_{\varepsilon}^{-}+c^\top x_{\varepsilon}^{-} < x_{\varepsilon}^{+\top} Q x_{\varepsilon}^{+} + c^\top x_{\varepsilon}^{+} + \frac{{\varepsilon}}{\mbox{\sc full}(S^*)}(36+18\|c\|) .$$ Let $B(u_0, r_0)$ be a closed ball of maximum radius contained in $S^+_{\varepsilon}$. We define $\hat{u} : = (1-\lambda) u_{\varepsilon}^{+} + \lambda u_0$ for some $\lambda \in (0,1)$ specified later. We want to find a “small" $\lambda$ such that $\hat{u}$ is feasible for $Q_{\varepsilon}^{-}$ while its corresponding objective value is still close to $x_{\varepsilon}^{+\top} Q x_{\varepsilon}^{+} + c^\top x_{\varepsilon}^{+} $. As in the proof of Th. \[thm:sandwich1\], when we choose $\lambda = 2\frac{{\varepsilon}}{r_0 }$, $\hat{u}$ is feasible for the problem $Q^-_{\varepsilon}$ with probability at least $1 - \delta$. Therefore, $u^{-\top}_{\varepsilon}PQP^\top u^{-}_{\varepsilon}+ (Pc)^\top u^{-}_{\varepsilon}$ is smaller than or equal to $$\begin{aligned} & \mbox{ } \hat{u}^{\top} PQP^\top \hat{u} + (Pc)^\top \hat{u} \\ &= \big(u^+_{\varepsilon}+ \lambda (u_0 - u^+_{\varepsilon}) \big)^{\top} PQP^\top \big(u^+_{\varepsilon}+ \lambda (u_0 - u^+_{\varepsilon}) \big) + (Pc)^\top \hat{u}\\ & = u^{+\top}_{\varepsilon}PQP^\top u^{+}_{\varepsilon}+ \lambda u^{+\top}_{\varepsilon}PQP^\top \big(u_0 - u^+_{\varepsilon}\big) + \lambda (u_0 - u^+_{\varepsilon})^{\top} PQP^\top u^+_{\varepsilon}\\ & \hspace*{1em} + \lambda^2 (u_0 - u^+_{\varepsilon})^{\top} PQP^\top (u_0 - u^+_{\varepsilon}) + (Pc)^\top \hat{u}.
\end{aligned}$$ However, from Lemma \[jll-approx\] and the Cauchy-Schwarz inequality, we have $$\begin{aligned} u^{+\top}_{\varepsilon}PQP^\top \big(u_0 - u^+_{\varepsilon}\big) & \le \|P^\top u^{+}_{\varepsilon}\| \cdot \| Q \|_2 \cdot \|P^\top (u_0 - u^+_{\varepsilon}\big) \| \\ & \le (1 +\varepsilon)^2 \|u^{+}_{\varepsilon}\| \cdot \| Q \|_2 \cdot \| (u_0 - u^+_{\varepsilon}\big) \| \\ & \le 2 (1+\varepsilon)^4 \; \|Q\|_2 \\ & \quad \mbox{(since $\|u^+_\varepsilon\|$ and $\|u_0\|$ are at most $1 + \varepsilon$).} \end{aligned}$$ Bounding the other terms similarly, we obtain $$\begin{aligned} \mbox{ } \hat{u}^{\top} PQP^\top \hat{u} \le u^{+\top}_{\varepsilon}PQP^\top u^{+}_{\varepsilon}+ (4 \lambda + 4 \lambda^2) (1 + \varepsilon)^4 \; \|Q\|_2. \end{aligned}$$ Since ${\varepsilon}< 0.1$, we have $(1+\varepsilon)^4 < 2$ and we can assume that $\lambda < 1$. Then we have $$\begin{aligned} \hat{u}^{\top} PQP^\top \hat{u} & < u^{+\top}_{\varepsilon}PQP^\top u^{+}_{\varepsilon}+ 16 \lambda \|Q\|_2 \\ & = u^{+\top}_{\varepsilon}PQP^\top u^{+}_{\varepsilon}+ \frac{32 \varepsilon}{r_0 } \qquad \mbox{(since $\|Q\|_2 = 1$)}\\ & \le u^{+\top}_{\varepsilon}PQP^\top u^{+}_{\varepsilon}+ \frac{32 \varepsilon}{(1-{\varepsilon}) \mbox{\sc full}(S^*) } \qquad \mbox{(due to Lemma \ref{fullness-lem})}\\ & < u^{+\top}_{\varepsilon}PQP^\top u^{+}_{\varepsilon}+ \frac{36 \varepsilon}{\mbox{\sc full}(S^*) } \qquad \mbox{(since $\varepsilon \le 0.1$)}, \end{aligned}$$ with probability at least $1 - 2 \delta$. Furthermore, similarly to the proof of Th. \[thm:sandwich1\], we have $$c^{\top} P^{\top} \hat{u} = (1-\lambda) (Pc)^\top u_{\varepsilon}^{+} + \lambda (Pc)^\top u_0 \le c^{\top} P^{\top} u_{\varepsilon}^{+} + \frac{4(1+{\varepsilon})^2{\varepsilon}}{(1- {\varepsilon}) r^* }\|c\| \le c^{\top} P^{\top} u_{\varepsilon}^{+} + \frac{18{\varepsilon}}{\mbox{\sc full}(S^*)} \|c\|,$$ with probability at least $1 - 2 \delta$.
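The quadratic estimates above ultimately rest on part (iii) of Lemma \[jll-approx\]. The following sketch checks the nuclear-norm bound on a random low-rank instance; the $\mathcal{N}(0, \frac{1}{d})$ entry variance and all dimensions are our illustrative assumptions about the projection scaling, not quantities fixed by the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, k, eps = 2000, 400, 3, 0.2

# JL-type scaling (an illustrative assumption): i.i.d. N(0, 1/d) entries.
P = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, n))

# A rank-k matrix Q = U diag(s) V^T with orthonormal U and V.
U, _ = np.linalg.qr(rng.normal(size=(n, k)))
V, _ = np.linalg.qr(rng.normal(size=(n, k)))
s = np.array([3.0, 2.0, 1.0])
Q = U @ np.diag(s) @ V.T
nuclear = s.sum()  # nuclear norm ||Q||_* for this construction

# Part (iii): |x^T P^T P Q P^T P y - x^T Q y| <= 3 eps ||x|| ||y|| ||Q||_*.
x, y = rng.normal(size=n), rng.normal(size=n)
lhs = abs(x @ P.T @ P @ Q @ P.T @ P @ y - x @ Q @ y)
rhs = 3 * eps * np.linalg.norm(x) * np.linalg.norm(y) * nuclear
assert lhs <= rhs  # the bound holds, typically with a lot of room to spare
```

The observed error is usually far below the bound: the $3\varepsilon\|x\|\|y\|\|Q\|_*$ estimate is a worst case over the $k$ singular directions.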
Conclusion {#conclusion .unnumbered} ========== In this paper, we have shown theoretically that random projections can be used in combination with the trust region method to study high-dimensional derivative-free optimization. We have proved that the solutions obtained by solving low-dimensional projected versions of trust region subproblems are good approximations of the true ones. We hope this provides useful insight into the solution of very high-dimensional derivative-free problems. [[As a last remark, we note in passing that the results of this paper extend to the case of arbitrary (non-unit and non-zero) trust region radii, through scaling. This observation also suggests that the LP and QP trust region problem formulations analyzed in this paper can inner- and outer-approximate arbitrary bounded LPs and QPs.]{}]{}
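The scaling remark above is elementary to verify: substituting $x = r\,y$ maps the radius-$r$ trust region onto the unit ball while rescaling the data as $Q \mapsto r^2 Q$, $c \mapsto rc$, $A \mapsto rA$. A minimal sketch (random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, r = 8, 4, 2.5  # r: an arbitrary trust-region radius (illustrative)

Q, c = rng.normal(size=(n, n)), rng.normal(size=n)
A, b = rng.normal(size=(m, n)), rng.normal(size=m)

# The substitution x = r * y maps {Ax <= b, ||x|| <= r} onto
# {(rA) y <= b, ||y|| <= 1} and rescales the model accordingly.
y = rng.normal(size=n)
y /= 2.0 * np.linalg.norm(y)  # some point with ||y|| <= 1
x = r * y

assert np.isclose(x @ Q @ x + c @ x, y @ (r**2 * Q) @ y + (r * c) @ y)
assert np.allclose(A @ x, (r * A) @ y)
assert np.isclose(np.linalg.norm(x), r * np.linalg.norm(y))
```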
--- abstract: 'The problem of decentralized sequential change detection is considered, where an abrupt change occurs in an area monitored by a number of sensors; the sensors transmit their data to a fusion center, subject to bandwidth and energy constraints, and the fusion center is responsible for detecting the change as soon as possible. A novel sequential detection rule is proposed that requires communication from the sensors at random times and transmission of only low-bit messages, on which the fusion center runs in parallel a CUSUM test. The second-order asymptotic optimality of the proposed scheme is established both in discrete and in continuous time. Specifically, it is shown that the inflicted performance loss (with respect to the optimal detection rule that uses the complete sensor observations) is asymptotically bounded as the rate of false alarms goes to 0, for any fixed rate of communication. When the rate of communication from the sensors is asymptotically low, the proposed scheme remains first-order asymptotically optimal. Finally, simulation experiments illustrate its efficiency and its superiority over a decentralized detection rule that relies on communication at deterministic times.' address: - 'Department of Statistics, University of Illinois, Urbana-Champaign, IL 61820, USA. ' - 'Department of Electrical and Computer Engineering, University of Patras, 26500 Rion, Greece.\' author: - - title: Bandwidth and Energy Efficient Decentralized Sequential Change Detection --- Introduction ============ Suppose that an area is being monitored by a number of sensors which transmit their observations to a central location, that we will call fusion center. At some unknown time, an abrupt disorder occurs, such as an unexpected intrusion, and changes the dynamics of the observed processes in all sensors simultaneously. The goal is to raise an alarm at the fusion center as soon as possible after the occurrence of the change. 
When the sensors transmit their complete observations to the fusion center, this is the classical problem of sequential change detection, for exhaustive reviews on which we refer to [@niki], [@poor], [@lairoyal], [@shir], [@polu]. However, classical detection rules typically are not applicable in modern application areas, such as mobile and wireless communications and distributed surveillance systems. In such systems, the sensors are typically low-power devices whose links with the fusion center are characterized by limited communication bandwidth [@vij],[@veera]. Thus, in order to preserve the robustness of the network, it is necessary to limit the overall communication load and, in particular, the transmission activity of each sensor. This primarily implies a *quantization* constraint, i.e., each sensor should transmit a small number of bits each time it communicates with the fusion center, but also a *rate* constraint, i.e., each sensor should communicate with the fusion center at a lower rate than its sampling rate. As a result, before constructing a sequential detection rule at the fusion center, the designer must first decide what kind of information should be transmitted from the sensors, taking into account the above communication constraints. In what follows, we will call detection rules that respect such constraints *decentralized*, in contrast to the *centralized* ones that require knowledge of the full sensor observations. Most papers in the decentralized literature (see, e.g., [@crow], [@veer], [@veera], [@tart]) assume that each sensor transmits a quantized version of *every* observation it takes, i.e., the communication rate is equal to the sampling rate. For a discussion on one-shot schemes, where each sensor transmits to the fusion center a single bit *at most once*, we refer to [@moustdec]. 
A decentralized detection rule which enjoys an asymptotic optimality property was proposed by Mei [@mei]; however, the performance of this scheme in practice is often worse than that of asymptotically suboptimal detection rules. Thus, it has been an open problem to find an asymptotically optimal decentralized detection rule that is also efficient in practice. The main contribution of this work is that we propose such a rule. Specifically, we suggest that each sensor communicates with the fusion center at stopping times of its local filtration; at every communication, it transmits a *low-bit* message which “summarizes” the evolution of its local sufficient statistic since the previous communication; the fusion center, in parallel, runs a CUSUM test on the transmitted messages in order to detect the change. For similar communication schemes in the context of decentralized sequential hypothesis testing we refer to [@fel] and [@yil]. The design and analysis of the proposed scheme, which we call D-CUSUM, are different in discrete and continuous time. However, in both cases we establish a *second-order* asymptotic optimality property, which is stronger than the first-order asymptotic optimality of the detection rule in [@mei]. In particular, we show that the performance loss of D-CUSUM with respect to the optimal centralized CUSUM remains bounded as the period of false alarms goes to infinity. Moreover, we show that D-CUSUM remains first-order asymptotically optimal even when it induces an asymptotically low communication rate and there is an asymptotically large number of sensors. Simulation experiments suggest that these strong theoretical properties are also accompanied by very good performance in practice and that D-CUSUM is much more efficient than a similar, CUSUM-based decentralized detection rule that relies on communication from the sensors at deterministic times.
In what follows, in Section 2, we formulate the problem of (decentralized) sequential change detection and describe the main decentralized schemes in the literature. In Section 3, we define and analyze the proposed scheme both in continuous and in discrete time. In Section 4, we summarize and discuss an extension to the case of correlated sensors. The proofs of all results, as well as some supporting lemmas, are presented in Appendices A-E. Sequential Change Detection =========================== Let $\{(\xi_{t}:=\xi_{t}^{1},\ldots, \xi_{t}^{K})\}$ be a $K$-dimensional stochastic process, where $\xi_{0}^{k}:=0$ and $\xi^{k}$ is the observed process at sensor $k$, $1 \leq k \leq K$. We denote by $\{{ {{\mathscr{F}}}_{t}}^{k}\}$ the local filtration at sensor $k$ and by $\{{ {{\mathscr{F}}}_{t}}\}$ the global filtration, i.e., ${ {{\mathscr{F}}}_{t}}^{k}:= \sigma(\xi_{s}^k, \, 0 \leq s \leq t)$ and ${\mathscr{F}}_{t}:= \vee_{k} {\mathscr{F}}_{t}^{k}$. Time may be either discrete $(t \in \mathbb{N})$ or continuous $(t \in [0, \infty))$ and in the latter case all filtrations are considered to be right-continuous. We assume that at some unknown, deterministic time $\tau \geq 0$, the distribution of $\xi$, which we denote by ${\mathsf{P}}_\tau$, changes from ${\mathsf{P}}_{\infty}$ to ${\mathsf{P}}_{0}$, where ${\mathsf{P}}_{0}$ and ${\mathsf{P}}_{\infty}$ are two completely specified, locally equivalent probability measures on the canonical space of $\xi$.
In other words, ${\mathsf{P}}_\tau$ coincides with ${\mathsf{P}}_{\infty}$ when both measures are restricted to ${ {{\mathscr{F}}}_{t}}$ and $t \leq \tau$, whereas for $t> \tau$ we can define the following log-likelihood ratio process $$u_{t}-u_{\tau}:=\log \frac{\text{d} {\mathsf{P}}_{\tau}}{\text{d} {\mathsf{P}}_{\infty}} \Big|_{{ {{\mathscr{F}}}_{t}}} , \quad t \geq \tau; \quad u_{0}:=0.$$ The centralized setup --------------------- In the centralized setup, where the fusion center has access to all sensor observations, the problem is to find an $\{{ {{\mathscr{F}}}_{t}}\}$-stopping time ${\mathcal{T}}$ that has small detection delay and rare false alarms, i.e., ${\mathcal{T}}$ should take large values under ${\mathsf{P}}_{\infty}$ and ${\mathcal{T}}-\tau$ small values under ${\mathsf{P}}_{\tau}$. There are different approaches in how to quantify detection delay and false alarms, such as the Bayesian formulation due to Shiryaev [@shiry] (see also [@bei2], [@pes], [@gap], [@savas], [@sezer]) or the minimax formulation due to Pollak [@poll] (see also [@polu], [@tpp]). In this work, we focus on the formulation suggested by Lorden [@lord], where the performance of a detection rule ${\mathcal{T}}$ is measured by its worst-case (with respect to $\tau$) conditional expected delay given the worst possible history of observations up to $\tau$, $$\label{lord} {\mathcal{J}}_{\text{L}}[{\mathcal{T}}] = \sup_{\tau \geq 0} \, \text{ess\,sup} \; {\mathsf{E}}_{\tau} [ ({\mathcal{T}}-\tau)^{+} | {\mathscr{F}}_{\tau} ],$$ and an optimal detection rule is a solution to the following optimization problem $$\label{lord_crit} \inf_{{\mathcal{T}}} {\mathcal{J}}_{\text{L}}[{\mathcal{T}}] ~ \text{when} ~ {\mathsf{E}}_{\infty} [{\mathcal{T}}] \geq \gamma ,$$ where $\gamma>0$. 
In other words, the goal in this approach is to minimize the detection delay under the worst-case scenario with respect to both the changepoint and the history of observations before the change, while controlling the period of false alarms above a desired level, $\gamma$. It is well known (see [@moust], [@moustext]) that when $\{u_{t}\}_{t \in \mathbb{N}}$ is a random walk, the solution to this problem is given by Page’s [@page] Cumulative Sums (CUSUM) test, $$\label{cusum} { {\mathcal{S}}}:= \inf \{ t \geq 0: y_{t} \geq \nu \} , ~ \text{where} \quad y_{t}:= u_{t} - \inf_{0 \leq s < t} u_{s},$$ and $\nu$ is defined so that the false alarm constraint in (\[lord\_crit\]) be satisfied with equality, i.e., ${\mathsf{E}}_{\infty} [{ {\mathcal{S}}}]=\gamma$. This exact (i.e., non-asymptotic) optimality of the CUSUM test can be extended to a much richer class of dynamics if we adopt an idea of Liptser and Shiryaev [@lip] and measure detection delay and period of false alarms not in terms of actual time, but in terms of Kullback-Leibler divergence. Indeed, working similarly to [@moustito], we replace the performance measure ${\mathcal{J}}_{\text{L}}$ by $$\label{mod} {\mathcal{J}}[{\mathcal{T}}] := \sup_{\tau \geq 0} \, \text{ess\,sup} \; {\mathsf{E}}_{\tau} [ ( u_{{\mathcal{T}}}- u _{\tau} ){\mathbbm{1}_{\{{\mathcal{T}}>\tau\}}} | {\mathscr{F}}_{\tau} ]$$ and define an optimal detection rule as a solution to $$\label{mod_crit} \inf_{{\mathcal{T}}} {\mathcal{J}}[{\mathcal{T}}] ~ \text{when} ~ {\mathsf{E}}_{\infty} [-u_{{\mathcal{T}}}] \geq \gamma,$$ a problem that is equivalent to (\[lord\_crit\]) when $\{u_{t}\}$ is a random walk.
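In discrete time, the statistic in (\[cusum\]) admits the recursion $y_{n}=(y_{n-1})^{+}+x_{n}$, where $x_{n}:=u_{n}-u_{n-1}$ is the log-likelihood ratio increment. The following is a minimal sketch of this recursion; the scenario (i.i.d. unit-variance Gaussian observations whose mean shifts from $0$ to $1$) and all parameter values are purely illustrative, not taken from the paper.

```python
import random

def cusum_alarm(llr_increments, nu):
    """Run Page's CUSUM recursion y_n = (y_{n-1})^+ + x_n and
    return the first time n with y_n >= nu (None if no alarm)."""
    y = 0.0
    for n, x in enumerate(llr_increments, start=1):
        y = max(y, 0.0) + x
        if y >= nu:
            return n
    return None

# Toy example (illustrative): mean shift 0 -> 1 in N(mean, 1) data.
# The log-likelihood ratio increment of an observation xi is xi - 1/2.
random.seed(0)
tau = 100                                  # hypothetical changepoint
obs = [random.gauss(0, 1) for _ in range(tau)] + \
      [random.gauss(1, 1) for _ in range(400)]
alarm = cusum_alarm([xi - 0.5 for xi in obs], nu=10.0)
```

With deterministic increments the recursion is easy to check by hand: three unit increments reach a threshold of $3$ at $n=3$.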
However, it has been shown in [@moustito], [@chro] that the CUSUM test, with threshold $\nu$ chosen so that ${\mathsf{E}}_{\infty} [-u_{{ {\mathcal{S}}}}] = \gamma$, also solves problem (\[mod\_crit\]) whenever $\{u_{t}\}$ has continuous paths and $$\label{full} \lim_{t\rightarrow \infty} \langle u \rangle_{t}=\infty \quad {\mathsf{P}}_{0}, {\mathsf{P}}_{\infty}-\text{a.s.},$$ where $\langle u \rangle_{t}$ is the quadratic variation of $u_t$. The latter optimality result implies that CUSUM solves Lorden’s original problem (\[lord\_crit\]) whenever $\{u_{t}\}$ has continuous paths and $\langle u\rangle_{t}$ is proportional to $t$. This is the case, for example, when each $\xi^{k}$ is a fractional Brownian motion (fBm) with Hurst index $H$ before the change and adopts a polynomial drift term with exponent $H+1/2$ after the change [@chro]. In the special case that $H=1/2$, this implies the well-known optimality of CUSUM for detecting a constant drift in a Brownian motion, established by Shiryaev [@shircus] and Beibel [@bei]. The decentralized setup ----------------------- Centralized (classical) detection rules such as the CUSUM test cannot be applied in a decentralized setup, where communication constraints must be taken into account. In this context, before defining a detection rule at the fusion center, we must first specify a *communication scheme*, which will determine the information that will be transmitted from the sensors to the fusion center.
Therefore, we define a *decentralized* sequential detection rule as a pair $(\{\tilde{{ {{\mathscr{F}}}_{t}}}\}, {\mathcal{T}})$, where ${\mathcal{T}}$ is an $\{\tilde{{ {{\mathscr{F}}}_{t}}}\}$-stopping time and $\{\tilde{{ {{\mathscr{F}}}_{t}}}\}$ is a filtration of the form $$\label{flow} \tilde{{ {{\mathscr{F}}}_{t}}}:= \sigma (( \tau^k_{n},z_{n}^k): \tau_{n}^k \leq t, k=1, \ldots, K),$$ where each $\{\tau_{n}^k\}_{n \in \mathbb{N}}$ is the sequence of communication times for sensor $k$ and $z_{n}^k$ is the message transmitted to the fusion center at time $\tau_{n}^{k}$. Each $\tau_{n}^k$ must be an $\{{ {{\mathscr{F}}}_{t}}^k\}$-stopping time and each $z_{n}^k$ an ${\mathscr{F}}^k_{\tau_{n}^k}$-measurable random variable that takes values in a *finite* set, so that a small number of bits is required for its transmission to the fusion center. Moreover, since many applications are characterized by limited storage capacity, we require additionally that each $z_{n}^{k}$ is measurable with respect to $\sigma(\xi_{s}^{k}, \; \tau_{n-1}^{k} \leq s \leq \tau_{n}^{k})$, the $\sigma$-algebra generated by the observations at sensor $k$ between its $(n-1)$th and $n$th transmissions. Note that this framework forbids communication between sensors or feedback from the fusion center to the sensors. Such possibilities impose a much heavier communication load on the network and raise questions regarding the design of the network architecture, which we do not consider here. For decentralized detection rules that require feedback we refer to [@veer]. Ideally, we would like to find the best possible decentralized detection rule, performing a joint optimization over the communication scheme at the sensors and the detection rule at the fusion center. Such an optimization problem is highly intractable, even if one makes a number of simplifying assumptions [@veer].
For this reason, we will use the centralized CUSUM as the ultimate benchmark and compare any decentralized detection rule against it. We can only hope that such a detection rule attains the optimal centralized performance asymptotically. Thus, if $(\{\tilde{{ {{\mathscr{F}}}_{t}}}\},{\mathcal{T}})$ is an arbitrary decentralized detection rule and ${\mathcal{S}}$ the centralized CUSUM test so that ${\mathsf{E}}_{\infty}[-u_{{\mathcal{T}}}]\ge\gamma= {\mathsf{E}}_{\infty}[-u_{{\mathcal{S}}}]$ for any $\gamma>0$, we will say that ${\mathcal{T}}$ is asymptotically optimal of *first* order if ${\mathcal{J}}[{\mathcal{T}}]/{\mathcal{J}}[{\mathcal{S}}] \rightarrow 1$ as $\gamma \rightarrow \infty$ and of *second* order if ${\mathcal{J}}[{\mathcal{T}}] - {\mathcal{J}}[{\mathcal{S}}] = {\mathcal{O}}(1)$ as $\gamma\to\infty$. Clearly, since ${\mathcal{J}}[{\mathcal{S}}] \rightarrow \infty$ as $\gamma \rightarrow \infty$, second-order asymptotic optimality is a stronger property, which guarantees that the inflicted performance loss remains bounded as the rate of false alarms goes to 0. As is common in the literature of decentralized sequential detection, we will assume that observations from different sensors are independent. Thus, if ${\mathsf{P}}_{\tau}^{k}$ is the distribution of $\xi^{k}$, then ${\mathsf{P}}_{\tau}= {\mathsf{P}}_{\tau}^{1} \times \ldots \times {\mathsf{P}}_{\tau}^{K}$ for any $\tau \in [0, \infty]$ and, consequently, $$u_{t}:= u_{t}^{1}+ \ldots + u_{t}^{K}, \quad \text{where} \quad u_{t}^k :=\log \frac{\text{d} {\mathsf{P}}_{0}^{k}}{\text{d} {\mathsf{P}}_{\infty}^{k}} \Big|_{{ {{\mathscr{F}}}_{t}}^k},$$ for any $t \geq 0$.
We also assume that the local Kullback-Leibler (KL) information numbers, $I_{0}^k:= {\mathsf{E}}_{0}[u_{1}^k]$ and $I_{\infty}^k:= -{\mathsf{E}}_{\infty}[u_{1}^k]$, are positive and finite for every $1\leq k \leq K$ and, furthermore, we define the corresponding average KL-numbers $$\label{klaver} \bar{I}_{0}:= \frac{1}{K}{\mathsf{E}}_{0}[u_{1}]= \frac{1}{K}\sum_{k=1}^{K} I_{0}^k \quad \text{and} \quad \bar{I}_{\infty}:= \frac{1}{K}{\mathsf{E}}_{\infty}[-u_{1}]=\frac{1}{K}\sum_{k=1}^{K} I_{\infty}^k.$$ In the remainder of this section, we describe the main decentralized sequential detection rules in the literature, embedding them in the above framework. We classify them into two categories; in the first, the sensors transmit systematically compressed versions of their data to the fusion center and the latter combines the received messages in order to detect the change; in the second, each sensor detects individually the change and the fusion center combines the local sensor decisions. ### Q-CUSUM Suppose that each sensor transmits to the fusion center quantized versions of its local log-likelihood ratio process at deterministic, equidistant times. Specifically, if for each sensor the communication period is $r$ and the available alphabet $\{1,\ldots, b\}$, where $b\geq 2$ is an integer, then $$\label{zdd} \tau_{n}^k=rn \; \text{and} \; z_n^k= \sum_{j=1}^{b} j \, {\mathbbm{1}_{\{\Gamma_{j-1}^k \leq u^k_{rn}- u^k_{r(n-1)} < {\Gamma}_{j}^k\}}},$$ where $-\infty=:\Gamma_{0}^k< {\Gamma}_{1}^k<\ldots < {\Gamma}_{b}^k:=\infty$ are fixed thresholds. This communication scheme induces synchronous communication to the fusion center, which receives at each time $\tau_{n}^k=rn$ the $K$-dimensional vector $(z_{n}^{1}, \ldots, z_{n}^{K})$. 
If we additionally assume that each $\{u_{t}^k\}$ has stationary and independent increments, then a natural detection rule at the fusion center is the corresponding CUSUM stopping time $$\label{eq:cus2} \hat{{\mathcal{S}}} := r \cdot \inf\{n \in \mathbb{N} : \hat{y}_{n} \geq \hat{\nu}\},$$ where the threshold $\hat{\nu}$ is chosen so that the false alarm constraint be satisfied with equality and the CUSUM statistic $\{\hat{y}_{n}\}$ admits the following recursion: $$\hat{y}_{n}:= (\hat{y}_{n-1})^{+} + \sum_{k=1}^{K} \sum_{j=1}^{b} \left[ {\mathbbm{1}_{\{z_{n}^k=j\}}} \log \frac{{\mathsf{P}}_{0}(z_{n}^k=j)}{{\mathsf{P}}_{\infty}(z_{n}^k=j)} \right] , ~ \hat{y}_{0}:=0. \label{eq:cus1}$$ Note that we have to multiply by $r$ in (\[eq:cus2\]) in order to return to physical time units, since the samples are acquired with a rate $1/r$. We call this detection scheme Q-CUSUM, where Q stands for the “quantization” employed by this method. This detection rule has been studied in [@crow], [@mei], [@tart] in the case that the sensors take i.i.d. observations and each sensor communicates with the fusion center at *every* observation time ($r=1$). It is easy to see that as $\gamma \rightarrow \infty$ $$\frac{{\mathcal{J}}[\hat{{\mathcal{S}}}]}{{\mathcal{J}}[{\mathcal{S}}]} \rightarrow \frac{r \bar{I}_{0}}{\hat{I}_{0}}, \quad \text{where} \; \hat{I}_{0}:= \frac{1}{K} \sum_{k=1}^{K} \sum_{j=1}^{b} {\mathsf{P}}_{0}(z_{n}^k=j)\log \frac{ {\mathsf{P}}_{0}(z_{n}^k=j)}{{\mathsf{P}}_{\infty}(z_{n}^k=j)} ,$$ and $\bar{I}_{0}$ is the average KL-number defined in (\[klaver\]), which implies that the asymptotic performance of $\hat{{\mathcal{S}}}$ is optimized by selecting thresholds $\{\Gamma_{j}^k\}$ in order to maximize $\hat{I}_{0}$. However, for any choice of thresholds, $\hat{{\mathcal{S}}}$ is not (even first-order) asymptotically optimal, since $r \bar{I}_{0}> \hat{I}_{0}$ (see, e.g., [@tsi2]).
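As a sketch of the Q-CUSUM mechanics, the sensor-side quantization in (\[zdd\]) and one step of the fused recursion (\[eq:cus1\]) might look as follows. The thresholds $\Gamma^k_j$ and the per-message log-likelihood ratios passed in are illustrative inputs of our choosing; in practice the latter would be computed from the known pre- and post-change distributions.

```python
import bisect

def quantize(increment, gammas):
    """Map a local log-likelihood increment to a message in {1,...,b}
    using fixed thresholds Gamma_1 < ... < Gamma_{b-1}, as in the
    quantization rule for z_n^k."""
    return bisect.bisect_right(gammas, increment) + 1

def qcusum_step(y_prev, messages, msg_llr):
    """One fusion-center update: y_n = (y_{n-1})^+ + sum_k llr(z_n^k),
    where msg_llr[k][j-1] = log P_0(z^k = j) / P_inf(z^k = j)
    (assumed precomputed off-line)."""
    return max(y_prev, 0.0) + sum(msg_llr[k][z - 1]
                                  for k, z in enumerate(messages))
```

For instance, with thresholds `[-1.0, 0.0, 1.0]` (alphabet size $b=4$), an increment of $0.7$ falls in the third cell.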
### Fusion of local CUSUM rules Suppose now that each sensor $k$ communicates at the following times $$\label{meitau} \tau_{n}^k= \inf\{ t \geq \tau_{n-1}^k: y^k_{t} \geq c^{k} \} ,$$ where $y^k_{t}:= u^k_{t} - \min_{0 \leq s \leq t} u^k_{s}$ is the local CUSUM statistic and $c^{k}$ is a fixed, positive threshold. In this way, the sensors communicate with the fusion center only to announce they have detected the change. This requires only *one-bit* transmissions, which means that even if the network supports the transmission of multi-bit messages, this flexibility is not going to be useful. There are many reasonable fusion center policies that can be based on (\[meitau\]). For example, the fusion center may raise an alarm the first time any sensor communicates, i.e., at $\min_{k} \tau_{1}^k$ (min-CUSUM). This is clearly a one-shot scheme, i.e., it requires transmission of at most one bit from each sensor, and as one would expect it is asymptotically suboptimal (see, e.g., [@tart] for the case of i.i.d. observations and [@moustdec] for the case of Brownian motions). An alternative possibility is to raise an alarm the first time that all sensors communicate *simultaneously*, i.e., at $${\mathcal{M}}:= \inf\{ t: y_{t}^k \geq c^{k} , \; \forall \; k=1, \ldots, K\}.$$ This rule was suggested (although in a different form) by Mei [@mei], where it was shown that when each $u^{k}$ is a random walk with a finite second moment, ${\mathcal{M}}$ is first-order asymptotically optimal (in particular, ${\mathcal{J}}[{\mathcal{M}}]- {\mathcal{J}}[{\mathcal{S}}]= {\mathcal{O}}(\sqrt{\log \gamma})$), as long as each $c^{k}$ is proportional to the local KL-number, $I_{0}^k$. Since the constant of proportionality is determined by $\gamma$, this means that for this decentralized scheme, contrary to Q-CUSUM, it is not possible to control how often each sensor communicates with the fusion center. 
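In discrete time, the rule ${\mathcal{M}}$ can be sketched by running the $K$ local CUSUM statistics $y^{k}_{t}=(y^{k}_{t-1}+x^{k}_{t})^{+}$ (the form implied by $y^k_{t}= u^k_{t} - \min_{0 \leq s \leq t} u^k_{s}$) and stopping the first time all of them are simultaneously at or above their thresholds. The function name, inputs, and thresholds below are illustrative.

```python
def mei_rule(increments_per_sensor, c):
    """Fusion of local CUSUMs: update y^k_t = (y^k_{t-1} + x^k_t)^+ at
    each sensor k and return the first time t at which y^k_t >= c[k]
    holds for all k simultaneously (None if this never happens)."""
    K = len(increments_per_sensor)
    y = [0.0] * K
    for t, xs in enumerate(zip(*increments_per_sensor), start=1):
        y = [max(yk + xk, 0.0) for yk, xk in zip(y, xs)]
        if all(y[k] >= c[k] for k in range(K)):
            return t
    return None
```

With two sensors receiving increments $1.0$ and $0.5$ per step and thresholds $(2.0, 1.0)$, both statistics first clear their thresholds at $t=2$.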
However, by construction, the induced communication activity will be intense only after the change has occurred; before the change, a sensor communicates only to report a “local false alarm”, which is a rare event. Finally, despite its asymptotic optimality, it is known (see, e.g., [@mei], [@tart]) that the non-asymptotic performance of ${\mathcal{M}}$ can be worse than that of Q-CUSUM when the latter requires transmission of one-bit messages ($b=2$) at every observation time ($r=1$), especially when $K$ is large. D-CUSUM ======= In this section, we define and analyze the decentralized detection structure that we propose. Thus, we suggest that each sensor $k$ communicates with the fusion center at the following sequence of $\{{ {{\mathscr{F}}}_{t}}^k\}$-stopping times $$\label{taucd} \tau_{n}^k := \inf \{ t > \tau_{n-1}^k: u^k_{t} - u^k_{\tau_{n-1}^k} \notin (-{\underline{\Delta}^k}, {\bar{\Delta}^k}) \} , ~ n \in \mathbb{N},$$ where $\tau_0^k:=0$ and ${\bar{\Delta}^k}, {\underline{\Delta}^k}$ are fixed, positive thresholds. For every $n \in \mathbb{N}$ and $t>0$ we set $$\tau^{k}(t):=\tau^{k}_{m_{t}^{k}}, \quad m_{t}^k:= \max\{n \in \mathbb{N}: \tau_{n}^k \leq t\}, \quad \ell_{n}^{k}:= u^k_{\tau_{n}^k}- u^k_{\tau_{n-1}^k},$$ i.e., $m_{t}^k$ is the number of messages that have been transmitted by sensor $k$ up to time $t$, $\tau^{k}(t)$ is the most recent communication time for sensor $k$ at time $t$ and $\ell_{n}^{k}$ is the accumulated log-likelihood ratio at sensor $k$ in the time-interval $[\tau_{n-1}^k, \tau^k_{\tau_{n}^k}]$. At time $\tau_n^k$, we suggest that sensor $k$ transmits to the fusion center the following message $$\label{zcdmany} z_n^k:=\left\{\begin{array}{cl} j , &\text{if} \quad {\bar{\epsilon}^k}_{j-1} \leq \ell_{n}^{k} -{\bar{\Delta}^k}< {\bar{\epsilon}^k}_{j}\\ -j , &\text{if} \quad -{\underline{\epsilon}^k}_{j} < \ell_{n}^{k} + {\underline{\Delta}^k}\leq -{\underline{\epsilon}^k}_{j-1} \end{array}\right. 
j=1,\ldots,d,$$ where ${\bar{\epsilon}^k}_{0}:={\underline{\epsilon}^k}_{0}:=0$, ${\bar{\epsilon}^k}_{d}:={\underline{\epsilon}^k}_{d}:= \infty$, $\{{\bar{\epsilon}^k}_{j}, {\underline{\epsilon}^k}_{j}\}_{1 \leq j \leq d-1}$ are fixed, positive thresholds and $d$ is a positive integer. We will also use the following notation $${\bar{\Delta}^k}_{j}:={\bar{\Delta}^k}+ {\bar{\epsilon}^k}_{j-1} , \quad {\underline{\Delta}^k}_{j}:={\underline{\Delta}^k}+ {\underline{\epsilon}^k}_{j-1}, \quad j=1,\ldots,d,$$ which allows us to rewrite (\[zcdmany\]) as follows $$z_n^k=\left\{\begin{array}{cl} j , &\text{if} \quad {\bar{\Delta}^k}_{j} \leq \ell_{n}^{k} < {\bar{\Delta}^k}_{j+1}\\ -j , &\text{if} \quad -{\underline{\Delta}^k}_{j+1} < \ell_{n}^{k} \leq -{\underline{\Delta}^k}_{j} \end{array}\right., \quad j=1,\ldots,d.$$ When $d=1$, $z_{n}^{k}$ is a one-bit message of the form $$\label{zcd} z_n^k:=\left\{\begin{array}{cl}1 , &\text{if}~ \ell^k_{n} \geq {\bar{\Delta}^k}\\ -1 , &\text{if}~ \ell^k_{n} \leq -{\underline{\Delta}^k}\end{array}\right.$$ that simply informs the fusion center whether $\ell^k_{n} \geq {\bar{\Delta}^k}$ or $\ell^k_{n}\leq -{\underline{\Delta}^k}$. When $d \geq 2$, $z_{n}^{k}$ requires the transmission of $\lceil \log_{2}(2d) \rceil=1+ \lceil \log_{2} d\rceil$ bits and the fusion center also obtains information regarding the size of the overshoot. The stopping times (\[taucd\]) and the messages (\[zcdmany\]) determine the flow of information (\[flow\]) at the fusion center.
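Sensor-side, the communication scheme (\[taucd\]) with one-bit messages (\[zcd\]) amounts to restarting a local accumulator after each transmission, since $\ell_{n}^{k}= u^k_{\tau_{n}^k}- u^k_{\tau_{n-1}^k}$. A minimal discrete-time sketch for the $d=1$ case (the function name and thresholds are ours):

```python
def sensor_stream(increments, lo, hi):
    """D-CUSUM sensor side, 1-bit case (d = 1): accumulate the local
    log-likelihood ratio and, each time the sum leaves (-lo, hi),
    transmit z = +1 or z = -1 and restart the accumulation.
    Returns the list of (communication time, message) pairs."""
    out, s = [], 0.0
    for t, x in enumerate(increments, start=1):
        s += x
        if s >= hi:
            out.append((t, +1))
            s = 0.0
        elif s <= -lo:
            out.append((t, -1))
            s = 0.0
    return out
```

With increments $(0.6, 0.6, -0.5, -0.9)$ and symmetric thresholds ${\bar{\Delta}}={\underline{\Delta}}=1$, the sensor transmits $+1$ at $t=2$ and $-1$ at $t=4$.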
Assuming that the fusion center uses this information and approximates each local log-likelihood ratio $\{u_{t}^{k}\}$ by some statistic $\{\tilde{u}_{t}^{k}\}$, we suggest the following detection rule $$\label{dcusum} {\tilde{{ {\mathcal{S}}}}}:= \inf \{ t \geq 0: \tilde{y}_{t} \geq \tilde{\nu} \}, ~\text{where}~ \tilde{y}_{t}:=\tilde{u}_{t} - \inf_{0 \leq s \leq t} \tilde{u}_{s},\; \tilde{u}_{t}:= \sum_{k=1}^{K} \tilde{u}_{t}^k$$ and threshold $\tilde{\nu}$ is defined so that ${\mathsf{E}}_{\infty}[-u_{{\tilde{{ {\mathcal{S}}}}}}]=\gamma$. The appropriate selection for $\tilde{u}_{t}^{k}$, as well as the design and analysis of the resulting detection rule, is different in discrete and continuous time and, for this reason, we will treat these two setups separately. We will see, however, that the proposed detection structure, that we will call D-CUSUM, can be designed in order to have strong asymptotic optimality properties in both cases. Continuous-time setup {#sec:D-CUSUM-cont} --------------------- Suppose that each $\{u^{k}_{t}\}$ is a continuous-time process with continuous paths so that condition (\[full\]) is satisfied, in which case we have the following closed-form expressions for ${\mathcal{J}}[S]$ and $\gamma$ in terms of threshold $\nu$ (see, e.g., [@moustito],[@chro]): $$\begin{aligned} \label{fapito} \begin{split} \gamma &= {\mathsf{E}}_{\infty}[-u _{{ {\mathcal{S}}}}]= {\mathsf{E}}_{\infty}[\langle u \rangle_{{ {\mathcal{S}}}}] = e^{\nu}-\nu-1 , \\ {\mathcal{J}}[{ {\mathcal{S}}}] &= {\mathsf{E}}_{0}[u _{{ {\mathcal{S}}}}]= {\mathsf{E}}_{0}[\langle u \rangle_{{ {\mathcal{S}}}}]= e^{-\nu}+\nu-1. \end{split}\end{aligned}$$ Then, each $\ell_{n}^{k}$ is exactly equal to either ${\bar{\Delta}^k}$ or $-{\underline{\Delta}^k}$ and, consequently, at $\tau_{n}^k$ sensor $k$ can transmit to the fusion center the *exact* value of $\ell_{n}^{k}$ by simply communicating a *one-bit* message of the form (\[zcd\]). 
As a result, the fusion center is able to recover the value of $u^k$ at any time $\tau_{n}^k$, since $u_{\tau_{n}^k}^k = \ell_{1}^{k}+ \ldots+\ell_{n}^{k}$, and a natural approximation for $u_t^k$ at some arbitrary time $t$ is the corresponding most recently reproduced value, i.e., $$\label{freecd} \tilde{u}_{t}^k := u^{k}_{\tau^{k}(t)} =\sum_{n=1}^{m_{t}^{k}} \ell_{n}^{k}.$$ The proposed scheme has a number of practical advantages. First of all, the fusion statistic $\{\tilde{y}_{t}\}$ is piecewise-constant and its value needs to be updated only at communication times, according to the following convenient formula: $$\tilde{y}_{\tau_{n}^{k}}=(\tilde{y}_{\tau_{n}^{k}\text{-}})^{+} + {\bar{\Delta}^k}{\mathbbm{1}_{\{z_n^k=1\}}}-{\underline{\Delta}^k}{\mathbbm{1}_{\{z_n^k=-1\}}}.$$ Compare this with the centralized, continuous-time CUSUM statistic, $\{y_{t}\}$, which does not in general admit such a recursion and whose calculation at the fusion center requires high-frequency transmission of “infinite-bit” messages from the sensors. Moreover, it is possible to control the communication rate of sensor $k$ by selecting appropriately ${\bar{\Delta}^k}$ and ${\underline{\Delta}^k}$. Since ${\mathsf{E}}_{i}[\tau_{n}^k -\tau_{n-1}^k]$, $i=0, \infty$ in general depend on $n$, these thresholds can be selected in order to attain target values for ${\mathsf{E}}_{0}[\ell^k_{n}]$ and ${\mathsf{E}}_{\infty}[-\ell^k_{n}]$, which do not depend on $n$ and are given by ${\mathsf{E}}_{0}[\ell^k_{n}]=s({\underline{\Delta}^k}, {\bar{\Delta}^k})$ and ${\mathsf{E}}_{\infty}[-\ell^k_{n}] =s({\bar{\Delta}^k},{\underline{\Delta}^k})$, where $$s(x,y) := \frac{-x(e^{y}-1)+ye^{y}(e^{x}-1)}{e^{x+y}-1}.$$ In this way, the specification of ${\bar{\Delta}^k}$ and ${\underline{\Delta}^k}$ simply requires the solution of a (non-linear) system of two equations. 
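The design step just described can be sketched numerically. Below, `s(x, y)` implements the displayed formula, and, as a simplified illustration of the two-equation system, a bisection solves the symmetric case ${\bar{\Delta}^k}={\underline{\Delta}^k}=\Delta$ for a target value of ${\mathsf{E}}_{0}[\ell^k_{n}]$, using the identity $s(x,x)=x\tanh(x/2)$ (which follows by direct simplification and is increasing in $x>0$).

```python
import math

def s(x, y):
    """Expected per-message log-likelihood accumulation in the
    continuous-path case: E_0[l] = s(lo, hi), E_inf[-l] = s(hi, lo)."""
    return (-x * (math.exp(y) - 1) + y * math.exp(y) * (math.exp(x) - 1)) \
        / (math.exp(x + y) - 1)

def solve_symmetric(target, a=1e-6, b=50.0, tol=1e-10):
    """Bisection for Delta with s(Delta, Delta) = target (a simplified,
    symmetric stand-in for the two-equation system in the text)."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if s(m, m) < target:
            a = m
        else:
            b = m
    return 0.5 * (a + b)
```

For example, since $s(2,2)=2\tanh(1)$, asking for that target recovers $\Delta=2$.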
From the previous discussion it should be clear that D-CUSUM is much preferable to the corresponding centralized CUSUM from a practical point of view. It turns out that it also has excellent performance characteristics, making any additional benefit of the optimal centralized CUSUM test negligible relative to its implementation cost. This becomes clear with the following theorem, which provides a non-asymptotic upper bound on the performance loss of the proposed detection structure. \[prop1\] For any $\gamma$ and $\{{\bar{\Delta}^k},{\underline{\Delta}^k}\}_{1 \leq k \leq K}$ we have $$\label{order2cd} {\mathcal{J}}[{\tilde{{ {\mathcal{S}}}}}] - {\mathcal{J}}[{ {\mathcal{S}}}] \leq 4 \, K \, {\Delta}_{\max} , \quad \text{where} \quad {\Delta}_{\max}:= \max_{1 \leq k \leq K} \max \{{\bar{\Delta}^k}, {\underline{\Delta}^k}\}.$$ The proof is presented in Appendix \[app:A\]. The bound provided in (\[order2cd\]) implies that for any fixed thresholds $\{{\bar{\Delta}^k}, {\underline{\Delta}^k}\}$ and any number of sensors $K$, ${\mathcal{J}}[{\tilde{{ {\mathcal{S}}}}}] - {\mathcal{J}}[{ {\mathcal{S}}}]={\mathcal{O}}(1)$ as $\gamma \rightarrow \infty$, i.e., ${\tilde{{ {\mathcal{S}}}}}$ is *second*-order asymptotically optimal. In the case of a large sensor-network ($K \rightarrow \infty$), this property is preserved only if we have an asymptotically high rate of communication, specifically if ${\Delta}_{\max} \rightarrow 0$ so that $K {\Delta}_{\max} ={\mathcal{O}}(1)$. However, since we want to avoid intense transmission activity, it is more interesting to see that ${\tilde{{ {\mathcal{S}}}}}$ remains *first*-order asymptotically optimal when $K \rightarrow \infty$ and ${\Delta}_{\max} \rightarrow \infty$ so that $K {\Delta}_{\max}=o(\log \gamma)$.
Indeed, from (\[fapito\]) and (\[order2cd\]) we have $$\frac{{\mathcal{J}}[{\tilde{{ {\mathcal{S}}}}}]}{{\mathcal{J}}[{ {\mathcal{S}}}]} = 1+ \frac{{\mathcal{J}}[{\tilde{{ {\mathcal{S}}}}}] - {\mathcal{J}}[{ {\mathcal{S}}}]}{{\mathcal{J}}[{ {\mathcal{S}}}]} \leq 1+ \frac{4 K {\Delta}_{\max}}{e^{-\nu}+\nu-1}$$ and our claim now also follows from (\[fapito\]), which implies that $\nu=\log \gamma+o(1)$. Discrete-time setup ------------------- Suppose now that each $\{u^{k}_{t}\}$ is a random walk, i.e., the increments $\{u^k_{t}-u_{t-1}^{k}\}_{t \in \mathbb{N}}$ are i.i.d. This implies that each $(\tau_{n}^{k}-\tau_{n-1}^{k}, z_{n}^{k},\ell_{n}^{k})_{n\in \mathbb{N}}$ is a sequence of independent triplets with the same distribution as $(\tau_{1}^{k}, z_{1}^{k},\ell_{1}^{k})$. As a result, thresholds ${\bar{\Delta}^k}$ and ${\underline{\Delta}^k}$ can now be selected in order to attain target values for ${\mathsf{E}}_{i}[\tau_{1}^{k}]$, $i=0,\infty$. However, the main difference with the continuous-time setup is that now each $\ell_{n}^{k}$ is no longer restricted to the binary set $\{{\bar{\Delta}^k}, -{\underline{\Delta}^k}\}$. Thus, it now makes sense to have larger than binary alphabets ($d>1$), in which case we also need to select thresholds $\{{\bar{\epsilon}^k}_{j}, {\underline{\epsilon}^k}_{j}\}_{1 \leq j \leq d-1}$ (recall that ${\bar{\epsilon}^k}_{0}={\underline{\epsilon}^k}_{0}:=0$, ${\bar{\epsilon}^k}_{d}={\underline{\epsilon}^k}_{d}:= \infty$). We suggest the following specification $$\begin{aligned} &{\mathsf{P}}_0(\ell^k_{1}-{\bar{\Delta}^k}\geq {\bar{\epsilon}^k}_j \, | \, \ell_{1}^{k} \geq {\bar{\Delta}^k})=1-\frac{j}{d} ={\mathsf{P}}_\infty(\ell^k_{1}+{\underline{\Delta}^k}\leq -{\underline{\epsilon}^k}_j \, | \,\ell_{1}^{k} \leq -{\underline{\Delta}^k}), \label{eq:levels}\end{aligned}$$ which guarantees that the overshoot $\ell_{1}^{k} - {\bar{\Delta}^k}$ (resp. 
$-(\ell_{1}^{k} + {\underline{\Delta}^k})$) is equally likely to lie in each interval $[{\bar{\epsilon}^k}_{j-1},{\bar{\epsilon}^k}_{j})$ (resp. $(-{\underline{\epsilon}^k}_{j}, -{\underline{\epsilon}^k}_{j-1}]$) given that $\ell_{1}^{k} \geq {\bar{\Delta}^k}$ (resp. $\ell_{1}^{k} \leq -{\underline{\Delta}^k}$), i.e., $$\begin{aligned} &{\mathsf{P}}_0(\ell^k_{1}-{\bar{\Delta}^k}\in [{\bar{\epsilon}^k}_{j-1}, {\bar{\epsilon}^k}_j) \, | \, \ell_{1}^{k} \geq {\bar{\Delta}^k})=\frac{1}{d} ={\mathsf{P}}_\infty(\ell^k_{1}+{\underline{\Delta}^k}\in (-{\underline{\epsilon}^k}_{j}, -{\underline{\epsilon}^k}_{j-1}] \, | \, \ell_{1}^{k} \leq -{\underline{\Delta}^k}),\end{aligned}$$ or, equivalently, ${\mathsf{P}}_0( z_{1}^{k}=j \, | \, z_{1}^{k}>0 )=1/d ={\mathsf{P}}_\infty( z_{1}^{k}=-j \, | \, z_{1}^{k}<0 )$, for every $1 \leq j \leq d$. Clearly, all these thresholds can be easily computed off-line, as their computation only requires the simulation of the pair $(\tau_{1}^{k}, \ell_{1}^{k})$ under both ${\mathsf{P}}_{0}$ and ${\mathsf{P}}_{\infty}$. Moreover, in what follows, we assume that $u_{1}^{k}$ is unbounded and absolutely continuous with a positive density. Then, ${\bar{\epsilon}^k}_{d-1}, {\underline{\epsilon}^k}_{d-1} \rightarrow \infty$ as $d \rightarrow \infty$, whereas $$\label{del} \epsilon^{k}:= \max_{1 \leq j \leq d-1} \, \{ {\bar{\epsilon}^k}_{j}-{\bar{\epsilon}^k}_{j-1} \, , \, {\underline{\epsilon}^k}_{j}-{\underline{\epsilon}^k}_{j-1} \} \rightarrow 0 \quad \text{as} \quad d \rightarrow \infty.$$ In order to establish a *second-order* asymptotic optimality property for $\tilde{{\mathcal{S}}}$, as in the continuous-time setup, we need a lower bound for the optimal centralized performance ${\mathcal{J}}[{\mathcal{S}}]$ up to a constant term as $\gamma \rightarrow \infty$. Moreover, in order to obtain the inflicted performance loss as $K \rightarrow \infty$, we need to characterize the growth of this constant term as $K \rightarrow \infty$.
This is done in the following lemma, under a second moment condition on each $u_{1}^{k}$. \[lem:6\] If ${\mathsf{E}}_{0}[(u_{1}^{k})^{2}]<\infty$ for every $1 \leq k \leq K$, then for any $\gamma$ we have $${\mathcal{J}}[{\mathcal{S}}]= {\mathsf{E}}_0[u_{{\mathcal{S}}}]\ge\log\gamma- {\mit \Theta}(K).$$ It is well known that the worst case for the optimal centralized CUSUM is when the change occurs at $\tau=0$, which implies the equality in the lemma. The proof of the inequality is presented in Appendix \[app:B\]. If each sensor $k$ transmitted the exact value of each $\ell_{n}^{k}$ at time $\tau_{n}^{k}$, as in the continuous-time setup, then we could approximate $u_{t}^{k}$ by (\[freecd\]) and we could proceed in the same way as in the proof of Theorem \[prop1\] to show that ${\mathcal{J}}[{\tilde{{ {\mathcal{S}}}}}] -{\mathcal{J}}[{ {\mathcal{S}}}] = {\mathcal{O}}(K {\Delta}_{\max})$. However, this is not possible in a discrete-time setup, since $\ell_{n}^{k}$ cannot be fully recovered at the fusion center when sensor $k$ transmits only a small number of bits at time $\tau_{n}^{k}$. Our main goal in the remainder of the paper is to show that it is actually possible to design D-CUSUM in discrete time so that it is second-order asymptotically optimal even if each sensor transmits a small number of bits (such as 2 or 3) in every communication. In order to do this, we approximate $u_{t}^{k}$ by $$\label{tilder} \tilde{u}_{t}^k := \sum_{n=1}^{m_{t}^{k}} \tilde{\ell}_{n}^{k},$$ where $\tilde{\ell}_{n}^{k}$ is the log-likelihood ratio of $z_n^k$, i.e., $$\begin{aligned} \tilde{\ell}_{n}^{k} &:= \sum_{j=1}^{d} \Bigl[ {\bar{\Lambda}^k}_{j} \, {\mathbbm{1}_{\{z_{n}^k=j\}}} - {\underline{\Lambda}^k}_{j} \, {\mathbbm{1}_{\{z_{n}^k=-j\}}} \Bigr], \quad \label{ells} \\ {\bar{\Lambda}^k}_{j} &:= \log \frac{ {\mathsf{P}}_{0}(z_1^k=j)}{{\mathsf{P}}_{\infty}(z_1^k=j)}, ~ -{\underline{\Lambda}^k}_{j} := \log \frac{{\mathsf{P}}_{0}(z_1^k=-j)}{{\mathsf{P}}_{\infty}(z_1^k=-j)}.
\label{lambdas}\end{aligned}$$ The log-likelihood ratios $\{{\bar{\Lambda}^k}_{j},{\underline{\Lambda}^k}_{j}\}$ do not admit closed-form expressions; however, they can be easily computed via simulation. This is not an easy task if one uses their definition in (\[lambdas\]) directly, since this requires the simulation of rare events, especially when ${\bar{\Delta}^k}, {\underline{\Delta}^k}$ are large. However, we can overcome this problem using the following lemma. \[lem:0\] For every $1\leq j \leq d$, ${\bar{\Lambda}^k}_{j}={\bar{\Delta}^k}_{j}+{\overline{R}^k}_{j}$ and ${\underline{\Lambda}^k}_{j}={\underline{\Delta}^k}_{j}+{\underline{R}^k}_{j}$, where $$\begin{aligned} \label{import} \begin{split} {\overline{R}^k}_{j} &:= - \log {\mathsf{E}}_{0}[ e^{-(\ell^k_{1}- {\bar{\Delta}^k}_{j})} \, | \, z_{1}^{k}=j ] >0 , \\ {\underline{R}^k}_{j} &:= - \log {\mathsf{E}}_{\infty}[ e^{\ell^k_{1}+ {\underline{\Delta}^k}_{j}} \, | \, z_{1}^{k}=-j]>0. \end{split}\end{aligned}$$ Moreover, for every $1\leq j \leq d-1$, ${\overline{R}^k}_{j}, {\underline{R}^k}_{j} \leq \epsilon^{k}$ and if, additionally, ${\mathsf{E}}_{i}[(u_{1}^{k})^{2}]<\infty$, $i=0, \infty$, then $$\begin{aligned} \label{import2} \begin{split} {\overline{R}^k}_{d} &\leq {\mathsf{E}}_{0}[ \ell^k_{1}- {\bar{\Delta}^k}_{d} \, | \, z_{1}^{k}=d ] \leq \Theta(1) \; d \; {\mathsf{E}}_{0}[(u_{1}^{k})^{2} {\mathbbm{1}_{\{u_{1}^{k} \geq {\bar{\epsilon}^k}_{d-1}\}}}], \\ {\underline{R}^k}_{d} &\leq {\mathsf{E}}_{\infty}[ -(\ell^k_{1} + {\underline{\Delta}^k}_{d}) \, | \, z_{1}^{k}=-d ] \leq \Theta(1) \; d \; {\mathsf{E}}_{\infty}[(u_{1}^{k})^{2} {\mathbbm{1}_{\{-u_{1}^{k} \geq {\underline{\epsilon}^k}_{d-1}\}}}] , \end{split}\end{aligned}$$ where $\Theta(1)$ is a term that does not depend on $d$ and is bounded from above and below as ${\bar{\Delta}^k}, {\underline{\Delta}^k}\rightarrow \infty$. The proof can be found in Appendix \[app:D\].
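To make the off-line computation concrete, here is a minimal Monte Carlo sketch (our own illustration, not code from the paper) for the Gaussian example used later in this section, where the local log-likelihood increments are $\mu \, \xi_t - \mu^2/2$. The function names and sample size are arbitrary; the routine estimates the overshoot quantiles ${\bar{\epsilon}^k}_j$ of (\[eq:levels\]) and the quantized log-likelihood ratios ${\bar{\Lambda}^k}_j = {\bar{\Delta}^k}_j + {\overline{R}^k}_j$ of Lemma \[lem:0\] for the upper exit under ${\mathsf{P}}_0$.

```python
import numpy as np

def first_exit(mu, up, lo, rng):
    """Run one local LLR random walk (increments mu*xi - mu**2/2 with
    xi ~ N(mu, 1), i.e., under the post-change law P_0) until it
    leaves (-lo, up); return the terminal value ell_1."""
    u = 0.0
    while -lo < u < up:
        u += mu * rng.normal(mu, 1.0) - 0.5 * mu**2
    return u

def upper_quantization(mu, up, lo, d, n_mc, rng):
    """Return the interior overshoot thresholds eps_1..eps_{d-1}
    (empirical (j/d)-quantiles, cf. the level specification) and the
    quantized log-likelihood ratios Lambda_1..Lambda_d of Lemma lem:0,
    for the upper exit under P_0."""
    over = []
    while len(over) < n_mc:
        ell = first_exit(mu, up, lo, rng)
        if ell >= up:                        # keep upper exits only
            over.append(ell - up)            # overshoot ell_1 - Delta
    over = np.sort(over)
    eps = np.quantile(over, np.arange(d + 1) / d)
    eps[0], eps[-1] = 0.0, np.inf            # eps_0 = 0, eps_d = infinity
    lam = np.empty(d)
    for j in range(1, d + 1):
        cell = over[(over >= eps[j - 1]) & (over < eps[j])]
        # R_j = -log E_0[exp(-(ell_1 - Delta_j)) | z = j] > 0
        r_j = -np.log(np.mean(np.exp(-(cell - eps[j - 1]))))
        lam[j - 1] = up + eps[j - 1] + r_j   # Lambda_j = Delta_j + R_j
    return eps[1:-1], lam
```

The lower-exit quantities ${\underline{\epsilon}^k}_j, {\underline{\Lambda}^k}_j$ are obtained in the same way by simulating the walk under ${\mathsf{P}}_\infty$ and keeping the lower exits.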
Lemma \[lem:0\] shows that, similarly to the thresholds $\{{\bar{\Delta}^k}_{j},{\underline{\Delta}^k}_{j}\}$ and $\{{\bar{\epsilon}^k}_{j},{\underline{\epsilon}^k}_{j}\}$, the log-likelihood ratios $\{{\bar{\Lambda}^k}_{j},{\underline{\Lambda}^k}_{j}\}$ can be computed off-line and efficiently by simulating $(\tau_{1}^{k}, \ell_{1}^{k})$ under ${\mathsf{P}}_{0}$ and ${\mathsf{P}}_{\infty}$. Moreover, Lemma \[lem:0\] shows that defining $\tilde{\ell}_{n}^{k}$ as the log-likelihood ratio of $z_{n}^{k}$ accounts for the unobserved overshoots at the fusion center. Specifically, when the fusion center receives message $z_{n}^{k}=j$ for some $j=1, \ldots, d$, it infers that $\ell_{n}^{k} \in [{\bar{\Delta}^k}_{j}, {\bar{\Delta}^k}_{j+1})$ and it approximates $\ell_{n}^{k}$ by ${\bar{\Delta}^k}_{j}+ {\overline{R}^k}_{j}$; in other words, the fusion center approximates the random overshoot $\ell_{n}^{k}-{\bar{\Delta}^k}_{j}$ that it does not observe by the constant ${\overline{R}^k}_{j}$, which is clearly an ${\mathcal{O}}(1)$ term as ${\bar{\Delta}^k}, {\underline{\Delta}^k}\rightarrow \infty$. The following lemma is important for quantifying the additional detection delay due to using $\tilde{\ell}_{n}^{k}$ instead of the actual value of $\ell_{n}^{k}$ in (\[tilder\]). \[lem:1\] If ${\mathsf{E}}_{i}[(u_{1}^{k})^{2}]<\infty$, $i=0, \infty$, then ${\mathsf{E}}_{0}[\ell^k_1- \tilde{\ell}^k_1 ] \leq 2\theta^{k}$, where $$\label{thetak} \theta^{k}:= \epsilon^{k} + \Theta(1) \, {\mathsf{E}}_{0}[(u_{1}^{k})^{2} {\mathbbm{1}_{\{u_{1}^{k} \geq {\bar{\epsilon}^k}_{d-1}\}}}]+ \Theta(1) \, {\mathsf{E}}_{\infty}[(u_{1}^{k})^{2} {\mathbbm{1}_{\{-u_{1}^{k} \geq {\underline{\epsilon}^k}_{d-1}\}}}]$$ and $\Theta(1)$ is a term that does not depend on $d$ and is bounded from above and below as ${\bar{\Delta}^k}, {\underline{\Delta}^k}\rightarrow \infty$. Moreover, $\theta^{k} \rightarrow 0$ as $d \rightarrow \infty$. The proof of this lemma can be found in Appendix \[app:D\].
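The message semantics implied by the construction above can be sketched as follows (our own illustration, with hypothetical function names): the sensor maps the exit value $\ell_1^k$ to a label $z \in \{\pm 1, \ldots, \pm d\}$, and the fusion center replaces the unseen $\ell_1^k$ by the quantized log-likelihood ratio of (\[ells\]).

```python
import math

def encode_exit(ell, up, lo, eps_up, eps_lo):
    """Sensor side: map the exit value ell_1 to a label z in
    {-d, ..., -1, 1, ..., d}; eps_up/eps_lo hold the d-1 interior
    overshoot thresholds for the upper/lower exit."""
    if ell >= up:                         # upper exit: z = +j
        over = ell - up
        return sum(over >= e for e in eps_up) + 1
    over = -(ell + lo)                    # lower exit: z = -j
    return -(sum(over >= e for e in eps_lo) + 1)

def decode_message(z, lam_up, lam_lo):
    """Fusion-center side: the quantized LLR of the received label z,
    i.e., tilde-ell = Lambda_j for z = +j and -Lambda_j for z = -j."""
    return lam_up[z - 1] if z > 0 else -lam_lo[-z - 1]

def bits_per_message(d):
    """1 sign bit plus ceil(log2 d) magnitude bits."""
    return 1 + math.ceil(math.log2(d))
```

For instance, with $d=2$ a message uses $1+\lceil \log_2 2 \rceil = 2$ bits, and a label $z=2$ is decoded as ${\bar{\Lambda}^k}_2$.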
Note that an alternative approach would have been to define $\tilde{\ell}_{n}^{k}$ as in (\[ells\]), but with ${\bar{\Lambda}^k}_{j}$ and ${\underline{\Lambda}^k}_{j}$ replaced by ${\bar{\Delta}^k}_{j}$ and ${\underline{\Delta}^k}_{j}$, respectively. In this way, the overshoots are simply ignored by the fusion center. However, the main reason for defining $\tilde{\ell}_{n}^{k}$ as the log-likelihood ratio of $z_{n}^{k}$ is that it allows us to prove the following lemma, which connects the threshold $\tilde{\nu}$ with the false alarm period $\gamma$ and plays a crucial role in establishing the (second-order) asymptotic optimality of the resulting detection rule. \[lem3\] For any $\gamma>0$ we have $\tilde{\nu} \leq \log \gamma - \log(\bar{I}_\infty)$; thus, $\tilde{\nu} =\log \gamma +\Theta(1)$ as $\gamma \rightarrow \infty$. The proof is presented in Appendix \[app:C\]. It is possible to prove Lemma \[lem3\] and, consequently, to establish the asymptotic optimality of $\tilde{{\mathcal{S}}}$ if $\tilde{\ell}_{n}^{k}$ is defined as the log-likelihood ratio of the pair $(\tau_{n}^k-\tau_{n-1}^{k}, z_{n}^{k})$, and not only of $z_{n}^{k}$. Unfortunately, the distribution of $\tau_{1}^k$ is typically intractable; thus, the resulting rule could not be implemented in practice. We are now ready to state the discrete-time analogue of Theorem \[prop1\]. For simplicity, we assume that communication rates, before and after the change, are of the same order of magnitude for all sensors, i.e., there is a quantity ${\Delta}$ so that ${\bar{\Delta}^k},{\underline{\Delta}^k}=\Theta({\Delta})$ as ${\Delta},{\bar{\Delta}^k},{\underline{\Delta}^k}\rightarrow \infty$ for all $1\leq k \leq K$. Moreover, we set $\theta:=\max_{1 \leq k \leq K} \theta^{k}$.
\[th:2\] If ${\mathsf{E}}_{0}[(u_{1}^{k})^{2}]<\infty$ for every $1 \leq k \leq K$, then $$\label{order1cd} {\mathcal{J}}[{\tilde{{ {\mathcal{S}}}}}] -{\mathcal{J}}[{ {\mathcal{S}}}] \leq \frac{\theta}{\Theta({\Delta})} \, \log\gamma + K \, \Theta({\Delta}).$$ For the optimum CUSUM ${\mathcal{S}}$, it is well known that ${\mathcal{J}}[{\mathcal{S}}]={\mathsf{E}}_0[u_{{\mathcal{S}}}]$. In order to see that this is also the case for D-CUSUM, i.e., ${\mathcal{J}}[\tilde{{\mathcal{S}}}]={\mathsf{E}}_0[u_{\tilde{{\mathcal{S}}}}]$, from the nonnegativity of the KL-divergence it is clear that it suffices to show that $\tilde{{\mathcal{S}}}{\mathbbm{1}_{\{\tilde{{\mathcal{S}}} \geq \tau\}}}= \inf\{t \geq\tau: \tilde{y}_{t} \geq \tilde{\nu}\}$ is *pathwise* decreasing with respect to $\tilde{y}_\tau$, or equivalently that the process $\{\tilde{y}_t, t>\tau\}$ is *pathwise* increasing with respect to $\tilde{y}_\tau$. Indeed, if we denote by $(\tau_{n})$ the sequence of times at which there is a communication from at least one sensor, then $$\tilde{y}_{\tau_n}=(\tilde{y}_{\tau_n-})^++\omega_{\tau_n}$$ where $\omega_{\tau_n}$ is the information coming from the sensors that communicate at time $\tau_n$ and is clearly independent of the past. This implies that $\tilde{y}_t$ will be increasing in $(\tilde{y}_\tau)^+$ for any $t \geq \tau$, and our claim follows because the smallest value of the latter quantity is 0. Based on the above, we can write $${\mathcal{J}}[\tilde{{\mathcal{S}}}]-{\mathcal{J}}[{\mathcal{S}}]={\mathsf{E}}_0[u_{\tilde{{\mathcal{S}}}}]-{\mathsf{E}}_0[u_{{\mathcal{S}}}]={\mathsf{E}}_0[u_{\tilde{{\mathcal{S}}}}-\tilde{u}_{\tilde{{\mathcal{S}}}}] +{\mathsf{E}}_0[\tilde{u}_{\tilde{{\mathcal{S}}}}]-{\mathsf{E}}_0[u_{{\mathcal{S}}}].
\label{eq:3-part}$$ From Lemma \[lem:8\] we have that ${\mathsf{E}}_0[\tilde{u}_{\tilde{{\mathcal{S}}}}] \leq \log \gamma +K \Theta({\Delta})$ and $${\mathsf{E}}_0[u_{\tilde{{\mathcal{S}}}}-\tilde{u}_{\tilde{{\mathcal{S}}}}] \leq K \Theta ({\Delta})+ \theta \, \frac{\log \gamma}{\Theta({\Delta})}.$$ Applying these inequalities and Lemma \[lem:6\] to (\[eq:3-part\]), we obtain the desired result. Lemma \[lem:8\] and some additional auxiliary results are stated and proved in Appendix \[app:E\]. The main consequence of Theorem \[th:2\] is that D-CUSUM is second-order asymptotically optimal, i.e., ${\mathcal{J}}[{\tilde{{ {\mathcal{S}}}}}] -{\mathcal{J}}[{ {\mathcal{S}}}]= {\mathcal{O}}(1)$, when $K={\mathcal{O}}(1)$, ${\Delta}={\mathcal{O}}(1)$ and $\theta \rightarrow 0$ so that $\theta \log\gamma={\mathcal{O}}(1)$ as $\gamma \rightarrow \infty$. We have seen in Lemma \[lem:1\] that $\theta \rightarrow 0$ as $d \rightarrow \infty$. If, in particular, $\theta= {\mathcal{O}}(1/d^{\alpha})$, where $\alpha$ is some positive constant, then the above analysis implies that $d$ may go to infinity with a rate as low as ${\mathcal{O}}((\log \gamma)^{1/\alpha})$ and, as a result, the required number of bits per transmission, $1+ \lceil \log_{2} d \rceil$, can be of an order as low as ${\mathcal{O}}(\frac{1}{\alpha} \log \log \gamma)$. This means that second-order asymptotic optimality is achieved in practice with a very low number of bits per transmission, a conclusion that will also be supported by the simulation experiments at the end of this section. As in continuous time, second-order asymptotic optimality is not preserved with an asymptotically low rate of communication (${\Delta}\rightarrow \infty$).
However, from Theorem \[th:2\] and Lemma \[lem:6\] we have $$\label{asy2} \frac{{\mathcal{J}}[{\tilde{{ {\mathcal{S}}}}}]}{{\mathcal{J}}[{ {\mathcal{S}}}]} =1+ \frac{{\mathcal{J}}[{\tilde{{ {\mathcal{S}}}}}] -{\mathcal{J}}[{ {\mathcal{S}}}]}{{\mathcal{J}}[{ {\mathcal{S}}}]} \leq 1+ \frac{ \frac{\theta}{\Theta({\Delta})} + \frac{K \Theta({\Delta})}{\log\gamma}}{1- \frac{\Theta(K)}{\log \gamma}},$$ which implies that D-CUSUM is first-order asymptotically optimal, i.e., ${\mathcal{J}}[{\tilde{{ {\mathcal{S}}}}}]/{\mathcal{J}}[{ {\mathcal{S}}}] \rightarrow 1$, when ${\Delta}\rightarrow \infty$ so that $K {\Delta}=o(\log \gamma)$. In this context, the performance of D-CUSUM is optimized when ${\Delta}, \theta, K$ are selected so that the two terms in the upper bound of (\[order1cd\]) are of the same order of magnitude. This happens when ${\Delta}=\Theta(\sqrt{\theta \log\gamma /K})$, in which case ${\mathcal{J}}[{\tilde{{ {\mathcal{S}}}}}] -{\mathcal{J}}[{ {\mathcal{S}}}]= {\mathcal{O}}(\sqrt{K \, \theta \, \log\gamma})$. We should emphasize that in the case of a binary alphabet ($d=1$), where $\theta$ is bounded away from 0 (i.e., $\theta=\Theta(1)$), first-order asymptotic optimality cannot be achieved with a fixed rate of communication, i.e., when ${\Delta}={\mathcal{O}}(1)$ as $\gamma \rightarrow \infty$. This may seem counterintuitive at first; however, it is quite reasonable, since a high rate of communication leads to fast accumulation of quantization error. Nevertheless, this source of error can be suppressed if we have a sufficiently large alphabet size that allows us to quantize the overshoots. This explains why first-order asymptotic optimality can be achieved even with ${\Delta}={\mathcal{O}}(1)$ when $\theta \rightarrow 0$. We conclude that, either with a high or a low communication rate, the performance of D-CUSUM is improved with a larger-than-binary alphabet $(d>1)$, but in practice a small value of $d$ should be sufficient.
In order to elaborate more on this point, let us note that the statistical behavior of the overshoots depends on the parameter ${\Delta}$, which controls the average period of communication in the sensors. However, this dependence is only minor, since the distribution of the overshoots converges to some limiting distribution as ${\Delta}$ becomes large. In other words, quantizing the overshoots is like quantizing a random variable with (almost) fixed statistics. Consequently, the mean square quantization error, or any other similar quality measure, will be (almost) independent of ${\Delta}$ for a fixed number of bits. On the contrary, in the classical quantization scheme employed by Q-CUSUM, quantization is applied to the value of each $u^k_{nr}-u^k_{(n-1)r}$, where $r$ denotes the fixed corresponding period. It is easy to see that, for a fixed number of bits, if we increase the period $r$, the mean square quantization error will *increase*, since the difference $u^k_{nr}-u^k_{(n-1)r}$ will involve a larger sum of i.i.d. random variables. This becomes particularly obvious when these random variables are bounded, in which case the support of the sum increases linearly with $r$ and we are asked, with the same number of bits, to quantize a larger range of values. This suggests that communicating with the fusion center at a lower rate while preserving the same number of bits inflicts larger quantization errors and, therefore, additional performance degradation on Q-CUSUM. As we mentioned above, this is not the case with the quantization scheme we adopt for D-CUSUM, since increasing ${\Delta}$ (to reduce the communication rate) leaves the mean square quantization error almost intact. Let us now illustrate these conclusions with a simulation study.
Specifically, suppose that each sensor $k$ takes independent, normally distributed observations with variance $1$ and mean that changes from $0$ to $\mu$, i.e., $\xi_{t}^k \sim {\mathcal{N}}(0,1)$ when $t \leq \tau$ and $\xi_{t}^k \sim {\mathcal{N}}(\mu,1)$ when $t > \tau$. Then, for every $t \in \mathbb{N}$ we have $u_{t}^k-u_{t-1}^{k} = \mu \, \xi_{t}^k- \mu^{2}/2$. We assume that ${\bar{\Delta}^k}={\underline{\Delta}^k}={\Delta^k}$ and for every $j=1, \ldots, d-1$ we set ${\bar{\epsilon}^k}_{j}={\underline{\epsilon}^k}_{j}={\epsilon^k}_{j}$ and, consequently, we have ${\bar{\Lambda}^k}_{j}={\underline{\Lambda}^k}_{j}={\Lambda^k}_{j}$. Moreover, we assume that each ${\Delta}^{k}$ is chosen so that ${\mathsf{E}}_{0}[\tau_{1}^{k}]=r$. In Table \[tab:1\] we present the values of these parameters for $d=1$ (one transmitted bit per message) and $d=2$ (two transmitted bits per message), when the communication period is $r=3$ or $r=6$ and $\mu=1$.

(a) $d=1$

                     ${\Delta^k}_{1}$   ${\Lambda^k}_{1}$
  ----------------   ----------------   -----------------
  $r=3$, $\mu=1$     1.287              1.87
  $r=6$, $\mu=1$     2.54               3.12

(b) $d=2$

                     ${\Delta^k}_{1}$   ${\Delta^k}_{2}$   ${\Lambda^k}_{1}$   ${\Lambda^k}_{2}$
  ----------------   ----------------   ----------------   -----------------   -----------------
  $r=3$, $\mu=1$     1.287              1.87               1.54                2.94
  $r=6$, $\mu=1$     2.54               3.12               2.80                3.62

  : Thresholds and Log-Likelihood Ratios

\[tab:1\]

Our goal is to compare D-CUSUM $\tilde{{\mathcal{S}}}$ with Q-CUSUM $\hat{{\mathcal{S}}}$, which was defined in (\[eq:cus2\]), when both rules use the same resources, i.e., the same number of bits per communication and the same (average) rate of communication. Note that such a fair comparison is not possible with decentralized rules that do not explicitly control their transmission rate. Of course, the ultimate benchmark is the centralized CUSUM test, which requires transmission of the observation of each sensor at every time $t$.
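As a complement to the figures, the worst-case delay of D-CUSUM in this Gaussian setup can be estimated with a short simulation. The sketch below is ours, not code from the paper; the fusion-center threshold $\tilde{\nu}$ and the horizon are arbitrary, and in practice $\tilde{\nu}$ must be calibrated to meet the false alarm constraint $\gamma$. It uses the binary alphabet $d=1$ with the $r=3$ values of Table \[tab:1\], so that each sensor contributes $\pm{\Lambda^k}_{1}$ to the fusion statistic at each two-sided exit of its local log-likelihood ratio.

```python
import numpy as np

def d_cusum_delay(mu, delta, lam, nu, K, rng, t_max=10**6):
    """One worst-case (tau = 0) run of D-CUSUM with a binary alphabet:
    each sensor restarts its local LLR walk at each two-sided exit of
    (-delta, delta) and transmits the exit sign; the fusion center
    adds +/- lam per message to a reflected (CUSUM) statistic."""
    u = np.zeros(K)          # local LLRs since the last transmission
    y = 0.0                  # fusion-center statistic tilde-y
    for t in range(1, t_max + 1):
        xi = rng.normal(mu, 1.0, K)          # post-change observations
        u += mu * xi - 0.5 * mu**2           # local LLR increments
        hit_up, hit_lo = u >= delta, u <= -delta
        msg = lam * (hit_up.sum() - hit_lo.sum())
        u[hit_up | hit_lo] = 0.0             # restart after transmitting
        y = max(y, 0.0) + msg                # reflected CUSUM update
        if y >= nu:
            return t                         # detection delay
    return t_max
```

Averaging `d_cusum_delay(1.0, 1.287, 1.87, nu, 5, rng)` over many runs then estimates ${\mathcal{J}}[\tilde{{\mathcal{S}}}]$ for the $\gamma$ corresponding to the chosen `nu`, and the plain CUSUM benchmark is obtained analogously from the unquantized $u_t$.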
![Case of $K=5$ sensors with communication period $r=3$.[]{data-label="fig:1"}](fig1.pdf)

![Case of $K=5$ sensors with communication period $r=6$.[]{data-label="fig:2"}](fig2.pdf)

Fig. \[fig:1\] and Fig. \[fig:2\] depict the main results of our simulations. First of all, we observe that in all cases the operating characteristic curve of D-CUSUM ${\tilde{{ {\mathcal{S}}}}}$ is essentially parallel to that of the optimal centralized CUSUM ${\mathcal{S}}$. This is exactly the *second-order* asymptotic optimality that we established theoretically. On the contrary, the operating characteristic curve of Q-CUSUM $\hat{{\mathcal{S}}}$ diverges as $\gamma$ increases, as expected, since this is not an asymptotically optimal scheme (even of first order). Of course, when an “infinite-bit” message is transmitted at each communication time, Q-CUSUM corresponds to the centralized CUSUM with period $r$ and its operating characteristic curve is parallel to the optimal one. However, what is really interesting is that D-CUSUM with one-bit or two-bit transmissions either comes very close to or even outperforms this *infinite-bit* Q-CUSUM. Finally, we should also note that when the average communication period is small ($r=3$), there is a considerable improvement in D-CUSUM when using two, instead of one, bits per transmission (see Fig. \[fig:1\]). On the other hand, when the average communication period is large ($r=6$), we do not observe similar performance gains for D-CUSUM by having the sensors transmit additional bits to the fusion center (see Fig. \[fig:2\]).

Conclusions
===========

The main contribution of this paper is a novel decentralized sequential detection rule, which we call D-CUSUM, according to which each sensor communicates with the fusion center at two-sided exit times of its local log-likelihood ratio and the fusion center runs in parallel a CUSUM-like rule in order to detect the change.
We showed that the performance loss of D-CUSUM with respect to the optimal centralized CUSUM remains bounded as the rate of false alarms goes to 0 (*second-order* asymptotic optimality). Moreover, we showed that its first-order asymptotic optimality is preserved even with an asymptotically low communication rate and a large number of sensors. We illustrated these properties with simulation experiments, which also showed that D-CUSUM performs significantly better than a CUSUM-based, decentralized detection rule that requires communication at deterministic times. We assumed throughout the paper that observations from different sensors are independent, an assumption which is not needed for the optimality of the centralized CUSUM test, but is universal in the decentralized literature. This assumption is necessary both for the design and the analysis of D-CUSUM in discrete time; however, it is possible to remove it in continuous time, at least when the sensors observe *correlated* Brownian motions. Indeed, going over the proof of Theorem \[prop1\] in Appendix \[app:A\], we realize that this assumption is needed only to the extent that it guarantees a decomposition of the form $u_{t}= \sum_{k=1}^{K} u_{t}^k$, where $\{u^k_{t}\}$ is an ${ {{\mathscr{F}}}_{t}}^k$-adapted process with continuous paths. That is, we did not explicitly use the fact that $\{u^k_{t}\}$ is the local log-likelihood ratio at sensor $k$. This implies that Theorem \[prop1\] will remain valid even for sensors with correlated dynamics, as long as such a decomposition is possible.
This is indeed the case when the sensors observe correlated Brownian motions before and after the change, i.e., for every $1 \leq k \leq K$ we have $$\xi^k_{t}= \sum_{j=1}^{K} \sigma_{kj} W^{j}_{t} + {\mathbbm{1}_{\{t > \tau\}}} \, \mu^k t, ~ t \geq 0,$$ where $(W^{1}, \ldots,W^{K})$ is a standard $K$-dimensional Wiener process, $\mu=[\mu^{1}, \ldots, \mu^{K}]'$ is a $K$-dimensional real vector and $\sigma:=[\sigma_{ij}]$ is a square matrix of dimension $K$ such that the diffusion coefficient matrix ${\mit\Sigma}= \sigma \sigma'$ is invertible. Then, we can write $u_{t}= \sum_{k=1}^{K} [ b^k \, \xi_{t}^k - 0.5 \, \mu^k \, b^k \, t]$, where $b=[b^1,\ldots,b^K]'= {\mit\Sigma}^{-1} \mu$, and Theorem \[prop1\] remains valid as long as we define $u^{k}_{t}$ in (\[taucd\]) not as the local log-likelihood ratio $\mu^k \, \xi_{t}^k - 0.5 \, (\mu^k)^{2} \, t$, but as $b^k \xi_{t}^k - 0.5 \, \mu^k b^k t$. However, it remains an open problem to establish asymptotically optimal, decentralized detection rules for more general continuous-time models, and of course in the i.i.d. setup, when the sensor observations are correlated.

Proof of Theorem \[prop1\] {#app:A}
===================================

In this Appendix, we focus on the continuous-time setup of Subsection \[sec:D-CUSUM-cont\] and we note that $${\mathsf{E}}_{0}[u_{T}]= {\mathsf{E}}_{0}[\langle u \rangle_{T}] ,~ {\mathsf{E}}_{\infty}[-u_{T}]= {\mathsf{E}}_{\infty}[\langle u \rangle_{T}]$$ for any stopping time $T$ for which the above quantities are finite.
Moreover, for any $x>0$ we use the following notation $${ {\mathcal{S}}}_{x}= \inf \{ t \geq 0: y_{t} \geq x \} ,~ {\tilde{{ {\mathcal{S}}}}}_{x}= \inf \{ t \geq 0: \tilde{y}_{t} \geq x \}.$$ Then, the thresholds $\nu$ and $\tilde{\nu}$ are chosen so that ${\mathsf{E}}_{\infty}[-u_{{ {\mathcal{S}}}_{\nu}}]= \gamma={\mathsf{E}}_{\infty}[-u_{{\tilde{{ {\mathcal{S}}}}}_{\tilde{\nu}}}]$, or equivalently, $$\label{lop} {\mathsf{E}}_{\infty}[\langle u \rangle _{{ {\mathcal{S}}}_{\nu}}]= \gamma={\mathsf{E}}_{\infty}[\langle u \rangle _{{\tilde{{ {\mathcal{S}}}}}_{\tilde{\nu}}}].$$ The proof of Theorem \[prop1\] is based on the following lemma, for which we set $C:=K {\Delta}_{\max}$, where ${\Delta}_{\max}:=\max_{1 \leq k \leq K} \max\{{\bar{\Delta}^k}, {\underline{\Delta}^k}\}$. \[lem\] For any $\gamma>0$\ (i) ${\mathcal{S}}_{\tilde{\nu} -2 C} \leq {\tilde{{ {\mathcal{S}}}}}_{\tilde{\nu}} \leq {\mathcal{S}}_{\tilde{\nu}+ 2 C}$, ${\mathsf{P}}_{0}$- and ${\mathsf{P}}_{\infty}$-almost surely; (ii) $|\nu-\tilde{\nu}| \leq 2 C$. For any $t>0$, from (\[taucd\]) and (\[freecd\]) it is clear that for every $1 \leq k \leq K$ $$|u_{t}^{k}- \tilde{u}_{t}^{k}| \leq \max\{{\bar{\Delta}^k}, {\underline{\Delta}^k}\}\leq {\Delta}_{\max}.$$ Then, summing over $k$ we obtain $|u_{t}- \tilde{u}_{t}| \leq K {\Delta}_{\max}= C$ and, consequently, $|m_{t}- \tilde{m}_{t}| \leq C$, where $m_{t}:=\inf_{ 0 \leq s \leq t} u_{s}$ and $\tilde{m}_{t}:=\inf_{ 0 \leq s \leq t} \tilde{u}_{s}$. Therefore, from the definition of $y_{t}$ and $\tilde{y}_t$ we have $$|y_{t}-\tilde{y}_{t}|\leq |u_{t}-\tilde{u}_{t}|+ |m_{t}-\tilde{m}_{t}| \leq 2C,$$ which implies (i).
From (i) and the fact that $\langle u \rangle$ is an increasing process we have $${\mathsf{E}}_{\infty}[\langle u\rangle_{{\cal{S}}_{\tilde{\nu}-2 C}}] \leq {\mathsf{E}}_{\infty}[\langle u\rangle_{{\tilde{{ {\mathcal{S}}}}}_{\tilde{\nu}}}] \leq {\mathsf{E}}_{\infty}[\langle u\rangle_{{\cal{S}}_{\tilde{\nu}+2 C}}].$$ From the last inequality and (\[lop\]) we obtain $${\mathsf{E}}_{\infty}[\langle u\rangle_{{\cal{S}}_{\tilde{\nu}-2 C}}] \leq {\mathsf{E}}_{\infty}[\langle u\rangle_{{ {\mathcal{S}}}_\nu}] \leq {\mathsf{E}}_{\infty}[\langle u\rangle_{{\cal{S}}_{\tilde{\nu}+2 C}}].$$ Let us now recall (\[fapito\]) and define the function $$\psi(x) := {\mathsf{E}}_{\infty}[-u_{{\cal{S}}_{x}}]={\mathsf{E}}_{\infty}[\langle u\rangle_{{\cal{S}}_{x}}] = e^{x}-x-1, \; x>0.$$ Then, the last pair of inequalities takes the form $\psi(\tilde{\nu}-2 C) \leq \psi(\nu) \leq \psi(\tilde{\nu}+2 C)$ and (ii) then follows from the fact that $\psi$ is strictly increasing. The proof is a direct consequence of Lemma \[lem\](i) and (\[fapito\]). Indeed, $$\begin{aligned} {\mathcal{J}}[{\tilde{{ {\mathcal{S}}}}}_{\tilde{\nu}}]- {\mathcal{J}}[{ {\mathcal{S}}}_\nu] &\leq {\mathcal{J}}[{\mathcal{S}}_{\tilde{\nu}+ 2 C}]- {\mathcal{J}}[{ {\mathcal{S}}}_\nu] = ( e^{-\tilde{\nu} - 2 C} -e^{-\nu}) + (\tilde{\nu} -\nu) +2 C \leq 4 C,\end{aligned}$$ where the first inequality follows from the nonnegativity of KL-divergences and the fact that ${\mathcal{S}}_{\tilde{\nu}+ 2 C} \geq { {\mathcal{S}}}_\nu$, the equality is due to the second relationship in (\[fapito\]) and the second inequality due to the fact that $|\nu-\tilde{\nu}| \leq 2 C$. 
Proof of Lemma \[lem:6\] {#app:B}
=================================

Let us first define for any $r\ge0$ the stopping times $$T_r^+=\inf\{t>0:u_t\geq r\},\quad T_r^-=\inf\{t>0:-u_t\geq r\}.$$ Due to the representation of the CUSUM stopping time as a repeated SPRT with thresholds $0$ and $\nu$, we have the following well-known formula (see for example Siegmund [@seigmund Page 25]) for its expectation under ${\mathsf{P}}_{0}$ and ${\mathsf{P}}_{\infty}$: $${\mathsf{E}}_i[u_{{\mathcal{S}}}]=\frac{{\mathsf{E}}_i[u_{{\mathcal{T}}}]}{{\mathsf{P}}_i(u_{{\mathcal{T}}}\geq\nu)},~i=0,\infty, \label{eq:APP1}$$ where ${\mathcal{T}}=\min\{T_0^-,T_\nu^+\}$ is the SPRT stopping time with boundaries $0$ and $\nu$. Using (\[eq:APP1\]) for $i=0$, we can now write $$\begin{aligned} \label{eq:APP-10} \begin{split} {\mathsf{E}}_0[u_{{\mathcal{S}}}] &= \frac{{\mathsf{E}}_0[u_{{\mathcal{T}}}{\mathbbm{1}_{\{u_{{\mathcal{T}}}\ge\nu\}}}] + {\mathsf{E}}_0[u_{{\mathcal{T}}} {\mathbbm{1}_{\{u_{{\mathcal{T}}}\leq0\}}}]}{{\mathsf{P}}_0(u_{{\mathcal{T}}}\geq\nu)} \\ &\geq \nu - \frac{{\mathsf{E}}_0[(-u_{{\mathcal{T}}}){\mathbbm{1}_{\{u_{{\mathcal{T}}}\leq0\}}}]}{{\mathsf{P}}_0(u_{{\mathcal{T}}}\geq\nu)}.
\end{split}\end{aligned}$$ We start with the numerator: with a change of measure we have $$\label{eq:APP-22} {\mathsf{E}}_0[-u_{{\mathcal{T}}} {\mathbbm{1}_{\{u_{{\mathcal{T}}}\leq0\}}}] = {\mathsf{E}}_\infty[e^{u_{{\mathcal{T}}}} (-u_{{\mathcal{T}}}) {\mathbbm{1}_{\{u_{{\mathcal{T}}}\leq0\}}}] \leq {\mathsf{E}}_\infty[-u_{{\mathcal{T}}} {\mathbbm{1}_{\{u_{{\mathcal{T}}}\leq0\}}}].$$ We can now strengthen this inequality as follows: $$\begin{aligned} \begin{split} {\mathsf{E}}_\infty[-u_{{\mathcal{T}}} {\mathbbm{1}_{\{u_{{\mathcal{T}}}\leq0\}}}] &= {\mathsf{E}}_\infty[ -u_{T_{0}^{-}} {\mathbbm{1}_{\{T_{0}^{-} \leq T_{\nu}^{+}\}}}] \leq {\mathsf{E}}_\infty[-u_{T_{0}^{-}}] \\ &\leq \sup_{r \geq 0} {\mathsf{E}}_\infty[-u_{T_{r}^{-}}-r] \leq \frac{{\mathsf{E}}_\infty[(u_1)^2]}{{\mathsf{E}}_\infty[-u_1]} \\ &\leq \frac{\sum_{k=1}^K{\mathsf{E}}_\infty[(u_1^k-I_\infty^k)^2]+(\sum_{k=1}^KI_\infty^k)^2}{\sum_{k=1}^KI^k_\infty} \\ &= \frac{\bar{\sigma}_\infty^2}{\bar{I}_\infty} + K\bar{I}_\infty, \end{split} \label{eq:APP-20}\end{aligned}$$ where $\bar{I}_i=\frac{1}{K}\sum_{k=1}^K I_i^k$ is the average, over all sensors, of the Kullback-Leibler information numbers and $\bar{\sigma}_i^2:=\frac{1}{K}\sum_{k=1}^K\text{Var}_{i}\{u_1^k\}$ the average, over all sensors, of the variances of the local likelihood ratios $u_1^k$, under the probability measure ${\mathsf{P}}_i,~i=0,\infty$. The second inequality in the second line of (\[eq:APP-20\]) follows from Lorden’s [@lorden] upper bound for the average overshoot, strengthened by observing that $(u_1^-)^2\leq(u_1)^2$.
Furthermore, for the denominator in (\[eq:APP-10\]) we have $$\begin{aligned} \begin{split} {\mathsf{P}}_0(u_{{\mathcal{T}}}\geq\nu) &={\mathsf{P}}_0(T^+_\nu<T_0^-) \geq{\mathsf{P}}_0(T_0^-=\infty)=\frac{1}{{\mathsf{E}}_0[T_{0}^+]}=\frac{K\bar{I}_0}{{\mathsf{E}}_0[u_{T_{0}^+}]} \\ &\geq\frac{K\bar{I}_0}{\sup_{r\geq0}{\mathsf{E}}_0[u_{T^+_r}-r]} \geq\frac{(K\bar{I}_0)^2}{K\bar{\sigma}_0^2+(K\bar{I}_0)^2}=\frac{\bar{I}_0^2}{K^{-1}\bar{\sigma}_0^2+\bar{I}_0^2}\\ &\geq\frac{\bar{I}_0^2}{\bar{\sigma}_0^2+\bar{I}_0^2}. \end{split} \label{eq:APP-40}\end{aligned}$$ The second equality in the first line is a classical result of random walk theory (see for example Siegmund [@seigmund Corollary 8.39, Page 173]), whereas the third equality in the first line is an application of Wald’s identity. The second inequality in the second line is again the upper bound provided by Lorden [@lorden] for the overshoot, while the last inequality is true because $K\geq1$. From (\[eq:APP-22\]), (\[eq:APP-20\]) and (\[eq:APP-40\]) we obtain $$\frac{{\mathsf{E}}_0[(-u_{{\mathcal{T}}}){\mathbbm{1}_{\{u_{{\mathcal{T}}}\leq0\}}}]}{{\mathsf{P}}_0(u_{{\mathcal{T}}}\geq\nu)} \leq \frac{\bar{\sigma}_\infty^2 + K(\bar{I}_\infty)^{2}}{\bar{I}_\infty} \, \frac{\bar{\sigma}_0^2+\bar{I}_0^2}{\bar{I}_0^2} = \Theta(K)$$ and consequently from (\[eq:APP-10\]) it follows that ${\mathsf{E}}_0[u_{{\mathcal{S}}}] \geq \nu-\Theta(K)$. It remains to find an upper bound for $\gamma$ in terms of $\nu$. From the false alarm constraint and (\[eq:APP1\]) we have $$\gamma={\mathsf{E}}_\infty[-u_{{\mathcal{S}}}]=\frac{{\mathsf{E}}_\infty[-u_{{\mathcal{T}}}]}{{\mathsf{P}}_\infty(u_{{\mathcal{T}}}\geq\nu)}.
\label{eq:APP-2}$$ For the expectation in the numerator, we can obtain the following upper bound $$\begin{aligned} {\mathsf{E}}_\infty[-u_{{\mathcal{T}}}]&={\mathsf{E}}_\infty[-u_{{\mathcal{T}}} {\mathbbm{1}_{\{u_{{\mathcal{T}}}\leq0\}}}]+{\mathsf{E}}_\infty[-u_{{\mathcal{T}}} {\mathbbm{1}_{\{u_{{\mathcal{T}}}\geq\nu\}}}] \nonumber \\ &\leq {\mathsf{E}}_\infty[-u_{{\mathcal{T}}} {\mathbbm{1}_{\{u_{{\mathcal{T}}}\leq0\}}}] \leq \frac{\bar{\sigma}_\infty^2}{\bar{I}_\infty}+K\bar{I}_\infty, \label{eq:APP-2.1}\end{aligned}$$ where the final inequality follows from (\[eq:APP-20\]). In order to obtain a lower bound for the probability ${\mathsf{P}}_\infty(u_{{\mathcal{T}}}\geq\nu)$ in the denominator we start with a change of measure, thus $$\label{lbo} {\mathsf{P}}_\infty(u_{{\mathcal{T}}}\geq\nu)={\mathsf{E}}_0[e^{-u_{{\mathcal{T}}}}{\mathbbm{1}_{\{u_{{\mathcal{T}}}\geq\nu\}}}]={\mathsf{E}}_0[e^{-u_{{\mathcal{T}}}}|u_{{\mathcal{T}}}\geq\nu]{\mathsf{P}}_0(u_{{\mathcal{T}}}\geq\nu).$$ Then, with an application of the conditional Jensen inequality we have $$\begin{aligned} \label{lb} \begin{split} {\mathsf{E}}_0[e^{-u_{{\mathcal{T}}}}|u_{{\mathcal{T}}}\geq\nu] &\geq \exp(-{\mathsf{E}}_0[u_{{\mathcal{T}}}|u_{{\mathcal{T}}}\geq\nu]) \\ &\geq \exp\left(-\nu-\frac{{\mathsf{E}}_0[(u_{{\mathcal{T}}}-\nu){\mathbbm{1}_{\{u_{{\mathcal{T}}}\geq\nu\}}}]}{{\mathsf{P}}_0(u_{{\mathcal{T}}}\geq\nu)}\right) \\ &\geq \exp\left(-\nu-\frac{\sup_{r\ge0}{\mathsf{E}}_0[u_{T^+_r}-r]}{{\mathsf{P}}_0(u_{{\mathcal{T}}}\geq\nu)}\right) \\ &\geq \exp\left(-\nu-\frac{\frac{\bar{\sigma}_0^2}{\bar{I}_0}+K\bar{I}_0}{{\mathsf{P}}_0(u_{{\mathcal{T}}}\geq\nu)}\right). \end{split}\end{aligned}$$ where in the last inequality we have used, again, Lorden’s [@lorden] upper bound for the maximal average overshoot. 
Combining (\[lbo\]) and (\[lb\]) we obtain $$\begin{aligned} \begin{split} {\mathsf{P}}_\infty(u_{{\mathcal{T}}}\geq\nu) &\geq \exp\left(-\nu-\frac{\frac{\bar{\sigma}_0^2}{\bar{I}_0}+K\bar{I}_0}{{\mathsf{P}}_0(u_{{\mathcal{T}}}\geq\nu)}\right) \, {\mathsf{P}}_0(u_{{\mathcal{T}}}\geq\nu) \\ &\geq \exp\left(-\nu-\frac{\frac{\bar{\sigma}_0^2}{\bar{I}_0}+K\bar{I}_0}{\frac{\bar{I}_0^2}{\bar{\sigma}_0^2+\bar{I}_0^2}}\right) \, \frac{\bar{I}_0^2}{\bar{\sigma}_0^2+\bar{I}_0^2}, \end{split} \label{eq:APP-2.9} \end{aligned}$$ where the second inequality follows from (\[eq:APP-40\]). Then, from (\[eq:APP-2\]), (\[eq:APP-2.1\]) and (\[eq:APP-2.9\]) we have $$\gamma \leq \Bigl(\frac{\bar{\sigma}_\infty^2}{\bar{I}_\infty}+K\bar{I}_\infty \Bigr)\, \exp\left(\nu+\frac{\frac{\bar{\sigma}_0^2}{\bar{I}_0}+K\bar{I}_0}{\frac{\bar{I}_0^2}{\bar{\sigma}_0^2+\bar{I}_0^2}}\right) \, \Bigl(\frac{\bar{I}_0^2}{\bar{\sigma}_0^2+\bar{I}_0^2} \Bigr)^{-1}.$$ Taking logarithms we obtain $\log \gamma \leq \Theta(\log K) + \nu + K \Theta(1)$, which implies that $\log \gamma \leq \nu + \Theta(K)$ and completes the proof.

Proof of Lemma \[lem3\] {#app:C}
================================

Our goal in this Appendix is to prove Lemma \[lem3\], which connects the threshold $\tilde{\nu}$ to the false-alarm period $\gamma$. In order to provide an elegant proof of this result, we need to adopt an alternative representation of the fusion center policy (that we will use only in this Appendix). Indeed, since the implementation of ${\tilde{{ {\mathcal{S}}}}}$ requires only the knowledge of the transmitted messages at the fusion center, it is possible to describe the fusion rule without any reference to the communication times $\{\tau_n^k\}$. Thus, let $z_{n}$ be the $n$th message that arrives at the fusion center and $k_{n}$ the corresponding identity of the sensor which transmitted this message. Of course, since time is discrete, there is a non-zero probability that the fusion center may receive messages from two or more sensors concurrently.
In this case, we enumerate the simultaneous messages in an arbitrary order and we keep the same order for the labels. We can then describe the flow of information at the fusion center by the filtration $\{ {\cal{C}}_{n}\}_{n \in \mathbb{N}}$, where ${\mathcal{C}}_{n}= \sigma ( (z_{1},k_{1}), \ldots, (z_{n},k_{n}))$. For any $n \in \mathbb{N}$ we set $$\begin{aligned} \begin{split} \phi_{n} &:= \log \frac{{\mathsf{P}}_{0}(k_{1}, \ldots, k_{n})}{{\mathsf{P}}_{\infty}(k_{1}, \ldots, k_{n})}\\ v_{n} &:= \log \frac{{\mathsf{P}}_{0}(z_{1}, \ldots, z_{n}|k_{1}, \ldots, k_{n})}{{\mathsf{P}}_{\infty}(z_{1}, \ldots, z_{n}|k_{1}, \ldots, k_{n})}. \end{split} \label{les}\end{aligned}$$ Recalling the definition of the log-likelihood ratios ${\bar{\Lambda}^k}_{j},{\underline{\Lambda}^k}_{j}$ in (\[lambdas\]), we then have $$v_{n} = \sum_{m=1}^{n} \sum_{j=1}^{d^{k_{m}}} \Bigl[ \overline{\Lambda}_{j}^{k_{m}} \, {\mathbbm{1}_{\{z_{m}=j\}}} - \underline{\Lambda}_{j}^{k_{m}} \, {\mathbbm{1}_{\{z_{m}=-j\}}} \Bigr].$$ Then, the *number of messages* which the fusion center has received until an alarm is raised by D-CUSUM is given by the following $\{{\cal{C}}_{n}\}$-stopping time: $$\label{alterfcp} \tilde{{\mathcal{N}}} = \inf \{ n \in \mathbb{N}: v_{n}- \min_{m=1, \ldots, n} v_{m} \geq \tilde{\nu} \}.$$ The process $\{v_n\}$ and the stopping time $\tilde{{\mathcal{N}}}$ are closely related to $\{\tilde{u}_t\}$ and ${\tilde{{ {\mathcal{S}}}}}$, respectively. Their main difference is that $\{\tilde{u}_t\}$ and ${\tilde{{ {\mathcal{S}}}}}$ are expressed in terms of “physical time”, whereas $\{v_n\}$ and $\tilde{{\mathcal{N}}}$ are expressed in terms of the number of messages transmitted to the fusion center. If we denote by $\tau_n$ the time instant at which the $n$th message arrives at the fusion center, then we can explicitly specify the following connection between these quantities: $\tilde{u}_{\tau_n}=v_n$ and ${\tilde{{ {\mathcal{S}}}}}=\tau_{\tilde{{\mathcal{N}}}}$.
In other words $\tilde{{\mathcal{N}}}$ *denotes the number of received messages at the fusion center until stopping at time* $\tilde{{\mathcal{S}}}$. After these definitions, we can now prove Lemma \[lem3\], which connects $\tilde{\nu}$ to $\gamma$ through an inequality that will be important for the performance analysis of ${\tilde{{ {\mathcal{S}}}}}$. For that, recall the definition of $\bar{I}_\infty$ in . We first observe that $$\label{jjj} \gamma ={\mathsf{E}}_{\infty}[-u_{{\tilde{{ {\mathcal{S}}}}}}]= K \bar{I}_{\infty} \, {\mathsf{E}}_{\infty} [{\tilde{{ {\mathcal{S}}}}}] \geq \bar{I}_{\infty} \, {\mathsf{E}}_{\infty} [\tilde{{\mathcal{N}}}].$$ The second equality follows from an application of Wald’s identity, whereas the inequality from the fact that $\tilde{{\mathcal{N}}}\le K\tilde{{\mathcal{S}}}$. Indeed, the maximum number of received messages until stopping at $\tilde{{\mathcal{S}}}$ is obtained when at every time instant we have all sensors transmitting a message to the fusion center and this yields $K\tilde{{\mathcal{S}}}$. From (\[jjj\]) it is clear that it suffices to prove ${\mathsf{E}}_\infty[\tilde{{\mathcal{N}}}] \geq e^{\tilde{\nu}}$. In order to do so, let us define the sequence $\{n_j\}$ of epochs where the CUSUM process $v_{n}- \min_{0\le m\le n} v_{m}$ either returns to zero (restarts) or exceeds $\tilde{\nu}$. This is the classical way to write the CUSUM stopping time as a sum of a random number of components. Specifically, let us define $$\begin{aligned} \label{repeat0} \begin{split} n_{j} &:= \inf \{ n > n_{j-1} : v_{n} - v_{n_{j-1}} \notin (0, \tilde{\nu})\}\\ {\mathcal{R}}&:= \inf\{j \in \mathbb{N}: v_{n_{j}}- v_{n_{j-1}} \geq \tilde{\nu} \}. \end{split}\end{aligned}$$ Then we clearly have $\tilde{{\mathcal{N}}}=n_{{\mathcal{R}}}$. 
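To make the decomposition (\[repeat0\]) concrete, the sketch below runs both the CUSUM recursion behind (\[alterfcp\]) and the epoch construction on a short, made-up sequence of increments of $\{v_n\}$ (illustrative values only), and checks that the alarm index $\tilde{{\mathcal{N}}}$ coincides with $n_{{\mathcal{R}}}$.

```python
# Made-up increments of v_n and an illustrative threshold.
incs = [1.0, -2.0, 1.5, 1.5, -0.5, 2.0]
nu_t = 3.0

# CUSUM recursion: y_n = v_n - min_{0 <= m <= n} v_m = max(y_{n-1} + x_n, 0).
y, N_stop = 0.0, None
for n, x in enumerate(incs, start=1):
    y = max(y + x, 0.0)
    if y >= nu_t:
        N_stop = n
        break

# Epoch decomposition (repeat0): n_j = first n > n_{j-1} with
# v_n - v_{n_{j-1}} outside (0, nu_t); R = first excursion ending >= nu_t.
v = [0.0]
for x in incs:
    v.append(v[-1] + x)
epochs, R = [0], None
while R is None:
    base = epochs[-1]
    # With these fixed increments an exit from (0, nu_t) always occurs.
    nxt = next(n for n in range(base + 1, len(v))
               if not (0.0 < v[n] - v[base] < nu_t))
    epochs.append(nxt)
    if v[nxt] - v[base] >= nu_t:
        R = len(epochs) - 1
```

On this path the statistic restarts once (first excursion ends below zero) and the second excursion crosses the threshold, so $\tilde{{\mathcal{N}}}=n_{{\mathcal{R}}}$ with ${\mathcal{R}}=2$.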
Since from one epoch to the next we count at least one additional message, we trivially conclude that ${\mathcal{R}}\le\tilde{{\mathcal{N}}}$ and, therefore, ${\mathsf{E}}_\infty[{\mathcal{R}}]\le{\mathsf{E}}_\infty[\tilde{{\mathcal{N}}}]$. We can now claim that it suffices to show that $$\label{todo} {\mathsf{P}}_\infty({\mathcal{R}}>j)\ge (1-e^{-\tilde{\nu}})^j, \quad \forall j \in \mathbb{N}.$$ In order to justify this claim, observe first that ${\mathsf{E}}_\infty[\tilde{{\mathcal{N}}}]<\infty$, since $\tilde{{\mathcal{N}}}$ is a CUSUM stopping time. As a result, ${\mathsf{E}}_\infty[{\mathcal{R}}]$ is finite as well and consequently (\[todo\]) implies that $${\mathsf{E}}_\infty[\tilde{{\mathcal{N}}}] \geq {\mathsf{E}}_\infty[{\mathcal{R}}] =\sum_{j=0}^\infty{\mathsf{P}}_\infty({\mathcal{R}}>j) \geq \sum_{j=0}^\infty (1-e^{-\tilde{\nu}})^j \geq e^{\tilde{\nu}}. $$ In order to prove (\[todo\]), we start with the following observation: $$\begin{aligned} \begin{split} {\mathsf{P}}_\infty({\mathcal{R}}>j)&={\mathsf{P}}_\infty({\mathcal{R}}>j-1;v_{n_j}-v_{n_{j-1}}\le0)\\ &={\mathsf{P}}_\infty({\mathcal{R}}>j-1)-{\mathsf{P}}_\infty({\mathcal{R}}>j-1;v_{n_j}-v_{n_{j-1}}\ge\tilde{\nu}). \end{split} \label{eq:mbifla2}\end{aligned}$$ Let us now set $A:= \{{\mathcal{R}}> j-1 \, , \, v_{n_j}-v_{n_{j-1}}\ge\tilde{\nu}\}$. 
Then, it is clear that $A \in {\mathcal{C}}_{ n_{j}}$ and with a change of measure ${\mathsf{P}}_{\infty} \mapsto {\mathsf{P}}_{0}$ we obtain $$\begin{aligned} \label{teno} {\mathsf{P}}_{\infty}(A) &= \int_{A} {\mathcal{L}}_{n_{j}}^{-1} \; d{\mathsf{P}}_{0}, \quad \text{where} \quad {\mathcal{L}}_{n}:= e^{\phi_{n}+v_{n}}, \quad \forall \; n \in \mathbb{N}.\end{aligned}$$ We now argue as follows: $$\begin{aligned} \label{ten} \begin{split} {\mathsf{P}}_{\infty}(A) &= \int_{A} {\mathcal{L}}_{n_{j-1}}^{-1} \, e^{-( \phi_{n_{j}}- \phi_{n_{j-1}}) - (v_{n_{j}}-v_{n_{j-1}})} \; d{\mathsf{P}}_{0} \\ &\leq e^{-\tilde{\nu}} \, \int_{A} {\mathcal{L}}_{n_{j-1}}^{-1} \, e^{-( \phi_{n_{j}}- \phi_{n_{j-1}})} \; d{\mathsf{P}}_{0} \\ &\leq e^{-\tilde{\nu}} \, \int_{{\mathcal{R}}> j-1} {\mathcal{L}}_{n_{j-1}}^{-1} \, e^{-( \phi_{n_{j}}- \phi_{n_{j-1}})} \; d{\mathsf{P}}_{0} \\ &= e^{-\tilde{\nu}} \, \int_{{\mathcal{R}}> j-1} {\mathcal{L}}_{n_{j-1}}^{-1} \, {\mathsf{E}}_{0}[ e^{-( \phi_{n_{j}}- \phi_{n_{j-1}})} | \mathcal{C}_{n_{j-1}} ] \; d{\mathsf{P}}_{0}. \end{split} \end{aligned}$$ The first inequality is due to the fact that $v_{n_j}-v_{n_{j-1}}\ge\tilde{\nu}$ on the event $A$. The second inequality holds because $A \subset \{{\mathcal{R}}> j-1\}$, whereas the last equality follows from the law of iterated expectation and the fact that $\{{\mathcal{R}}> j-1\} \in {\mathcal{C}}_{n_{j-1}}$ and ${\mathcal{L}}_{n_{j-1}}^{-1}$ is a ${\mathcal{C}}_{n_{j-1}}$-measurable random variable. As a likelihood ratio process, $\{e^{-\phi_{n}}\}_{n \in \mathbb{N}}$ is a positive $({\mathsf{P}}_{0}, {\mathcal{C}}_{n})$-martingale and, consequently, a supermartingale.
As a result, we can apply the Optional Sampling Theorem and obtain $$\label{os} {\mathsf{E}}_{0}[ e^{-( \phi_{n_{j}}- \phi_{n_{j-1}})} \, | \, \mathcal{C}_{n_{j-1}} ] \leq 1.$$ Then, it is clear with a change of measure ${\mathsf{P}}_{0} \mapsto {\mathsf{P}}_{\infty}$ that (\[ten\]) reduces to $$\begin{aligned} \label{eleven} \begin{split} {\mathsf{P}}_{\infty}(A) &\leq e^{-\tilde{\nu}} \, \int_{{\mathcal{R}}> j-1} {\mathcal{L}}_{n_{j-1}}^{-1} \; d{\mathsf{P}}_{0} = e^{-\tilde{\nu}} \, {\mathsf{P}}_{\infty}( {\mathcal{R}}> j-1) . \end{split} \end{aligned}$$ Substituting the outcome of (\[eleven\]) in (\[eq:mbifla2\]) and applying it repeatedly yields $${\mathsf{P}}_\infty({\mathcal{R}}>j)\ge(1-e^{-\tilde{\nu}}) \, {\mathsf{P}}_\infty({\mathcal{R}}>j-1)\ge(1-e^{-\tilde{\nu}})^j,$$ which completes the proof.

{#app:D}

From the definition of ${\bar{\Lambda}^k}_{j}$ in (\[lambdas\]) and a change of measure ${\mathsf{P}}_{\infty} \mapsto {\mathsf{P}}_{0}$ we have $$\begin{aligned} e^{-{\bar{\Lambda}^k}_{j}} &= \frac{{\mathsf{P}}_{\infty}( z_{1}^k=j)}{{\mathsf{P}}_{0}( z_{1}^k=j)} = e^{-{\bar{\Delta}^k}_{j}} \, \frac{{\mathsf{E}}_{0}[ e^{-(\ell^k_{1}- {\bar{\Delta}^k}_{j})} {\mathbbm{1}_{\{z_{1}^k=j\}}}]}{{\mathsf{P}}_{0}( z_{1}^k=j)} = e^{-{\bar{\Delta}^k}_{j}} \, {\mathsf{E}}_{0}[ e^{-(\ell^k_{1}- {\bar{\Delta}^k}_{j})} \, | \, z_{1}^k=j].\end{aligned}$$ Taking logarithms we obtain the first equality in (\[import\]), whereas the second one can be shown in a similar way. It is clear that ${\overline{R}^k}_{j}, {\underline{R}^k}_{j}>0$ for every $1 \leq j \leq d$ and that ${\overline{R}^k}_{j}, {\underline{R}^k}_{j} \leq \epsilon^{k}$ for every $1 \leq j \leq d-1$, thus, it remains to prove (\[import2\]). We will prove only the first relationship in it, as the second one can be shown in a similar way.
From the conditional Jensen inequality we obtain $$\begin{aligned} \label{qqq} {\overline{R}^k}_{d} &\leq {\mathsf{E}}_{0}[ \ell^k_{1}- {\bar{\Delta}^k}_{d} \, | \, z_{1}^k=d ] = \frac{{\mathsf{E}}_{0}[ (\ell^k_{1}- {\bar{\Delta}^k}_{d}) \, {\mathbbm{1}_{\{z_{1}^k=d\}}}]}{{\mathsf{P}}_{0}(z_{1}^k=d)}\end{aligned}$$ and from (\[eq:levels\]) we have $$\label{ov111} {\mathsf{P}}_{0}(z_{1}^k=d)= {\mathsf{P}}_{0}(z_{1}^k=d|z_{1}^{k}>0) \, {\mathsf{P}}_{0}(z_{1}^{k}>0) = \frac{{\mathsf{P}}_{0}(z_{1}^{k}>0)}{d} = \frac{1-o(1)}{d},$$ where $o(1)$ is a term that vanishes as ${\bar{\Delta}^k}, {\underline{\Delta}^k}\rightarrow \infty$ and does not depend on $d$. Moreover, since ${\bar{\Delta}^k}_{d}= {\bar{\Delta}^k}+{\bar{\epsilon}^k}_{d-1}$ we have $$\begin{aligned} {\mathsf{E}}_{0}[ (\ell^k_{1}- {\bar{\Delta}^k}_{d}) \, {\mathbbm{1}_{\{z_{1}^k=d\}}}] &= \int_{{\bar{\epsilon}^k}_{d-1}}^{{\bar{\epsilon}^k}_{d}} {\mathsf{P}}_0(\ell_{1}^{k}>{\bar{\Delta}^k}+x) \, dx \\ &\leq \int_{{\bar{\epsilon}^k}_{d-1}}^{{\bar{\epsilon}^k}_{d}} {\mathsf{P}}_0(\ell_{1}^{k}>{\bar{\Delta}^k}+x | \ell_{1}^{k} \geq {\bar{\Delta}^k}) \, dx. \end{aligned}$$ Setting $D:={\mathsf{E}}_0[((u_1^k)^+)^2]/I_{0}^{k}$, which is clearly a finite quantity since ${\mathsf{E}}_0[(u_1^k)^2]<\infty$ (recall also that $I_{0}^{k}= {\mathsf{E}}_0[u_1^k]$), we can apply [@lorden Theorem 4, Eq. (13)] and obtain the following upper bound for the probability inside the integral: $$\begin{aligned} {\mathsf{P}}_0(\ell_{1}^{k}-{\bar{\Delta}^k}>x | \ell_{1}^{k} \geq {\bar{\Delta}^k}) &\leq\frac{1}{I_0^k}\left(\frac{{\bar{\Delta}^k}+D}{{\bar{\Delta}^k}+x}\right){\mathsf{E}}_0[(2u_1^k-x){\mathbbm{1}_{\{u_1^k\geq x\}}}] \\ &\leq \Theta(1) \, {\mathsf{E}}_0[u_1^k{\mathbbm{1}_{\{u_1^k\geq x\}}}], \end{aligned}$$ where $\Theta(1)$ is a term that is independent of $d$ and is bounded from above and below as ${\bar{\Delta}^k}, {\underline{\Delta}^k}\rightarrow \infty$.
Then, applying Fubini’s theorem we obtain $$\begin{aligned} \label{ov222} \begin{split} {\mathsf{E}}_{0}[ (\ell^k_{1}- {\bar{\Delta}^k}_{d}) \, {\mathbbm{1}_{\{z_{1}^k=d\}}}] &\leq \Theta(1) \, \int_{{\bar{\epsilon}^k}_{d-1}}^{{\bar{\epsilon}^k}_{d}} {\mathsf{E}}_0[u_1^k{\mathbbm{1}_{\{u_1^k\geq x\}}}]dx \\ &= \Theta(1) \, {\mathsf{E}}_0[u_1^k(u_1^k-{\bar{\epsilon}^k}_{d-1})^+] \leq \Theta(1) \, {\mathsf{E}}_0[(u_1^k)^2{\mathbbm{1}_{\{u_1^k>{\bar{\epsilon}^k}_{d-1}\}}}]. \end{split}\end{aligned}$$ Combining (\[qqq\]), (\[ov111\]) and (\[ov222\]) completes the proof. \[Proof of Lemma \[lem:1\]\] From (\[ells\]) and (\[import\]) we have $$\begin{aligned} \label{error} \begin{split} \ell^k_1- \tilde{\ell}^k_1 &= \sum_{j=1}^{d}\Bigl[ (\ell^k_1-{\bar{\Delta}^k}_{j} -{\overline{R}^k}_j) {\mathbbm{1}_{\{z_{1}^{k}= j\}}} + (\ell^k_1+ {\underline{\Delta}^k}_{j} +{\underline{R}^k}_j) {\mathbbm{1}_{\{z_{1}^{k}=-j\}}}\Bigr] \\ &\leq \sum_{j=1}^{d}\Bigl[ (\ell^k_1-{\bar{\Delta}^k}_{j}) {\mathbbm{1}_{\{z_{1}^{k}= j\}}} + {\underline{R}^k}_j {\mathbbm{1}_{\{z_{1}^{k}=-j\}}}\Bigr] \\ &\leq \sum_{j=1}^{d-1} \Bigl[ \epsilon^{k} {\mathbbm{1}_{\{z_{1}^{k}= j\}}} + \epsilon^{k} {\mathbbm{1}_{\{z_{1}^{k}=-j\}}}\Bigr] + (\ell^k_1-{\bar{\Delta}^k}_{d}) {\mathbbm{1}_{\{z_{1}^{k}= d\}}} + {\underline{R}^k}_{d} {\mathbbm{1}_{\{z_{1}^{k}=-d\}}}, \end{split} \end{aligned}$$ where the first inequality holds because ${\overline{R}^k}_{j}>0$ and $\ell^k_n+ {\underline{\Delta}^k}_{j}<0$ on $\{z_{n}^{k}=-j\}$ and the second one because $\ell_{n}^{k} -{\bar{\Delta}^k}_{j} \leq \epsilon^{k}$ on $\{z_{n}^{k}=j\}$ and ${\underline{R}^k}_{j} \leq \epsilon^{k}$ for every $1 \leq j \leq d-1$.
From (\[eq:levels\]) it follows that for any $1 \leq j \leq d$ $$\begin{aligned} {\mathsf{P}}_{0}(z_{1}^{k}= j) &\leq {\mathsf{P}}_{0}(z_{1}^{k}= j| z_{1}^{k}>0)= 1/d , \\ {\mathsf{P}}_{0}(z_{1}^{k}=-j) &= {\mathsf{E}}_{\infty}[e^{\ell_{1}^{k}} {\mathbbm{1}_{\{z_{1}^{k}=-j\}}}] \leq {\mathsf{P}}_{\infty}(z_{1}^{k}=-j) \leq {\mathsf{P}}_{\infty}(z_{1}^{k}=-j| z_{1}^{k}<0 )= 1/d,\end{aligned}$$ therefore, taking expectations in (\[error\]) we obtain $${\mathsf{E}}_{0}[\ell^k_1- \tilde{\ell}^k_1] \leq 2\epsilon^{k} \frac{d-1}{d} + {\mathsf{E}}_{0}[ (\ell^k_1-{\bar{\Delta}^k}_{d}) {\mathbbm{1}_{\{z_{1}^{k}= d\}}}] + \frac{{\underline{R}^k}_{d}}{d}.$$ Using now (\[ov222\]) and (\[import2\]), we obtain upper bounds for the second and third terms on the right-hand side, respectively, which lead to (\[thetak\]). This expression implies that $\theta^{k} \rightarrow 0$ as $d \rightarrow \infty$, since $\epsilon^{k} \rightarrow 0$ as $d \rightarrow \infty$ and $u_{1}^{k}$ has a finite second moment.

{#app:E}

In this Appendix, we state and prove Lemma \[lem:8\], which is used in the proof of Theorem \[th:2\]. In order to do so, we need an asynchronous version of Wald’s identity (Lemma \[lem:4\]), which is very useful for our purposes, as well as the following lemma. We set: $$\Lambda_{\max} :=\max_{1 \leq k \leq K} \max_{1 \leq j \leq d} \max \{{\bar{\Lambda}^k}_{j}, {\underline{\Lambda}^k}_{j}\}.$$ \[lem:7\] If ${\mathsf{E}}_{i}[(u_{1}^{k})^{2}]<\infty$ for every $1 \leq k \leq K$, then as ${\Delta}\rightarrow \infty$ we have $\Lambda_{\max} =\Theta({\Delta})$ and $$\min_{1 \leq k \leq K} {\mathsf{E}}_{0}[\tilde{\ell}_{1}^{k}] \geq \Theta({\Delta}).$$ From Lemma \[lem:0\] it is clear that ${\overline{R}^k}_{j}, {\underline{R}^k}_{j}=O(1)$ and, consequently, ${\bar{\Lambda}^k}_{j}, {\underline{\Lambda}^k}_{j}=\Theta({\Delta})$ as ${\Delta}\rightarrow \infty$ for every $j=1, \ldots,d$, which proves that $\Lambda_{\max} =\Theta({\Delta})$.
Furthermore, since ${\bar{\Lambda}^k}_{j} \geq {\bar{\Delta}^k}_{j} \geq {\bar{\Delta}^k}$ and ${\underline{\Lambda}^k}_{j} \leq \Lambda_{\max}$ we have $$\begin{aligned} {\mathsf{E}}_{0}[\tilde{\ell}_{1}^{k}] &= \sum_{j=1}^{d} [ {\bar{\Lambda}^k}_{j} \, {\mathsf{P}}_{0}(z_{1}^{k}=j)- {\underline{\Lambda}^k}_{j} \, {\mathsf{P}}_{0}(z_{1}^{k}=-j) ] \\ &\geq {\bar{\Delta}^k}\, {\mathsf{P}}_{0}(z_{1}^{k}>0) - \Lambda_{\max} \, {\mathsf{P}}_{0}(z_{1}^{k}<0) \\ &= {\bar{\Delta}^k}- ({\bar{\Delta}^k}+ \Lambda_{\max} ) \, {\mathsf{P}}_{0}(z_{1}^{k}<0),\end{aligned}$$ thus, it suffices to show that ${\mathsf{P}}_{0}(z_{1}^{k}<0)=o(1/{\Delta})$. Indeed, with a change of measure we have $${\bar{\Delta}^k}{\mathsf{P}}_{0}(z_{1}^{k}<0) = {\bar{\Delta}^k}\, {\mathsf{E}}_{\infty}[e^{\ell_{1}^{k}} \, {\mathbbm{1}_{\{\ell_{1}^{k}<-{\underline{\Delta}^k}\}}}] \leq {\bar{\Delta}^k}\, e^{-{\underline{\Delta}^k}}$$ and the upper bound clearly goes to 0 as ${\Delta}\rightarrow \infty$. \[lem:4\] Consider a generic sequence $\{\zeta_{n}^k\}$, where each $\zeta_{n}^k$ is an arbitrary (Borel) function of the triplet $(\tau_{n}^k-\tau_{n-1}^k, z_{n}^k, \ell_{n}^{k})$. Thus, $\{\zeta_{n}^k\}$ is a sequence of independent and identically distributed random variables under both ${\mathsf{P}}_{0}$ and ${\mathsf{P}}_{\infty}$.
If ${\mathcal{T}}$ is a ${\mathsf{P}}_{0}$-integrable $\{{\mathscr{F}}_t\}$-stopping time and ${\mathsf{E}}_{0}[|\zeta_{1}^k|]< \infty$, then $$\label{abra1} {\mathsf{E}}_{0} \left[ \sum_{n=1}^{m_{{\mathcal{T}}}^k+1} \zeta_{n}^k \right] = {\mathsf{E}}_{0} [ m_{{\mathcal{T}}}^k+1] {\mathsf{E}}_{0}[\zeta_{1}^k].$$ If moreover $\zeta_{n}^k\geq 0$, then $$\label{abra2} {\mathsf{E}}_{0} \left[ \sum_{n=1}^{m_{{\mathcal{T}}}^k} \zeta_{n}^k \right] \leq ({\mathsf{E}}_{0} [m_{{\mathcal{T}}}^k]+1) \; {\mathsf{E}}_{0}[\zeta_{1}^k].$$ Finally, if $|\zeta_{n}^k|\leq M^{k}$, where $M^{k}$ is some finite constant, then $$\label{abra3} {\mathsf{E}}_{0} \left[ \sum_{n=1}^{m_{{\mathcal{T}}}^k} \zeta_{n}^k \right] \geq {\mathsf{E}}_{0} [m_{{\mathcal{T}}}^k] \; {\mathsf{E}}_{0}[\zeta_{1}^k] - 2M^{k}.$$ The proof can be found in [@fel]. \[lem:8\] If ${\mathsf{E}}_{i}[(u_{1}^{k})^{2}]<\infty$ for every $1 \leq k \leq K$, then as ${\Delta}\rightarrow \infty$ $$\begin{aligned} \tilde{u}_{\tilde{{\mathcal{S}}}} &\leq \log\gamma+ K \Theta({\Delta}) \label{flas1} \\ {\mathsf{E}}_0[u_{\tilde{{\mathcal{S}}}}-\tilde{u}_{\tilde{{\mathcal{S}}}}] &\leq K \Theta({\Delta}) + \frac{\theta}{\Theta({\Delta})} \, \log \gamma. \label{flas2}\end{aligned}$$ In order to prove (\[flas1\]), it suffices to observe that the overshoot $\tilde{y}_{\tilde{{\mathcal{S}}}}-\tilde{\nu}$ cannot be larger than $K \Lambda_{\max}$, therefore, $$\tilde{u}_{\tilde{{\mathcal{S}}}} \leq \tilde{y}_{\tilde{{\mathcal{S}}}} \leq \tilde{\nu}+ K \, \Lambda_{\max} \leq \log \gamma + K \Theta({\Delta}) ,$$ where the last inequality follows from Lemmas \[lem3\] and \[lem:7\]. 
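As a quick sanity check on the identity (\[abra1\]) of Lemma \[lem:4\], Wald’s identity can be verified by exact computation on a toy example of our own (unrelated to the sensor model): $\zeta_n$ i.i.d. uniform on $\{1,2\}$ and the stopping time $N=\inf\{n:\zeta_n=2\}$, for which ${\mathsf{E}}[\sum_{n=1}^{N}\zeta_n]={\mathsf{E}}[N]\,{\mathsf{E}}[\zeta_1]$.

```python
# Toy check of Wald's identity E[sum_{n=1}^N zeta_n] = E[N] * E[zeta_1].
# zeta_n i.i.d. uniform on {1, 2}; N = first n with zeta_n = 2, so P(N = n) = 2^-n.
TRUNC = 60  # the tail beyond length 60 carries probability 2^-60, negligible

E_N = sum(n * 0.5**n for n in range(1, TRUNC + 1))          # E[N] -> 2
E_zeta = 1.5                                                # E[zeta_1]
# On the event {N = n} the realized sum is (n - 1) * 1 + 2 = n + 1.
E_sum = sum((n + 1) * 0.5**n for n in range(1, TRUNC + 1))  # E[sum] -> 3

wald_lhs, wald_rhs = E_sum, E_N * E_zeta
```

The two sides agree up to the (tiny) truncation error, illustrating why the identity is exact for integrable stopping times.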
In order to prove (\[flas2\]), we observe that for any $t$ and $k$ we have $$\begin{aligned} u^k_t-\tilde{u}^k_t &=u^k_t-u^k_{\tau^k(t)}+u^k_{\tau^k(t)}-\tilde{u}^k_{\tau^k(t)} \leq {\bar{\Delta}^k}+ \sum_{n=1}^{m^k_t} [ \ell_{n}^{k}- \tilde{\ell}_{n}^{k}].\end{aligned}$$ If we now replace $t$ with $\tilde{{\mathcal{S}}}$, take expectations with respect to ${\mathsf{P}}_0$ and apply (\[abra2\]) and Lemma \[lem:1\], we obtain $${\mathsf{E}}_0[u^k_{\tilde{{\mathcal{S}}}}-\tilde{u}^k_{\tilde{{\mathcal{S}}}}] \leq {\bar{\Delta}^k}+ ({\mathsf{E}}_0[m^k_{\tilde{{\mathcal{S}}}}]+1) \, \theta^{k} = {\bar{\Delta}^k}+ \theta^{k}+ \theta^{k} {\mathsf{E}}_0[m^k_{\tilde{{\mathcal{S}}}}].$$ Since from (\[thetak\]) it is clear that $\theta^{k}={\mathcal{O}}(1)$ as ${\Delta}\rightarrow \infty$, summing over $k$ we obtain $$\begin{aligned} \label{sss} \begin{split} {\mathsf{E}}_0[u_{\tilde{{\mathcal{S}}}}-\tilde{u}_{\tilde{{\mathcal{S}}}}] &\leq \sum_{k=1}^{K} ({\bar{\Delta}^k}+\theta^{k}) + \sum_{k=1}^{K} \theta^{k} \, {\mathsf{E}}_0[m^{k}_{\tilde{{\mathcal{S}}}}] \leq K \Theta({\Delta}) + \theta \, {\mathsf{E}}_0[m_{\tilde{{\mathcal{S}}}}] , \end{split}\end{aligned}$$ where $m_{t}:=\sum_{k=1}^{K} m_{t}^{k}$.
Now, it is obvious that $|\tilde{\ell}_{n}^{k}| \leq \Lambda_{\max}$ for every $n$ and $k$, therefore applying (\[abra3\]) we have $${\mathsf{E}}_{0}[\tilde{u}^{k}_{\tilde{{\mathcal{S}}}}]= {\mathsf{E}}_{0} \Bigl[ \sum_{n=1}^{m_{\tilde{{\mathcal{S}}}}^k} \tilde{\ell}_{n}^{k} \Bigr] \geq {\mathsf{E}}_{0}[m^{k}_{\tilde{{\mathcal{S}}}}] \, {\mathsf{E}}_{0}[\tilde{\ell}_{1}^{k}] -2 \Lambda_{\max}.$$ Thus, summing over $k$ we obtain $$\begin{aligned} {\mathsf{E}}_{0}[\tilde{u}_{\tilde{{\mathcal{S}}}}] & \geq \Bigl(\min_{1 \leq k \leq K} {\mathsf{E}}_{0}[\tilde{\ell}_{1}^{k}] \Bigr) \, {\mathsf{E}}_{0}[m_{\tilde{{\mathcal{S}}}}] - 2 \, K \, \Lambda_{\max} \end{aligned}$$ and, consequently, $$\begin{aligned} {\mathsf{E}}_{0}[m_{\tilde{{\mathcal{S}}}}] &\leq \frac{{\mathsf{E}}_{0}[\tilde{u}_{\tilde{{\mathcal{S}}}}] + 2 K \, \Lambda_{\max}}{\min_{1 \leq k \leq K} {\mathsf{E}}_{0}[\tilde{\ell}_{1}^{k}]} \leq \frac{ \log \gamma + K \Theta({\Delta})}{\Theta({\Delta})}= \frac{ \log \gamma}{\Theta({\Delta})}+ K \, \Theta(1),\end{aligned}$$ where the second inequality is due to (\[flas1\]) and Lemma \[lem:7\]. Combining the latter relationship with (\[sss\]) we obtain the desired result. [99]{} <span style="font-variant:small-caps;">Basseville, M. and Nikiforov, I. V.</span> (1993). [*Detection of Abrupt Changes: Theory and Applications.*]{} Prentice-Hall, Englewood Cliffs, NJ. <span style="font-variant:small-caps;">Beibel, M.</span> (1996). A note on Ritov’s Bayes approach to the minimax property of the CUSUM procedure. *Ann. Stat.* **24**(4) 1804–1812. <span style="font-variant:small-caps;">Beibel, M.</span> (1997). Sequential change-point detection in continuous time when the post-change drift is unknown. *Bernoulli* **3**(4) 457–478. <span style="font-variant:small-caps;">Chronopoulou, A. and Fellouris, G.</span> (2013). Optimal sequential change detection for fractional diffusion-type processes. *J. App. Prob.* **50**(1). <span style="font-variant:small-caps;">Crow, R. W.
and Schwartz, S. C.</span> (1996). Quickest detection for sequential decentralized decision systems. *IEEE Trans. Aerosp. Electron. Syst.* **32** 267–283. <span style="font-variant:small-caps;">Dayanik, S., Poor, H. V. and Sezer, S. O.</span> (2008). Multisource Bayesian sequential change detection. *Ann. Appl. Probab.* **18**(2) 552–590. <span style="font-variant:small-caps;">Fellouris, G. and Moustakides, G. V.</span> (2011). Decentralized sequential hypothesis testing using asynchronous communication. *IEEE Trans. Inf. Th.* **57**(1) 534–548. <span style="font-variant:small-caps;">Gapeev, P.V.</span> (2005). The disorder problem for compound Poisson processes with exponential jumps. *Ann. Appl. Probab.* **15** 487–499. <span style="font-variant:small-caps;">Lai, T. L.</span> (1995). Sequential change-point detection in quality control and dynamical systems. *J. Roy. Statist. Soc. Ser. B* **57** 613–658. <span style="font-variant:small-caps;">Liptser, R.L. and Shiryaev, A.N.</span> (2001) [*Statistics of Random Processes II, Applications, 2nd ed*]{}. New York: Springer. <span style="font-variant:small-caps;">Lorden, G.</span> (1970). On excess over the boundary. [*Ann. Math. Stat.*]{} **41**(2) 520–527. <span style="font-variant:small-caps;">Lorden, G.</span> (1971). Procedures for reacting to a change in distribution. [*Ann. Math. Stat.*]{} **42** 1897–1908. <span style="font-variant:small-caps;">Mei, Y.</span> (2005). Information bounds and quickest change detection in decentralized decision systems. [*IEEE Tr. Inf. Th.*]{} **51**(7), 2669–2681, <span style="font-variant:small-caps;">Moustakides, G.V.</span> (1986). Optimal stopping times for detecting changes in distributions. [*Ann. Stat.*]{} **14**(4) 1379–1387. <span style="font-variant:small-caps;">Moustakides, G.V.</span>(1998). Quickest detection of abrupt changes for a class of random processes. [*IEEE Tran. Inf. 
Th.*]{} **44**(5) 1965–1968. <span style="font-variant:small-caps;">Moustakides, G.V.</span> (2004). Optimality of the CUSUM procedure in continuous time. [*Ann. Stat.*]{} **32**(1) 302–315. <span style="font-variant:small-caps;">Moustakides, G.V.</span> (2006). Decentralized CUSUM change detection. [*Proc. 9th IEEE Int. Conf. Inf. Fusion*]{}, Florence, Italy. <span style="font-variant:small-caps;">Page, E. S.</span> (1954). Continuous inspection schemes. [*Biometrika*]{} **41** 100–115. <span style="font-variant:small-caps;">Peskir, G. and Shiryaev, A.N.</span> (2002). Solving the Poisson disorder problem. [*In Advances in Finance and Stochastics. Essays in Honour of Dieter Sondermann (K. Sandmann and P. Schoenbucher, eds.) 295–312.*]{} Springer, Berlin. <span style="font-variant:small-caps;">Pollak, M.</span> (1985). Optimal detection of a change in distribution. [*Ann. Stat.*]{} **13**, 206–227. <span style="font-variant:small-caps;">Polunchenko, A. and Tartakovsky, A.G.</span> (2012). State-of-the-art in sequential change-point detection. [*Methodol. Comput. Appl. Probab.*]{} **14**(3), 649–684. <span style="font-variant:small-caps;">Poor, H.V. and Hadjiliadis, O.</span> (2009). [*Quickest Detection.*]{} Cambridge University Press, UK. <span style="font-variant:small-caps;">Raghunathan, V., Schurgers, C., Park, S. and Srivastava, M.B.</span> (2002). Energy-aware wireless microsensor networks. [*IEEE Sig. Proc. Mag.*]{} **19**(2) 40–50. <span style="font-variant:small-caps;">Sezer, S.O.</span> (2010). On the Wiener disorder problem. [*Ann. Appl. Probab.*]{} **20**(4) 1537–1566. <span style="font-variant:small-caps;">Shewhart, W.A.</span> (1931). *Economic Control of Quality of Manufactured Product.* Van Nostrand, New York. <span style="font-variant:small-caps;">Shiryaev, A. N.</span> (1978). [*Optimal Stopping Rules.*]{} Springer, New York. — (1996). Minimax optimality of the method of cumulative sums (CUSUM) in the case of continuous time. [*Russ. Math.
Surv.*]{} **51** 750–751. — (2011). Quickest detection problems: Fifty years later. [*Seq. Anal.*]{} **29**(4) 345–385. <span style="font-variant:small-caps;">Tartakovsky, A. G. and Veeravalli, V.V.</span> (2008). Asymptotically optimal quickest change detection in distributed sensor systems. [*Seq. Anal.*]{} **27** 441–475. <span style="font-variant:small-caps;">Tartakovsky, A.G., Pollak, M. and Polunchenko, A.S.</span> (2011). Third-order Asymptotic Optimality of the Generalized Shiryaev-Roberts Detection Procedures. [*Theory Probab. Appl.*]{} **58**(3) 534–565. <span style="font-variant:small-caps;">Tsitsiklis, J. N.</span> (1993). Extremal properties of likelihood-ratio quantizers. [*IEEE Trans. Comm.*]{} **41** 550–558. <span style="font-variant:small-caps;">Veeravalli, V.V.</span> (1999). Sequential decision fusion: Theory and applications. [*J. Fran. Inst.*]{} **336** 301–322. — (2001). Decentralized quickest change detection. [*IEEE Tran. Inf. Th.*]{} **47**(4) 1657–1665. <span style="font-variant:small-caps;">Siegmund, D.</span> (1985). [*Sequential Analysis, Tests and Confidence Intervals.*]{} Springer-Verlag, New York. <span style="font-variant:small-caps;">Yilmaz, Y., Moustakides, G.V. and Wang, X.</span> (2012). Cooperative sequential spectrum sensing based on event-triggered sampling. [*IEEE Trans. Signal Process.*]{} **60**(9) 4509–4524.
--- abstract: 'We discuss the implications of a recently proposed pattern of Lorentz symmetry violation on very high-energy cross sections. As a consequence of the breaking of local Lorentz invariance by the introduction of a fundamental length, $a$ , the kinematics is modified and the properties of final states are fundamentally different in collider-like (two incoming particles with equal, opposite momenta with respect to the vacuum rest frame) and fixed-target (one of the incoming particles at rest with respect to the vacuum rest frame) situations. In the first case, the properties of the allowed final states are similar to those of relativistic kinematics, as long as the relevant wave vectors are much smaller than the critical wave vector scale $a^{-1}$ . But, if one of the incoming particles is close to rest in the vacuum rest frame, energy conservation reduces the final-state phase space at very high energy and can lead to a sharp fall of cross sections starting at incoming-particle wave vectors well below the inverse of the fundamental length. Then, the Froissart bound may cease to be relevant, as total cross sections seem to become much smaller than would be allowed by local, Lorentz-invariant field theory. Important experimental implications of the new scenario are found for cosmic-ray astrophysics and for very high-energy cosmic rays reaching the earth.'
---

[**LORENTZ SYMMETRY VIOLATION**]{}\
\
[^1]\

Introduction
============

In two previous papers (Gonzalez-Mestres, 1997a and 1997b), we suggested that, as a consequence of nonlocal dynamics at Planck scale or at some other fundamental length scale, Lorentz symmetry violation can result in a modification of the equation relating energy and momentum which, in the vacuum rest frame, would read: $$E~=~(2~\pi )^{-1}~h~c~a^{-1}~e~(k~a)$$ where $E$ is the energy of the particle, $h$ the Planck constant, $c$ the speed of light, $a$ a fundamental length scale (that we can naturally identify with the Planck length, but other choices of the fundamental distance scale are possible), $k$ the wave vector modulus and $[e~(k~a)]^2$ is a convex function of $(k~a)^2$ obtained from nonlocal vacuum dynamics. Rather generally, we find that, at wave vector scales below the inverse of the fundamental length scale, Lorentz symmetry violation in relativistic kinematics can be parameterized by writing: $$e~(k~a)~\simeq ~[(k~a)^2~-~\alpha ~(k~a)^4~+~(2~\pi ~a)^2~h^{-2}~m^2~c^2]^{1/2}$$ where $\alpha $ is a positive constant between $10^{-1}$ and $10^{-2}$ . At high energy, we can write: $$e~(k~a)~\simeq ~k~a~[1~-~\alpha ~(k~a)^2/2]~+~2~\pi ^2~h^{-2}~k^{-1}~a~m^2~c^2$$ and, in any case, we expect observable kinematical effects when the term $\alpha (ka)^3/2$ becomes as large as the term $2~\pi ^2~h^{-2}~k^{-1}~a~m^2~c^2$ . Assuming that, apart from the value of the mass, expression (2) is universal for all existing particles whose critical speed in vacuum is equal to the speed of light in the Lorentz-invariant limit, we found three important effects: a\) The Greisen-Zatsepin-Kuzmin (GZK) cutoff on very high-energy cosmic protons and nuclei (Greisen, 1966; Zatsepin and Kuzmin, 1966) no longer applies. b\) Unstable particles with at least two massive particles in the final state of all their decay channels become stable at very high energy.
c\) In any case, unstable particles live longer than naively expected with exact Lorentz invariance and, at high enough energy, the effect becomes much stronger than previously estimated for nonlocal models (Anchordoqui, Dova, Gómez Dumm and Lacentre, 1997) ignoring the small violation of relativistic kinematics. Furthermore, velocity reaches its maximum at $k~\approx ~(4\pi ^2~\alpha ^{-1}/3)^{1/4}~(m~c~h^{-1}~a^{-1})^{1/2}$ . Above this value, an increase of momentum amounts to deceleration. In our ansatz, observable effects of local Lorentz invariance breaking arise, at leading level, well below the critical wave vector scale $a^{-1}$ due to the fact that, contrary to previous models (f.i. Rédei, 1967), we directly apply non-locality to particle propagators and not only to the interaction hamiltonian. In contrast with previous patterns (f.i. Blokhintsev, 1966), $s-t-u$ kinematics ceases to make sense and the motion of the global system with respect to the vacuum rest frame plays a crucial role. The physics of elastic two-body scattering will depend on five kinematical variables. Noncausal dispersion relations (Blokhintsev and Kolerov, 1964) should be reconsidered, taking into account the departure from relativistic kinematics. In this note, we would like to discuss another important consequence of the new kinematics, i.e. the appearance of strong limitations in the allowed phase space for final states of two-body collisions, especially when the target is moving slowly with respect to the vacuum rest frame. As in previous papers (Gonzalez-Mestres, 1997a and 1997b), we assume that $c$ and $\alpha $ are universal constants for all particles under consideration. If this were not the case, our analysis would require modifications but other new physical phenomena would equally emerge. Such an alternative will be discussed in a forthcoming paper.
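The position of this velocity maximum can be checked numerically from the high-energy expansion (3). The sketch below is our own illustration (SI units; the Planck-like length, $\alpha = 0.1$ and the proton-like mass are arbitrary assumed values): it scans the velocity deficit $1 - v/c = 3\,\alpha (ka)^2/2 + 2\pi ^2 h^{-2} m^2 c^2 k^{-2}$, obtained from $v = dE/dp$, and compares its minimizer with the quoted closed form.

```python
import math

# Illustrative parameter choices (assumptions, not values from the text):
h, c = 6.626e-34, 2.998e8             # SI
a, alpha, m = 1.6e-35, 0.1, 1.67e-27  # Planck-like length, alpha, proton-like mass

def deficit(k):
    # 1 - v/c for v = dE/dp computed from expansion (3);
    # its minimum in k marks the maximum of the velocity.
    return 1.5 * alpha * (k * a) ** 2 + 2 * math.pi**2 * m**2 * c**2 / (h * k) ** 2

# Closed form quoted in the text: k = (4 pi^2 / (3 alpha))^(1/4) (m c / (h a))^(1/2).
k_max = (4 * math.pi**2 / (3 * alpha)) ** 0.25 * (m * c / (h * a)) ** 0.5

# Grid scan around k_max: the deficit should be minimized at (or next to) k_max.
ks = [k_max * (0.5 + 0.01 * i) for i in range(101)]
k_best = min(ks, key=deficit)
```

Working with the deficit rather than $v$ itself avoids the loss of precision that would come from subtracting two nearly equal velocities in floating point.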
The new kinematics
==================

No special constraint seems to arise from (2) if, in the vacuum rest frame, two particles with equal, opposite momenta of modulus $p$ with $\alpha ~(k~a)^2~ \ll ~1$ collide to produce a multiparticle final state. When the term $\alpha ~(k~a)^2~p~c/2$ becomes $\approx m^2~c^3~p^{-1}/2$ or larger, the new kinematics favours large momenta and allows for new final-state phase space, as compared to relativistic kinematics. But, as a consequence of Lorentz symmetry violation (the required transformation would have relative speed $v~\simeq ~c$), the situation becomes fundamentally different at very high energy if one of the incoming particles is close to rest with respect to the “absolute” frame where formulae (1) - (3) apply. Assume a very high-energy particle (particle 1) with momentum ${\vec {\bf p}}$ , impinging on a particle at rest (particle 2) in the vacuum rest frame. We take both particles to have mass $m$ , and $p~\gg ~mc$ . In relativistic kinematics, we would have elastic final states where particle 1 has, with respect to the direction of ${\vec {\bf p}}$ , longitudinal momentum $p_{1,L}~\gg ~mc$ and particle 2 has longitudinal momentum $p_{2,L}~\gg ~mc$ with $p_{1,L}~+~p_{2,L}~=~p$ . A total transverse energy $E_T~\simeq ~mc^2$ would still be left for the outgoing particles. However, the situation is drastically modified if the kinematics is given by expressions (1) - (3) and if $\alpha ~(k~a)^2~p$ ($k$ being the modulus of the wave vector of the incoming particle) becomes of the same order as $m~c$ or larger. As the energy increases, stronger and stronger limitations of the available final-state phase space appear: with the approximation (3), the final-state configuration $p_{1,L}~=~p~-~p_{2,L}~=~(1~-~\lambda )~p$ becomes kinematically forbidden for $\alpha ~(k~a)^2~p~>~2~m~c~\lambda ^{-1} (1~-~\lambda )^{-1}/3$ .
Thus, for momenta above $\approx (m~c~a^{-2}~h^2)^{1/3}$, “hard” interactions become severely limited by kinematical constraints. Similarly, with the same initial state, a multiperipheral final-state configuration with $N$ particles ($N~>~2$) of mass $m$ and longitudinal momenta $g^{i-1}~p'_L$ ($i~=~1,...,N$, $g~>~1$), where $p'_L~=~p~(g~-~1)~(g^{N}~-~1)^{-1}$, $g^N~\gg ~1$ and $p'_L~\gg m~c$, would have in standard relativity an allowed total transverse energy $E_T~(N~,~g)~\simeq ~ m~c^2~[1~-~m~c~(2~p'_L)^{-1}~(1~-~g^{-1})^{-1}]$, which is positive definite. Again, using the new kinematics and the approximation (3), we find that such a longitudinal final-state configuration is forbidden for values of the incoming momentum such that $\alpha ~(k~a)^2~p~c~>~2~(3~g)^{-1}~(1~+~g~+~g^2)~E_T~(N~,~g)$. The above, or similar, considerations apply to strong interactions as well as to electromagnetic processes. For the initial-state configuration where the target is at rest in the vacuum rest frame, and compared to standard expectations based on relativistic kinematics, a [*sharp fall of elastic, multiparticle and total cross sections*]{} can be expected at very high energy. For “soft” strong interactions, the approach where the two-body total cross section is least sensitive to final-state phase space is, in principle, that based on dual resonance models, considering the imaginary part of the elastic amplitude as being dominated by the shadow of the production of pairs of very heavy resonances of masses $M_1$ and $M_2$ of order $\approx (p~m~c^3/2)^{1/2}$ in the direct channel (Aurenche and Gonzalez-Mestres, 1978 and 1979). But, even in this scenario, we find important limitations to the allowed values of $M_1$ and $M_2$, and to the two-resonance phase space, when $\alpha ~(k~a)^2~p$ becomes $\approx m~c$ or larger. 
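The multiperipheral condition above can be sketched numerically in the same spirit. Again the parameter values ($\alpha = 0.1$, $a = 10^{-35}$ m, the toy choices $N = 8$, $g = 2$) are assumptions for illustration, not values from the paper:

```python
# Sketch of the condition forbidding the N-particle multiperipheral
# configuration: alpha*(k*a)^2 * p * c  >  2*(1 + g + g^2)/(3g) * E_T(N, g).
# Illustrative parameters only.

HBAR = 1.054571817e-34  # J s
C = 2.99792458e8        # m/s
EV = 1.602176634e-19    # J

def multiperipheral_forbidden(p, m, N, g, a=1e-35, alpha=0.1):
    """All quantities in SI units; p is the incoming momentum modulus."""
    p_prime = p * (g - 1.0) / (g**N - 1.0)  # softest-rung momentum p'_L
    e_t = m * C**2 * (1.0 - m * C / (2.0 * p_prime) / (1.0 - 1.0 / g))
    lhs = alpha * (p * a / HBAR)**2 * p * C  # k = p / hbar
    rhs = 2.0 * (1.0 + g + g**2) / (3.0 * g) * e_t
    return lhs > rhs

m_p = 1.67262192e-27      # proton mass, kg
p_low = 1e15 * EV / C     # 10^15 eV incoming momentum: allowed
p_high = 1e24 * EV / C    # 10^24 eV: forbidden
print(multiperipheral_forbidden(p_low, m_p, N=8, g=2.0))
print(multiperipheral_forbidden(p_high, m_p, N=8, g=2.0))
```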
In all cases, the departure from the standard relativistic situation occurs, if the target is close to rest in the vacuum rest frame, at incoming energies $E$ above $\approx (m~a^{-2}~h^2~c^4)^{1/3}$, which corresponds to a transition energy scale $\approx 10^{22}~eV$ for $m~\approx~1~GeV/c^2$ and $a~\approx ~10^{-33}~cm$, and $\approx 10^{21}~eV$ if the target mass is $\approx 500~keV/c^2$. Lowering the critical wave vector scale $a^{-1}$ to $~\approx ~10^{26}~cm^{-1}$ (just above the wave vector scale of the highest-energy cosmic rays), the fall of cross sections would start at $E~\approx 10^{16}~-~10^{17}~eV$, which seems excluded by cosmic-ray data if the earth is moving slowly with respect to the vacuum rest frame. In astrophysical processes, the new kinematics may inhibit phenomena such as GZK-like cutoffs, photodisintegration of nuclei, decays, radiation emission under external forces, momentum loss (which at very high energy does not imply deceleration) through collisions, production of lower-energy secondaries... [*potentially solving the basic problems raised by the highest-energy cosmic rays*]{}. Above $E~\approx (m~a^{-2}~h^2~c^4)^{1/3}$, nonlocal effects play a crucial role and invalidate considerations based on Lorentz invariance and local field theory used to derive the Froissart bound (Froissart, 1961), which seems not to be violated but ceases to be significant, given that total cross sections are expected at very high energy to fall far below this bound. An updated study of noncausal dispersion relations, incorporating the new kinematics from nonlocal dynamics, can possibly lead to new bounds. As previously stressed (Gonzalez-Mestres, 1997a), this apparent nonlocality may actually reflect the existence of superluminal sectors of matter (Gonzalez-Mestres, 1996) where causality would hold at the superluminal level (Gonzalez-Mestres, 1997c). Other initial-state configurations can be considered. 
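The quoted transition scales follow from simple arithmetic on $E \approx (m~a^{-2}~h^2~c^4)^{1/3}$. A minimal order-of-magnitude check (the constants and helper name below are ours, not the paper's):

```python
# Order-of-magnitude check of the transition scale E ~ (m a^-2 h^2 c^4)^(1/3).

H = 6.62607015e-34       # Planck constant, J s
C = 2.99792458e8         # m/s
EV = 1.602176634e-19     # J

def transition_energy_eV(m_kg, a_m):
    """Transition energy for target mass m and fundamental length a."""
    return (m_kg * H**2 * C**4 / a_m**2) ** (1.0 / 3.0) / EV

GEV_MASS = 1e9 * EV / C**2   # 1 GeV/c^2 in kg
a = 1e-35                    # a ~ 10^-33 cm in metres

print(f"{transition_energy_eV(GEV_MASS, a):.1e} eV")         # ~2e22 eV
print(f"{transition_energy_eV(5e5 * EV / C**2, a):.1e} eV")  # ~2e21 eV
```

Both values reproduce the $\approx 10^{22}$ eV and $\approx 10^{21}$ eV scales cited in the text.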
We may have two incoming particles with momenta of moduli $p_1^i$ and $p_2^i$ and opposite directions in the vacuum rest frame, with $p_1^i~ \gg ~p_2^i~\gg ~mc$. Keeping a constant value of $\lambda ~=~p_2^i~ (p_1^i)^{-1}$, we find that the fall of final-state phase space occurs for $p_1^i$ above $\approx \lambda ^{1/2}~a^{-1}~h$. The incoming momenta $p_1^i$ and $p_2^i$ may also point in the same direction. Then, the final-state phase space starts to fall at $p_1^i~\approx \lambda ^{-1/4}~(m~c~h~a^{-1})^{1/2}$. A more complete discussion, including non-parallel incoming momenta and the case $m~=~0$, will be presented elsewhere.

Experimental considerations
===========================

Lorentz symmetry violation prevents naive extrapolations from reactions between two particles with equal, opposite momenta in the vacuum rest frame (similar to colliders) to reactions where the target is at rest in this frame (similar to cosmic-ray events). [*Assuming the earth to move slowly with respect to the vacuum rest frame*]{} (for instance, if the “absolute” frame is close to that defined by the requirement of cosmic microwave background isotropy), the described kinematics predicts the existence of a maximum energy deposition for high-energy cosmic rays in the atmosphere, in the rock or in a given underground or underwater detector. Well below the Planck energy, a very high-energy cosmic ray would not necessarily deposit most of its energy in the atmosphere: its energy deposition decreases for energies above a transition scale far below the energy scale associated with the fundamental length. The maximum allowed momentum transfer in a single collision occurs at an energy just below $E~\approx (m~a^{-2}~h^2~c^4)^{1/3}$. For $E$ above $\approx (m~a^{-2}~h^2~c^4)^{1/3}$, the allowed longitudinal momentum transfer falls, typically, like $p^{-2}$ (obtained by differentiating the term $\alpha ~k^2~a^2~p~c/2$). 
To set upper limits, we can take for $m$ the mass of oxygen or nitrogen in the case of air, oxygen in water, and heavier elements in the rock. At energies around $\approx (m~a^{-2}~h^2~c^4)^{1/3}$, the cosmic ray will in our scenario undergo several scatterings in the atmosphere and still lose there most of its energy, possibly leading to unconventional longitudinal cascade development profiles that could be observed by very large-surface air shower detectors like the AUGER observatory (AUGER Collaboration, 1997). Above $E~ \approx (m~a^{-2}~h^2~c^4)^{1/3}$, it can indeed cross the atmosphere keeping most of its momentum and energy and deposit its energy in the rock or in water, or possibly reach an underground or underwater detector. Thus, some cosmic-ray events of apparent energy far below $10^{20}~eV$ (perhaps apparently muon- or neutrino-like, or exotic-like), as seen by earth-surface (e.g. air shower), underground or underwater detectors, may actually originate from extremely high-energy cosmic rays well above this energy scale. Interesting constraints on the fundamental length $a$ can be derived from this analysis, assuming simultaneously (Gonzalez-Mestres, 1997a and 1997b) that the absence of the GZK cutoff is due to the same pattern of Lorentz symmetry violation. The combined absence of the GZK cutoff and the existence of $\approx 10^{20}~eV$ energy deposition from cosmic rays in the atmosphere lead to $a$ in the range $10^{-35}~cm~<~a~<~10^{-30}~cm$ (energy scale between $10^{16}$ and $10^{21}$ $GeV$). The lower bound comes from the requirement that the violation of local Lorentz invariance at the fundamental length scale be able to influence particle interactions at the $10^{19}~-~10^{20}~eV$ energy scale strongly enough to suppress the GZK cutoff. 
The upper bound is derived from the existence of events with $\approx 10^{20}~eV$ energy deposition in the atmosphere (Linsley, 1963; Lawrence, Reid and Watson, 1991; Afanasiev et al., 1995; Bird et al., 1994; Yoshida et al., 1995). Then, very high-energy accelerator and cosmic-ray experiments would indeed be complementary research lines: the results of both kinds of experiments would not be equivalent up to Lorentz transformations. If the transition energy scale for cross sections corresponds to $p_1^i~c~\approx 10^{20}~eV$, a $p~-~p$ collider at $\approx 700~TeV$ per beam could make possible direct tests of Lorentz symmetry violation, comparing collisions at the accelerator with collisions between a $\approx 10^{21}~eV$ proton of cosmic origin and a proton or nucleus from the atmosphere. Simultaneously, other kinds of tests may be possible through the lifetimes and decay products of very high-energy unstable particles (Gonzalez-Mestres, 1997a and 1997b) in the cosmic-ray events producing the highest-energy secondaries. We would be confronted with a new situation, contrary to previous expectations, if the cosmic rays at the highest possible energies interact more and more weakly with matter because of kinematical constraints. The existence of a maximum energy of events generated in the atmosphere would not correspond to a maximum energy of incoming cosmic rays. Unconventional events originated by such particles may have been erroneously attributed to cosmic rays of much lower energy. New analyses seem necessary, as well as new experimental designs, perhaps using in coincidence very large-surface detectors devoted to interactions in the atmosphere together with very large-volume underground or underwater detectors.

It is a pleasure to thank P. Espigat and other colleagues at LPC, Collège de France, for useful discussions.

Afanasiev, B.N. et al., Proc. of the $24^{th}$ International Cosmic Ray Conference, Rome, Italy, Vol. 2, p. 
756 (1995).
Anchordoqui, L., Dova, M.T., Gómez Dumm, D. and Lacentre, P., [*Zeitschrift für Physik C*]{} 73, 465 (1997).
AUGER Collaboration, “The Pierre Auger Observatory Design Report” (1997).
Aurenche, P. and Gonzalez-Mestres, L., [*Phys. Rev. D*]{} 18, 2995 (1978).
Aurenche, P. and Gonzalez-Mestres, L., [*Zeitschrift für Physik C*]{} 1, 307 (1979).
Bird, D.J. et al., [*Ap. J.*]{} 424, 491 (1994).
Blokhintsev, D.I. and Kolerov, G.I., [*Nuovo Cimento*]{} 34, 163 (1964).
Blokhintsev, D.I., [*Sov. Phys. Usp.*]{} 9, 405 (1966).
Froissart, M., [*Phys. Rev.*]{} 123, 1053 (1961).
Gonzalez-Mestres, L., “Physical and Cosmological Implications of a Possible Class of Particles Able to Travel Faster than Light”, contribution to the 28$^{th}$ International Conference on High-Energy Physics, Warsaw, July 1996. Paper hep-ph/9610474 of LANL (Los Alamos) electronic archive (1996).
Gonzalez-Mestres, L., “Vacuum Structure, Lorentz Symmetry and Superluminal Particles”, paper physics/9704017 of LANL electronic archive (1997a).
Gonzalez-Mestres, L., “Absence of Greisen-Zatsepin-Kuzmin Cutoff and Stability of Unstable Particles at Very High Energy, as a Consequence of Lorentz Symmetry Violation”, paper physics/9705031 of LANL electronic archive (1997b).
Gonzalez-Mestres, L., “Space, Time and Superluminal Particles”, paper physics/9702026 of LANL electronic archive (1997c).
Greisen, K., [*Phys. Rev. Lett.*]{} 16, 748 (1966).
Lawrence, M.A., Reid, R.J.O. and Watson, A.A., [*J. Phys. G*]{} 17, 773 (1991).
Linsley, J., [*Phys. Rev. Lett.*]{} 10, 146 (1963).
Rédei, L.B., [*Phys. Rev.*]{} 162, 1299 (1967).
Yoshida, S. et al., Proc. of the $24^{th}$ International Cosmic Ray Conference, Rome, Italy, Vol. 1, p. 793 (1995).
Zatsepin, G.T. and Kuzmin, V.A., [*Pisma Zh. Eksp. Teor. Fiz.*]{} 4, 114 (1966).

[^1]: E-mail: lgonzalz@vxcern.cern.ch
--- abstract: 'We report a measurement of muon-neutrino disappearance in the T2K experiment. The 295-km muon-neutrino beam from Tokai to Kamioka is the first implementation of the off-axis technique in a long-baseline neutrino oscillation experiment. With data corresponding to 1.43$\times$10$^{20}$ protons on target, we observe 31 fully-contained single $\mu$-like ring events in Super-Kamiokande, compared with an expectation of 104 $\pm$ 14 (syst) events without neutrino oscillations. The best-fit point for two-flavor $\nu_{\mu} \rightarrow \nu_{\tau}$ oscillations is $\sin^{2}(2 \theta_{23})$ = 0.98 and $|\Delta m_{32}^{2}|$ = 2.65 $\times$ $10^{-3}$ eV$^{2}$. The boundary of the 90% confidence region includes the points ($\sin^{2}(2 \theta_{23})$, $|\Delta m_{32}^{2}|$) = (1.0, 3.1$\times$10$^{-3}$eV$^{2}$), (0.84, 2.65$\times$10$^{-3}$eV$^{2}$) and (1.0, 2.2$\times$10$^{-3}$eV$^{2}$).' author: - 'K.Abe' - 'N.Abgrall' - 'Y.Ajima' - 'H.Aihara' - 'J.B.Albert' - 'C.Andreopoulos' - 'B.Andrieu' - 'M.D.Anerella' - 'S.Aoki' - 'O.Araoka' - 'J.Argyriades' - 'A.Ariga' - 'T.Ariga' - 'S.Assylbekov' - 'D.Autiero' - 'A.Badertscher' - 'M.Barbi' - 'G.J.Barker' - 'G.Barr' - 'M.Bass' - 'M.Batkiewicz' - 'F.Bay' - 'S.Bentham' - 'V.Berardi' - 'B.E.Berger' - 'I.Bertram' - 'M.Besnier' - 'J.Beucher' - 'D.Beznosko' - 'S.Bhadra' - 'F.d.M.Blaszczyk' - 'A.Blondel' - 'C.Bojechko' - 'J.Bouchez' - 'S.B.Boyd' - 'A.Bravar' - 'C.Bronner' - 'D.G.Brook-Roberge' - 'N.Buchanan' - 'H.Budd' - 'R.Calland' - 'D.Calvet' - 'J.Caravaca Rodríguez' - 'S.L.Cartwright' - 'A.Carver' - 'R.Castillo' - 'M.G.Catanesi' - 'A.Cazes' - 'A.Cervera' - 'C.Chavez' - 'S.Choi' - 'G.Christodoulou' - 'J.Coleman' - 'G.Collazuol' - 'W.Coleman' - 'K.Connolly' - 'A.Curioni' - 'A.Dabrowska' - 'I.Danko' - 'R.Das' - 'G.S.Davies' - 'S.Davis' - 'M.Day' - 'G.De Rosa' - 'J.P.A.M.de André' - 'P.de Perio' - 'T.Dealtry' - 'A.Delbart' - 'C.Densham' - 'F.Di Lodovico' - 'S.Di Luise' - 'P.Dinh Tran' - 'J.Dobson' - 'U.Dore' - 'O.Drapier' - 
'T.Duboyski' - 'F.Dufour' - 'J.Dumarchez' - 'S.Dytman' - 'M.Dziewiecki' - 'M.Dziomba' - 'S.Emery' - 'A.Ereditato' - 'J.E.Escallier' - 'L.Escudero' - 'L.S.Esposito' - 'M.Fechner' - 'A.Ferrero' - 'A.J.Finch' - 'E.Frank' - 'Y.Fujii' - 'Y.Fukuda' - 'V.Galymov' - 'G.L.Ganetis' - 'F.C.Gannaway' - 'A.Gaudin' - 'A.Gendotti' - 'M.A.George' - 'S.Giffin' - 'C.Giganti' - 'K.Gilje' - 'A.K.Ghosh' - 'T.Golan' - 'M.Goldhaber' - 'J.J.Gomez-Cadenas' - 'S.Gomi' - 'M.Gonin' - 'N.Grant' - 'A.Grant' - 'P.Gumplinger' - 'P.Guzowski' - 'D.R.Hadley' - 'A.Haesler' - 'M.D.Haigh' - 'K.Hamano' - 'C.Hansen' - 'D.Hansen' - 'T.Hara' - 'P.F.Harrison' - 'B.Hartfiel' - 'M.Hartz' - 'T.Haruyama' - 'T.Hasegawa' - 'N.C.Hastings' - 'A.Hatzikoutelis' - 'K.Hayashi' - 'Y.Hayato' - 'C.Hearty' - 'R.L.Helmer' - 'R.Henderson' - 'N.Higashi' - 'J.Hignight' - 'A.Hillairet' - 'T.Hiraki' - 'E.Hirose' - 'J.Holeczek' - 'S.Horikawa' - 'K.Huang' - 'A.Hyndman' - 'A.K.Ichikawa' - 'K.Ieki' - 'M.Ieva' - 'M.Iida' - 'M.Ikeda' - 'J.Ilic' - 'J.Imber' - 'T.Ishida' - 'C.Ishihara' - 'T.Ishii' - 'S.J.Ives' - 'M.Iwasaki' - 'K.Iyogi' - 'A.Izmaylov' - 'B.Jamieson' - 'R.A.Johnson' - 'K.K.Joo' - 'G.V.Jover-Manas' - 'C.K.Jung' - 'H.Kaji' - 'T.Kajita' - 'H.Kakuno' - 'J.Kameda' - 'K.Kaneyuki' - 'D.Karlen' - 'K.Kasami' - 'I.Kato' - 'H.Kawamuko' - 'E.Kearns' - 'M.Khabibullin' - 'F.Khanam' - 'A.Khotjantsev' - 'D.Kielczewska' - 'T.Kikawa' - 'J.Kim' - 'J.Y.Kim' - 'S.B.Kim' - 'N.Kimura' - 'B.Kirby' - 'J.Kisiel' - 'P.Kitching' - 'T.Kobayashi' - 'G.Kogan' - 'S.Koike' - 'A.Konaka' - 'L.L.Kormos' - 'A.Korzenev' - 'K.Koseki' - 'Y.Koshio' - 'Y.Kouzuma' - 'K.Kowalik' - 'V.Kravtsov' - 'I.Kreslo' - 'W.Kropp' - 'H.Kubo' - 'J.Kubota' - 'Y.Kudenko' - 'N.Kulkarni' - 'Y.Kurimoto' - 'R.Kurjata' - 'T.Kutter' - 'J.Lagoda' - 'K.Laihem' - 'M.Laveder' - 'M.Lawe' - 'K.P.Lee' - 'P.T.Le' - 'J.M.Levy' - 'C.Licciardi' - 'I.T.Lim' - 'T.Lindner' - 'C.Lister' - 'R.P.Litchfield' - 'M.Litos' - 'A.Longhin' - 'G.D.Lopez' - 'P.F.Loverre' - 'L.Ludovici' - 'T.Lux' - 'M.Macaire' - 
'L.Magaletti' - 'K.Mahn' - 'Y.Makida' - 'M.Malek' - 'S.Manly' - 'A.Marchionni' - 'A.D.Marino' - 'A.J.Marone' - 'J.Marteau' - 'J.F.Martin' - 'T.Maruyama' - 'T.Maryon' - 'J.Marzec' - 'P.Masliah' - 'E.L.Mathie' - 'C.Matsumura' - 'K.Matsuoka' - 'V.Matveev' - 'K.Mavrokoridis' - 'E.Mazzucato' - 'N.McCauley' - 'K.S.McFarland' - 'C.McGrew' - 'T.McLachlan' - 'M.Messina' - 'W.Metcalf' - 'C.Metelko' - 'M.Mezzetto' - 'P.Mijakowski' - 'C.A.Miller' - 'A.Minamino' - 'O.Mineev' - 'S.Mine' - 'A.D.Missert' - 'G.Mituka' - 'M.Miura' - 'K.Mizouchi' - 'L.Monfregola' - 'F.Moreau' - 'B.Morgan' - 'S.Moriyama' - 'A.Muir' - 'A.Murakami' - 'J.F.Muratore' - 'M.Murdoch' - 'S.Murphy' - 'J.Myslik' - 'N.Nagai' - 'T.Nakadaira' - 'M.Nakahata' - 'T.Nakai' - 'K.Nakajima' - 'T.Nakamoto' - 'K.Nakamura' - 'S.Nakayama' - 'T.Nakaya' - 'D.Naples' - 'M.L.Navin' - 'T.C.Nicholls' - 'B.Nielsen' - 'C.Nielsen' - 'K.Nishikawa' - 'H.Nishino' - 'K.Nitta' - 'T.Nobuhara' - 'J.A.Nowak' - 'Y.Obayashi' - 'T.Ogitsu' - 'H.Ohhata' - 'T.Okamura' - 'K.Okumura' - 'T.Okusawa' - 'S.M.Oser' - 'M.Otani' - 'R.A.Owen' - 'Y.Oyama' - 'T.Ozaki' - 'M.Y.Pac' - 'V.Palladino' - 'V.Paolone' - 'P.Paul' - 'D.Payne' - 'G.F.Pearce' - 'J.D.Perkin' - 'V.Pettinacci' - 'F.Pierre' - 'E.Poplawska' - 'B.Popov' - 'M.Posiadala' - 'J.-M.Poutissou' - 'R.Poutissou' - 'P.Przewlocki' - 'W.Qian' - 'J.L.Raaf' - 'E.Radicioni' - 'P.N.Ratoff' - 'T.M.Raufer' - 'M.Ravonel' - 'M.Raymond' - 'F.Retiere' - 'A.Robert' - 'P.A.Rodrigues' - 'E.Rondio' - 'J.M.Roney' - 'B.Rossi' - 'S.Roth' - 'A.Rubbia' - 'D.Ruterbories' - 'S.Sabouri' - 'R.Sacco' - 'K.Sakashita' - 'F.Sánchez' - 'A.Sarrat' - 'K.Sasaki' - 'K.Scholberg' - 'J.Schwehr' - 'M.Scott' - 'D.I.Scully' - 'Y.Seiya' - 'T.Sekiguchi' - 'H.Sekiya' - 'M.Shibata' - 'Y.Shimizu' - 'M.Shiozawa' - 'S.Short' - 'P.D.Sinclair' - 'M.Siyad' - 'B.M.Smith' - 'R.J.Smith' - 'M.Smy' - 'J.T.Sobczyk' - 'H.Sobel' - 'M.Sorel' - 'A.Stahl' - 'P.Stamoulis' - 'J.Steinmann' - 'B.Still' - 'J.Stone' - 'C.Strabel' - 'R.Sulej' - 'A.Suzuki' - 'K.Suzuki' - 
'S.Suzuki' - 'S.Y.Suzuki' - 'Y.Suzuki' - 'Y.Suzuki' - 'T.Szeglowski' - 'M.Szeptycka' - 'R.Tacik' - 'M.Tada' - 'M.Taguchi' - 'S.Takahashi' - 'A.Takeda' - 'Y.Takenaga' - 'Y.Takeuchi' - 'K.Tanaka' - 'H.A.Tanaka' - 'M.Tanaka' - 'M.M.Tanaka' - 'N.Tanimoto' - 'K.Tashiro' - 'I.Taylor' - 'A.Terashima' - 'D.Terhorst' - 'R.Terri' - 'L.F.Thompson' - 'A.Thorley' - 'W.Toki' - 'S.Tobayama' - 'T.Tomaru' - 'Y.Totsuka' - 'C.Touramanis' - 'T.Tsukamoto' - 'M.Tzanov' - 'Y.Uchida' - 'K.Ueno' - 'A.Vacheret' - 'M.Vagins' - 'G.Vasseur' - 'O.Veledar' - 'T.Wachala' - 'J.J.Walding' - 'A.V.Waldron' - 'C.W.Walter' - 'P.J.Wanderer' - 'J.Wang' - 'M.A.Ward' - 'G.P.Ward' - 'D.Wark' - 'M.O.Wascko' - 'A.Weber' - 'R.Wendell' - 'N.West' - 'L.H.Whitehead' - 'G.Wikström' - 'R.J.Wilkes' - 'M.J.Wilking' - 'Z.Williamson' - 'J.R.Wilson' - 'R.J.Wilson' - 'T.Wongjirad' - 'S.Yamada' - 'Y.Yamada' - 'A.Yamamoto' - 'K.Yamamoto' - 'Y.Yamanoi' - 'H.Yamaoka' - 'T.Yamauchi' - 'C.Yanagisawa' - 'T.Yano' - 'S.Yen' - 'N.Yershov' - 'M.Yokoyama' - 'T.Yuan' - 'A.Zalewska' - 'J.Zalipska' - 'L.Zambelli' - 'K.Zaremba' - 'M.Ziembicki' - 'E.D.Zimmerman' - 'M.Zito' - 'J.Żmuda' bibliography: - 'references.bib' title: ' First Muon-Neutrino Disappearance Study with an Off-Axis Beam ' --- We report a measurement of muon-neutrino disappearance in the T2K experiment. The muon-neutrino beam from Tokai to Kamioka is the first implementation of the off-axis technique [@beavis:bnl] in a long-baseline neutrino oscillation experiment. 
The off-axis technique is used to provide a narrow-band neutrino energy spectrum tuned to the value of $L/E$ that maximizes the neutrino oscillation effect due to $\Delta m^2_{32}$, the mass splitting first observed in atmospheric neutrinos [@Fukuda:1998mi]. This narrow-band energy spectrum also provides a clean signature for subdominant electron neutrino appearance, as we have recently reported [@Abe:2011sj]. Muon-neutrino disappearance depends on the survival probability, which, in the framework of two-flavor $\nu_{\mu} \rightarrow \nu_{\tau}$ oscillations, is given by $$P_{surv}= 1-\sin^2 (2 \theta_{23}) \: \sin^2 \left({{\Delta m^{2}_{32} L} \over {4E}} \right), \label{eq:surv}$$ where $ E$ is the neutrino energy and $L$ is the neutrino propagation length. We have neglected subleading oscillation terms. In this paper we describe our observation of $\nu_{\mu}$ disappearance, and we use the result to measure $|\Delta m^{2}_{32}|$ and $\sin^2 (2\theta_{23})$. Previous measurements of these neutrino mixing parameters have been reported by K2K [@Ahn:2006K2K] and MINOS [@Adamson:2011ig], which use on-axis neutrino beams, and Super-Kamiokande [@sk-2011], which uses atmospheric neutrinos. Details of the T2K experimental setup are described elsewhere [@Abe:2011ks]. Here we briefly review the components relevant for the $\nu_\mu$ oscillation analysis. The J-PARC Main Ring (MR) accelerator [@cite:Jparc] provides 30 GeV protons with a cycle of 0.3 Hz. Six bunches (Run 1) or eight bunches (Run 2) are extracted in a 5-$\mu$s spill and are transported to the production target through an arc instrumented by superconducting magnets. The proton beam position, profile, timing and intensity are measured by 21 electrostatic beam position monitors (ESM), 19 segmented secondary emission monitors (SSEM), one optical transition radiation monitor (OTR) and five current transformers. 
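The survival probability of Eq. (1) above can be evaluated numerically. The following sketch (our own, not T2K code) uses the standard numerical form of the oscillation phase, $1.267\,\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]/E[\mathrm{GeV}]$; the parameter defaults are illustrative:

```python
# Minimal sketch of the two-flavor survival probability of Eq. (1).
import math

def p_surv(E_GeV, L_km=295.0, sin2_2theta=1.0, dm2_eV2=2.4e-3):
    """P(nu_mu -> nu_mu) for two-flavor nu_mu -> nu_tau oscillations."""
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Near the off-axis beam peak (~0.6 GeV) the nu_mu flux is almost
# fully suppressed for maximal mixing at L = 295 km:
print(p_surv(0.6))
```

This is why the off-axis spectrum is tuned so that its peak sits at the first oscillation maximum.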
The secondary beamline, filled with helium at atmospheric pressure, is composed of the target, focusing horns and decay tunnel. The graphite target is 2.6 cm in diameter and 90 cm (1.9 $\lambda_{int}$) long. Positively-charged particles exiting the target are focused into the 96-m long decay tunnel by three magnetic horns pulsed at 250 kA. Neutrinos are primarily produced in the decays of charged pions and kaons. A beam dump is located at the end of the tunnel and is followed by muon monitors measuring the beam direction of each spill. The neutrino beam is directed 2.5$^\circ$ off the axis between the target and the Super-Kamiokande (SK) far detector 295 km away. This configuration produces a narrow-band [$\nu_{\mu}$]{}beam with peak energy tuned to the first oscillation maximum $E_{\nu}=|\Delta m^{2}_{32}| L/(2\pi)\simeq$ 0.6 GeV. The near detector complex (ND280) [@Abe:2011ks] is located 280 m downstream from the target and hosts two detectors. The on-axis Interactive Neutrino GRID (INGRID) [@ingrid-nim] records neutrino interactions with high statistics to monitor the beam intensity, direction and profile. It consists of 14 identical 7-ton modules composed of an iron-absorber/scintillator-tracker sandwich arranged in 10 m by 10 m crossed horizontal and vertical arrays centered on the beam. The off-axis detector reconstructs exclusive final states to study neutrino interactions and beam properties corresponding to those expected at the far detector. Embedded in the refurbished UA1/NOMAD magnet (field strength 0.2 T), it consists of three large-volume time projection chambers (TPCs) [@Abgrall:2010hi] interleaved with two fine-grained tracking detectors (FGDs, each 1 ton). It also has a $\pi^0$-optimized detector and a surrounding electromagnetic calorimeter. The magnet yoke is instrumented as a side muon range detector. The SK water-Cherenkov far detector [@fukuda:2002uc] has a fiducial volume (FV) of 22.5 kt within its cylindrical inner detector (ID). 
Enclosing the ID is the 2 m-wide outer detector (OD). The front-end readout electronics [@Abe:2011ks] allow for a dead-time-free trigger. Spill timing information, synchronized by the Global Positioning System (GPS) with $<150$ ns precision, is transferred from J-PARC to SK and triggers the recording of photomultiplier (PMT) hits within $\pm$500 $\mu$s of the expected neutrino arrival time. The results presented in this Letter are based on the first two physics runs: Run 1 (Jan–Jun 2010) and Run 2 (Nov 2010–Mar 2011). During this time period, the MR proton beam power was continually increased and reached 145 kW with $9\times 10^{13}$ protons per pulse. The fraction of protons hitting the target was monitored by the ESM, SSEM and OTR and found to be greater than 99% and stable in time. A total of 2,474,419 spills was retained for analysis after beam and far-detector quality cuts, corresponding to $1.43\times10^{20}$ protons on target (POT). We present the study of events in the far detector with a single muon-like ($\mu$-like) ring. The event selection enhances $\nu_{\mu}$ charged-current quasi-elastic interactions (CCQE). For these events, neglecting the Fermi motion, the neutrino energy $E_{\nu} $ can be reconstructed as $$\label{eqn:erecccqe} E_{\nu}= {{m^2_p-(m_{n}-E_b)^2-m^2_{\mu}+ 2 (m_n -E_b) E_{\mu} } \over {2(m_{n}-E_b-E_{\mu}+p_{\mu}\cos \theta_{\mu})}},$$ where $m_p$ is the proton mass, $m_{n}$ the neutron mass, and $E_b=27$ MeV the binding energy of a nucleon inside a $^{16}$O nucleus. In Eq. \[eqn:erecccqe\] $E_{\mu}$, $p_{\mu}$, and $\theta_{\mu}$ are respectively the measured muon energy, momentum and angle with respect to the incoming neutrino. The selection criteria for this analysis were fixed from Monte Carlo (MC) studies before the data were collected. 
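The CCQE energy reconstruction of Eq. (2) can be sketched directly. Masses are in MeV/$c^2$ and momenta in MeV/$c$; the example kinematics at the end are chosen by us for illustration:

```python
# Sketch of the CCQE neutrino-energy reconstruction of Eq. (2).
import math

M_P, M_N, M_MU, E_B = 938.272, 939.565, 105.658, 27.0  # MeV

def e_nu_ccqe(p_mu, cos_theta_mu):
    """Reconstructed neutrino energy (MeV) from muon momentum and angle."""
    e_mu = math.hypot(p_mu, M_MU)  # muon total energy
    num = M_P**2 - (M_N - E_B)**2 - M_MU**2 + 2.0 * (M_N - E_B) * e_mu
    den = 2.0 * (M_N - E_B - e_mu + p_mu * cos_theta_mu)
    return num / den

# A forward ~600 MeV/c muon reconstructs to a neutrino energy near the
# off-axis beam peak:
print(e_nu_ccqe(600.0, 0.95))
```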
The observed number of events and spectrum are compared with signal and background expectations, which are based on neutrino flux and cross-section predictions and are corrected using an inclusive measurement in the off-axis near detector. ![(Top) The predicted flux of $\nu_\mu$ as a function of neutrino energy without oscillations at Super-Kamiokande and at the off-axis near detector; (Bottom) the flux of $\nu_\mu$ and $\overline{\nu}_\mu$ at Super-Kamiokande. The shaded boxes indicate the total systematic uncertainty for each energy bin.[]{data-label="fig:beamflux"}](MuonNeutrinoFlux.pdf){width="2.9in"} Our predicted beam flux (Fig. \[fig:beamflux\]) is based on models tuned to experimental data. The most significant constraint comes from NA61 measurements of pion production [@Abgrall:2011ae] in ($p$, $\theta$) bins, where $p$ is the pion momentum and $\theta$ the polar angle with respect to the proton beam; there are 5%-10% systematic and similar statistical uncertainties in most of the measured phase space. The production of pions in the target outside the NA61-measured phase space and all kaon production are modeled using FLUKA [@fluka1; @fluka2]. The production rate of these pions is assigned systematic uncertainties of 50%, and kaon production uncertainties are estimated to be between 15% and 100% based on a comparison of FLUKA with data from Eichten et al. [@Eichten]. The software package GEANT3 [@GEANT3], with GCALOR [@GCALOR] for hadronic interactions, handles particle propagation through the magnetic horns, target hall, decay volume and beam dump. Additional systematic errors in the neutrino fluxes are included for uncertainties in secondary nucleon production and total hadronic inelastic cross sections, uncertainties in the proton beam direction, spatial extent and angular divergence, the horn current, and the secondary beam line component alignment uncertainties. 
The stability of the beam direction and neutrino rate per proton on target are monitored continuously with INGRID and are within the assigned systematic uncertainties [@Abe:2011sj]. Systematic uncertainties in the shape of the flux as a function of neutrino energy require knowledge of the correlations of the uncertainties in ($p$, $\theta$) bins of hadron production. For the NA61 pion-production data [@Abgrall:2011ae], we assume full correlation between ($p$, $\theta$) bins for each individual source of systematic uncertainty, except for particle identification where there is a known momentum-dependent correlation. Where correlations of hadron-production uncertainties are unknown, we choose correlations in kinematic variables to maximize the uncertainty in the normalization of the predicted flux. Neutrino interactions are simulated using the NEUT event generator [@hayato:neut2]. Uncertainties in cross sections of the exclusive neutrino processes are determined by comparisons with recent measurements from the SciBooNE [@sciboone:ccqe], MiniBooNE [@miniboone:ccqe; @miniboone:cc1pirat], and K2K [@Gran:2006K2K; @Rodriguez:2008K2K] experiments, comparisons with the GENIE [@Andreopoulos:2009rq] and NuWro [@Juszczak:2009] generators and recent theoretical work [@Juszczak:2010]. An inclusive $\nu_\mu$ charged-current (CC) measurement in the off-axis near detector (ND) is used to constrain the expected event rate at the far detector. From a data sample collected in Run 1 of $2.88\times10^{19}$ POT, neutrino interactions are selected in the FGDs with charged particles entering the downstream TPC. The most energetic negatively charged particle in the TPC is required to have ionization energy loss compatible with that of a muon. The analysis selects 1529 data events with 38% $\nu_\mu$ CC efficiency and 90% purity. The agreement between the reconstructed neutrino energy in data and MC is shown in Fig. \[fig:ND280momentum\]. 
The ratio of measured $\nu_\mu$ CC interactions to MC is $$\begin{aligned} R^{\nu_{\mu} CC}_{ND} &=& \frac{N^{Data,\nu_{\mu} CC}_{ND}}{N^{MC, \nu_{\mu} CC}_{ND}} = 1.036 \pm 0.028 (\mathrm{stat.}) \nonumber \\ && ^{+0.044}_{-0.037} (\mathrm{det.syst.}) \pm 0.038(\mathrm{phys.syst.}), \label{eq:ratiodmc}\end{aligned}$$ where $N^{Data,\nu_{\mu} CC}_{ND}$ is the number of $\nu_{\mu}$ CC events, and $N^{MC, \nu_{\mu} CC}_{ND}$ is the MC prediction normalized by POT. The detector systematic errors in Eq. \[eq:ratiodmc\] are mainly due to uncertainties in tracking and particle identification efficiencies. The physics uncertainties result from cross section uncertainties but exclude normalization uncertainties that cancel in a far/near ratio. ![ Neutrino energy reconstructed for the CCQE hypothesis for $\nu_\mu$ CC candidates interacting in the FGD target. The data are shown using points with error bars (statistical only) and the MC predictions are in shaded histograms. []{data-label="fig:ND280momentum"}](EnergyNuDataMC_GeV.pdf){width="2.9in"} At the far detector we select a $\nu_\mu$ CCQE enriched sample. The SK event reconstruction [@ashie:2005ik] uses PMT hits in time with a neutrino spill. We select a fully-contained fiducial volume (FCFV) sample by requiring no activity in the OD, no pre-activity in the 100 $\mu$s before the event trigger time, at least $30$ MeV electron-equivalent energy deposited in the ID, and a reconstructed event vertex in the fiducial region. The OD veto rejects events induced by neutrino interactions outside of the ID, and events where energy escapes from the ID. The visible energy requirement rejects events from radioactive decays in the detector. The fiducial vertex requirement rejects particles entering from outside the ID. Further conditions are required to enrich the sample in $\nu_\mu$ CCQE events: a single Cherenkov ring identified as a muon, with momentum $p_\mu>200$ MeV/c, and no more than one delayed electron. 
The muon momentum requirement rejects charged pions and misidentified electrons from the decay of unseen muons and pions, and the delayed-electron veto rejects events with muons accompanied by unseen pions and muons. The number of events in data and MC after each selection criterion is shown in Table \[table:number\_of\_events\]. The efficiency and purity of $\nu_{\mu}$ CCQE events are estimated to be 72% and 61% respectively.

  --------------------- ------ ---------------------- --------------------------- ------------------ ------
                        Data   [$\nu_{\mu}$]{} CCQE   [$\nu_{\mu}$]{} CC non-QE   [$\nu_{e}$]{} CC   NC
  FV interaction        n/a    24.0                   43.7                        3.1                71.0
  FCFV                  88     19.0                   33.8                        3.0                18.3
  single ring           41     17.9                   13.1                        1.9                5.7
  $\mu$-like            33     17.6                   12.4                        $<$0.1             1.9
  $p_{\mu}>200$ MeV/c   33     17.5                   12.4                        $<$0.1             1.9
  0 or 1 delayed $e$    31     17.3                   9.2                         $<$0.1             1.8
  --------------------- ------ ---------------------- --------------------------- ------------------ ------

  : Event reduction at the far detector. After each selection criterion is applied, the number of observed (Data) and MC expected events of $\nu_\mu$ CCQE, $\nu_\mu$ CC non-QE, intrinsic [$\nu_{e}$]{}, and neutral current (NC) are given. The columns denoted by [$\nu_{\mu}$]{} include [$\bar{\nu}_{\mu}$]{}. All MC CC samples assume $\nu_\mu \rightarrow \nu_\tau$ oscillations with $\sin^2 (2\theta_{23})$=1.0 and $|\Delta{m}^2_{32}|$=$2.4\times10^{-3}$eV$^2$.

\[table:number\_of\_events\]

We calculate the expected number of signal events in the far detector ($N_{SK}^{exp}$) by correcting the far-detector MC prediction with $R^{\nu_{\mu} CC}_{ND}$ from Eq. \[eq:ratiodmc\]: $$N_{SK}^{exp}(E_{r}) = R^{\nu_{\mu} CC}_{ND} \sum_{E_{t}} P_{surv}(E_{t}) N_{SK}^{MC}(E_{r},E_{t}). \label{eq:nskexp}$$ In Eq. \[eq:nskexp\], $N_{SK}^{MC}(E_{r},E_{t})$ is the expected number of events for the no-disappearance hypothesis for T2K Runs 1 and 2 in bins of reconstructed ($E_{r}$) and true ($E_{t}$) energies. $P_{surv}(E_{t})$ is the two-flavor $\nu_{\mu}$-survival probability, and is applied to $\nu_{\mu}$ and $\bar{\nu}_{\mu}$ CC interactions but not to neutral-current interactions. 
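The structure of Eq. \[eq:nskexp\] is a reweighting of a reconstructed-vs-true energy matrix. The following schematic sketch is ours: the tiny 2$\times$2 "MC matrix", the bin energies and every numerical value are invented for illustration only:

```python
# Schematic sketch of Eq. (4): the no-oscillation MC, binned in
# (E_reco, E_true), is reweighted by the survival probability in true
# energy and scaled by the near-detector ratio R_ND. Toy numbers only.
import math

def p_surv(E_GeV, dm2=2.4e-3, s2=1.0, L=295.0):
    return 1.0 - s2 * math.sin(1.267 * dm2 * L / E_GeV) ** 2

def n_sk_exp(mc_matrix, e_true_bins, r_nd):
    """mc_matrix[i][j]: MC events with reco bin i and true-energy bin j."""
    return [r_nd * sum(row[j] * p_surv(e_true_bins[j])
                       for j in range(len(e_true_bins)))
            for row in mc_matrix]

e_true = [0.6, 1.5]              # toy bin-center true energies, GeV
mc = [[40.0, 5.0], [5.0, 30.0]]  # toy reco/true smearing matrix
print(n_sk_exp(mc, e_true, r_nd=1.036))
```

The low-energy reconstructed bin is depleted far more strongly than the high-energy one, which is the spectral-distortion signature the oscillation fit exploits.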
The sources of systematic uncertainty in $N_{SK}^{exp}$ are listed in Table \[table:nsksystematics\]. Uncertainties in the near-detector and far-detector selection efficiencies are energy-independent except for the ring-counting efficiency. Uncertainty in the near-detector event rate is applied to $N^{Data,\nu_{\mu} CC}_{ND}$ in Eq. \[eq:ratiodmc\]. The flux normalization uncertainty is reduced because of the near-detector constraint. The uncertainty in the flux shape is propagated using the covariance matrix when calculating $N_{SK}^{exp}$. The near-detector constraint also leads to partial cancellation in the uncertainty in cross section modeling, but the cancellation is not complete due to the different fluxes, different acceptances and different nuclei in the near and far detectors. The total uncertainty in $N_{SK}^{exp}$ is $^{+13.3\%}_{-13.0\%}$ without oscillations and $^{+15.0\%}_{-14.8\%}$ with oscillations with $\sin^{2}(2 \theta_{23})$ = 1.0 and $|\Delta m_{32}^{2}|$ = 2.4 $\times$ $10^{-3}$ eV$^{2}$. 
  ------------------------------- ------------------------------------ ------------------------------------
  Source                          $\delta N_{SK}^{exp}/N_{SK}^{exp}$   $\delta N_{SK}^{exp}/N_{SK}^{exp}$
                                  (%, no osc)                          (%, with osc)
  SK CCQE efficiency              $\pm 3.4$                            $\pm 3.4$
  SK CC non-QE efficiency         $\pm 3.3$                            $\pm 6.5$
  SK NC efficiency                $\pm 2.0$                            $\pm 7.2$
  ND280 efficiency                +5.5 -5.3                            +5.5 -5.3
  ND280 event rate                $\pm 2.6$                            $\pm 2.6$
  Flux normalization (SK/ND280)   $\pm 7.3$                            $\pm 4.8$
  CCQE cross section              $\pm 4.1$                            $\pm 2.5$
  CC1$\pi$/CCQE cross section     +2.2 -1.9                            +0.4 -0.5
  Other CC/CCQE cross section     +5.3 -4.7                            +4.1 -3.6
  NC/CCQE cross section           $\pm 0.8$                            $\pm 0.9$
  Final-state interactions        $\pm 3.2$                            $\pm 5.9$
  Total                           +13.3 -13.0                          +15.0 -14.8
  ------------------------------- ------------------------------------ ------------------------------------

  : Systematic uncertainties on the predicted number of SK selected events without oscillations and for oscillations with $\sin^{2}(2 \theta_{23})$ = 1.0 and $|\Delta m_{32}^{2}|$ = 2.4 $\times$ $10^{-3}$ eV$^{2}$.

\[table:nsksystematics\]

We find the best-fit values of the oscillation parameters using a binned likelihood-ratio method, in which $\sin^{2}(2 \theta_{23})$ and $|\Delta m_{32}^{2}|$ are varied in the input to the calculation of $N_{SK}^{exp}$ until $$2 \sum_{E_{r}} \left [ N_{SK}^{data} \ln \left ( \frac{N_{SK}^{data}}{N_{SK}^{exp}} \right ) + (N_{SK}^{exp} - N_{SK}^{data}) \right ] \label{eq:chisq}$$ is minimized. The sum in Eq. \[eq:chisq\] is over 50 MeV bins of reconstructed energy of selected events in the far detector from 0-10 GeV. Using the near-detector measurement and setting $P_{surv}$ = 1.0 in Eq. \[eq:nskexp\], we expect a total of 103.6 $^{+13.8}_{-13.4}$ (syst) single $\mu$-like ring events in the far detector without disappearance, but we observe 31 events. If $\nu_{\mu} \rightarrow \nu_{\tau}$ oscillations are assumed, the best-fit point determined using Eq.
\[eq:chisq\] is $\sin^{2}(2 \theta_{23})$ = 0.98 and $|\Delta m_{32}^{2}|$ = 2.65 $\times$ $10^{-3}$ eV$^{2}$. We estimate the systematic uncertainty in the best-fit value of $\sin^{2}(2 \theta_{23})$ to be $\pm$4.7% and that in $|\Delta m_{32}^{2}|$ to be $\pm$4.5%. The reconstructed energy spectrum of the 31 data events is shown in Fig. \[fig:recospectra\] along with the expected far-detector spectra without disappearance and with best-fit oscillations. ![Reconstructed energy spectrum of the 31 data events compared with the expected spectra in the far detector without disappearance and with best-fit $\nu_{\mu} \rightarrow \nu_{\tau}$ oscillations. A variable binning scheme is used here for the purpose of illustration only; the actual analysis used equal-sized 50 MeV bins.[]{data-label="fig:recospectra"}](run1p2_neut_bestfit_spectrum_rebin.pdf){width="2.9in"} We construct confidence regions [^84] in the oscillation parameters using the method of Feldman and Cousins [@cite:feldman_cousins]. Statistical variations are taken into account by Poisson fluctuations of toy MC datasets, and systematic uncertainties are incorporated using the method of Cousins and Highland [@cite:Cousins_Highland; @cite:conradetal]. The 90% confidence region for $\sin^{2}(2 \theta_{23})$ and $|\Delta m_{32}^{2}|$ is shown in Fig. \[fig:fccontours\] for combined statistical and systematic uncertainties. ![The 90% confidence regions for $\sin^{2}(2 \theta_{23})$ and $|\Delta m_{32}^{2}|$; results from the two analyses reported here are compared with those from MINOS [@Adamson:2011ig] and Super-Kamiokande [@sk-2011; @takeuchi].[]{data-label="fig:fccontours"}](111220_numupaper_contour_v7_wobest.pdf){width="2.9in"} We also carried out an alternate analysis with a maximum likelihood method. 
The likelihood is defined as: $$\begin{aligned} \label{eqn:LikelihoodAnalysisA} L&=&L_{\mbox{norm}}(\sin^2(2\theta_{23}),\Delta{m_{32}^2},{\bf f}) \nonumber \\ && L_{\mbox{shape}}(\sin^2(2\theta_{23}),\Delta{m_{32}^2},{ \bf f}) L_{\mbox{syst}}({\bf f}),\end{aligned}$$ where the first term is the Poisson probability for the observed number of events, and the second term is the unbinned likelihood for the reconstructed neutrino energy spectrum. The vector ${ \bf f}$ represents parameters related to systematic uncertainties that have been allowed to vary in the fit to maximize the likelihood, and the last term in Eq. \[eqn:LikelihoodAnalysisA\] is a multidimensional Gaussian probability for the systematic error parameters. The result is consistent with the analysis described earlier. The best-fit point for this alternate analysis is $\sin^{2}(2 \theta_{23})$ = 0.99 and $|\Delta m_{32}^{2}|$ = 2.63 $\times$ $10^{-3}$ eV$^{2}$. The 90% confidence region for the neutrino oscillation parameters is shown in Fig. \[fig:fccontours\]. In conclusion, we have reported the first observation of $\nu_{\mu}$ disappearance using detectors positioned off-axis in the beam of a long-baseline neutrino experiment. The values of the oscillation parameters $\sin^{2}(2 \theta_{23})$ and $|\Delta m_{32}^{2}|$ obtained are consistent with those reported by MINOS [@Adamson:2011ig] and Super-Kamiokande [@sk-2011; @takeuchi]. We thank the J-PARC accelerator team for the superb accelerator performance and CERN NA61 colleagues for providing essential particle production data and for their fruitful collaboration. We acknowledge the support of MEXT, Japan; NSERC, NRC and CFI, Canada; CEA and CNRS/IN2P3, France; DFG, Germany; INFN, Italy; Ministry of Science and Higher Education, Poland; RAS, RFBR and the Ministry of Education and Science of the Russian Federation; MEST and NRF, South Korea; MICINN and CPAN, Spain; SNSF and SER, Switzerland; STFC, U.K.; NSF and DOE, U.S.A. 
We also thank CERN for their donation of the UA1/NOMAD magnet and DESY for the HERA-B magnet mover system. In addition, participation of individual researchers and institutions in T2K has been further supported by funds from: ERC (FP7), EU; JSPS, Japan; Royal Society, UK; DOE Early Career program, and the A. P. Sloan Foundation, U.S.A.

[^1]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^2]: also at J-PARC Center
[^3]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^4]: also at J-PARC Center
[^5]: deceased
[^6]: also at J-PARC Center
[^7]: deceased
[^8]: now at CERN
[^9]: also at J-PARC Center
[^10]: also at J-PARC Center
[^11]: also at J-PARC Center
[^12]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^13]: also at Institute of Particle Physics, Canada
[^14]: also at J-PARC Center
[^15]: also at J-PARC Center
[^16]: also at J-PARC Center
[^17]: also at J-PARC Center
[^18]: also at J-PARC Center
[^19]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^20]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^21]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^22]: deceased
[^23]: also at J-PARC Center
[^24]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^25]: also at J-PARC Center
[^26]: also at J-PARC Center
[^27]: also at J-PARC Center
[^28]: also at J-PARC Center
[^29]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^30]: also at J-PARC Center
[^31]: also at Institute of Particle Physics, Canada
[^32]: also at J-PARC Center
[^33]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^34]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^35]: also at J-PARC Center
[^36]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^37]: also at J-PARC Center
[^38]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^39]: also at J-PARC Center
[^40]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^41]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^42]: also at J-PARC Center
[^43]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^44]: also at J-PARC Center
[^45]: also at J-PARC Center
[^46]: also at J-PARC Center
[^47]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^48]: also at J-PARC Center
[^49]: deceased
[^50]: also at JINR, Dubna, Russia
[^51]: also at J-PARC Center
[^52]: also at J-PARC Center
[^53]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^54]: also at J-PARC Center
[^55]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^56]: also at J-PARC Center
[^57]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^58]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^59]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^60]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^61]: also at J-PARC Center
[^62]: also at J-PARC Center
[^63]: also at J-PARC Center
[^64]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^65]: also at J-PARC Center
[^66]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^67]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^68]: also at J-PARC Center
[^69]: also at Institute of Particle Physics, Canada
[^70]: also at J-PARC Center
[^71]: also at J-PARC Center
[^72]: also at J-PARC Center
[^73]: also at J-PARC Center
[^74]: deceased
[^75]: also at J-PARC Center
[^76]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^77]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^78]: also at J-PARC Center
[^79]: also at J-PARC Center
[^80]: also at J-PARC Center
[^81]: also at J-PARC Center
[^82]: also at BMCC/CUNY, New York, New York, U.S.A.
[^83]: also at IPMU, TODIAS, Univ. of Tokyo, Japan
[^84]: In the T2K narrow-band beam, for a low-statistics data set, there is a possible degeneracy between the first oscillation maximum and other oscillation maxima in $L/E$. Therefore we decided in advance to report confidence regions both with and without an explicit bound at $|\Delta m_{32}^2|<5\times 10^{-3}$eV$^2$. For this data set, the bounded and unbounded confidence regions are identical.
--- abstract: 'Starting from a system of $N$ radial Schrödinger equations with a vanishing potential and finite threshold differences between the channels, a coupled $N \times N$ exactly-solvable potential model is obtained with the help of a single non-conservative supersymmetric transformation. Both the obtained potential matrix, which subsumes a result from the literature, and the corresponding Jost matrix have compact analytical forms. The model depends on $N (N+1)/2$ unconstrained parameters and on one upper-bounded parameter, the factorization energy. A detailed study of the model is done for the $2\times 2$ case: a geometrical analysis of the zeros of the Jost-matrix determinant shows that the model has 0, 1 or 2 bound states, and 0 or 1 resonance; the potential parameters are explicitly expressed in terms of its bound-state energies, of its resonance energy and width, or of the open-channel scattering length, which solves schematic inverse problems. As a first physical application, exactly-solvable $2\times 2$ atom-atom interaction potentials are constructed, for cases where a magnetic Feshbach resonance interplays with a bound or virtual state close to threshold, which results in a large background scattering length.' author: - 'Andrey M. Pupasov' - 'Boris F. Samsonov' - 'Jean-Marc Sparenberg' bibliography: - '\$HOME/Biblio/own.bib' - '\$HOME/Biblio/others.bib' title: 'Exactly-solvable coupled-channel potential models of atom-atom magnetic Feshbach resonances from supersymmetric quantum mechanics' --- Introduction ============ Coupled-channel quantum-scattering models are today experiencing a strong renewal of interest, mainly thanks to the impressive experimental progress in the field of ultracold gases. Indeed, an indispensable tool to control the atom-atom interactions in these systems relies on the coupling between channels defined by different hyperfine states of the atom-atom pairs.
When immersed in a magnetic field, these hyperfine states have threshold energies which vary linearly with the field. At ultracold temperatures, the lowest-threshold state is the only open channel, while states with higher thresholds are closed. When the field is varied in a well-chosen range, quasi-bound states of the closed channels can appear as resonances in the open channel, a phenomenon known as a “Feshbach resonance”, first studied in nuclear physics [@feshbach:58; @feshbach:62]. When a Feshbach resonance crosses the open-channel threshold, due to magnetic-field variation, the scattering length $a$, which effectively controls the atom-atom interaction, goes through infinite values, switching from positive to negative sign [@tiesinga:92; @tiesinga:93; @moerdijk:95]. This spectacular phenomenon is now known as a magnetic-field-induced Feshbach resonance, or simply a “magnetic Feshbach resonance”. The practical importance of magnetic Feshbach resonances has motivated various theoretical models, which can be classified into three categories: (i) microscopic models, which should in principle deduce magnetic-Feshbach-resonance properties from many-electron calculations; (ii) effective potential models, which reduce the complexity of the many-electron problem to a two-atom coupled-channel problem (usually two channels are enough), where the interaction between the two atoms is modeled by a symmetric (two by two for a two-channel model) potential matrix; (iii) effective scattering-matrix models, which reduce the role of the underlying interactions to their impact on the atom-atom open-channel scattering matrix. Present-day theoretical descriptions of ultracold gases only require the knowledge of the atom-atom scattering length, which is directly related to the open-channel scattering matrix. As far as practical applications are concerned, the third category of models is thus sufficient.
There, a magnetic Feshbach resonance is described as a pole of the scattering matrix in the complex wave-number planes, like any resonance [@taylor:72], and the whole complexity of the many-electron or atom-atom problem is reduced to a few parameters of the scattering-matrix Padé expansion [@marcelis:04]. These parameters can be numerically obtained, e.g., with the help of the reaction-matrix method [@nygaard:06], from a given microscopic or effective-potential model. In several contexts, however, the use of effective-scattering-matrix models is progressively felt to be insufficient. This is for instance the case when a magnetic Feshbach resonance occurs with a large background scattering length, due to a bound or virtual state in the open channel, close to its threshold [@marcelis:04]. Such an open-channel state is also called a “potential resonance” because it naturally occurs in a potential model, even in a single-channel case. Other situations where a more detailed knowledge of the atom-atom interaction than just the scattering length might be necessary are cases where molecules can be formed, as in crossovers between a Bardeen-Cooper-Schrieffer superfluid and a Bose-Einstein condensate, or in Bose-Einstein-condensate collapses. None of these cases is likely to require solving the full many-body electronic problem; effective-potential models, on the other hand, look like a reasonable approximation, as they allow for a realistic description of the atom-atom interaction in terms of the accessible channels and as a function of the radial coordinate $r$ between atoms. There is thus interest in exactly-solvable coupled-channel potential models with threshold differences. The first example coming to mind is probably the coupled square-well potential, which can display both potential and Feshbach resonances, as well as bound states [@kokkelmans:02].
This model has, however, two drawbacks: first, despite its simplicity and exactly-solvable character, its scattering-matrix poles are given by rather complicated implicit equations. Second, its discontinuous form factor is rather limiting and very different from the known long-range atom-atom polarization interaction. The next choice towards realistic atom-atom interactions is thus a purely numerical resolution of the coupled-channel Schrödinger equation with smooth phenomenological potentials. This lack of exactly-solvable potentials can be related to the poor knowledge of the scattering inverse problem (i.e., the construction of a potential in terms of its bound- or scattering-state physical properties) in the coupled-channel case with threshold differences [@chadan:89]. In Ref. [@cox:64], however, an exactly-solvable coupled-channel potential with threshold differences is derived, two remarkable features of which are the compact expressions provided both for the potential and for its Jost matrix. Since the Jost matrix completely defines the bound- and scattering-state properties of a potential model [@newton:82; @vidal:92a], such an analytical expression seems very promising in the context of the scattering inverse problem. The work of Cox has, however, received little attention, probably because it is plagued by two problems. First, the way of getting the potential is rather complicated and mysterious: the paper mostly consists of a check that the provided analytical expression for the solutions satisfies the coupled-channel Schrödinger equation with the provided analytical expression for the potential. Not much information is given on how these expressions were obtained, which makes any generalization of the method impossible.
The second problem, already stressed in Ref. [@cox:64], is that, despite the compact expression of the Jost matrix, calculating the corresponding bound- and resonant-state properties is a difficult task because these states correspond to zeros of the [*determinant*]{} of the Jost matrix in the intricate structure of the energy Riemann sheet, which has a multiplicity $2^N$ for $N$ channels. The first problem was solved recently, when it was realized that the Cox potential, at least in its simplest form ($q=1$ in Ref. [@cox:64]), can be obtained by a single supersymmetric transformation of the zero potential [@sparenberg:06; @samsonov:07]. This leads to a much simpler derivation of this potential and naturally enables several generalizations of it; in particular, the initial potential is now arbitrary. The transformation used to get this result belongs to a category of supersymmetric transformations not much used up to now, namely transformations that do not respect the boundary condition at the origin (the so-called [*non-conservative*]{} transformations, see e.g. Ref. [@sparenberg:06]): a solution of the initial potential vanishing at the origin is transformed into a solution of the transformed potential which is finite at the origin. This feature makes the transformation of the Jost matrix more complicated to calculate than for usual conservative transformations, but it is also the key to get potentials with nontrivial coupling. In Sec. \[sec:Cox\] below, we give several alternative expressions for the Cox potential and explicitly make the link between the supersymmetric derivation and the expressions found in Ref. [@cox:64], for an arbitrary number of channels $N$. With respect to Refs. [@sparenberg:06; @samsonov:07], this result is new as these references mostly concentrate on generalizations of the Cox potential allowed by supersymmetric quantum mechanics and on $N=2$ examples. We also show in Sec.
\[sec:Cox\] that the Cox potential contains the maximal number of arbitrary parameters allowed by a single non-conservative supersymmetric transformation, which makes it the most interesting potential from the point of view of the scattering inverse problem; we also derive a new necessary and sufficient condition for the regularity of the potential. The second problem, i.e., the calculation of bound- and scattering-state properties from the analytical Jost function, is touched upon in Sec. \[3\]. There, the discussion is limited to $N=2$ (a case complicated enough from the mathematical point of view but very rich already from the physical point of view). First, the number of bound states and resonant states is studied geometrically in terms of the potential parameters, as well as the necessary and sufficient condition for a regular potential (several mistakes made in Ref. [@cox:64], in particular regarding the number of bound states, are corrected in passing). Second, schematic inverse problems are solved, where the potential parameters are expressed in terms of physical quantities like bound-state energies, resonance energy and width; third, the low-energy behavior of the open-channel scattering matrix is studied, with ultracold gases in mind. This discussion makes possible a first practical use of this potential as a schematic model of atom-atom magnetic Feshbach resonances, which is described in Sec. \[4\]. There, an exactly-solvable model is established in cases where a magnetic Feshbach resonance interplays with a potential resonance, which results in a large background scattering length, either positive (interplay with a bound state) or negative (interplay with a virtual state). This physical context is mostly inspired by Ref. [@marcelis:04]. Sec. \[sec:conclusion\] finally summarizes our findings and discusses possible extensions of them, in particular to other fields of physics where coupled-channel models are known to play an important role. 
\[sec:Cox\] The Cox potential from supersymmetric quantum mechanics =================================================================== Let us first summarize the notations used below for coupled-channel scattering theory [@taylor:72; @newton:82; @vidal:92a]. We consider a multichannel radial Schrödinger equation that reads in reduced units $$\label{schr} H\psi(k,r)=K^2\psi(k,r),$$ with $$H=-\frac{d^2}{d r^2}+V,$$ where $r$ is the radial coordinate, $V$ is an $N\times N$ real symmetric matrix, and $\psi$ may be either a matrix-valued or a vector-valued solution. By $k$ we denote a point in the space ${\mathbb C}^N$, $k=\left\{k_1,\ldots,k_N\right\}$, $k_i\in \mathbb C$. A diagonal matrix with non-vanishing entries $k_i$ is written as $K=\mbox{diag}(k)=\mbox{diag}(k_1,\ldots,k_N)$. The complex wave numbers $k_i$ are related to the center-of-mass energy $E$ and the channel thresholds $\Delta_1,\dots, \Delta_N$, which are supposed to be different from each other, by $$\label{thrE} k_i^2=E-\Delta_i\,.$$ For simplicity, we assume here that the different channels have equal reduced masses, a case to which the general situation can always be formally reduced [@newton:82]. We also assume potential $V$ to be short-ranged at infinity and to support a finite number $M$ of bound states. Under such assumptions, the Schrödinger equation has two $N\times N$ matrix-valued Jost solutions which allow one to construct the Jost matrix $F(k)$ defining both scattering and bound-state properties. The scattering matrix, which is symmetric, reads $$\begin{aligned} S(k) & = & K^{-1/2}F(-k)F^{-1}(k)K^{1/2} \nonumber \\ & = & K^{1/2}[F^{-1}(k)]^T F^T(-k)K^{-1/2}, \label{S}\end{aligned}$$ with $T$ meaning transposition and $-k=\{-k_1,\dots,-k_N\}$. The zeros of the determinant of the Jost matrix, which are defined by $\det F(k)\equiv 0$, thus correspond to poles of all the elements of the scattering matrix. 
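A minimal numerical illustration of the kinematics in Eq. (thrE), with hypothetical thresholds: for an energy between two thresholds, the open-channel wave number is real while the closed-channel one lies on the positive imaginary axis, $k_i = i\sqrt{\Delta_i - E}$.

```python
import numpy as np

# Channel wave numbers k_i^2 = E - Delta_i for hypothetical thresholds (reduced units).
deltas = np.array([0.0, 2.0])        # Delta_1 = 0, Delta_2 = 2

def wave_numbers(E):
    """Momenta on the physical sheet: real above threshold, +i*sqrt(Delta-E) below."""
    return np.where(E >= deltas,
                    np.sqrt(np.maximum(E - deltas, 0.0)) + 0j,
                    1j * np.sqrt(np.maximum(deltas - E, 0.0)))

k = wave_numbers(1.0)   # energy between the two thresholds: channel 1 open, channel 2 closed
```

Both momenta satisfy $k_i^2 = E - \Delta_i$ regardless of which branch is taken, which is what the Riemann-sheet discussion below relies on.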
Bound states correspond to such zeros $k_m$, with $m=1,\dots,M$, lying on the positive imaginary $k_i$ axes for all channels: $k_{mi}= i \kappa_{mi}$ with $\kappa_{mi} \ge 0$ and $i=1,\dots,N$. The corresponding energies, $E_m=-\kappa_{mi}^2+\Delta_i$, lie below all thresholds. For simplicity, we call virtual state any other zero of the Jost-matrix determinant corresponding to a real energy below all thresholds, but not lying on all the positive imaginary $k_i$ axes. Finally, we call resonance any zero of the Jost-matrix determinant not lying on the imaginary $k_i$ axes, hence corresponding either to a complex energy or to a real energy above at least one threshold. Note that for a resonance to have a visible impact on the physical scattering matrix it should be located sufficiently close to the real axis. Let us then summarize the main results from supersymmetric quantum mechanics in the coupled-channel case [@amado:88a; @amado:88b]. Starting from an initial potential $V$ and its solutions $\psi$, a supersymmetric transformation allows the construction of a new potential $$\label{Vt} \tilde{V}(r)=V(r)-2 U'(r)$$ with solutions $$\label{psit} \tilde{\psi}(k,r)=\left[-\frac{d}{dr}+U(r)\right]\psi(k,r)\,,$$ where the so-called [superpotential]{} $U$ is expressed in terms of a square matrix $\sigma$ by $$\label{U} U(r)=\sigma'(r) \sigma^{-1}(r)\,.$$ Matrix $\sigma$ is called the factorization solution; it is a solution of the initial Schrödinger equation $$H \sigma(r) = -{\cal K}^2 \sigma(r)\,,$$ where ${\cal K}=\mbox{diag}(\kappa)=\mbox{diag}(\kappa_1,\dots,\kappa_N)$ is a diagonal matrix called the factorization wave number, which corresponds to an energy $\cal E$ lying below all thresholds, called the factorization energy. The entries of $\cal K$ thus satisfy ${\cal E}=-\kappa_i^2+\Delta_i$; by convention, we choose them positive: $\kappa_i>0$. 
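These transformation formulas can be checked numerically. The sketch below takes the single-channel ($N=1$) case with $V=0$ and hypothetical values of $\kappa$, $U_0$ and $k$, builds $\sigma$, $U=\sigma'/\sigma$ and $\tilde V = -2U'$, and verifies by finite differences that $\tilde\psi = (-d/dr + U)\psi$ solves the transformed Schrödinger equation at energy $k^2$.

```python
import numpy as np

kappa, u0, k = 1.0, 0.5, 1.3           # hypothetical factorization and scattering parameters

r = np.linspace(0.1, 10.0, 20001)
h = r[1] - r[0]

# Factorization solution of the V=0 equation at energy -kappa^2: sigma'' = kappa^2 sigma.
sigma = np.cosh(kappa * r) + (u0 / kappa) * np.sinh(kappa * r)
dsigma = kappa * np.sinh(kappa * r) + u0 * np.cosh(kappa * r)
U = dsigma / sigma                      # superpotential U = sigma'/sigma, Eq. (U)
dU = np.gradient(U, h)
V_t = -2.0 * dU                         # transformed potential V~ = V - 2U', Eq. (Vt)

psi = np.sin(k * r)                     # solution of the V=0 equation at energy k^2
psi_t = -k * np.cos(k * r) + U * psi    # psi~ = (-d/dr + U) psi, Eq. (psit)

# Residual of (-d^2/dr^2 + V~) psi~ - k^2 psi~ on interior grid points.
d2 = (psi_t[2:] - 2 * psi_t[1:-1] + psi_t[:-2]) / h**2
res = -d2 + V_t[1:-1] * psi_t[1:-1] - k**2 * psi_t[1:-1]
```

The residual is zero up to finite-difference error, and $\tilde V$ decays exponentially at large $r$, as expected for a transformation of the zero potential with $\sigma$ nodeless.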
Equation  implies that all the physical properties of the transformed potential can be expressed in terms of those of the initial potential, in particular its Jost matrix and scattering matrix. Let us now apply these results to a vanishing initial potential $V=0$, for which the Jost matrix and scattering matrix are identity, $S(k)=F(k)=I$. For a given factorization energy, the most general real symmetric superpotential depends on an $N$-dimensional real symmetric matrix of arbitrary parameters, i.e., on $N(N+1)/2$ real arbitrary parameters [@samsonov:07]. When $V=0$, the corresponding factorization solution can be written as \[sigCox\] $$\begin{aligned} \sigma(r)=\cosh(\kappa r) + {\cal K}^{-1} \sinh(\kappa r) U_0 \label{sigU} \\ =(2{\cal K})^{-1} [ \exp(\kappa r) ({\cal K} + U_0) + \exp(-\kappa r) ({\cal K} - U_0)], \label{sigexp}\end{aligned}$$ which ensures that the resulting potential $\tilde{V}$ is regular at the origin, and where the arbitrary parameters explicitly appear as the value of the (symmetric) superpotential at the origin, $U_0 \equiv U(0)$; $ \exp(\pm \kappa r)$, $ \cosh(\kappa r)$ and $\sinh(\kappa r)$ are diagonal matrices with entries $ \exp(\pm \kappa_i r)$, $ \cosh(\kappa_i r)$ and $\sinh(\kappa_i r)$ respectively. According to Ref. [@samsonov:07], when ${\cal K}+U_0$ is invertible, the transformed Jost matrix reads $$\label{FtCox} \tilde{F}(k)=({\cal K}-i K)^{-1}(U_0-i K).$$ This is the Jost function obtained by other means in Ref. [@cox:64] in the case $q=1$. However, it was not realized there that the corresponding potential could be simply expressed in terms of a solution matrix $\sigma$, using Eqs.  and . In that reference, a compact expression for the potential is found \[see Eq.  below\] but writing  and  is much more elegant because both the potential  and its Jost function  are expressed in terms of the same parameter matrix $U_0$. 
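Equation (FtCox) is easy to evaluate numerically. A sketch for $N=2$ with hypothetical values of $\cal K$ and $U_0$, checking two sanity properties that follow directly from the reality of $\cal K$, $U_0$ and $K$: $\tilde F(k)^* = \tilde F(-k)$ for real momenta, and $\det \tilde F = \det(U_0 - iK)/\det({\cal K} - iK)$.

```python
import numpy as np

# Cox Jost matrix F~(k) = (K_f - iK)^{-1} (U0 - iK), Eq. (FtCox), for N = 2.
kf = np.diag([1.0, 1.5])                       # factorization wave numbers kappa_i (hypothetical)
U0 = np.array([[0.3, 0.4], [0.4, -0.2]])       # hypothetical symmetric parameter matrix

def jost(k1, k2):
    K = np.diag([k1, k2]).astype(complex)      # diagonal channel-momentum matrix
    return np.linalg.inv(kf - 1j * K) @ (U0 - 1j * K)

F_plus = jost(0.7, 0.5)                        # real momenta: both channels open
F_minus = jost(-0.7, -0.5)
```

The determinant identity is what makes the zeros of $\det \tilde F$, and hence the bound and resonant states, accessible analytically.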
Nevertheless, this procedure also presents several disadvantages: calculating the potential requires several matrix operations (inversion, product, derivations); moreover, the parameters in $U_0$ should be chosen so that the factorization solution is invertible for all $r$, a condition not easily checked on Eqs. . Let us now derive an alternative form for the factorization solution, which solves both these inconveniences. In Ref. [@samsonov:07], the possibility of rank $({\cal K} +U_0) < N$ in Eq.  has been studied, which leads to an interesting asymptotic behavior of the superpotential but which reduces the number of parameters in the model. Here, in order to keep the maximal number of arbitrary parameters in the potential, we choose ${\cal K} +U_0$ invertible. The factorization solution  can then be multiplied on the right by $2 ({\cal K} +U_0)^{-1} {\cal K}^{1/2}$, which leads to the factorization solution $$\sigma(r) = {\cal K}^{-1/2} \left[ \exp(\kappa r) + \exp(-\kappa r) X_0 \right]. \label{sigX0}$$ According to Eq. , the superpotential, and hence the transformed potential, is unaffected by this multiplication. The symmetric matrix $X_0$ now contains all the arbitrary parameters. The link between the two sets of parameters is given by $$\begin{aligned} \label{XU} X_0 & = & {\cal K}^{-1/2} ({\cal K} - U_0) ({\cal K} + U_0)^{-1}{\cal K}^{1/2}\,, \\ U_0 & = & {\cal K}^{1/2} (I-X_0) (I+X_0)^{-1}{\cal K}^{1/2}\,. \label{UX}\end{aligned}$$ Equation  can also be written as $$\label{sigX} \sigma(r) = {\cal K}^{-1/2} \left[ I + X(r) \right] \exp(\kappa r)\,,$$ where $$\label{X} X(r)= \exp(-\kappa r) X_0 \exp(-\kappa r).$$ With respect to writing  and , Eq.  presents several advantages. First, it allows for a simple calculation of the superpotential $$\begin{aligned} U(r) & = & {\cal K} - 2 {\cal K}^{1/2} X(r) [I+X(r)]^{-1} {\cal K}^{1/2} \nonumber \\ & = & -{\cal K} + 2 {\cal K}^{1/2} [I+X(r)]^{-1} {\cal K}^{1/2}\,. 
\label{UrX}\end{aligned}$$ The last expression is particularly convenient since the $r$ dependence is limited to one factor of the second term; the potential can thus be explicitly written as $$\begin{aligned} \tilde{V}(r) & = & 4 {\cal K}^{1/2} [I+X(r)]^{-1} X'(r) [I+X(r)]^{-1} {\cal K}^{1/2} \nonumber \\ & = & - 4 {\cal K}^{1/2} \left(e^{\kappa r} + X_0 e^{-\kappa r}\right)^{-1} (X_0 {\cal K}+{\cal K} X_0) \left(e^{\kappa r} + e^{-\kappa r} X_0 \right)^{-1} {\cal K}^{1/2}\,.\end{aligned}$$ The last expression is exactly equivalent to Eq. (4.7) of Ref. [@cox:64] for $q=1$, which reads $$\begin{aligned} \tilde{V}(r)&=& 2 e^{-\kappa r} \left[I-A (2{\cal K})^{-1} e^{-2 \kappa r}\right]^{-1} (A{\cal K}+{\cal K}A) \nonumber \\ & \times & \left[I-e^{-2 \kappa r} (2{\cal K})^{-1}A \right]^{-1} e^{-\kappa r}\,, \label{VCox}\end{aligned}$$ provided one defines matrix $A$ as $$\begin{aligned} A & = & -2 {\cal K}^{1/2} X_0 {\cal K}^{1/2} \nonumber \\ & = & - 2 ({\cal K} - U_0)({\cal K}+U_0)^{-1} {\cal K}\,. \label{AU}\end{aligned}$$ The second advantage of writing  is that it easily leads to a necessary and sufficient condition on the parameters to get a potential without singularity at finite distances. This condition is positive definiteness of matrix $I+X_0$: $$\label{posX} I+X_0>0\,.$$ The potential has a singularity when $\sigma(r)$ is noninvertible, i.e., when $\det[I+X(r)]$ vanishes for some $r$. Using Eq. , we find that this is equivalent to the existence of $r_0\ge0$ such that $\det Y(r_0)=0$ with $Y(r)=\exp(2 \kappa r)+X_0$. Assume now that $\det Y(r)\ne0$ $\forall r\ge0$. Since $\det Y(r)=\prod_{i=1}^N y_i(r)$ where $y_i(r)$ are the eigenvalues of $Y(r)$, we conclude that $y_i(r)\ne0$ for all $i=1,\ldots,N$ and $r\ge0$. But since for sufficiently large $r$, $X_0$ becomes a small perturbation to $\exp(2\kappa r)$, all eigenvalues of $Y(r)$ should be positive for $r\ge0$ and in particular at $r=0$, thus proving the necessary character of the above condition. 
The sufficiency follows from the observation that $Y(r)$ is positive definite for any $r\ge0$, together with $Y(0)=I+X_0$. Indeed, if $Y(r)$ is positive definite, the inequality $\langle q|Y(r)|q\rangle >0$ holds for any $q\in L_N$. Here $\langle p\,|q\rangle=\sum_{i=1}^Np^*_iq_i $ is the usual inner product in the $N$-dimensional complex linear space $L_N$, with $p_i$, $q_i$ being coordinates of the vectors $p,q\in L_N$ with respect to an orthonormal basis. But since $\langle q|Y(r)|q\rangle =\langle q|X_0|q\rangle+ \langle q|\exp(2 \kappa r)|q\rangle \ge \langle q|X_0|q\rangle+ \langle q|q\rangle=\langle q|X_0+I|q\rangle$ \[we recall that $r\ge0$, $\kappa_i>0$ and $\exp(\kappa r)$ is a diagonal matrix with entries $\exp(\kappa_i r)$\], positive definiteness of $I+X_0$ implies positive definiteness of $Y(r)$ for $r\ge0$. Having established this condition on $X_0$, one can get the condition in terms of $U_0$, using Eq. . Since $$I+X_0= 2 {\cal K}^{1/2} ({\cal K} + U_0)^{-1} {\cal K}^{1/2},$$ the necessary and sufficient condition to get a regular potential is positive definiteness of matrix ${\cal K}+U_0$: $$\label{posU} {\cal K}+U_0>0\,.$$ Since the (diagonal) elements of $\cal K$ are positive and increase when the factorization energy decreases, this condition has a simple interpretation: it just puts some upper limit on the factorization energy. Finally, Eq.  shows that the condition det $A\ne0$ required in Ref. [@cox:64] is not required here. In Cox’ paper, this condition does not appear in the potential expression, which is valid in the general case, but only in the derivation of the proof; the fact that this condition is not required here illustrates the efficiency of the supersymmetric formalism. Equation  also implies that rank $({\cal K}+U_0) < N$ corresponds to det $A=\infty$, a case also not considered in Ref. [@cox:64]. 
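The regularity criterion ${\cal K}+U_0>0$ (equivalently $I+X_0>0$) amounts to an eigenvalue check. A sketch with hypothetical parameter values, which also verifies numerically that Eqs. (XU) and (UX) are inverses of each other:

```python
import numpy as np

kf = np.diag([1.0, 1.5])                       # kappa_i > 0 (hypothetical)
U0 = np.array([[0.3, 0.4], [0.4, -0.2]])       # hypothetical symmetric parameters

def is_regular(kf, U0):
    """Singularity-free potential iff kf + U0 is positive definite."""
    return bool(np.all(np.linalg.eigvalsh(kf + U0) > 0))

s = np.sqrt(np.diag(kf))                       # entries of K^{1/2}
# Eq. (XU): X0 = K^{-1/2} (K - U0)(K + U0)^{-1} K^{1/2}
X0 = np.diag(1 / s) @ (kf - U0) @ np.linalg.inv(kf + U0) @ np.diag(s)
# Eq. (UX): round trip back to U0
U0_back = np.diag(s) @ (np.eye(2) - X0) @ np.linalg.inv(np.eye(2) + X0) @ np.diag(s)
```

Lowering the factorization energy increases the $\kappa_i$ and eventually makes ${\cal K}+U_0$ positive definite for any fixed $U_0$, in line with the interpretation of the condition as an upper bound on the factorization energy.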
The supersymmetric treatment, on the contrary, allows this case [@sparenberg:06; @samsonov:07]; our approach thus subsumes the results of Ref. [@cox:64] in several respects. General properties of the $2\times2$ Cox potential\[3\] ======================================================= Having established a connection between the Cox potential and supersymmetric quantum mechanics, we now proceed to a more detailed analysis of its properties in the simplest particular case, $N=2$. As it happens, this case is not only complicated enough to deserve a dedicated analysis, but also rich enough to make the solution of several interesting inverse problems possible. Explicit expression of the potential ------------------------------------ For $N=2$, the arbitrary parameters entering the Cox potential are the entries of the superpotential matrix at the origin, $$U_0 \equiv U(0)= \left( \begin{array}{cc} \alpha_1 & \beta \\ \beta & \alpha_2 \end{array} \right), \label{sp0}$$ and the factorization energy $\cal E$. The corresponding factorization wave number, $\kappa=(\kappa_1, \kappa_2)$, is made of two positive parameters $\kappa_1$ and $\kappa_2$ which are not independent of each other: they should satisfy the “threshold condition” \[see Eq. \] $$\kappa_2^2-\kappa_1^2=\Delta. \label{threshold-}$$ Here and in what follows we put for convenience $\Delta_1=0$, $\Delta_2=\Delta>0$. In terms of these parameters, the necessary and sufficient condition for a regular potential, i.e., ${\cal K}+U_0$ positive definite, can be written, for instance, as \[nsc12\] $$\begin{aligned} \kappa_1 & > & -\alpha_1, \label{nsc1} \\ \kappa_2 & > & \frac{\b^2}{\kappa_1+\a_1}-\a_2. \label{nsc2}\end{aligned}$$ This puts an upper limit on the factorization energy in terms of the parameters appearing in $U_0$ \[see Eq.  and Fig. \[figS\] below\]. Two explicit expressions for the superpotential are given in Ref. [@samsonov:07]. Using Eqs. 
and , one gets what is probably the simplest possible explicit expression for the potential itself: \[vtcox\] $$\begin{aligned} \tilde{v}_{11} & = & -8 \kappa_1 e^{-2 \kappa_1 r} \ \frac{x_{11} \kappa_1 +\left[2 x_{11} x_{22} \kappa_1 - x_{12}^2 \left(\kappa_1+\kappa_2\right)\right] e^{-2 \kappa_2 r} + x_{22} \left(x_{11} x_{22} - x_{12}^2\right) \kappa_1 e^{-4 \kappa_2 r}} {\left[1+x_{11} e^{-2\kappa_1 r} +x_{22} e^{-2\kappa_2 r} + \left(x_{11} x_{22} - x_{12}^2\right) e^{-2(\kappa_1+\kappa_2) r} \right]^2}, \label{vt11} \\ \tilde{v}_{12} & = & -4 x_{12} \sqrt{\kappa_1 \kappa_2} e^{-(\kappa_1+\kappa_2) r} \times \nonumber \\ && \frac{\kappa_1+\kappa_2+x_{11} (\kappa_2-\kappa_1) e^{-2\kappa_1 r} +x_{22} (\kappa_1-\kappa_2) e^{-2\kappa_2 r} - \left(x_{11} x_{22} - x_{12}^2\right) (\kappa_1+\kappa_2) e^{-2(\kappa_1+\kappa_2) r}} {\left[1+x_{11} e^{-2\kappa_1 r} +x_{22} e^{-2\kappa_2 r} + \left(x_{11} x_{22} - x_{12}^2\right) e^{-2(\kappa_1+\kappa_2) r} \right]^2}.\end{aligned}$$ The element $\tilde{v}_{22}$ is obtained from Eq.  by the replacements $\kappa_1 \leftrightarrow \kappa_2$ and $x_{11} \leftrightarrow x_{22}$. Here, we have used the symmetric matrix $$X_0 = \left( \begin{array}{cc} x_{11} & x_{12} \\[.5em] x_{12} & x_{22} \end{array} \right),$$ which is related to matrix  by Eqs.  and . In the following, as we are mostly interested in the Jost-matrix properties, we shall rather use matrix $U_0$. \[subsec:zeros\] Zeros of the Jost-matrix determinant ----------------------------------------------------- Let us denote for convenience the channel wave numbers as $k_1=k$ and $k_2=p$, with the threshold condition $$k^2-p^2=\Delta. \label{threshold}$$ Then, according to Eq. , the Jost matrix for the Cox potential reads (see also Refs. [@cox:64; @sparenberg:06; @samsonov:07]) $$\tilde{F}(k,p)= \left( \begin{array}{cc} \dfrac{k+i\a_1}{k+i\kappa_1} & \dfrac{i\b}{p+i\kappa_2} \\[.8em] \dfrac{i\b}{k+i\kappa_1} & \dfrac{p+i\a_2}{p+i\kappa_2} \end{array} \right). \label{F}$$ The determinant of the Jost matrix coincides with the Fredholm determinant of the corresponding integral equation [@newton:82]; it reads here $$f(k,p)\equiv\det \tilde{F}(k,p)= \frac{(k+i\a_1)(p+i\a_2)+\b^2}{(k+i\kappa_1)(p+i\kappa_2)}\,. \label{det1}$$ 
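As a cross-check, the explicit $N=2$ elements quoted above can be compared with the general matrix expression for $\tilde V(r)$ given earlier; the two agree to machine precision (a sketch with arbitrary sample values of $\kappa_{1,2}$ and $X_0$):

```python
import numpy as np

def v_matrix(r, kap, X0):
    """Matrix form: -4 K^{1/2}(e^{kr}+X0 e^{-kr})^{-1}(X0 K + K X0)(e^{kr}+e^{-kr}X0)^{-1}K^{1/2}."""
    K, Ks = np.diag(kap), np.diag(np.sqrt(kap))
    E, Ei = np.diag(np.exp(kap * r)), np.diag(np.exp(-kap * r))
    return (-4 * Ks @ np.linalg.inv(E + X0 @ Ei) @ (X0 @ K + K @ X0)
            @ np.linalg.inv(E + Ei @ X0) @ Ks)

def v_explicit(r, k1, k2, x11, x12, x22):
    """Explicit 2x2 elements quoted in the text (v22 follows from v11 by 1 <-> 2)."""
    d = x11 * x22 - x12**2
    den = (1 + x11*np.exp(-2*k1*r) + x22*np.exp(-2*k2*r)
             + d*np.exp(-2*(k1 + k2)*r))**2
    v11 = -8*k1*np.exp(-2*k1*r)*(x11*k1
            + (2*x11*x22*k1 - x12**2*(k1 + k2))*np.exp(-2*k2*r)
            + x22*d*k1*np.exp(-4*k2*r)) / den
    v22 = -8*k2*np.exp(-2*k2*r)*(x22*k2
            + (2*x11*x22*k2 - x12**2*(k1 + k2))*np.exp(-2*k1*r)
            + x11*d*k2*np.exp(-4*k1*r)) / den
    v12 = -4*x12*np.sqrt(k1*k2)*np.exp(-(k1 + k2)*r)*(k1 + k2
            + x11*(k2 - k1)*np.exp(-2*k1*r) + x22*(k1 - k2)*np.exp(-2*k2*r)
            - d*(k1 + k2)*np.exp(-2*(k1 + k2)*r)) / den
    return np.array([[v11, v12], [v12, v22]])

kap = np.array([1.0, 1.5])
x11, x12, x22 = 0.5, 0.2, 0.3
X0 = np.array([[x11, x12], [x12, x22]])
```

In the uncoupled limit $x_{12}=0$, both forms collapse to two independent one-channel Bargmann-type terms, which provides a quick sanity check of the implementation.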
The zeros of this Jost determinant in the $k$ and $p$ complex planes, which correspond to bound, virtual or resonant states, are functions of the parameters $\a_1$, $\a_2$ and $\b$ only. From this and the threshold condition , one obtains the system of equations determining the zeros of $f(k,p)$, \[sys\] $$\begin{aligned} && k^2-p^2=\Delta, \\ && (k+i\a_1)(p+i\a_2)+\b^2=0, \label{sys1}\end{aligned}$$ which is equivalent to the fourth-order algebraic equation $$k^4+ia_1k^3+a_2k^2+ia_3k+a_4=0, \label{k4}$$ where $$\begin{aligned} a_1 & = & 2\a_1, \\ a_2 & = & \a_2^2-\a_1^2-\Delta, \\ a_3 & = & 2[\a_1(\a_2^2-\Delta)-\a_2\b^2], \\ a_4 & = & -\a_1^2(\a_2^2-\Delta)+2\a_2\b^2\a_1-\b^4.\end{aligned}$$ We notice that, after the substitution $k=i\lambda$, Eq.  becomes an algebraic equation in $\lambda$ with real coefficients, $\lambda^4+a_1\lambda^3-a_2\lambda^2-a_3\lambda+a_4=0$. Its four roots are thus either real numbers, which correspond to real negative energies (bound or virtual states), or mutually-conjugated complex numbers, which correspond to mutually-conjugated complex energies (resonant states). Based on this property, we will use in what follows a geometric representation of the system of equations which allows for a visualization of the zeros of $f(k,p)$ in the parameter space. Let us first consider bound and virtual states, which correspond to solutions of system  with $k$ and $p$ purely imaginary. After the substitution $k=i\lambda$, $p=i\rho$, with $\lambda$ and $\rho$ real, these equations define two hyperbolas in the $(\lambda,\rho)$-plane, \[syslr\] $$\begin{aligned} \label{sys1a} && \rho^2-\lambda^2 = \Delta\,,\\ && (\lambda+\a_1)(\rho+\a_2) = \b^2, \label{sys1b}\end{aligned}$$ the positions of which are defined by the values of the parameters $\a_1$, $\a_2$, $\b$ and $\Delta$. The roots of system  that correspond to bound and virtual states are the intersection points of these hyperbolas. Different possibilities of hyperbola locations are shown in Fig. \[fig1\]. 
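The real-coefficient quartic in $\lambda$ is convenient for numerics: its real roots, combined with the second hyperbola, classify bound and virtual states directly (a sketch; the sample parameter values are those of the two-bound-state example quoted later in the text):

```python
import numpy as np

def jost_zeros_lambda(al1, al2, b, Delta):
    """Roots of lambda^4 + a1 lambda^3 - a2 lambda^2 - a3 lambda + a4 = 0,
    obtained from the substitution k = i*lambda in the quartic."""
    a1 = 2*al1
    a2 = al2**2 - al1**2 - Delta
    a3 = 2*(al1*(al2**2 - Delta) - al2*b**2)
    a4 = -al1**2*(al2**2 - Delta) + 2*al2*b**2*al1 - b**4
    return np.roots([1.0, a1, -a2, -a3, a4])

def count_bound_states(al1, al2, b, Delta):
    """Bound states are real roots with lambda > 0 AND rho > 0 (first quadrant)."""
    nb = 0
    for lam in jost_zeros_lambda(al1, al2, b, Delta):
        if abs(lam.imag) < 1e-8 and lam.real > 0:
            rho = b**2 / (lam.real + al1) - al2    # second hyperbola
            if rho > 0:
                nb += 1
    return nb

roots = jost_zeros_lambda(-0.112649, -1.79557, 0.1, 1.0)
```

Note the quadrant test on $\rho$: a positive $\lambda$ root alone is not enough, since an intersection in the fourth quadrant ($\lambda>0$, $\rho<0$) is a virtual state.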
The solid-line hyperbola corresponds to the threshold condition ; its semi-major axis is $\sqrt{\Delta}$ and its slant asymptotes are given by $\rho=\pm \lambda$. The dashed-line hyperbola corresponds to Eq. ; its asymptotes are given by $\lambda=-\alpha_1$ and $\rho=-\alpha_2$. The abscissa (resp., ordinate) of a crossing point in the $(\lambda,\rho)$-plane gives the position of the corresponding zero on the imaginary axis in the $k$-plane (resp., $p$-plane), as shown in the second (resp., third) column of Fig. \[fig1\]. Bound states correspond to $\lambda,\rho>0$, i.e., to intersection points lying in the first quadrant of the $(\lambda,\rho)$-plane, while virtual states correspond to intersections in the second, third and fourth quadrants. It is clearly seen in Fig. \[fig1\] that the two hyperbolas  and  cross in either two or four points. Moreover, they can have zero, one or two intersections in the first quadrant, which means that the potential has either zero, one or two bound states. This contradicts Ref. [@cox:64], where it is stated that the potential never supports bound states. Since Eq.  is of fourth order, when the hyperbolas cross in four points, the Jost determinant does not have any other zero; on the other hand, when the hyperbolas cross in only two points, the Jost determinant has two other zeros, which have to form a mutually-conjugated complex pair, as seen above. This last case corresponds to a resonance, as illustrated by Fig. \[fig1\](c), where the hyperbolas only have two intersection points in the $(\lambda,\rho)$-plane and a pair of complex roots appears in the complex $k$ and $p$ planes. The potential thus has either zero or one resonance. The intermediate case of three intersection points for the hyperbolas \[Fig. \[fig1\](b)\] corresponds to the presence of a multiple root of Eq. 
, which lies in an unphysical sheet (${\rm Im} k<0,\,{\rm Im}p>0$ or ${\rm Im} k>0,\,{\rm Im}p<0$) of the Riemann energy surface; this case corresponds to a transition between a one-resonance and a two-virtual-state situation. One sees that the parameters $\a_1$ and $\a_2$ determine the position of hyperbola  and, hence, the number of bound states $n_b$ (0, 1 or 2) and of resonances $n_r$ (0 or 1). Let us now determine, for fixed values of $\b$ and $\Delta$, the domains in the plane of parameters $\mathbb{A}=(\a_1,\a_2)$ with constant values of $n_b$ and $n_r$. To find domains in $\mathbb{A}$ where system  has two complex conjugated roots (one resonance), we consider the case where the hyperbolas have a common tangent point, as illustrated by Fig. \[fig1\](b). One can see that decreasing either $\a_1$ or $\a_2$ leads to the disappearance of the resonance, while increasing either of them leads to its appearance. We define the parametric curves $[\a_1(\lambda_0,\rho_0),\alpha_2(\lambda_0,\rho_0)]$ in plane $\mathbb{A}$ by shifting the tangent point $(\lambda_0,\rho_0)$ along the hyperbola $\rho^2-\lambda^2=\Delta$. These curves delimit domains in $\mathbb{A}$ with either zero or two complex roots. To find them, we use the two conditions corresponding to the common tangent point $(\lambda_0,\rho_0)$, \[sysres\] $$\begin{aligned} && \rho_0=\frac{\b^2}{\lambda_0+\a_1}-\a_2= \pm\sqrt{\lambda_0^2+\Delta}, \\ && \left. \frac{d\rho}{d\lambda}\right|_{\lambda=\lambda_0}= -\frac{\b^2}{(\lambda_0+\a_1)^2} =\pm\frac{\lambda_0}{\sqrt{\lambda_0^2+\Delta}}\,.\end{aligned}$$ The upper signs correspond to $\lambda_0<0$ (tangent point in the second quadrant) while the lower signs correspond to $\lambda_0>0$ (tangent point in the fourth quadrant). 
We can solve system  with respect to $\a_1$ and $\a_2$: \[rc1\] $$\begin{aligned} \!\!\a_1(\lambda_0)&=& \pm\frac{\b}{\sqrt{|\lambda_0|}}(\lambda_0^2+\Delta)^{1/4}-\lambda_0\,,\\ \!\!\a_2(\lambda_0)&=& \pm\frac{\b\sqrt{|\lambda_0|}}{(\lambda_0^2+\Delta)^{1/4}}+{\rm sign}(\lambda_0)\sqrt{\lambda_0^2 +\Delta} \,.\end{aligned}$$ It should be noted that the Schrödinger equation with the Cox potential has the following scale invariance: $$\begin{aligned} \a_{1,2} &\to& \gamma\a_{1,2}\,,\qquad \Delta \to \gamma^2\Delta\,,\\ \kappa_{1,2} &\to& \gamma \kappa_{1,2}\,,\qquad \b \to \gamma \b\,, \\ r &\to& r/\gamma\,,\end{aligned}$$ which leaves $\Delta_d=\Delta/\b^2$ invariant. Hence, we may put $\Delta=1$ without losing generality. This choice is equivalent to measuring energies in units of $\Delta$. It is convenient to express the equations in terms of the dimensionless variables $\a_i/\b$, $\Delta_d=\Delta/\b^2$, $\lambda_0\rightarrow\lambda_0/\b$: \[rc2\] $$\begin{aligned} \!\!\!\!\!\!\!\! \frac{\a_1}{\b}(\lambda_0)&=& \pm\frac{1}{\sqrt{|\lambda_0|}}(\lambda_0^2+\Delta_d)^{1/4}-\lambda_0\,,\\ \!\!\!\!\!\!\!\! \frac{\a_2}{\b}(\lambda_0)&=& \pm\frac{\sqrt{|\lambda_0|}}{(\lambda_0^2+\Delta_d)^{1/4}}+{\rm sign}(\lambda_0)\sqrt{\lambda_0^2 +\Delta_d}.\end{aligned}$$ These four solutions \[taking into account ${\rm sign}(\lambda_0)$\] can be considered as four parametric curves in the plane ${\mathbb A}=(\a_1/\b,\a_2/\b)$, which separate the plane into five regions (one inner region and four outer regions, see Fig. \[fig2r\]). In the inner region, the Jost determinant has two complex roots $k_{1,2}=\pm k_r+ik_i$ and, hence, these values of parameters $\a_1,\a_2$ correspond to one resonance ($n_r=1$). In the four outer regions, the Jost determinant has purely-imaginary roots, hence $n_r=0$. The curves in Fig. \[fig2r\] tend asymptotically to straight lines, which are defined as the limits for $\lambda_0\rightarrow 0$ and $\lambda_0\rightarrow\pm\infty$. 
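The tangency construction can be verified numerically: for parameters taken on the curves above, the two hyperbolas indeed share the point $(\lambda_0,\rho_0)$ with equal slopes (a sketch for the upper-sign branch, $\lambda_0<0$, with arbitrary sample values):

```python
import numpy as np

def tangency_params(lam0, b, Delta):
    """alpha1(lambda0), alpha2(lambda0), upper-sign branch (lambda0 < 0):
    parameters for which (lam+a1)(rho+a2) = b^2 is tangent to
    rho^2 - lam^2 = Delta at (lambda0, sqrt(lambda0^2+Delta))."""
    q = (lam0**2 + Delta)**0.25
    a1 = b * q / np.sqrt(-lam0) - lam0
    a2 = b * np.sqrt(-lam0) / q + np.sign(lam0) * np.sqrt(lam0**2 + Delta)
    return a1, a2

lam0, b, Delta = -0.5, 0.1, 1.0
a1, a2 = tangency_params(lam0, b, Delta)
rho0 = np.sqrt(lam0**2 + Delta)

# same point and same slope for both hyperbolas at (lambda0, rho0)
point_err = abs(b**2 / (lam0 + a1) - a2 - rho0)
slope_err = abs(-b**2 / (lam0 + a1)**2 - lam0 / rho0)
```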
As a result, one finds for all branches two horizontal asymptotes $\a_2/\b=\pm\sqrt{\Delta_d}$ and three slant asymptotes defined by $\a_2/\b=-\a_1/\b$ (for the curves in the second and fourth quadrants) and $\a_2/\b=-\a_1/\b\pm 2$ (for the curves in the first and third quadrants, respectively). Consider now the case where the hyperbolas cross at the point $\lambda_0=0$, $\rho_0=\sqrt{\Delta}$ \[see the thin dashed lines in Fig. \[fig1\](a)\]. After a small decrease of either $\a_1$ or $\a_2$, the number of positive roots, i.e., of bound states, increases by one unit. Hence, assuming $\lambda_0=0$ and $\rho_0=\sqrt{\Delta}$ in system , we get the curves $$\a_1(\a_2+\sqrt{\Delta})-\b^2=0, \label{bsz}$$ which define three domains in the plane of parameters $\mathbb A$, where Eqs.  have a different number of positive roots (see Fig. \[fig2b\]). One can directly check that the number $n_b$ of bound states may be calculated as a function of the parameters as $$n_b=1+\frac{1}{2}(I_1-1)I_2, \label{nb}$$ where the quantities \[ind12\] $$\begin{aligned} \label{ind1} I_1 &=&{\rm sign} \left(\b^2-\a_1\sqrt{\Delta}-\a_1\a_2\right), \\ \label{ind2} I_2 &=&{\rm sign}(\a_2+\sqrt{\Delta})\end{aligned}$$ may be considered as invariants. For $n_b=0$, one has $I_1=-1$ and $I_2=1$; for $n_b=1$, one has $I_1=1$ and $I_2=\pm 1$; for $n_b=2$, one has $I_1=I_2=-1$. Let us now summarize our findings on the number of bound states and resonances of the $2\times 2$ Cox potential, by combining Figs. \[fig2r\] and \[fig2b\] in Fig. \[fig2\], where both $n_b$ and $n_r$ are given for all the possible regions of plane $\mathbb A$. The border lines of these regions, as already discussed, correspond to the parametric curves defined by Eqs. , , and to the curves given by Eq. . From the asymptotic behavior of these curves, it is easy to see the global structure of the zones. For instance, for the case of two bound states, the hyperbolas in Fig. 
\[fig1\] must have four intersection points, which implies that no resonance is present. This is the reason why the boundary lines between the zones of bound and resonant states do not cross in the lower-half $\mathbb{A}$-plane. Moreover, one can see that the topological structure of these zones does not depend on the particular choice of the parameter $\Delta_d=\Delta/\b^2$. A change of this parameter only leads to a deformation of the zones (the distance between the horizontal asymptotes changes), but does not make any new intersection point or new boundary line appear. The case of $\b=0$, $\Delta_d=\infty$ corresponds to uncoupled channels. In this case there are no resonances; only bound or virtual states located in different channels may appear (see Sec. \[4\]A). Up to now, we have excluded the factorization energy from our analysis because Eqs.  are independent of $\kappa_{1,2}$. We will now give a geometrical analysis of conditions , which have to be imposed on parameters $\kappa_{1}$ and $\kappa_{2}$ to guarantee the regularity of the Cox potential $\forall r\ge0$. Let us notice that condition , $$(\kappa_1+\a_1)(\kappa_2+\a_2)-\b^2>0, \label{nonsp}$$ is nothing but Eq.  for $\lambda=\kappa_1$, $\rho=\kappa_2$, and a shifted value of parameter $\b^2$. Actually, since the left-hand side of Eq.  should be positive, there exists a positive number $C$ (depending on the set of parameters) such that $$(\kappa_1+\a_1)(\kappa_2+\a_2)-(\b^2+C)=0,\qquad C>0. \label{nonsp1}$$ This equation represents a hyperbola with the same asymptotes as hyperbola  but with a larger distance between its branches. We thus conclude that the permitted values of $\kappa_{1,2}$ are determined by the intersection points of hyperbolas  and  for arbitrary values of $C$, with the additional condition : $\kappa_1>-\alpha_1$. 
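The resulting $N=2$ regularity test is compact enough to be packaged as a small function (a sketch; parameter names follow the text, and $\kappa_2$ is fixed by the threshold condition):

```python
import numpy as np

def is_regular(alpha1, alpha2, beta, kappa1, Delta):
    """Necessary and sufficient N=2 regularity conditions:
    kappa1 > -alpha1 and kappa2 > beta^2/(kappa1 + alpha1) - alpha2,
    with kappa2 = sqrt(kappa1^2 + Delta) from the threshold condition."""
    if kappa1 <= -alpha1:
        return False
    kappa2 = np.sqrt(kappa1**2 + Delta)
    return bool(kappa2 > beta**2 / (kappa1 + alpha1) - alpha2)
```

For instance, the one-resonance parameter set quoted later in the text passes the test for $\kappa_1=1$, while a strongly negative $\alpha_1$ with a too-small $\kappa_1$ fails it.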
Figure \[figS\] helps to visualize these conditions: the allowed values of the factorization energy correspond to the values of $\kappa_{1,2}$ given by the upper-right intersections between the solid and the thin-dashed hyperbolas. In Fig. \[figS\], the solid and bold-dashed hyperbolas can also be considered as representing Eqs. ; the bound-state energies then correspond to their (0, 1 or 2) intersections in the first quadrant. The allowed values of $\kappa_{1,2}$ should thus be larger than the largest values of $\lambda$, $\rho$ corresponding to a bound state. The necessary and sufficient condition for a regular potential can thus be simply stated as: the factorization energy should be negative and lower than the lowest bound-state energy, if any. Inversion of the zeros of the Jost-matrix determinant ----------------------------------------------------- To solve a realistic two-channel scattering inverse problem, it is necessary to express the Cox potential in terms of physical data such as the threshold energy, bound-state energies, resonance energy and width, or scattering data. While the threshold energy explicitly appears in the expression of the Cox potential as parameter $\Delta$, the other data are directly related to the positions of the zeros of the Jost-matrix determinant, as seen above. Ideally, one would thus like to directly express the parameters $\a_{1}$, $\a_{2}$, $\b$, and $\cal E$, which define the Cox potential, in terms of the roots of Eq. . Certainly, there exist general formulas for the roots of the fourth-order algebraic equation , but they are very involved and cannot help much in realizing the above program. Therefore, we propose here an intermediate approach. Two of the roots of Eq.  happen to be simply related to the parameters $\a_1$ and $\a_2$. Once two roots are fixed, Eq.  reduces to a second-order algebraic equation for the two other roots, thus providing an implicit but rather simple mapping between the roots of Eq. 
and the set of parameters. Let us denote by $(k_1,p_1)$ and $(k_2,p_2)$ two zeros of $f(k,p)$. This imposes some restrictions on the parameters $\a_1$ and $\a_2$. They are no longer independent of the other parameters but should be found as functions of $k_{1,2}$, $p_{1,2}$, and $\b$ from the system of two equations obtained from Eq. , written for $k=k_1$, $p=p_1$ and for $k=k_2$, $p=p_2$. This system reads \[sys11a\] $$\begin{aligned} & (k_1+i\a_1)(p_1+i\a_2)+\b^2=0\,,\\ & (k_2+i\a_1)(p_2+i\a_2)+\b^2=0\,.\end{aligned}$$ After eliminating parameter $\a_2$ from system , one finds a second-order algebraic equation for $\a_1$, $$\a_1^2-i\a_1(k_1+k_2)-k_1k_2+\b^2\frac{\Delta_k}{\Delta_p}=0,$$ with $$\begin{aligned} \Delta_k &=& k_2-k_1,\\ \Delta_p &=& p_2-p_1,\end{aligned}$$ from which follow the two possible choices \[alp\] $$\begin{aligned} \label{alp1} \a_1 &=& \frac{1}{2}\left[i(k_1+k_2)\pm\sqrt{-\Delta_k^2-4\b^2\Delta_k/\Delta_p}\right],\\ \label{alp2} \a_2 &=& \frac{1}{2}\left[i(p_1+p_2)\mp\sqrt{-\Delta_p^2-4\b^2\Delta_p/\Delta_k}\right].\end{aligned}$$ The upper (resp., lower) sign in Eq.  corresponds to the upper (resp., lower) sign in Eq. . The values of $k_{1,2}$ and $p_{1,2}$ should be chosen so as to guarantee the reality of parameters $\a_{1,2}$. Now, two roots $k_{1}$ and $k_{2}$ of the fourth-order algebraic equation  are fixed. Therefore, the two remaining roots $k_{3}$ and $k_{4}$ are solutions of a second-order equation $Q_2(k)=0$, where the polynomial $Q_2(k)$ is the ratio of the polynomial appearing in Eq.  to ${P_2(k)=k^2-k(k_2+k_1)+k_2k_1}$, i.e., explicitly $$k^4+ia_1k^3+a_2k^2+ia_3k+a_4=P_2(k)Q_2(k)\,.$$ From here we find $$Q_2(k)=(k+i\a_1)^2+k(k_2+k_1)+(2i\a_1+k_2+k_1)(k_2+k_1)+\a_2^2-\Delta-k_1k_2$$ and, hence, \[k34\] k\_3=,\ k\_4=, where $D_k=\Delta_k^2+4\b^2\frac{\Delta_p}{\Delta_k}+4k_2k_1$. The sign before the first square root in Eqs.  should be chosen in accordance with the sign in Eqs.  for parameters $\a_{1,2}$. 
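The inversion formulas above are straightforward to implement; by construction, the returned $\a_{1,2}$ make both prescribed pairs zeros of the Jost determinant (a sketch; the sample zeros are purely imaginary values chosen for illustration):

```python
import numpy as np

def alphas_from_zeros(k1, k2, p1, p2, b, upper=True):
    """alpha_1, alpha_2 from two prescribed zeros (k1,p1), (k2,p2) of f(k,p);
    'upper' selects the sign of the square roots."""
    s = 1.0 if upper else -1.0
    dk, dp = k2 - k1, p2 - p1
    a1 = 0.5 * (1j*(k1 + k2) + s*np.sqrt(-dk**2 - 4*b**2*dk/dp + 0j))
    a2 = 0.5 * (1j*(p1 + p2) - s*np.sqrt(-dp**2 - 4*b**2*dp/dk + 0j))
    return a1, a2

# two prescribed purely imaginary zeros (bound/virtual states), Delta = 1
b = 0.1
k1, k2 = 0.1j, 1.5j
p1, p2 = 1j*np.sqrt(0.1**2 + 1), 1j*np.sqrt(1.5**2 + 1)
a1, a2 = alphas_from_zeros(k1, k2, p1, p2, b)

# residuals of (k + i a1)(p + i a2) + b^2 = 0 at both zeros
r1 = (k1 + 1j*a1)*(p1 + 1j*a2) + b**2
r2 = (k2 + 1j*a1)*(p2 + 1j*a2) + b**2
```

With purely imaginary zeros and a square-root argument that stays positive, the returned parameters are real, as required.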
To find $p_{3,4}$, we do not need to solve any equation. We simply notice that the equation $\mbox{det}\tilde{F}(k)=0$ is invariant under the transformation $k \leftrightarrow p$, $\a_1 \leftrightarrow \a_2$, $\Delta \leftrightarrow -\Delta$. This means that, being transformed according to these rules, Eqs.  give us the $p$ values: \[p34\] p\_3 = ,\ p\_4 = , where $D_p = \Delta_p^2+4\b^2\frac{\Delta_k}{\Delta_p}+4p_2p_1$. Let us now consider several important examples. According to the analysis of Subsec. \[subsec:zeros\], only two essentially different possibilities exist: a system described by the Cox potential may have either one resonance or no resonance. ### One resonance Assume Eqs.  have two complex roots and let us define their first-channel components as \[k12\] $$\begin{aligned} k_1 &=& k_r+ik_i, \\ k_2 &=& -k_r+ik_i, \end{aligned}$$ where $k_r$ and $k_i$ are real. Let us assume the real part $k_r$ to be positive to fix ideas. The imaginary part $k_i$, on the other hand, can be either positive or negative. Let us write the corresponding energies, $k_{1,2}^2$, as $E_r \pm i E_i$, where we also assume $E_i$ positive to fix ideas (which means that the upper sign corresponds to $k_1$ or $k_2$, depending on the sign of $k_i$). We would like to choose as parameters the threshold difference $\Delta$, as well as the real and imaginary parts of the resonance complex energy, $E_r, E_i$. As seen below, these can correspond to physical parameters of a visible resonance in some (but not all) cases. In terms of these parameters, $k_r$ and $k_i$ are expressed as \[krootsri\] $$\begin{aligned} \label{kroots} k_r&=& \frac{E_i}{\sqrt{2}}\left[\sqrt{E_r^2+E_i^2}-E_r\right]^{-1/2},\\ k_i&=& \pm\frac{1}{\sqrt{2}}\left[\sqrt{E_r^2+E_i^2}-E_r\right]^{1/2}. \label{kroots2}\end{aligned}$$ In the second channel, the roots corresponding to $k_{1,2}$ can be found from the threshold condition . They are given by $$\begin{aligned} p_1 &=& p_r+ip_i, \\ p_2 &=& -p_r+ip_i, \end{aligned}$$ with \[proots\] $$\begin{aligned} p_r&=& -\frac{1}{\sqrt{2}}\left[\sqrt{(\Delta-E_r)^2+E_i^2}-(\Delta-E_r)\right]^{1/2},\\ p_i&=& \mp\frac{E_i}{\sqrt{2}}\left[\sqrt{(\Delta-E_r)^2+E_i^2}-(\Delta-E_r)\right]^{-1/2}. \label{proots2}\end{aligned}$$ The upper (resp., lower) sign in Eq. 
corresponds to the upper (resp., lower) sign in Eq. , which means that, for a given zero, the signs of $k_i$ and $p_i$ are opposite. This can be understood from the first column of Fig. \[fig1\]: a resonance appears when the two hyperbolas are tangent to each other, which can only happen in the second and fourth quadrants, where $\lambda$ and $\rho$ have opposite signs. Moreover, Eqs.  and  show that, for a given zero, the signs of $k_r$ and $p_r$ are also opposite. This implies that, for the Cox potential, the complex resonance zeros (or scattering-matrix poles) always lie in opposite quadrants of the complex $k$ and $p$ planes, as illustrated for instance by the complex zeros in Fig. \[fig1\](c). This has important consequences for physical applications: for a resonance to be visible, one of the corresponding zeros has to lie close to the physical positive-energy region, i.e., close to the real positive $k$ axis and close to the region made of the real positive $p$ axis and of the positive imaginary $p$ interval $[0,i\sqrt{\Delta}]$. Consequently, the only possibility for a visible resonance with the Cox potential is that of a Feshbach resonance, only visible in the channel with the lowest threshold, with an energy lying below the second threshold. At higher resonance energies, the corresponding zero is either close to the $k$-plane physical region (and far from the $p$-plane one) or close to the $p$-plane physical region (and far from the $k$-plane one); it cannot be close to both physical regions at the same time, hence it cannot have a visible impact on the coupled scattering matrix. Here, we limit ourselves to the case of a visible resonance, which is the most interesting from the physical point of view. It corresponds to the lower signs in Eqs.  and , with a resonance energy $E_r$ such that $0<E_r<\Delta$, and a resonance width $\Gamma=2 E_i$ such that $E_i<E_r$. To get a potential without bound state, we choose the upper signs in Eqs. 
, which leads to \[a12\] $$\begin{aligned} \a_{1}&=&-k_i+ k_r\left(-1-\frac{\b^2}{k_rp_r}\right)^{1/2},\\ \a_{2}&=&-p_i- p_r\left(-1-\frac{\b^2}{k_rp_r}\right)^{1/2}.\end{aligned}$$ From here, we see that, for non-zero values of the parameters $k_r$ and $p_r$ (which have opposite signs), the coupling parameter $\b$ cannot be infinitesimal: because $\a_{1}$ and $\a_{2}$ have to be real, $\b$ is restricted to satisfy the inequality $\b \ge \sqrt{-k_r p_r}$. Now, using Eqs.  and  it is easy to find the two remaining roots: $$k_{3,4}=-i(\a_1+k_i)\pm i\sqrt{k_r^2+\a_2^2-\Delta-2\a_1k_i-2k_i^2}\,, \label{k11}$$ $$p_{3,4}=-i(\a_2+p_i)\pm i\sqrt{p_r^2+\a_1^2+\Delta-2\a_2p_i-2p_i^2}\,. \label{k21}$$ To get a potential with one bound state at energy $-\lambda_b^2$, we choose the lower signs in Eqs. . We then get for $k_3(\beta)$ an expression similar to Eq. , from which the value of $\beta$ can be found by solving the biquadratic equation $$k_3(\beta)=i \lambda_b.$$ Let us now choose explicit parameters. First, without losing generality, we may put $\Delta=1$ (see Subsec. \[subsec:zeros\]). To get a visible resonance, we put $E_r=0.4$, $E_i=0.01$ (which corresponds to a resonance width $\Gamma=0.02$), and $\b=0.1$. Using Eqs.  and , one finds $\a_1=0.76938$ and $\a_2=-0.766853$. The factorization energy, $\cal E$, is not constrained in this case: it just has to be negative. The Cox potential with one resonance and no bound state for different values of $\kappa_1$ is shown in the first row of Fig. \[figExR\]. The diagonal elements of the potentials, $V_{11}$ and $V_{22}+\Delta$, are plotted with solid lines, while $V_{12}$ is plotted with dashed lines. Parameter $\kappa_1$ is responsible for changing the range of the potential, as shown by Eqs. , and hence the scattering length (see discussion below). The second row of this figure shows the corresponding partial cross sections, where the resonance behavior is clearly seen, as well as the evolution of the low-energy cross section, which is related to the scattering length. The last row of Fig. \[figExR\] shows the corresponding phase shifts for the open channel, where a typical Breit-Wigner behavior (see e.g. Ref. 
[@taylor:72]) is seen for the resonance, as well as the evolution of the zero-energy phase-shift slope, which is also related to the scattering length. ### Two bound states Let us now construct a Cox potential with two bound states, and hence no resonance (see Fig. \[fig2\]). We choose $k_1=0.1i$ and $k_2=1.5i$ for these bound states and, as in the previous example, we put $\Delta=1$ and $\b=0.1$. We thus have $p_1=\sqrt{1.01} i$ and $p_2=\sqrt{3.25} i$, which defines $\Delta_p$ in Eqs. . Choosing the upper signs in these equations, we find $\a_1=-0.112649$ and $\a_2=-1.79557$, while for the lower signs, we get $\a_1=-1.48735$ and $\a_2=-1.0122$. The corresponding Cox potentials are shown in Fig. \[figExB2\]. ### One bound state If only one bound state $E_b=-\lambda_b^2$ is known, we may consider the position of the second zero as a free parameter. Besides, one can solve system  at $\lambda=\lambda_b$ with respect to $\a_2$, which leads to $$\a_2=\frac{\b^2}{\lambda_b+\a_1}-\sqrt{\lambda_b^2+\Delta}. \label{c1}$$ Figure \[figex4\] gives a graphical representation of Eq.  in the plane of parameters $(\a_1,\a_2)$: the dashed curves correspond to all the Cox potentials that have a bound state at the same energy $E_b=-4$. Figure \[figex4\] also shows that these iso-energy curves  can be obtained by shifting the curves separating regions of plane $\mathbb{A}$ with different numbers of bound states, defined by Eq.  (solid lines). Let us finally summarize these possible inverse problems in Table \[tab2\]. The free parameters allow either for isospectral deformations of the potential or for fits of additional experimental data, e.g., scattering lengths. This possibility will be used in the next section on atom-atom systems. 
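Both numerical examples can be cross-checked against the quartic for the Jost-determinant zeros (a sketch; the parameter values are those quoted above, rounded to the printed digits, hence the loose tolerances):

```python
import numpy as np

def quartic_roots_k(al1, al2, b, Delta):
    """Roots of k^4 + i a1 k^3 + a2 k^2 + i a3 k + a4 = 0."""
    return np.roots([1.0,
                     1j*2*al1,
                     al2**2 - al1**2 - Delta,
                     1j*2*(al1*(al2**2 - Delta) - al2*b**2),
                     -al1**2*(al2**2 - Delta) + 2*al2*b**2*al1 - b**4])

# one-resonance example: two energies should come out near 0.4 -+ 0.01i
energies = quartic_roots_k(0.76938, -0.766853, 0.1, 1.0)**2
res_plus = np.min(np.abs(energies - (0.4 + 0.01j)))
res_minus = np.min(np.abs(energies - (0.4 - 0.01j)))

# two-bound-state example: roots should include k = 0.1i and 1.5i
roots = quartic_roots_k(-0.112649, -1.79557, 0.1, 1.0)
d1 = np.min(np.abs(roots - 0.1j))
d2 = np.min(np.abs(roots - 1.5j))
```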
---------------------------------------------  ----------------------  ------------------------------  --------------------------------
Experimental data                              Fixed parameters        Free parameters                 Restrictions
$\Delta\,, E_r\,, E_i$                         $\a_1\,, \a_2\,$        $\kappa_1, \b$                  $\b\geq \sqrt{-k_rp_r}$
$\Delta\,, E_b=-\lambda_b^2\,, E_r\,, E_i$     $\a_1\,, \a_2\,, \b$    $\kappa_1$                      $\kappa_1>\lambda_b$
$\Delta\,, E_{1,2}=-\lambda_{1,2}^2$           $\a_1\,, \a_2\,$        $\kappa_1, \b$                  $\kappa_1>\lambda_2>\lambda_1$
$\Delta\,, E_{b}=-\lambda_b^2$                 $\a_2\,$                $\kappa_1\,, \b\,, \alpha_1$    $\kappa_1>\lambda_b$
---------------------------------------------  ----------------------  ------------------------------  --------------------------------

: \[tab2\] Possible mappings between some experimental data and the Cox potential parameters. Low-energy scattering matrix ---------------------------- In this section, we analyze the $S$-matrix given by Eq.  for energies close to the lowest threshold, the energy of which we have chosen equal to zero. From Eqs.  and , one finds the Cox-potential $S$-matrix, the diagonal elements of which read $$\label{sm} S_{11}(k,p)=\frac{f(-k,p)}{f(k,p)}\,, \qquad S_{22}(k,p)=\frac{f(k,-p)}{f(k,p)}\,.$$ When the second channel is closed, i.e., for energies $0<E<\Delta$, the physical scattering matrix is just a function $S(k,p)$, which coincides with the first diagonal element of $S$-matrix : $$\label{sf} S(k,p)=\frac{f(-k,p)}{f(k,p)}\,.$$ From here one finds the scattering amplitude ${A(k)=[S(k)-1]/2ik}$ and the scattering length $a=-A(0)$, which reads $$\label{sl} a=\frac{1}{\kappa_1} +\frac{\sqrt{\Delta}+\a_2}{\b^2-\a_1\left(\sqrt{\Delta}+\a_2\right)}\,.$$ From the argument of $S(k)=e^{2i\d (k)}$, one deduces the phase shift $\d (k)$, which reads $$\label{ps} \d(k)= \arctan\frac{k\left(\sqrt{\Delta-k^2}+\a_2\right)}{\a_1\left(\sqrt{\Delta-k^2}+\a_2\right)-\b^2} -\arctan\frac{k}{\kappa_1}\,.$$ One can check on Eqs.  and  that the scattering length is, up to a sign, the slope of the phase shift at zero energy \[$\d(k)\approx -ak$ for $k\to0$\], as it should be. Note that Eq.  is equivalent to $$k\cot\d(k)=\frac{a_{\b}(k)\,\kappa_1+k^2}{\kappa_1-a_{\b}(k)}\,,$$ where $a_{\b}(k)=\a_1-\b^2/\left(\sqrt{\Delta-k^2}+\a_2\right)$. In the uncoupled case ($\b=0$), this expression reduces to the phase shifts of the simplest Bargmann potential (see e.g. Ref. 
[@newton:82]), which depends on the parameters $\kappa_1$ and $a_B \equiv a_{\beta=0}=\alpha_1$. Therefore, the Cox potential may be considered as a coupled-channel deformation of the Bargmann potential, resulting in an energy dependence of one of its parameters, $a_B$. The scattering length is an important physical quantity. In many-body theories, for instance, it is often used to describe interactions in the $s$-wave regime. Let us thus study in detail the scattering length of the Cox potential, as given by Eq. . When considered as a function of $\a_{1,2}$, it has a singularity located at the boundary of the single-bound-state region given by Eq. . Such infinite values of the scattering length occur when a zero of the Jost determinant, which corresponds to an $S$-matrix pole, crosses the first threshold: a bound state is then transformed into a virtual state, in agreement with the general theory [@newton:82]. We can analyze the sign of the scattering length for different numbers of bound states $n_b$ given by Eq. , by considering the indices $I_{1,2}$ given by Eqs. . First, we remark that $I_1=-1$ for $n_b=0$ and $2$, while $I_1=1$ for $n_b=1$. Then, since $I_2=-1$ for $n_b=2$, the scattering length is positive when the Cox potential has two bound states. Next, for $n_b=0$ we have $I_2=1$; the two contributions to the scattering length then have different signs. As a result, at fixed $\a_{1}$, $\a_{2}$, $\b$, and $\Delta$, one can get both positive and negative scattering lengths by varying only $\kappa_1$. Similarly, when $n_b=1$ and $I_2=1$, the scattering length may only be positive. In contrast, for $I_2=-1$ it may be both positive and negative, but for fixed values of $\a_1$, $\a_2$, $\b$, and $\Delta$, it becomes negative for large enough $\kappa_1$. 
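As a consistency check (a sketch assuming the Jost-determinant form $f(k,p)=[(k+i\a_1)(p+i\a_2)+\b^2]/[(k+i\kappa_1)(p+i\kappa_2)]$ quoted earlier), the closed-form scattering length can be compared with the numerical slope of the phase shift extracted from $S(k)=f(-k,p)/f(k,p)$ at small $k$:

```python
import numpy as np

def f_det(k, p, a1, a2, b, kap1, kap2):
    """Jost determinant f(k,p) = [(k+i a1)(p+i a2)+b^2]/[(k+i kap1)(p+i kap2)]."""
    return ((k + 1j*a1)*(p + 1j*a2) + b**2) / ((k + 1j*kap1)*(p + 1j*kap2))

a1, a2, b, Delta, kap1 = 0.76938, -0.766853, 0.1, 1.0, 1.0
kap2 = np.sqrt(kap1**2 + Delta)   # threshold condition

# closed-form scattering length
sd = np.sqrt(Delta)
a_closed = 1/kap1 + (sd + a2) / (b**2 - a1*(sd + a2))

# numerical slope: delta(k) ~ -a k for k -> 0, with the second channel closed
k = 1e-5
p = 1j*np.sqrt(Delta - k**2)
S = f_det(-k, p, a1, a2, b, kap1, kap2) / f_det(k, p, a1, a2, b, kap1, kap2)
a_num = -np.angle(S) / (2*k)
```

The two values agree up to $O(k^2)$ corrections, confirming the low-energy expansion.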
Two-channel model of alkali-metal atom-atom collisions in the presence of a magnetic field \[4\] ================================================================================================ Magnetic Feshbach resonance --------------------------- Ultra-cold collisions of alkali-metal atoms play a key role in applications of laser cooling such as Bose-Einstein condensation and the BEC-BCS crossover. The analysis of such experiments is commonly based on the coupled-channel method [@stoof:88], i.e., on solving numerically a set of coupled differential equations. In this paper, we reduce the low-energy scattering problem of two alkali-metal atoms to an effective two-channel problem with a single Feshbach resonance, as in Ref. [@nygaard:06]. The model consists of a single closed channel $Q$ containing a bound state, which interacts with the scattering continuum in the open channel $P$, so that the whole scattering problem is reduced to the two-channel scattering described by the $2\times2$ Hamiltonian $$\label{HPQ} H=-\frac{d^2}{dr^2}+ \left( \begin{array}{cc} V_{P}(r) & V_{int}(r)\\ V_{int}(r)& V_{Q}(r) \end{array} \right),$$ where $V_{P}$ is the uncoupled open-channel potential, $V_{Q}$ is the uncoupled closed-channel potential, and the potential $V_{int}$ describes the coupling between the open and closed channels $P$ and $Q$. These channels describe atoms placed in a magnetic field and occupying different energy sublevels, which can be shifted with respect to each other by changing the magnetic field (Zeeman effect). For each value of the magnetic field, the zero of energy is chosen as the energy of the dissociated atoms in channel $P$. Even in the simplest case of a homogeneous magnetic field, the potential-energy matrix of Hamiltonian  depends on the magnetic field. 
We will assume that the external field changes slowly enough so that we can take advantage of the adiabatic approximation, assuming that the stationary Schrödinger equation may be applied to describe the scattering process and that the magnetic field enters the Hamiltonian as a parameter only. Moreover, the known observation that, when the scattering length is much larger than the range of the interaction, the general behavior of the system is nearly independent of the exact form of the potential [@sakurai:94], suggests using the Cox potential with a large scattering length to describe the interatomic scattering. We thus replace the potential matrix in Eq.  by the Cox potential. In this case, the parameters of the Cox potential should carry a dependence on the magnetic field. Below, we show that, to get a good agreement with available experimental data, it is sufficient to impose a linear field dependence on the threshold difference $\Delta$ only, keeping all other parameters field independent. Thus, by inverting known experimental scattering data, one can find all the parameters defining the Cox potential, obtaining in this way a simple analytical model of the atom-atom scattering process in the presence of a magnetic field. The position of the highest bound (or virtual) state is crucial in describing the resonance phenomena of interatomic collisions. In an $s$-wave single-channel system, the scattering process becomes resonant at low energy when a bound state or virtual state is located near the threshold, a phenomenon known as “potential resonance”. In a multichannel system, the incoming channel (which is always open) may be coupled during the collision process to other open or closed channels, corresponding to different spin configurations.
When a bound state in a closed channel lies near the collision energy continuum, a Feshbach resonance [@feshbach:58; @feshbach:62] may occur, giving rise to scattering properties that are tunable by an external magnetic field. In Ref. [@marcelis:04], some interesting examples of the interplay between a potential resonance and a Feshbach resonance are considered. Below, we adjust the analytically-solvable model based on the Cox potential for describing the same phenomena. Typically, the coupling between the closed and open channels is rather small; we thus consider first an uncoupled limit of the Cox potential, i.e., $V_{int}(r)\to0$, which corresponds to $\b\to0$. In this case, the Jost determinant  has the following zeros: $$k_1=-i\a_1 \qquad \mbox{and} \qquad p_2=-i\a_2.$$ The energies of these unperturbed (i.e., with zero coupling) states (called bare molecular states in Ref. [@marcelis:04]) are $$\label{bme} E_1=-\a_1^2 \qquad \mbox{and} \qquad E_2=-\a_2^2+\Delta.$$ It should be noted that in this case $E_1$ belongs to channel $P$ while $E_2$ belongs to channel $Q$. Hence, $\a_1$ is associated with the potential resonance, while $\a_2$ is associated with the Feshbach resonance. Due to the Zeeman effect, the difference between the thresholds is a linear function of the magnetic field, $$\Delta(B)=\Delta_0+\mu_{mag}(B-B_0), \label{DeltaB}$$ where $B_0$ can be arbitrarily chosen in the domain of interest and $\Delta_0$ is the value of the threshold corresponding to $B_0$. If $\a_{1,2}<0$ and the coupling is absent, then the two bound states cross at $\Delta=\a_2^2-\a_1^2$. Note that $E_2$ crosses the threshold at $\Delta=\a_2^2$. When there is a coupling between channels, the levels $E_1$ and $E_2$ avoid crossing (see below). Let us consider the behavior of the scattering length in the presence of the Feshbach resonance. It is described by the following formula [@moerdijk:95]: $$\label{slt} a=a_{bg}\left(1-\frac{\Gamma_B}{B-B_0}\right).$$ Here, $B_0$ is the position of the magnetic Feshbach resonance and $\Gamma_B$ is its width (in terms of magnetic field). In particular, Eq.
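A minimal numerical sketch of the resonance formula above, using the $^{85}$Rb values quoted later in the text ($a_{bg}=-443\,a_0$, $B_0=15.5041$ mT, $\Gamma_B=1.071$ mT): the scattering length diverges at $B_0$, changes sign across the resonance, and vanishes at $B=B_0+\Gamma_B$.

```python
def feshbach_a(B, a_bg=-443.0, B0=15.5041, Gamma_B=1.071):
    """Scattering length near a magnetic Feshbach resonance:
    a(B) = a_bg * (1 - Gamma_B / (B - B0)).
    85Rb values from the text: a_bg in Bohr radii, B in mT."""
    return a_bg * (1.0 - Gamma_B / (B - B0))

print(feshbach_a(100.0))                    # far from resonance: close to a_bg
print(feshbach_a(15.5041 + 1.071))          # zero crossing at B0 + Gamma_B
print(feshbach_a(15.6), feshbach_a(15.4))   # opposite signs across B0
```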
 shows that such an infinite value of the scattering length occurs for the Cox potential at a threshold $\Delta_0$ defined by: $$\sqrt{\Delta_0}=\frac{\b^2-\a_1\a_2}{\a_1}\,.$$ Let us now assume for the Cox potential a threshold difference given by Eq.  with such a value of $\Delta_0$. Expanding Eq.  near this resonance, it is easy to get Eq.  and a simple expression for the width of the resonance for the Cox potential: $$\begin{aligned} \label{CoxSLFr} a &=& \frac{\a_1-\kappa_1}{\a_1\kappa_1}\\ & \times & \left(1+\frac{2\left[1+{\rm o}\left(\Delta-\Delta_0\right)\right]\kappa_1\sqrt{\mathstrut \Delta_0}\left(\sqrt{\mathstrut \Delta_0}+\a_2\right)} {\left(\a_1-\kappa_1\right)\left(\Delta_0-\Delta \right)}\right)\,,\nonumber\end{aligned}$$ $$\Gamma_B=\frac{2\kappa_1\sqrt{\mathstrut \Delta_0}\left(\sqrt{\mathstrut \Delta_0}+\a_2\right)}{\left(\a_1-\kappa_1\right)\mu_{mag}}\,.$$ As shown in Ref. [@marcelis:04], the background scattering length $a_{bg}$ is due to the open-channel potential. When there is a bound state or virtual state close to threshold, it can be further decomposed as a sum of two contributions: a standard potential part, which depends on the potential range, and a potential-resonance part, which depends on the bound/virtual-state energy. This decomposition clearly appears in our model: the background scattering length corresponds to a large magnetic field $B$ in Eq.  or to a large $\Delta$ in Eq. , namely, $a_{bg}=\lim\limits_{\Delta\rightarrow\infty}a$, which yields: $$\label{abg} a_{bg}=\frac{1}{\kappa_1}-\frac{1}{\a_1}\,.$$ The same result comes from Eq. . It should be noted that a large threshold difference $\Delta\rightarrow\infty$ effectively corresponds to a small coupling $\b\rightarrow 0$. From here we find another relation, $a_{bg}=\lim\limits_{\b\rightarrow 0}a$, which leads to the same background scattering length. In this formula, the first term is proportional to $1/\kappa_1$, the parameter which defines the range of the open-channel potential \[see Eqs. \]; it may thus be considered as the standard potential part of the background scattering length.
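As a consistency check, the expansion  can be evaluated numerically with the Cs-like parameters used below ($\b=0.05$, $\a_1=-0.103$, $\a_2=-0.5$, $\kappa_1=1$): the $\Delta\to\infty$ limit reproduces $a_{bg}$, and with $\Delta(B)=0.35-B$ the divergence lands at $B_0\approx0.124$, the resonance position quoted below.

```python
# Cs-like Cox-potential parameters from the text (arbitrary units)
beta, a1, a2, k1 = 0.05, -0.103, -0.5, 1.0

# threshold value at which the scattering length diverges
sqrtD0 = (beta**2 - a1 * a2) / a1
D0 = sqrtD0**2

def a_cox_expanded(Delta):
    """Leading-order expansion of the Cox scattering length near the resonance,
    dropping the o(Delta - Delta_0) correction."""
    return (a1 - k1) / (a1 * k1) * (
        1.0 + 2.0 * k1 * sqrtD0 * (sqrtD0 + a2) / ((a1 - k1) * (D0 - Delta)))

a_bg = 1.0 / k1 - 1.0 / a1      # background limit
print(a_bg, a_cox_expanded(1e9))  # the Delta -> infinity limit recovers a_bg
print(0.35 - D0)                  # with Delta(B) = 0.35 - B: resonance near B = 0.124
```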
The second term is associated with the $P$-channel bound (or virtual) state in the uncoupled limit. Hence, it may be interpreted as the potential-resonance part of the background scattering length. Let us further consider two different possibilities giving rise to a large (either positive or negative) background scattering length. Interplay between a bound state and the Feshbach resonance ---------------------------------------------------------- The first possibility occurs when the highest bound state is located near the threshold, i.e., when $\a_1\lesssim 0$. In Fig. \[figE\], we show energies as functions of the magnetic field when channel $P$ has a bound state just below the threshold, for \[parCs\] $$\begin{aligned} \b & = & 0.05, \\ \a_1 & = & -\lambda_b=-0.103, \\ \a_2 & = & -0.5, \\ \kappa_1 & = & 1. \label{kappa1}\end{aligned}$$ We are using arbitrary units and choose $\Delta(B)=0.35-B$ in Eq. . The bare bound states ($\beta=0$) of the $P$ and $Q$ channels are indicated by the dashed horizontal and slanted lines respectively. These bare states are Hamiltonian eigenstates in the uncoupled $P$ and $Q$ subspaces. The bare $Q$-channel bound state crosses the $P$-channel threshold (solid horizontal line) at $B=0.1$. The dressed states are represented by solid lines and display an avoided-crossing behavior [@marcelis:04]. For low fields, the model has one bound state and one Feshbach resonance, the energies of which are close to the bare-state energies. The energy of the Feshbach resonance becomes negative above $B=0.112$, when the imaginary part of the zero in the complex $k_1$ plane becomes larger than its real part. At $B=0.12$, these complex zeros collapse and transform into two virtual states (purely imaginary zeros), which corresponds to the discontinuous slope in Fig. \[figE\]. With increasing magnetic field, one of these virtual states (represented in Fig. \[figE\]) gets closer to threshold, while the other one (not represented in Fig. 
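The bare-state picture described above can be checked directly: with the parameters of Eq.  and $\Delta(B)=0.35-B$, the bare $Q$-channel energy is linear in $B$ and crosses the $P$ threshold at $B=0.1$, as stated in the text (a small sketch, in the same arbitrary units):

```python
# bare (uncoupled) energies for the Cs-like parameters of Eq. (parCs)
a1, a2 = -0.103, -0.5

def bare_energies(B):
    Delta = 0.35 - B           # threshold difference used in the text
    E1 = -a1**2                # P-channel bound state: field independent
    E2 = -a2**2 + Delta        # Q-channel bound state: linear in B
    return E1, E2

B_threshold = 0.35 - a2**2             # bare Q state crosses the P threshold (E = 0)
B_crossing = 0.35 - (a2**2 - a1**2)    # bare levels cross where Delta = a2^2 - a1^2
print(B_threshold, B_crossing)
```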
\[figE\] as it does not affect the low-energy scattering properties) goes away. At $B_0=0.124$, the virtual state crosses the threshold and becomes a bound state; the scattering length thus goes through infinite values at that field: this is the magnetic-Feshbach-resonance phenomenon itself. Above $B_0$, the model has two bound states, the energies of which tend to the bare-state energies when the field increases. Following Ref. [@marcelis:04], we stress that, although the behavior of the dressed states shows some resemblance to the two-level Landau-Zener description, that description does not include the threshold effects shown in Fig. \[figE\] and, hence, cannot be used to properly describe the interplay between a potential resonance and a Feshbach resonance. With respect to Ref. [@marcelis:04], our model displays a slightly more sophisticated behavior for the state energies (compare our Fig. \[figE\] with their Fig. 4). A more significant novelty of our description is the direct knowledge of the coupled-channel potential corresponding to these energies. This potential is shown in Fig. \[figptCs\] for $B=0.1$. The form factor of the potential changes only slowly with the magnetic field, whose variation mainly affects $\Delta$. The value of $\kappa_1$ chosen in Eq.  is arbitrary. However, the necessary and sufficient condition for a Cox potential without singularity then imposes that the bound-state energies of the model should be larger than $-1$. Figure \[figE\] shows that this condition will be satisfied for a limited range of magnetic field only. For higher fields, a larger $\kappa_1$ should be chosen. The phase shifts of the same Cox potential, as well as a graphical representation of Eqs. , are shown in Fig. \[figSH\] for different values of $B$. The first and the last columns correspond to a large positive background scattering length ($a_{bg}\sim1/\lambda_b\approx 10$), due to a bound state close to the threshold.
Physically, this occurs for the $^{133}{\rm Cs}$ atom-atom interaction [@leo:00], for instance. Figure \[figSH\](b) illustrates the case where the scattering length is close to zero. The calculation or measurement of the zero of the scattering length plays an important role in determining the resonance width [@ohara:02]. The phase-shift behavior for the virtual state and bound state close to threshold is shown in Figs. \[figSH\](c) and \[figSH\](d), respectively. In this case, the scattering length is very large and its sign changes when the energy of the zero of the Jost-matrix determinant crosses the threshold. Recalling that the intersection points in the graphical representation of Eqs. , shown in the second row of Fig. \[figSH\], give the positions of bound and virtual states, one may establish a correspondence between the second row of Fig. \[figSH\] and the motion of the corresponding zeros in the complex plane described above. Interplay between a virtual state and the Feshbach resonance ------------------------------------------------------------ Another interesting possibility occurs when there is a virtual state close to the threshold, i.e., when $\a_1\gtrsim 0$. This is the case of the $^{85}{\rm Rb}$ atom-atom interaction, for example. We will use rubidium scattering data [@arimondo:77; @marcelis:04] in this example, and work with units $\hbar=2\mu=1$, where $\mu$ is the reduced mass of the two atoms. The length unit is chosen as the Bohr radius $a_0$; energies are thus expressed in units of $a_0^{-2}$. According to Ref. [@marcelis:04], the bare virtual state is located at $\lambda_v=-1.78\cdot10^{-3}a_0^{-1}$, but this value is associated with the model they used in their calculations. We simply take $\lambda_v\sim -10^{-3}a_0^{-1}$ and set Eq.  as a constraint between $\alpha_1=-\lambda_v$ and $\kappa_1$. In order to fit the scattering-length behavior  with $a_{bg}=-443\,a_0$, $B_0=15.5041$ mT and $\Gamma_B=1.071$ mT, we use Eq. .
The value of $\b$ defines, in particular, the position of the Feshbach resonance, i.e., the magnetic field $B_0$ for which the bound state crosses the threshold. According to Eq. , one has $$\b=\sqrt{\a_1\left(\a_2+\sqrt{\Delta_0}\right)},$$ where $\Delta_0$ is the value of the threshold corresponding to $B_0$. The value of $\a_2$, defining the width of the Feshbach resonance $\Gamma_B$, should be found from the condition $a(B_0+\Gamma_B)=0$. Then, according to Eq. , we find $$\a_2=\frac{\Gamma_B\left(\a_1-\kappa_1\right)\mu_{mag}}{2\kappa_1\sqrt{\mathstrut \Delta_0}}-\sqrt{\mathstrut \Delta_0}\,,$$ where $\Delta_0=2471.386$ MHz and $\mu_{mag}=-36.4$ MHz/mT [@marcelis:04]. To get that value of $\Delta_0$, we have used the known value of the threshold at zero magnetic field [@arimondo:77] and assumed that Eq.  is valid down to that field. From Eq. , we may fix $\kappa_1=\a_1/(1+a_{bg}\a_1)$ at $a_{bg}=-443\,a_0$ and find the values of all parameters defining the potential at the given position of the Feshbach resonance and with the given value of the background scattering length: $$\begin{aligned} \label{parRb} \b & = & 0.0202366\,a_0^{-1},\\ \a_1 & = & -\lambda_v=2.2\cdot10^{-3}\,a_0^{-1},\\ \a_2 & = & -0.239343\,a_0^{-1},\\ \kappa_1 & = & 0.0866\,a_0^{-1}, \label{kappa1Rb}\\ \kappa_2 & = & \dots\,a_0^{-1}.\end{aligned}$$ In Fig. \[figSl\], we show that, with these parameters, the Cox-potential scattering length  reproduces the Feshbach-resonance scattering length  with good precision. The value $\a_1=2.2\cdot10^{-3}\,a_0^{-1}$ was chosen to get a smooth potential $V_P$ without a repulsive core. This potential is shown in Fig. \[RB-Cox\] and, once again, has a form factor rather independent of the field, except for the threshold. In Fig. \[figEV\], we show the corresponding energies as functions of the magnetic field. The bare bound state of channel $Q$ is represented by the slanted dashed line. The bare virtual state of channel $P$, which is located at $\lambda_v=-2.2\cdot10^{-3} \,a_0^{-1}$, is not shown in Fig. \[figEV\]. The dressed states are indicated by solid lines.
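The chain of parameter determinations above is easy to reproduce numerically; for instance, the relation $\kappa_1=\a_1/(1+a_{bg}\a_1)$, which follows from inverting $a_{bg}=1/\kappa_1-1/\a_1$, recovers the quoted $\kappa_1\approx0.0866\,a_0^{-1}$ from $a_{bg}=-443\,a_0$ and $\a_1=2.2\cdot10^{-3}\,a_0^{-1}$:

```python
# 85Rb: recover kappa_1 from the background scattering length via
# a_bg = 1/kappa_1 - 1/alpha_1  (lengths in Bohr radii a_0)
a_bg = -443.0
alpha_1 = 2.2e-3                          # bare virtual-state parameter chosen in the text
kappa_1 = alpha_1 / (1.0 + a_bg * alpha_1)
print(kappa_1)                            # ~0.0866 a_0^{-1}, the quoted value
print(1.0 / kappa_1 - 1.0 / alpha_1)      # plugging back reproduces a_bg = -443 a_0
```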
When $B<B_0=15.5041$ mT, there exist both a virtual state and a Feshbach resonance, the energies of which tend to the bare-state energies for small $B$. The virtual state becomes a bound state at $B=B_0$ (see inset). With increasing $B$, the real part of the resonance energy decreases and at $B=16.657$ mT it crosses the threshold. Finally, at $B=16.9$ mT, the two resonance poles collapse and produce two virtual states, one of which stabilizes at $\lambda_v=-2.2\cdot10^{-3}\,a_0^{-1}$ (the other one has a much larger negative energy and is not represented in Fig. \[figEV\], as it does not affect the low-energy scattering properties). The behavior of the curves in Fig. \[figEV\] is very similar to that of Fig. \[figE\], in particular regarding the transformation of the Feshbach resonance into a virtual state. The only difference between the present case (avoided crossing between a virtual state and a Feshbach resonance) and the previous case (avoided crossing between a bound state and a Feshbach resonance) is that here a virtual state transforms into a bound state before the crossing, while there a virtual state transforms into a bound state after the crossing. Another interesting comparison is between our Fig. \[figEV\] and Fig. 5 of Ref. [@marcelis:04]; it would be instructive to perform a detailed comparison of the two models to explain the differences between these two figures. As for the interplay with a bound state, Fig. \[figEV\] also shows some limit on the range of magnetic field over which our model can be used: since $\kappa_1$ is fixed in Eq.  and the bound-state energy should be larger than $-\kappa_1^2 \approx -0.0075\,a_0^{-2}$ (otherwise the potential becomes singular for some value of $r$), the field should be lower than 24.5 mT. The behavior of the phase shifts in the region with the resonant and virtual states is shown in the first row of Fig. \[figSH1\]. A similar discussion to that of Fig.
\[figSH\] can be made here, except that here the large negative background scattering length results in a large positive slope for the phase shift at the origin. Exactly at $B_0=15.5041$ mT, when the bound state transforms into a virtual state, the phase shift starts from $\pi/2$. The second row of Fig. \[figSH1\] shows the corresponding behavior of the bound- and virtual-state zeros on the wave-number imaginary axes, confirming the above analysis. \[sec:conclusion\] Conclusion ============================= In this work, we have derived the exactly-solvable $N$-channel Cox potential from a supersymmetric transformation of the vanishing potential, and we have established different parameterizations of this potential, as well as a necessary and sufficient condition for its regularity. In the $N=2$ case, a full analysis of the corresponding Jost matrix has been carried out. The structure of the zeros of the Jost determinant has been presented geometrically and a method for controlling the position of these zeros has been proposed. This has led to several examples of Cox potentials with different numbers of bound states and resonances, solving schematic coupled-channel inverse problems. With ultracold gases in mind, we have also studied the low-energy $S$-matrix and the scattering length of the Cox potential. Using the independence of scattering properties from interaction details in the large-scattering-length regime, a model of alkali-metal atom-atom scattering has been constructed. This provides interesting exactly-solvable schematic models for the interplay of a magnetically-induced Feshbach resonance with a bound state or a virtual state close to threshold. We consider the development of supersymmetric transformations as a very promising tool for the multi-channel inverse scattering problem and for the construction of more advanced exactly-solvable coupled-channel models.
In particular, iterations or chains of transformations might lead to more complicated Jost functions, with an arbitrary number of bound states and resonances, hopefully still with a tractable connection between potential parameters and physical observables. As far as physical applications are concerned, atom-atom interactions are both very topical today, due to the active research field of ultracold gases, and rather simple from the point of view of supersymmetric quantum mechanics, as only $s$-waves have to be considered and as the interaction is short ranged (no Coulomb term). We plan to apply the present model to other systems presenting these simple features, namely coupled $s$-wave baryon-baryon interactions, with at least one neutral baryon. In the longer term, we hope to generalize our method to higher partial waves and to Coulomb interactions. This should allow us to construct useful models in the context of low-energy nuclear reactions, the field which first motivated the work of Feshbach [@feshbach:58; @feshbach:62] on coupled-channel resonances, leading to possible applications in nuclear astrophysics and exotic-nuclei low-energy reactions.
--- author: - | A dissertation submitted\ for the degree of\ Doctor of Philosophy in Physics\ \ by\ \ Shi Pu\ \ Supervisor: Qun Wang\ \ 2011 bibliography: - 'bib/main.bib' title: | [UNIVERSITY OF SCIENCE AND TECHNOLOGY OF CHINA]{}\ [Hefei, CHINA]{}\ **Relativistic fluid dynamics in heavy ion collisions** --- Copyright by Shi Pu 2011 [All Rights Reserved]{} **Dedicated to my dear family**
--- author: - 'R. Blomme' - 'G.C. Van de Steene' - 'R.K. Prinja' - 'M.C. Runacres' - 'J.S. Clark' date: 'Received date; accepted date' title: 'Radio and submillimetre observations of wind structure in $\zeta$ Pup' --- Introduction ============ $\zeta$ Pup (O4I(n)f) is one of the most-studied O-type stars. Its stellar wind is driven by radiation pressure, whereby the momentum of the stellar photons is transferred to the wind material via line opacity. With other early-type stars it shares the property that its wind is structured. Evidence for this structure in O-type stars in general is seen in the Discrete Absorption Components (DACs) and black troughs in the ultraviolet resonance lines (e.g. Prinja et al. [@Prinja+al90]), the presence of X-ray emission (Sciortino et al. [@Sciortino+al90]) and the excess flux at infrared and millimetre wavelengths (Runacres & Blomme [@Runacres+Blomme96]). Specifically for $\zeta$ Pup, the significant discrepancy between the mass loss rate derived from the H$\alpha$ spectral line and that from the radio continuum flux (Petrenz & Puls [@Petrenz+Puls96]) also points to structure in the wind. Various types of structure have been proposed to explain the observations. Inhomogeneities on the stellar surface (e.g., due to non-radial pulsations), or magnetic fields, result in somewhat different radiative forces, thereby creating fast and slow streams of gas. As these streams also rotate, they collide, creating Corotating Interaction Regions (CIRs). These are large-scale, spiral-shaped structures in the wind that corotate with the stellar surface. In some cases, the CIRs can explain the most interesting property of DACs: the recurrence time scale is in agreement with the estimated rotation period (Mullan [@Mullan86], Cranmer & Owocki [@Cranmer+Owocki96]). Another type of structure that can exist in the wind is of a smaller-scale, stochastic type. It is caused by the inherent instability of the radiative driving mechanism (Owocki [@Owocki00]).
Stochastic structure has been used to explain the X-ray fluxes (Lucy [@Lucy82a], Hillier et al. [@Hillier+al93], Feldmeier et al. [@Feldmeier+al97]), the black troughs in the UV resonance lines (Lucy [@Lucy82b]) and the excess millimetre flux (Blomme et al. [@Blomme+al02]). Finally, it should be noted that $\zeta$ Pup is a rapid rotator (at 43 % of critical velocity). This suggests the possibility that $\zeta$ Pup has a disk, or at least some enhancement of its density near the equatorial plane. Harries & Howarth ([@Harries+Howarth96]) derived a density contrast of at least 1.3 from their linear spectropolarimetry of H$\alpha$. Such an equatorial density enhancement has been used (Petrenz & Puls [@Petrenz+Puls96]) to explain the discrepancy that smooth and spherically symmetric models give for the mass loss rates derived from H$\alpha$ and radio observations. While observational evidence for the existence of disks in certain stars (e.g. Be stars) is strong, there are theoretical difficulties in understanding how these disks form (Owocki et al. [@Owocki+al98]). One of the explanations proposed was the wind-compressed disk model (Bjorkman & Cassinelli [@Bjorkman+Cassinelli93]), but a more detailed study showed that the non-radial component of the line force and gravity darkening can combine to inhibit the formation of the disk. Interestingly, in some circumstances, the detailed models show an enhancement of the density towards the poles rather than the equator (Owocki et al. [@Owocki+al96], [@Owocki+al98], Petrenz & Puls [@Petrenz+Puls00]). Observations at submillimetre and radio wavelengths are well suited to probing this structure. This is because bremsstrahlung, the dominant emission process at these wavelengths, has two interesting properties. First, its opacity is proportional to the [*wavelength*]{} squared.
Therefore, the observed emission originates above a characteristic radius that increases with wavelength (typical values are $\sim 5~R_*$ at 1 mm and $\sim 100~R_*$ at 20 cm). Second, free-free emission also depends on the [*density*]{} squared. This makes it a good indicator of structure: a structured wind will have more radio emission than a smooth one. By looking for variability at a certain wavelength, one can hope to distinguish between structure that has azimuthal symmetry (stochastic, disk or polar outflow) and structure that lacks azimuthal symmetry (CIRs). By looking at increasing wavelengths, one can see how the “amount” of structure changes in the wind. We have already applied these techniques to the submillimetre and radio observations of the B0Ia star $\epsilon$ Ori (Blomme et al. [@Blomme+al02]). For this star, we did not detect variations in the radio emission (at the 25 % level), but we found that the millimetre flux is substantially higher than a smooth wind model predicts. This discrepancy was interpreted with a model for stochastic structure, showing that considerable structure must persist up to at least $\sim 10~R_*$ in the wind of $\epsilon$ Ori. The present paper extends the study of structure in the outer wind to a luminous early O-type star. To look for variability at radio wavelengths, we acquired 3.6 and 6 cm observations with the Australia Telescope Compact Array[^1] (ATCA), during a 12-day observing session, which covers about two rotational periods of the star. To see how structure changes as a function of distance, observations were collected at 850 $\mu$m with the James Clerk Maxwell Telescope[^2] (JCMT), and at 20 cm with the NRAO Very Large Array[^3] (VLA). We supplemented this material with data from the VLA archive. A log of all data used in this paper is given in Table \[table data zeta pup\]. Upper limits from other observations (not discussed here) are listed in Wendker ([@Wendker95]).
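The sensitivity of free-free emission to structure can be illustrated with a toy clumping model (an assumption for illustration only, not the modelling used in this paper): if all the wind material is concentrated in clumps occupying a volume filling factor $f$, the mean density, and hence the mass-loss rate, is unchanged, but $\langle\rho^2\rangle$, and with it the emission measure, is enhanced by $1/f$.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy clumped medium: a fraction f of the volume holds all the mass at density
# rho_mean/f, the rest is empty; the mean density (mass-loss rate) is unchanged
rho_mean, f = 1.0, 0.1
rho = np.where(rng.random(100_000) < f, rho_mean / f, 0.0)

enhancement = np.mean(rho**2) / rho_mean**2   # free-free emission scales with <rho^2>
print(np.mean(rho), enhancement)              # mean ~1, enhancement ~1/f = 10
```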
  Telescope     Programme   Date                        $\lambda$ (cm)
  ------------- ----------- --------------------------- ----------------
  ATCA          C824        1999-09-15$\rightarrow$27   3.6+6
  VLA           AB1017      2002-03-27                  20
  JCMT          M00BU20     2000-10-12                  0.0850
  VLA archive   FLOR        1978-07-23                  6
                FLOR        1978-10-13                  6
                BIEG        1978-11-05                  6
                NEWE        1979-02-09                  2
                FLOR        1979-02-16                  20
                CHUR        1979-07-12                  6
                FLOR        1981-10-18                  1.3+2+6+20
                AA28        1984-03-07                  2+6
                AB327       1985-01-29                  2+6
                AH365       1989-05-13                  3.6
                AC308       1995-01-17                  20

  : Log of the observations used in this paper.[]{data-label="table data zeta pup"}

In Sect. \[section stellar parameters\], we present the stellar parameters of $\zeta$ Pup. The ATCA, VLA and JCMT observations are discussed in Sects. \[section ATCA observations\], \[section VLA observations\] and \[section JCMT observations\], respectively. The interpretation of the observational material is discussed in Sect. \[section discussion\] and conclusions are drawn in Sect. \[section conclusions\]. The parameters of $\zeta$ Pup {#section stellar parameters} ============================= Table \[table stellar parameters\] lists the $\zeta$ Pup parameters we use in this paper. The stellar and wind parameters are taken from the unified NLTE model of Puls et al. ([@Puls+al96]). These values are in close agreement with models by Bohannan et al. ([@Bohannan+al90]) and Pauldrach et al. ([@Pauldrach+al94]). The traditionally assumed distance of 450 pc (Kudritzki et al. [@Kudritzki+al83]) falls well within the Hipparcos error bar, so we will use $d=450$ pc throughout this paper. The radius of 19 $R_{\sun}$ at a distance of 450 pc corresponds to an angular diameter of 0.39 milli-arcsec (mas), in good agreement with interferometric measurements ($\theta_{\rm LD} = 0.42 \pm 0.03$ mas; Hanbury Brown et al. [@HanburyBrown+al74]).
  parameter                       value                                       reference
  ------------------------------- ------------------------------------------- -----------
  RA (J2000)                      $08^{\rm h}03^{\rm m}35{\fs}0467$           SIMBAD
  Dec (J2000)                     $-40{\degr}00{\arcmin}11{\farcs}332$        catalogue
  $\mu_\alpha {\rm cos}~\delta$   $-31.7 \pm 0.5$ mas/yr                      G01
  $\mu_\delta$                    $+17.6 \pm 0.6$ mas/yr                      G01
  $V$ magnitude                   2.25                                        M87
  $B-V$                           $-0.27$                                     M87
  $E_{\rm B-V}$                   0.044                                       S77
  spectral type                   O4I(n)f                                     W72
  $T_{\rm eff}$                   42000 K                                     P96
  $\log g$                        3.60                                        P96
  $\log L/L_{\sun}$               6.00                                        P96
  $R_*$                           $19 R_{\sun}$                               P96
  $M_*$                           $52.5 M_{\sun}$                             P96
  $N_{\rm He}/N_{\rm H}$          0.12                                        P96
  $v \sin i$                      220 km/s                                    P96
  $v_\infty$                      2250 km/s                                   P96
  $\beta$                         1.15                                        P96
  $\dot{M}$                       $5.9 \times 10^{-6} M_{\sun}/\mathrm{yr}$   P96, from H$\alpha$
  $d$                             450 pc                                      K83
                                  $429^{+120}_{-77}$ pc                       Hipparcos

  References:

  ----- -------------------------------------------
  G01   Gontcharov et al. ([@Gontcharov+al01])
  K83   Kudritzki et al. ([@Kudritzki+al83])
  M87   average from Mermilliod ([@Mermilliod87])
  P96   Puls et al. ([@Puls+al96])
  S77   Snow et al. ([@Snow+al77])
  W72   Walborn ([@Walborn72])
  ----- -------------------------------------------

  : Stellar parameters of $\zeta$ Pup.[]{data-label="table stellar parameters"}

$\zeta$ Pup shows considerable evidence for structure in its stellar wind. Discrete Absorption Components (DACs) are seen to move through the ultraviolet resonance lines (Prinja et al. [@Prinja+al92] and references therein). In 1995, $\zeta$ Pup was one of three targets observed continuously during 16 days by the [*International Ultraviolet Explorer*]{} (IUE Mega Campaign, Massa et al. [@Massa+al95]). The analysis of this high-quality dataset by Howarth et al. ([@Howarth+al95]) reveals a 19.2 h and a 5.2 day period. The 19.2 h period is the recurrence time of the DACs. The 5.2 day period indicates modulation on a global scale, and is in good agreement with the rotation period estimated from $v \sin i$, suggesting that material in the wind corotates with the star.
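Two quick consistency checks on the numbers in Table \[table stellar parameters\] (a sketch; the physical constants are standard values): the angular diameter $\theta=2R_*/d$ indeed comes out at 0.39 mas, and $2\pi R_*/(v\sin i)$ sets the rotation-period scale at about 4.4 d, of the same order as the observed 5.2 d period.

```python
import math

R_sun = 6.957e8            # m
pc = 3.0857e16             # m
R_star = 19.0 * R_sun      # Puls et al. radius
d = 450.0 * pc             # adopted distance

# angular diameter theta = 2 R_* / d, converted to milli-arcsec
theta_mas = 2.0 * R_star / d * (180.0 / math.pi) * 3600.0 * 1e3
print(round(theta_mas, 2))     # 0.39 mas, as quoted in the text

# rotation-period scale from v sin i = 220 km/s (equality holds for sin i = 1)
P_days = 2.0 * math.pi * R_star / 220e3 / 86400.0
print(round(P_days, 1))        # ~4.4 d
```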
One possible cause for this corotating material is the presence of a (weak) magnetic field. Attempts to detect a magnetic field for $\zeta$ Pup have so far yielded a null result with an uncertainty of 100-200 G (Barker et al. [@Barker+al81], Chesneau & Moffat [@Chesneau+Moffat02]). $\zeta$ Pup also shows variability in the H$\alpha$ and He [ii]{} 4686 line profiles (Moffat & Michaud [@Moffat+Michaud81] and references therein, Hendry & Bahng [@Hendry+Bahng81], Reid & Howarth [@Reid+Howarth96], Eversberg et al. [@Eversberg+al98]). The presence of variability in these wind-formed lines is consistent with large-scale structures corotating through the wind. Eversberg et al., however, interpret their He [ii]{} 4686 observations in terms of smaller-scale structures (“clumps”) moving out in the wind. Fullerton et al. ([@Fullerton+al96]) and Reid & Howarth ([@Reid+Howarth96]) detected variability in a number of optical lines (He [i]{}, He [ii]{}, N [iv]{}, C [iv]{}). The variations have a period of 8.54 h. Both papers attempt to interpret the variations in terms of non-radial pulsations, but they also raise concerns that these lines might be wind-contaminated. The photospheric perturbations due to non-radial pulsations are another possible way of creating CIRs. Optical continuum fluxes also show variability: the 5.2 day rotation period was detected in optical photometry (Balona [@Balona92]). From Hipparcos photometry, Marchenko et al. ([@Marchenko+al98]) derive a period which is half the rotation period. Other indicators of structure are the continuum flux excess at infrared and millimetre wavelengths (Runacres & Blomme [@Runacres+Blomme96]) and the presence of X-ray emission (Long & White [@Long+White80]). Models by Hillier et al. ([@Hillier+al93]) and Feldmeier et al. ([@Feldmeier+al97]) show that structure can explain the X-ray emission. Earlier claims that the observed X-ray emission is variable (Collura et al.
[@Collura+al89]) were later shown to be incorrect (Berghöfer & Schmitt [@Berghoefer+Schmitt94]). Later, Berghöfer et al. ([@Berghoefer+al96]) found variability in the ROSAT X-ray data (on roughly the same time scale as the H$\alpha$ variability), but this was not confirmed by the ASCA data (Oskinova et al. [@Oskinova+al01]). ATCA observations {#section ATCA observations} ================= Data ---- We used the Australia Telescope Compact Array (ATCA) for a long observing session on $\zeta$ Pup in September 1999. The $\sim$ 27 h on-target integration time was divided into 8 runs, spread over 12 days, thus providing coverage of the $\sim$ 5.2 day rotation period. A detailed observing log is presented in Table \[table ATCA\]. At the time of the observations, ATCA was in configuration 6A, with the longest baseline at 5939 m and the shortest one at 337 m. The continuum observations were done simultaneously at 3.6 cm (X-band; 8.688 GHz) and 6 cm (C-band; 4.848 GHz). The observing bandwidth was 128 MHz. A single observing run consists of alternately observing $\zeta$ Pup for 15 min and the phase calibrator for 5 min. The flux calibrator was observed when possible at the beginning or end of each run. The flux calibrator could not be observed during the fourth (SEP20) and seventh (SEP25) observing runs due to scheduling or technical problems.
  UT start         UT end           total (h)   on-source (h)   $S_{3.6\,{\rm cm}}$ (mJy)   $S_{6\,{\rm cm}}$ (mJy)   beam, 3.6 cm       PA ($\degr$)   beam, 6 cm
  ---------------- ---------------- ----------- --------------- --------------------------- ------------------------- ------------------ -------------- ------------------
  SEP15-23:47:55   SEP16-03:32:15   3.30        2.44            1.98 $\pm$ 0.12             1.40 $\pm$ 0.11           4.05$\times$0.68   33.6           7.43$\times$1.23
  SEP18-23:07:15   SEP19-03:01:15   3.52        2.56            2.10 $\pm$ 0.12             1.49 $\pm$ 0.11           3.78$\times$0.67   29.4           6.96$\times$1.20
  SEP19-22:09:05   SEP20-01:59:35   3.44        2.50            1.98 $\pm$ 0.13             1.34 $\pm$ 0.11           4.31$\times$0.72   18.3           8.07$\times$1.10
  SEP20-18:06:25   SEP20-22:13:35   3.79        2.92            2.40 $\pm$ 0.12             1.61 $\pm$ 0.11           3.58$\times$0.63   -20.4          6.63$\times$1.14
  SEP22-20:08:55   SEP23-02:16:45   5.26        3.93            2.22 $\pm$ 0.13             1.62 $\pm$ 0.10           2.65$\times$0.62   9.5            4.82$\times$1.14
  SEP23-15:07:55   SEP24-01:38:55   9.39        6.86            2.41 $\pm$ 0.13             1.67 $\pm$ 0.11           2.41$\times$0.79   -45.1          4.35$\times$1.44
                                                                2.40 $\pm$ 0.13             1.66 $\pm$ 0.10           3.22$\times$0.60   9.2            5.92$\times$1.09
  SEP25-20:05:25   SEP26-00:00:25   3.54        2.74            2.55 $\pm$ 0.13             1.60 $\pm$ 0.11           4.11$\times$0.57   2.6            7.74$\times$1.04
  SEP27-16:04:05   SEP27-19:59:35   3.60        2.62            2.39 $\pm$ 0.14             1.53 $\pm$ 0.11           3.30$\times$0.73   -40.3          6.02$\times$1.32
  TOTAL                             35.84       26.57
  weighted mean                                                 2.26                        1.55
  median                                                        2.39                        1.60
  combined dataset                                              2.38 $\pm$ 0.09             1.64 $\pm$ 0.07           1.65$\times$0.73   2.6            3.01$\times$1.33

  : ATCA observations of $\zeta$ Pup at 3.6 and 6 cm.[]{data-label="table ATCA"}

Reduction --------- As the level of variability we expect is small, the reduction needs to be done as carefully as possible. For this reason, we also discuss the reduction in detail. The data were reduced in Miriad following the user guide (Sault & Killeen [@Sault+Killeen99]). The data were read into Miriad and corrected for self-interference of the array, as well as for the phase difference between the X and Y channels. Next, the programme [blflag]{} was used to flag out bad datapoints interactively. The calibrators were then used to determine the antenna gains as a function of time (using [mfcal]{}).
The flux assigned to the flux calibrator PKS B1934-638 is 5.85 Jy (with a 2 % error) at 6 cm. The flux value of the phase calibrator PKS B0823-500, determined with [uvflux]{}, is 3.0 Jy. At 3.6 cm, the flux calibrator is 2.88 Jy (with a 2 % error) and the phase calibrator is 1.53 Jy. In those runs where the flux calibrator had not been observed, we used the phase calibrator with the above mentioned flux values to calibrate the data. We determined the bandpass function from the flux calibrator. We interpolated the instrumental gains and applied them to the $\zeta$ Pup observation. The task [invert]{} was then used to produce an image from the visibility datasets by Fourier transform. In the inversion we used multi-frequency synthesis (MFS), which compensates for the spectral index of the source across the bandwidth. Because of the sparse UV-coverage per observation, we used robust uniform weighting (Briggs [@Briggs95]) to improve the RMS in the map. To deconvolve the image we used [mfclean]{}. The resulting clean components are then convolved with a Gaussian and added to the residual image. Besides $\zeta$ Pup, 6 other objects were detected in the primary beam at 6 cm and 4 at 3.6 cm (see Fig. \[fig sources\] and Table \[table identifications\]).

![$\zeta$ Pup and 6 other sources detected at 6 cm on the ATCA images. This map covers a large part of the primary beam. The synthesized beam is shown in the lower left corner. Identifications of the sources are listed in Table \[table identifications\]. Sources S6 and S7 are not seen on the 3.6 cm image, because of the smaller primary beam. []{data-label="fig sources"}](zp_fig1.ps)

  no.   RA (J2000)    DEC (J2000)   Identification
  ----- ------------- ------------- ---------------------
  S2    08 03 33.11   -40 00 03.1   EQ 0801-398
  S3    08 03 28.65   -39 58 20.4   NVSS J080327-395828
  S4    08 03 24.38   -39 59 49.2   
  S5    08 03 21.54   -40 01 56.9   
  S6    08 03 10.87   -39 57 29.8   
  S7    08 03 44.03   -39 57 56.5   

  : Position of other sources on the combined ATCA image.
Formal error bars are better than 0.01 s in right ascension and 0.1″ in declination. Identifications given refer to Jones ([@Jones85]) and Condon et al. ([@Condon+al98]). No identification was found for S4 or S7 in the SIMBAD or HEASARC catalogues. []{data-label="table identifications"}

The cleaning was stopped when the absolute maximum in the residual map (i.e. the map from which the clean components have been subtracted) reached 0.30 mJy. This is about 3 to 4 times the theoretical noise of each map, where the theoretical noise is calculated taking only the system temperature of the front-end receiver into account, not the calibration errors, side-lobes or any other instrumental effects. Experience shows that at this cutoff the RMS in the cleaned map and in the residual map are both about equal to the theoretical RMS. At the end, to correct for primary beam attenuation, the task [linmos]{} was used. All images were reduced in exactly the same way with exactly the same parameters.

![The top panel shows the $\zeta$ Pup fluxes of each ATCA observation at 3.6 cm ($\Diamond$) and 6 cm ($\Box$) as a function of time. The flux derived from the single dataset combining all the ATCA observations is given by the dotted line. The lower 4 panels show the correlation of the $\zeta$ Pup fluxes with those of S2=EQ 0801-398 and S3=NVSS J080327-395828. In the correlation plots the fluxes were normalized by their average.[]{data-label="fig ATCA variability"}](zp_fig2.ps)

Fluxes and error bars
---------------------

The flux of $\zeta$ Pup was determined by fitting an elliptical Gaussian, with values for the major and minor axis and position angle kept fixed to the beam values. Table \[table ATCA\] lists the flux values at 3.6 and 6 cm, and the respective beam sizes with their position angle. Because the observations at 3.6 and 6 cm were done simultaneously, the position angles are the same at both wavelengths.
The data obtained on SEP23-24 were split into two sets in order to have the same time span as in the other observing runs. The 6 cm fluxes of the other sources in the image are less than 1.2 mJy, except S2 and S6, which are about 4 mJy (uncorrected for the primary beam effect): these values are low enough that we do not have to worry about the effect of their sidelobes. The fitting procedure ([imfit]{}) gives an RMS error on the flux measurement. To take into account the calibration error, we added 2 % of the flux values to this RMS, and thus arrived at the final error bars (listed in Table \[table ATCA\]). This error bar only covers the random sources of error. To get a feeling for the systematic errors, we redid the reduction in slightly different ways. Instead of robust uniform weighting, we tried both uniform weighting (by setting the [*robust*]{} parameter to $-2$) and natural weighting ([*robust*]{} parameter = $+2$). In the latter, the RMS in the maps is lower, at the expense of a more elongated beam and worse side-lobe levels. In the former, the beam is smaller, at the expense of higher RMS levels in the map. The robust uniform and natural weighting flux determinations agree well within the error bar. As usual, the better the UV coverage and beam shape, the less difference there is among the different flux determinations. We also measured the flux by determining the maximum intensity of the point source, instead of fitting an elliptical Gaussian. This alternative flux determination always falls within the error bar. ![ATCA observation (all data combined) of $\zeta$ Pup at 3.6 and 6 cm. The cross indicates the optical ICRS 2000.0 position (from SIMBAD), corrected for proper motion (see Table \[table stellar parameters\]). The contour levels follow a logarithmic scale: their values are listed at the bottom of each figure. The negative contour is given by the dashed line. The first positive contour is at about three times the RMS noise in the map.
The beam is shown in the upper left corner.[]{data-label="fig ATCA observations"}](zp_fig3.ps) Results ------- The resulting 3.6 and 6 cm fluxes are plotted in the top panel of Fig. \[fig ATCA variability\]. While this figure suggests that $\zeta$ Pup is variable at both wavelengths, we found a similar behaviour in other sources on the map. We therefore compared the flux values for $\zeta$ Pup with two of the brightest sources close-by (S2=EQ 0801-398 and S3=NVSS J080327-395828, see Fig. \[fig sources\]). The comparison (lower panels on Fig. \[fig ATCA variability\]) shows that all three sources follow the same trend. Hence, this argues strongly against $\zeta$ Pup showing variability. A further argument against variability follows from the fluxes derived from the complete data set (see below) which are also plotted on Fig. \[fig ATCA variability\] (dotted lines). Obviously, the flux of the complete data set is not the straight average of the separate fluxes. This points to problems in the interpolation of the instrumental gain phases during the first few runs of the observing session, causing part of the $\zeta$ Pup flux to be scattered over the rest of the image. These runs get less weight in the complete data set due to the non-linear nature of the cleaning process. Having concluded that the present data do not show detectable variability in $\zeta$ Pup, we can derive upper limits on the amount of variability from the range in fluxes (with their error bars): these are $\pm 20$ % of its flux value at 3.6 and at 6 cm. In principle, the non-detection of variability could be caused by a bad coverage of the phase for the various periods. However, the 8 observations spread over 12 days cover the 5.2 day rotation period reasonably well. We also checked the phase coverage of the variability cycle for possible periods around 8.5 h and 19 h (see Sect. \[section stellar parameters\]) and found it to be good. 
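The phase-coverage argument is easy to reproduce. A minimal sketch (the run midpoints below are read approximately from Table \[table ATCA\], and the phase zero-point is arbitrary):

```python
# Check how well the 8 ATCA run midpoints sample the 5.2 d rotation cycle.
# Midpoints are in days since SEP15 00:00 UT, read (approximately) from
# the observing log; the phase zero-point is arbitrary.
mid = [1.07, 4.04, 5.00, 5.84, 7.97, 8.85, 10.92, 12.75]
period = 5.2  # adopted rotation period in days

phases = sorted(t / period % 1.0 for t in mid)
# Largest uncovered fraction of the cycle, including the wrap-around gap.
gaps = [b - a for a, b in zip(phases, phases[1:])]
gaps.append(phases[0] + 1.0 - phases[-1])
print(max(gaps))
```

With these midpoints the largest gap is about a quarter of the cycle, consistent with the statement that the 5.2 day period is reasonably well covered; substituting trial periods of 8.5 h (0.354 d) or 19 h (0.792 d) checks the other cycles in the same way.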
As there is no variability, we combined all visibility data and Fourier transformed them to obtain a single map. Because the RMS in this combined map is lower than in each individual map, we cleaned it deeper, down to 0.1 mJy. The resulting images at 3.6 and 6 cm are shown in Fig. \[fig ATCA observations\]. The measured fluxes and their error bars (RMS + 2 % of the flux) are listed in Table \[table ATCA\]. To get a feeling for the systematic errors on the combined data set, we investigated the effect of the cutoff limit in the [clean]{}ing procedure: we [clean]{}ed the total map at 6 cm ([*robust=0.5*]{}, rms=0.024 mJy) down to 0.07 mJy and 0.15 mJy. In the former case the number of clean components was 2769; in the latter, 160. The resulting $\zeta$ Pup fluxes fall well within the error bar. Using a single, large cleaning box instead of multiple cleaning boxes also gave negligible differences in the flux. Finally, we redid the reduction, systematically dropping one of the observing runs. The range of values thus obtained again falls within the error bar.

VLA observations {#section VLA observations}
================

20 cm observation {#section 20 cm observation}
-----------------

We obtained a 20 cm (L-band) observation on 2002 March 27 with the VLA in A configuration (i.e. the configuration with the highest spatial resolution). The observation alternated between $\zeta$ Pup and the phase calibrator. Two runs of 7 min were made on $\zeta$ Pup and 3 runs of 2 min on the phase calibrator. For the flux calibration we also observed 3C48. The observation consists of two sidebands (at 1.3851 and 1.4649 GHz), each of 50 MHz bandwidth. The reduction of these data was done using the NRAO package AIPS (Astronomical Image Processing System), following the same steps as for the ATCA data (Sect. \[section ATCA observations\]). Technical details of the reduction are listed in Table \[table VLA data reduction\].
We stopped cleaning when the algorithm started finding about the same number of negative as positive components. The resulting map is shown in Fig. \[fig VLA 20 cm\].

![VLA observation at 20 cm. The contour levels follow a linear scale and the values are listed at the bottom of the figure. The negative contour is given by the dashed line. The first positive contour is at about twice the RMS noise in the map. The beam is shown in the upper left corner. The cross indicates the optical ICRS 2000.0 position (from SIMBAD), corrected for proper motion. The $\sim$ 1.4″ offset with the radio position is most probably due to ionospheric refraction, which is more important at longer wavelengths and at the low elevation at which $\zeta$ Pup was observed. []{data-label="fig VLA 20 cm"}](zp_fig4.ps)

By fitting an elliptical Gaussian to $\zeta$ Pup, we found a 20 cm flux of $0.76 \pm 0.09$ mJy. The error bar is the RMS noise in the total map. We also added a 2 % calibration error to the result (Perley & Taylor [@Perley+Taylor02]), but this does not change the resulting error bar significantly. The maximum intensity of $\zeta$ Pup is 0.74 mJy/beam: the close agreement with the value from the Gaussian fit shows that $\zeta$ Pup is indeed a point source. To judge the robustness of our flux determination, we repeated the reduction, systematically dropping one antenna, changing the weighting of distant visibilities, using natural weighting instead of robust uniform, or doubling or halving the number of clean components. In all cases, the results fall within the error bar. The RMS error bar therefore covers not only the statistical uncertainty, but the systematic errors as well.

Archive observations {#section archive observations}
--------------------

The VLA archive contains a number of $\zeta$ Pup observations, which are listed in Table \[table data zeta pup\]. Many of these observations have not been published previously.
To avoid introducing systematic effects we decided to reduce the whole set. The procedure is the same as that followed in Sect. \[section 20 cm observation\]. The details of the reductions are summarized in Table \[table VLA data reduction\]. In principle, the 1.3 cm (K-band) and 2 cm (U-band) observations should be corrected for differential atmospheric extinction (using the AIPS task [elint]{}), but in none of those cases was there sufficient information to apply this. The resulting maps are shown in Fig. \[fig VLA archive data\] and the fluxes are listed in Table \[table VLA data reduction\]. The error bars show the range of results found by various reductions where we systematically dropped one antenna, used different weightings of distant visibilities, or natural weighting instead of robust uniform. The error bars are always larger than the RMS noise in the map. For each observation, we compared the peak intensity of the source to the flux derived from the Gaussian fit. With the only exception of the 3.6 cm observation of AH365 (see below), there is very good agreement, showing that the sources are indeed point sources. When we doubled or halved the number of clean components, we were always well within the error bar. The error bars of the 3.6, 6 and 20 cm observations include a 2 % calibration error, and the 2 cm a 5 % calibration error. 
  $\lambda$ (cm)   Program   Date         Config.   Flux cal.   Assumed flux (Jy)   Phase cal.    Ant.   Time (min)   Beam (arcsec)      Flux (mJy)        Notes
  ---------------- --------- ------------ --------- ----------- ------------------- ------------- ------ ------------ ------------------ ----------------- -------
  1.3              FLOR      1981-10-18   C                     2.517                             19     56           5.6$\times$3.0     $<13$             1,2
  2                NEWE      1979-02-09             —           —                                 8      9                               $<13$             1,2,3
  2                FLOR      1981-10-18   C         3C286       3.452               0828$-$375    24     64           5.1$\times$3.5     4.3 $\pm$ 0.9     1
  2                AA28      1984-03-07   CnB       3C286       3.423/3.432         0828$-$375    24     18           1.7$\times$1.3     2.9 $\pm$ 0.3     1,4
  2                AB327     1985-01-29   A                     1.742/1.748         0828$-$375    26     121          3.0$\times$1.8     —                 1,5,6
  3.6              AH365     1989-05-13   CnB       3C48        3.171/3.153         0828$-$375    26     14           4.0$\times$3.0     1.4 $\pm$ 0.3     6,7
  6                FLOR      1978-07-23             3C286       7.462                             11     9                               $<4$              8
  6                FLOR      1978-10-13             3C286       7.462               0836$-$202    9      160          9.4$\times$1.3     1.7 $\pm$ 0.3     9
  6                BIEG      1978-11-05             3C286       7.462               0836$-$202    8      24           16$\times$0.69     $<4$              8
  6                CHUR      1979-07-12             3C286       7.462               0828$-$375    12     140          4.9$\times$0.55    1.4 $\pm$ 0.3     10,11
  6                FLOR      1981-10-18   C         3C286       7.462               0828$-$375    27     28           18$\times$3.6      1.71 $\pm$ 0.14   
  6                AA28      1984-03-07   CnB       3C286       7.462/7.510         0828$-$375    24     18           5.0$\times$4.1     1.49 $\pm$ 0.11   12
  6                AB327     1985-01-29   A         3C48        5.405/5.459         0828$-$375    27     91           1.1$\times$0.45    1.05 $\pm$ 0.08   6
  20               FLOR      1979-02-16             3C286       14.51               0836$-$202    11     182          14$\times$2.3      $<1.5$            2
  20               FLOR      1981-10-18   C         3C286       14.51               0828$-$375    26     18           60$\times$12       $<0.75$           2
  20               AB1017    2002-03-27   A         3C48        15.49/16.20                       26     14           5.5$\times$1.1     0.76 $\pm$ 0.09   

  : VLA observations of $\zeta$ Pup and details of their reduction.[]{data-label="table VLA data reduction"}

Notes:

1.  Insufficient data for atmospheric extinction correction
2.  Upper limit from 3 sigma
3.  No flux calibrator, used 0532+075 (J2000) instead with an assumed flux of 2.20 Jy
4.  Bieging et al. ([@Bieging+al89]) list 3.0$\pm$0.2 mJy for this observation
5.  Use of 3C48 as a flux calibrator in the A configuration is not recommended (Perley & Taylor [@Perley+Taylor02])
6.  Data reduction problems detailed in Sect. \[section archive observations\]
7.  Lamers & Leitherer ([@Lamers+Leitherer93]) list 1.60$\pm$0.07 mJy for this observation, based on work by Howarth & Brown ([@Howarth+Brown91])
8.  Upper limit derived from non-detection of EQ 0801-398
9.  Primary is resolved; true flux might be slightly lower
10. An additional 10 % uncertainty in the flux calibration should be added, because the instrumental gains of the flux and phase calibrator show significant differences (Fomalont & Perley [@Fomalont+Perley99])
11. Abbott et al. ([@Abbott+al80]) and Bieging et al. ([@Bieging+al89]) list 1.4$\pm$0.3 mJy for this observation
12. Bieging et al. ([@Bieging+al89]) list 1.3$\pm$0.1 mJy for this observation

![image](zp_fig5.ps)

As $\zeta$ Pup is rather low on the horizon from the VLA, the quality of the observations is not always good. The AB327 observations at 2 and 6 cm show large variations in the gain phases (30-50 degrees) of the calibrators. All sources on the 6 cm image (including $\zeta$ Pup) are 30-50 % lower in flux compared to other observations. The 2 cm $\zeta$ Pup flux is heavily dependent on tapering, and is compatible with a value of $\sim 3$ mJy. The 3.6 cm observation of AH365 is the only one where the measured peak intensity is systematically higher (by $\sim 40$ %) than the flux derived from the Gaussian fit. This suggests that, due to phase problems, the source is not quite a point source. The measured peak intensity is 2 mJy. Minor comments on other observations are given in Table \[table VLA data reduction\].
In general, the agreement of the fluxes we derived with published values is quite good (see notes to Table \[table VLA data reduction\]). Only for the AA28 6 cm observation do we find a value (1.49 $\pm$ 0.11 mJy) significantly higher than the published value (1.3 $\pm$ 0.1 mJy, Bieging et al. [@Bieging+al89]). There is an additional observation available (AC308 – see Table \[table data zeta pup\]) made at 20 cm. This observation is part of the NRAO VLA Sky Survey (NVSS – Condon et al. [@Condon+al98]). We checked this survey and found that $\zeta$ Pup was not detected (but EQ 0801-398 is detected).

Long term variability {#section long term variability}
---------------------

![image](zp_fig6.ps)

We now compare the different VLA observations, wavelength by wavelength, to see if we can detect variability on timescales much longer than the rotation period. We have only two 2 cm flux determinations. The error bars do not overlap, which might suggest variability. However, we recall the considerable difficulties in the reduction of these data (Sect. \[section archive observations\]), and based on that, we do not consider them to present evidence of variability. For completeness, we also mention the 2 cm observations of $\zeta$ Pup by Morton & Wright ([@Morton+Wright79]): they list 7.2$\pm$1.1 mJy as the average of two observing runs. This value is significantly higher than ours. However, these observations were made with a single-dish antenna (Parkes 64-m), which has a beam of 2.3′. When targeting $\zeta$ Pup, the beam also covers S2=EQ 0801-398 (see Fig. \[fig sources\]), which will dominate the measured flux. These data therefore cannot be used to look for variability of $\zeta$ Pup. The single VLA 3.6 cm observation ($1.4 \pm 0.3$ mJy) can be compared to the ATCA determination ($2.38 \pm 0.09$ mJy). In Sect. \[section archive observations\] we showed that the AH365 observation was quite problematic.
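As a rough significance guide for these comparisons (a sketch, not a calculation from the paper), the separation of each flux pair in units of the quadratically combined error bars is:

```python
# Separation of two flux measurements in units of their combined error.
def separation(f1, e1, f2, e2):
    return abs(f1 - f2) / (e1**2 + e2**2) ** 0.5

# The two VLA 2 cm fluxes (FLOR and AA28, Table [table VLA data reduction]):
sep_2cm = separation(4.3, 0.9, 2.9, 0.3)
# The ATCA 3.6 cm flux versus the problematic AH365 VLA value:
sep_36cm = separation(2.38, 0.09, 1.4, 0.3)
print(sep_2cm, sep_36cm)
```

The 2 cm pair differs by only about 1.5 combined sigma, and the 3.6 cm pair by about 3.1 sigma, which is why the reduction problems, rather than intrinsic variability, are the preferred explanation.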
Comparing the ATCA measurements of sources S2=EQ 0801-398 and S3=NVSS J080327-395828 to the AH365 measurements shows only a 20 % effect for S2, and none for S3, suggesting that the Gaussian fit fluxes would be reliable. In that case, the AH365 $\zeta$ Pup flux would be significantly different from the ATCA determination. However, AH365 shows such an accumulation of problems (the flux of the secondary calibrator is considerably less than expected, there are reasonably high phase changes and there is the difference between peak intensity and Gaussian integrated flux) that we do not consider this convincing evidence of variability. The single 20 cm determination is compatible with the two upper limits. The only wavelength for which we have a reasonable number of flux determinations is 6 cm. Fig. \[fig 6 cm\] shows these fluxes as a function of time. The flux from the ATCA combined data goes through the error bars of almost all observations, indicating that there is no detectable variability. The only exception is the significantly lower flux of AB327. In Sect. \[section archive observations\] we discussed the phase problems of this observation, which led to the lower flux. Another possible factor might be that we are starting to resolve the stellar wind of $\zeta$ Pup (this is the observation with the highest spatial resolution we have). To estimate how much flux we would lose, we use a Wright & Barlow ([@Wright+Barlow75]) approach.
By their definition of the characteristic radius ($R_\nu$) of the radio-emitting region, the observed flux ($S_\nu$) is given by: $$S_\nu = 10^{-26} \int_{R_\nu}^{+\infty} {\rm d}r \frac{4 \pi r^2}{D^2} K(\nu,T) \gamma \left( \frac{\dot{M}}{4 \pi r^2 v_\infty \mu m_{\rm H}} \right)^2 B_\nu (T),$$ where $r$ is the radius, $D$ the distance to the star, $K(\nu,T)$ the free-free absorption coefficient, $\gamma$ the ratio of electron to ion number densities, $\dot{M}$ the mass loss rate, $v_\infty$ the terminal velocity, $\mu$ the average atomic mass, $m_{\rm H}$ the proton mass and $B_\nu (T)$ the Planck function at frequency $\nu$ and temperature $T$. All units are cgs, except for $S_\nu$, which is in mJy. If, instead of integrating to $+\infty$, we only integrate to a radius $R_{\rm max}$, we lose a fraction of the flux given by: $$\frac{\Delta S_\nu}{S_\nu} = \frac{R_\nu}{R_{\rm max}}.$$ The beamsize ($\theta_{\rm beam}$) determines $R_{\rm max}$ through $\theta_{\rm beam} = 2 R_{\rm max}/D$. Combining this with the Wright & Barlow expression for $R_\nu$, we get: $$\frac{\Delta S_\nu}{S_\nu} = 3.8 \times 10^5 \frac{(\gamma g Z^2)^{1/3} T^{-1/2}}{D \theta_{\rm beam}} \left( \frac{\dot{M} \lambda}{\mu v_\infty} \right)^{2/3},$$ where $\dot{M}$ is in $M_{\sun}/\mathrm{yr}$, $v_\infty$ in km/s, $\lambda$ in cm, $\theta_{\rm beam}$ in arcsec and $D$ in kpc. We take $\dot{M} = 5.9\times 10^{-6}$ $M_{\sun}$/yr (Puls et al. [@Puls+al96]), $\mu$=1.4, $\gamma$=1, $Z$=1 and the Gaunt factor $g$=7. Other parameters are taken from Table \[table stellar parameters\]. With a beam of $\theta_{\rm beam}$ = 0.7″, we find a flux loss of $6-12$ %, depending on whether we take a hot wind (42000 K) or a cool wind (10000 K). If we use the mass loss rate derived from our model for the radio observations (Table \[table mass loss rate determinations\]), we find a $4-8$ % effect.
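The last formula can be evaluated directly. In the sketch below, the distance and terminal velocity are assumptions ($D \approx 0.43$ kpc and $v_\infty \approx 2250$ km/s, values commonly quoted for $\zeta$ Pup; the paper takes them from Table \[table stellar parameters\], which is not reproduced in this excerpt):

```python
import math

# Fractional flux lost outside the synthesized beam, following the
# Wright & Barlow based formula above. mdot in Msun/yr, lam in cm,
# theta in arcsec, D in kpc, v_inf in km/s.
def flux_loss(T, mdot=5.9e-6, lam=6.0, theta=0.7, D=0.43,
              v_inf=2250.0, mu=1.4, gamma=1.0, Z=1.0, g=7.0):
    return (3.8e5 * (gamma * g * Z**2) ** (1 / 3) / math.sqrt(T)
            / (D * theta)
            * (mdot * lam / (mu * v_inf)) ** (2 / 3))

print(flux_loss(42000.0))  # hot wind
print(flux_loss(10000.0))  # cool wind
```

With these parameters the sketch reproduces the quoted $6-12$ % range for the hot and cool wind.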
The low AB327 flux can thus not be explained by our starting to resolve the stellar wind, but is due to problems with the interpolation of the gain phases.

JCMT observation {#section JCMT observations}
================

We also determined the 850 $\mu$m flux of $\zeta$ Pup using the Submillimetre Common-User Bolometer Array (SCUBA, Holland et al. [@Holland+al99]) on JCMT (see Table \[table data zeta pup\]). The instrument observes simultaneously at 450 and 850 $\mu$m using two hexagonal arrays of bolometers. The sensitivity at 450 $\mu$m is too low for a detection, so we will only discuss the 850 $\mu$m data. Our flexibly scheduled observations were taken in medium weather conditions (the 850 $\mu$m zenith opacity was 0.32 – 0.47) during a half-shift on 2000 October 12. The beam at 850 $\mu$m is 14.5″. As the data were collected in the same run as the $\epsilon$ Ori observation discussed by Blomme et al. ([@Blomme+al02]), we refer to that paper for details on the observation and the reduction. The total on-target integration time for $\zeta$ Pup was 15 min. After reduction, the RMS on the $\zeta$ Pup observation turns out to be about 16 %. We tried variants in the reduction (e.g. using other bolometers than the inner ring for sky-noise removal), but this changes the flux by considerably less than 16 %. The $\zeta$ Pup observations are preceded and followed by an observation of the calibrator OH231.8. Taking the average of these two calibration observations, we arrive at $31 \pm 5$ mJy for the flux. The error bar takes into account the measurement errors on the target and the calibrator as well as the calibration error. However, OH231.8 is somewhat variable and the flux of the calibrator at the time of our observation could be different from the reference flux we used. If we use the calibration observation that precedes our run, we have $28 \pm 5$ mJy.
Trying to use other calibrators that were observed during that night (but of course further away in time from our observation) tends to favour the lower value. In view of this we propose $28 \pm 5$ mJy as the best determination of the flux. The $\zeta$ Pup value is then also based on the same calibrator as the $\epsilon$ Ori observation analysed by Blomme et al. ([@Blomme+al02]). $\lambda$ flux (mJy) reference ------------ ----------------- ------------------------------------------------- -- -- 850 $\mu$m 28 $\pm$ 5 JCMT 1.3 mm 20.2 $\pm$ 1.8 from Leitherer & Robert ([@Leitherer+Robert91]) 2 cm 4.3 $\pm$ 0.9 VLA, FLOR 1981-10-18 2 cm 2.9 $\pm$ 0.3 Bieging et al. ([@Bieging+al89]), revised 3.6 cm 2.38 $\pm$ 0.09 ATCA 6 cm 1.64 $\pm$ 0.07 ATCA 20 cm 0.76 $\pm$ 0.09 VLA : Submillimetre and radio fluxes of $\zeta$ Pup. Data from this paper, unless otherwise indicated.[]{data-label="table all fluxes"} If we want to compare our 850 $\mu$m value with the 1.3 mm determination of Leitherer & Robert ([@Leitherer+Robert91]), we need to correct for the difference in wavelength. If we assume an $\alpha=0.6$ spectrum, we find that our value corresponds to $22 \pm 4$ mJy at 1.3 mm, which agrees very well with the Leitherer & Robert value of $20.2 \pm 1.8$ mJy. Our value is also compatible with the Altenhoff et al. ([@Altenhoff+al94]) upper limit of 33 mJy at 1.2 mm. Discussion {#section discussion} ========== Smooth wind model {#section smooth wind model} ----------------- Table \[table all fluxes\] summarizes the submillimetre and radio fluxes we will discuss in this section. Other fluxes either agree with these, or possible disagreements have been explained in Sect. \[section archive observations\]. From the 1.3 mm and 6 cm fluxes we can derive the spectral index $\alpha$ (defined by $F_\nu \propto \lambda^{-\alpha}$). 
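Both of these numbers follow from one-line computations on the fluxes of Table \[table all fluxes\]; a quick sketch:

```python
import math

# (i) Scale the 850 micron flux to 1.3 mm assuming F_nu ~ lambda^(-alpha)
# with alpha = 0.6 (wavelengths in cm).
f13mm = 28.0 * (0.085 / 0.13) ** 0.6
# (ii) Spectral index between the 1.3 mm (20.2 mJy) and 6 cm (1.64 mJy) fluxes.
alpha = math.log(20.2 / 1.64) / math.log(6.0 / 0.13)
print(round(f13mm, 1), round(alpha, 2))
```

The scaled 850 $\mu$m flux comes out at about 21.7 mJy, i.e. the quoted $22 \pm 4$ mJy, and the spectral index at $\alpha \approx 0.66$, the value discussed below.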
The measured value of $\alpha = 0.66 \pm 0.03$ defines a power law that goes through the error bars of all observations, with the exception of the 2 cm fluxes (where it is intermediate between the two determinations). This spectrum is somewhat steeper than expected from the Wright & Barlow ([@Wright+Barlow75]) model (where $\alpha = 0.6$), indicating some discrepancy between the millimetre and radio fluxes. To better quantify this discrepancy, we also made a smooth wind model for $\zeta$ Pup in the same way as in Runacres & Blomme ([@Runacres+Blomme96]). The model solves the equations of radiative transfer and statistical equilibrium in a spherically symmetric stellar wind, containing only hydrogen and helium. The density in the wind is determined by solving the time-independent hydrodynamical equations (following Pauldrach et al. [@Pauldrach+al86]). When fitting the model to the observations, the visual and near-infrared fluxes were used to determine the interstellar extinction and the radio fluxes to determine the mass loss rate. The far-infrared and millimetre fluxes are unconstrained and can therefore be used to see how well the smooth wind model fits the observations. Further details of the model are given in Runacres & Blomme. A new version of their $\zeta$ Pup model was calculated that uses the parameters listed in Table \[table stellar parameters\]. In fitting the model, we determined the mass loss rate from our ATCA 6 cm observation. We chose this observation because it has the smallest error bar. The best fit gives a mass loss rate of $\dot{M} = (3.5 \pm 0.1) \times 10^{-6}$ $M_{\sun}$/yr. The error bar on $\dot{M}$ corresponds only to the error bar on the 6 cm flux, and does not include the more important errors due to stellar parameters or distance. For example, the error in the Hipparcos distance converts to a ($-0.9$, $+1.5$) $\times 10^{-6}$ $M_{\sun}$/yr error on $\dot{M}$. ![Observed fluxes normalised to a smooth wind model.
Observations above the dotted line point to additional emission that is not included in the smooth model.[]{data-label="fig smooth model"}](zp_fig7.ps)

  Method      $\dot{M}$ ($10^{-6}$ $M_{\sun}$/yr)   Reference
  ----------- ------------------------------------- --------------------------------------------
  radio       $2.4_{-0.7}^{+1.0}$                   Lamers & Leitherer ([@Lamers+Leitherer93])
  H$\alpha$   $3.5_{-1.2}^{+1.9}$                   Lamers & Leitherer ([@Lamers+Leitherer93])
  H$\alpha$   5.9                                   Puls et al. ([@Puls+al96])
  radio       3.5                                   this paper

  : Comparison to other mass loss rate determinations of $\zeta$ Pup.[]{data-label="table mass loss rate determinations"}

![image](zp_fig8.ps)

Because it is such a well-studied star, a large number of mass loss rate determinations have been made for $\zeta$ Pup. In our comparisons we will only consider the more recent determinations (Table \[table mass loss rate determinations\]). Our radio mass loss rate of $3.5 \times 10^{-6}$ $M_{\sun}$/yr is different from the $5.9 \times 10^{-6}$ $M_{\sun}$/yr value of Puls et al. ([@Puls+al96]), based on fitting the H$\alpha$ line profile. This difference is significant, considering that we used the same stellar parameters as they did. We recall that Petrenz & Puls ([@Petrenz+Puls96]) already noted this discrepancy and tried to explain it by invoking an equatorial density enhancement, caused by the large rotational velocity of the star. Our results are in acceptable agreement with both the radio and H$\alpha$ determinations by Lamers & Leitherer ([@Lamers+Leitherer93]), but we note that they used a simpler model for the H$\alpha$ emission than Puls et al. Contrary to what the Lamers & Leitherer numbers suggest, $\zeta$ Pup is therefore an exception to the usually good agreement between the H$\alpha$ and radio mass loss rates. The observed fluxes are compared to the best-fit smooth wind model in Fig. \[fig smooth model\].
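As an order-of-magnitude cross-check (not the paper's fit, which uses the full smooth wind model), the standard Wright & Barlow (1975) point-source relation $S_\nu\,[{\rm mJy}] = 2.32\times10^{4}\,(\dot{M} Z/(v_\infty \mu))^{4/3}\,(\gamma g \nu)^{2/3}/D^2$ can be inverted for the ATCA 6 cm flux. The terminal velocity, distance and Gaunt factor below are assumed values, not taken from the paper's parameter table:

```python
import math

# Invert the Wright & Barlow point-source flux relation for Mdot.
# S in mJy, nu in Hz, D in kpc, v_inf in km/s; returns Msun/yr.
def mdot_from_flux(S, nu, D, v_inf=2250.0, mu=1.4,
                   gamma=1.0, Z=1.0, g=5.3):
    term = S * D**2 / (2.32e4 * (gamma * g * nu) ** (2 / 3))
    return mu * v_inf / Z * term ** 0.75

print(mdot_from_flux(1.64, 4.848e9, 0.43))
```

This gives roughly $4 \times 10^{-6}$ $M_{\sun}$/yr, the same order as the fitted $3.5 \times 10^{-6}$; the residual difference reflects the assumed Gaunt factor, wind temperature and distance.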
The figure shows the millimetre and radio observations listed in Table \[table all fluxes\], supplemented by visual and infrared data listed in Runacres & Blomme ([@Runacres+Blomme96]). As found by Runacres & Blomme, there is an excess flux at millimetre wavelengths. Due to our somewhat higher 6 cm flux, the effect found here (24 %) is somewhat lower than what they found (35 %). The present result is significant at the 2 sigma level. In view of the large error bars, it is not clear what happens at 2 cm, but the 3.6 cm flux could be marginally in excess. The 20 cm flux is in good agreement with the smooth wind model. We stress that the same discrepancy is found when a much simpler Wright & Barlow ([@Wright+Barlow75]) model is used instead of our smooth wind model.

Explaining the millimetre excess
--------------------------------

The millimetre excess is the most important result from this work. Various explanations will be considered in this section. To study the excess we will make considerable use of models similar to the one developed by Wright & Barlow ([@Wright+Barlow75]). We start by noting that the Wright & Barlow formula for the flux is not very sensitive to temperature, so a radial gradient of the temperature will not explain the excess. There could be an indirect effect of the temperature however, due to the recombination of important ions. This explanation has been proposed by Leitherer & Robert ([@Leitherer+Robert91]). As He$^{++}$ will recombine much more quickly than H$^{+}$, we only consider the recombination of He$^{++}$. A careful evaluation of the ionization-dependent factors in the Wright & Barlow ([@Wright+Barlow75]) formula for the radio flux shows that, if we assume all helium to be He$^{++}$ in the millimetre formation region and all helium to be He$^{+}$ in the centimetre formation region, the discrepancy between the fluxes can be explained.
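The size of this ionization effect can be sketched from the $(\gamma \langle Z^2 \rangle)^{2/3}$ factor in the Wright & Barlow flux; the helium abundance $n_{\rm He}/n_{\rm H} = 0.1$ used below is an illustrative assumption, not the paper's adopted value:

```python
# (gamma * <Z^2>)^(2/3) for a fully ionized H wind plus He of given charge.
def ion_factor(y_he, he_charge):
    n_ion = 1.0 + y_he                        # ions per hydrogen nucleus
    n_e = 1.0 + he_charge * y_he              # electrons per hydrogen nucleus
    gamma = n_e / n_ion
    z2 = (1.0 + he_charge**2 * y_he) / n_ion  # ion-weighted mean Z^2
    return (gamma * z2) ** (2 / 3)

# He++ inner (millimetre) region relative to He+ outer (centimetre) region:
boost = ion_factor(0.1, 2) / ion_factor(0.1, 1)
print(boost)
```

With this abundance the He$^{++}$ region emits roughly 25 % more free-free flux than the He$^{+}$ region, which is indeed of the order of the observed millimetre excess.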
It should be noted that our own smooth-wind model does not show recombination, but it does not include important effects, such as line blanketing due to metals. Recent models of early-type stars arrive at lower effective temperatures than the one we used here (Bianchi & Garcia [@Bianchi+Garcia02], Bianchi et al. [@Bianchi+al03], Martins et al. [@Martins+al02]), which would favour recombination. Further evidence that recombination occurs comes from models of the stellar wind of $\zeta$ Pup (Hillier et al. [@Hillier+al93], Pauldrach et al. [@Pauldrach+al01], Puls 2003, pers. comm.). Due to the recombination in these models, it is necessary to assume that the X-rays are formed out to large distances in the wind ($> 100~R_*$), because X-rays formed too close to the star get absorbed. However, the X-ray spectral lines of $\zeta$ Pup (Cassinelli et al. [@Cassinelli+al01], Kahn et al. [@Kahn+al01]) seem to require formation closer to the stellar surface and less opacity in the wind (Kramer et al. [@Kramer+al03]). This is difficult to reconcile with He recombination. A slower velocity law has been claimed to explain the [*far-infrared*]{} discrepancies seen in Fig. \[fig smooth model\] (Kudritzki & Puls [@Kudritzki+Puls00]). To explain the [*millimetre*]{} discrepancies would require a very much slower velocity law. To investigate this, we adapted the Wright & Barlow ([@Wright+Barlow75]) model to include a $\beta$-type velocity law. In such a case, an analytical solution is no longer possible, and all integrations were done numerically. We find that $\beta \approx 5$ is needed to explain the millimetre fluxes, which is not compatible with other observational indicators such as the H$\alpha$ profile, which requires $\beta \approx 1.15$ (Puls et al. [@Puls+al96]). The existence of a flux excess at millimetre wavelengths can also be due to structure in the wind.
As there is a wealth of evidence for structure in these winds, it seems natural to explore a model based on it. The model in question is the same as used in Blomme et al. ([@Blomme+al02]) for $\epsilon$ Ori. The smooth wind opacity in a Wright & Barlow ([@Wright+Barlow75]) model is multiplied by the clumping factor to get the clumped wind opacity. The clumping factor is defined as $\langle \rho^2 \rangle / \langle \rho \rangle^2$, where $\langle \, \rangle$ stands for a time-average, which we have approximated by integrating over a small volume of the wind. Optical depth, emergent intensity and flux are then calculated as in Wright & Barlow. As our model allows the clumping factor and velocity to change as a function of distance, all integrations have to be done numerically. We used a run of the clumping factor based on the work of Runacres & Owocki ([@Runacres+Owocki01]), who calculated time-dependent hydrodynamical models to study the effect of the line-driving instability at large distances from the star. The clumping factor in their models rises to reach a maximum rather far away from the star ($10 - 50~R_*$) and then decreases again. Simplifying their results somewhat, we approximate the typical behaviour of the clumping factor by a piece-wise linear curve fixed by specifying three points. We let the clumping factor rise linearly from close to the surface of the star (point 1) to a certain distance (point 2), and then let it fall off again linearly till it becomes one (point 3). The inset to Fig. \[fig clumped model\] shows an example (dotted line). We also used piece-wise linear curves fixed by four points. By setting the clumping factor equal to one at large distances, we ensure that there is no excess at radio wavelengths. Fig. \[fig clumped model\] shows a number of fits we attempted. Although no unique solution is found, it is quite clear that the clumping factor has to be significantly higher than 1 in the inner part of the wind, and needs to diminish considerably beyond 70 $R_*$.
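The piece-wise linear parametrization can be written down directly. The sketch below is our own reading of the three-point description (it assumes the factor starts at one at point 1, which the text does not state explicitly); the radii and peak value are illustrative only:

```python
# Piece-wise linear clumping factor f_cl(r): rises from 1 at r1 to fmax at r2,
# falls back to 1 at r3, and equals 1 (smooth wind) outside [r1, r3].
# Radii in units of Rstar; the default numbers are illustrative only.
def clumping_factor(r, r1=2.0, r2=30.0, r3=70.0, fmax=5.0):
    if r <= r1 or r >= r3:
        return 1.0          # smooth wind near the star and at large distances
    if r <= r2:
        return 1.0 + (fmax - 1.0) * (r - r1) / (r2 - r1)   # linear rise
    return fmax + (1.0 - fmax) * (r - r2) / (r3 - r2)      # linear fall-off
```

Setting the factor to one beyond point 3 is what guarantees that the radio fluxes remain those of the smooth wind.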
To stress this last point, Fig. \[fig clumped model\] includes a model with substantial clumping up to 120 $R_*$ (dashed line); the fluxes clearly overshoot the 3.6 and 6 cm observations. The fact that structure diminishes beyond a certain distance is similar to what we found for $\epsilon$ Ori (Blomme et al. [@Blomme+al02]), except that for $\epsilon$ Ori, this started happening around $\sim 40~R_*$. While structure diminishes around 70 $R_*$, it does not necessarily disappear. For $\epsilon$ Ori we showed that a model where structure persists up to large distances (with a constant clumping factor) could equally well explain the observations, provided we reduce the mass loss rate accordingly. For $\zeta$ Pup, we see that the 20 cm observation is in reasonable agreement with the 6 cm one. This shows that, [*if*]{} there is still clumping left, it falls off considerably slower than it does in the inner 70 $R_*$ of the wind. Obviously, a more accurate 20 cm flux and observations in the 1 mm – 3.6 cm region will further constrain the extent and amount of structure. The above model was very much inspired by the small-scale structure expected from the instability of the radiative driving mechanism. However, CIRs, a disk or a polar enhancement should have similar effects on the millimetre flux. In these cases the 70 $R_*$ radius will be significant as well, in that it shows where the structure diminishes substantially, or maybe even disappears. A detailed comparison of models for different types of structure is beyond the scope of the present paper. Conclusions {#section conclusions} =========== Radio observations of $\zeta$ Pup covering about two rotational periods, supplemented by archive observations covering a much longer time scale, do not show variability at more than the $\pm 20$ % level. The long integration time gives us an accurate flux determination of 2.38 $\pm$ 0.09 mJy at 3.6 cm and 1.64 $\pm$ 0.07 mJy at 6 cm. 
These values are slightly higher than the ones previously known. Converting the fluxes into a mass loss rate, we find $\dot{M}$ = $3.5 \times 10^{-6}$ $M_{\sun}$/yr. This value confirms the significant discrepancy with the H$\alpha$ mass loss rate (Petrenz & Puls [@Petrenz+Puls96]). A smooth wind model shows that the millimetre fluxes are too high compared to the radio fluxes. While recombination of helium in the outer wind cannot be discounted as an explanation, we favour a model that ascribes the discrepancy to structure. A simple model shows a substantial decay, or maybe disappearance, of structure beyond 70 $R_*$. Fig. \[fig clumped model\] shows how observations at wavelengths between 1 mm and 3.6 cm can further constrain models for structure. The present data do not allow a distinction between the various types of structure (stochastic, CIRs, disk or polar enhancement). Attempting to detect variability at far-infrared or millimetre wavelengths should provide better constraints on the azimuthal symmetry of the structure, thereby allowing us to decide whether it is in the form of CIRs. We thank Joan Vandekerckhove for his help with the reduction of the VLA data. We are grateful to Thomas Lowe for making the JCMT observation. We also thank the original observers of the VLA archive data we used. We thank Joachim Puls for information about the helium recombination in the stellar wind models. This work benefitted from discussions with Stan Owocki. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France and NASA’s Astrophysics Data System Abstract Service. We also consulted the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA’s Goddard Space Flight Center. M.C.R. acknowledges support from ESA-Prodex project no. 13346/98/NL/VJ(ic), financed by ESA-Prodex. 
Part of this research was carried out in the framework of the project IUAP P5/36 financed by the Belgian State, Federal Office for Scientific, Technical and Cultural Affairs. Abbott, D. C., Bieging, J. H., Churchwell, E., & Cassinelli, J. P. 1980, ApJ 238, 196 Altenhoff, W. J., Thum, C., & Wendker, H. J. 1994, A&A 281, 161 Balona, L. A. 1992, MNRAS 254, 404 Barker, P. K, Landstreet, J. D., Marlborough, J. M., Thompson, I., & Maza, J. 1981, ApJ 250, 300 Berghöfer, T. W., & Schmitt, J. H. M. M. 1994, A&A 290, 435 Berghöfer, T. W., Baade, D., Schmitt, J. H. M. M., et al. 1996, A&A 306, 899 Bianchi, L., & Garcia, M. 2002, ApJ 581, 610 Bianchi, L., Garcia, M., & Herald, J. 2003, Rev. Mex. Astron. Astrofis. (Serie de Conferencias) 15, 226 Bieging, J. H., Abbott, D. C., & Churchwell, E. B. 1989, ApJ 340, 518 Bjorkman, J. E., & Cassinelli, J. P. 1993, ApJ 409, 429 Blomme, R., Prinja, R. K., Runacres, M. C., & Colley, S. 2002, A&A 382, 921 Bohannan, B., Abbott, D. C., Voels, S. A., & Hummer, D. G. 1990, ApJ 365, 729 Briggs, D. S. 1995, High Fidelity Deconvolution of Moderately Resolved Sources, PhD thesis (The New Mexico Institute of Mining and Technology, Socorro, New Mexico) Cassinelli, J. P., Miller N. A., Waldron W. L., MacFarlane, J. J., & Cohen D. H. 2001, ApJ 554, L55 Chesneau, O., & Moffat, A. F. J. 2002, PASP 114, 612 Collura, A., Sciortino, S., Serio, S., et al. 1989, ApJ 338, 296 Condon, J. J., Cotton, W. D., Greisen, E. W., et al. 1998, AJ 115, 1693 Cranmer, S. R., & Owocki, S. P. 1996, ApJ 462, 469 Eversberg, T., Lépine, S., Moffat, A. F. J. 1998, ApJ 494, 799 Feldmeier, A., Kudritzki, R.-P., Palsa, R., Pauldrach, A. W. A., & Puls, J. 1997, A&A 320, 899 Fomalont, E. B., & Perley, R. A. 1999, in ASP Conf. Ser. 180, Synthesis Imaging in Radio Astronomy II, eds. G. B. Taylor, C. L. Carilli, & R. A. Perley, 79 Fullerton, A. W., Gies, D. R., & Bolton, C. T. 1996, ApJS 103, 475 Gontcharov, G. A., Andronova, A. A., Titov, O. A., & Kornilov, E. V. 
2001, A&A 365, 222 Hanbury Brown, R., Davis, J., & Allen, L. R. 1974, MNRAS 167, 121 Harries, T. J., & Howarth, I. 1996, A&A 310, 533 Hendry, E. M. & Bahng, J. D. R. 1981, JApA 2, 141 Hillier, D. J., Kudritzki, R. P., Pauldrach, A. W. A., et al. 1993, A&A 276, 117 Holland, W. S., Robson, E. I., Gear, W. K., et al. 1999, MNRAS, 303, 659 Howarth, I. D., & Brown, A. B. 1991, in IAU Symp. 143, Wolf-Rayet Stars and Interrelations with Other Massive Stars in Galaxies, eds. K. A. van der Hucht & B. Hidayat (Dordrecht: Kluwer), 315 Howarth, I. D., Prinja, R. K., & Massa, D. 1995, ApJ 452, L65 Jones, P. A. 1985 MNRAS 216, 613 Kahn S. M., Leutenegger M. A., Cottam J., et al. 2001, A&A 365, L312 Kramer R. H., Cohen, D. H., & Owocki, S. P. 2003, ApJL, in press Kudritzki, R. P., & Puls, J. 2000, ARA&A 38, 613 Kudritzki, R. P., Simon, K. P., & Hamann, W.-R. 1983, A&A 118, 245 Lamers, H. J. G. L. M., & Leitherer, C. 1993, ApJ 412, 771 Leitherer, C., & Robert, C. 1991, ApJ 377, 629 Long, K. S., & White, R. L. 1980, ApJ 239, L65 Lucy, L. B. 1982a, ApJ 255, 286 Lucy, L. B. 1982b, ApJ 255, 278 Marchenko, S. V., Moffat, A. F. J., Van der Hucht, K. A., et al. 1998, A&A 331, 1022 Martins, F., Schaerer, D., & Hillier, D. J. 2002, A&A 382, 999 Massa, D., Fullerton, A. W., Nichols, J. S., et al. 1995, ApJ 452, L53 Mermilliod, J.-C. 1987, A&AS 71, 413 Moffat, A. F. J., & Michaud, G. 1981, ApJ 251, 133 Morton, D. C., & Wright, A. E. 1979, in IAU Symp. 83, Mass loss and evolution of O-type stars, eds. P. S. Conti & C. W. H. de Loore (Dordrecht: Kluwer), 155 Mullan, D. J. 1986, A&A 165, 157 Oskinova, L. M., Clarke, D., & Pollock, A. M. T. 2001, A&A 378, L21 Owocki, S. P. 2000, Radiatively Driven Stellar Winds from Hot Stars, in Encyclopedia of Astronomy and Astrophysics, http://www.ency-astro.com, London: Nature Publishing Group, and Bristol: Institute of Physics Publishing. Owocki, S. P., Cranmer, S. R., & Gayley, K. G. 1996, ApJ 472, L115 Owocki, S. P., Cranmer, S. R., & Gayley, K. G. 
1998, Ap&SS 260, 149 Pauldrach, A., Puls, J., & Kudritzki, R. P. 1986, A&A 164, 86 Pauldrach, A. W. A., Kudritzki, R. P., Puls, J., Butler, K., Hunsinger, J. 1994, A&A 283, 525 Pauldrach, A. W. A., Hoffmann T. L., & Lennon, M. 2001, A&A 375, 161 Petrenz, P., & Puls, J. 1996, A&A 312, 195 Petrenz, P., & Puls, J. 2000, A&A 358, 956 Perley, R. A., & Taylor, G. B. 2002, The VLA Calibrator Manual (http://www.aoc.nrao.edu/$\sim$gtaylor/calib.html) Prinja, R. K., Barlow, M. J., & Howarth, I. D. 1990, ApJ 361, 607 Prinja, R. K., Balona, L. A., Bolton, C. T., et al. 1992, ApJ 390, 266 Puls, J., Kudritzki, R. P., Herrero, A., et al. 1996, A&A 305, 171 Reid, A. H. N., & Howarth, I. D. 1996, A&A 311, 616 Runacres, M. C., & Blomme, R. 1996, A&A 309, 544 Runacres, M. C., & Owocki, S. P. 2001, A&A 381, 1015 Sault, B., & Killeen, N. 1999, Miriad Users Guide (http://www.atnf.csiro.au/computing/software/miriad) Sciortino, S., Vaiana, G. S., Harnden, F. R., Jr., et al. 1990, ApJ 361, 621 Snow, T. P., Jr., York, D. G., & Welty, D. E. 1977, AJ 82, 113 Walborn, N. R. 1972, AJ 77, 312 Wendker, H. J. 1995, A&AS 109, 177 Wright, A. E., & Barlow, M. J. 1975, MNRAS 170, 41 [^1]: The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. [^2]: The JCMT is operated by the Joint Astronomy Centre in Hilo, Hawaii on behalf of the parent organizations Particle Physics and Astronomy Research Council in the United Kingdom, the National Research Council of Canada and The Netherlands Organization for Scientific Research. [^3]: The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
--- abstract: 'We have released an archive of all observational data of the VUV spectrometer [*Solar Ultraviolet Measurements of Emitted Radiation*]{} (SUMER) on SOHO that has been acquired until now. The operational phase started with ‘first light’ observations on 27 January 1996 and will end in 2014. Future data will be added to the archive when they become available. The archive consists of a set of raw data (Level 0) and a set of data that are processed and calibrated to the best knowledge we have today (Level 1). This communication describes step by step the data acquisition and processing that has been applied in an automated manner to build the archive. It summarizes the expertise and insights into the scientific use of SUMER spectra that has accumulated over the years. It also indicates possibilities for further enhancement of the data quality. With this article we intend to convey our own understanding of the instrument performance to the scientific community and to introduce the new, standard-FITS-format database.' author: - 'W. $^{1}$, D. $^{1}$, K. $^{1}$, U. $^{1}$, L. $^{1}$, D. $^{1}$, K. $^{2}$, P. $^{2}$' title: The SUMER Data in the SOHO Archive --- Introduction {#sec: Intro} ============ The [*Solar Ultraviolet Measurements of Emitted Radiation*]{} (SUMER; Wilhelm [[*et al.*]{}]{}, 1995) telescope and spectrometer has been taking ultraviolet spectra (50 nm to 161 nm) from the [*Solar and Heliospheric Observatory*]{} (SOHO) since its launch in December 1995. The complete dataset up to January 2013 has been reprocessed and is now available in FITS format from the SOHO archive. The motivation and basic idea behind this communication is to describe aspects of the data acquisition that are relevant to the data quality and details of the various steps and procedures needed to arrive at Level 1 (L1) data. The data reduction steps used in the archive correspond to the best knowledge of today. 
Even after so many years, the processing is still not perfect and at several stages compromises had to be made, which may explain why this task was not completed earlier. SUMER data has so far been accessible from archives that contain reformatted telemetry, [*i.e.*]{} raw, Level 0 (LZ) data that is provided either in FITS, FTS, or IDL restore format. Various correction and calibration procedures have to be applied in order to minimize the known instrumental effects in LZ data and to convert the incoming signal to physical quantities of the radiation. The basic knowledge acquired during ground testing and commissioning was continuously extended, confirmed, or improved over the years. This knowledge is documented in almost 1000 articles in refereed journals as indicated in Figure \[fig:publ\]. The publication rate clearly indicates that future work with SUMER data can be expected. It is our motivation to make sure that the accumulated knowledge will not be forgotten, and at the same time to provide ready-to-use spectra to the community for future work. Details of the reduction procedures were made publicly available on the instrument web sites. The latest update was in 2008.[^1] The procedures used to produce L1 data described here represent the present view of our instrument. A fair amount of this information is presented as a complete, reworked and self-contained document. We cannot exclude deviations from the results of data processing that was completed earlier. The differences are to our knowledge small and will in all likelihood not affect the validity of previous analyses. Therefore, this improved view of our instrument in no way compromises reduction work done with previous versions, but it does not claim to be final either. For this reason we have tried to make the description of each individual step as transparent as possible.
This may help the user not to treat the reduction as a black box, but to improve the results if future insights allow it. The application of the various correction and calibration procedures has to reverse the order in which the various instrument subsystems acted on the incoming signal. This applies in particular to stages of the signal processing in the detectors with specific shortcomings (procedures 2 to 6). Therefore the sequence must be executed in the described order:

1. Decompression
2. Deadtime correction
3. Odd-even pattern
4. Local-gain correction
5. Flatfield correction
6. Geometric distortion correction
7. Radiometric calibration

If, at a later time, an improvement in any one of these procedures is possible, then either the raw data can be reprocessed with the improved algorithm, or the existing correction must be re-tracked to the problematic procedure, which has to be replaced and re-run together with those following it in the sequence. All information needed for the re-tracking is documented in the FITS header of L1 data. In addition to these procedures that affect the pixel values themselves, we also mention procedures that are to be used for special analyses, but have not been applied to the archive data. These are procedures to compensate for the instrumental effect on the width of spectral lines, to compute the movement of the slit image as a parasitic effect of the grating focus mechanism, to estimate the stray-light level in off-limb spectra, and to give information on the wavelength calibration, including thermo-elastic deformation effects along the spectral dimension. Instrument Description {#sec: Instrument} ====================== The instrument is described in @Wilhelm95. Performance details are given in @Wilhelm97a and @Lemaire97. The concept of the instrument is also discussed and reviewed in the general context of VUV space instrumentation in @Wilhelm04. A survey of the most significant results is found in @Wilhelm07.
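As an aside, the fixed ordering and re-tracking logic of the correction sequence described in the previous section can be sketched as a simple pipeline. The function names below are placeholders of our own, not the SUMER software:

```python
# Sketch of the fixed LZ -> L1 correction order. The step bodies are
# placeholders; a real step would transform the image array. The history list
# mimics the re-tracking information kept in the L1 FITS header.
def make_step(name):
    def step(image, history):
        history.append(name)    # document which corrections were applied
        return image
    return step

PIPELINE = [make_step(n) for n in (
    "decompression", "deadtime", "odd-even", "local-gain",
    "flatfield", "geometry", "radiometry")]

def reduce_to_l1(image):
    history = []
    for step in PIPELINE:       # the order must never be changed
        image = step(image, history)
    return image, history
```

If one step is improved later, the recorded history tells the user where to re-enter the chain and which subsequent steps must be re-run.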
Here we emphasize those details that are relevant to the archive. The particular spectral range that is covered by SUMER comprises emission lines and continua of many elements. It includes the entire Lyman series of hydrogen and chromospheric emission from other atomic constituents, as well as emission lines useful for observations of transition-region or coronal phenomena. It turned out that forbidden transitions between high-lying levels of highly ionized iron and other heavy species can be used as proxies for X-ray radiation and tracers of processes during flare events [@Feldman00], thus supporting X-ray spectroscopy. Optical Design {#sec: Optics} -------------- The optical design of the instrument is shown in Figure \[fig:opt\] and discussed in @Wilhelm95. Many features hereof are relevant for instrumental effects and are important for the data reduction. The optical system is based on a normal-incidence off-axis telescope and a slit spectrometer in Wadsworth mount. Pointing is accomplished by two nested mechanisms that allow, for optical performance reasons, a spherical motion of the parabolic mirror around the changeable slit in the focal plane. The beam issued from the slit – entrance of the spectrometer – is collimated by an off-axis parabolic mirror. The collimated beam is seen by a spherical concave grating producing a stigmatic image of the slit that is dispersed in wavelength. A plane mirror in front of the grating allows us to change the angle of incidence and thus, to modify the setting of the instantaneous wavelength portion seen by the detectors in the focal plane of the grating. The effective focal length of the grating depends on the angle of incidence and thus on the wavelength setting. A grating focus mechanism is, therefore, needed to keep the spectral image on the detector always in focus. 
For this reason, important optical parameters are wavelength dependent, notably the magnification, the angular pixel size along the slit, and the dispersion defining the spectral pixel size. All these parameters are provided in the FITS header of the data files. The reduction of scattered light has been a strong requirement for the optical system and the surface quality of the optical components with chemical-vapor-deposited (CVD) silicon carbide (SiC) coated surfaces. Measurements of the surface roughness performed at GSFC (Saha and Leviton, 1993) have shown that the rms micro-roughness at 10 $\mu$m scaling length was 0.6 nm, which is well within the specification. Such measurements have been used to predict the scatter performance at FUV wavelengths. Mechanisms {#sec: Mechanisms} ---------- SUMER is equipped with seven mechanisms: the door mechanism, two mechanisms for pointing in azimuth and elevation, the slit changer, the telescope focus mechanism, the wavelength scan, and the grating focus. All mechanisms are actuated by stepper motors and have position encoders for monitoring purposes. The mechanisms are operated in open-loop mode, [*i.e.*]{}, the encoder positions are telemetered as housekeeping values, but not used to control the stepper motors. ### Pointing in Azimuth and Elevation {#pointing} With incremental single steps of $0.38''$ both in azimuth and elevation, the telescope could be pointed anywhere in the field of view of $64' \times 64'$ centred on the Sun. The azimuth drive was also used to scan regions of interest on the Sun and to compensate for the solar rotation in sit-and-stare applications. The nominal elevation pointing position is always related to the central pixel of the slit, irrespective of the location of the slit image on the detector.
The absolute pointing uncertainty of typically $10''$ is the combined result of thermoelastic effects on SOHO as well as in the instrument, of step losses in the mechanism, and of the parasitic shift of the slit image on the detector resulting from a misalignment of the grating focus mechanism (as described below). From time to time and for special cases the solar limb was used to ‘re-calibrate’ the reference position of the SUMER pointing mechanism (the zero position of the SUMER coordinate system). When step losses of the azimuth drive occurred in October 1996, it was decided to operate the driving motor with retaining power and in high-current mode. Only sporadic step losses occurred over the succeeding years in this mode. However, in April 2008, the problem re-appeared ex nihilo and has become worse since then. Raster scans could not reliably be completed anymore. With the help of the azimuth position encoder, pointing was still possible. The values of the azimuth and elevation encoders are given in the L1 image header in order to improve the knowledge of the pointing. We note, however, that the encoder reading for the housekeeping process is completed at a cadence of 15 s and is not synchronized with the science operation. With short exposure times it may happen that the encoder values in the first image header are not yet updated. ### Slit Focus and Slit Select {#slit} During the commissioning phase in early 1996, the slit focus mechanism was used to optimize the focus position of the telescope [[*cf.*]{}, @Lemaire97]. The setting has not been changed since then. The slit changer can choose between the four slits of size $4'' \times 300''$, $1'' \times 300''$, $1'' \times 120''$, and $0.3'' \times 120''$. Images of the short slits can be positioned in such a way that the top, the central, or the bottom section of the detector active area is illuminated; the ‘top’ and ‘bottom’ settings are called ‘asymmetric’ slit positions.
In bottom position, a baffle obscures the extreme pixels of the short slits (\#5 and \#8). The slit selection constrains the image format that is appropriate for the detector readout. Possible image formats including their telemetry load are listed in Table \[tab:formats\]. The low telemetry rate of $10.5 \times 10^3$ bits per second turned out to be a major limitation for fast data acquisition. Faster raster scans could, however, be completed by buffering spectra into the on-board memory up to a data volume of $\approx 5 \times 10^6$ bytes.

  ID   Spectral pixels   Spatial pixels   Format    TM load (s)
  ---- ----------------- ---------------- --------- ------------
  2    1024              360              1 byte    280.9
  3    1024              360              2 bytes   561.8
  4    1024              120              1 byte    93.7
  5    1024              120              2 bytes   187.3
  8    50                360              1 byte    13.8
  9    50                360              2 bytes   27.5
  10   50                120              1 byte    4.6
  11   50                120              2 bytes   9.2
  12   25                360              1 byte    6.9
  13   25                360              2 bytes   13.8
  14   25                120              1 byte    2.3
  15   25                120              2 bytes   4.6
  37   256               360              2 bytes   140.5
  38   512               360              1 byte    140.5
  39   512               360              2 bytes   280.9

### Wavelength and Grating Focus {#grating}

The optical system requires that the wavelength and grating focus mechanisms are always operated simultaneously. Because of a misalignment of the grating focus drive – a combination of the misalignment of the guiding rails of the grating focus mechanism, the rotational axis of the wavelength mechanism, and the grating optical axis – the position of the slit image on the detector is slightly offset whenever a new wavelength setting is commanded. The combined effect of this parasitic movement and the change of the angular pixel size is up to 25 pixels as shown in Figure \[fig:shift\] (see also Section 5.4). Since the readout window is fixed and does not follow the shift, dark pixels often appear in short-slit image formats.
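The TM-load values in Table \[tab:formats\] follow, up to small overheads, from the raw image size and the quoted $10.5 \times 10^3$ bit s$^{-1}$ telemetry rate. A sketch of the relation (our own illustration, not SUMER flight software):

```python
# Approximate telemetry load of an image format: raw pixel volume divided by
# the downlink rate (10.5e3 bit/s quoted in the text). Real loads include
# small packetization overheads, so values agree with the table only to
# about 0.1 s.
def tm_load_seconds(spectral_px, spatial_px, bytes_per_px, rate_bps=10.5e3):
    return spectral_px * spatial_px * bytes_per_px * 8 / rate_bps
```

For example, the full-detector 1-byte format (ID 2) gives $1024 \times 360 \times 8 / 10500 \approx 281$ s, consistent with the table.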
When only partial spectral windows are transmitted, the line of interest should in an ideal case be centred in the telemetered window. This is normally not the case, and one should keep in mind that the repeatability of this mechanism is limited (see Section 5.4.1). At short wavelengths, one actuator step corresponds to seven spectral pixels, while several steps are needed for a shift of one pixel at the other extreme. Therefore, problems with the wavelength setting occur more often in the short wavelength range. Detectors {#sec: XDL} --------- SUMER is equipped with two photon-counting detector systems. Details of the detectors and their performance are given in @Siegmund94. A triple stack of multichannel plates (MCP) carries the photocathode deposited on the front face of the first MCP, and a cross delay-line anode converts photons to an electronic pulse. The travel time of the pulse through the crossed delay-lines determines the location of the photon event, and the image is constructed by a time-to-digital converter (TDC) that creates a 1024 $\times$ 360 array of photon events, which we, for simplicity but somewhat inaccurately, call pixels. It is this analog-to-digital conversion and, in particular, the linearity of this ADC that is, in part, responsible for some of the artifacts in the digital image, most notably the odd-even pattern, as described below. High overall count rates result in deadtime effects, since every individual event has to be processed by the post-anode digital electronics, and bright lines will lead to local gain depression of the MCPs. The adverse effects of these shortcomings are balanced or even overcompensated by a unique feature of photon-counting systems: their dark signal is almost negligible, so that deep exposures can be made with very low signal. In other words, the dynamic range can be extended over many orders of magnitude. A sample spectrum around 118 nm is shown in Figure \[fig:xdl\] to illustrate the layout of the detector array.
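The deadtime effect mentioned above is commonly modelled with the standard non-paralyzable counting formula. The sketch below is illustrative only: the value of $\tau$ is hypothetical, and SUMER's actual deadtime correction may use a different functional form:

```python
# Non-paralyzable deadtime model: while an event is being processed
# (tau seconds), further events are lost. Inverting
#   n_meas = n_true / (1 + n_true * tau)
# gives the corrected (true) rate from the measured one.
def deadtime_correct(measured_rate, tau):
    if measured_rate * tau >= 1.0:
        raise ValueError("measured rate exceeds the model's saturation limit")
    return measured_rate / (1.0 - measured_rate * tau)

# With a hypothetical tau of 1 microsecond, a measured 1e5 counts/s
# corresponds to a true rate about 11 % higher.
```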
Only the most prominent lines are indicated [[*cf.*]{}, @Curdt01 for a comprehensive line identification]. The slit image covers only $\approx$300 out of the 360 spatial pixels. The central spectral pixels ($\approx$280 to $\approx$770) represent those sections where the KBr photocathode is deposited on the bare MCP. These pixels have a much higher sensitivity, in particular in the spectral range from 90 nm to 130 nm (see Section 6 for details). Some pixels at the bare-to-KBr transition are difficult to interpret, since this is not a sharp boundary. The extreme 50 pixels on both sides are covered by a grid that serves as a 1:10 attenuator. However, as a side effect the attenuation exerts a modulation on the line profile, which makes it difficult to interpret these data. SUMER is also equipped with a rear slit camera (RSC). Despite misalignment problems, which reduced the scientific value of this device, the RSC could be used to verify the pointing mechanism by locating sunspots and by observing the solar limb. In the archive, RSC images are only included as raw data. Instrument Operation {#sec: Operation} -------------------- The principal data acquisition features of SUMER are described in detail in @Wilhelm95. The instrument operation applied during the many years of observation is summarized hereafter. During several years (1996 to 2003), the 24 h of SOHO throughput operation was divided into a long pass with continuous real-time access during at least 8 h and two or three short 1 h real-time passes. Later, the real-time coverage was reduced and became more irregular. The round-trip time for real-time commands is about 10 s with an uplink rate of three bytes per second. The on-board SUMER memory capability was 64 elementary or macro commands (formatted as User Defined Programme, see the following UDP section). So, at any time during a real-time pass we were able to load instantaneous or time-tagged activities.
The primary ground station to operate the instrument is located at the GSFC/EOF (Experiment Operation Facility at the Goddard Space Flight Center). During a few campaigns remote terminals at the IAS/MEDOC (Multi-Experiment Data and Operation Centre at the Institut d’Astrophysique Spatiale) could be connected to this station. Since 2011 we can also operate our instrument remotely from MPS. The operations were completed following a timeline comprising a time span of $\approx$24 h. The definition of the timeline was accomplished in several steps. ### Target Selection and Definition of the Type of Observation Target selection was done by looking at the status of the Sun with the image-tool routine. Image-tool is the user interface of a pointing tool that is based on a database of solar images obtained from ground-based and space observatories, [*e.g.*]{} SOHO/[Extreme ultraviolet Imaging Telescope]{} (EIT; Delaboudinière [[*et al.*]{}]{}, 1995), and allows fine targeting for an observation to come hours or days in advance.[^2] The choice of the feature (coordinates and times) to be observed was made by the SUMER observer or in coordination with other SOHO (or other ground-based or space observatory) instruments. ### SUMER Command Language, Predefined and User-Defined Programmes A dedicated command language has been used to define the structure and contents of observing sequences. A library of high-level functions specific to SUMER and elements to build loops, branching points, etc. are the basic features of the SUMER command language (SCL). Various mapping modes and other elements of the SCL library are described in @Wilhelm95. The SCL source code to define an observing sequence is written in plain English.
It sets the telescope pointing coordinates, the selected entrance slit of the spectrometer, the reference pixel on the detector associated with a wavelength, the wavelength and the associated windows on the detector, the exposure times, the number of spectra to be collected, and the detector voltage handling. A set of 30 differently structured, hard-wired observing sequences, so-called Predefined Observational Programmes (POPs), was already included in the SUMER flight software. Some had a simple linear structure, but more complex ones with loops and branching points were also available. In addition, new User-Defined Programmes (UDPs), also written in SCL code, could be added to the available POPs after passing different stages of a validation process. First, the syntax was checked, then the code was compiled and converted to a token code as input to the TKI (Token Code Interpreter) programme, a tool that is also part of the on-board software. Finally, the token code of each new UDP had to pass an instrument simulator before the sequence was added to the pool of more than 1000 validated UDPs. In addition to the POPs, the token code of 16 different UDPs could be held on board at the same time. Unfortunately, a flaw in the detector communication occasionally terminated the full execution of a POP. Mitigating this problem required modifying the hard-wired structure of the code, and this could only be done by rewriting the sequence as a UDP. The availability of such user-defined sequences demonstrated the enormous flexibility of the software design and turned out to be highly appropriate for the operation of SUMER. Most of the time, SUMER was operated in this mode. Any spectrum in the archive is linked to its ‘mother’ UDP, which is kept in the UDP database ([*cf.*]{}, Section 2.6).
### Generation and Uploading of Commands

The prepared observation programmes are inserted into the SUMER Planning Tool, which can generate a file of time-tagged execution commands to be sent to the SUMER on-board computer through a SOHO channel opened by the SOHO/EOF ground system. At the same time, an activity plan can be produced and loaded into the SOHO activity database, documenting the plan to be executed.

### Monitoring and Preprocessing of Real-Time Data

A few seconds after the observation, the real-time data are collected and displayed on ground computers (EOF, MEDOC, MPS). A few minutes after the data are taken, they are pre-processed using a quick-look facility to follow the progress of the observations. This has been very useful for reacting in near real time, [*e.g.*]{} to the selection of sunspot coordinates, the solar limb position (using the rear slit camera), or the selection of any solar feature within a raster-scan image. In parallel, housekeeping data are received in real time and are used to follow the instrument parameters and to detect any anomaly that requires an operator reaction, either to check the accomplishment of the programme and repeat the observation as needed or to optimize parameters for a subsequent run of that programme.

Ground Support Facilities {#sec: EGSE}
-------------------------

The ground support facilities were built as a dual Electrical Ground Support Equipment (EGSE): a scientific EGSE based on a VMS operating system (here referred to as the operation station) and a PC-based maintenance EGSE. The maintenance EGSE is used for real-time commanding and real-time visualization of scientific and housekeeping data necessary to check the health of the instrument and the running of the observational programme.
The operation station was used to complete the scientific observations: target selection, creation of UDPs, insertion of individual commands and programmes into the timeline through the Planning Tool, sending the commands, reception of the telemetry, pre-processing the raw scientific data in the Quick-Look facility, and formatting the zero-level FITS (FTS) data to feed the preliminary SOHO archive. The dual EGSE was duplicated at the IAS/MEDOC centre.

UDP Database {#sec: UDP}
------------

The source code of all used UDPs is accessible on the MPS SUMER archive web page. These files are not needed directly for the FITS formatting process. Information about which programme was used to acquire a dataset is included in the FITS image header (see Section \[sec: Aux\]).

The Archive {#sec: Archive intro}
-----------

The SUMER archive is part of the overall SOHO archive, which combines the archives created by all SOHO instruments. The main SOHO archive is based at NASA/GSFC, while copies are maintained at ESA and at IAS/MEDOC. The SUMER LZ archive can be accessed through various web pages.[^3]

Data Resources {#sec: Data}
==============

This chapter describes which data sources are included in the processing from raw TM data to the final FITS product. It further describes the various additional data sources that are processed and how this information is added to the FITS image header. A flow diagram showing the data sources and processing steps is depicted in Figure \[fig:dataflow\].

TM Processing {#sec: TM processing}
-------------

The SOHO telemetry data are distributed in files which contain the data of one day (see SOHO Interface Control Document[^4]). For SUMER there are three different kinds of files: standard housekeeping data called HK0, and science low- and high-rate files. The HK0 data are extracted from the TM files and written, also on a daily basis, into binary files. This is more or less a copy process.
Processing the science data needs considerably more effort, since the SUMER image data packets are interlaced with science housekeeping packets ([*e.g.*]{} HK255; see SUMER Operations Guide[^5], Chapter 5, for more detailed information). During processing the various packets are extracted and, depending on their type, sorted into binary files for HK and images. The produced binary HK0, science HK, and image files form the basis for all further processing and SUMER data products.

Science Data {#sec: Science data}
------------

The main sources of SUMER data are the SOHO/SUMER telemetry (TM) files. These files consist of a header, the TM packets, information about telemetry gaps, and information about the transmission quality of the packets, as listed below. The gap and quality information is added to the SUMER image header in the flags QAC and MDU. A set QAC flag means that an error occurred during transmission. A set MDU flag means that TM packets are missing and the image contains fill data at these positions. The fill value is chosen as the maximum value in the image. For traceability of the data processing, the name of the TM source file is added to the FITS header. The corresponding FITS keywords are:

  Keyword    Description
  ---------- ---------------------------------------
  XSSMDU     Missing data in image 0=no, 1=yes
  XSSQAC     Quality of image data 0=OK, 1=NOTOK
  XSSFID     File ID from TM file catalog
  XSSFPTR    Pointer to image position in bin file
  XSCDID     CD ID of TM file
  XSSEQID    CD sequence ID of TM file
  XSTMFILE   TM filename without ext

Housekeeping Data {#sec: HK}
-----------------

The SUMER image header already contains a snapshot of most housekeeping values at the moment the image is taken. However, some significant values are still missing, such as temperatures of the instrument and the encoder positions of the mechanisms. These values are sampled every 15 s to 45 s and are sent asynchronously via the housekeeping channel HK0.
The HK0 data are read and correlated to the start time of an exposure plus half of the exposure time, so that they are accurate mid-way through the exposure. This approach is more accurate for exposure times longer than 1 min because of the sample rate of the HK0 data. The maximum delay for the HK0 values is 300 s. This information is added to the FITS header.

  Keyword    Description
  ---------- --------------------------------------------------
  T3TELE     Telescope (MC2) temperature in degree Celsius
  T3REAR     SUMER rear (MC3) temperature in degree Celsius
  T3FRONT    SUMER front (MC4) temperature in degree Celsius
  T3SPACER   SUMER spacer (MC6) temperature in degree Celsius
  MC2ENC     SUMER MC2 (azimuth) encoder position
  MC3ENC     SUMER MC3 (elevation) position
  MC4ENC     SUMER MC4 (slit select) position
  MC6ENC     SUMER MC6 (scan) encoder position
  MC8ENC     SUMER MC8 (grating) encoder position
  HK0TIME    Time stamp of HK0 record

Auxiliary Data {#sec: Aux}
--------------

The information about UDP name, campaign, etc. is extracted from the Oracle planning database and from commanding log files. These data are formatted into FITS files (tables). This intermediate extraction step is done because the SUMER planning database is not globally accessible, whereas the produced FITS files can be distributed easily. These files will be put into the SSWDB area so that the information can be accessed with simple FITS read programmes ([*e.g.*]{} [**fits\_read**]{}). The task of correlating the information by time is done during processing (LZ preparation). The routine [**update\_fits**]{} completes the time correlation and adds the information to the FITS header. This routine calls the routines [**get\_fitsudpinfo**]{} and [**get\_fitscmpinfo**]{}, which read the information from the auxiliary FITS tables.
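Both time-correlation steps above (matching HK0 records to mid-exposure, and the correlation of auxiliary tables with the observation time) amount to a nearest-record lookup in time. A minimal Python sketch follows; the function name and the cut-off handling are illustrative only, since the actual processing is done by IDL routines:

```python
import numpy as np

def pick_hk0_record(hk_times, exp_start, exp_time, max_delay=300.0):
    """Return the index of the HK0 record closest to mid-exposure.

    hk_times : array of HK0 time stamps in seconds (sorted).
    exp_start, exp_time : exposure start time and duration in seconds.
    Records farther than max_delay seconds from mid-exposure are
    rejected (None), mirroring the 300 s maximum delay quoted above.
    """
    hk_times = np.asarray(hk_times, dtype=float)
    target = exp_start + 0.5 * exp_time          # mid-exposure time
    idx = int(np.argmin(np.abs(hk_times - target)))
    if abs(hk_times[idx] - target) > max_delay:
        return None
    return idx
```

For a 20 s exposure starting at t = 55 s and HK0 samples at 0, 30, 60, 90 s, the record at 60 s (closest to the 65 s mid-exposure time) would be selected.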
The information/keywords added to the header by [**update\_fits**]{} are:

  Keyword     Description
  ----------- ----------------------------------
  CMP\_NAME   Name of campaign observation
  CMP\_ID     Campaign number
  STUDY\_ID   Study number (database ID)
  STUDY\_NM   Study name
  OBJECT      Target
  SCIENTIST   Scientist responsible for POP/UDP
  OBS\_PROG   Name of scientist programme
  PROG\_NM    Name of observing programme

The spacecraft attitude information is distributed as FITS files. Using the IDL routines [**get\_sc\_att**]{} and [**get\_sc\_point**]{} from the Solar Software tree (SSW), the attitude information can be read and added to the FITS header. The solar angles P0 and B0 are read with the routine [**pb0r**]{} and also added to the header.

  Keyword     Description
  ----------- ----------------------
  SOLAR\_P0   Solar angle P0 / deg
  SOLAR\_B0   Solar angle B0 / deg

Geocentric and heliocentric information is added from data read with the function [**get\_orbit**]{}.

Data Reduction Steps {#sec: Reduction}
====================

Details of the data reduction steps that were used to prepare L1 data are described here. The order of the various data reduction steps follows the rationale explained before.

Decompression {#sec: decompression}
-------------

In most cases, the accumulated counts of the 16-bit data array have been compressed on board to 1-byte integers. The by far most often applied ‘Method 5’ (quasi-logarithmic byte scale) used an algorithm that mapped the dynamic range found in the image to a logarithmic lookup table. For numbers from 0 to 107 the result of the lookup table is a one-to-one copy of the input value. Thus all values with low count rates are preserved losslessly, and the compression can be reversed by a decompression that uses information contained in the raw image header. Data in LZ format, as well as in FITS and FTS format of existing archives, are already decompressed during creation if methods between 5 and 10 were employed.
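The idea of the quasi-logarithmic byte scale can be illustrated as follows. This is a sketch in the spirit of ‘Method 5’, not the actual on-board table: codes 0 to 107 are an identity mapping (so low count values survive losslessly), while the remaining codes cover the rest of the 16-bit range on a logarithmic grid. The exact table used by the instrument differs in detail and is reconstructed from the raw image header during decompression.

```python
import numpy as np

# Identity region of the byte scale: codes 0..107 map to themselves.
LINEAR_MAX = 107
# Codes 108..255 span LINEAR_MAX..65535 logarithmically (illustrative).
codes = np.arange(LINEAR_MAX + 1, 256)
log_values = np.round(
    LINEAR_MAX * np.exp((codes - LINEAR_MAX)
                        * np.log(65535.0 / LINEAR_MAX)
                        / (255 - LINEAR_MAX))
).astype(np.int64)
DECODE = np.concatenate([np.arange(LINEAR_MAX + 1), log_values])

def compress(counts):
    """Map 16-bit counts to 1-byte codes (nearest table entry)."""
    counts = np.atleast_1d(np.asarray(counts, dtype=np.int64))
    return np.abs(DECODE[None, :] - counts[:, None]).argmin(axis=1)

def decompress(byte_codes):
    """Reverse the byte scale via the lookup table."""
    return DECODE[np.asarray(byte_codes)]
```

Round-tripping any value from 0 to 107 through `compress` and `decompress` reproduces it exactly; larger values are recovered only up to the logarithmic quantization step.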
Data that were compressed by more complex compression methods ([*e.g.*]{} ‘moment calculation’) are only available in LZ format, since decompression is not possible.

Reversion {#sec: Reversion}
---------

The pixel addresses of the SUMER detectors are such that the highest wavelength is on pixel 0, the lowest on pixel 1023; the wavelength stated in the raw header refers to the reference pixel. Therefore, wavelengths descend from left to right in reformatted raw data. For compatibility with the SOHO conventions and the subsequent image correction routines, the spectral direction of the images in LZ format of this archive and in FITS or FTS format has already been reversed (such that wavelength increases to the right).

Flatfield Correction and Removal of Digitization Nonlinearity {#sec: FF}
-------------------------------------------------------------

### Image Features that Need a Correction by a Flatfield Routine

Detector anode and photocathode both show effects that are specific to each detector and are relevant for the data reduction. The flatfield correction of SUMER images generally corrects small-scale structures introduced by the detectors. Larger structures, like the overall response of the different photocathode areas, are treated by the radiometric calibration routine. The small-scale structures are introduced by the spatial inhomogeneity of the channel-plate response and the non-linearity of the analog-to-digital converters of the detector electronics. Another small-scale structure that may be present in SUMER images is local gain depression, which, however, must be treated by another routine. Inhomogeneity of the micro-channel plate response is mainly due to the hexagonal pattern of the microfiber bundles and the relative orientation of these bundles in a stack of three microchannel plates.
Depending on this relative orientation, a complex moiré pattern of the response is produced, which is very distinct in detector A and much less pronounced in detector B. If a clear hexagonal (‘chicken wire’) pattern is visible in the image, it is produced by the lowest channel plate (the one closest to the anode). The smaller structures are produced by the superposition of the fiber bundles of the three plates. In addition to the moiré pattern there may be dead pores in one of the channel plates, which lead to dark spots in the image. The size of the dark spot depends on which of the plates has the dead pore. Because the charge spreads when transferred from one plate to the next, more pores are blacked out in the succeeding plate, and thus the dark spot is larger when the dead pore is in the first plate (the one farthest from the anode). The further processing of the signal to create a digital image out of the analog signal by the time-to-digital converter (TDC) leads to additional small-scale artifacts in the image data. A non-linearity of the analog-to-digital converter (ADC) introduces a difference in the response between two succeeding rows of the image. This can be seen in the very distinctive alternating response of the odd and even rows of the image. The ADC non-linearity causes the signal in one row to be about 9.5% higher than average, while in the adjacent row it is about 9.5% lower, thus making a difference of about 19% between adjacent rows. In the following, we will call this the ‘odd-even pattern’. Note that this pattern only exists in the rows of the image, not in the columns, where it has been avoided internally by the detector electronics using an unsharpening technique (‘dithering’). This effect, which is present in both detectors, is also effectively averaged out if binning over an even number of pixels is applied along the slit direction.
The non-linearity of the time-to-digital converter of the post-anode digital electronics is found to be constant in time and well characterized for both detectors. It can rather easily be removed. However, during the period from June 2005 to November 2006, when a flaw in the address decoder of detector A developed that rapidly progressed and finally led to the destruction of the TDC unit, the fixed pattern of this chain increased significantly. During this time period, the characteristic of the TDC was regularly monitored, so that these data are still scientifically sound. The best results were obtained by separating both effects: in a first step the odd-even pattern is compensated, and in a second step the residual non-uniformities of the pixel array are removed.

### Producing Flatfield Correction Data

There is no flatfield illumination of the SUMER detectors on board. There are, however, alternative ways of producing quasi-flat illumination of the SUMER detectors that can be used to extract the small-scale features and to produce a data array that compensates these artifacts. In order to produce a quasi-flat illumination of the detectors on board, an observing sequence has been written that puts the SUMER spectrograph into a state of maximal defocusing at the wavelength of 88 nm for a several-hour-long observation of a preselected quiet-Sun (QS) area at the wavelength of the Lyman continuum. At this wavelength the solar spectrum is devoid of strong lines, and a quiet solar region avoids bright features with strong variability in the field of view. The defocusing smears any small feature to a size of at least 16 pixels. Thus, the resulting image of this deep exposure – an example is shown in Figure \[fig:ff\] – can be used to extract the small-scale features of the detectors. An on-board routine extracts from this exposure the small-scale ‘fixed pattern’ by applying a median filter (of size 16 pixels) to the data and dividing by the filtered image.
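The extraction step just described can be sketched in a few lines of Python (illustrative only; the actual on-board routine and the ground-based `sum_flatfield.pro` are separate implementations). Whether the stored correction array holds the pattern itself or its reciprocal is an implementation detail of the real software; here the pattern is extracted directly and removed by division, which is equivalent to multiplying by a stored reciprocal array:

```python
import numpy as np
from scipy.ndimage import median_filter

def extract_fixed_pattern(deep_exposure, size=16):
    """Mimic the on-board extraction: divide the defocused deep
    exposure by its median-filtered version (filter size 16 pixels),
    leaving only the small-scale 'fixed pattern' with mean ~1."""
    smooth = median_filter(deep_exposure.astype(float), size=size)
    return deep_exposure / smooth

def flatfield_correct(image, pattern):
    """Remove the extracted pattern from an image by division."""
    return image / pattern
```

On a synthetic frame with a uniform background, any localized small-scale deviation survives in the extracted pattern while the smooth background divides out to 1.0.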
The resulting image (the flatfield array) is a matrix with values between 0.5 and 1.5, the average value being 1.0. It is stored on board for processing of images and sent to ground by telemetry for further application to any data on the ground. The flatfield correction array is applied in a simple multiplicative way. The flatfield correction can be performed on the ground (or, conversely, the on-board flatfield corrections can be undone) by using the function [**sum\_flatfield.pro**]{} available in the Solar Software tree. Flatfield arrays were produced frequently until 1998, but later the occasions for making flatfield exposures were greatly reduced in an effort to limit the total exposure of the detectors to high count rates, which would lead to an ever-increasing high voltage needed to compensate for the resulting gain loss. As the high-voltage power supply units were reaching their upper limits, on some occasions only short flatfield exposures were taken, which could be used to find any changes in the flatfield structures with respect to the latest deep exposures. Another method to determine the ‘fixed pattern’ of the detectors can be employed when a large enough data set is available: the small-scale features introduced by the detector are extracted, in a similar way as above, from the average of all images. In general, this average of the images is not as deep an exposure as the ‘normal’ flatfield exposure of three hours in the Lyman continuum, but it has the certain advantage of being as close in time as possible to the actual observations to which it will be applied. This is useful in particular cases, because not all of the ‘fixed patterns’ are really fixed in time.

### Changes of the Fixed Pattern with Time

It was found very early, in 1996, by comparing several flatfield correction arrays of detector A, that the flatfield patterns of the detectors change slightly with the usage time of the detector.
This can be detected by correlating different flatfield arrays with each other. The correlation can be maximized by a shift of the flatfield pattern, which is mostly less than one pixel (in $X$- and $Y$-directions), but can amount to several pixels between different flatfields. There may be several reasons for this shift of the small-scale pattern: One reason lies in the ‘scrubbing’ of channel plates, i.e., extracting charge from the MCPs by heavy usage, and the resulting gain loss of the lowest of the three channel plates. Since the channels are inclined with respect to the anode, the gain loss may cause a shift of the charge-cloud centroid that is located on the anode by the position encoding. Such an effect was clearly detected when a strong spectral line was observed for a long time at the same location on the detector. Then, the persisting gain loss at this location resulted in a shift of the charge-cloud centroid towards the area with higher gain. Fortunately, the SUMER wavelength scan mechanism allows the position of spectral lines to be placed anywhere on the detector, and this resulted, on average, in a more or less uniform gain loss across the active area. When such a uniform gain loss was reached, it could be compensated by increasing the high voltage of the MCPs. This gain calibration has been done frequently. But the change of high voltage may have an effect on the electric field between the channel plate and the anode. It may also have an effect on the position encoding if it affects the travel time of pulses in the cross-delay lines of the position-encoding system. Both may lead to a shift of the image pattern. Therefore, new flatfield images were acquired regularly, roughly every month, after the high-voltage setting had been newly adjusted during a gain calibration. Later, when the observations of the solar disk were reduced to save lifetime of the detectors, the period between flatfield acquisitions was increased.
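The correlation-maximizing shift described above can be illustrated with a crude integer-pixel search (the real analysis works at sub-pixel level; this sketch and its function name are assumptions, not the archive code):

```python
import numpy as np

def best_shift(flat_ref, flat_new, max_shift=3):
    """Find the integer (dy, dx) shift of flat_new that maximizes its
    correlation with flat_ref -- a coarse stand-in for the sub-pixel
    registration of flatfield patterns described in the text."""
    best, best_cc = (0, 0), -np.inf
    a = flat_ref - flat_ref.mean()
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            b = np.roll(flat_new, (dy, dx), axis=(0, 1))
            cc = np.sum(a * (b - b.mean()))
            if cc > best_cc:
                best_cc, best = cc, (dy, dx)
    return best
```

Applied to two flatfield arrays, the returned offset estimates how far the small-scale pattern has migrated between the two acquisitions.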
Since the gain loss occurs more or less constantly during usage of the detector and the compensation can only be done stepwise, there is, in principle, a possible small shift between the data and the flatfield pattern. But in general the shift of the flatfield pattern is not uniform. A uniform scrubbing of the detector cannot be achieved, and therefore a differential (or local) scrubbing, which is due to the non-uniform illumination of the detector during its use, results in a shift pattern that is not uniform: depending on which part of the detector area has been used more, the shift is larger in these areas. In addition, as mentioned above, the ‘fixed pattern’ results from a superposition of the patterns of the individual channel plates. The scrubbing, however, takes place mostly in the lowest channel plate (the one closest to the anode), from which the largest amount of charge has been drawn. Thus features arising from different channel plates may suffer different displacements in the image. However, this intricacy may be difficult to detect, since the differences are probably much smaller than one pixel. There is, however, one very strong fixed pattern in the flatfield data that never changes. It is the non-linearity of the detector ADC in the position-encoding electronics, which causes the difference in responsivity of odd and even rows. This odd-even pattern is always present along the slit direction, and it has been found to be very stable throughout the time span of all flatfield images we have.

### Application of the Flatfield Correction

The general flatfield routine [**sum\_flatfield.pro**]{} applies the flatfield correction array to the image data in the same way as the on-board flatfield routine. It corrects all fixed patterns by multiplication with the flatfield correction array. It does not take into account any changes of the fixed pattern with time.
Thus, it corrects the odd-even pattern perfectly, and much of the other channel-plate non-uniformities. By selecting the flatfield array dated closest to the date of observation, the shift between the flatfield data and the corrected data can be minimized. This is the simplest approach, and for most purposes the results are satisfactory. To improve the flatfield correction, we can take the shift of the fixed pattern into account. In this case, the odd-even pattern must be corrected first. For this purpose we have extracted the odd-even pattern from the flatfield raw data and produced new flatfield arrays that have the odd-even pattern removed. This was done in the following way: Separately for detectors A and B, the average odd-even pattern was determined from the row sums of all available flatfield exposure raw images. From the row sums, the odd-even pattern was extracted by subtraction of the two-pixel average. Since the pattern is a non-linearity of the ADC, it must be the same all along the slit. Thus, the average along the row sum was taken to determine a single value each for the upper and lower deviation from the average. These two values were used to construct an artificial image array of the odd-even pattern of 1024 by 360 pixels. This array can thereafter be used to remove the odd-even pattern from images by multiplication (in the same way as the usual flatfield function). It has also been applied to all the flatfield raw images, in order to remove the odd-even pattern from them and to produce the new flatfield correction arrays without the odd-even pattern. In order to apply the shifted flatfield correction to SUMER data, there are now odd-even arrays and flatfield arrays available to apply these corrections sequentially (see the SUMER Data Cookbook for details about how to use these files).
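The construction of the artificial odd-even array described above can be sketched as follows (an illustrative Python version of the recipe, not the actual processing code; as with the flatfield arrays, whether the stored array holds the pattern or its reciprocal is left open here, and removal by division is equivalent to the multiplicative application mentioned in the text):

```python
import numpy as np

def odd_even_pattern(flat_raw_images, shape=(360, 1024)):
    """Build an artificial odd-even correction array: row sums over
    all flatfield raw images, removal of the two-pixel (row-pair)
    average, then one mean deviation each for even and odd rows,
    expanded to a full-size array (slit direction along axis 0)."""
    row_sums = np.sum([img.sum(axis=1) for img in flat_raw_images],
                      axis=0)
    # two-pixel average over adjacent row pairs, repeated per row
    pair_avg = np.repeat((row_sums[0::2] + row_sums[1::2]) / 2.0, 2)
    deviation = row_sums / pair_avg      # ~1.095 / ~0.905 per text
    even_fac = deviation[0::2].mean()
    odd_fac = deviation[1::2].mean()
    pattern = np.empty(shape)
    pattern[0::2, :] = even_fac          # same value all along a row
    pattern[1::2, :] = odd_fac
    return pattern
```

Dividing an image by this array removes the alternating row response while leaving the row-pair averages untouched.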
Geometric Distortion Correction {#sec: Geo}
-------------------------------

The digital image created by the detector is not a perfectly rectangular array but, due to the analog image acquisition method using pulse travel times and time-to-digital conversion, is distorted in a cushion shape, resulting in a non-linear spatial and spectral scale. Most of this distortion is due to inhomogeneity in the anode causing small differences in the propagation speed of pulses in the delay lines. This leads to local variations of the plate scale of the detector (Wilkinson [[*et al.*]{}]{}, 2001). The distortion correction for both detectors is based on images of a rectangular grid that was placed in front of the detectors before integration into the SUMER instrument. Figure \[fig:grid\] shows one of these images, which have been used to determine the correction matrices. In addition to the geometric distortion, the spectral lines are inclined with respect to the detector vertical lines due to a discrepancy between the orientations of the grating and the detector. For precise wavelength measurements a highly accurate linearization of the spectral plate scale is necessary. Thus, the image distortions need a geometric correction such that the curvature of spectral lines is removed and the wavelength scale is made linear. A combination of the laboratory images and data from cool solar emission lines, acquired for this particular purpose, has been used to extract the information needed for creating the geometric correction arrays (Moran, 2002). The arrays consist of lookup tables with pixel shift vectors that can be applied, using a bilinear interpolation algorithm, to correct the distorted frames with a standard uncertainty of 0.11 pixel in the spectral and 0.25 pixel in the spatial direction. This algorithm incorporates resizing of the pixels while maintaining the radiometric accuracy. As a side effect of this treatment, empty values will be produced in some pixels near the edges.
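The application of such shift lookup tables with bilinear interpolation can be sketched as below. This is a generic destretching sketch, not the SUMER correction code, and it does not include the pixel-resizing step that preserves the radiometric accuracy:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def undistort(image, shift_y, shift_x):
    """Correct a distorted frame from per-pixel shift lookup tables
    (same shape as the image): for each output pixel, sample the
    distorted frame at the shifted position using bilinear
    interpolation (order=1)."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    coords = np.array([yy + shift_y, xx + shift_x])
    # mode='constant', cval=0 yields the empty edge pixels
    # mentioned in the text
    return map_coordinates(image.astype(float), coords,
                           order=1, mode='constant', cval=0.0)
```

With all shifts zero the frame is returned unchanged; a uniform shift of one pixel in $x$ moves the image by one column and leaves an empty (zero-valued) last column.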
These empty values led to an adverse effect in those cases where adjacent windows were selected, since the empty pixels may cut out important parts of the line of interest. Therefore we had to concatenate individually read-out windows to produce full-size formats. These ‘inserted’ full image formats led to a significant blow-up of the data volume of the archive. Since the distortion correction is based on grid images taken before integration into the instrument, a residual distortion may still remain for edge pixels.

Dead-Time and Local-Gain Effects
--------------------------------

The *total* count rate of a detector during one exposure may be so high that individual photon events are not detected correctly, and electronic dead-time correction factors must be applied. The dead-time effect of the detector electronics is not negligible whenever the total count rate on the detector is above 50000 s$^{-1}$. The [**deadtime-corr.pro**]{} routine takes care of this effect and corrects the radiometric calibration using the total count rate as input. Note that the total count rate on the detector cannot be inferred from subformat images. Instead, this information is taken from the detector housekeeping channel. Due to the cyclic readout of detector housekeeping data – a process that is asynchronous to the science observation – the actual incoming event rate may not be updated fast enough in the header data when a change of photon flux happened less than a minute before the exposure. The *local* count rate in a spectral line may also be high, such that the local gain in this part of the detector channel plate is reduced and pulses may fall below the detection threshold. This reduction in responsivity can be corrected by the local-gain depression correction as long as the incoming photon flux is moderate and the pixel count rate is below $\approx$20 s$^{-1}$. For higher pixel count rates, the uncertainty increases dramatically.
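The actual correction factors used by `deadtime-corr.pro` are not given here; the general idea of a dead-time correction can, however, be illustrated with the standard non-paralyzable model, in which the true rate is recovered as $n_{\rm true} = n_{\rm meas}/(1 - n_{\rm meas}\,\tau)$. Both the model choice and the value of the dead-time constant `tau` below are placeholders for illustration only:

```python
def deadtime_correct(measured_rate, tau=2.0e-6):
    """Illustrative non-paralyzable dead-time correction:
    n_true = n_meas / (1 - n_meas * tau).
    tau is a placeholder dead-time constant (seconds per event);
    the SUMER-specific factors differ."""
    loss = 1.0 - measured_rate * tau
    if loss <= 0.0:
        raise ValueError("measured rate exceeds dead-time limit")
    return measured_rate / loss
```

For a measured total rate of 50000 s$^{-1}$ and the placeholder $\tau = 2\,\mu$s, the model would raise the rate by about 11%, showing why the effect becomes non-negligible at this level.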
In case of severe overexposure, the number of valid counts decreases instead of increasing. Although the scientific use of such spectra is questionable, they appear unflagged in the archive.

Radiometric Calibration {#sec: Radio}
=======================

In this section, the radiometric calibration of the spectrometer SUMER and related aspects are outlined. The spectroradiometry employed is covered in many publications, which are summarized here with reference to the most relevant original articles. The solar electromagnetic radiation—of which SUMER can observe the wavelength range from 46.5 nm to 161.0 nm—can be characterized by the total solar irradiance (TSI) and its spectral distribution, the solar spectral irradiance (SSI), as a function of the wavelength, $\lambda$ (or, alternatively, the frequency, $\nu$). Quantitative information on these quantities can only be obtained with calibrated instrumentation, [*i.e.*]{} the observations must be compared to laboratory-based standards, thereby providing a baseline for short-term and long-term investigations of any solar variability ([*cf.*]{} Quinn and Fröhlich, 1999; Lean, 2000; Willson and Mordvinov, 2003; Wilhelm, 2009, 2010). The physical quantities have to be given in units of the International System of Units (SI: Le système international d’unités) (BIPM, 2006; see also NIST, 2008).

Calibration Concept {#concept}
-------------------

In Table \[Tab\_Units\], some derived SI units of physical quantities are compiled that are relevant in the context of spectroradiometry.
  Quantity              Symbol$^{\rm a}$           Unit symbol                   Unit name
  --------------------- -------------------------- ----------------------------- -------------------------------
  Radiant energy        $Q$                        J                             joule (1 J=1 kgm$^2$s$^{-2}$)
  Radiant flux, power   $\mathit \Phi$             W                             watt (1 W=1 Js$^{-1}$)
  Spectral flux         ${\mathit \Phi}_\lambda$   Wnm$^{-1}$
  Irradiance            $E$                        Wm$^{-2}$
  Spectral irradiance   $E_\lambda$                Wm$^{-2}$nm$^{-1}$
  Radiance              $L$                        Wm$^{-2}$sr$^{-1}$
  Spectral radiance     $L_\lambda$                Wm$^{-2}$sr$^{-1}$nm$^{-1}$
  Radiant intensity     $I$                        Wsr$^{-1}$
  Spectral intensity    $I_\lambda$                Wsr$^{-1}$nm$^{-1}$

  $^{\rm a}$ Recommendations only.

\[Tab\_Units\]

Four quantities are of major importance for spectroradiometry: the “radiant flux density” or “irradiance”, $E$; the “spectral irradiance”, $E_\lambda$; the “radiance”, $L$; and the “spectral radiance”, $L_\lambda$. They are given in Table \[Tab\_Units\] together with supporting explanations. Note that the radiance and the intensity do not depend on the observing distance, whereas the irradiance varies with the inverse square of the distance. The spatially resolved observations of SUMER yield the spectral radiance, $L_{\lambda}(\vartheta,t)$, defined by the relation $${\rm d} Q(\lambda,\vartheta,t) = L_{\lambda}(\vartheta,t)\,{\rm cos}\vartheta\,{\rm d}S\, {\rm d}\omega\,{\rm d}t\,{\rm d}\lambda ~, \label{radiance}$$ where d$Q$ is the differential radiant energy emitted into the solid angle ${\rm d}\omega$ from $\cos \vartheta\,{\rm d} S$, the projected area normal to the direction of ${\rm d}\omega$, during the time interval ($t,~t + {\rm d}t$) in the wavelength interval ($\lambda, ~\lambda + {\rm d}\lambda$).
An average value of the spectral radiance, $\overline{L_{\lambda}}$, over certain solid-angle, time, and wavelength intervals can be obtained from a measurement of the energy $${\mathrm{\Delta}}Q~=~\overline{L_\lambda}\,{\mathrm{\Delta}}\Omega\,{\mathrm{\Delta}}t\,{\mathrm{\Delta}}\lambda\,A \label{energy}$$ through the aperture area, $A$, of SUMER ([*cf.*]{} Wilhelm, 2002a). If the wavelength interval $\Delta \lambda$ covers the profile of a spectral line at $\lambda$, then $L_{\rm line} = (\overline{L_\lambda} - L_{\rm back})\,\Delta \lambda$ represents—after a suitable background subtraction—its line radiance. As mentioned before, the radiometric calibration must be traceable to laboratory standards. Ideally these would be primary standards: absolute radiation sources that can be realized in the laboratory (Smith and Huber, 2002). Synchrotron radiation constitutes a suitable source standard in the VUV range, because the spectral radiant flux emitted can be calculated from the parameters of the electron or positron storage ring (Schwinger, 1949; Hollandt [[*et al.*]{}]{}, 2002). In most cases, secondary standards have to be employed as transfer standards between primary standards and the instrumentation to be calibrated, because the operational requirements of the primary standard and those of the test specimen are often not compatible.

SUMER Radiometric Calibration {#radiometry}
-----------------------------

Calibrating the SUMER spectrometer, designed for operation on a spacecraft, directly at a synchrotron facility would have caused conflicts with cleanliness requirements and schedule constraints. A transfer standard equipped with a hollow-cathode plasma-discharge lamp was therefore calibrated with BESSY I at the PTB laboratory[^6] for 32 emission lines with wavelengths between 53.7 nm and 147.0 nm.
This was done by comparing the radiation characteristics of both standards with the help of a VUV monochromator, taking into account the polarization of the synchrotron beam. The calibration of the transfer standard was carried out before (and after) it was used to characterize the spectral response of SUMER. During the calibration runs, the radiant flux in certain spectral lines was reproducible within $\pm\,2.5\,\%$ over several hours, and within $\pm\,5\,\%$ after a change of the filling gas (Hollandt [[*et al.*]{}]{}, 1996, 1998, 2002; Wilhelm [[*et al.*]{}]{}, 2000). The laboratory calibration was intended to measure the radiometric response of the system, as far as mirror reflectivities and detector responsivities were concerned, without internal vignetting. Contributions to the relative standard uncertainty by the various subsystems are compiled in Table \[Uncertainties\] for the central wavelength range. The data resulted in relative standard uncertainties of 0.11 using the 2 mm hole in place of the slit, 0.12 with the nominal slit, and 0.18 for the 0.3$\times$120 slit. Based on these measurements, Figures \[fig:cal\](a) and \[fig:cal\](b) have been plotted.
[lll]{} Item & Quantity & Uncertainty\ & &(1 $\sigma$)\ Transfer standard & ($6.80 \times 10^6$ to $7.04 \times 10^8$) s$^{-1}$ & 0.06 to 0.07\ (photon flux) &&\ Detector and & 5.64 mm$^2$/(9.5 mm $\times$ 27.0 mm)$^{\rm a}$ &\ telescope inhomogeneities & 140 mm$^2$/(90 mm $\times$ 130 mm)$^{\rm a}$ & 0.08\ Aperture stop & 90 mm $\times$ 130 mm & 0.001/0.001\ Focal length of telescope & 1302.77 mm at 75 $^\circ$C & $5 \times 10^{-5}$\ Slits: &&\ \#1 (4$\times$ 300) & 26.03 $\mu$m $\times$ 1889.7 $\mu$m & 0.005/0.003\ \#2 (1$\times$ 300; nominal) & 6.23 $\mu$m $\times$ 1889.7 $\mu$m & 0.016/0.003\ \#4 (1$\times$ 120)$^{\rm b}$ & 6.27 $\mu$m $\times$ 755.4 $\mu$m & 0.016/0.005\ \#7 (0.3$\times$ 120)$^{\rm b}$ & 1.76 $\mu$m $\times$ 755.4 $\mu$m & 0.045/0.005\ \#9 ($\oslash$: 317; calibration) & 2 mm diameter hole & 0.003\ Nominal/calibration slit & 0.0110 (signal ratio) & 0.05\ Lyot stop & 27.73 mm $\times$ 40.12 mm & 0.004/0.003\ Slit diffraction & Model calculations & 0.01\ Detector pixel size (mean) & 26.5 $\mu$m (spat.)$\times$ 26.5 $\mu$m (spectr.) & 0.02/0.015\ Detector & Flatfield and distortion corrections & 0.01\ \ \ \[Uncertainties\] The scattered-light measurements were carried out with a source emitting two intense Kr[i]{} lines at 116.5 nm and 123.6 nm, because their wavelengths are close to that of the bright H[i]{} Ly-$\alpha$ line at 121.6 nm. The laboratory results showed excellent stray-light characteristics. Nevertheless, the scattered light of the H[i]{} Lyman-$\alpha$ and $\beta$ lines could still be observed at 1700$''$ from the centre of the solar disk and was used to obtain line profiles unaffected by the geocorona (Lemaire [[*et al.*]{}]{}, 1998; Lemaire, 2002; Emerich [[*et al.*]{}]{}, 2005). Calibration Tracking {#tracking} -------------------- A critical issue is the stability of the spectroscopic responsivity, which can be affected by obstructions of the apertures or by contamination of the optical elements and detectors. 
Solar ultraviolet radiation leads to photo-activated polymerization of contaminating hydrocarbons and, as a result, to a permanent degradation of the system. A cleanliness programme is therefore of great importance to ensure particulate and molecular cleanliness of the instruments and the spacecraft (Schühle, 1993, 2003; Thomas, 2002). An instrument door and an electrostatic solar-wind deflector in front of the primary telescope mirror are specific features incorporated in the design in order to maintain the radiometric responsivity during launch and the operational phases. Nevertheless, it is essential to track the calibration status through all phases of the mission, [*i.e.*]{}, transport, spacecraft integration and tests, launch, commissioning, as well as operations. The procedures include in-flight calibration using line ratios provided by atomic physics data (Doschek [[*et al.*]{}]{}, 1999; Landi [[*et al.*]{}]{}, 2002), inter-calibration between instruments (Wilhelm, 2002b), and degradation monitoring (Wilhelm [[*et al.*]{}]{}, 1997; Schühle [[*et al.*]{}]{}, 1998, 2000a): 1. In order to obtain a radiometric characterization outside the spectral range covered in the laboratory, line-radiance ratios measured on the solar disk have been compared with the results of atomic physics calculations. 2. A deep-exposure reference spectrum obtained with detector A in a stable coronal streamer on 13 and 14 June 2000 showed the Si[xii]{} pair at 49.94 nm and 52.07 nm in second and third order of diffraction. This allowed us to establish responsivity curves in third order for both photocathodes. 3. Integrations of the spectral radiance over full-disk SUMER scans were performed in the N[v]{} line at 123.8 nm and the C[iv]{} line at 154.8 nm in 1996. 
The spectral irradiance of the Sun so obtained could be compared with that measured by the [*Solar Stellar Irradiance Comparison Experiment*]{} on the [*Upper Atmosphere Research Satellite*]{} (SOLSTICE/UARS; Rottman [[*et al.*]{}]{}, 1993; Woods [[*et al.*]{}]{}, 1993), radiometrically calibrated at the Synchrotron Ultraviolet Radiation Facility (SURF-II) at NIST. Agreement within a factor of 1.14 was found for the N[v]{} line and of approximately 1.10 for the C[iv]{} line (Wilhelm [[*et al.*]{}]{}, 1999). 4. Stellar observations (Lemaire, 2002) and reference spectra of QS regions taken on both detectors indicated that the data points available for the two detectors are not systematically different within the relative uncertainty margin of $\pm 20$%. It was thus possible to determine joint KBr responsivity functions for detectors A and B in Figure \[fig:cal\](b). 5. A stable radiometric calibration is also supported by the results of the flatfield exposures performed in the H[i]{} Lyman continuum near 88 nm in QS areas. No significant change of the responsivity of detector A was found over 200 days, nor was there any decrease for detector B over 350 days (Schühle [[*et al.*]{}]{}, 1998). 6. However, a major change of the responsivity occurred during the attitude loss of SOHO in 1998. After the recovery, a responsivity decrease was found over a wide spectral range, as shown in Figures \[fig:cal\](c) and \[fig:cal\](d). This change is attributed to the deposition of contaminants and their subsequent polymerization on the optical surfaces of the instrument, because both detectors were equally affected. Relative losses of 26% for He[i]{} 58.4 nm, 28% for Mg[x]{} (60.9 and 62.4) nm, 34% for Ne[viii]{} 77.0 nm, 39% for N[v]{} 123.8 nm, and 29% for the H[i]{} Lyman continuum were obtained, corresponding to an average relative loss of 31% (Schühle [[*et al.*]{}]{}, 2000b). 
Star observations of $\alpha$ Leonis before and after the attitude loss provide strong evidence that the responsivity decrease is wavelength dependent, with a tendency to become rather small at the longest wavelengths (Lemaire, 2002). Consequently, we adopted a relative loss in the responsivities of 31% for wavelengths shorter than 123.8 nm (N[v]{}), as before, which decreases linearly to 5% at 161 nm. ![Spectral responsivities of the SUMER instrument with its detectors A and B, and the corresponding relative uncertainties for the nominal slit. (a) In the upper panel the responsivity ratio of the photocathodes is shown. (b) First-order and second-order KBr responsivities evaluated jointly for both detectors. The long-wavelength deviation of detector B, the bare-MCP responsivities, and the third-order calibration are treated in the text. Independent assessments of (c) detector A and (d) detector B. For both detectors the relative uncertainties inside and outside the wavelength band from 53.7 nm to 123.6 nm and their changes after the recovery of SOHO are indicated. (On the long-wavelength side, the relative uncertainty refers to the KBr photocathode only.) []{data-label="fig:cal"}](Arch_Rad.eps){width="12cm"} The responsivities of SUMER are shown in Figure \[fig:cal\] as a typical result of the ground and in-flight calibration activities. Note that the radiant energy is measured here as the number of photons with energy $h\,\nu = h\,c_0/\lambda$, where $h$ is Planck’s constant and $c_0$ the speed of light in vacuum; this convention is often adopted in radiometry. The spectral responsivity curves displayed in Figure \[fig:cal\] refer to situations with low count rates, both for the total detector and for single pixels. Whenever the total count rate exceeds about $5 \times 10^4$ s$^{-1}$, a deadtime correction is required; with a single-pixel rate above about 3 s$^{-1}$, a gain-depression correction is called for (see Section 3.4). 
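The wavelength-dependent post-recovery loss adopted above (31% below 123.8 nm, decreasing linearly to 5% at 161 nm) can be sketched as a simple correction factor. This is an illustration of the interpolation only, not necessarily the implementation used in the SUMER calibration software:

```python
def post_recovery_loss(wavelength_nm):
    """Relative responsivity loss after the 1998 attitude loss of SOHO.

    31 % below 123.8 nm, decreasing linearly to 5 % at 161 nm
    (values from the text; a sketch, not the archive implementation).
    """
    if wavelength_nm <= 123.8:
        return 0.31
    if wavelength_nm >= 161.0:
        return 0.05
    # linear interpolation between (123.8 nm, 0.31) and (161.0 nm, 0.05)
    slope = (0.05 - 0.31) / (161.0 - 123.8)
    return 0.31 + slope * (wavelength_nm - 123.8)

def corrected_responsivity(r_pre, wavelength_nm):
    """Responsivity after the recovery, given the pre-loss value r_pre."""
    return r_pre * (1.0 - post_recovery_loss(wavelength_nm))
```

For example, a pre-loss responsivity at 100 nm would be reduced by the full 31%, whereas one at 161 nm would be reduced by only 5%.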
The uncertainties given in panels (c) and (d) include the contributions of optical stops and diffraction effects, but the uncertainty of the pixel size has to be treated separately. When applying spectral responsivity curves as shown in Figures \[fig:cal\](b), (c) and (d) to the telemetry data, it is necessary to take into account the effects of field stops as well as the epoch of the observation. This can be accomplished by applying the SUMER calibration programme [**radiometry.pro**]{} [^7]. The programme can perform all calculations in photon units or in energy units in accordance with the SI. As an example, Figure \[fig:spec\] depicts the VUV radiance spectrum of a QS region with many emission lines and some continua in the wavelength range from 80 nm to 150 nm. ![Spectral radiance of the quiet Sun in the VUV range from a region near the centre of the disk. Prominent emission lines are marked. The spectral radiances expected for some brightness temperatures, $T_{\rm B}$, are shown in red as approximations of the continua in the corresponding wavelength ranges (after Wilhelm and Fröhlich, 2010).[]{data-label="fig:spec"}](Arch_spec.eps){width="14cm"} The responsivity curves are available for the periods from January 1996 to June 1998 and from November 1998 to December 2001. They have undergone modifications in the past and might do so again in the future. There are two reasons for such modifications: an improved understanding of the performance of the instrument, including its calibration; and changes of the status of the instrument with time. There are indications that the responsivity of the instrument did not change at least until April 2007, although the uncertainties increased. After April 2010, however, a thermal runaway effect in the MCP of detector B forced us to reduce the high voltage. This resulted in a significant drop in sensitivity and a partial loss of the KBr-coated section of the detector. 
Since April 2012, only the bare sections of detector B can be used. Figure 10.2 of Wilhelm [[*et al.*]{}]{} (2002) summarizes the modifications since October 1999, continuing the history documented in Figure 4 of Wilhelm [[*et al.*]{}]{} (2000), and demonstrates that the calibration status in the central wavelength range is very consistent over time for both detectors. In general we recommend the joint evaluation, but near 80 nm the two detectors cannot be treated jointly, and the separate responsivity of detector B should be used there. Before an adequate low-gain level of detector B was found, a test configuration was used between 24 September and 6 October 1996. Other Data Reduction Aspects ---------------------------- Various other data correction algorithms have been established that are used for special applications, but they have not been included in the set of standard data reduction procedures applied to the data in the archive. ### Wavelength Calibration, Line Identification, Doppler Velocities {#sec: Doppler} The wavelength setting is accomplished by the linear movement of a rod that changes the reflection angle of the plane mirror and thus the angle of incidence on the grating ([*cf.*]{}, Section 2.2.3). The relationship between actuator step and wavelength setting is highly non-linear and has been approximated by a lookup table. This lookup table and the dispersion relation as discussed before have been used to convert the spectral pixels to physical units. The wavelength of L1 data is given in units of nanometres. The limited accuracy and reproducibility of the wavelength setting introduce an uncertainty of several pixels, well in excess of the spectral resolution. A more careful wavelength calibration is therefore required if it is important to know the absolute wavelength of a spectral line. 
Since SUMER has no on-board calibration lamp, each set of spectra with the same wavelength setting has in this case to be calibrated using nearby photospheric and chromospheric lines of the solar spectrum as wavelength standards. This cannot be done in an automated manner and is left to the user. The wavelength calibration in the archive is based on the nominal dispersion. Moreover, it assumes that the line of interest is at the central pixel of the spectral window. Unpredictable offsets of many pixels render this assumption unrealistic. The automated wavelength scale given in the archive is therefore only a first guess. Even very faint lines could be detected in deeply exposed on-disk and off-disk spectra because of the extremely low level of dark counts, and many new line identifications were possible. A comprehensive overview of the solar spectrum in the SUMER spectral range is provided by the spectral atlases of disk [@Curdt01] and coronal features [@Curdt04]. Centroiding allows the position of unblended spectral lines to be determined to within one tenth of a pixel, in particular for lines observed in second order of diffraction. The line shift $\delta \lambda$ can be used to calculate the Doppler flow $v_{\rm D}$ from $$v_{\rm D} = \frac{\delta \lambda}{\lambda}\,c_0$$ where $c_0$ is the speed of light in vacuum and $\lambda$ the wavelength of the spectral line. Several conditions must be met to reach uncertainties as low as 1 km s$^{-1}$ to 2 km s$^{-1}$: the line of interest must be unblended and Gaussian; its laboratory wavelength must be well known; and the wavelength standards have to be at rest and close to the line of interest. The limiting parameter for the effective spectral resolution of the instrument is certainly the non-linearity and instability of the detector. ### Pixel Shift {#sec: Deltapixel} The location of the slit image on the detector array is not constant. 
The size of the slit image varies with wavelength – an effect of the wavelength-dependent magnification ([*cf.*]{}, Section 2.1). The accumulated effect of the alignment errors between the scan mirror rotation axis, the grating ruling direction, the direction of the grating focussing mechanism, and the direction of the detector rows can be evaluated for the distortion-corrected detector arrays (both A and B) by the function [**deltapixel.pro**]{} as shown in Table \[tab:shift\]. The correction for this pixel shift is only needed for co-registration of spectra with different wavelength settings. For such cases, numerical values can be extracted from the lookup table in Table \[tab:shift\]. There is no compensation for this effect in the data of the archive. [clllclll]{} Wavelength & Lower & Upper & Pixel & Wavelength & Lower & Upper & Pixel\ $\lambda$ / nm & pixel & pixel & shift & $\lambda$ / nm & pixel & pixel & shift\ 69.2 & 105.6 & 220.9 & -16.8 & 109.2 & 119.5 & 238.7 & -0.9\ 71.2 & 106.5 & 222.1 & -15.7 & 111.2 & 119.8 & 239.2 & -0.5\ 73.2 & 107.4 & 223.2 & -14.7 & 113.2 & 119.9 & 239.7 & -0.2\ 75.2 & 108.2 & 224.1 & -13.8 & 115.2 & 120.0 & 240.0 & 0.0\ 77.2 & 109.0 & 225.1 & -13.0 & 117.2 & 119.3 & 240.2 & 0.1\ 79.2 & 109.7 & 226.0 & -12.1 & 119.2 & 119.7 & 240.3 & 0.0\ 81.2 & 110.5 & 226.8 & -11.3 & 121.2 & 119.4 & 240.2 & -0.2\ 83.2 & 111.2 & 227.7 & -10.5 & 123.2 & 118.9 & 239.9 & -0.6\ 85.2 & 112.0 & 228.6 & -9.7 & 125.2 & 118.2 & 239.5 & -1.1\ 87.2 & 112.7 & 229.5 & -8.9 & 127.2 & 117.4 & 238.9 & -1.9\ 89.2 & 113.5 & 230.3 & -8.1 & 129.2 & 116.4 & 238.1 & -2.8\ 91.2 & 114.2 & 231.2 & -7.3 & 131.2 & 115.2 & 237.1 & -3.9\ 93.2 & 114.9 & 232.1 & -6.5 & 133.2 & 113.8 & 235.9 & -5.1\ 95.2 & 115.6 & 233.0 & -5.7 & 135.2 & 112.3 & 234.5 & -6.6\ 97.2 & 116.3 & 233.9 & -4.9 & 137.2 & 110.5 & 232.9 & -8.3\ 99.2 & 117.0 & 234.8 & -4.1 & 139.2 & 108.6 & 231.1 & -10.1\ 101.2 & 117.6 & 235.7 & -3.4 & 141.2 & 106.5 & 229.2 & -12.1\ 103.2 & 118.2 & 236.5 & -2.7 & 143.2 
& 104.3 & 227.1 & -14.3\ 105.2 & 118.7 & 237.3 & -2.0 & 145.2 & 101.8 & 224.9 & -16.7\ 107.2 & 119.1 & 238.0 & -1.4 & 147.2 & 99.3 & 222.5 & -19.1\ ### Line Broadening {#sec: linewidth} The measured width of spectral lines contains a contribution from the instrumental broadening in addition to the Doppler broadening. Using the function [**con-width-funct-3.pro**]{}, the instrumental width can be removed with a de-convolution matrix. ### Straylight and Dark Signal {#sec: Straylight} Because of the excellent surface quality of the primary mirror ([*cf.*]{}, Section 2.1; ‘Optical design’), the scattered light of the disk falls off by five orders of magnitude within 20 arcsec ([*cf.*]{}, Figure 1 in Lemaire [[*et al.*]{}]{}, 1998). It is, therefore, possible to observe the lower corona off-disk without occultation. At larger limb distances, however, the off-limb scattered light dominates the spectra. The fall-off curves for the scattered-light levels result from the large-angle scatter characteristic of the micro-roughness of the mirror coating, which is wavelength dependent, and also depend on the non-uniform and variable brightness distribution of the disk. It is therefore not possible to apply a simple algorithm for automatic straylight subtraction. For many applications the scattered light is unproblematic; it may even help with the wavelength calibration. In all other cases the straylight has to be subtracted by the user. During the first years in orbit, the data from the photon-counting detection system were practically noise-free ([*cf.*]{}, Section 2.3 or Wilhelm [[*et al.*]{}]{}, 1997a). Only during rare solar energetic particle (SEP) events with a strong high-energy contribution was a temporary increase of dark counts observed. The rate of dark counts increased over the years, however, and recently the number of flaring pixels has become a problem for long exposures at low signal levels. 
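The co-registration correction from Table \[tab:shift\] can be applied by linear interpolation in wavelength. A sketch with a small subset of the tabulated values (in practice the full table, as delivered by [**deltapixel.pro**]{}, would be used):

```python
# A few (wavelength / nm, pixel shift) pairs from Table [tab:shift];
# the full table would be used in practice.
TABLE = [(69.2, -16.8), (91.2, -7.3), (115.2, 0.0),
         (121.2, -0.2), (131.2, -3.9), (147.2, -19.1)]

def pixel_shift(wavelength_nm):
    """Linearly interpolate the pixel shift used to co-register
    spectra taken with different wavelength settings (a sketch)."""
    wl, sh = zip(*TABLE)
    if wavelength_nm <= wl[0]:
        return sh[0]
    if wavelength_nm >= wl[-1]:
        return sh[-1]
    for (w0, s0), (w1, s1) in zip(TABLE, TABLE[1:]):
        if w0 <= wavelength_nm <= w1:
            return s0 + (s1 - s0) * (wavelength_nm - w0) / (w1 - w0)
```

The shift is close to zero near 115 nm and grows towards both ends of the wavelength range, in line with the tabulated values.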
### Thermo-Elastic Deformation Effects An analysis by @Rybak99 revealed a parasitic effect of the algorithm used to regulate the temperature of the optical bench. It was found that the oscillation of the heater duty cycle had an effect on the line position on the detector due to thermoelastic deformations. For the front section of the instrument, @Rybak99 report oscillation amplitudes of $\approx$0.3 K with a period of $\approx$120 min. Similarly, in the rear section of the instrument the temperature oscillates with an amplitude of $\approx$0.1 K and a period of $\approx$75 min. As a consequence of these temperature variations, the position of a spectral line was periodically shifted by up to 2.5 nm, dominated by the front bench. @Rybak99 also describe a correction procedure for long-duration studies during the first years that are sensitive to this effect. With increasing equilibrium bulk temperatures (an ageing effect of the thermo-optical properties), the need for heating was reduced and the heater duty cycle diminished. Already in 1998 the thermoelastic deformations had become very small, and they disappeared later. Archive Description {#sec: Archive} =================== Figure \[fig:dataflow\] shows the path from the various data sources to the SUMER FITS data products, including intermediate processing steps. ![Data Flow Diagram.[]{data-label="fig:dataflow"}](auxinfoflow){width="12cm"} FITS LZ {#sec: fitslz} ------- The IDL routine [**mk\_sumerfits**]{} takes the binary data from the TM processing (see Section \[sec: TM processing\]) and produces SUMER LZ FITS data. The main task is to create a FITS header with all the information needed. In addition, some missing HK0 data are correlated with the image (see Section \[sec: HK\]). This routine takes care of assembling several wavelength windows taken with one exposure into one detector image (see Table \[tab:formats\]). 
This step is necessary because further processing steps, such as the geometrical correction, ‘deform’ the image, so that two adjacent images could not afterwards be combined without gaps. The SUMER FITS LZ product contains decompressed data, corrected for solar orientation (north up) and for wavelength direction (increasing from left to right). All further processing steps described in Section \[sec: Intro\] are performed in the FITS L1 processing, which is described in Section \[sec: fitsl1\]. ### LZ FITS File Structure {#sec: fitslzfile} The created SUMER LZ FITS file contains a [**[H]{}**]{}eader [**[D]{}**]{}ata [**[U]{}**]{}nit - the header, and a [**[P]{}**]{}rimary [**[D]{}**]{}ata [**[U]{}**]{}nit - the image data. In addition there is an extension. - This is the primary FITS header (an example is given in the electronic supplementary material). - This is the actual image data set. - The original telemetry image header(s) of the image. This is a byte array of at least 120 bytes, which can be analysed with the SUMER header routines (see the SUMER Data Cookbook for details). These data are kept for compatibility, so that the ‘old’ routines can also be used to analyse the data. For each image acquired during the detector integration (1 to 8), there is one 120-byte array. FITS L1 - Calibrated Data {#sec: fitsl1} ------------------------- ### Definition The SUMER FITS L1 data are defined as calibrated data, for which all processing steps described in Section \[sec: Intro\] are performed, including the radiometric calibration. ### Restrictions The calibration of SUMER data is only possible for data compressed with the SUMER lossless compression schemes 6 and below (see SUMER Operations Guide[^8]). Images compressed with other schemes, and images from the rear slit camera, are ignored by the L1 processing. 
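The restrictions above amount to a simple eligibility check before L1 processing. A sketch only; the keyword names used here (CAMERA, COMPRESS) are illustrative placeholders, not the actual SUMER FITS keywords:

```python
def eligible_for_l1(header):
    """Decide whether an LZ image can be processed to L1.

    Only data compressed with the SUMER lossless compression schemes
    (6 and below) are calibrated; rear-slit-camera images are ignored.
    The keyword names are illustrative, not the actual SUMER keywords.
    """
    if header.get("CAMERA") == "RSC":       # rear slit camera: skip
        return False
    return header.get("COMPRESS", 0) <= 6   # lossless schemes only

# Example: an image compressed with scheme 7 is skipped by L1 processing.
print(eligible_for_l1({"CAMERA": "FUV", "COMPRESS": 7}))  # False
```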
### Flatfields {#sec: fitsflatfield} Before describing the production of the calibrated SUMER FITS data (L1), we briefly describe the preparation of the flatfield data from the newly created SUMER FITS LZ data. During the processing of SUMER LZ data, images matching the conditions for SUMER flatfields are logged by filename in a list. This list is afterwards used to produce the flatfield data for the L1 production process. The IDL routine [**sum\_make\_fits\_ff**]{} takes care of all the necessary steps. One of these steps is to remove the odd-even pattern from the raw image (see Section \[sec: FF\]). Another step results from the acquisition strategy, which was changed in later times of SUMER operation. In the beginning, a flatfield was taken as one long exposure and then processed and stored for on-board flatfielding. These data were downlinked as two separate images, one containing the raw data and one the processed on-board flatfield. Later, because of the ‘odd-even pattern’ (see Section \[sec: FF\]), the on-board processing was skipped and only the raw data were downlinked. To exclude transmission errors of the flatfield, it was taken in several single exposures, which were then added to one flatfield image. This addition is also done by [**sum\_make\_fits\_ff**]{}. In addition, this routine ‘corrects’ some image rows that were missing due to telemetry gaps. The replacement is indicated as a comment in the FITS header. ### Processing {#sec: L1Processing} The L1 processing takes LZ data sets and performs all steps necessary to obtain the calibrated data. The overall routine [**proc\_sumer\_calib**]{} takes care of the input and output file organization, [*e.g.*]{}, reading LZ files and writing all processed data into a new file. All the calibration steps are included in the routine [**sumer\_calib\_mpsfits**]{}, which is called by [**proc\_sumer\_calib**]{}. 
This routine calls the various SUMER calibration routines in the correct order and logs the performed processing in the FITS header. Via parameters, the processing level can be controlled to do a step-by-step calibration for verification. #### Reverse Flatfield If an on-board flatfield processing has already been done, this is reversed in preparation for the odd-even correction. The FITS keywords reflecting this processing are: Keyword Description ---------- ------------------------------------------ SSFF Mark as not processed on board RFLATFIL Used reverse flatfield history Applied [**sum\_flatfield**]{} (reverse) #### Dead Time Correction The dead time correction step is performed by the [**deadtime\_corr**]{} routine. The FITS keywords reflecting this processing are: Keyword Description ---------- -------------------------------- DEADCORR Dead time correction DCXDLEV Dead time corr XDL input value history Applied [**deadtime\_corr**]{} #### Odd-Even Correction The odd-even correction is performed by [**sum\_flatfield**]{}, giving the odd-even array for the specified detector as a parameter. The FITS keywords reflecting this processing are: Keyword Description ---------- --------------------------------------------------- ODEVCORR Odd-even correction history Applied [**sum\_flatfield**]{} with odd-even corr #### Local Gain Correction Local gain correction is performed by calling the subroutine [**local\_gain\_corr**]{}. The FITS keywords reflecting this processing are: Keyword Description ---------- ----------------------------------- LGAINCOR Local gain correction history Applied [**local\_gain\_corr**]{} #### Flatfield Correction The flatfield used is the one closest in time ahead of the image acquisition date. The processing itself is done by the function [**sum\_flatfield**]{} using the image and the flatfield data as parameters. 
The FITS keywords reflecting this processing are: Keyword Description ---------- -------------------------------- FLATCORR Record the processing FLATFILE Used flatfield data history Applied [**sum\_flatfield**]{} #### Distortion Correction Distortion correction of the data is done by calling the function [**destr\_bilin**]{} with the image data as parameter. The FITS keywords reflecting this processing are: Keyword Description ---------- ------------------------------ GEOMCORR Record the processing history Applied [**destr\_bilin**]{} #### Radiometric Calibration {#radiometric-calibration} The radiometric calibration is performed by calling the [**s\_fitsrad**]{} subroutine. This routine takes care of the calculation of the different radiometries (first or second order, KBr or bare) by calling the [**radiometry**]{} function with the appropriate parameters. Once the radiometry values for first and second order are calculated, the first-order radiometry is applied to the image data. Both radiometry arrays are then stored in the FITS file as an extension. The FITS keywords reflecting this processing are: Keyword Description ---------- --------------------------------------------------------- RADCORR Radiometry calibration performed RADORDER Radiometry for wavelength order (first or second order) AVARADO Available radiometry orders IMGUNITS Units for image (W sr$^{-1}$ m$^{-2}$ [Å]{}$^{-1}$) LEVEL 1 PRODLVL L1 history Applied [**radiometry**]{} #### L1 FITS File Structure After all calibration steps have been performed and all needed data gathered, the data are written back to the FITS file in the following order: - This is the primary FITS header (an example is given in the electronic supplementary material). - This is the actual calibrated image data set. - ([*cf.*]{}, Section \[sec: fitslzfile\]) - This extension includes the radiometry arrays for first and second order. 
Since only one radiometric calibration can be applied to the data at a time, the purpose of these arrays is to make it possible to reverse the applied radiometric calibration (indicated in the keyword RADORDER) and to perform the other one, if available. The number of available orders/arrays is indicated in the keyword AVARADO. Processing Remarks ------------------ The IDL routines used for the FITS creation and calibration processing check various FITS keywords for processing conditions and to determine whether processing can be performed on the specified data set. The routines also record performed processing steps in FITS keywords, so that a processing step cannot accidentally be performed twice on the same data. Conclusion ========== Scientists all over the world who have been using SUMER data have contributed to this work by sharing their experience ([*cf.*]{}, Figure \[fig:publ\]). The benefits of this long learning process are incorporated in the SUMER archive. Here, we have made an effort to describe the non-trivial task of processing these data in great detail and in a transparent manner. The trend in Figure \[fig:publ\] clearly indicates that there is still interest in SUMER data, and it is realistic to assume that future work will follow. It is our intention to encourage future users to take advantage of such ready-to-use data that are not dependent on specific computer systems. SUMER is now close to the end of its operational lifetime. Therefore, in the future the archive will – with the exception of [*Interface Region Imaging Spectrograph*]{} (IRIS; dePontieu [[*et al.*]{}]{}, 2012) spectra – be the only source of data in the SUMER spectral range. We have tried hard to complete the archive so that it can be used for joint science with IRIS, and we hope that in this new format enough meta-information is provided to support data mining. The SUMER project is financially supported by DLR, CNES, NASA, and the ESA PRODEX Programme (Swiss contribution). SUMER is part of SOHO of ESA and NASA. 
The instrument was jointly operated by teams from IAS and MPS. We specially thank Gilles Poulleau for servicing the ground equipment for so many years. Numerous scientists of the community helped to coordinate the science operations. Bureau International des Poids et Mesures (BIPM): 2006, [*Le système international d’unités (SI)*]{}, 8$^{\rm e}$ éd, Sèvres, France. Curdt, W., Brekke, P., Feldman, U., Wilhelm, K., Dwivedi, B.N., Schühle, U., Lemaire, P.: 2001, The SUMER spectral atlas of solar disk features. [ [*Astron. Astrophys.*]{}]{} [**375**]{}, 591–613. Curdt, W., Landi, E., Feldman, U.: 2004, The SUMER spectral atlas of solar coronal features. [ [*Astron. Astrophys.*]{}]{} [**427**]{}, 1045–1054. Delaboudinière, J.-P., Artzner, G.E., Brunaud, J., Gabriel, A.H., Hochedez, J.F., Millier, F., et al.: 1995, EIT: Extreme-Ultraviolet Imaging Telescope for the SOHO mission. [[*Solar Phys.*]{}]{} [**162**]{}, 291–312. Doschek, E.E., Laming, J.M., Doschek, G.A., Feldman, U., Wilhelm, K.: 1999, A comparison of measurements of solar extreme-ultraviolet spectral line intensities emitted by C, N, O, and S ions with theoretical calculations. [ [*Astrophys. J.*]{}]{} [**518**]{}, 909–917. Emerich, C., Lemaire, P., Vial, J.-C., Curdt, W., Schühle, U., Wilhelm, K.: 2005, A new relation between the central spectral solar H[i]{} Lyman $\alpha$ irradiance and the line irradiance measured by SUMER/SOHO during the cycle 23. [*Icarus*]{} [**178**]{}, 429–433. Feldman, U., Curdt, W., Landi, E., Wilhelm, K.: 2000, Identification of spectral lines in the 500–1600 [Å]{} wavelength range of highly ionized Ne, Na, Mg, Ar, K, Ca, Ti, Cr, Mn, Fe, Co, and Ni emitted by flares. [ [*Astron. Astrophys.*]{}]{} [**544**]{}, 508–521. Hollandt, J., Schühle, U., Paustian, W., Curdt, W., Kühne, M., Wende, B., Wilhelm, K.: 1996, Radiometric calibration of the telescope and ultraviolet spectrometer SUMER on SOHO. [*Appl. Opt.*]{} [**35**]{}, 5125–5133. 
Hollandt, J., Schühle, U., Curdt, W., Dammasch, I.E., Lemaire, P., Wilhelm, K.: 1998, Solar radiometry with the telescope and vacuum-ultraviolet spectrometer SUMER on the solar and heliospheric observatory (SOHO). [*Metrologia*]{} [**35**]{}, 671–675. Hollandt, J., Kühne, M., Huber, M.C.E., Wende, B.: 2002, Source standards for the radiometric calibration of astronomical instruments in the VUV spectral range traceable to the primary standard BESSY. In: Pauluhn, A., Huber, M.C.E., von Steiger, R. (eds.), [*The Radiometric Calibration of SOHO, ESA SR-002*]{}, 51–68. Landi, E., Feldman, U., Dere, K.P.: 2002, CHIANTI – An atomic database for emission lines. V. Comparison with an isothermal spectrum observed with SUMER. [*Astrophys. J. Suppl.*]{} [**139**]{}, 281–296. Lean, J.: 2000, Evolution of the Sun’s spectral irradiance since the Maunder Minimum. [ [*Geophys. Res. Lett.*]{}]{} [**27**]{}, 2425-2428. Lemaire, P.: 2002, SUMER stellar observations to monitor responsivity variations. In: Pauluhn, A., Huber, M.C.E., von Steiger, R. (eds.), [*The Radiometric Calibration of SOHO, ESA SR-002*]{}, 265–270. Lemaire, P., Wilhelm, K., Curdt, W., Schühle, U., Marsch, E., Poland, A.P., [[*et al.*]{}]{}: 1997, First results of the SUMER telescope and spectrometer on SOHO. II. Imagery and data management. [[*Solar Phys.*]{}]{} [**170**]{}, 105–122. Lemaire, P., Emerich, C., Curdt, W., Schühle, U., Wilhelm, K.: 1998, Solar H[i]{} Lyman $\alpha$ full disk profile obtained with the SUMER/SOHO spectrometer. [ [*Astron. Astrophys.*]{}]{} [**334**]{}, 1095–1098. Lemaire, P., Emerich, C., Vial, J.-C., Curdt, W., Schühle, U., Wilhelm, K.: 2002, Variation of the full Sun hydrogen Lyman $\alpha$ and $\beta$ profiles with the activity cycle. In: Wilson, A. (ed.), [*SOHO 11-Symposium on From Solar Min to Max: Half a Solar Cycle with SOHO, ESA SP-508*]{}, 219–222. 
Moran, T.G.: 2002, Solar and Heliospheric Observatory/Solar Ultraviolet Measurements of Estimated Radiation ultraviolet array detector distortion correction. [*Rev. Sci. Instrum.*]{} [**73**]{} 3982–3887. National Institute of Standards and Technology (NIST): 2008, [*Guide for the Use of the International System of Units (SI), NIST Special Publication 811.*]{} Quinn, T.J., Fröhlich, C.: 1999, Accurate radiometers should measure the output of the Sun. [ [*Nature*]{}]{} [**401**]{}, 841. Rottman, G.J., Woods, T.N., Sparn, T.P.: 1993, Solar-Stellar Irradiance Comparison Experiment 1. 1. Instrument design and operation. [ [*J. Geophys. Res.*]{}]{} [**98**]{}, 10667–10677. Rybák, J., Curdt, W., Kucera, A., Schühle, U., Wöhl, H.: 1999, Chromosperic and transition region dynamics - Reasons and consequences of the long period instrumental periodicities of SUMER/SOHO. In: Wilson, A. (ed.), [*Proc. 9th European Meeting on Solar Physics. Magnetic Fields and Solar Processes. ESA SP-448*]{}, 361–366. Saha, T.T., Leviton, D.B.: 1993, Theoretical and measured encircled energy and wide-angle scatter of SUMER demonstration telescope mirror in FUV. In: Bely, P.Y., Breckinridge, J.B. (eds.), [*Space Astronomical Telescopes and Instruments II. Proc. SPIE*]{} [**1945**]{}, 398–409. Siegmund, O.H., Stock, J.M., Marsh, D.R., Gummin, M.A., Raffanti, R., Hull, J., [[*et al.*]{}]{}: 1994, Delay-line detectors for the UVCS and SUMER instruments on the SOHO Satellite. In: Siegmund, O.H., Vallerga, J.V. (eds.), [*EUV, X-ray, and Gamma-Ray Instrumentation for Astronomie V. Proc. SPIE*]{}, [**2280**]{}, 89–100. Smith, P.L., Huber, M.C.E.: 2002, Spectroradiometry for solar physics in space. In: Pauluhn, A., Huber, M.C.E., von Steiger, R. (eds.), [*The Radiometric Calibration of SOHO, ESA SR-002*]{}, 21–36. Schühle, U.: 2003, Cleanliness and calibration stability of UV instruments on SOHO. In: Keil, S.L., Avakyan, S.V. 
(eds.), [*Innovative Telescopes and Instrumentation for Solar Astrophysics, Proc SPIE*]{} [**4853**]{}, 88-97. Schühle, U.: 1993, The cleanliness control program for SUMER/SOHO. In: Silver, E.H., Kahn, S.M. (eds.), [*UV and X-Ray Spectroscopy of Astrophysical and Laboratory Plasmas.*]{}, Cambridge University Press, Cambridge, 373–382. Schühle, U., Brekke, P., Curdt, W., Hollandt, J., Lemaire, P., Wilhelm, K.: 1998, Radiometric calibration tracking of the vacuum-ultraviolet spectrometer SUMER during the first year of the SOHO mission. [*Appl. Opt.*]{} [**37**]{}, 2646–2652. Schühle, U., Curdt, W., Hollandt, J., Feldman, U., Lemaire, P., Wilhelm, K.: 2000a, Radiometric calibration of the vacuum-ultraviolet spectrograph SUMER on the SOHO spacecraft with the B detector. [*Appl. Opt.*]{} [**39**]{}, 418–425. Schühle, U., Hollandt, J., Pauluhn, A., Wilhelm, K.: 2000b, Mid-term radiance variation of far-ultraviolet emission lines from quiet-Sun areas. In: Wilson, A. (ed.), [*Proc. 1st Solar and Space Weather Euroconference. The Solar Cycle and Terrestrial Climate, ESA SP-463*]{}, 427–430. Schwinger, J.: 1949, On the classical radiation of accelerated electrons. [*Phys. Rev.*]{} [**75**]{}, 1912–1925. Thomas, R.: 2002, 20:20 vision and SOHO cleanliness. In: Pauluhn, A., Huber, M.C.E., von Steiger, R. (eds.), [*The Radiometric Calibration of SOHO, ESA SR-002*]{}, 91–104. Wilhelm, K.: 2002a, Spectroradiometry of spatially-resolved solar plasma structures. In: Pauluhn, A., Huber, M.C.E., von Steiger, R. (eds.), [*The Radiometric Calibration of SOHO, ESA SR-002*]{}, 37–50. Wilhelm, K.: 2002b, Calibration and intercalibration of SOHO’s vacuum-ultraviolet instrumentaion. In: Pauluhn, A., Huber, M.C.E., von Steiger, R. (eds.), [*The Radiometric Calibration of SOHO, ESA SR-002*]{}, 69–90. Wilhelm, K.: 2009, Solar energy spectrum. In: Trümper, J. (ed.), [*Landolt-Börnstein Database VI. Astronomy and Astrophysics, 4B The Solar System*]{}, Springer, Berlin, 10–20. 
Wilhelm, K.: 2010, Quantitative solar spectroscopy. [*Astron. Nachr.*]{} [**331**]{}, 502–518. Wilhelm, K., Curdt, W., Marsch, E., Schühle, U., Lemaire, P., Gabriel, A.H., [[*et al.*]{}]{}: 1995, SUMER–Solar Ultraviolet Measurements of Emitted Radiation. [[*Solar Phys.*]{}]{} [**162**]{}, 189–231. Wilhelm, K., Lemaire, P., Curdt, W., Schühle, U., Marsch, E., Poland, A.P., [[*et al.*]{}]{}: 1997a, First results of the SUMER telescope and spectrometer on SOHO. I. Spectra and spectroradiometry. [[*Solar Phys.*]{}]{} [**170**]{}, 75–104. Wilhelm, K., Lemaire, P., Feldman, U., Hollandt, J., Schühle, U., Curdt, W.: 1997b, Radiometric calibration of SUMER: Refinement of the laboratory results under operational conditions on SOHO. [*Appl. Opt.*]{} [**36**]{}, 6416–6422. Wilhelm, K., Woods, T.N., Schühle, U., Curdt, W., Lemaire, P., Rottman, G.J.: 1999, The solar ultraviolet spectrum from 1200 [Å]{} to 1560 [Å]{}: A radiometric comparison between SUMER/SOHO and SOLSTICE/UARS. [ [*Astron. Astrophys.*]{}]{} [**352**]{}, 321–326. Wilhelm, K., Schühle, U., Curdt, W., Dammasch, I.E., Hollandt, J., Lemaire, P., Huber, M.C.E.: 2000, Solar spectroradiometry with the telescope and spectrograph SUMER on the solar and heliospheric observatory SOHO. [*Metrologia*]{} [**37**]{}, 393–398. Wilhelm, K., Schühle, U., Curdt, W., Dammasch, I.E., Hollandt, J., Lemaire, P., Huber, M.C.E.: 2002, Solar vacuum-ultraviolet radiometry with SUMER. In: Pauluhn, A., Huber, M.C.E., von Steiger, R. (eds.), [*The Radiometric Calibration of SOHO, ESA SR-002*]{}, 145–160. Wilhelm, K., Dwivedi, B.N., Marsch, E., Feldman, U.: 2004, Observations of the Sun at Vacuum-Ultraviolet Wavelengths from Space. Part I: Concepts and Instrumentation. [ [*Space Sci. Rev.*]{}]{} [**111**]{}, 415–480. Wilhelm, K., Marsch, E., Dwivedi, B.N., Feldman, U.: 2007, Observations of the Sun at vacuum-ultraviolet wavelengths from space. Part II: Results and Interpretations. [ [*Space Sci. Rev.*]{}]{} [**133**]{}, 103–179. 
Wilhelm, K., and Fröhlich, C.: 2010, Photons—from source to detector. In Huber, M.C.E., Pauluhn, A., Culhane, J.L., Timothy, J.G., Wilhelm, K., Zehnder, A. (eds.) [*Observing Photons in Space, ESA SR-009*]{}, 23–54. Wilkinson, E., Penton, S.V., B’eland, S., Vallerga, J.V., McPhate, J., Sahnow, D.: 2001, Algorithms for correcting geometric distortions in delay lines anodes. In: Siegmund, O.H., Fineschi, S., Gummin, M.A. (eds.), [*UV/EUV and Visible Space Instrumentation for Astronomy and Solar Physics, Proc. SPIE*]{}, [**4498**]{}, 267–274. Willson, R.C., Mordvinov, A.V.: 2003, Secular total solar irradiance trend during solar cycles 21-23. [ [*Geophys. Res. Lett.*]{}]{} [**30**]{}, 1199–1203. Woods, T.N., Rottman, G.J., Ucker, G.J.: 1993, Solar-Stellar Irradiance Comparison Experiment 1. II – Instrument calibrations. [ [*J. Geophys. Res.*]{}]{} [**98**]{}, 10679–10694. Wülser, J.-P., Title, A.M., Lemen, J.R., De Pontieu, B., Kankelborg, C.C., Tarbell, T., [[*et al.*]{}]{}: 2012, The interface region imaging spectrograph for the IRIS Small Explorer mission. In: [*Space Telescopes and Instrumentation 2012: Ultraviolet to Gamma Ray, Proc. SPIE*]{}, [**8443**]{}, 801–810. [^1]: http://www.mps.mpg.de/projects/soho/sumer/text/cookbook.html [^2]: http://hesperia.gsfc.nasa.gov/ssw/gen/idl/synoptic/image\_tool/image\_tool.pro [^3]: [sohowww.nascom.nasa.gov/data/archive, sohowww.estec.esa.nl/data/archive/, idc-medoc.ias.u-psud.fr]{} [^4]: sohowww.nascom.nasa. gov/publications/soho-documents/ICD/icd.pdf [^5]: www.mps.mpg.de/projects/soho/sumer/text/sum\_opguide.html [^6]: Berlin electron storage ring for synchrotron radiation; Physikalisch-Technische Bundesanstalt [^7]: sohowww.nascom.nasa.gov/solarsoft/soho/sumer/idl/contrib/wilhelm/rad/ [^8]: www.mps.mpg.de/projects/soho/sumer/text/sum\_opguide.html
---
author:
- Andrzej Rostworowski
title: 'Quasinormal frequencies of $D$-dimensional Schwarzschild black holes: evaluation via continued fraction method.'
---

M. Smoluchowski Institute of Physics, Jagellonian University, Reymonta 4, 30-059 Kraków, Poland\
arostwor@th.if.uj.edu.pl

Introduction and setup.
=======================

Our motivation to study quasinormal modes of Schwarzschild black holes in higher dimensions comes mainly from the possibility of studying the dynamics of gravitational collapse in vacuum, initiated with the work of Bizoń, Chmaj and Schmidt [@bcs]. At the expense of going to higher ($D \geq 5$) odd dimensions, they showed how to evade Birkhoff’s theorem and study gravitational collapse in vacuum in radial symmetry. It was shown [@bcs; @bcrst] that for $D=5, \, 9$ the $D$-dimensional Schwarzschild black hole is the attractor for large initial data, and that at intermediate times the solution settling down to the Schwarzschild black hole, obtained from nonlinear numerical evolution, is well approximated outside the horizon by the least damped quasinormal mode. Precise values of the fundamental quasinormal frequencies of Schwarzschild black holes in odd dimensions are therefore urgently needed, as they help to check the validity of the numerical code used in the evolution. Reliable values of the quasinormal frequencies of the Schwarzschild black hole are available for the $D=5$ case [@cly1; @cly2], but for $D>5$ only results from WKB methods have been published [@k1; @k2; @bcg], and it is known that these may not be accurate for small values of the angular momentum and/or for higher overtones. It is therefore worthwhile to obtain these values with Leaver’s method of continued fractions [@Leaver], which gives the most precise values of quasinormal frequencies in $D=4$ and $D=5$ dimensions.
We describe below how to modify Leaver’s method to obtain the gravitational vector and tensor quasinormal frequencies of the Schwarzschild black hole in $D \geq 10$ dimensions.\
The line element of the Schwarzschild solution in $D$ dimensions has the form $$\label{SchwarzschildD} ds^2=A(r)dt^2 - A^{-1}(r)dr^2 - r^2\,d\Omega^2_{D-2},$$ with $$\label{laps} A(r)=1-\left(\frac {r_h} {r} \right)^{D-3}.$$ In what follows we take $r_h=1$. In the linear approximation the radial component of a gravitational vector or tensor perturbation of the metric (\[SchwarzschildD\]) satisfies the following Schrödinger-type differential equation $$\label{master} \frac {d^2} {dx^2} \psi + A(r) \left( \frac {L(L+D-3)} {r^2} + \frac {(D-2)(D-4)} {4r^2} + \frac {(1-s^2)(D-2)^2} {4r^{D-1}} \right)\psi = k^2 \psi,$$ where the tortoise coordinate $x$ is defined by $dx/dr\,=\,A^{-1}(r)$ and the parameter $s$ depends on the type of the perturbation ($s=0$ for the gravitational tensor and $s=2$ for the gravitational vector perturbation). Eq. (\[master\]), derived independently by Gibbons and Hartnoll [@gh] and Ishibashi and Kodama [@ik], generalises the well-known Regge-Wheeler equation [@rw] to $D$ dimensions. In this setting quasinormal modes are defined as solutions of (\[master\]) satisfying the outgoing-wave boundary conditions $$\label{bc} \psi \stackrel{x \rightarrow \pm \infty}{\sim} \exp (\pm i k x),$$ with Im$(k)<0$. The corresponding values of $k$ are called quasinormal frequencies. Eq. (\[master\]) has $D-2$ regular singular points (at $r=0$ and at the $D-3$ roots of $r^{D-3}=1$) and an irregular singular point at infinity. Leaver’s method [@Leaver] of determining quasinormal frequencies consists in separating the boundary behavior and then transforming $r$ into $\rho(r)$ in such a way that the singularities at $r=1$ (the horizon) and at $r=\infty$ become the closest singularities in the $\rho$ plane.
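As a quick numerical sanity check of eq. (\[master\]), the metric function and the potential multiplying $\psi$ can be evaluated directly; since $A(r_h)=0$, the potential vanishes at the horizon. A minimal Python sketch (the function names are our own, not from the text):

```python
def A(r, D):
    """Metric function A(r) = 1 - r^{-(D-3)} with the horizon radius r_h = 1."""
    return 1.0 - r ** (-(D - 3))

def V(r, D, L, s):
    """Potential multiplying psi in the master equation, for tensor (s = 0)
    and vector (s = 2) gravitational perturbations."""
    return A(r, D) * (L * (L + D - 3) / r**2
                      + (D - 2) * (D - 4) / (4 * r**2)
                      + (1 - s**2) * (D - 2)**2 / (4 * r**(D - 1)))
```

For $D=11$, $L=2$, $s=0$ the potential is positive outside the horizon and decays at large $r$, consistent with the scattering-problem form of eq. (\[master\]).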
In $D=4$ this is accomplished with the substitution $$\label{LeaverD4} \psi(r) = (r-1)^{-ik}r^{2ik}e^{ikr} u\left( \rho(r) \right),$$ where $$\label{series} u\left( \rho(r) \right) = \sum _{n=0}^{\infty} a_n \rho^n = \sum _{n=0}^{\infty} a_n \left(\frac {r-1} {r} \right)^n,$$ and the coefficients $a_n$ are given by the 3-term recurrence relation $$\label{reccurenceD3} \gamma^{(1)}_n a_{n+1} + \gamma^{(2)}_n a_{n} + \gamma^{(3)}_n a_{n-1} = 0,$$ with the initial conditions $a_{0}=1$, $a_{-1}=0$ and the $\gamma^{(j)}_n$ given in [@Leaver]. The quantization condition is then the convergence of the series (\[series\]) at the radius of convergence $\rho=1$. The two linearly independent solutions of the recurrence behave as $$\label{anlimit} a_n \stackrel{n \rightarrow \infty}{\sim} \exp \left( \pm \sqrt{-8 i k n} \right),$$ so it is the minimal solution that makes the series (\[series\]) convergent at $\rho=1$. The discrete values of $k$ for which the solution given by the initial conditions $a_{0}=1$ and $a_{-1}=0$ is a minimal one define the quasinormal frequencies.
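The distinction in (\[anlimit\]) between the minimal and the dominant solution is what makes a naive forward iteration of the recurrence useless for computing the minimal solution. A toy illustration (not Leaver’s recurrence) is the Bessel recurrence $J_{n+1}(x) = (2n/x)J_n(x) - J_{n-1}(x)$, whose minimal solution is $J_n(x)$: iterated forward in double precision, rounding errors excite the dominant solution, which grows rapidly and eventually swamps the minimal one:

```python
def forward_bessel_at_1(nmax):
    """Iterate the Bessel recurrence a_{n+1} = 2n*a_n - a_{n-1} (x = 1)
    forward, starting from double-precision values of J_0(1) and J_1(1).
    The iterates track the minimal solution J_n(1) only for small n; the
    admixture of the dominant solution then takes over."""
    a_prev, a = 0.7651976865579666, 0.44005058574493355  # J_0(1), J_1(1)
    for n in range(1, nmax):
        a_prev, a = a, 2 * n * a - a_prev
    return a
```

Around $n \approx 5$ the iterate still agrees with $J_5(1) \approx 2.4976\times 10^{-4}$, but by $n=20$ it exceeds one in magnitude even though $J_{20}(1) \sim 10^{-25}$; the minimal solution must instead be extracted by backward recursion or a continued fraction.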
For these values the following equation, involving an infinite continued fraction, holds: $$\label{continuedfrac} \frac{a_1}{a_0} = -\frac{\gamma^{(2)}_0}{\gamma^{(1)}_0 } = - \frac {\gamma^{(3)}_1} {\gamma^{(2)}_1 -} \, \frac {\gamma^{(1)}_1 \gamma^{(3)}_2} {\gamma^{(2)}_2 -} \, \frac {\gamma^{(1)}_2 \gamma^{(3)}_3} {\gamma^{(2)}_3 -}\dots \,.$$ (The first equality follows from the $n=0$ relation of the recurrence with $a_{-1}=0$.) To determine the quasinormal frequencies we truncate the infinite continued fraction in (\[continuedfrac\]) at some denominator and search for solutions of (\[continuedfrac\]) that are stable with respect to changes in the depth of this truncation.\
In general, for even $D>4$ the substitution $$\label{even} \psi(r) = \left(\frac {r-1} {r}\right)^{-ik/(D-3)}e^{ikr} \sum _{n=0}^{\infty} a_n \left(\frac {r-1} {r} \right)^n$$ leads to a $(2(D-3)+1)$-term recurrence relation, while for odd $D$ the substitution $$\label{odd} \psi(r) = \left(\frac {r-1} {r+1}\right)^{-ik/(D-3)}e^{ikr} \sum _{n=0}^{\infty} a_n \left(\frac {r-1} {r} \right)^n$$ leads to a $2(D-3)$-term recurrence relation. These recurrence relations can be reduced to 3-term ones using Gauss elimination as in [@Leaver2]. However, as $D$ increases, more and more of the $D-3$ singularities, spaced uniformly on the unit circle $|r|=1$, approach the horizon at $r=1$, and no simple transformation can move them outside the circle in the $\rho$ plane that is centered at the image of the horizon and whose radius is set by the image of $r=\infty$. In the case of eq. (\[master\]), this difficulty first arises at $D=10$. (The transformation $\rho(r)$ has to be simple enough to be easily inverted into $r(\rho)$; in our case it is a homography.)
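The bottom-up evaluation of the truncated continued fraction in (\[continuedfrac\]) can be illustrated on a toy 3-term recurrence with a known minimal solution. Below, $\gamma^{(1)}_n=1$, $\gamma^{(2)}_n=-2n/x$, $\gamma^{(3)}_n=1$ is the Bessel recurrence at $x=1$, whose minimal solution is $J_n(1)$, so the continued fraction converges to $a_1/a_0 = J_1(1)/J_0(1) \approx 0.575081$; stability of the result under a change of the truncation depth is precisely the criterion used in the text (this is an illustration of the numerical prescription only, not the Schwarzschild coefficients of [@Leaver]):

```python
def cf_ratio(gamma1, gamma2, gamma3, depth):
    """Evaluate the ratio a_1/a_0 of the minimal solution of
    gamma1(n) a_{n+1} + gamma2(n) a_n + gamma3(n) a_{n-1} = 0
    from the continued fraction, truncated at the given depth and
    evaluated bottom-up (tail ratio set to zero)."""
    r = 0.0
    for n in range(depth, 0, -1):
        r = -gamma3(n) / (gamma2(n) + gamma1(n) * r)
    return r  # r = a_1/a_0

# Toy check: Bessel recurrence at x = 1, minimal solution J_n(1).
x = 1.0
ratio30 = cf_ratio(lambda n: 1.0, lambda n: -2.0 * n / x, lambda n: 1.0, 30)
ratio60 = cf_ratio(lambda n: 1.0, lambda n: -2.0 * n / x, lambda n: 1.0, 60)
```

For a quasinormal-mode computation one would instead fix the truncation depth, treat the difference between the two sides of (\[continuedfrac\]) as a function of $k$, and hand it to a root finder.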
Therefore, for $D \geq 10$ the solution starting from the horizon has to be continued through some midpoints $0<\rho<1$ lying within the radius of convergence of the series representation of the solution currently in use, and Leaver’s continued fraction condition can be applied only once it is the irregular singularity corresponding to $r=\infty$ that limits the radius of convergence of the series representation in use.

The $D=11$ case.
================

As an example, to illustrate how the above prescription works, we determine the quasinormal frequencies of the Schwarzschild black hole in $D=11$ dimensions. We choose $D=11$ because, given our motivation stated in the introduction, we are interested in odd dimensions, and $D=11$ is the smallest odd dimension in which Leaver’s method in its original setting breaks down for the $D$-dimensional generalization of the Regge-Wheeler potential (\[master\]). In eq. (\[master\]) we substitute $$%\label{} \psi(r) = \left(\frac {r-1} {r+1}\right)^{-ik/(D-3)}e^{ikr} u\left(\rho(r)\right), \qquad \rho(r) = \frac {r-1} {r}.$$ The singular points of eq. (\[master\]) at $r = 1, \, e^{\pm i \pi / 4}, \, \infty$ are transformed into $\rho= 0, \, 1-1/\sqrt{2} \pm i/\sqrt{2}, \, 1$, respectively (the other singular points lie at $|\rho|>1$). The singularities at $\rho= 1-1/\sqrt{2} \pm i/\sqrt{2}$ limit the radius of convergence of the series representation of the solution $u\left( \rho \right) = \sum _{n=0}^{\infty} a_n \rho^n$ to $\sqrt{2-\sqrt{2}} \approx 0.76$. We choose $\rho_0 = 1/2$ (which is a regular point) as a midpoint.
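The singularity locations and the resulting radius of convergence quoted above are easy to verify numerically (our own check; for $D=11$ the regular singular points away from $r=0$ are the eighth roots of unity):

```python
import cmath
import math

def rho(r):
    """Homographic map rho(r) = (r - 1)/r used in the text."""
    return (r - 1) / r

# The eight roots of r^8 = 1 are regular singular points for D = 11.
roots = [cmath.exp(1j * math.pi * k / 4) for k in range(8)]
# Exclude the horizon r = 1, which maps to rho = 0.
images = [rho(r) for r in roots if abs(r - 1) > 1e-12]

# Radius of convergence about rho = 0: distance to the nearest singularity.
radius = min(abs(z) for z in images)
```

One finds `radius` $= \sqrt{2-\sqrt{2}} \approx 0.765$, set by the images $1-1/\sqrt{2} \pm i/\sqrt{2}$ of $r=e^{\pm i\pi/4}$; all other images lie outside the unit circle.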
We have $$%\label{} u\left( \rho \right) = \sum _{n=0}^{\infty} a_n \rho^n = \sum _{n=0}^{\infty} \tilde{a}_n (\rho - \rho_0)^n,$$ where $$\label{atildas} \tilde{a}_0 = \sum _{n=0}^{\infty} a_n \rho_0^n, \qquad \tilde{a}_1 = \sum _{n=1}^{\infty} n a_n \rho_0^{n-1}.$$ The coefficients $\tilde{a}_n$ fulfill a $(2(D-3)+1)=17$-term recurrence relation which, reduced to a 3-term one via Gauss elimination [@Leaver2] and then inserted into (\[continuedfrac\]), yields the quasinormal frequencies.\
All recurrence relations are obtained analytically with the *Mathematica* computer algebra package. All other tasks (finding $\tilde{a}_1 / \tilde{a}_0$ from eq. (\[atildas\]), reducing the $(2(D-3)+1)$-term recurrence relation for the $\tilde{a}_n$ to a 3-term one via Gauss elimination, and finding the roots of the continued fraction relation (\[continuedfrac\])) are done numerically, by a program written in the C programming language. To determine the roots of the continued fraction relation (\[continuedfrac\]) we use the Newton-Raphson root-finding algorithm [@nr].\
The first three quasinormal frequencies for vector and tensor gravitational perturbations of the Schwarzschild black hole, for different values of $L$, are given in table \[D11\]. Our values of the fundamental frequencies for tensor modes are consistent with [@k1] (in [@k1] the values of the fundamental frequencies were given for scalar field perturbations, and a scalar field perturbation obeys exactly the same equation as a tensor gravitational perturbation). In order to compare the values obtained from our modification of Leaver’s method with the values given in [@bcg], we also calculate the first three quasinormal frequencies for vector and tensor gravitational perturbations of the Schwarzschild black hole in $D=10$ dimensions. They are listed in table \[D10\]. Comparing with [@bcg] we see perfect agreement for larger values of $L$. This makes us feel confident in our results.
However, for smaller values of $L$, and especially for the overtones, there are differences exceeding 10%. As Leaver’s method works well both for smaller and larger values of $L$, we believe that the error is on the WKB-method side (see the comments in [@bcg]).

  $D=11$

   L    $n=0$                 $n=1$                 $n=2$                 $n=0$                 $n=1$                 $n=2$
  ---- --------------------- --------------------- --------------------- --------------------- --------------------- ---------------------
   2    $3.6788 -1.0588 i$    $2.3419 -2.8190 i$    $1.2130 -3.9731 i$    $4.3920 -1.0577 i$    $3.3356 -3.0313 i$    $1.9912 -3.8491 i$
   3    $4.4533 -1.0331 i$    $3.4147 -2.9417 i$    $1.9955 -3.7743 i$    $5.1231 -1.0507 i$    $4.2669 -3.0765 i$    $2.7018 -4.0946 i$
   4    $5.2343 -1.0187 i$    $4.4049 -2.9628 i$    $2.8490 -4.0060 i$    $5.8540 -1.0463 i$    $5.1305 -3.0910 i$    $3.4995 -4.4424 i$
   5    $6.0134 -1.0120 i$    $5.3226 -2.9745 i$    $3.7955 -4.3603 i$    $6.5849 -1.0435 i$    $5.9561 -3.0967 i$    $4.4432 -4.7715 i$
   6    $6.7881 -1.0097 i$    $6.1936 -2.9862 i$    $4.8328 -4.6430 i$    $7.3160 -1.0415 i$    $6.7587 -3.0994 i$    $5.4481 -4.9519 i$
   7    $7.5579 -1.0096 i$    $7.0340 -2.9976 i$    $5.8529 -4.7970 i$    $8.0471 -1.0401 i$    $7.5459 -3.1007 i$    $6.4039 -5.0333 i$
   8    $8.3232 -1.0105 i$    $7.8533 -3.0082 i$    $6.8146 -4.8826 i$    $8.7783 -1.0390 i$    $8.3226 -3.1014 i$    $7.3091 -5.0753 i$
   9    $9.0846 -1.0120 i$    $8.6576 -3.0177 i$    $7.7286 -4.9366 i$    $9.5095 -1.0383 i$    $9.0914 -3.1019 i$    $8.1784 -5.1000 i$
  10    $9.8425 -1.0136 i$    $9.4505 -3.0262 i$    $8.6084 -4.9742 i$    $10.2408 -1.0377 i$   $9.8544 -3.1021 i$    $9.0222 -5.1159 i$
  11    $10.5976 -1.0152 i$   $10.2348 -3.0337 i$   $9.4632 -5.0021 i$    $10.9721 -1.0372 i$   $10.6128 -3.1023 i$   $9.8473 -5.1268 i$

  : [The first three quasinormal frequencies for vector and tensor perturbation of the Schwarzschild black hole in $D=11$ dimensions.]{}[]{data-label="D11"}

  $D=10$

   L    $n=0$                 $n=1$                 $n=2$                 $n=0$                 $n=1$                 $n=2$
  ---- --------------------- --------------------- --------------------- --------------------- --------------------- ---------------------
   2    $3.2334 -0.9603 i$    $2.0119 -2.7275 i$    $0.9784 -3.5136 i$    $3.9209 -0.9621 i$    $3.0410 -2.8514 i$    $1.5315 -3.5723 i$
   3    $3.9946 -0.9337 i$    $3.1233 -2.7391 i$    $1.5680 -3.5028 i$    $4.6311 -0.9555 i$    $3.9309 -2.8507 i$    $2.2219 -3.9757 i$
   4    $4.7607 -0.9211 i$    $4.0833 -2.7243 i$    $2.4839 -3.9023 i$    $5.3414 -0.9515 i$    $4.7545 -2.8456 i$    $3.1902 -4.4737 i$
   5    $5.5225 -0.9165 i$    $4.9654 -2.7248 i$    $3.6165 -4.2895 i$    $6.0519 -0.9489 i$    $5.5446 -2.8411 i$    $4.3063 -4.6452 i$
   6    $6.2781 -0.9158 i$    $5.8018 -2.7316 i$    $4.7032 -4.4370 i$    $6.7627 -0.9472 i$    $6.3149 -2.8377 i$    $5.2808 -4.6848 i$
   7    $7.0278 -0.9168 i$    $6.6097 -2.7400 i$    $5.6773 -4.5027 i$    $7.4735 -0.9460 i$    $7.0722 -2.8350 i$    $6.1763 -4.6985 i$
   8    $7.7725 -0.9185 i$    $7.3984 -2.7484 i$    $6.5828 -4.5412 i$    $8.1845 -0.9451 i$    $7.8206 -2.8330 i$    $7.0262 -4.7042 i$
   9    $8.5128 -0.9204 i$    $8.1735 -2.7561 i$    $7.4452 -4.5675 i$    $8.8955 -0.9444 i$    $8.5624 -2.8315 i$    $7.8469 -4.7067 i$
  10    $9.2495 -0.9222 i$    $8.9385 -2.7631 i$    $8.2785 -4.5869 i$    $9.6066 -0.9439 i$    $9.2994 -2.8303 i$    $8.6472 -4.7079 i$
  11    $9.9832 -0.9240 i$    $9.6957 -2.7692 i$    $9.0911 -4.6021 i$    $10.3178 -0.9435 i$   $10.0327 -2.8293 i$   $9.4327 -4.7084 i$
  12    $10.7144 -0.9256 i$   $10.4469 -2.7745 i$   $9.8882 -4.6143 i$    $11.0290 -0.9432 i$   $10.7629 -2.8285 i$   $10.2070 -4.7085 i$
  13    $11.4434 -0.9270 i$   $11.1932 -2.7792 i$   $10.6733 -4.6243 i$   $11.7402 -0.9429 i$   $11.4908 -2.8279 i$   $10.9725 -4.7085 i$
  14    $12.1707 -0.9282 i$   $11.9354 -2.7833 i$   $11.4490 -4.6327 i$   $12.4514 -0.9427 i$   $12.2166 -2.8273 i$   $11.7310 -4.7084 i$
  15    $12.8963 -0.9293 i$   $12.6743 -2.7869 i$   $12.2170 -4.6399 i$   $13.1627 -0.9425 i$   $12.9409 -2.8269 i$   $12.4839 -4.7082 i$

  : [The first three quasinormal frequencies for vector and tensor perturbation of the Schwarzschild black hole in $D=10$ dimensions.]{}[]{data-label="D10"}

The $D=9$ case.
===============

Here we give the details for the $D=9$ case, skipped in [@bcrst].
The substitution (\[odd\]) leads to the $2(D-3)=12$-term recurrence relation $$\label{reccurenceD9} \gamma^{(1)}_n a_{n+1} + \gamma^{(2)}_n a_{n} + ... + \gamma^{(12)}_n a_{n-10} = 0,$$ with $$\begin{aligned} \gamma^{(1)}_n &=& 216(1+ n)(ik- 3(1+ n)) \\ \gamma^{(2)}_n &=& - 9 (24 (5 + 12 n) ik + 40 k^2 - 252 + 147 s^2- 12 L(L+ 6)- 36 n (9 + 13 n)) \\ \gamma^{(3)}_n &=& 6 (12 (- 37 + 134 n) ik + 284 k^2 - 1323 + 1764 s^2- 36 L(L+ 6)+ 36 n (41 - 62 n)) \\ \gamma^{(4)}_n &=& - 9 (8 (- 289 + 292 n) ik + 452 k^2 - 4773 + 4312 s^2- 28 L(L+ 6)+ 12 n (515 - 253 n)) \\ \gamma^{(5)}_n &=& 4 (6 (- 2186 + 1279 n) ik + 1508 k^2 - 34290 + 21609 s^2- 36 L(L+ 6)+ 18 n (1870 - 547 n)) \\ \gamma^{(6)}_n &=& - (24 (- 3219 + 1324 n) ik + 6052 k^2 - 265050 + 130095 s^2- 36 L(L+ 6)+ 36 n (5579 - 1161 n)) \\ \gamma^{(7)}_n &=& 2 (12 (- 3109 + 986 n) ik + 2080 k^2 - 167877 + 69237 s^2+ 144 n (712 - 115 n)) \\ \gamma^{(8)}_n &=& - (48 (- 1007 + 260 n) ik + 1888 k^2 - 289215 + 105399 s^2+ 36 n (4097 - 541 n)) \\ \gamma^{(9)}_n &=& 8 (3 (- 849 + 185 n) ik + 64 k^2 - 21168 + 7056 s^2+ 9 n (1029 - 115 n)) \\ \gamma^{(10)}_n &=& - 2 (48 (- 53 + 10 n) ik + 32 k^2 - 32463 + 10143 s^2+ 18 n (691 - 67 n)) \\ \gamma^{(11)}_n &=& 6 (16 (- 6 + n) ik - 2463 + 735 s^2+ 24 n (35 - 3 n)) \\ \gamma^{(12)}_n &=& 9 (13 - 2 n+ 7 s) (13 - 2 n- 7 s)\end{aligned}$$ Gauss elimination [@Leaver2] and insertion into (\[continuedfrac\]) again yields the quasinormal frequencies. The first three quasinormal frequencies for vector and tensor gravitational perturbations of the Schwarzschild black hole, for different values of $L$, are given in table \[D9\].
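The Gauss-elimination step of [@Leaver2] removes the trailing term of the recurrence by substituting the already-reduced relation one index lower; repeating it term by term reduces a many-term recurrence to a 3-term one. The sketch below (our own toy, with the leading coefficient normalized to one) applies one such step to a 4-term recurrence satisfied by $a_n = 2^n + 3^n$ and recovers the underlying 3-term relation $a_{n+1} - 5a_n + 6a_{n-1} = 0$ exactly:

```python
def reduce_4term(Q, R, S, Q1, R1, nmax):
    """Reduce a_{n+1} + Q(n) a_n + R(n) a_{n-1} + S(n) a_{n-2} = 0 (n >= 2),
    supplemented by the base relation a_2 + Q1 a_1 + R1 a_0 = 0, to a
    3-term recurrence a_{n+1} + Qp[n] a_n + Rp[n] a_{n-1} = 0: solve the
    reduced relation at n-1 for a_{n-2} and substitute it at step n."""
    Qp, Rp = {1: Q1}, {1: R1}
    for n in range(2, nmax + 1):
        Qp[n] = Q(n) - S(n) / Rp[n - 1]
        Rp[n] = R(n) - S(n) * Qp[n - 1] / Rp[n - 1]
    return Qp, Rp

# Toy 4-term recurrence a_{n+1} - 4 a_n + a_{n-1} + 6 a_{n-2} = 0, obtained
# by adding two shifts of a_{n+1} - 5 a_n + 6 a_{n-1} = 0 (solution 2^n + 3^n).
Qp, Rp = reduce_4term(lambda n: -4.0, lambda n: 1.0, lambda n: 6.0,
                      -5.0, 6.0, 12)
```

In the $D=9$ computation the same elimination is applied repeatedly to bring the 12-term recurrence down to 3 terms before the continued fraction (\[continuedfrac\]) is evaluated.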
  $D=9$

   L    $n=0$                 $n=1$                 $n=2$                 $n=0$                 $n=1$                 $n=2$
  ---- --------------------- --------------------- --------------------- --------------------- --------------------- ---------------------
   2    $2.7928 -0.8542 i$    $1.7792 -2.5965 i$    $0.5699 -3.0121 i$    $3.4488 -0.8601 i$    $2.7548 -2.6116 i$    $0.9903 -3.3373 i$
   3    $3.5389 -0.8278 i$    $2.8438 -2.4808 i$    $1.1129 -3.2806 i$    $4.1342 -0.8541 i$    $3.5853 -2.5825 i$    $1.5262 -4.2850 i$
   4    $4.2863 -0.8180 i$    $3.7573 -2.4494 i$    $2.3546 -3.9155 i$    $4.8200 -0.8506 i$    $4.3624 -2.5661 i$    $3.2201 -4.3333 i$
   5    $5.0260 -0.8161 i$    $4.5953 -2.4469 i$    $3.5813 -4.0450 i$    $5.5061 -0.8483 i$    $5.1123 -2.5559 i$    $4.2023 -4.3007 i$
   6    $5.7578 -0.8171 i$    $5.3913 -2.4524 i$    $4.5724 -4.0828 i$    $6.1926 -0.8469 i$    $5.8463 -2.5491 i$    $5.0773 -4.2796 i$
   7    $6.4826 -0.8191 i$    $6.1617 -2.4597 i$    $5.4646 -4.1045 i$    $6.8792 -0.8458 i$    $6.5699 -2.5443 i$    $5.8994 -4.2651 i$
   8    $7.2018 -0.8214 i$    $6.9152 -2.4669 i$    $6.3037 -4.1198 i$    $7.5660 -0.8451 i$    $7.2863 -2.5409 i$    $6.6898 -4.2547 i$
   9    $7.9165 -0.8236 i$    $7.6569 -2.4735 i$    $7.1097 -4.1317 i$    $8.2529 -0.8445 i$    $7.9975 -2.5383 i$    $7.4592 -4.2470 i$
  10    $8.6275 -0.8255 i$    $8.3899 -2.4793 i$    $7.8933 -4.1413 i$    $8.9399 -0.8441 i$    $8.7048 -2.5363 i$    $8.2136 -4.2411 i$
  11    $9.3354 -0.8273 i$    $9.1161 -2.4844 i$    $8.6607 -4.1492 i$    $9.6269 -0.8438 i$    $9.4091 -2.5348 i$    $8.9571 -4.2365 i$
  12    $10.0409 -0.8288 i$   $9.8370 -2.4887 i$    $9.4161 -4.1558 i$    $10.3140 -0.8435 i$   $10.1111 -2.5336 i$   $9.6922 -4.2328 i$
  13    $10.7442 -0.8301 i$   $10.5537 -2.4925 i$   $10.1620 -4.1614 i$   $11.0011 -0.8433 i$   $10.8112 -2.5325 i$   $10.4207 -4.2298 i$
  14    $11.4458 -0.8313 i$   $11.2670 -2.4957 i$   $10.9004 -4.1662 i$   $11.6883 -0.8431 i$   $11.5098 -2.5317 i$   $11.1439 -4.2274 i$
  15    $12.1460 -0.8323 i$   $11.9774 -2.4986 i$   $11.6328 -4.1703 i$   $12.3755 -0.8430 i$   $12.2070 -2.5310 i$   $11.8628 -4.2254 i$

  : [The first three quasinormal frequencies for vector and tensor gravitational perturbation of the Schwarzschild black hole in $D=9$
dimensions.]{}[]{data-label="D9"}

Summary.
========

We have shown how to use Leaver’s method of continued fractions [@Leaver] in the case where a number of regular singular points set a smaller bound on the radius of convergence of the series representation of a solution than the irregular singular point does. This prescription is general and, together with Leaver’s method, makes a powerful tool for determining quasinormal frequencies of wave equations and resonances in quantum mechanics.

Acknowledgments {#acknowledgments .unnumbered}
===============

I am greatly indebted to Piotr Bizoń for discussions, remarks and support in research. This work was supported by the Polish Ministry of Science Grant No. 1 P03B 012029.

[1]{}
E. Leaver, Proc. R. Soc. Lond. **A402**, 285 (1985).
P. Bizoń, T. Chmaj, A. Rostworowski, B.G. Schmidt and Z. Tabor, Phys. Rev. **D72**, 121502(R) (2005) \[arXiv: gr-qc/0511064\].
P. Bizoń, T. Chmaj, B. Schmidt, Phys. Rev. Lett. **95**, 071102 (2005) \[arXiv: gr-qc/0506074\].
V. Cardoso, J.P.S. Lemos, S. Yoshida, Phys. Rev. **D69**, 044004 (2004) \[arXiv: gr-qc/0309112\].
V. Cardoso, J.P.S. Lemos, S. Yoshida, JHEP **0312**, 041 (2003) \[arXiv: hep-th/0311260\].
R.A. Konoplya, Phys. Rev. **D68**, 024018 (2003) \[arXiv: gr-qc/0303052\].
R.A. Konoplya, Phys. Rev. **D68**, 124017 (2003) \[arXiv: gr-qc/0309030\].
E. Berti, M. Cavaglià, L. Gualtieri, Phys. Rev. **D69**, 124011 (2004) \[arXiv: hep-th/0309203\].
G. Gibbons and S.A. Hartnoll, Phys. Rev. **D66**, 064024 (2002) \[arXiv: hep-th/0206202\].
A. Ishibashi and H. Kodama, Prog. Theor. Phys. **110**, 901 (2003) \[arXiv: hep-th/0305185\].
T. Regge, J.A. Wheeler, Phys. Rev. **108**, 1063 (1957).
E. Leaver, Phys. Rev. **D41**, 2986 (1990).
W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, *Numerical Recipes in C. The Art of Scientific Computing. Second Edition.* Cambridge University Press, 1992.
---
abstract: |
  We assume that we are given a time series of data from a dynamical system and our task is to learn the flow map of the dynamical system. We present a collection of results on how to enforce constraints coming from the dynamical system in order to accelerate the training of deep neural networks to represent the flow map of the system as well as increase their predictive ability. In particular, we provide ways to enforce constraints during training for all three major modes of learning, namely supervised, unsupervised and reinforcement learning. In general, the dynamic constraints need to include terms which are analogous to memory terms in model reduction formalisms. Such memory terms act as a restoring force which corrects the errors committed by the learned flow map during prediction. For supervised learning, the constraints are added to the objective function. For the case of unsupervised learning, in particular generative adversarial networks, the constraints are introduced by augmenting the input of the discriminator. Finally, for the case of reinforcement learning and in particular actor-critic methods, the constraints are added to the reward function. In addition, for the reinforcement learning case, we present a novel approach based on homotopy of the action-value function in order to stabilize and accelerate training. We use numerical results for the Lorenz system to illustrate the various constructions.
author:
- Panos Stinis
bibliography:
- 'theory.bib'
title: 'Enforcing constraints for time series prediction in supervised, unsupervised and reinforcement learning'
---

Introduction {#introduction .unnumbered}
============

Scientific machine learning, which combines the strengths of scientific computing with those of machine learning, is becoming a rather active area of research. Several related priority research directions were stated in the recently published report [@doe_sml_report].
In particular, two priority research directions are: (i) how to leverage scientific domain knowledge in machine learning (e.g. physical principles, symmetries, constraints); and (ii) how machine learning can enhance scientific computing (e.g. reduced-order or sub-grid physics models, parameter optimization in multiscale simulations). Our aim in the current work is to present a collection of results that contribute to both of the aforementioned priority research directions. On the one hand, we provide ways to enforce constraints coming from a dynamical system during the training of a neural network to represent the flow map of the system. Thus, prior domain knowledge is incorporated in the neural network training. On the other hand, as we will show, the accurate representation of the dynamical system flow map through a neural network is equivalent to constructing a temporal integrator for the dynamical system modified to account for unresolved temporal scales. Thus, machine learning can enhance scientific computing. We assume that we are given data in the form of a time series of the states of a dynamical system (a training trajectory). Our task is to train a neural network to learn the flow map of the dynamical system. This means optimizing the parameters of the neural network so that when it is presented with the state of the system at one instant, it will accurately predict the state of the system at another instant a fixed time interval apart. If we use the data alone to train a neural network to represent the flow map, it is easy to construct simple examples where the trained flow map has rather poor predictive ability [@stinis2018]. The reason is that the given data train the flow map to respond accurately only as long as the state of the system is on the training trajectory. However, at every timestep, when we invoke the flow map to predict an estimate of the state at the next timestep, we commit an error.
After some steps, the predicted trajectory veers into parts of phase space where the neural network has not been trained. When this happens, the neural network’s predictive ability degrades rapidly. One way to aid the neural network in its training task is to provide data that account for this inevitable error. In [@stinis2018], we advanced the idea of using a noisy version of the training data, i.e. a noisy version of the training trajectory. In particular, we attach a noise cloud around each point on the training trajectory. During training, the neural network learns to take as input points from the noise cloud and map them back to the [*noiseless*]{} trajectory at the next time instant. This is an [*implicit*]{} way of encoding a restoring force in the parameters of the neural network (see Section \[implicit\_error\_correction\] for more details). We have found that this modification can improve the predictive ability of the trained neural network, but only up to a point (see Section \[numerical\] for numerical results). We want to aid the neural network further by enforcing constraints that we know the state of the system satisfies. In particular, we assume that we have knowledge of the differential equations that govern the evolution of the system (our constructions also work if we assume algebraic constraints; see e.g. [@stinis2018]). Except for special cases, it is not advisable to try to enforce the differential equations directly at the continuum level. Instead, we can discretize the equations in time using various numerical methods. We want to incorporate the discretized dynamics into the training process of the neural network. The purpose of such an attempt can be explained in two ways: (i) we want to aid the neural network so that it does not have to discover the dynamics (physics) from scratch; and (ii) we want the constraints to act as regularizers for the optimization problem which determines the parameters of the neural network.
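A minimal sketch of the noisy-data construction described above, for the Lorenz system used in our numerical illustrations (the timestep, noise level, and function names here are arbitrary illustrative choices): each noisy input point is paired with the noiseless state one timestep later, so the trained map learns to pull states from the noise cloud back onto the trajectory.

```python
import random

def lorenz_rhs(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system."""
    return [sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2]]

def rk4_step(x, dt):
    """One classical Runge-Kutta step for the Lorenz system."""
    k1 = lorenz_rhs(x)
    k2 = lorenz_rhs([x[i] + 0.5 * dt * k1[i] for i in range(3)])
    k3 = lorenz_rhs([x[i] + 0.5 * dt * k2[i] for i in range(3)])
    k4 = lorenz_rhs([x[i] + dt * k3[i] for i in range(3)])
    return [x[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

def noisy_training_pairs(x0, n_steps, dt=0.01, noise=0.1, seed=0):
    """Return (noisy input, clean target) pairs along one training trajectory:
    the input is drawn from a Gaussian cloud around the clean state, while
    the target is the noiseless state one timestep later."""
    rng = random.Random(seed)
    traj = [x0]
    for _ in range(n_steps):
        traj.append(rk4_step(traj[-1], dt))
    pairs = []
    for i in range(n_steps):
        noisy = [traj[i][j] + rng.gauss(0.0, noise) for j in range(3)]
        pairs.append((noisy, traj[i + 1]))  # noise cloud -> noiseless next state
    return pairs
```

The pairs can then be fed to any regression setup; the restoring force is encoded implicitly because the targets are always noiseless.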
Closer inspection of the concept of noisy data and of enforcing the discretized constraints reveals that they can be combined. However, this needs to be done with care. Recall that when we use noisy data we train the neural network to map a point from the noise cloud back to the noiseless point at the next time instant. Thus, we cannot enforce the discretized constraints as they are because the dynamics have been modified. In particular, the use of noisy data requires that the discretized constraints be modified to account [*explicitly*]{} for the restoring force. We have called the modification of the discretized constraints the [*explicit*]{} error-correction (see Section \[explicit\_error\_correction\]). The meaning of the restoring force is analogous to that of memory terms in model reduction formalisms [@chorinstinis2007]. To see this, note that the flow map as well as the discretization of the original constraints are based on a [*finite*]{} timestep. The timescales that are smaller than the timestep used are [*not*]{} resolved explicitly. However, their effect on the resolved timescales cannot be ignored. In fact, it is what causes the inevitable error at each application of the flow map. The restoring force that we include in the modified constraints is there to remedy this error i.e. to account for the unresolved timescales albeit in a simplified manner. This is precisely the role played by memory terms in model reduction formalisms. In the current work we have restricted attention to [*linear*]{} error-correction terms. The linear terms come with coefficients whose magnitude is optimized as part of the training. In this respect, optimizing the error-correction term coefficients becomes akin to [*temporal renormalization*]{}. This means that the coefficients depend on the temporal scale at which we probe the system [@goldenfeld1992; @barenblatt2003]. Finally, we note that the error-correction term can be more complex than linear. 
In fact, it can be modeled by a separate neural network. Results for such more elaborate error-correction terms will be presented elsewhere. We have implemented constraint enforcement in all three major modes of learning. For [*supervised*]{} learning, the constraints are added to the objective function (see Section \[supervised\]). For the case of [*unsupervised*]{} learning, in particular generative adversarial networks [@goodfellowetal2014], the constraints are introduced by augmenting the input of the discriminator (see Section \[unsupervised\] and [@stinis2018]). Finally, for the case of [*reinforcement*]{} learning and in particular actor-critic methods [@sutton1999], the constraints are added to the reward function. In addition, for the reinforcement learning case, we have developed a novel approach based on homotopy of the action-value function in order to stabilize and accelerate training (see Section \[reinforcement\]). In recent years, there has been considerable interest in the development of methods that utilize data and physical constraints in order to train predictors for dynamical systems and differential equations e.g. see [@PhysRevE.91.032915; @raissi2018; @chen2018; @Han8505; @SIRIGNANO20181339; @felsberger2018; @wan2018; @MaE9994] and references therein. Our approach is different: it introduces the novel concept of training on purpose with modified (noisy) data in order to incorporate (implicitly or explicitly) a restoring force in the dynamics learned by the neural network flow map. We have also provided the connection between the incorporation of such restoring forces and the concept of memory in model reduction. The paper is organized as follows. Section \[constraints\_prediction\] explains the need for constraints to increase the accuracy/efficiency of time series prediction as well as the form that these constraints can have. 
Section \[enforcing\_constraints\] presents ways to enforce such constraints in supervised learning (Section \[supervised\]), unsupervised learning (Section \[unsupervised\]) and reinforcement learning (Section \[reinforcement\]). Section \[numerical\] contains numerical results for the various constructions using the Lorenz system as an illustrative example. Finally, Section \[discussion\] contains a brief discussion of the results as well as some ideas for current and future work. Constraints for time series prediction of dynamical systems {#constraints_prediction} =========================================================== Suppose that we are given a dynamical system described by an $M$-dimensional set of differential equations $$\label{odes} \frac{dx}{dt}=f(x),$$ where $x \in \mathbb{R}^M.$ The system needs to be supplemented with an initial condition $x(0)=x_0.$ Furthermore, suppose that we are provided with time series data from the system \[odes\]. This means a sequence of points from a trajectory of the system $\{x_i^{data}\}_{i=1}^N$ recorded at time intervals of length $\Delta t.$ We would like to use these time series data to train a neural network to represent the flow map of the system i.e. a map $H^{\Delta t}$ with the property $H^{\Delta t}x(t)=x(t+\Delta t)$ for all $t$ and $x(t).$ We want to find ways to enforce [*during training*]{} the constraints implied by the system \[odes\]. Before we proceed, we should mention that in addition to \[odes\], one could have extra constraints. For example, if the system is Hamiltonian, we have extra algebraic constraints since the system must evolve on an energy surface determined by its initial condition. We note that the framework we present below can also enforce algebraic constraints, but in the current work we will focus on the enforcing of dynamic constraints like the system \[odes\]. Enforcing dynamic constraints is more demanding than enforcing algebraic ones. 
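To make the setup concrete, the training trajectory $\{x_i^{data}\}_{i=1}^N$ can be produced by integrating the governing equations and recording the state every $\Delta t.$ A minimal sketch, assuming the Lorenz system (the illustrative example of Section \[numerical\]) with standard parameters and a simple forward Euler integrator; all names are illustrative:

```python
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side f(x) of the Lorenz system (M = 3)."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def training_trajectory(x0, dt, n_steps, f=lorenz):
    """Record a time series {x_i} at intervals dt; forward Euler is used
    here for brevity, but any accurate integrator could be substituted."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return np.array(xs)

traj = training_trajectory([1.0, 1.0, 1.0], dt=1e-3, n_steps=1000)
```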
Technically, the enforcing of algebraic constraints requires only knowledge of the state of the system. On the other hand, the enforcing of dynamic constraints requires knowledge of the state [*and*]{} of the rate of change of the state. It is not advisable to attempt to enforce directly the constraints in \[odes\]. To do that requires that the output of the neural network include both the state of the system and its rate of change. This [*doubles*]{} the dimension of the output and makes the necessary neural network size larger and its training more demanding. Instead, we will enforce constraints that involve only the state, albeit at more than one instant. For example, we can consider the simplest temporal discretization scheme, the forward Euler scheme [@haireretal1987], and discretize \[odes\] with timestep $\Delta t$ to obtain $$\label{odes_discrete_1} \hat{x}(t+\Delta t)=\hat{x}(t)+\Delta t f(\hat{x}(t)).$$ Then, we can choose to enforce \[odes\_discrete\_1\] during training of the flow map. To be more precise, we can train the neural network representation of the flow map such that \[odes\_discrete\_1\] holds for all the training data $\{x_i^{data}\}_{i=1}^N.$ In addition, one can consider more elaborate temporal discretization schemes e.g. explicit Runge-Kutta methods [@haireretal1987]. In such a case, \[odes\_discrete\_1\] is replaced by $$\label{odes_discrete_2} \hat{x}(t+\Delta t)=\hat{x}(t)+\Delta t f^{RK}(\hat{x}(t)),$$ where $f^{RK}(\hat{x}(t))$ represents the functional form of the Runge-Kutta update. Such an approach of enforcing the constraint is [*not*]{} enough to guarantee that the trained neural network representation of the flow map will be accurate. In fact, as can be seen through simple numerical examples [@stinis2018], the trained neural network flow map can lose its [*predictive*]{} ability rather fast. The reason is that we have used data from a time series i.e. a trajectory to train the neural network. However, a single trajectory is extremely unlikely (has measure zero) in the phase space of the system. 
Thus, the trained network predicts accurately [*as long as the predicted state remains on the training trajectory*]{}. But this is impossible, since the action of the flow map at every (finite) timestep involves an inevitable approximation error. If left unchecked, this approximation error causes the prediction to deviate into a region of phase space that the network has never trained on. Soon after, all the predictive ability of the network is lost. This observation highlights the need for an alternate way of enforcing the constraints. In fact, as we will explain now, it points towards the need for the enforcing of [*alternate*]{} constraints altogether. In particular, this observation underlines the need for the training of the neural network to include some kind of error-correcting mechanism. Such an error-correcting mechanism can help restore the trajectory predicted by the learned flow map when it inevitably starts deviating due to the finiteness of the used timestep. The way we have devised to implement this error-correcting mechanism can be [*implicit*]{} or [*explicit*]{}. By implicit we mean that we do not specify the functional form of the mechanism but only what we want it to achieve (Section \[implicit\_error\_correction\]). On the other hand, the explicit implementation of the error-correcting mechanism does involve the specification of the [*functional form*]{} of the mechanism (Section \[explicit\_error\_correction\]). The common ingredient for both implicit and explicit implementations is the use of a [*noisy*]{} version of the data during training. The main idea is that the training of the neural network must address the inevitability of the error that comes with the use of a finite timestep. For example, suppose that we are given data that are recorded every $\Delta t.$ The flow map we wish to train will produce states of the system at time instants that are $\Delta t$ apart. 
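The noisy-data idea can be made concrete as a preprocessing step: around each recorded point we draw a cloud of perturbed inputs whose training target is always the [*noiseless*]{} next point. A minimal sketch with a Gaussian cloud; the cloud extent and sample count are assumptions to be tuned:

```python
import numpy as np

def noisy_training_pairs(traj, n_samples, noise_scale, seed=None):
    """Build (input, target) pairs: inputs are sampled from a Gaussian
    cloud around each trajectory point x_i, while targets are the
    noiseless next point x_{i+1}."""
    rng = np.random.default_rng(seed)
    M = traj.shape[1]
    inputs, targets = [], []
    for i in range(len(traj) - 1):
        cloud = traj[i] + noise_scale * rng.standard_normal((n_samples, M))
        inputs.append(cloud)
        targets.append(np.repeat(traj[i + 1][None, :], n_samples, axis=0))
    return np.concatenate(inputs), np.concatenate(targets)
```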
Every time the flow map is applied, even if it is applied to a point on the exact trajectory, it will produce a state that has deviated from the exact trajectory at the next timestep. So, the trained flow map must learn how to correct such a deviation. Implicit error-correction {#implicit_error_correction} ------------------------- The [*implicit*]{} implementation of the error-correcting mechanism can be realized by the following simple procedure. We consider each point of the given time series data and enhance it with a (random) cloud of points centered at the point on the time series. Such a cloud of points accounts for our ignorance about the inevitable error that the flow map commits at every step. The next step is to train the neural network to map a point from this cloud back to the [*noiseless*]{} trajectory at the next timestep. In this way, the neural network is trained to incorporate an error-correcting mechanism [*implicitly*]{}. Of course, there are the questions of the extent of the cloud of noisy points as well as the number of samples we need from it. These parameters depend on the magnitude of the interval $\Delta t$ and the accuracy of the training data. For example, if the training data were produced by a numerical method with a known order of accuracy, then we expect the necessary extent of the noisy cloud to follow a scaling law with respect to the interval $\Delta t.$ Similarly, if the training data were produced by a numerical experiment with known measurement error, we expect the extent of the noisy cloud to depend on the measurement error. Explicit error-correction {#explicit_error_correction} ------------------------- The [*explicit*]{} implementation of the error-correcting mechanism requires the specification of the functional form of the mechanism in addition to enhancing the given time series data by a noisy cloud. 
The main idea is that the need for the incorporation of the error-correcting mechanism means that the flow map we have to learn is not that of the original system but of a [*modified*]{} system. Symbolically, the dynamics that the neural network based flow map must learn are given by “learned dynamics = original dynamics + error-correction”. As we have explained before, the error-correction term is needed due to the inevitable error caused by the use of a finite timestep. Such error-correction terms can be interpreted as memory terms appearing in model reduction formalisms [@chorinstinis2007]. However, note that here the reduction is in the [*temporal*]{} sense since it is caused by the use of a [*finite*]{} timestep. It can be thought of as a way to account for all the timescales that are contained in the interval $\Delta t$ and it is akin to [*temporal*]{} renormalization. Another way to interpret such an error-correction term is as a control mechanism [@isidori1995]. For the specific case of the forward Euler scheme given in \[odes\_discrete\_1\], the explicit implementation of the error-correcting mechanism means that we want our trained flow map to satisfy $$\label{odes_discrete_modified} \hat{x}(t+\Delta t)=\hat{x}(t)+\Delta t f(\hat{x}(t)) + \Delta t f^{C}(\hat{x}(t)),$$ where $f^{C}(\hat{x}(t))$ is the error-correcting term. The obvious question is the form of $f^{C}(\hat{x}(t)).$ The simplest state-dependent approximation is to assume that $f^{C}(\hat{x}(t))$ is a linear function of the state $\hat{x}(t),$ for example $f^{C}(\hat{x}(t))= A \hat{x}(t),$ where $A$ is an $M \times M$ matrix whose entries need to be determined. The entries of $A$ can be estimated during the training of the flow map neural network. There is no need to restrict the form of the correction term $f^{C}(\hat{x}(t))$ to a linear one. 
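The residual of the modified forward Euler scheme above, with the linear correction $f^{C}(x)=Ax,$ can be checked pointwise on any pair of consecutive states. A minimal sketch (names illustrative):

```python
import numpy as np

def corrected_euler_residual(x_now, x_next, f, dt, A):
    """Residual of the modified forward Euler constraint with the linear
    error-correction f^C(x) = A x; the residual vanishes exactly when
    x_next = x_now + dt*f(x_now) + dt*(A @ x_now)."""
    return x_next - x_now - dt * f(x_now) - dt * (A @ x_now)
```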
In fact, we can consider a [*separate*]{} neural network to represent $f^{C}(\hat{x}(t)).$ We have explored such constructions although a detailed presentation will appear in a future publication. A further question to be explored is the dependence of the elements of $A,$ or of the parameters of the network for the representation of the error-correcting term, on the timestep $\Delta t.$ In fact, we expect a scaling law dependence on $\Delta t$ which would be a manifestation of [*incomplete similarity*]{} [@barenblatt2003]. We also note that there is a further generalization of the error-correcting term $f^{C}(\hat{x}(t)),$ if we allow it to depend on the state of the system for times before $t.$ Given the analogy to memory terms alluded to above, such a dependence on the history of the evolution of the state of the system is an instance of a [*non-Markovian*]{} memory [@chorinstinis2007]. Finally, we note that \[odes\_discrete\_modified\] offers one more way to interpret the error-correction term, namely as a [*modified*]{} numerical method, a modified Euler scheme in this particular case, where the role of the error-correction term is to account for the error of the Euler scheme. Enforcing constraints in supervised, unsupervised and reinforcement learning {#enforcing_constraints} ============================================================================ In this section we will examine ways of enforcing the constraints in the three major modes of learning, namely supervised, unsupervised and reinforcement learning. Supervised learning {#supervised} ------------------- The case of [*supervised*]{} learning is the most straightforward. Let us assume that the flow map is represented by a deep neural network denoted by $G$ depending on the parameter vector $\theta_G$ (the parameter vector $\theta_G$ contains the weights and biases of the neural network). 
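Before writing the objective function down, it may help to see the two ingredients it will combine: the $L_2$ misfit of the network output to the noiseless targets, and the squared residual of the discretized constraint with a diagonal linear correction. A minimal numpy sketch, with a plain callable $G$ standing in for the network and $f$ assumed vectorized over rows (all names illustrative):

```python
import numpy as np

def supervised_constrained_loss(G, z, x_data, f, dt, a):
    """Average, over a noise-cloud batch, of the squared misfit to the
    noiseless targets plus the squared forward-Euler residual with the
    diagonal correction f^C_j(x) = -a_j x_j.
    z, x_data: (batch, M) arrays; a: length-M coefficient vector."""
    pred = G(z)
    misfit = np.sum((pred - x_data) ** 2, axis=1)
    residual = pred - z - dt * f(z) + dt * a * z
    return np.mean(misfit + np.sum(residual ** 2, axis=1))
```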
The simplest objective function is the $L_2$ discrepancy between the network predictions and the training data trajectory given by $$\label{supervised_loss} Loss_{supervised}=\frac{1}{\Lambda}\sum_{i=1}^{\Lambda} (G(z_{i})-x_i^{data})^2,$$ where $z_i$ is a point from the noise cloud around a point of the given training trajectory and $x_i^{data}$ is the [*noiseless*]{} point on the given training trajectory after time $\Delta t.$ Note that we have allowed freedom here in choosing the value of $\Lambda$ to accommodate various implementation choices e.g. number of samples from the noise cloud, mini-batch sampling etc. The parameter vector $\theta_G$ can be estimated by minimizing the objective function $Loss_{supervised}.$ For the sake of simplicity, suppose that we want to enforce the constraints given in \[odes\_discrete\_modified\] with a [*diagonal*]{} linear representation of the error-correcting term i.e. $f_j^{C}(\hat{x}(t))= -a_j \hat{x}_j(t),$ for $j=1,\ldots,M$ (the minus sign is in anticipation of this being a restoring force). Then we can consider the modified objective function given by $$\begin{gathered} Loss_{supervised}^{constraints}=\frac{1}{\Lambda}\sum_{i=1}^{\Lambda}\bigg[ (G(z_{i})-x_i^{data})^2 \notag \\ + \sum_{j=1}^M (G_j(z_{i})-z_{ij}-\Delta t f_j(z_{ij}) + \Delta t a_j z_{ij})^2 \bigg], \label{supervised_loss_constraints}\end{gathered}$$ where $z_{ij}$ is the $j$-th component of the noise cloud point $z_i$ and $f_j$ is the $j$-th component of the vector $f.$ Notice that the minimization of the modified objective function $Loss_{supervised}^{constraints}$ leads to the determination of both the parameter vector $\theta_G$ [*and*]{} the error-correcting representation parameters $a_j, \; j=1,\ldots,M.$ Also note that if instead of the forward Euler scheme we use e.g. 
a more elaborate Runge-Kutta method as given in \[odes\_discrete\_2\], then we can still use \[supervised\_loss\_constraints\] but with the vector $f$ replaced by $f^{RK}.$ Unsupervised learning - Generative Adversarial Networks {#unsupervised} ------------------------------------------------------- The next mode of learning for which we will examine how to enforce constraints is [*unsupervised*]{} learning, in particular Generative Adversarial Networks (GANs) [@goodfellowetal2014]. This material appeared first in [@stinis2018], albeit with different notation. We repeat it here with the current notation for the sake of completeness. Generative Adversarial Networks comprise two networks, a generator and a discriminator. The goal is to train the generator’s output distribution $p_g(x)$ to be close to that of the true data $p_{data}.$ We define a prior $p_z(z)$ on the generator input variables $z$ and a mapping $G(z;\theta_G)$ to the data space where $G$ is a differentiable function represented by a neural network with parameters $\theta_G.$ We also define a second neural network (the discriminator) $D(x;\theta_D),$ which outputs the probability that $x$ came from the true data distribution $p_{data}$ rather than $p_g.$ We train $D$ to [*maximize*]{} the probability of assigning the correct label to both training examples and samples from the generator $G.$ Simultaneously, we train $G$ to minimize $\log (1-D(G(z))).$ We can express the adversarial nature of the relation between $D$ and $G$ as the two-player min-max game with value function $V(D,G)$: $$\label{game_gan} \min_G \max_D V(D,G) = E_{x \sim p_{data}(x)}[\log D(x)] +E_{z \sim p_z(z)}[\log (1- D(G(z)))].$$ The min-max problem can be formulated as a bilevel minimization problem for the discriminator and the generator using the objective functions $-E_{x \sim p_{data}(x)}[\log D(x)]$ and $-E_{z \sim p_z(z)}[\log (D(G(z)))]$ respectively. 
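For reference, the two objectives of the bilevel formulation can be sketched directly, with plain callables standing in for the two networks (names illustrative):

```python
import numpy as np

def discriminator_loss(D, x_real, G, z):
    """-E[log D(x)] - E[log(1 - D(G(z)))], minimized over the discriminator."""
    return -np.mean(np.log(D(x_real))) - np.mean(np.log(1.0 - D(G(z))))

def generator_loss(D, G, z):
    """Non-saturating objective -E[log D(G(z))], minimized over the generator."""
    return -np.mean(np.log(D(G(z))))
```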
The modification of the objective function for the generator has been suggested to avoid early saturation of $\log (1- D(G(z)))$ due to the faster training of the discriminator [@goodfellowetal2014]. On the other hand, while this modification avoids saturation, the well-documented instability of GAN training appears [@arjovskybottou2017]. Even though the min-max game can be formulated as a bilevel minimization problem, in practice the discriminator and generator neural networks are usually updated iteratively. We are interested in training the generator $G$ to represent the flow map of the dynamical system. That means that if $z$ is the state of the system at a time instant $t,$ we would like to train the generator $G$ to produce as output $G(z),$ an accurate estimate of the state of the system at time $t+ \Delta t.$ In [@stinis2018] we have presented a way to enforce constraints on the output of the generator $G$ that respects the game-theoretic setup of GANs. We can do so by [*augmenting*]{} the input of the discriminator with the constraint residuals i.e. how well a sample satisfies the constraints. Of course, such an augmentation of the discriminator input should be applied [*both*]{} to the generator-created samples as well as to the samples from the true distribution. This means that we consider a two-player min-max game with the modified value function $$\begin{gathered} \min_G \max_D V^{constraints}(D,G) \notag \\ = E_{x \sim p_{data}(x)}[\log D(x,\epsilon_D(x))] +E_{z \sim p_z(z)}[\log (1- D(G(z),\epsilon_G(z)))],\label{game_gan_constraints}\end{gathered}$$ where $\epsilon_D(x)$ is the constraint residual for the true sample and $\epsilon_G(z)$ is the constraint residual for the generator-created sample. Note that in our setup, the generator input distribution $p_z(z)$ will be from the noise cloud around the training trajectory. On the other hand, the true data distribution $p_{data}$ is the distribution of values of the (noiseless) training trajectory. 
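A sketch of the augmentation, assuming the forward-Euler residual with diagonal linear correction for the generator samples (names illustrative; $f$ assumed vectorized):

```python
import numpy as np

def generator_residual(G, z, f, dt, a):
    """epsilon_G: forward-Euler constraint residual of the generator
    output, with diagonal correction; one component per state dimension."""
    return G(z) - z - dt * f(z) + dt * a * z

def augmented_input(x, eps):
    """The discriminator sees each sample together with its residual."""
    return np.concatenate([x, eps], axis=-1)
```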
As explained in [@stinis2018], taking the constraint residual $\epsilon_D(x)$ to be zero for the true samples can exacerbate the well-known saturation (instability) issue with training GANs. Thus, we take $\epsilon_D(x)$ to be a random variable with mean zero and small variance dictated by Monte-Carlo or other numerical/mathematical/physical considerations. On the other hand, for $\epsilon_G(z)$ we can use the constraint we want to enforce. For example, for the constraint based on the forward Euler scheme with [*diagonal*]{} linear error-correcting term, we take $\epsilon^j_G(z)=G_j(z)-z_{j}-\Delta t f_j(z_{j}) + \Delta t a_j z_{j}$ for $j=1,\ldots,M,$ where $z$ is a sample from the noise cloud around a point of the training time series data. The expression for the constraint residual $\epsilon^j_G(z)$ can be easily generalized for more elaborate numerical methods and error-correcting terms. Reinforcement learning - Actor-critic methods {#reinforcement} --------------------------------------------- The third and final mode of learning for which we will examine how to enforce constraints for time series prediction is [*reinforcement*]{} learning and in particular Actor-Critic (AC) methods [@sutton1999; @grondman2012; @silver2014; @lillicrap2015; @pfau2016]. We will also present a novel approach based on [*homotopy*]{} in order to stabilize and accelerate training. ### General setup of AC methods We will begin with the general setup of an AC method and then provide the necessary modifications to turn it into a computational device for training flow map neural network representations. The setup consists of an agent ([*actor*]{}) interacting with an environment in discrete timesteps. 
At each timestep $t$ the agent is supplied with an observation of the environment and the agent state $s_t.$ Based on the state $s_t$ it takes an action $a_t$ and receives a scalar reward $r_t.$ An agent’s behavior is based on an action policy $\pi,$ which is a map from the states to a probability distribution over the actions, $\pi: \; \mathcal{S} \rightarrow \mathcal{P_{\pi}}(\mathcal{A})$ where $\mathcal{S}$ is the state space and $\mathcal{A}$ is the action space. We also need to specify an initial state distribution $p_0(s_0),$ the transition function $\mathcal{P}(s_{t+1}|s_t,a_t)$ and the reward distribution $\mathcal{R}(s_t,a_t).$ The aim of an AC method is to learn in tandem an action-value function ([*critic*]{}) $$\label{action_value} Q^{\pi}(s_t,a_t)=\mathbb{E}_{s_{t+k+1} \sim \mathcal{P}, r_{t+k} \sim \mathcal{R}, a_{t+k+1} \sim \mathcal{\pi}}\bigg[ \sum_{k=0}^{\infty} \gamma^k r_{t+k} \bigg| s_t,a_t\bigg]$$ and an action policy that is optimal for the action-value function $$\label{policy} \pi^* = \arg \underset{\pi}{\max} \; \mathbb{E}_{s_0 \sim p_0,a_0 \sim \pi}[Q^{\pi}(s_0,a_0)].$$ The parameter $\gamma \in [0,1]$ is called the [*discount factor*]{} and it expresses the degree of trust in future actions. Eq. \[action\_value\] can be rewritten in a recursive manner as $$\label{bellman} Q^{\pi}(s_t,a_t)=\mathbb{E}_{r_t \sim \mathcal{R}, s_{t+1} \sim \mathcal{P}}[r_t+ \gamma \mathbb{E}_{a_{t+1}\sim \pi}[Q^{\pi}(s_{t+1},a_{t+1})]],$$ which is called the Bellman equation. Thus, the task of finding the action-value function is equivalent to solving the Bellman equation. We can solve the Bellman equation by reformulating it as an optimization problem $$\label{bellman_opt} Q^{\pi}= \arg \underset{Q}{\min} \mathbb{E}_{s_t,a_t \sim \pi}\big[ (Q(s_t,a_t)-y_t)^2\big],$$ where $$\label{bellman_opt_target} y_t=\mathbb{E}_{r_t \sim \mathcal{R}, s_{t+1} \sim \mathcal{P}, a_{t+1}\sim \pi}[r_t+ \gamma Q(s_{t+1},a_{t+1})]$$ is called the [*target*]{}. 
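As a toy check of the recursion (a hedged illustration, not part of the setup above): for a single state and action collecting a constant reward $r$ forever, the geometric series gives $Q = r/(1-\gamma),$ which is indeed the fixed point of the Bellman equation.

```python
def bellman_residual(Q, r, gamma):
    """Q - (r + gamma*Q): vanishes exactly at the Bellman fixed point of a
    one-state, one-action chain with constant reward r."""
    return Q - (r + gamma * Q)

gamma, r = 0.9, 1.0
Q_star = r / (1.0 - gamma)  # sum over k >= 0 of gamma**k * r
```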
In \[bellman\_opt\], instead of the square of the distance of the action-value function from the target, we could have used any other divergence that is positive except when the action-value function and target are equal [@pfau2016]. Using the objective functions $\mathbb{E}_{s_t,a_t \sim \pi}\big[ (Q(s_t,a_t)-y_t)^2\big] $ and $-\mathbb{E}_{s_0 \sim p_0,a_0 \sim \pi}[Q^{\pi}(s_0,a_0)]$ for the action-value function and action policy respectively, we can express the task of reinforcement learning also as a bilevel minimization problem [@pfau2016]. However, as in the case of GANs discussed before, in practice the action-value function and action policy are usually updated iteratively. Before we adapt the AC setup to our task of enforcing constraints for time series prediction we will focus on two special choices: (i) the use of [*deterministic*]{} target policies and (ii) the use of neural networks to represent both the action-value function and the action policy [@silver2014; @lillicrap2015]. We start with the effect of using a deterministic target policy denoted as $\mu: \mathcal{S} \rightarrow \mathcal{A}.$ Then, the Bellman equation can be written as $$\label{bellman_deterministic} Q^{\mu}(s_t,a_t)=\mathbb{E}_{r_t \sim \mathcal{R}, s_{t+1} \sim \mathcal{P}}[r_t+ \gamma Q^{\mu}(s_{t+1},\mu(s_{t+1}))].$$ Note that the use of a deterministic target policy for $a_{t+1}$ has allowed us to drop the expectation with respect to $a_{t+1}$ that appeared in \[bellman\_opt\_target\] and find $$\label{bellman_opt_target_drop} y_t=\mathbb{E}_{r_t \sim \mathcal{R}, s_{t+1} \sim \mathcal{P}}[r_t+ \gamma Q(s_{t+1},\mu(s_{t+1}))].$$ Also, note that the expectations in \[bellman\_deterministic\] and \[bellman\_opt\_target\_drop\] depend only on the environment. This means that it is possible to learn $Q^{\mu}$ off-policy, using transitions that are generated from a different stochastic behavior policy $\beta$. 
We can rewrite the optimization problem \[bellman\_opt\]-\[bellman\_opt\_target\] as $$\label{bellman_opt_off} Q^{\mu}= \arg \underset{Q}{\min} \mathbb{E}_{s_t \sim \rho^{\beta},a_t \sim \beta, r_t \sim \mathcal{R}}\big[ (Q(s_t,a_t)-y_t)^2\big],$$ where $$\label{bellman_opt_target_off} y_t=r_t+ \gamma Q(s_{t+1},\mu(s_{t+1})).$$ The state visitation distribution $\rho^{\beta}$ is related to the policy $\beta.$ We will use this flexibility below to introduce our noise cloud around the training trajectory. We continue with the effect of using neural networks to represent both the action-value function and the policy. We restrict attention to the case of a deterministic policy since this will be the type of policy we will use later for our time series prediction application. To motivate the introduction of neural networks we begin with the concept of $Q$-learning as a way to learn the action-value function and the policy [@watkins1992; @mnih2015]. In $Q$-learning, the optimization problem \[bellman\_opt\_off\]-\[bellman\_opt\_target\_off\] to find the action-value function is coupled with the greedy policy estimate $\mu(s)=\arg \underset{a}{\max}\;Q(s,a).$ Thus, the greedy policy requires an optimization at every timestep. This can become prohibitively costly for the type of action spaces that are encountered in many applications. This has led to (i) the adoption of (deep) neural networks for the representation of the action-value function and the policy and (ii) the update of the neural network for the policy after [*each*]{} $Q$-learning iteration for the action-value function [@silver2014]. We assume that the action-value function $Q(s_t,a_t|\theta_Q)$ is represented by a neural network with parameter vector $\theta_Q$ and the deterministic policy $\mu(s|\theta_{\mu})$ by a neural network with parameter vector $\theta_{\mu}.$ The deterministic policy gradient algorithm [@silver2014] uses \[bellman\_opt\_off\]-\[bellman\_opt\_target\_off\] to learn $Q(s_t,a_t|\theta_Q)$. 
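The learning step for the critic can be sketched as computing the target and regressing $Q$ on it; a minimal sketch with plain callables standing in for the two networks (names illustrative):

```python
def td_target(r_t, s_next, Q, mu, gamma):
    """Off-policy target y_t = r_t + gamma * Q(s_{t+1}, mu(s_{t+1}));
    Q and mu are callables standing in for the critic and the policy."""
    return r_t + gamma * Q(s_next, mu(s_next))
```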
The policy $\mu(s|\theta_{\mu})$ is updated after every iteration of the $Q$-optimization using the policy gradient $$\begin{gathered} \nabla_{\theta_{\mu}} \mathbb{E}_{s_t \sim \rho^{\beta}}[Q(s,a|\theta_Q)|_{s=s_t,a=\mu(s_t|\theta_{\mu})}] \notag \\ = \mathbb{E}_{s_t \sim \rho^{\beta}}[\nabla_{\theta_{\mu}}Q(s,a|\theta_Q)|_{s=s_t,a=\mu(s_t|\theta_{\mu})}], \label{policy_gradient}\end{gathered}$$ which can be computed through the chain rule [@silver2014; @lillicrap2015]. ### AC methods for time series prediction and enforcing constraints {#enforcing_ac} We explain now how an AC method can be used to train the flow map of a dynamical system. In addition, we provide a way of enforcing constraints during training. We begin by identifying the state $s_t$ with the state of the dynamical system at time $t.$ Also, we identify the discrete timesteps with the iterations of the flow map that advance the state of the dynamical system by $\Delta t$ units in time. The action policy $\mu(s_t|\theta_{\mu})$ is the action that needs to be taken to bring the state of the system from $s_t$ to $s_{t+1}.$ However, instead of learning separately the action policy that results in $s_t$ being mapped to $s_{t+1},$ we can [*identify*]{} the policy $\mu(s_t|\theta_{\mu})$ with the state $s_{t+1}$ i.e. $\mu(s_t|\theta_{\mu})=s_{t+1}.$ In this way, training for the policy $\mu(s_t|\theta_{\mu})$ is equivalent to training for the flow map of the dynamical system. We also take advantage of the off-policy aspect of \[bellman\_opt\_off\]-\[bellman\_opt\_target\_off\] to choose the distribution of states $\rho^{\beta}$ to be the one corresponding to the noise cloud around the training trajectory needed to implement the error-correction. Thus, we see that the intrinsic statistical nature of the AC method meshes well with our approach to error-correction. To complete the specification of the AC method as a method for training the flow map of a dynamical system we need to specify the reward function. 
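One simple choice, and the one adopted below, is a negative squared-error reward, optionally augmented with the constraint residual. A minimal numpy sketch (names illustrative; $f$ assumed vectorized):

```python
import numpy as np

def reward(mu_z, x_data):
    """Negative squared error between the policy output (the predicted
    next state) and the noiseless next point on the training trajectory."""
    return -np.sum((mu_z - x_data) ** 2)

def reward_constrained(mu_z, x_data, z, f, dt, a):
    """Adds the (negated) squared forward-Euler residual with the
    diagonal linear correction f^C_j(x) = -a_j x_j."""
    return reward(mu_z, x_data) - np.sum((mu_z - z - dt * f(z) + dt * a * z) ** 2)
```

The maximum of both rewards is 0, attained when the prediction matches the data and satisfies the discretized constraint.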
The specification of the reward function is an important aspect of AC methods and reinforcement learning in general [@sorg2010; @Guo2016]. We have chosen a simple [*negative*]{} reward function. To conform with the notation from Sections \[supervised\] and \[unsupervised\] we specify the reward function as $$\label{reward} r(z,x)=-\sum_{j=1}^{M} (\mu_j(z)-x_j^{data})^2,$$ where $z$ is a point on the noise cloud at time $t,$ $\mu_j(z)$ is the $j$-th component of its image through the flow map and $x_j^{data}$ is the $j$-th component of the [*noiseless*]{} point on the training trajectory at time $t+\Delta t$ that it is mapped to. Similarly, for the case when we want to enforce constraints, e.g. a [*diagonal*]{} linear representation for the error-correcting term, we can define the reward function as $$\label{reward_constraints} r(z,x)=-\sum_{j=1}^{M} \bigg[ (\mu_j(z)-x_j^{data})^2 + (\mu_j(z)-z_j-\Delta t f_j(z) + \Delta t a_j z_j)^2 \bigg].$$ For each time $t,$ the reward function that we have chosen uses information only from the state of the system at times $t$ and $t+\Delta t.$ Of course, how much credit we assign to this information is determined by the value of the discount factor $\gamma.$ If $\gamma=0,$ then we disregard any information beyond time $t+\Delta t.$ In this case, the AC method becomes a supervised learning method in disguise. In fact, from \[bellman\_opt\_off\]-\[bellman\_opt\_target\_off\] we see that when $\gamma=0,$ the task of maximizing the action-value function is equivalent to maximizing the reward. The maximum value of our [*negative*]{} reward is 0, which is the optimal value for the supervised learning loss function \[supervised\_loss\] (the average over the noise cloud in \[supervised\_loss\] is the same as the average w.r.t. $\rho^{\beta}$). A similar conclusion holds for the case of the reward with constraints \[reward\_constraints\] and the supervised learning loss function with constraints \[supervised\_loss\_constraints\]. If on the other hand we set $\gamma=1,$ we assign equal importance to current and future rewards. 
This corresponds to the case where the environment is deterministic and thus the same actions result in the same rewards.

### Homotopy for the action-value function {#homotopy_action}

AC methods utilizing deep neural networks to represent the action-value function and the policy have proven to be difficult to train due to instabilities. As a result, many techniques have been devised to stabilize training (see e.g. [@pfau2016] and references therein for a review of stabilizing techniques). In our numerical experiments we tried some of these techniques but could not get satisfactory training results for either the action-value function or the policy. This is the reason we developed a novel approach based on homotopy, which indeed resulted in successful training (see results in Section \[numerical\_reinforcement\]). To motivate our approach we examine the case when $\gamma=0,$ although similar arguments hold for the other values of $\gamma.$ As we have discussed in the previous section, when $\gamma=0,$ the AC method is a supervised learning method in disguise. In fact, the AC method tries to achieve the same result as a supervised learning method but does it in a rather inefficient way. If we look at -, we see that in the action-value update (which is effected through ), the AC method tries to minimize the distance between the action-value function and the reward function. Then, in the action policy update step (which is effected through the use of ), the AC method tries to maximize the action-value function. In essence, through this two-step procedure, the AC method tries to maximize the reward but does so in a roundabout way.
If we think of a plane (in function space) where on one axis we have the action-value function $Q(s_t,a_t)$ and on the other the reward $r_t$, we are trying to find the point on the line $Q(s_t,a_t)=r_t$ which maximizes $Q(s_t,a_t).$ But hitting this line from a random initialization of the neural networks for $Q(s_t,a_t)$ and of the policy $\mu(s_t)$ is extremely unlikely. We would be better off if we started our optimization from a point [*on*]{} the line and then looked for the maximum of $Q(s_t,a_t).$ In other words, for the case of $\gamma=0,$ we have a better chance of training accurately if we let $Q(s_t,a_t)=r_t$ in . A similar argument for the case $\gamma \neq 0$ shows why we will have a better chance of training if we let $Q(s_t,a_t)=r_t + \gamma Q(s_{t+1},\mu(s_{t+1}))$ in . There is an extra mathematical reason why the identification $Q(s_t,a_t)=r_t + \gamma Q(s_{t+1},\mu(s_{t+1}))$ can result in better training. Recall from that the reward function $r_t$ contains [*all*]{} the information from the training trajectory and the constraints we wish to enforce. In addition, $r_t$ depends on the parameter vector $\theta_{\mu}$ for the neural network that represents the action policy $\mu.$ Thus, when we use the expression $r_t + \gamma Q(s_{t+1},\mu(s_{t+1}))$ in for the update step of $\theta_{\mu},$ we back-propagate [*directly*]{} the available information from the training trajectory and the constraints to $\theta_{\mu}.$ This is because we differentiate $r_t$ directly w.r.t. $\theta_{\mu}.$ On the other hand, in the original formulation we do [*not*]{} differentiate $r_t$ at all, because there $r_t$ appears [*only*]{} in the update step for the action-value function. That update step involves differentiation w.r.t. the action-value function parameter vector $\theta_{Q}$ but not $\theta_{\mu}.$ Of course, if we make the identification $Q(s_t,a_t)=r_t + \gamma Q(s_{t+1},\mu(s_{t+1}))$ in we have modified the original problem.
The question is how the solution of the modified problem is related to that of the original one. Through algebraic inequalities, one can show that the optimum for $Q(s_t,a_t)$ for the modified problem provides a lower bound on the optimum for the original problem. It can also provide an upper bound if we make extra assumptions about the difference $Q(s_t,a_t)-r_t - \gamma Q(s_{t+1},\mu(s_{t+1}))$ e.g. the convex-concave assumptions appearing in the min-max theorem [@osborne2004]. To avoid the need for such extra assumptions, we have developed an alternative approach. We initialize the training procedure with the identification $Q(s_t,a_t)=r_t + \gamma Q(s_{t+1},\mu(s_{t+1}))$ in . As the training progresses we morph the modified problem back to the original one via [*homotopy*]{}. In particular, we use in instead of $Q(s_t,a_t)$ the expression $$\delta \times Q(s_t,a_t) + (1-\delta) \times [r_t + \gamma Q(s_{t+1},\mu(s_{t+1}))],$$ where $\delta$ is the homotopy parameter. A user-defined schedule evolves $\delta$ during training from 0 (modified problem) to 1 (original problem). The accuracy of the training is of course dependent on the schedule for $\delta.$ However, in our numerical experiments we obtained good results without the need for a very refined schedule. One general rule of thumb is that the schedule should be slower for larger values of $\gamma$ i.e. allow more iterations between increases in the value of $\delta.$ This is to be expected because for larger values of $\gamma,$ the influence of $r_t$ in the optimization of $r_t + \gamma Q(s_{t+1},\mu(s_{t+1}))$ is reduced. Thus, it is more difficult to back-propagate the information from $r_t$ to the action policy parameter vector $\theta_{\mu}.$ However, note that larger values of $\gamma$ allow us to take future rewards more into account, thus allowing the AC method to be more versatile.
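The homotopy blend that replaces $Q(s_t,a_t)$ is a one-line convex combination. A sketch in the paper's notation, with scalars standing in for the network outputs (the function name is ours):

```python
def homotopy_target(q_sa, r_t, q_next, gamma, delta):
    # delta = 0: fully use the identification Q = r + gamma * Q'
    #            (the modified problem used to initialize training)
    # delta = 1: recover the original action-value function
    return delta * q_sa + (1.0 - delta) * (r_t + gamma * q_next)
```
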
Numerical results {#numerical}
=================

We use the example of the Lorenz system to illustrate the constructions presented in Sections \[constraints\_prediction\] and \[enforcing\_constraints\]. The Lorenz system is given by $$\begin{aligned} \frac{d x_1}{dt}&=\sigma (x_2-x_1) \label{lorenz1} \\ \frac{d x_2}{dt}&= \rho x_1 - x_2 - x_1 x_3 \label{lorenz2} \\ \frac{d x_3}{dt}&= x_1 x_2 - \beta x_3 \label{lorenz3}\end{aligned}$$ where $\sigma, \rho$ and $\beta$ are positive. We have chosen for the numerical experiments the commonly used values $\sigma=10,$ $\rho=28$ and $\beta=8/3.$ For these values of the parameters the Lorenz system is chaotic and possesses an attractor for almost all initial points. We have chosen the initial condition $x_1(0)=0,$ $x_2(0)=1$ and $x_3(0)=0.$ We have used as training data the trajectory that starts from the specified initial condition and is computed by the Euler scheme with timestep $\delta t=10^{-4}.$ In particular, we have used data from a trajectory for $t \in [0,3].$ For all three modes of learning, we have trained the neural network to represent the flow map with timestep $\Delta t=1.5 \times 10^{-2}$ i.e. 150 times larger than the timestep used to produce the training data. After we trained the neural network that represents the flow map, we used it to predict the solution for $t \in [0,9].$ Thus, the trained flow map’s task is to predict (through iterative application) the whole training trajectory for $t \in [0,3]$ starting from the given initial condition and then keep producing predictions for $t \in (3,9].$ This is a severe test of the learned flow map’s predictive abilities for four reasons. First, due to the chaotic nature of the Lorenz system there is no guarantee that the flow map can correct its errors so that it can follow closely the training trajectory even for the interval $[0,3]$ used for training.
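The training trajectory described above (forward Euler with $\delta t=10^{-4}$ on $t\in[0,3]$, initial condition $(0,1,0)$, standard parameter values) can be generated in a few lines; this is a straightforward sketch, not the authors' code:

```python
import numpy as np

def lorenz_rhs(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the Lorenz system (Eqs. lorenz1-lorenz3).
    x1, x2, x3 = x
    return np.array([sigma * (x2 - x1),
                     rho * x1 - x2 - x1 * x3,
                     x1 * x2 - beta * x3])

def euler_trajectory(x0, dt=1e-4, T=3.0):
    # Forward Euler integration producing the ground-truth trajectory.
    n = int(round(T / dt))
    traj = np.empty((n + 1, 3))
    traj[0] = x0
    for k in range(n):
        traj[k + 1] = traj[k] + dt * lorenz_rhs(traj[k])
    return traj

traj = euler_trajectory(np.array([0.0, 1.0, 0.0]))
```
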
Second, by extending the interval of prediction beyond the one used for training we want to check whether the neural network has actually learned the flow map of the Lorenz system and is not just overfitting the training data. Third, we have chosen an initial condition that is far away from the attractor but our integration interval is long enough so that the system does reach the attractor and then evolves on it. In other words, we want the neural network to learn both the evolution of the transient and the evolution on the attractor. Fourth, we have chosen to train the neural network to represent the flow map corresponding to a much larger timestep than the one used to produce the training trajectory in order to check the ability of the error-correcting term to account for a significant range of unresolved timescales (relative to the training trajectory). We performed experiments with different values for the various parameters that enter in our constructions. We present here indicative results for the case of $N=2\times10^4$ samples ($N/3$ for training, $N/3$ for validation and $N/3$ for testing). We have chosen $N_{cloud}=100$ for the cloud of points around each input. There are thus $20000/100=200$ time instants in the interval $[0,3],$ at a distance $\Delta t = 3/200=1.5 \times 10^{-2}$ apart, which explains the choice of timestep $\Delta t=1.5 \times 10^{-2}.$
The noise cloud for the neural network at a point $t$ was constructed using the point $x_i(t)$ for $i=1,2,3,$ on the training trajectory and adding random disturbances so that it becomes the collection $x_{il}(t)= x_i(t) (1-R_{range}+2R_{range} \times \xi_{il})$ where $l=1,\ldots,N_{cloud}.$ The random variables $\xi_{il} \sim U[0,1]$ and $R_{range} =2\times 10^{-2}.$ As we have explained before, we want to train the neural network to map the input from the noise cloud at a time $t$ to the [*noiseless*]{} point $x_i(t + \Delta t)$ (for $i=1,2,3$) on the training trajectory at time $t+ \Delta t.$ We also have to motivate the value of $R_{range}$ for the range of the noise cloud. Recall that the training trajectory was computed with the Euler scheme, which is a first-order scheme. For the interval $\Delta t=1.5 \times 10^{-2}$ we expect the error committed by the flow map to be of similar magnitude and thus we should accommodate this error by considering a cloud of points within this range. We found that taking $R_{range}$ slightly larger and equal to $2\times 10^{-2}$ helps the accuracy of the training. We denote by $(F_1(z),F_2(z),F_3(z))$ the output of the neural network flow map for an input $z.$ This corresponds to $(G_1(z),G_2(z),G_3(z))$ in the notation of Sections \[supervised\] (supervised learning) and \[unsupervised\] (unsupervised learning) and to $(\mu_1(z),\mu_2(z),\mu_3(z))$ in the notation of Section \[reinforcement\]. As explained in detail in [@stinis2018], we employ a learning rate schedule that we have developed and which uses the relative error of the neural network flow map.
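The multiplicative noise-cloud construction just described can be sketched as follows (the function name and the optional seeding are our choices):

```python
import numpy as np

def noise_cloud(x_t, n_cloud=100, r_range=2e-2, rng=None):
    # Multiplicative uniform perturbation: each cloud point is
    # x_i(t) * (1 - R_range + 2 * R_range * xi), with xi ~ U[0, 1],
    # so every component is scaled by a factor in [1 - R_range, 1 + R_range].
    if rng is None:
        rng = np.random.default_rng()
    x_t = np.asarray(x_t, dtype=float)
    xi = rng.uniform(0.0, 1.0, size=(n_cloud, x_t.size))
    return x_t * (1.0 - r_range + 2.0 * r_range * xi)

x = np.array([1.0, 2.0, 3.0])
cloud = noise_cloud(x, n_cloud=100, r_range=2e-2,
                    rng=np.random.default_rng(0))
```
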
For a mini-batch of size $m,$ we define the relative error as $$\begin{gathered} RE_m= \frac{1}{m}\sum_{j=1}^m \frac{1}{3} \biggl[ \frac{|F_1(z_j)-x_1(t_j+ \Delta t)|}{|x_1(t_j + \Delta t)|} + \frac{|F_2(z_j)-x_2(t_j+ \Delta t)|}{|x_2(t_j + \Delta t)|} \\ +\frac{|F_3(z_j)-x_3(t_j+ \Delta t)|}{|x_3(t_j + \Delta t)|} \biggr] ,\end{gathered}$$ where $(F_1(z_j),F_2(z_j),F_3(z_j))$ is the neural network flow map prediction at $t_j + \Delta t$ for the input vector $z_j=(z_{j1},z_{j2},z_{j3})$ from the noise cloud at time $t_j.$ Also, $(x_1(t_j + \Delta t),x_2(t_j + \Delta t),x_3(t_j + \Delta t))$ is the point on the training trajectory computed by the Euler scheme with $\delta t=10^{-4}.$ The tolerance for the relative error was set to $TOL = 1/\sqrt{N/3}=1/\sqrt{2\times 10^4/3} \approx 0.0122$ (see [@stinis2018] for more details about $TOL$). For the mini-batch size we have chosen $m=1000$ for the supervised and unsupervised cases and $m=33$ for the reinforcement learning case. We also need to specify the constraints that we want to enforce. Using the notation introduced above, we want to train the neural network flow map so that its output $(F_1(z_j),F_2(z_j),F_3(z_j))$ for an input data point $z_j=(z_{j1},z_{j2},z_{j3})$ from the noise cloud satisfies $$\begin{aligned} F_1(z_j) &= z_{j1} + \Delta t [\sigma (z_{j2}-z_{j1})] - \Delta t a_1 z_{j1} \label{lorenz_modified1} \\ F_2(z_j) &= z_{j2} + \Delta t [\rho z_{j1} - z_{j2}- z_{j1} z_{j3}]- \Delta t a_2 z_{j2} \label{lorenz_modified2} \\ F_3(z_j) &= z_{j3} + \Delta t [ z_{j1} z_{j2} - \beta z_{j3}] - \Delta t a_3 z_{j3} \label{lorenz_modified3}\end{aligned}$$ where $a_1, a_2$ and $a_3$ are parameters to be optimized during training. The first two terms on the RHS of - come from the forward Euler scheme, while the third is the [*diagonal*]{} linear error-correcting term.

Supervised learning {#numerical_supervised}
-------------------

We begin the presentation of results with the case of [*supervised*]{} learning.
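For reference, the constrained output - just introduced (forward Euler step plus the diagonal linear error-correcting term) can be written compactly; a sketch under the stated parameter values, with names of our choosing:

```python
import numpy as np

def constrained_output(z, a, dt=1.5e-2, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Output (F_1, F_2, F_3) satisfying the explicit constraints:
    # F_j(z) = z_j + dt * f_j(z) - dt * a_j * z_j,
    # where f is the Lorenz right-hand side and a is the trainable
    # diagonal error-correction vector (a = 0 recovers plain Euler).
    z = np.asarray(z, dtype=float)
    z1, z2, z3 = z
    f = np.array([sigma * (z2 - z1),
                  rho * z1 - z2 - z1 * z3,
                  z1 * z2 - beta * z3])
    return z + dt * f - dt * np.asarray(a, dtype=float) * z

out = constrained_output(np.array([0.0, 1.0, 0.0]), np.zeros(3))
```

With $a_1=a_2=a_3=0$ this reduces to a single forward Euler step of size $\Delta t$, which is the "Euler-only" variant of the constraints examined later.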
Our aim in this subsection is threefold: (i) show that the [*explicit*]{} enforcing of the constraints is better than the [*implicit*]{} one, (ii) show that the addition of noise to the training trajectory is beneficial and (iii) show that the addition of error-correcting terms to the constraints can be beneficial [*even*]{} if we use the [*noiseless*]{} trajectory. The latter point highlights once again the promising influx of ideas from model reduction into predictive machine learning.

### Implicit versus explicit constraint enforcing

We used a deep neural network for the representation of the flow map with 10 hidden layers of width 20. We note that because the solution of the Lorenz system acquires values outside of the region of the activation function we have removed the activation function from the last layer of the generator (alternatively we could have used batch normalization and kept the activation function). Fig. \[plot\_lorenz\_supervised\] compares the evolution of the prediction for $x_1(t)$ of the neural network flow map starting at $t=0$ and computed with a timestep $\Delta t=1.5\times10^{-2}$ to the ground truth (training trajectory) computed with the forward Euler scheme with timestep $\delta t=10^{-4}.$ We show plots only for $x_1(t)$ since the results are similar for $x_2(t)$ and $x_3(t).$ We want to make two observations. First, the prediction of the neural network flow map is able to follow with adequate accuracy the ground truth not only during the interval $[0,3]$ that was used for training, but also during the interval $(3,9].$ Second, the [*explicit*]{} enforcing of constraints i.e. the enforcing of the constraints - (see results in Fig. \[plot\_lorenz\_supervisedb\]) is better than the [*implicit*]{} enforcing of constraints.

### Noisy versus noiseless training trajectory

We have advocated the use of a noisy version of the training trajectory in order for the neural network flow map to be exposed to larger parts of the phase space.
The objective of such an exposure is to train the flow map to know how to respond to points away from the training trajectory, where it is bound to wander due to the inevitable error committed through its repeated application during prediction. In this subsection we present results which corroborate our hypothesis. Fig. \[plot\_lorenz\_supervised\_noisevsnoiseless\] compares the predictions of neural networks trained with noisy and noiseless training data. In addition, we perform this comparison both for the case [*with enforced constraints*]{} during training and [*without enforced constraints*]{}. Fig. \[plot\_lorenz\_supervised\_noisevsnoiselessa\] shows that when the constraints are [*not*]{} enforced during training, the use of noisy data can have a significant impact. This is along the lines of our argument that the data from a single training trajectory are not enough by themselves to train the neural network accurately for prediction purposes. Fig. \[plot\_lorenz\_supervised\_noisevsnoiselessb\] shows that when the constraints [*are*]{} enforced during training, the difference between the predictions based on noisy and noiseless training data is reduced. However, using noisy data results in better predictions for parts of the trajectory where there are rapid changes. Also, the use of noisy data helps the prediction to stay “in phase" with the ground truth for longer times. We have to stress that we conducted several numerical experiments and the performance of the neural network flow map trained with [*noisy*]{} data was consistently more robust than when it was trained with [*noiseless*]{} data. A thorough comparison will appear in a future publication.

### Error-correction for training with [*noiseless*]{} trajectory

The results from Fig. \[plot\_lorenz\_supervised\_noisevsnoiselessb\] prompted us to examine in more detail the role of the error-correction term in the case of training with [*noiseless*]{} data.
In particular, we would like to see how much of the predictive accuracy is due to enforcing the forward Euler scheme alone i.e. setting $a_1=a_2=a_3=0$ in - versus allowing $a_1,a_2,a_3$ to be optimized during training. Fig. \[plot\_lorenz\_supervised\_noiselessa\] compares to the ground truth the prediction from the trained neural network flow map when we do [*not*]{} enforce any constraints and the prediction from the trained neural network flow map when we enforce [*only*]{} the forward Euler part of the constraints - ($a_1=a_2=a_3=0$). We see that indeed, even if we enforce only the forward Euler part of the constraint we obtain much more accurate results than not enforcing any constraint at all. Fig. \[plot\_lorenz\_supervised\_noiselessb\] examines how the performance of the neural network is further affected if we also allow the error-correcting term in - i.e. optimize $a_1,a_2,a_3$ during training. The inclusion of the error-correcting term allows the solution to remain for longer “in phase" with the ground truth than if the error-correction term is absent. This is expected since we are examining the solution of the Lorenz system as it transitions from an initial condition far from the attractor to the attractor and then evolves on it. On the attractor the solution remains oscillatory and bounded, so the main error of the neural network flow map prediction comes from going “out of phase" with the ground truth. The error-correcting term keeps the predicted trajectory closer to the ground truth thus reducing the loss of phase. Recall that the error-correcting term is one of the simplest possible. From our prior experience with model reduction, we anticipate larger gains in accuracy if we use more sophisticated error-correcting terms. We want to stress again that training with [*noiseless*]{} data is significantly less robust than training with [*noisy*]{} data.
However, we have chosen to present results of training with [*noiseless*]{} data that exhibit good prediction accuracy to raise various issues that should be more thoroughly investigated.

Unsupervised learning {#numerical_unsupervised}
---------------------

We continue with the case of [*unsupervised*]{} learning and in particular the case of a GAN. We have used for the GAN generator a deep neural network with 9 hidden layers of width 20 and for the discriminator a neural network with 2 hidden layers of width 20. The numbers of hidden layers both for the generator and the discriminator were chosen as the smallest that allowed the GAN training to reach its game-theoretic optimum without at the same time requiring large scale computations. Fig. \[plot\_lorenz\_unsupervised\] compares the evolution of the prediction of the neural network flow map starting at $t=0$ and computed with a timestep $\Delta t=1.5\times10^{-2}$ to the ground truth (training trajectory) computed with the forward Euler scheme with timestep $\delta t=10^{-4}.$ Fig. \[plot\_lorenz\_unsuperviseda\] shows results for the [*implicit*]{} enforcing of constraints. We see that this is not enough to produce a neural network flow map with long-term predictive accuracy. Fig. \[plot\_lorenz\_unsupervisedb\] shows the significant improvement in the predictive accuracy when we enforce the constraints [*explicitly*]{}. The results for this specific example are not as good as in the case of supervised learning presented earlier. We note that training a GAN with or without constraints is a delicate numerical task as explained in more detail in [@stinis2018]. One needs to find the right balance between the expressive strengths of the generator and the discriminator (game-theoretic optimum) to avoid instabilities but also train the neural network flow map i.e. the GAN generator, so that it has predictive accuracy. We also note that training with [*noiseless*]{} data is even more brittle.
For the very few experiments where we avoided instability, the predicted solution from the trained GAN generator was not accurate at all.

Reinforcement learning {#numerical_reinforcement}
----------------------

The last case we examine is that of [*reinforcement*]{} learning. In particular, we want to see how an actor-critic method performs in the difficult case when the discount factor $\gamma=1.$ We repeat that $\gamma=1$ corresponds to the case of a deterministic environment, which means that the same actions always produce the same rewards. This is the situation in our numerical experiments, where we are given a training trajectory that does not change. We have conducted more experiments for other values of $\gamma$ but a detailed presentation of those results will await a future publication. For the representation of the action-value function we used a deep neural network with 15 hidden layers of width 20. For the representation of the deterministic action policy i.e. the neural network flow map in our parlance, we used a deep neural network with 10 hidden layers of width 20. The task of learning an accurate representation of the action-value function is more difficult than that of finding the action policy. This justifies the need for a stronger network to represent the action-value function. As we have mentioned in Section \[reinforcement\], researchers have developed various modifications and tricks to stabilize the training of AC methods [@pfau2016]. The one that enabled us to stabilize results in the first place is that of [*target networks*]{} [@mnih2015; @lillicrap2015]. However, the predictive accuracy of the trained neural network flow map i.e. the action policy, was extremely poor unless we [*also*]{} used our homotopy approach for the action-value function. This was true both when enforcing and when not enforcing the constraints explicitly during training.
With this in mind we present results with and without the homotopy approach for the action-value function to highlight the accuracy improvement afforded by the use of homotopy. Before we present the results we provide some details about the target networks, the reward function and the specifics of the homotopy schedule. The target network concept uses different networks to represent the action-value function and the action policy that appear in the expression for the target . In particular, if $\theta_Q$ and $\theta_{\mu}$ are the parameter vectors for the action-value function and action policy respectively, then we use neural networks with parameter vectors $\theta_{Q'}$ and $\theta_{\mu'}$ (the [*target networks*]{}) to evaluate the target expression . The vectors $\theta_{Q'}$ and $\theta_{\mu'}$ can be initialized with the same values as $\theta_Q$ and $\theta_{\mu}$ but they evolve in a different way. In fact, after every iteration update for $\theta_Q$ and $\theta_{\mu}$ we apply the update rule $$\begin{aligned} \theta_{Q'} & \leftarrow \tau \theta_{Q} + (1-\tau) \theta_{Q'} \\ \theta_{\mu'} & \leftarrow \tau \theta_{\mu} + (1-\tau) \theta_{\mu'}\end{aligned}$$ where we have taken $\tau=0.001$ [@lillicrap2015]. The reward function (with constraints) for an input point $z$ from the noise cloud is $$\begin{gathered} r(z,x)=-\bigg \{ \sum_{j=1}^{3} \bigg[ (\mu_j(z)-x_j^{data})^2 \bigg] \\ + (\mu_1(z)-z_{1} - \Delta t [\sigma (z_{2}-z_{1})] + \Delta t a_1 z_{1})^2 \\ + (\mu_2(z)-z_{2} - \Delta t [\rho z_{1} - z_{2}- z_{1} z_{3}] + \Delta t a_2 z_{2})^2 \\ + (\mu_3(z)-z_{3} - \Delta t [ z_{1} z_{2} - \beta z_{3}] + \Delta t a_3 z_{3} )^2 \bigg \}\end{gathered}$$ where $x^{data}$ is the [*noiseless*]{} point from the training trajectory.
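The soft target-network update rule displayed above can be sketched as follows, with parameter vectors represented as lists of numpy arrays (the function name is ours):

```python
import numpy as np

def soft_update(target_params, params, tau=0.001):
    # theta' <- tau * theta + (1 - tau) * theta', applied elementwise
    # to every parameter array of the target network.
    return [tau * p + (1.0 - tau) * tp
            for tp, p in zip(target_params, params)]

# one soft-update step from all-zeros target toward all-ones parameters
updated = soft_update([np.zeros(2)], [np.ones(2)], tau=0.001)
```

The small value $\tau=0.001$ makes the target networks track the trained networks slowly, which is what stabilizes the evaluation of the target expression.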
As we have explained in Section \[reinforcement\] (see the comment after ), in the AC method context the distribution of the noise cloud of the input data points at every timestep corresponds to the state visitation distribution $\rho^{\beta}$ appearing in . The homotopy schedule we used is a rudimentary one that we did not attempt to optimize. Obviously, this is a topic of further investigation that will appear elsewhere. We initialized the homotopy parameter $\delta$ at 0, and increased its value (until it reached 1) every 2000 iterations of the optimization. Fig. \[plot\_lorenz\_reinforcement\] presents results of the prediction performance of the neural network flow map when it was trained with and without the use of homotopy for the action-value function. In Fig. \[plot\_lorenz\_reinforcementa\] we have results for the [*implicit*]{} enforcing of constraints while in Fig. \[plot\_lorenz\_reinforcementb\] for the [*explicit*]{} enforcing of constraints. We make two observations. First, both for implicit and explicit enforcing of the constraints, the use of homotopy leads to accurate results for long times. This is especially true for explicit enforcing, which gave us some of the best results among all the numerical experiments we conducted for the different modes of learning. Second, if we do not use homotopy, the predictions are extremely poor both for implicit and explicit enforcing. Indeed, the green curve in Fig. \[plot\_lorenz\_reinforcementa\] representing the prediction of $x_1(t)$ for the case of implicit constraint enforcing [*without*]{} homotopy is as inaccurate as it looks. It starts at 0 and within a few steps drops to a negative value and does not change much after that. The predictions for $x_2(t)$ and $x_3(t)$ are equally inaccurate.
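A rudimentary schedule of the kind described (start $\delta$ at 0, increase every 2000 iterations until it reaches 1) can be written as below. The per-step increment is our assumption; the text only specifies the period and the endpoints:

```python
def delta_schedule(iteration, period=2000, increment=0.1):
    # Piecewise-constant homotopy schedule: delta is raised by `increment`
    # every `period` optimization iterations and capped at 1.
    # The value of `increment` is an illustrative assumption.
    return min(1.0, (iteration // period) * increment)
```
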
Discussion and future work {#discussion}
==========================

We have presented a collection of results about the enforcing of known constraints for a dynamical system during the training of a neural network to represent the flow map of the system. We have provided ways that the constraints can be enforced in all three major modes of learning, namely supervised, unsupervised and reinforcement learning. In line with the maxim of scientific computing that one should build into an algorithm as much prior information as possible, we observe a striking improvement in performance when known constraints are enforced during training. We have also shown the benefit of training with noisy data and how this corresponds to the incorporation of a restoring force in the dynamics of the system. This restoring force is analogous to memory terms appearing in model reduction formalisms. In our framework, the reduction is in a [*temporal*]{} sense i.e. it allows us to construct a flow map that remains accurate though it is defined for [*large*]{} timesteps. The model reduction connection opens an interesting avenue of research that makes contact with complex systems appearing in real-world problems. The use of larger timesteps for the neural network flow map than the ground truth without sacrificing too much accuracy is important. We can imagine an online setting where observations come at [*sparsely*]{} placed time instants and are used to update the parameters of the neural network flow map. The use of sparse observations could be dictated by [*necessity*]{} e.g. if it is hard to obtain frequent measurements, or by [*efficiency*]{} e.g. if the local processing of data in field-deployed sensors is costly. Thus, if the trained flow map is capable of accurate estimates using [*larger*]{} timesteps then its successful updated training using only sparse observations becomes more probable.
The constructions presented in the current work depend on a large number of details that can potentially affect their performance. A thorough study of the relative merits of enforcing constraints for the different modes of learning needs to be undertaken and will be presented in a future publication. We do believe though that the framework provides a promising research direction at the nexus of scientific computing and machine learning.

Acknowledgements
================

The author would like to thank Court Corley, Tobias Hagge, Nathan Hodas, George Karniadakis, Kevin Lin, Paris Perdikaris, Maziar Raissi, Alexandre Tartakovsky, Ramakrishna Tipireddy, Xiu Yang and Enoch Yeung for helpful discussions and comments. The work presented here was partially supported by the PNNL-funded “Deep Learning for Scientific Discovery Agile Investment” and the DOE-ASCR-funded “Collaboratory on Mathematics and Physics-Informed Learning Machines for Multiscale and Multiphysics Problems (PhILMs)”. Pacific Northwest National Laboratory is operated by Battelle Memorial Institute for DOE under Contract DE-AC05-76RL01830.
---
author:
- 'J. Krtička'
date: Received
title: 'Hot-star wind models with magnetically split line blanketing'
---

Introduction
============

The surface magnetic fields of about 10% of hot spectral type A and late-B stars have strengths on the order of $0.1-10\,$kG [@dvojka; @rompreh]. In such stars, the radiative diffusion may operate in a relatively quiet environment, leading to chemical peculiarity [@vaupreh; @mpoprad]. Precise spectropolarimetric observations show that about the same fraction of O and early-B stars also have strong magnetic fields [@morbob; @wamimes; @grunmimes]. In these stars the radiative force launches mass outflow, that is, the stellar wind [see @pulvina for a review], which allows for interaction between the magnetic field and the wind. The radiatively driven wind of hot stars is ionized and therefore flows along the magnetic field lines. That the stellar wind is channeled along the magnetic field has numerous observational consequences [@malykor]. When the stellar wind energy density dominates the magnetic field energy density, the magnetic field opens up and the wind leaves the star [@udo]. The opposite case leads to relatively complex flow structures that include the inhibition of the outflow and fall-back of the wind onto the stellar surface [@udorot; @kuk], or the trapping of the wind in centrifugally supported clouds [@labor; @towog]. The interaction of the stellar wind with a strong magnetic field has evolutionary consequences. The wind is forced into corotation at large distances from the star, leading to angular momentum loss and rotational braking [@brzdud; @membrzd]. This effect was discovered not only on evolutionary timescales [@shauc], but also on human timescales [@town]. Moreover, the channeling of the stellar wind by the magnetic field also affects the mass-loss rate. The local wind mass flux becomes proportional to the tilt of the magnetic field [@owoudan].
Moreover, the wind may leave a star only along open magnetic field lines, while it falls back along closed magnetic field lines [@owoan]. The resulting wind quenching leads to an additional reduction of the mass-loss rate that resembles a weakening of the wind at low metallicity. This means that magnetic stars lose less mass than their non-magnetic counterparts, and the magnetic fields provide an alternative explanation of the high mass of black hole binary merger progenitors [@magvln]. The magnetic field affects not only wind dynamics, but also the radiative transfer, which may be important in radiatively driven winds. The Zeeman and Hanle effects lead to the polarization of the radiation in spectral lines. This might be used to detect even relatively weak magnetic fields in the winds [@igzeeman; @ihanle; @gizeeman]. Moreover, the associated line splitting affects the line force and therefore also the mass-loss rate. Stronger absorption due to line splitting may enhance the wind blanketing effect [@acko; @hd191612], which contributes to the light variability that is observed in magnetic O stars [@koeyer; @nazdis]. Despite its possible evolutionary consequences, the influence of the Zeeman effect on line-driven winds has never been studied in greater detail. In general, this would require self-consistent wind models with polarized line transfer [e.g., @adam] that account for the mutual radiative interaction of individual Zeeman components induced by the Doppler effect [@igzeeman; @gizeeman]. Such models are not available. However, the strongest influence of the Zeeman effect on the line-driving mechanism presumably arises from the line splitting, which may modify the line force. Even including this effect, however, requires wind models for which the radiative force is calculated in a more advanced approach than with the single-line Sobolev approximation.
  $\Delta J$   $S_i(0)$ for $\Delta M=0$   $S_i(1)$ for $\Delta M=1$             $S_i(-1)$ for $\Delta M=-1$
  ------------ --------------------------- ------------------------------------- -------------------------------------
  $0$          $M_i^2$                     $\frac{1}{4}(J_u+M_i)(J_u+1-M_i)$     $\frac{1}{4}(J_u-M_i)(J_u+1+M_i)$
  $1$          $J_u^2-M_i^2$               $\frac{1}{4}(J_u+M_i)(J_u-1+M_i)$     $\frac{1}{4}(J_u-M_i)(J_u-1-M_i)$
  $-1$         $(J_u+1)^2-M_i^2$           $\frac{1}{4}(J_u+1-M_i)(J_u-M_i+2)$   $\frac{1}{4}(J_u+1+M_i)(J_u+M_i+2)$
  ------------ --------------------------- ------------------------------------- -------------------------------------

  : Relative strengths $S_i$ of the individual Zeeman components.[]{data-label="sobel"}

While the dynamical effects of the magnetic field (i.e., the magnetic field tilt and the field divergence) on the mass-loss rate have been studied in detail using magnetohydrodynamic (MHD) models [@udo], the effect of the line splitting was neglected. This might have a significant effect on the reliability of evolutionary models that include magnetized mass-loss [e.g., @magvln]. To understand the influence of the Zeeman effect on the radiative force and on the wind mass-loss rate, we modified our METUJE wind models to account for Zeeman splitting. Our wind models calculate the radiative force consistently in the comoving frame (CMF) in a global approach. In this way, the models account for the interaction of individual Zeeman components and allow us to predict the influence of magnetically split line blanketing on emergent fluxes. To pinpoint the effect of the line splitting, we neglect the dynamical effects of the magnetic field connected with wind channeling along the magnetic field lines. Global wind models ================== Wind models with magnetically split line blanketing were calculated using the METUJE code [@cmfkont]. The code provides global (unified) models of the stellar photosphere and radiatively driven wind. The METUJE code solves the radiative transfer equation, the kinetic (statistical) equilibrium equations, and the equations of continuity, momentum, and energy in the photosphere and in the wind.
Models are calculated assuming stationary (time-independent) and spherically symmetric wind flow. The radiative transfer equation is solved in the comoving frame [CMF, @mikuh]. To solve the equation, we account for line and continuum transitions that are relevant in photospheres and winds of hot stars. The considered elements and ions are listed in @nlteiii. The ionization and excitation state is calculated from the kinetic equilibrium equations (also called the non-local thermodynamic equilibrium (NLTE) equations, see @hubenymihalas). We account for the radiative and collisional excitation, deexcitation, ionization, and recombination. The bound-free radiative rates are consistently calculated from the CMF mean intensity, while the bound-bound rates rely on the Sobolev approximation. The ion models were either adopted from the TLUSTY model stellar atmosphere input data [@ostar2003; @bstar2006] or prepared by us. Both sources use the same strategy to construct the ionic models, that is, the data are based on the Opacity and Iron Project calculations [@topt; @zel0] and are corrected for the observational line and level data available in the NIST database [@nist]. An exception is the ionic model of phosphorus, which was prepared using data described by @pahole. The ionic levels with low excitation energy are explicitly included in the calculations, while levels with higher excitation energy are merged into superlevels [see @ostar2003; @bstar2006 for details]. Depending on the location in the atmosphere, we use three different methods to solve the energy equation. The differential form of the transfer equation is applied deep in the photosphere, while the integral form of this equation is used in the upper layers of the photosphere [@kubii], and the electron thermal balance method [@kpp] is applied in the wind. In all three cases, the individual terms in the energy equation are taken from the CMF radiative field.
These terms, together with the CMF radiative force calculated accounting for line, bound-free, and free-free transitions and light scattering on free electrons, are inserted in the hydrodynamical equations. The hydrodynamical equations, that is, the continuity equation, equation of motion, and the energy equation, are solved iteratively to obtain the wind density, velocity, and temperature structure. The final model is derived by varying the base velocity to search for a smooth transonic solution with the maximum mass-loss rate [@cmfkont]. The output from TLUSTY model stellar atmospheres [@ostar2003; @bstar2006] was used as the initial guess of the solution in the photosphere. These TLUSTY models were calculated for the same effective temperature, surface gravity, and chemical composition as the wind models, but neglecting the magnetic field. Including magnetically split line blanketing ============================================ The inclusion of the magnetic line splitting into our wind code closely follows the quantum mechanical theory of the Zeeman effect [@sobel; @novyzak]. When a magnetic field is present, each atomic level $k$ described by the total, orbital, and spin angular momentum quantum numbers $J_k$, $L_k$, and $S_k$ is split into $2J_k+1$ sublevels with magnetic quantum numbers $M_i=-J_k,\dots,J_k$. According to the selection rules, only the transitions with $\Delta M=M_u-M_l=-1,\,0,\,1$ are allowed between magnetically split upper $u$ and lower $l$ levels. The splitting of the energy levels leads to the wavelength shift $\Delta\lambda$ relative to the laboratory line wavelength $\lambda_0$ $$\label{zeemstep} \Delta\lambda=\frac{e\lambda_0^2B}{4\pi m_\text{e}c^2}(g_lM_l-g_uM_u),$$ where $e$ and $m_\text{e}$ are the elementary charge and the electron mass, $B$ is the field modulus, and $g_l$ and $g_u$ are the Landé factors. In our non-magnetic models, the line force is calculated based on line data derived from the VALD database (Piskunov et al. 
[-@vald1], Kupka et al. [-@vald2]) with some updates using the NIST data [@nist]. To account for the magnetic field, we replaced the original lines by their split components selected according to quantum-mechanical rules and with wavelength shifts given by Eq. \[zeemstep\]. The oscillator strengths of each split line $j$ were computed from the original oscillator strength $g\!f$ $$\begin{aligned} (g\!f)_j=&\frac{1}{2}S_j(0)(g\!f), \quad\text{for}\;\Delta M=0,\\ (g\!f)_j=&\frac{1}{4}S_j(\pm1)(g\!f), \quad\text{for}\;\Delta M=\pm1,\end{aligned}$$ where the relative line strengths given in Table \[sobel\] are additionally normalized to unity for each group of the Zeeman components $$\sum_iS_i(-1)=\sum_iS_i(0)=\sum_iS_i(1)=1.$$ The Landé factors were mostly taken from the Kurucz line list[^1] using cross-matching of lines with the VALD line list. For the remaining lines, the Landé factors were computed assuming LS coupling $$\label{glande} g_k=1+\frac{J_k(J_k+1)-L_k(L_k+1)+S_k(S_k+1)}{2J_k(J_k+1)} ,$$ with term designations from the Kurucz line list, or we assumed a mean Landé factor $g_k=1.2$ [e.g., @koks] when the designation was not available. The number of unsplit lines in the original line list with different sources of Landé factors and the total number of magnetically split lines that are accounted for in the calculation are given in Table \[magcar\].

  ------------------------------------------------------------------ ---------
  Number of lines with known Landé factors                            140474
  Number of lines with Landé factors calculated assuming
  LS coupling Eq.                                                     42556
  Number of lines with assumed $g_k=1.2$                              35468
  Total number of magnetically split lines                            3420041
  ------------------------------------------------------------------ ---------

  : Number of unsplit lines in the input line list with different sources of Landé factors (upper rows) and the total number of magnetically split line components used in calculations (last row).[]{data-label="magcar"}

  ${\ensuremath{T_\mathrm{eff}}}$ $[\text{K}]$   $R_{*}$ $[\text{R}_{\odot}]$   $M$ $[\text{M}_{\odot}]$   $\log(L/L_\odot)$   $Z/Z_\odot$   $B=0\,$G                  $B=10^3\,$G               $B=10^4\,$G               $B=10^5\,$G
  ---------------------------------------------- -------------------------- ------------------------ ------------------- ------------- ------------------------- ------------------------- ------------------------- -------------------------
  30000                                          6.6                        12.9                     4.50                1.0           $9.26{\times{10^{-9}}}$   $9.26{\times{10^{-9}}}$   $8.72{\times{10^{-9}}}$   $8.01{\times{10^{-9}}}$
                                                                                                                         0.5           $4.57{\times{10^{-9}}}$   $5.25{\times{10^{-9}}}$   $5.47{\times{10^{-9}}}$   $3.75{\times{10^{-9}}}$
  37500                                          19.8                       48.3                     5.84                1.0           $1.03{\times{10^{-6}}}$   $1.03{\times{10^{-6}}}$   $1.04{\times{10^{-6}}}$   $0.98{\times{10^{-6}}}$
                                                                                                                         0.5           $6.46{\times{10^{-7}}}$   $6.56{\times{10^{-7}}}$   $6.43{\times{10^{-7}}}$   $6.08{\times{10^{-7}}}$
  42500                                          18.5                       70.3                     6.00                1.0           $1.79{\times{10^{-6}}}$   $1.79{\times{10^{-6}}}$   $1.80{\times{10^{-6}}}$   $1.57{\times{10^{-6}}}$
                                                                                                                         0.5           $8.54{\times{10^{-7}}}$   $8.59{\times{10^{-7}}}$   $8.53{\times{10^{-7}}}$   $7.53{\times{10^{-7}}}$
  ---------------------------------------------- -------------------------- ------------------------ ------------------- ------------- ------------------------- ------------------------- ------------------------- -------------------------

  : Adopted stellar parameters and predicted mass-loss rates $\dot M$ for individual magnetic field strengths.[]{data-label="ohvezpar"}

The magnetic field varies with radius according to the $\nabla\cdot\boldsymbol{B}=0$ constraint.
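The splitting procedure described above, together with the thermal-broadening threshold field derived later in Eq. \[becko\], is easy to sketch numerically. The following minimal Python illustration is not part of the METUJE code and uses approximate CGS constants; it evaluates the LS-coupling Landé factor (Eq. \[glande\]), the wavelength shift of a single Zeeman component (Eq. \[zeemstep\]), and the minimum field at which the shift matches the thermal Doppler width:

```python
import math

# Approximate CGS constants
E_ESU = 4.803e-10   # elementary charge [esu]
M_E = 9.109e-28     # electron mass [g]
M_H = 1.673e-24     # hydrogen atom mass [g]
C = 2.998e10        # speed of light [cm/s]
K_B = 1.381e-16     # Boltzmann constant [erg/K]

def lande_g(J, L, S):
    """Landé factor in LS coupling, Eq. [glande]; returns 1.0 for J = 0."""
    if J == 0:
        return 1.0
    return 1.0 + (J * (J + 1) - L * (L + 1) + S * (S + 1)) / (2 * J * (J + 1))

def zeeman_shift(lam0, B, g_l, M_l, g_u, M_u):
    """Shift of one Zeeman component, Eq. [zeemstep]; lam0 in cm, B in G."""
    return E_ESU * lam0**2 * B / (4 * math.pi * M_E * C**2) * (g_l * M_l - g_u * M_u)

def b_min(lam0, T, m, g=1.2):
    """Minimum field [G] at which the Zeeman shift equals the thermal Doppler
    broadening, Eq. [becko], assuming g_l = g_u = g and Delta M = 1."""
    return 4 * math.pi * M_E * C / (E_ESU * lam0 * g) * math.sqrt(2 * K_B * T / m)

# A component of a 5000 Å line with g_l*M_l - g_u*M_u = 1 in a 100 kG field:
shift_angstrom = zeeman_shift(5000e-8, 1e5, 1.0, 1, 1.0, 0) * 1e8   # about 1.2 Å

# Threshold field for lambda_0 = 1000 Å, T = 10^4 K, hydrogen: about 77 kG
b_threshold = b_min(1000e-8, 1e4, M_H)
```

Varying $\lambda_0$, $T$, and $m$ over values typical of hot-star photospheres keeps the threshold at tens of kilogauss, which anticipates the conclusion that only fields of roughly 100 kG modify the line force.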
We neglected this effect and assumed a constant magnetic field throughout the whole computational domain, because the mass-loss rate is determined close to the star, where the magnetic field is nearly equal to its surface value. Moreover, the magnetic field is typically so strong that it dominates even in deep photospheric layers, and therefore we can assume that the field has the same strength at both large and small optical depths. This enabled us to split the lines in an external file rather than in the code itself; the effect of this assumption on our final results is negligible. Moreover, the magnetic field also varies across the stellar surface. By neglecting these variations, we in fact provide models for concentric cones in which the surface variations of the magnetic field can be neglected. Hot-star wind models with Zeeman line splitting =============================================== The adopted stellar parameters, that is, the effective temperature ${\ensuremath{T_\mathrm{eff}}}$, radius $R_{*}$, mass $M$, and luminosity $L$, together with the derived mass-loss rates $\dot M$, are given in Table \[ohvezpar\]. We selected a representative sample of O-star parameters that correspond to a main-sequence star with ${\ensuremath{T_\mathrm{eff}}}=30\,\text{kK}$ and to two supergiants with ${\ensuremath{T_\mathrm{eff}}}=37.5\,\text{kK}$ and ${\ensuremath{T_\mathrm{eff}}}=42.5\,\text{kK}$. The stellar parameters were derived using the formulas of @okali. The magnetic field strengths we selected cover typical surface fields found in O stars, which are up to a few kilogauss [@donthetoric; @donhd191612; @wadhd14; @wadngc; @hubcpd]. We assumed two metallicity values that correspond to that of our Sun [@asp09] and to that of the Large Magellanic Cloud ($Z=0.5Z_\odot$). This enables us to study the metallicity effect on the magnetically split line blanketing that is due to the variation in the contribution of individual elements with metallicity [e.g., @vikolamet].
The mass-loss rates given in Table \[ohvezpar\] do not account for the dynamical effects of the magnetic field. Therefore, these values in fact correspond to $\dot M_{B=0}$ rates that need to be further corrected to obtain the local mass flux that accounts for dynamical effects [@owoan]. It follows from the predicted mass-loss rates in Table \[ohvezpar\] that Zeeman splitting has a negligible effect on the radiative force and on the wind mass-loss rates. The relative change in mass-loss rates is on the order of a few percent for magnetic field strengths of up to 10 kG. The mass-loss rate decreases by about 10%  for the strongest magnetic field considered, 100 kG, which surpasses any magnetic field ever detected on the surfaces of OB stars, however [e.g., @wadngc; @grunmimes]. The decrease can be explained as a result of line broadening, which, as shown in the case of turbulent broadening, leads to a decrease in mass-loss rate [@cmf1]. For the strongest magnetic fields we considered, the implicit assumption that the magnetic splitting of the energy levels is small compared to the fine-structure splitting may not be appropriate. For such fields a more general approach describing the so-called Paschen–Back effect should be used [e.g., @khalan]. This does not significantly affect the general results, however. The magnetically split line blanketing is important only if the line shifts are comparable with the line broadening. In our models we only assume thermal broadening[^2], in which case Eq.  gives the condition for the minimum magnetic field strength, $$\label{becko} B=\frac{4\pi m_\text{e}c}{e\lambda_0g_l}\sqrt{\frac{2kT}{m}}=77\,\text{kG} {\left(\frac{\lambda_0}{1000\,\text{\AA}}\right)}^{-1} {\left(\frac{T}{10^4\,\text{K}}\right)}^{1/2} {\left(\frac{m}{m_\text{H}}\right)}^{-1/2},$$ assuming $g_l=g_u=1.2$ and $\Delta M=1$. Here $m$ is the atomic mass and $m_\text{H}$ is the hydrogen atom mass. Eq.  
shows that a magnetic field with a strength of about 100 kG is needed to affect the line force. Such a magnetic field is higher than the upper limit of magnetic fields that have been observed in nondegenerate stars. This also explains why we did not find any strong effect of the magnetic field on the line force. The magnetic line splitting exceeds the Doppler shift that is connected with the radial wind motion for magnetic fields that are stronger than about 1 MG. Such strong fields are typically found in some white dwarfs [see @ab for a review]. In this case, the magnetically split lines behave independently and do not interact with each other. Consequently, a stronger radiative force and higher mass-loss rates can be expected. This might have implications for hot ($T_\text{eff}\gtrsim100\,\text{kK}$) magnetic white dwarfs that have winds (Krtička et al., in preparation). We did not find any strong flux variability that would be due to the magnetically split line blanketing. The typical flux changes in the optical region at $5500\,$Å correspond to magnitude variations of about $10^{-4}\,$mag. Consequently, we do not expect any strong rotationally modulated flux variability in magnetic O stars that would be purely due to the Zeeman splitting. A similar result was obtained in magnetic main-sequence BA stars, where the magnetic field only affects emergent fluxes in strongly overabundant atmospheres [@zeeman-paper2]. The observed light variability in magnetic O stars [@koeyer; @nazmagmm] is therefore due to other processes, such as wind blanketing that is modulated by the tilt of the magnetic field and stellar rotation [@hd191612] or due to light absorption in a magnetically confined circumstellar environment [@wahot; @melimag]. Conclusions =========== We studied the effect of line splitting that is due to the magnetic field (Zeeman effect) on the wind properties in massive stars.
We used our own numerical wind code with CMF radiative transfer and NLTE level populations to estimate the influence of the Zeeman splitting on the line force. We showed that for the magnetic fields that are typically found in OB stars, the Zeeman splitting has a negligible influence on the line force and also on the wind mass-loss rates and terminal velocities. The line splitting only affects the radiative force for magnetic fields that are stronger than about 100 kG. We found only very weak flux variability that is due to the magnetically split line blanketing. We conclude that only dynamical effects connected with a magnetic field have a strong effect on the mass-loss rate. These effects were deliberately neglected here because they have been studied in detail using MHD models, and we aimed at understanding the effect of the line splitting. Acknowledgements {#acknowledgements .unnumbered} ================ The author thanks O. Kochukhov for discussing the problem and the anonymous referee for constructive comments. This work was supported by grant GA ČR 16-01116S. Access to computing and storage facilities owned by parties and projects contributing to the National Grid Infrastructure MetaCentrum provided under the program “Projects of Large Research, Development, and Innovations Infrastructures” (CESNET LM2015042) is greatly appreciated. Abbott, D. C., & Hummer, D. G. 1985, , 294, 286 Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481 Aurière, M., Wade, G. A., Silvester, J., et al. 2007, A&A, 475, 1053 Donati, J.-F., Babel, J., Harries, T. J., et al. 2002, , 333, 55 Donati, J.-F., Howarth, I. D., Bouret, J.-C., et al. 2006, , 365, L6 Gayley, K. G., & Ignace, R., 2010, ApJ, 708, 615 Grunhut, J. H., Wade, G. A., Neiner, C., et al. 2017, , 465, 2432 Hubeny, I., & Mihalas, D., 2014, Theory of Stellar Atmospheres (Princeton University Press, Princeton) Hubrig, S., Schöller, M., Kholtygin, A. F., et al. 2015, MNRAS, 447, 1885 Hummer, D.
G., Berrington, K. A., Eissner, W., et al. 1993, A&A, 279, 298 Ignace, R., & Gayley, K. G., 2003, MNRAS, 341, 179 Ignace, R., Nordsieck, K. H., & Cassinelli, J. P., 2004, ApJ, 609, 1018 Kawka, A. 2018, Contributions of the Astronomical Observatory Skalnate Pleso, 48, 228 Khan, S. A., & Shulyak, D. 2006, , 448, 1153 Khalack, V., & Landstreet, J. D. 2012, , 427, 569 Kochukhov, O., Khan, S., & Shulyak, D. 2005, A&A, 433, 671 Koen, C., & Eyer, L. 2002, , 331, 45 Kramida, A., Ralchenko, Y., Reader, J., and NIST ASD Team (2015), NIST Atomic Spectra Database (ver. 5.3) Krtička, J. 2016, , 594, A75 Krtička, J., & Kubát, J. 2009, MNRAS, 394, 2065 Krtička, J., & Kubát, J. 2010, A&A, 519, A5 Krtička, J., & Kubát, J. 2017, A&A, 606, A31 Kubát, J. 1996, , 305, 255 Kubát, J., Puls, J., & Pauldrach, A. W. A. 1999, A&A, 341, 587 Küker, M., 2017, AN, 338, 868 Kupka, F., Piskunov, N. E., Ryabchikova, T. A., Stempels, H. C., & Weiss, W. W. 1999, A&AS, 138, 119 Landi Degl’Innocenti, E., & Landolfi, M. 2004, Polarization in spectral lines, Astrophysics and Space Science Library, 307 Landstreet, J. D., & Borra, E. F. 1978, ApJL, 224, 5 Lanz, T., & Hubeny, I. 2003, ApJS, 146, 417 Lanz, T., & Hubeny, I. 2007, ApJS, 169, 83 Martins, F., Schaerer, D., & Hillier, D. J. 2005, A&A, 436, 1049 Meynet, G., Eggenberger, P., & Maeder, A. 2011, , 525, L11 Michaud, G. 2004, in The A-Star Puzzle, IAU Symposium No. 224, eds. J. Zverko, J. Žižňovský, S. J. Adelman, & W. W. Weiss (Cambridge: Cambridge Univ. Press), 173 Mihalas, D., Kunasz, P. B., & Hummer, D. G. 1975, ApJ, 202, 465 Morel, T., Castro, N., Fossati, L., et al. 2015, New Windows on Massive Stars, 307, 342 Munoz, M., Wade, G. A., Nazé, Y., Bagnulo, S., & Puls, J., 2018, CoSka, 48, 149 Nazé, Y. 2004, Ph.D. thesis, Univ. Liège Nazé, Y., Walborn, N. R., Morrell, N., Wade, G. A., & Szymański, M. K. 2015, , 577, A107 Owocki, S. P., & ud-Doula, A. 2004, ApJ, 600, 1004 Owocki, S. P., ud-Doula, A., Sundqvist, J. O., et al.
2016, , 462, 3830 Pauldrach, A. W. A., Hoffmann, T. L., & Lennon, M. 2001, A&A, 375, 161 Petit, V., Owocki, S. P., Wade, G. A., et al. 2013, MNRAS, 429, 398 Petit, V., Keszthelyi, Z., MacInnis, R., et al. 2017, MNRAS, 466, 1052 Piskunov, N. E., Kupka, F., Ryabchikova, T. A., Weiss, W. W., & Jeffery, C. S. 1995, A&AS, 112, 525 Puls, J., Vink, J. S., & Najarro, F. 2008, A&ARv, 16, 209 Romanyuk, I. I. 2007, Astrophysical Bulletin, 62, 62 Seaton, M. J., Zeippen, C. J., Tully, J. A., et al. 1992, Rev. Mexicana Astron. Astrofis., 23, 19 Shultz, M., Wade, G., Rivinius, T., et al. 2017, IAUS, 329, 126 Sobelman, I. I. 1977, Introduction to the theory of atomic spectra (Nauka: Moscow) Sundqvist, J. O., Petit, V., Owocki, S. P., et al. 2013, , 433, 2497 Tichý, A., Štěpán, J., Trujillo Bueno, J., & Kubát, J. 2015, Polarimetry, 305, 401 Townsend, R. H. D., Owocki, S. P., & Groote, D. 2005, ApJ, 630, L81 Townsend, R. H. D., Oksala, M. E., Cohen, D. H., Owocki, S. P., & ud-Doula, A. 2010, , 714, 318 ud-Doula, A., & Owocki, S. P. 2002, , 576, 413 ud-Doula, A., Owocki, S. P., & Townsend, R. H. D. 2008, , 385, 97 ud-Doula, A., Owocki, S. P., & Townsend, R. H. D. 2009, MNRAS, 392, 1022 Vauclair, S. 2003, Ap&SS, 284, 205 Vink, J. S., de Koter, A., & Lamers, H. J. G. L. M. 2001, A&A, 369, 574 Wade, G. A., Howarth, I. D., Townsend, R. H. D., et al. 2011, , 416, 3160 Wade, G. A., Grunhut, J., Gräfener, G., et al. 2012a, MNRAS, 419, 2459 Wade, G. A., Maíz Apellániz, J., Martins, F., et al. 2012b, MNRAS, 425, 1278 Wade, G. A., Neiner, C., Alecian, E., et al. 2016, , 456, 2 [^1]: http://kurucz.harvard.edu [^2]: Other types of broadening can be neglected for our purpose; for example, macroturbulent broadening is likely suppressed in stars with strong magnetic fields because subsurface convection is inhibited there [@sundin].
--- abstract: 'We show that free-by-free groups satisfying a particular homological criterion are incoherent. This class is large in nature, including many examples of hyperbolic and non-hyperbolic free-by-free groups. We apply this criterion to finite index subgroups of $F_2\rtimes F_n$ to show incoherence of all such groups, and to other similar classes of groups.' author: - Robert Kropholler and Genevieve Walsh title: 'Incoherence of Many Free-by-Free groups' --- Introduction {#sec:intro} ============ We begin with a definition. A group is [*coherent*]{} if every finitely generated subgroup is finitely presented. A group $G$ is called [*incoherent*]{} if $G$ has a finitely generated subgroup $H$ which is not finitely presented, and we call such a subgroup a [*witness to incoherence*]{}. There are many examples of incoherent groups. For example, $F_2 \times F_2$ is well known to be incoherent. A construction of many incoherent groups is given by Rips [@Ripsconstruction]. Here we present some substantial evidence towards the following conjecture, which was independently and previously made by Dani Wise: Let $ G = F_m \rtimes F_n$, where $m, n \geq 2$. Then $G$ is incoherent. In this paper we show that this conjecture is true when $G = F_m \rtimes F_n$ and $rk(H^1(G)) \geq n+1$ in Theorem \[thm:excessive\] and for $G = F_2 \rtimes F_n$ in Theorem \[thm:f2fn\]. Note that in contrast, $F_m \rtimes \mathbb{Z}$ is always coherent [@FeignHandel]. The techniques of this paper are inspired by two examples. The first is Bowditch and Mess’s example of an incoherent hyperbolic 4-manifold group [@BowditchMess]. To construct this example, one starts with a closed 3-manifold $M$ which contains a totally geodesic surface $S_g$, such that $M$ is also fibered with fiber $F$. They then consider the space $M\cup_{S_g}M$. This space is homotopy equivalent to a compact quotient of a convex subset of $\mathbb{H}^4$ by a subgroup of ${\mathrm{Isom}}(\mathbb{H}^4)$.
This is a higher-dimensional analog of gluing two closed surfaces together along a geodesic and thickening to get a convex co-compact Kleinian manifold. They show, among other things, that the resulting hyperbolic 4-manifold has incoherent fundamental group. The witness to incoherence is obtained by taking the subgroup generated by the two fibers. This subgroup can be written as an amalgamated free product as $F\ast_{F\cap S_g}F$. The second example we present here is a proof that $F_2\times F_2$ is incoherent. We can write $F_2\times F_2$ as an amalgamated free product as $(F_2\times {\mathbb{Z}})\ast_{F_2}(F_2\times {\mathbb{Z}}) = \langle a, b, s\rangle\ast_{\langle a, b\rangle}\langle a, b, t\rangle$. We can take a non-standard fibration of each side and get fibers $\langle as^{-1}, bs^{-1}\rangle$ and $\langle at^{-1}, bt^{-1}\rangle$. The union of these two fibers generates a witness to incoherence, which can be written as an amalgamated free product $\langle as^{-1}, bs^{-1}\rangle \ast_{\langle a, b\rangle\cap \langle at^{-1}, bt^{-1}\rangle}\langle at^{-1}, bt^{-1}\rangle$. The analogs of these constructions here are Corollaries \[cor:fibering\] and \[cor:freebyfree\]. One should note that in both examples we have omitted a key detail: namely, that the amalgamating subgroups $F\cap S_g$ and $\langle a, b\rangle\cap \langle at^{-1}, bt^{-1}\rangle$ are not finitely generated. This is because, as discussed later, free groups and surface groups do not algebraically fiber. Our results are quite general and apply to constructions involving finitely generated groups which do not algebraically fiber. Most of our results follow from our main theorem. If $G = H \rtimes F_k$, then we say $G$ has [*excessive homology*]{} if $rk(H^1(G, \mathbb{R})) \geq k+1$. Let $G = H \rtimes F_k$, where $H$ is finitely generated and does not algebraically fiber. If $G$ has excessive homology, then $G$ is incoherent. Some consequences are as follows. Let $G = F_m \rtimes F_n $.
If $G$ has excessive homology, then $G$ is incoherent. A group is [*virtually special*]{} if it virtually acts co-specially on a $\operatorname{CAT}(0)$ cube complex. Let $G = H \rtimes F_n$, $n \geq 2$, where $H$ is either a closed hyperbolic surface group or a free group of rank $\geq 2$. If $G$ is virtually special, then $G$ is incoherent. \[thm:F2\] Let $G = F_2 \rtimes F_n$. Then $G$ is incoherent. The general plan of the paper is as follows. In Section \[sec:fiber\] we discuss algebraic fiberings and review some results of Bieri-Neumann-Strebel that we will use. In Section \[sec:coherence\] we make some general remarks on coherence. Our main Theorem \[thm:excessive\] is proven in Section \[sec:Hbyfree\], as well as several corollaries and related results including Corollary \[cor:freebyfree\] and Theorem \[thm:cube\]. Theorem \[thm:f2fn\] is proven in Section \[sec:F2\]. [**Acknowledgements:**]{} We wish to thank the other participants of the Emerging Topics Workshop: “Coherence and Quasiconvex subgroups” held at The Institute for Advanced Study in March 2019 for productive conversations. The second author was partially supported through NSF DMS-1709964. Background on algebraic fiberings {#sec:fiber} ================================= We say that a group $G$ [*algebraically fibers*]{} if there exists a map to ${\mathbb{Z}}$ with finitely generated kernel. There are many examples of groups that do not algebraically fiber: for instance, $F_n$, $n\geq 2$; $\pi_1(S_g)$, $g\geq2$; and $BS(1, n)$. The set of algebraic fibers of a group is analogous to the fibers of a hyperbolic 3-manifold, and there is a well-developed theory of algebraic fibers (which is part of a more comprehensive theory) similar to the theory of fibrations [@Thurstonnorm]. The [*character sphere of $G$*]{}, $S(G)$, is $H^1(G, {\mathbb{R}})\smallsetminus \{0\}/\sim$, where $\chi\sim\chi'$ if there is $\lambda\in {\mathbb{R}}_+$ such that $\chi = \lambda\chi'$.
These are equivalence classes of maps $G \rightarrow {\mathbb{R}}$, and we call elements of $S(G)$ [*characters.*]{} Bieri, Neumann and Strebel described the following invariant in [@BNS87]. Also see [@Strebelnotes]. We follow notation and give specific references from these notes. The [*BNS invariant*]{} $\Sigma^1(G)$ is the set of characters $\chi \in S(G)$ such that the full subgraph of $\mathrm{Cay}(G, S)$ on the vertices $v$ with $\chi(v)\geq0$ is a connected graph. [@Strebelnotes Corollary A4.3] Let $\chi\in S(G)$. Then $\ker(\chi)$ is finitely generated if and only if $\chi, -\chi\in \Sigma^1(G)$. [@Strebelnotes Corollary A3.3] $\Sigma^1(G)$ is an open subset of $S(G)$. Suppose that $rk(H^1(G;{\mathbb{R}}))\geq 2$ and there exists $\chi$ with finitely generated kernel. Then there are infinitely many other $\chi'\colon G\to{\mathbb{Z}}$ such that $\ker(\chi')$ is finitely generated. Background on incoherence {#sec:coherence} ========================= While many groups are known to be incoherent by a result of Rips [@Ripsconstruction], the most important concrete example is $F_2 \times F_2$. The original proof is attributed to Stallings and uses the second homology of the group. The proof in Section \[sec:intro\] is related to our techniques. We provide the key detail missing from Section \[sec:intro\] here. To create a finitely generated but not finitely presented group we use the following lemma from [@BowditchMess], where it is attributed to B. Neumann. \[notfinpresented\][@BowditchMess] (B. Neumann) Let $G_1, G_2$ be finitely generated groups. Let $G = G_1\ast_H G_2$. If $H$ is not finitely generated, then $G$ is not finitely presented. Now consider $F_2 \times F_2 = \langle a,b\rangle \times \langle s,t\rangle$. Consider the map $\phi$ to $\mathbb{Z}$ that sends each generator to 1. Then let $ H= \langle as^{-1}, bs^{-1}\rangle *_K \langle at^{-1},bt^{-1}\rangle$, where $K$ is the kernel of $\phi$ restricted to $\langle a,b\rangle$.
Since free groups do not algebraically fiber, $K$ must be infinitely generated and the incoherence of $F_2 \times F_2$ follows from Neumann’s theorem. Since any group that contains an incoherent group is incoherent, we have immediately that the right-angled Coxeter group on the graph $K_{3,3}$ is incoherent. However, to illustrate the subtlety of the problem, we note that if even one edge is subdivided, this group is coherent. To see this we use the following result of Karrass and Solitar. [@KarrassSolitar] Let $G, G'$ be coherent groups. Let $H$ be a subgroup of $G$ and $G'$. If every subgroup of $H$ is finitely generated, then $G\ast_H G'$ is coherent. Therefore, if we subdivide even one edge of the $K_{3,3}$ graph, the resulting right-angled Coxeter group is coherent. Indeed, we can write this new group as a free product with amalgamation of two groups on planar graphs over a virtually cyclic group. Since a right-angled Coxeter group defined by a planar graph is virtually a 3-manifold group [@DavisOkun Theorem 11.4.1], both of these groups are coherent. Therefore, Karrass and Solitar’s result implies that the right-angled Coxeter group on the subdivided graph is coherent. More generally, the same argument shows that: The right-angled Coxeter group on the barycentric subdivision of any graph is coherent. Incoherence of $H$-by-free groups {#sec:Hbyfree} ================================= We will be studying groups of the form $H\rtimes F_k$. It will be useful to first have an understanding of the homology of such a group. \[lem:homologyoffreebyfree\] Let $G = H\rtimes F_k$ be a finitely presented group. Let $\phi_1, \dots, \phi_k$ be the corresponding automorphisms. Let $\Phi_i$ be the automorphism induced on the abelianisation of $H$. Then $H_1(G;{\mathbb{Z}}) = {\mathbb{Z}}^k\times (H_1(H;{\mathbb{Z}})/\langle(\Phi_i - I)(H_1(H;{\mathbb{Z}}))\rangle)$, where $I$ is the identity matrix.
Let $H = \langle a_1, \dots, a_n\mid R\rangle.$ There is a presentation for $G$ of the form $$\langle a_1, \dots, a_n, t_1, \dots, t_k\mid t_ia_jt_i^{-1} = \phi_i(a_j), R\rangle.$$ Thus, when we abelianise we arrive at ${\mathbb{Z}}^{k}\oplus H_1(H;{\mathbb{Z}})$ with the extra relations that $\phi_i(a_j) = a_j$. We replace $\phi_i$ with the map $\Phi_i$ on the abelianisation and rewrite the relation as $\Phi_i(a_j)-a_j$ or $(\Phi_i - I)(a_j)$. Thus, we arrive at the desired conclusion. The proof of the fact that $H^1(G;{\mathbb{R}}) = {\mathbb{R}}^k\times (H^1(H;{\mathbb{R}})/\langle(\Phi_i - I)(H^1(H;{\mathbb{R}}))\rangle)$ is extremely similar. Given an extension $H\rtimes F_k$ there is a natural map $F_k\to \mathrm{Aut}(H)$. This also gives a natural map $F_k\to\mathrm{Out}(H)$. Our main interest is when $H$ contains a non-abelian free subgroup. In this case we can reduce to the case that the map $F_k\to\mathrm{Out}(H)$ is injective. Let $G = H\rtimes F_k$. Suppose that $H$ contains a non-abelian free subgroup and that the natural map $F_k\to\mathrm{Out}(H)$ is not injective. Then $G$ contains a copy of $F_2\times F_2$. Thus, $G$ is incoherent. Since the map $F_k\to\mathrm{Out}(H)$ is not injective, let $s$ be a non-trivial element of the kernel and let $t$ be a conjugate of $s$ in $F_k$ such that $s$, $t$ generate a free subgroup. Let $a, b\in H$ be generators of a free subgroup of $H$. Then $s$ and $t$ both act by conjugation on $H$, that is, $s^{-1}as = g_sag_s^{-1}, s^{-1}bs = g_sbg_s^{-1}, t^{-1}at = g_tag_t^{-1}$ and $t^{-1}bt = g_tbg_t^{-1}$, for some $g_s$ and $g_t$ in $H$. Thus we can see that $\langle a, b, sg_s, tg_t\rangle$ is a copy of $F_2\times F_2$. Thus for the most part we will be interested in cases where the map $F_k\to\mathrm{Out}(H)$ is injective. In general, we will consider the case that $H$ does not algebraically fiber. Let $G_i = H\rtimes_{\phi_i}{\mathbb{Z}}$, where $H$ does not algebraically fiber.
Suppose that $\alpha_i\colon G_i\to{\mathbb{Z}}$ are homomorphisms such that $\alpha_1|_{H} = \alpha_2|_{H}$ is non-trivial. Suppose further that $K_i = \ker(\alpha_i)$ is finitely generated. Then $G = G_1\ast_H G_2$ is incoherent. We know that $K_i\cap H$ is also the kernel of a homomorphism to ${\mathbb{Z}}$ and so is infinitely generated since $H$ does not algebraically fiber. Furthermore, since $\alpha_1|_{H} = \alpha_2|_{H}$ we know that $K_1\cap H = K_2\cap H = L$. Let $N$ be the subgroup of $G$ generated by $K_1$ and $K_2$. We can write $N$ as an amalgamated free product $K_1\ast_{L}K_2$. This is the amalgamated free product of two finitely generated groups over an infinitely generated group and so is not finitely presented by Theorem \[notfinpresented\]. \[thm:incohwithhomology\] Let $G_i = H\rtimes_{\phi_i}{\mathbb{Z}}$. Assume that $H$ is finitely generated and does not algebraically fiber. Let $G = G_1\ast_HG_2$. If $H^1(G; {\mathbb{R}})$ has rank $\geq 3$, then $G$ is incoherent. Let $G_1 = \langle H, s\rangle$ and $G_2 = \langle H, t\rangle$. Let $\alpha_s\colon G\to {\mathbb{Z}}$ be the homomorphism defined by counting the exponent sum of $s$, and define $\alpha_t$ similarly. Since $H^1(G;{\mathbb{R}})$ has rank $\geq 3$, there is another class $\gamma\colon G\to {\mathbb{Z}}$ which is not in the span of $\alpha_s$ and $\alpha_t$. We now use the BNS invariant $\Sigma^1(G_i)$ to find other fiberings of $G_i= H \rtimes_{\phi_i} {\mathbb{Z}}$. Note that $\Sigma^1(G_i)$ is not empty since $H$ is finitely generated. Consider the homomorphisms $\beta_1 = a\gamma|_{G_1} + \alpha_s|_{G_1}$ and $\beta_2 = a\gamma|_{G_2} + \alpha_t|_{G_2}$. These define homomorphisms $\beta_i\colon G_i\to{\mathbb{R}}$.
If we pick $a$ to be rational, we can assume that the images of $\beta_1, \beta_2$ are cyclic subgroups. Since the BNS invariant is an open subset of the character sphere, we can take $a$ small enough so that the kernels of $\beta_1$ and $\beta_2$ are finitely generated subgroups $K_1$ and $K_2$ of $G_1$ and $G_2$, respectively. By construction, $K_1\cap H = \ker(\beta_1|_{H}) = \ker(a\gamma|_{H}) = \ker(\beta_2|_{H}) = K_2\cap H$. Since $H$ doesn’t algebraically fiber, $K_1\cap H$ is not finitely generated. Thus the group $\langle K_1, K_2\rangle = K_1\ast_{K_1\cap H}K_2$ is finitely generated but not finitely presented (by Theorem \[notfinpresented\]) and is a witness for incoherence. Note here that $G$ can also be written as $H \rtimes F_2$, and we are requiring that $G$ has one more map to ${\mathbb{Z}}$ than comes from the natural map to $F_2$. Recall that if $G = H \rtimes F_k$, we say that $G$ has [*excessive homology*]{} if $rk (H_1(G; {\mathbb{R}})) \geq k+1$. \[thm:excessive\] Let $G = H \rtimes F_k$, where $H$ is finitely generated and does not algebraically fiber. If $G$ has excessive homology, then $G$ is incoherent. The condition that $G$ has excessive homology can be rephrased as follows: there is a map $\gamma$ from $G$ to ${\mathbb{Z}}$ such that some element of $H$ has non-trivial image. Consider the subgroup of $G$ given by $N = H\rtimes F_2$, where the $F_2$ is generated by any two elements of $F_k$. We see that $N$ has excessive homology from the restriction of $\gamma$ to $N$. We can now appeal to Theorem \[thm:incohwithhomology\], deducing that $N$ is incoherent and hence so is $G$.
Note that there are many different ways to get witnesses to incoherence. For example, we could have followed the proof of Theorem \[thm:incohwithhomology\] replacing 2 maps by $k$ maps. We now detail corollaries to this theorem. \[cor:fibering\] Let $M_1, M_2$ be two fibered 3-manifolds with isomorphic fiber $S_g, g\geq 2$. Let $X = M_1\cup _{S_g}M_2$. Suppose that $H^1(X;{\mathbb{R}})$ has rank $\geq 3$. Then $\pi_1(X)$ is incoherent. As noted previously, fundamental groups of surfaces do not algebraically fiber. Thus we can apply Theorem \[thm:incohwithhomology\]. \[cor:freebyfree\] Let $G = F_m \rtimes F_n$. If $H^1(G;{\mathbb{R}})$ has rank $\geq n +1$, then $G$ is incoherent. \[thm:cube\] Let $G = H \rtimes F_n$, $n \geq 2$, where $H$ is either a closed hyperbolic surface group or a free group of rank $\geq 2$. If $G$ is virtually special, then $G$ is incoherent. Since $G$ is virtually special, it is virtually a subgroup of a right-angled Artin group by [@HaglundWise]. Such groups virtually retract onto their quasi-convex subgroups [@Haglund Theorem F]. Now consider a virtual retraction onto some cyclic subgroup of $H < H \rtimes F_n$. This gives a map $G' = H' \rtimes F_l \rightarrow \mathbb{Z}$ which shows that $G'$ has excessive homology, since it is not a linear combination of the maps which are retracts to cyclic subgroups of $F_l$. Then by Theorem \[thm:excessive\], $G'$ is incoherent and so is $G$. In the case that $G$ above is hyperbolic, cubulation is a sufficient hypothesis. Indeed, if a group is hyperbolic and $\operatorname{CAT}(0)$ cubulated, it virtually acts co-specially on a $\operatorname{CAT}(0)$ cube complex by [@Agol]. By the work of Gersten [@Ger] one can show that there are $F_m \rtimes F_n$ groups with $m,n \geq 2$ which are not $\operatorname{CAT}(0)$. Suppose that $G = H\rtimes F_k$ where $H^1(H;{\mathbb{R}}) = {\mathbb{R}}$. Suppose further that $H$ is finitely generated and does not algebraically fiber.
Then $G$ is incoherent. The action of $F_k$ on $H^1(H;{\mathbb{R}})$ preserves the one-dimensional vector space $H^1(H;{\mathbb{R}})$, so an index two subgroup of $F_k$ acts on it as the identity. Thus there is an index two subgroup of $G$ with excessive homology by Lemma \[lem:homologyoffreebyfree\]. Then by Theorem \[thm:excessive\], $G$ is incoherent. For example, if $|m|\neq |n|$, then $G = BS(n, m)\rtimes F_k$ is incoherent, since $BS(n,m)$ has one-dimensional first cohomology.

Free of rank 2 by free is incoherent. {#sec:F2}
======================================

\[thm:f2fn\] Let $G = F_2\rtimes F_n$, where $n \geq 2$. Then $G$ is incoherent. Let $F_2$ be the free group generated by $a, b$, and let $G = F_2\rtimes F_n$ be a free-by-free group. We have seen that if the natural map $F_n\to \mathrm{Aut}(F_2)$ is not injective, then $G$ contains $F_2\times F_2$ and so is incoherent. Thus we will assume that $F_n\to\mathrm{Aut}(F_2)$ is injective. Let $H$ be the subgroup of $F_2$ generated by $\{a, b^2, bab^{-1}\}$. This subgroup is not characteristic; however, it is preserved by an index 3 subgroup of $\mathrm{Aut}(F_2)$. Define two automorphisms of $F_2$ by: $$\begin{aligned} \lambda:a&\mapsto ab &\rho:a&\mapsto a\\ b&\mapsto b &b&\mapsto ba \end{aligned}$$ The automorphisms $\lambda, \rho$ generate $\mathrm{Out}(F_2) = SL_2({\mathbb{Z}})$. There are three index two subgroups of $F_2$ and this set is preserved by any automorphism. Thus we can consider the action of $\lambda$ and $\rho$ on this set and determine the stabilizer of any one of these subgroups in terms of these generators. By inspection, the subgroup $H$ is preserved by $\lambda^2, \rho, \lambda\rho^2\lambda^{-1}$ and $\lambda\rho\lambda\rho^{-1}\lambda^{-1}$. These four elements generate an index 3 subgroup of $\mathrm{Out}(F_2)$.
Thus we will pass to a finite index subgroup of $G$ which is of the form $G_1 = H\rtimes F$, where $F$ is a free group. The action of $F$ on $H$ is the restriction of the original action (a subgroup of ${\mathrm{Aut}}(F_2)$) intersected with the index 3 subgroup above. We can compute the homology of $G_1$ from Lemma \[lem:homologyoffreebyfree\]. So we must compute the matrices corresponding to elements of $F$. We will in fact compute the matrices for the generators of the index 3 subgroup above. We arrive at $$\begin{aligned} \Phi_{\lambda^2} &=\left(\begin{matrix} 1&0&0\\ 1&1&1\\ 0&0&1 \end{matrix}\right) &\Phi_{\rho} &=\left(\begin{matrix} 1&1&0\\ 0&1&0\\ 0&1&1 \end{matrix}\right)\\ \Phi_{\lambda \rho^2 \lambda^{-1}} &=\left(\begin{matrix} 0&2&-1\\ -1&3&-1\\ -1&2&0 \end{matrix}\right) &\Phi_{\lambda \rho\lambda\rho^{-1}\lambda^{-1}} &=\left(\begin{matrix} 2&-1&1\\ 2&-1&2\\ 1&-1&2 \end{matrix}\right) \end{aligned}$$ After subtracting the identity matrix from each of the above, we can compute the span of the images. The images of $\Phi_{\lambda^2} - I$, $\Phi_{\rho} -I$, $\Phi_{\lambda \rho^2 \lambda^{-1}} -I$, and $\Phi_{\lambda \rho\lambda\rho^{-1}\lambda^{-1}} -I$ are generated by $\begin{bmatrix} 0 \\ 1 \\ 0\\ \end{bmatrix}$, $\begin{bmatrix} 1 \\ 0 \\ 1\\ \end{bmatrix}$, $\begin{bmatrix} 1 \\ 1 \\ 1\\ \end{bmatrix}$, and $\begin{bmatrix} 1 \\ 2 \\ 1\\ \end{bmatrix}$, respectively. Together these vectors span only a two-dimensional subspace of $H_1(H;{\mathbb{R}})\cong{\mathbb{R}}^3$, so some class in $H_1(H;{\mathbb{R}})$ survives in the abelianization of $G_1$, and $G_1$ has excessive homology. We can now apply Theorem \[thm:excessive\] and see that $G_1$ is incoherent and hence $G$ is incoherent. If we have a group of the form $G = F_2\rtimes F_n$ where the natural map $F_n\to\mathrm{Out}(F_2)$ is injective, then $G$ embeds in $\mathrm{Aut}(F_2)$. Indeed, $F_2$ is a subgroup of ${\mathrm{Aut}}(F_2)$. This provides an alternative proof that $\mathrm{Aut}(F_2)$ is incoherent. This was originally proved by Cameron Gordon in [@Gordoncoherence]. [@Gordoncoherence] ${\mathrm{Aut}}(F_2)$ is incoherent.
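The rank computation above is easy to check by machine; a short sketch (ours) verifying that the four images together span only a two-dimensional subspace:

```python
import numpy as np

# The matrices Φ from the text, acting on H_1(H; Z) ≅ Z^3.
I = np.eye(3, dtype=int)
Phis = [
    np.array([[1, 0, 0], [1, 1, 1], [0, 0, 1]]),      # Φ_{λ²}
    np.array([[1, 1, 0], [0, 1, 0], [0, 1, 1]]),      # Φ_ρ
    np.array([[0, 2, -1], [-1, 3, -1], [-1, 2, 0]]),  # Φ_{λρ²λ⁻¹}
    np.array([[2, -1, 1], [2, -1, 2], [1, -1, 2]]),   # Φ_{λρλρ⁻¹λ⁻¹}
]

# Each Φ − I has rank one, with image spanned by the column vector
# listed in the text.
assert all(np.linalg.matrix_rank(P - I) == 1 for P in Phis)

# Stacking all four images: the combined span is 2-dimensional, so a
# rank-one quotient of H_1(H; R) survives in H_1(G_1; R).
combined = np.hstack([P - I for P in Phis])
assert np.linalg.matrix_rank(combined) == 2
```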
In light of Theorem \[thm:f2fn\], we outline a strategy for how one may attempt to prove that all groups of the form $G = F_k\rtimes F_l$ are incoherent. There is a universal group of the form $F_k\rtimes F_n$ coming from a surjection $F_n\to {\mathrm{Out}}(F_k)$. If one could prove that this group has a finite index subgroup with excessive homology, then all subgroups of the form $F_k\rtimes F_l$ would also have excessive homology in a finite index subgroup. We note that ${\mathrm{Out}}(F_k)$ can be very different from ${\mathrm{Out}}(F_2)$ for large $k$. [10]{} I. Agol, *The virtual Haken conjecture*. With an appendix by I. Agol, D. Groves and J. Manning. Documenta Math. [**18**]{} 1045–1087 (2013).\ R. Bieri, *Normal subgroups in duality groups and in groups of cohomological dimension 2*, J. Pure and App. Alg., [**7**]{} 35–51 (1976).\ R. Bieri, W. D. Neumann, and R. Strebel, *A geometric invariant of discrete groups*, Invent. Math., [**90**]{} no. 3, 451–477 (1987).\ B. H. Bowditch and G. Mess, *A 4-dimensional Kleinian group*, Trans. Amer. Math. Soc., [**344**]{}, No. 1, 391–405 (1994).\ M. R. Bridson and A. Haefliger, *Metric spaces of non-positive curvature*, Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\], [**319**]{}, Springer-Verlag, Berlin (1999).\ M. W. Davis and B. Okun, *Vanishing theorems and conjectures for the $l^2$-homology of right-angled Coxeter groups*. Geom. Topol., [**5**]{} 7–74 (2001).\ M. Feighn and M. Handel, *Mapping tori of free group automorphisms are coherent*, Ann. Math. [**149**]{} 1061–1077 (1999).\ S. M. Gersten, *The automorphism group of a free group is not a $\operatorname{CAT}(0)$ group*, Proc. Amer. Math. Soc. [**121**]{} 999–1002 (1994).\ C. McA. Gordon, *Artin groups, 3-manifolds and coherence*. Bol. Soc. Mat. Mexicana (3) [**10**]{} Special Issue, 193–198 (2004).\ F. Haglund, [*Finite index subgroups of graph products*]{}. Geom. Dedicata [**135**]{} 167–209 (2008).\ F.
Haglund and D. T. Wise, *Special cube complexes*. Geom. Funct. Anal. [**17**]{} no. 5, 1551–1620 (2008).\ A. Karrass and D. Solitar, *The subgroups of a free product of two groups with an amalgamated subgroup*. Trans. Amer. Math. Soc. [**150**]{} 227–255 (1970).\ E. Rips, *Subgroups of small cancellation groups*, Bull. London Math. Soc. [**14**]{} no. 1, 45–47 (1982).\ G. P. Scott, *Finitely generated 3-manifold groups are finitely presented*. J. London Math. Soc. (2) [**6**]{} 437–440 (1973).\ R. Strebel, *Notes on the Sigma invariants*, Version 2, preprint. arXiv:1204.0214v2 (2013).\ E. L. Swenson, *Quasi-convex subgroups of isometries of negatively curved spaces*, Topology and its Applications, [**110**]{} 119–129 (2001).\ W. P. Thurston, [*A norm for the homology of 3-manifolds*]{}. Mem. Amer. Math. Soc. [**59**]{} no. 339, i–vi and 99–130 (1986).\
--- author: - Andrey Tydnyuk date: 'November 6, 2006' --- **Rational Solution of the KZ equation (example)** Andrey Tydnyuk E-mail address: andrey.tydniouk@verizon.net\ 735 Crawford Ave., Brooklyn, NY 11223, USA.\ [Abstract]{} We investigate the Knizhnik-Zamolodchikov linear differential system. The coefficients of this system are rational functions. We prove that the solution of the KZ system is rational when $k=2$ and $n=3$. In doing so, we find the coefficients of the expansion in a neighborhood of a singular point. **Mathematics Subject Classification (2000).** Primary 34M05; Secondary 34M55, 47B38.\ **Keywords:** Symmetric group, linear differential system, rational solution. [Introduction]{} The Knizhnik-Zamolodchikov differential system has the form \[1\],\[2\]: $$\frac{dW}{dz}=2A(z)W,$$ where $A(z)$ and $W(z)$ are $3{\times}3$ matrices and $z_{1}{\ne}z_{2}$. We suppose that $A(z)$ has the form $$A(z)=\frac{P_{1}}{z-z_{1}}+\frac{P_{2}}{z-z_{2}}.$$ Here: $$P_{1}=\left[\begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array}\right]$$ $$P_{2}=\left[\begin{array}{ccc} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{array}\right]$$ The matrices $P_{1}$ and $P_{2}$ are connected with the matrix representation of the symmetric group. In this paper, we consider the case of the symmetric group $S_{3}$. We prove that in this case the solution of the Knizhnik-Zamolodchikov system is rational. We find the coefficients of the Laurent expansion in the neighborhood of the point $z_{1}$. We use the method of L. Sakhnovich \[3\].
MAIN NOTIONS ============ In a neighborhood of $z_{1}$ the matrix function $A(z)$ can be represented in the form: $$A(z)=\frac{a_{-1}}{z-z_{1}}+a_{0}+a_{1}(z-z_{1})+...\quad,$$ where $$a_{-1}=P_{1} ,\quad a_{r}=(-1)^{r}\frac{P_{2}}{(z_{2}-z_{1})^{r+1}},\quad r{\geq}0.$$ **Proposition 1.1** (see \[3\]; a necessary and sufficient condition) *If the matrix system* $$(q+1)b_{q+1}=2\sum_{j+\ell=q}a_{j}b_{\ell},\quad-2{\leq}q+1{\leq}2$$ *has a solution $b_{-2}, \quad b_{-1}, \quad b_{0}, \quad b_{1}, \quad b_{2}$ with $b_{-2}{\ne}0$, then system (1) has a solution* $$W(z)=\sum_{p{\geq}-2}b_{p}(z-z_{1})^{p}, \quad b_{-2}{\ne}0.$$ System (1.3) can be written in the following form: $$b_{-2}=I_{3}-P_{1}$$ $$b_{-1}=-2(I_{3}+2P_{1})^{-1}a_{0}b_{-2}$$ $$b_{0}=-P_{1}(a_{0}b_{-1}+a_{1}b_{-2})$$ $$b_{1}=2(I_{3}-2P_{1})^{-1}(a_{0}b_{0}+a_{1}b_{-1}+a_{2}b_{-2})$$ $$(I_{3}-P_{1})b_{2}=(a_{0}b_{1}+a_{1}b_{0}+a_{2}b_{-1}+a_{3}b_{-2})$$ Direct calculations show that: $$b_{-2}=\left[\begin{array}{ccc} 1 & -1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 0 \\ \end{array}\right],$$ $$b_{-1}=\frac{1}{-9(z_{2}-z_{1})}\left[\begin{array}{ccc} -12 & 12 & 0 \\ 6 & -6 & 0 \\ 6 & -6 & 0 \\ \end{array}\right],$$ $$b_{0}=\frac{1}{-9(z_{2}-z_{1})^{2}}\left[\begin{array}{ccc} 3 & -3 & 0 \\ -6 & 6 & 0 \\ 3 & -3 & 0 \\ \end{array}\right],$$ $$b_{1}=\frac{1}{-9(z_{2}-z_{1})^{3}}\left[\begin{array}{ccc} 6 & -6 & 0 \\ 6 & -6 & 0 \\ -12 & 12 & 0 \\ \end{array}\right].$$ It follows from system (1.3) and the relations (1.10)-(1.13) that $$(I_{3}-P_{1})b_{2}=\frac{1}{-9(z_{2}-z_{1})^{4}}\left[\begin{array}{ccc} 1 & -1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 0 \\ \end{array}\right].$$ It is easy to see that equation (1.14) has a solution. Using Proposition 1.1 we obtain the statement.\ **Proposition 1.2** *Differential system $(0.1)$ has a rational fundamental solution.* [References]{} 1. Chervov A., Talalaev D., KZ equation, G-opers, quantum Drinfeld-Sokolov reduction and quantum Cayley-Hamilton identity, arXiv:hep-th/0607250, 2006.\ 2.
Etingof P. I., Frenkel I. B., Kirillov A. A., Jr., Lectures on Representation Theory and Knizhnik-Zamolodchikov Equations, Amer. Math. Soc., 1998.\ 3. Sakhnovich L. A., Meromorphic Solutions of Linear Differential Systems, Painleve Type Functions. Preprint (to appear).
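As a check on the computations in the main text, the coefficient matrices $b_{-2}$, $b_{-1}$, $b_{0}$ can be verified numerically; a short sketch (ours), setting $z_{2}-z_{1}=1$ for convenience:

```python
import numpy as np

# Numerical check of the Laurent coefficients of the KZ solution
# (matrices P1, P2 and the recursion from the text; z2 − z1 = 1).
P1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
P2 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=float)
I3 = np.eye(3)
a0, a1 = P2, -P2                 # a_r = (−1)^r P2 / (z2 − z1)^(r+1)

b_m2 = I3 - P1
b_m1 = -2.0 * np.linalg.inv(I3 + 2.0 * P1) @ (a0 @ b_m2)
b_0 = -P1 @ (a0 @ b_m1 + a1 @ b_m2)

assert np.allclose(b_m1, np.array([[-12, 12, 0], [6, -6, 0], [6, -6, 0]]) / -9.0)
assert np.allclose(b_0, np.array([[3, -3, 0], [-6, 6, 0], [3, -3, 0]]) / -9.0)
```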
--- abstract: 'We present an updated measurement of the $B_s^0$ lifetime using the semileptonic decays $B_s^0\rightarrow D_s^-\mu^+\nu X$, with $D_s^- \to \phi \pi^-$ and $\phi \to K^+K^-$ (and the charge conjugate process). This measurement uses the full Tevatron Run II sample of proton-antiproton collisions at $\sqrt{s} = 1.96$ TeV, comprising an integrated luminosity of 10.4 fb$^{-1}$. We find a flavor-specific lifetime $\tau_{\mathrm{fs}}(B_s^0)=1.479\pm0.010\thinspace{\rm(stat)}\pm0.021\thinspace{\rm(syst)}\thinspace\rm{ps}$. This technique is also used to determine the $B^0$ lifetime using the analogous $B^0\to D^-\mu^+\nu X$ decay with $D^-\to\phi\pi^-$ and $\phi\to K^+K^-$, yielding $\tau(B^0)=1.534\pm0.019\thinspace{\rm(stat)}\pm0.021\thinspace{\rm(syst)}\thinspace\rm{ps}$. Both measurements are consistent with the current world averages, and the $B_s^0$ lifetime measurement is one of the most precise to date. Taking advantage of the cancellation of systematic uncertainties, we determine the lifetime ratio $\tau_{\mathrm{fs}}(B_s^0)/\tau(B^0) = 0.964\pm0.013\thinspace{\rm(stat)}\pm0.007\thinspace{\rm(syst)}$.' date: 'Received 7 October 2014; revised manuscript received 1 December 2014; published 9 February 2015' title: 'Measurement of the $B_s^0$ lifetime in the flavor-specific decay channel $B_s^0\rightarrow D_s^-\mu^+\nu X$' --- The decays of hadrons containing a $b$ quark are dominated by the weak interaction of the $b$ quark. In first-order calculations, the decay widths of these hadrons are independent of the flavor of the accompanying light quark(s). Higher-order predictions break this symmetry, with the spectator quarks having roles in the time evolution of the $B$ hadron decay [@theory1; @theory2]. The flavor dependence leads to an expected lifetime hierarchy of $\tau(B_c)<\tau(\Lambda_b)<\tau(B_s^0) \approx \tau(B^0) < \tau(B^+)$, which has been observed experimentally [@pdg].
The ratios of the lifetimes of different $b$ hadrons are precisely predicted by heavy quark effective theories and provide a way to experimentally study these higher-order effects, and to test for possible new physics beyond the standard model [@bobeth]. Existing measurements are in excellent agreement with predictions [@pdg] for the lifetime ratio $\tau(B^+)/\tau(B^0)$, but until recently the experimental precision has been insufficient to test the corresponding theoretical prediction for $\tau(B_s^0)/\tau(B^0)$. In particular, predictions using inputs from unquenched lattice QCD calculations give $0.996 < \tau(B_s^0)/\tau(B^0) < 1$ [@theory2]. More precise measurements of both the $B_s^0$ lifetime and its ratio to those of its lighter counterparts are needed to test and refine the models. A flavor-specific final state such as $B_s^0\to D_s^-\mu^+\nu$ is one where the charges of the decay products can be used to determine whether the meson was a $B^0_s$ or $\bar{B}{}^0_s$ at the time of decay. As a consequence of neutral $B$ meson flavor oscillations, the $B_s^0$ lifetime as measured in semileptonic decays is actually a combination of the lifetimes of the heavy and light mass eigenstates, with an equal mixture of these two states at time $t=0$. If the resulting superposition of two exponential distributions is fitted with a single exponential function, one obtains to second order [@theory4] $$\begin{aligned} \tau_{\mathrm{fs}}(B_s^0) = \frac{1}{\Gamma_s} \cdot \frac{1 + (\Delta \Gamma_s / 2\Gamma_s)^2}{1 - (\Delta \Gamma_s / 2\Gamma_s)^2},\end{aligned}$$ where $\Gamma_s = (\Gamma_{sL} + \Gamma_{sH})/2$ is the average decay width of the light and heavy states, and $\Delta\Gamma_s$ is the difference $\Gamma_{sL} - \Gamma_{sH}$. This dependence makes the flavor-specific lifetime an important parameter in global fits [@hfag] used to extract $\Delta\Gamma_s$, and hence, to constrain possible $CP$ violation in the mixing and interference of $B_s^0$ mesons.
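The size of the second-order correction in the equation above is easily evaluated; a sketch with illustrative numbers (ours, not the measured inputs):

```python
# τ_fs = (1/Γ_s) · (1 + y²) / (1 − y²), with y = ΔΓ_s / (2Γ_s).
# Hypothetical values, for illustration only.
tau_avg = 1.51          # 1/Γ_s in ps
y = 0.065               # ΔΓ_s / 2Γ_s

tau_fs = tau_avg * (1 + y**2) / (1 - y**2)

# For y of a few percent the shift from 1/Γ_s is below the percent
# level, and agrees with the expansion (1/Γ_s)(1 + 2y²) to high accuracy.
assert abs(tau_fs - tau_avg * (1 + 2 * y**2)) < 1e-4
assert 0 < (tau_fs - tau_avg) / tau_avg < 0.01
```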
Previous measurements have been performed by the CDF [@cdf], D0 [@slprl], and LHCb [@lhcb; @lhcb2] Collaborations, with additional earlier measurements from LEP [@lep]. During Run II of the Tevatron collider, from 2002 to 2011, the D0 detector [@d0det] accumulated 10.4 fb$^{-1}$ of $p\bar{p}$ collisions at a center-of-mass energy of $1.96~\mathrm{TeV}$. We present a precise measurement of the $B_s^0$ lifetime that uses the flavor-specific decay $B_s^0\rightarrow D_s^-\mu^+\nu X$, with $D_s^- \to \phi \pi^-$ and $\phi \to K^+K^-$ [@conjugate], selected from this dataset. It supersedes our previous measurement [@slprl]. A detailed description of the D0 detector can be found elsewhere [@d0det]. The data for this analysis were collected with a single muon trigger. Events are considered for selection if they contain a muon candidate identified through signatures both inside and outside the toroid magnet [@d0det]. The muon must be associated with a central track, have transverse momentum ($p_T$) exceeding 2.0 GeV$/c$, and a total momentum of $p > 3.0$ GeV$/c$. Candidate $B_s^0 \to D_s^-\mu^+\nu X$ decays are reconstructed by first combining two charged particle tracks of opposite charge, which are assigned the charged kaon mass. Both tracks must satisfy $p_T > 1.0~\mathrm{GeV/}c$, and the invariant mass of the two-kaon system must be consistent with a $\phi$ meson, $1.008~\mathrm{GeV/}c^2 < M(K^+K^-) < 1.032~\mathrm{GeV/}c^2$. This $\phi$ candidate is then combined with a third track, assigned the charged pion mass, to form a $D_s^- \to \phi \pi^-$ candidate. The pion candidate must have $p_T > 0.7~\mathrm{GeV/}c$, and the invariant mass of the $\phi \pi^-$ system must lie within a window that includes the $D_s^-$ meson, $1.73~\mathrm{GeV/}c^2 < M(\phi\pi^-) < 2.18~\mathrm{GeV/}c^2$. The combinatorial background is reduced by requiring that the three tracks create a common $D_s^-$ vertex as described in Ref. [@durham].
Lastly, each $D_s^-$ meson candidate is combined with the muon to reconstruct a $B_s^0$ candidate. The invariant mass must be within the range $3~\mathrm{GeV/}c^2 < M(D_s^-\mu^+) < 5~\mathrm{GeV/}c^2$. All four tracks must be associated with the same $p\bar{p}$ interaction vertex (PV), and have hits in the silicon and fiber tracker detectors. Muon and pion tracks from genuine $B_s^0$ decays must have opposite charges, which defines the right-sign sample. The wrong-sign sample is also retained to help constrain the background model. In the right-sign sample, the reconstructed $D_s^-$ meson is required to be displaced from the PV in the same direction as its momentum in order to reduce background. The flavor-specific $B_s^0$ lifetime, $\tau({B_s^0})$, can be related to the decay kinematics in the transverse plane, $c\tau({B_s^0}) = L_{xy}M/p_T(B_s^0)$, where $M$ is the $B_s^0$ mass, taken as the world average [@pdg], and $L_{xy} = \vec{X}\cdot\vec{p}_T/|\vec{p}_T|$ is the transverse decay length, where $\vec{X}$ is the displacement vector from the PV to the secondary vertex in the transverse plane. Since the neutrino is not detected, and the soft hadrons and photons from decays of excited charmed states are not explicitly included in the reconstruction, the $p_T$ of the $B_s^0$ meson cannot be fully reconstructed. Instead, we use the combined $p_T$ of the muon and $D_s^-$ meson, $p_T(D_s^-\mu^+)$. The reconstructed parameter is the pseudoproper decay length, PPDL $= L_{xy}M/p_T(D_s^-\mu^+)$. To model the effects of the missing $p_T$ and of the momentum resolution when the $B_s^0$ lifetime is extracted from the PPDL distribution, a correction factor $K$ is introduced, defined by $K = p_T(D_s^-\mu^+)/p_T(B_s^0)$. It is extracted from a Monte Carlo (MC) simulation, separately for a number of specific decays comprising both signal and background components. 
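The kinematic definitions above amount to a few lines of vector algebra; a toy sketch (numbers ours, purely illustrative):

```python
import numpy as np

# Pseudoproper decay length: PPDL = L_xy · M / p_T(D_s μ), with
# L_xy the transverse displacement projected onto p_T.
M_BS = 5.367                                   # B_s^0 mass in GeV/c²
X = np.array([0.12, 0.05])                     # PV → vertex displacement, cm
pT = np.array([8.0, 3.0])                      # p_T(D_s μ) in GeV/c

L_xy = X @ pT / np.linalg.norm(pT)             # transverse decay length, cm
ppdl = L_xy * M_BS / np.linalg.norm(pT)        # pseudoproper decay length, cm

# c·τ(B_s) = K · PPDL, where K = p_T(D_s μ)/p_T(B_s) is taken from MC.
assert 0.08 < ppdl < 0.083
```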
MC samples are produced using the [pythia]{} event generator [@pythia] to model the production and hadronization phase, interfaced with [evtgen]{} [@evtgen] to model the decays of $b$ and $c$ hadrons. The events are passed through a detailed [geant]{} simulation of the detector [@geant] and additional algorithms to reproduce the effects of digitization, detector noise, and pileup. All selection cuts described above are applied to the simulated events. To ensure that the simulation fully describes the data, and in particular to account for the effect of muon triggers, we weight the MC events to reproduce the muon transverse momentum distribution observed in data. [l>X]{} Decay channel & Contribution\ $D_s^-\mu^+\nu_{\mu}$ & $(27.5\pm2.4)\%$\ $D_s^{*-}\mu^+\nu_{\mu}\times(D_s^{*-}\rightarrow D_s^-\gamma/D_s^-\pi^0)$ & $(66.2\pm4.4)\%$\ $D_{s(J)}^{*-}\mu^+\nu_{\mu}\times(D_{s(J)}^{*-}\rightarrow D_s^{*-}\pi^0/D_s^{*-}\gamma)$ & $(0.4\pm5.3)\%$\ $D_s^{(*)-}\tau^+\nu_{\tau}\times (\tau^+\rightarrow \mu^+\bar{\nu}_{\mu}\nu_{\tau})$ & $(5.9\pm2.7)\%$\ Table \[table1\] summarizes the semileptonic $B_s^0$ decays that contribute to the $D_s^-\mu^+$ signal. Experimentally, these processes differ only in the varying amount of energy lost to missing decay products, which is reflected in the final $K$-factor distribution. Table \[table2\] shows the list of non-negligible processes from subsequent semileptonic charm decays which also contribute to the signal. These two tables represent the sample composition of the $D_s^-\mu^+$ signal.
[l>X]{} Decay channel & Contribution\ $B^+\rightarrow D_s^-DX$ & $(3.81\pm0.75)\%$\ $B^0\rightarrow D_s^-DX$ & $(4.13\pm0.70)\%$\ $B_s^0\rightarrow D_s^-D_s^{(*)}X$ & $(1.11\pm0.36)\%$\ $B_s^0\rightarrow D_s^-DX$ & $(0.92\pm0.44)\%$\ $c\bar{c}\rightarrow D_s^-\mu^+$ & $(9.53\pm1.65)\%$\ We partition the dataset into five data-collection periods, separated by accelerator shutdowns, each comprising $1$–$3$ fb$^{-1}$ of integrated luminosity, to take into account time- or luminosity-dependent effects. The behavior and overall contribution of the dominant combinatorial backgrounds changed as the collider, detector, and trigger conditions evolved over the course of the Tevatron Run II. Figure \[fig1\] shows the $M(\phi \pi^-)$ invariant mass distribution for the right-sign $D_s^- \mu^+$ candidates for one of these data periods. Lifetimes are extracted separately for each period; they are consistent within uncertainties and a weighted average is made for the final measurement. The MC weighting as a function of $p_T$ is performed separately for each of the five data samples. The $K$ factors are extracted independently in each sample, with significant shifts observed due to the changing trigger conditions. The $K$-factor distribution peaks at $\approx 0.9$ for the $D_s^-$ signal and at $\approx 0.8$ for the first four backgrounds listed in Table \[table2\]. The $K$-factor distribution populates $0.5 < K < 1$ for both the signal and background components. ![Distributions of the invariant mass $M(\phi\pi^-)$ for $D_s^-\mu^+$ candidates passing all selection criteria in one of the five data periods. The higher-mass peak is the $D_s^-$ signal, with a smaller $D^-$ peak at lower mass.
Sidebands for the right-sign sample are indicated with dashed lines, and the corresponding distribution for the wrong-sign sample is also shown.[]{data-label="fig1"}](fig1.eps){width="\columnwidth"} To determine the number of events in the signal region and define the signal and background samples, we fit a model to the $M(\phi\pi^-)$ invariant mass distribution as shown in Fig. \[fig1\]. The $D_s^-$ and $D^-$ mass peaks are each modeled using an independent Gaussian distribution to represent the detector mass resolution, and a second-order polynomial is used to model the combinatorial background. Using the information obtained from these fits, we define the signal sample (SS) as those events in the $M(\phi\pi^-)$ mass distribution that are within $\pm 2\sigma$ of the fitted mean $D_s^-$ meson mass, where $\sigma$ is the Gaussian width of the $D_s^-$ mass peak obtained from the fit. We find a total of $72\thinspace 028 \pm 727$ $D_s^-\mu^+$ signal events in the full dataset. Yields observed in the different periods are consistent with expectations taking into account changing trigger conditions and detector performance. The background sample (BS) includes those events in the sidebands of the $D_s^-$ mass distribution given by $-9\sigma$ to $-7\sigma$ and $+7\sigma$ to $+9\sigma$ from the fitted mean mass. Wrong-sign events in the full $M(\phi\pi^-)$ range are also included in the background sample, yielding more events to constrain the behavior of the combinatorial background. The extraction of the flavor-specific $B_s^0$ lifetime is performed using an unbinned maximum likelihood fit to the data, based on the PPDL of each candidate [@jorgethesis]. The effects of finite $L_{xy}$ resolution of the detector and the $K$ factors are included in this fit to relate the underlying decay time of the candidates to the corresponding observed quantity.
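A toy version of the mass fit described earlier — a single Gaussian peak on a smooth background (the real fit uses two Gaussians and a second-order polynomial); all numbers below are ours and purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
m_lo, m_hi, m_ds, width = 1.73, 2.18, 1.97, 0.02   # GeV/c², hypothetical
data = np.concatenate([rng.normal(m_ds, width, 20_000),   # toy "D_s" peak
                       rng.uniform(m_lo, m_hi, 80_000)])  # toy background
counts, edges = np.histogram(data, bins=90, range=(m_lo, m_hi))
centers = 0.5 * (edges[:-1] + edges[1:])

def model(m, n_sig, mu, sigma, c0, c1, c2):
    # Gaussian peak plus quadratic background, as in the text's fit model.
    return n_sig * np.exp(-0.5 * ((m - mu) / sigma) ** 2) + c0 + c1*m + c2*m**2

popt, _ = curve_fit(model, centers, counts,
                    p0=[counts.max(), 1.97, 0.02, counts.min(), 0.0, 0.0])
mu_fit, sigma_fit = popt[1], abs(popt[2])

# Signal window: ±2σ around the fitted mean, as in the analysis.
assert abs(mu_fit - m_ds) < 0.005
assert mu_fit - 2 * sigma_fit < m_ds < mu_fit + 2 * sigma_fit
```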
The signal and background samples defined above are fitted simultaneously, with a single shared set of parameters used to model the combinatorial background shape. To validate the lifetime measurement method, we perform a simultaneous fit of the $B^0$ lifetime using the Cabibbo suppressed decay $B^0\rightarrow D^-\mu^{+} X$ seen in Fig. \[fig1\] at lower masses. This measurement also enables the ratio $\tau_{\rm{fs}}({B^0_s})/\tau({B^0})$ to be measured with high precision, since the dominant systematic uncertainties are highly correlated between the two lifetime measurements. For simplicity, the details of the fitting function are illustrated for the $B_s^0$ lifetime fit alone. In practice an additional likelihood product is included to extract the $B^0$ lifetime in an identical manner. The likelihood function $\mathcal{L}$ is defined as $$\mathcal{L} = \prod_{i\in \text{SS}}[ f_{D_s\mu}\mathcal{F}_{D_s\mu}^{i} + (1-f_{D_s\mu})\mathcal{F}_{\text{comb}}^{i}] \prod_{j\in \text{BS}}\mathcal{F}_{\text{comb}}^{j}\thinspace , \label{pdf}$$ where $f_{D_s\mu}$ is the fraction of $D_s^-\mu^+$ candidate events in the signal sample, obtained from the fit of the $D_s^-$ mass distribution, and $\mathcal{F}_{D_s\mu\text{(comb)}}^{i}$ is the candidate (combinatorial background) probability density function (PDF) evaluated for the $i{\text{th}}$ event. The probability density $\mathcal{F}_{D_s\mu}^i$ is given by $$\begin{aligned} \mathcal{F}_{D_s\mu}^i &=& f_{\bar{c}c}F_{\bar{c}c}^i + f_{B1}F_{B1}^i + f_{B2}F_{B2}^i + f_{B3}F_{B3}^i + f_{B4}F_{B4}^i \nonumber \\ &+& \Big(1-f_{\bar{c}c} - f_{B1} - f_{B2} - f_{B3} - f_{B4}\Big)F_s^i. \label{SigLL}\end{aligned}$$ Each factor $f_X$ is the expected fraction of a particular component $X$ in the signal sample, obtained from simulations and listed in Tables \[table1\] and \[table2\]. The first term accounts for the prompt $c\bar{c}$ component, and the decays $B1$–$B4$ represent the first four components listed in Table \[table2\]. 
The last term of the sum in Eq. (\[SigLL\]) represents the signal events $S \equiv (B_s^0\rightarrow D_s^-\mu^+\nu X)$ listed in Table \[table1\]. The factor $F_{\bar{c}c}$ is the lifetime PDF for the $\bar{c}c$ events, given by a Gaussian distribution with a mean of zero and a free width. Each $B$ decay mode is associated with a separate PDF, $F_{X}$, modeling the PPDL distribution, given by an exponential decay convoluted with a resolution function and with the $K$-factor distribution. All $B$-meson decays are subject to the same PPDL resolution function. A double-Gaussian distribution is used for the resolution function, with widths given by the event-by-event PPDL uncertainty determined from the $B^0_s$ candidate vertex fit, multiplied by two overall scale factors; these scale factors and the relative weight of the two Gaussian components are all allowed to vary in the fit. The combinatorial background PDF, $\mathcal{F}_{\text{comb}}$, is chosen empirically to provide a good fit to the combinatorial background PPDL distribution. It is defined as the sum of the double-Gaussian resolution function and two exponential decay functions for both the positive and negative PPDL regions. The shorter-lived exponential decays are fixed to have the same slope for positive and negative regions, while different slopes are allowed for the longer-lived exponential decays. Figure \[fig2\] shows the PPDL distribution for one of the five data periods for the signal sample, along with the comparison with the fit model. The corresponding values of $\chi^2$ per degree of freedom for the five data-taking periods are $1.58$, $1.21$, $1.29$, $1.18$, and $1.14$. ![Top: PPDL distribution for $D_s^-\mu^+$ candidates in the signal sample for one of the five data periods. The projections of the lifetime fitting model, the background function, and the signal function are superimposed.
Bottom: fit residuals demonstrating the agreement between the data and the fit model.[]{data-label="fig2"}](fig2.eps){width="\columnwidth"} The corresponding $B^0$ lifetime measurement uses exactly the same procedure for events in the $D^-$ mass peak, including the calculation of dedicated $K$ factors and of background contributions from semileptonic decays. The lifetime fitting procedure is tested using MC pseudoexperiments, in which the generated $B_{(s)}^0$ lifetime is set to a range of different values and the full fit is performed on the simulated data. Good agreement is found between the input and extracted lifetimes in all cases. As an additional cross-check, the data are divided into pairs of subsamples, and the fit is performed separately for each subsample. The divisions correspond to low and high $p_T(B_{(s)}^0)$, central and forward pseudorapidity $|\eta(B_{(s)}^0)|$ regions, and $B_{(s)}^0$ versus $\bar{B}_{(s)}^0$ decays. In all cases the measured lifetimes are consistent within uncertainties. To evaluate systematic uncertainties on the measurements of $c\tau({B_s^0})$, $c\tau({B^0})$, and the ratio $\tau_{\rm{fs}}({B_s^0})/\tau({B^0})$, we consider the following possible sources: modeling of the decay length resolution, combinatorial background evaluation, $K$-factor determination, background contributions from charm semileptonic decays, the signal fraction, and the alignment of the detector. All other sources investigated are found to be negligible. The effect of possible mismodeling of the decay length resolution is tested by repeating the lifetime fit with an alternative resolution model that uses a single Gaussian component. A systematic uncertainty is assigned based on the shift in the measured lifetime. We repeat the fit with different combinatorial background samples, using either only the sideband data or only the wrong-sign sample. The maximum deviation from the central lifetime measurement is assigned as a systematic uncertainty. 
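The pseudoexperiment closure test described above can be illustrated with a self-contained toy. This is a sketch under simplifying assumptions that are ours, not the analysis setup: a single exponential lifetime component, a single-Gaussian resolution, no $K$-factor smearing, and a simple grid-scan maximum-likelihood fit in place of the full fitter:

```python
import numpy as np
from math import erfc, sqrt

def emg_nll(x, ctau, sigma):
    """-log L for an exponential PPDL distribution (slope ctau) convoluted
    with a Gaussian resolution of width sigma (all lengths in microns)."""
    shift = sigma / ctau - x / sigma
    log_pdf = (-np.log(2.0 * ctau) + sigma**2 / (2.0 * ctau**2) - x / ctau
               + np.log([erfc(v / sqrt(2.0)) for v in shift]))
    return -log_pdf.sum()

rng = np.random.default_rng(1)
CTAU_IN, SIGMA, N = 443.0, 30.0, 4000          # toy generation settings
x = rng.exponential(CTAU_IN, N) + rng.normal(0.0, SIGMA, N)  # toy PPDL values

scan = np.arange(400.0, 491.0, 1.0)
ctau_fit = scan[int(np.argmin([emg_nll(x, c, SIGMA) for c in scan]))]
```

With a few thousand events the extracted $c\tau$ agrees with the input within the expected statistical spread, mirroring the closure observed in the full pseudoexperiments.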
To determine the effect of uncertainties in the $K$ factors for the signal events, the fractions of the different components are varied within the uncertainties given in Table \[table1\]. We also recalculate the $K$ factors using different MC decay models [@evtgen] that lead to a harder $p_T$ distribution of the generated $B$ hadrons. The fraction of each component from semileptonic decays is varied within its uncertainties, and the shift in the measured lifetime is used to assign a systematic uncertainty. The signal fraction parameter, $f_{D_s\mu}$, is fixed for each mass fit performed. We vary this parameter within its statistical and systematic uncertainty, obtained from variations of the background and signal models of the mass PDFs, and assign the observed deviation as the uncertainty arising from this source. Finally, to assess the effect of possible detector misalignment, a single MC sample is passed through two different reconstruction algorithms, corresponding to the nominal detector alignment and to an alternative model with the tracking detector elements shifted spatially within their uncertainties. The observed change in the lifetime is taken as the systematic uncertainty due to alignment. Table \[TableSummary\] lists the contributions to the systematic uncertainty from all sources considered. The most significant effect comes from the combinatorial background determination. Correlations between the systematic uncertainties of the $B_s^0$ and $B^0$ meson lifetimes are taken into account when evaluating the effect on the lifetime ratio, for which the $K$-factor determination dominates. 
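As a numerical cross-check of Table \[TableSummary\], the individual contributions can be combined in quadrature (assuming, as the table's totals indicate, that the sources are treated as uncorrelated within each column; the dictionary below simply transcribes the $B_s^0$ column):

```python
from math import sqrt

# Systematic contributions to c*tau(B_s^0), in microns (Table TableSummary).
delta_ctau_bs = {
    "resolution": 0.7,
    "combinatorial background": 5.0,
    "K factor": 1.6,
    "semileptonic components": 2.6,
    "signal fraction": 1.0,
    "detector alignment": 2.0,
}

total = sqrt(sum(v**2 for v in delta_ctau_bs.values()))
print(round(total, 1))  # 6.3, the quoted total systematic uncertainty
```

The same sum over the $B^0$ column and the ratio column reproduces the quoted totals of $6.4\,\mu\rm{m}$ and $0.007$.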
\begin{tabular}{lccc}
Uncertainty source & $\Delta (c\tau_{B_s^0})$ $(\mu\rm{m})$ & $\Delta (c\tau_{B^0})$ $(\mu\rm{m})$ & $\Delta R$ \\
Resolution & $0.7$ & $2.1$ & $0.003$ \\
Combinatorial background & $5.0$ & $4.9$ & $0.001$ \\
$K$ factor & $1.6$ & $1.3$ & $0.006$ \\
Semileptonic components & $2.6$ & $2.0$ & $0.001$ \\
Signal fraction & $1.0$ & $1.8$ & $0.002$ \\
Alignment of the detector & $2.0$ & $2.0$ & $0.000$ \\
Total & $6.3$ & $6.4$ & $0.007$ \\
\end{tabular}
The measured flavor-specific lifetime of the $B_s^0$ meson is $c\tau_{\rm{fs}}({B_s^0}) = 443.3 \pm 2.9 \thinspace{\rm(stat)} \pm 6.3 \thinspace{\rm(syst)}\thinspace\mu\rm{m}$, which is consistent with the current world average of $439.2 \pm 9.3\thinspace\mu\rm{m}$ [@pdg; @hfag] and has a smaller total uncertainty of $6.9\thinspace\mu\rm{m}$. The uncertainty in this measurement is dominated by systematic effects. The $B^0$ lifetime in the semileptonic decay $B^0\rightarrow D^-\mu^+\nu X$ is measured to be $c\tau({B^0}) = 459.8 \pm 5.6 \thinspace{\rm(stat)} \pm 6.4 \thinspace{\rm(syst)}\thinspace\mu\rm{m}$, consistent with the world average of $c\tau({B^0})= 455.4\pm 1.5\thinspace\mu\rm{m}$ [@pdg]. Using both lifetimes obtained in the current analysis, their ratio is determined to be $\tau_{\rm{fs}}({B_s^0})/\tau({B^0}) = 0.964 \pm 0.013 \thinspace{\rm(stat)} \pm 0.007 \thinspace{\rm(syst)}$. Both results are in reasonable agreement with theoretical predictions based on lattice QCD [@theory1; @theory2]; the flavor-specific lifetime is more precise than the current world average [@pdg; @hfag] and agrees well with the slightly more precise recent measurement from the LHCb Collaboration [@lhcb2]. In summary, we measure the $B_s^0$ lifetime in the inclusive semileptonic channel $B_s^0\rightarrow D_s^-\mu^+\nu X$ and obtain one of the most precise determinations of the flavor-specific $B_s^0$ lifetime. Combining this result and that of Ref. 
[@lhcb2] with global fits of lifetime measurements in $B_s^0 \to J/\psi K^+ K^-$ decays [@hfag] gives the most precise determination of the fundamental parameters $\Delta\Gamma_s$ and $\Gamma_s$, which are important for constraining $CP$ violation in the $B_s^0$ system. Our precise measurement of the ratio of the $B_s^0$ and $B^0$ lifetimes can be used to test and refine theoretical QCD predictions and offers a sensitive test of new physics [@bobeth]. [99]{} D. Becirevic, Proc. Sci. HEP (2001) 098; M. Neubert and C.T. Sachrajda, Nucl. Phys. B [**483**]{}, 339 (1997). A. Lenz and U. Nierste, J. High Energy Phys. 06 (2007) 072, and recent update arXiv:1102.4274. K.A. Olive [*et al.*]{} (Particle Data Group), Chin. Phys. C [**38**]{}, 090001 (2014). C. Bobeth, U. Haisch, A. Lenz, B. Pecjak, and G. Tetlalmatzi-Xolocotzi, J. High Energy Phys. 06 (2014) 040. K. Hartkorn and H.-G. Moser, Eur. Phys. J. C [**8**]{}, 381 (1999). Y. Amhis [*et al.*]{} (Heavy Flavor Averaging Group Collaboration), arXiv:1207.1158, with web update at <http://www.slac.stanford.edu/xorg/hfag/osc/PDG_2014/>. T. Aaltonen [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**107**]{}, 272001 (2011); F. Abe [*et al.*]{} (CDF Collaboration), Phys. Rev. D [**59**]{}, 032004 (1999). V.M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. Lett. [**97**]{}, 241801 (2006). R. Aaij [*et al.*]{} (LHCb Collaboration), Phys. Rev. Lett. [**112**]{}, 111802 (2014); R. Aaij [*et al.*]{} (LHCb Collaboration), Phys. Lett. B [**736**]{}, 446 (2014). R. Aaij [*et al.*]{} (LHCb Collaboration), Phys. Rev. Lett. [**113**]{}, 172001 (2014). P. Abreu [*et al.*]{} (DELPHI Collaboration), Eur. Phys. J. C [**16**]{}, 555 (2000); K. Ackerstaff [*et al.*]{} (OPAL Collaboration), Phys. Lett. B [**426**]{}, 161 (1998); D. Buskulic [*et al.*]{} (ALEPH Collaboration), Phys. Lett. B [**377**]{}, 205 (1996). V.M. Abazov [*et al.*]{} (D0 Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. 
A [**565**]{}, 463 (2006). Charge conjugation is implied throughout this article. J. Abdallah [*et al.*]{} (DELPHI Collaboration), Eur. Phys. J. C [**32**]{}, 185 (2004). T. Sjöstrand, P. Edén, C. Friberg, L. Lönnblad, G. Miu, S. Mrenna, and E. Norrbin, Comput. Phys. Commun. [**135**]{}, 238 (2001); we use version 6.409. D.J. Lange, Nucl. Instrum. Methods Phys. Res., Sect. A [**462**]{}, 152 (2001). R. Brun [*et al.*]{}, CERN Report No. CERN-DD-EE-84-1, 1987. J. Martínez Ortega, Ph.D. thesis, Cinvestav, FERMILAB-THESIS-2012-60, 2012 (unpublished), <http://inspirehep.net/record/1315759>.
--- abstract: 'We discuss in detail the option to access the transversity distribution function $h_1(x)$ by utilizing the analyzing power of interference fragmentation functions in two-pion production inside the same current jet. The transverse polarization of the fragmenting quark is related to the transverse component of the relative momentum of the hadron pair via a new azimuthal angle. As a specific example, we spell out thoroughly the way to extract $h_1(x)$ from a measured single spin asymmetry in two-pion inclusive lepton-nucleon scattering. To estimate the sizes of observable effects we employ a spectator model for the fragmentation functions. The resulting asymmetry of our example is discussed as arising in different scenarios for the transversity.' address: - | Dipartimento di Fisica Nucleare e Teorica, Università di Pavia, and\ Istituto Nazionale di Fisica Nucleare, Sezione di Pavia, I-27100 Pavia, Italy - 'Fachbereich Physik, Universität Wuppertal, D-42097 Wuppertal, Germany' - | Dipartimento di Chimica e Fisica per i Materiali e per l’Ingegneria,\ Università di Brescia, I-25133 Brescia, Italy author: - Marco Radici - Rainer Jakob - Andrea Bianconi title: | [Preprint\ WU B 01-09]{} Accessing transversity with\ interference fragmentation functions --- Introduction {#sec:intro} ============ At leading power in the hard scale $Q$, the quark content of a nucleon state is completely characterized by three distribution functions (DF). They describe the quark momentum and spin with respect to a preferred longitudinal direction induced by a hard scattering process. Two of them, the momentum distribution $f_1$ and the longitudinal spin distribution $g_1$, have been reliably extracted from experiments and accurately parametrized. Their knowledge has contributed greatly to studies of the quark-gluon substructure of the nucleon. 
The third one, the transversity distribution $h_1$, measures the probability difference of finding the quark polarization parallel versus antiparallel to the transverse polarization of a nucleon target. Therefore, it correlates quarks with opposite chiralities and is usually referred to as a “chiral-odd” function. Since hard scattering processes in QCD preserve chirality at leading twist, $h_1$ is difficult to measure and is systematically suppressed like ${\cal O}(1/Q)$, for example, in inclusive deep inelastic scattering (DIS). A chiral-odd partner is needed to filter the transversity out of the cross section. Historically, the so-called double spin asymmetry (DSA) in Drell-Yan processes with two transversely polarized protons ($p^\uparrow$) was suggested first [@Ralston:1979ys]. However, the transversity distribution $h_1$ for antiquarks in the proton is presumably small [@Jaffe:1997yz]. Moreover, an upper limit for the DSA, derived in a next-to-leading order analysis using the Soffer bounds on transversity, was found to be discouragingly low [@Martin:1998rz]. As for DIS, semi-inclusive reactions need to be considered in order to provide the chiral-odd partner to $h_1$. In fact, in this case new functions enter the game, the fragmentation functions (FF), which give information on hadronic structure complementary to that delivered by the DF. At leading twist, the FF describe the hadron content of quarks and, more generally, they contain information on the hadronization process leading to the detected hadrons; as such, they also give information on the quark content of hadrons that are not (or do not even exist as) stable targets. The FF are also universal, but they are presently less well known than the DF because a very high resolution and good particle identification are required in the detection of the final state. 
Since pions are the most abundant particles detected in the calorimeter, it would be natural to consider semi-inclusive processes where a single collinear pion is detected together with the final lepton inelastically scattered from a transversely polarized nucleon target. However, $h_1$ would appear convoluted with a chiral-odd fragmentation function only at twist three and would therefore be suppressed like ${\cal O}(1/Q)$ [@Jaffe:1992ra]. It seems more convenient to select the rarer final state where a polarized $\Lambda$ decays into protons and pions [@Jaffe:1993xb]. The analysis of the decay products reveals the $\Lambda$ polarization, and a DSA isolates at leading twist a contribution proportional to the product of $h_1$ and a chiral-odd FF, $H_1$, which describes how a transversely polarized quark $q^\uparrow$ fragments into a transversely polarized $\Lambda^\uparrow$. But again, as in the case of the Drell-Yan DSA, low rates are expected here too because few $\Lambda$ particles are produced in a hard reaction. Moreover, the theoretical knowledge of the mechanisms that determine the polarization transfer $q^\uparrow \rightarrow \Lambda^\uparrow$ (i.e. of $H_1$) is not yet firmly established. For all these reasons, building single spin asymmetries (SSA) seems to be a better strategy, i.e. considering DIS or $p-p$ processes where only one particle (the target) is transversely polarized, but selecting more complicated final states. The most famous example is the Collins effect: the analyzing power of the transverse polarization of the fragmenting quark is represented by the transverse component of the momentum of the detected hadron with respect to the current jet axis. The typical reactions would therefore be a semi-inclusive DIS, $ep^\uparrow \rightarrow e'\pi X$, or $p^\uparrow p \rightarrow \pi X$, where the pion is detected not collinear with the jet axis. 
At leading twist, a specific SSA allows for the deconvolution of $h_1$ from the so-called Collins function $H_1^\perp$, the prototype of a new class of FF, the interference FF, which are not only chiral-odd but also [*naive*]{} T-odd: in the absence of two or more reaction channels with a significant relative phase, they are forbidden by time-reversal invariance [@Collins:1993kk]. From the experimental point of view, extraction of $h_1$ via the Collins effect is quite a demanding task, because it requires the complete determination of the transverse momentum of the detected hadron (though first observations of a non-zero SSA have been reported [@Airapetian:2000tv]). On the other side, it is not sufficient to limit the theoretical analysis to leading order. Because of the explicit dependence on an intrinsic transverse momentum, some soft gluon divergences (introduced by loop corrections to the tree-level result) do not cancel and must be summed up in Sudakov form factors. The net result is a dilution of the transverse momentum distribution of the fragmenting quark and a final suppression of the SSA, particularly when the fragmentation process involves another scale very different from the hard one $Q$, such as the transverse momentum of the produced hadron in the Collins effect. The same phenomenon happens “squared” in $e^+e^-$ processes, which, consequently, do not help in determining the Collins function [@Boer:2001he]. Moreover, modelling this interference FF by definition requires the ability to give a microscopic description of the relevant phase produced by the quantum interference of the different channels leading to the same detected hadron: a very difficult task that implies a description of the structure of the residual jet (as discussed in [@Bianconi:2000cd]), or the introduction of dressed quark propagators [@Collins:1993kk], which may be effectively modelled, for instance, by pion loop corrections [@Bacchetta:2001di]. 
As a better alternative, the SSA with detection of two unpolarized leading hadrons inside the same jet was suggested [@Collins:1994ax; @Collins:1994kq; @Jaffe:1998hf]. In a previous work, we have discussed the general framework for the interference FF arising in this case [@Bianconi:2000cd]. Assuming that the residual interaction between each leading hadron and the undetected jet is of higher order than the one between the two hadrons themselves, the main result was that $h_1$ gets factorized at leading twist through a novel interference FF, $H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$, that relates the transverse polarization of the fragmenting $q^\uparrow$ to the relative motion of the two detected hadrons. This new analyzing power, $H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$, filters out $h_1$ in a very advantageous way, because collinear factorization holds, which leads to an exact cancellation of all collinear divergences and makes the evolution equations much simpler. Moreover, it is also easier to model the residual interaction between the two hadrons only. In another previous work, we have presented a model calculation for the case of the two hadrons being a $\pi$ and a $p$ with invariant mass close to the Roper resonance [@Bianconi:2000uc]. In the present paper we carry the calculation on to the experimentally more relevant case of $\pi^+ \pi^-$ production with invariant mass close to the $\rho$ resonance, and we discuss some of the practical details for the extraction of $h_1$ from a spin asymmetry in semi-inclusive lepton-nucleon DIS. This observable should be accessible, for instance, at HERMES (once the transversely polarized target becomes operative) or, even better, at COMPASS (because of the higher counting rates); it will also be a very interesting quantity for future options in hadronic physics such as ELFE, TESLA-N and EIC. 
However, as we emphasize, the calculation of the $\pi^+\pi^-$ fragmentation is process independent and could also be most useful for the spin physics program at RHIC, where the extraction of transversity is planned via a SSA in $p-p$ reactions. The rest of the paper is organized as follows. In Sec. \[subsec:iff\] we briefly recall the kinematics and the properties of the FF arising when a transversely polarized quark fragments into two unpolarized leading hadrons in the same current jet. Then, in Sec. \[subsec:h1fromssa\] we specialize the formulae to the case of semi-inclusive lepton-nucleon DIS and detail the strategy for building a SSA that allows for the extraction of $h_1$ at leading twist. In Sec. \[sec:spectator\] we consider the two hadrons to be two charged pions with invariant mass around the $\rho$ resonance and we explicitly calculate both the (process-independent) interference FF and the SSA (for semi-inclusive lepton-nucleon DIS) in the spectator model approximation. In Sec. \[sec:results\] results are presented and discussed. Conclusions and an outlook are given in Sec. \[sec:end\]. Single spin asymmetry for two hadron-inclusive lepton-nucleon DIS {#sec:ssa} ================================================================= In this Section, we discuss the general properties of two-hadron interference FF when the kinematics is specialized to semi-inclusive DIS, and for this process we work out the formula for a SSA that isolates the transversity at leading twist. However, we emphasize that under the assumption of factorization the soft parts of the process, i.e. the DF and the interference FF, are universal objects and, therefore, the results can be generalized to other hard processes, such as proton-proton scattering. 
Interference Fragmentation Functions in semi-inclusive DIS {#subsec:iff} ---------------------------------------------------------- At leading order, the hadron tensor for two unpolarized hadron-inclusive lepton-nucleon DIS reads [@Bianconi:2000cd] $$\begin{aligned} 2M\, {\cal W}^{\mu\nu} &=& \int \d p^-\,\d k^+\,\d^2{\vec p}_{{\scriptscriptstyle T}}^{}\,\d^2{\vec k}_{{\scriptscriptstyle T}}^{}\; \delta^2\!\left({\vec p}_{{\scriptscriptstyle T}}^{}+{\vec q}_{{\scriptscriptstyle T}}^{}-{\vec k}_{{\scriptscriptstyle T}}^{}\right) \mbox{Tr}\big[ \; \Phi(p;P,S) \; \gamma^\mu \; \Delta(k;P_1,P_2) \; \gamma^\nu \; \big] \Big|_{\tiny\begin{array}{c} p^+ = x P^+ \\ k^- = P_h^-/z \end{array}} {\nonumber}\\ & & {}+ \left(\begin{array}{c} q\leftrightarrow -q \\ \mu \leftrightarrow \nu \end{array} \right) \; , \label{eq:tensor}\end{aligned}$$ where $M$ is the target mass. The kinematics, also depicted in Fig. \[fig:handbag\], represents a nucleon with momentum $P (P^2=M^2)$ and a virtual hard photon with momentum $q$ that hits a quark carrying a fraction $p^+ = x P^+$ of the parent hadron momentum. We describe a 4-vector $a$ as ${\left[\;a^-\;,\;a^+\;,\;{\vec a}_{{\scriptscriptstyle T}}\;\right]}$ in terms of its light-cone components $a^\pm = (a^0\pm a^3)/\sqrt{2}$ and a transverse bidimensional vector ${\vec a}_{{\scriptscriptstyle T}}$. Because of momentum conservation in the hard vertex, the scattered quark has momentum $k=p+q$, and it fragments into two unpolarized hadrons, which carry a fraction $(P_1+P_2)^-\equiv P_h^- = z k^-$ of the “parent quark” momentum, and the rest of the jet. The quark-quark correlator $\Phi$ describes the nonperturbative processes that make the parton $p$ emerge from the spin-1/2 target, and it is symbolized by the lower shaded blob in Fig. \[fig:handbag\]. 
Using Lorentz invariance, hermiticity and parity invariance, the partly-integrated $\Phi$ can be parametrized at leading twist in terms of DF as $$\begin{aligned} \Phi(x,{\vec p}_{{\scriptscriptstyle T}}) &\equiv & \left. \int \d p^-\;\Phi(p;P,S) \right|_{p^+ = x P^+} \!\!\!\!=\frac{1}{2}\,\Biggl\{ f_1\, {{\kern 0.2 em n\kern -0.45em /}}_+ + f_{1T}^\perp\, \epsilon_{\mu \nu \rho \sigma}\gamma^\mu \frac{n_+^\nu p_{{\scriptscriptstyle T}}^\rho S_{{{\scriptscriptstyle T}}}^\sigma}{M} - \left(\lambda\,g_{1L} +\frac{({\vec p}_{{\scriptscriptstyle T}}\cdot{\vec S}_{{{\scriptscriptstyle T}}})}{M}\,g_{1T}\right) {{\kern 0.2 em n\kern -0.45em /}}_+ \gamma_5 {\nonumber}\\[2 mm] && {}- h_{1T}\,i\sigma_{\mu\nu}\gamma_5 S_{{{\scriptscriptstyle T}}}^\mu n_+^\nu - \left(\lambda\,h_{1L}^\perp +\frac{({\vec p}_{{\scriptscriptstyle T}}\cdot{\vec S}_{{{\scriptscriptstyle T}}})}{M}\,h_{1T}^\perp\right)\, \frac{i\sigma_{\mu\nu}\gamma_5 p_{{\scriptscriptstyle T}}^\mu n_+^\nu}{M} + h_1^\perp \, \frac{\sigma_{\mu\nu} p_{{\scriptscriptstyle T}}^\mu n_+^\nu}{M}\Biggl\} \; , \label{eq:phi}\end{aligned}$$ where the DF depend on $x, {\vec p}_{{\scriptscriptstyle T}}$ and the polarization state of the target is fully specified by the light-cone helicity $\lambda = M S^+ / P^+$ and the transverse component ${\vec S}_{{\scriptscriptstyle T}}$ of the target spin. Similarly, the correlator $\Delta$, symbolized by the upper shaded blob in Fig. 
\[fig:handbag\], represents the fragmentation of the quark into the two detected hadrons and the rest of the current jet and can be parametrized as [@Bianconi:2000cd] $$\begin{aligned} \Delta &\equiv & \left.\frac{1}{4z}\int \d k^+ \; \Delta(k;P_1,P_2) \right|_{k^-=P_h^-/z} {\nonumber}\\[2mm] &= & \frac{1}{4}\left\{ D_1\,{\kern 0.2 em n\kern -0.45em /}_- - G_1^\perp\, \frac{{\epsilon}_{\mu\nu\rho{\sigma}}\,{\gamma}^\mu\,n_-^\nu\,k_{{\scriptscriptstyle T}}^\rho\,R_{{\scriptscriptstyle T}}^{\sigma}} {M_1 M_2}\,{\gamma}_5 + H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}\, \frac{{\sigma}_{\mu\nu}\, R_{{\scriptscriptstyle T}}^\mu\, n_-^\nu}{M_1 M_2} + H_1^{\perp}\, \frac{{\sigma}_{\mu\nu}\, k_{{\scriptscriptstyle T}}^\mu\, n_-^\nu}{M_1 M_2} \right\} \; , \label{eq:delta}\end{aligned}$$ where $n_\pm={\left[\;1\mp 1\;,\;1\pm 1\;,\;{\vec 0}_{{\scriptscriptstyle T}}\;\right]}/2$ are light-cone versors and $R\equiv (P_1-P_2)/2$ is the relative momentum of the hadron pair. For convenience, we will choose a frame where, besides $\vec P_{{\scriptscriptstyle T}}= 0$, we have also $\vec P_{h{{\scriptscriptstyle T}}} = 0$. By defining the light-cone momentum fraction $\xi = P_1^-/P_h^-$, we can parametrize the final-state momenta as $$\begin{aligned} k&=& {\left[\;\frac{P_h^-}{z}\;,\;z\frac{k^2+{\vec k}_{{\scriptscriptstyle T}}^2}{2P_h^-}\;,\;{\vec k}_{{\scriptscriptstyle T}}\;\right]} \;,{\nonumber}\\ P_1&=& {\left[\;\xi\,P_h^-\;,\;\frac{M_1^2+{\vec R}_{{\scriptscriptstyle T}}^2}{2\,\xi\,P_h^-}\;,\;{\vec R}_{{\scriptscriptstyle T}}\;\right]} \;,{\nonumber}\\ P_2&=& {\left[\;(1-\xi)\,P_h^-\;,\;\frac{M_2^2+{\vec R}_{{\scriptscriptstyle T}}^2}{2\,(1-\xi)\,P_h^-}\;,\;-{\vec R}_{{\scriptscriptstyle T}}\;\right]} \;. \label{eq:vectors}\end{aligned}$$ From the definition of the invariant mass of the hadron pair, i.e. 
$M_h^2 \equiv P_h^2 = 2 P_h^+ P_h^-$, and the on-shell condition for the two hadrons themselves, $P_1^2=M_1^2 , P_2^2=M_2^2$, we deduce the relation $${\vec R}_{{\scriptscriptstyle T}}^2=\xi\,(1-\xi)\,M_h^2-(1-\xi)\,M_1^2-\xi\,M_2^2 \label{eq:rt2}$$ which in turn puts a constraint on the invariant mass from the positivity requirement ${\vec R}_{{\scriptscriptstyle T}}^2 \geq 0$: $$M_h^2 \geq \frac{M_1^2}{\xi}+\frac{M_2^2}{1-\xi} \; . \label{eq:mh2}$$ After having given all the details of the kinematics, we can specify the actual dependence of the quark-quark correlator $\Delta$ and of the FF. From the frame choice $\vec P_{h{{\scriptscriptstyle T}}} = 0$, the on-shell condition for both hadrons, Eq. (\[eq:rt2\]), the constraint on $k^-$ and the integration over $k^+$ implied by the definition of $\Delta$ in Eq. (\[eq:delta\]), we deduce that the actual number of independent components of the three 4-vectors $k,P_1,P_2$ is five (cf. [@Bianconi:2000cd]). They can conveniently be chosen as the fraction of quark momentum carried by the hadron pair, $z$, the subfraction in which this momentum is further shared inside the pair, $\xi$, and the “geometry” of the pair in the momentum space. Namely, the “opening” of the pair momenta, ${\vec R}_{{\scriptscriptstyle T}}^2$, the relative position of the jet axis and the hadron pair axis, ${\vec k}_{{\scriptscriptstyle T}}^2$, and the relative position of hadron pair plane and the plane formed by the jet axis and the hadron pair axis, ${\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}$ (see Fig. \[fig:kin\]). Both DF and FF can be deduced from suitable projections of the corresponding quark-quark correlators. 
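Equations (\[eq:rt2\]) and (\[eq:mh2\]) are straightforward to evaluate numerically. As a small illustration (the function names are ours, and we use the charged-pion mass for both hadrons, anticipating the $\pi^+\pi^-$ case studied below):

```python
from math import sqrt

M_PI = 0.13957  # charged-pion mass in GeV

def rt2(xi, mh, m1=M_PI, m2=M_PI):
    """|R_T|^2 from Eq. (rt2), in GeV^2."""
    return xi * (1.0 - xi) * mh**2 - (1.0 - xi) * m1**2 - xi * m2**2

def mh_min(xi, m1=M_PI, m2=M_PI):
    """Lower bound on the pair invariant mass from Eq. (mh2), in GeV."""
    return sqrt(m1**2 / xi + m2**2 / (1.0 - xi))

# symmetric momentum sharing (xi = 1/2) at the rho mass
print(rt2(0.5, 0.770))  # about 0.129 GeV^2, so |R_T| ~ 0.36 GeV
print(mh_min(0.5))      # about 0.279 GeV, i.e. exactly 2 m_pi
```

For a symmetric pion pair the bound reduces to $M_h \geq 2 m_\pi$, as it must, and ${\vec R}_{{\scriptscriptstyle T}}^2$ vanishes exactly at threshold.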
In particular, by defining $$\Delta^{[\Gamma]} (z,\xi,{\vec k}_{{\scriptscriptstyle T}}^2,{\vec R}_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) \equiv \frac{1}{4z}\left.\int \d k^+\;\mbox{Tr}[\Gamma \, \Delta(k,P_1,P_2)] \right|_{k^-=P_h^-/z} \; , \label{eq:proj}$$ we can deduce \[eq:ff\] $$\begin{aligned} \Delta^{[{\gamma}^-]} &=& D_1(z_h,\xi,{\vec k}_{{\scriptscriptstyle T}}^{\,2},{\vec R}_{{\scriptscriptstyle T}}^{\,2}, {\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) \label{eq:d1} \\[2mm] \Delta^{[{\gamma}^- {\gamma}_5]}&=& \frac{{\epsilon}_{{\scriptscriptstyle T}}^{ij} \,R_{{{\scriptscriptstyle T}}i}\,k_{{{\scriptscriptstyle T}}j}}{M_1\,M_2}\; G_1^\perp (z_h,\xi,{\vec k}_{{\scriptscriptstyle T}}^{\,2},{\vec R}_{{\scriptscriptstyle T}}^{\,2}, {\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) \label{eq:g1} \\[2mm] \Delta^{[i{\sigma}^{i-} {\gamma}_5]} &=& {\epsilon_{{\scriptscriptstyle T}}^{ij}R_{{{\scriptscriptstyle T}}j}\over M_1+M_2}\, H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}(z_h,\xi,{\vec k}_{{\scriptscriptstyle T}}^{\,2},{\vec R}_{{\scriptscriptstyle T}}^{\,2}, {\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) + {\epsilon_{{\scriptscriptstyle T}}^{ij}k_{{{\scriptscriptstyle T}}j}\over M_1+M_2}\, H_1^\perp(z_h,\xi,{\vec k}_{{\scriptscriptstyle T}}^{\,2},{\vec R}_{{\scriptscriptstyle T}}^{\,2}, {\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) \; . \label{eq:h1} \end{aligned}$$ The leading-twist projections give a nice probabilistic interpretation of FF related to the Dirac operator $\Gamma$ used. 
Hence, $D_1$ is the probability for a unpolarized quark to fragment into the unpolarized hadron pair, $G_1^\perp$ is the probability difference for a longitudinally polarized quark with opposite chiralities to fragment into the pair, both $H_1^\perp$ and $H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$ give the same probability difference but for a transversely polarized fragmenting quark. A different interpretation for $H_1^\perp$ and $H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$ comes only from the possible origin for a non-vanishing probability difference, which is induced by the direction of $k_{{\scriptscriptstyle T}}$ and $R_{{\scriptscriptstyle T}}$, respectively. $G_1^\perp , H_1^\perp, H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$ are all [*naive*]{} T-odd and $H_1^\perp, H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$ are further chiral-odd. $H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$ represents a genuine new effect with respect to the Collins one, because it relates the transverse polarization of the fragmenting quark to the orbital angular motion of the transverse component of the pair relative momentum ${\vec R}_{{\scriptscriptstyle T}}$ via the new angle $\phi$ defined by $$\sin \phi = \frac{{\vec S}'_{{\scriptscriptstyle T}}\cdot {\vec P}_2 \times {\vec P}_1} {|{\vec S}'_{{\scriptscriptstyle T}}| |{\vec P}_2 \times {\vec P}_1|} = \frac{{\vec S}'_{{\scriptscriptstyle T}}\cdot {\vec P}_h \times {\vec R}} {|{\vec S}'_{{\scriptscriptstyle T}}| |{\vec P}_h \times {\vec R}|} \equiv \frac{{\vec S}'_{{\scriptscriptstyle T}}\cdot {\vec P}_h \times {\vec R}_{{\scriptscriptstyle T}}} {|{\vec S}'_{{\scriptscriptstyle T}}| |{\vec P}_h \times {\vec R}_{{\scriptscriptstyle T}}|} = \cos \left( \phi_{S'_{{\scriptscriptstyle T}}} - \frac{\pi}{2} - \phi_{R_{{\scriptscriptstyle T}}} \right) = \sin (\phi_{S_{{\scriptscriptstyle T}}} + \phi_{R_{{\scriptscriptstyle T}}}) \; , \label{eq:angle}$$ where we have used the condition ${\vec P}_{h{{\scriptscriptstyle 
T}}} = 0$ and $\phi_{S_{{\scriptscriptstyle T}}}$ ($\phi_{S^\prime_{{\scriptscriptstyle T}}}$), $\phi_{R_{{\scriptscriptstyle T}}}$ are the azimuthal angles of the initial (final) quark transverse polarization and of ${\vec R}_{{\scriptscriptstyle T}}$ with respect to the scattering plane, respectively (see also Fig. \[fig:kin\]). Isolating transversity from the SSA {#subsec:h1fromssa} ----------------------------------- Usually, the analysis of experimental observables is better accomplished in the frame where the target momentum $P$ and the momentum transfer $q$ are collinear and with no transverse components. Using a different notation, we have ${\vec P}_\perp = {\vec q}_\perp = 0$ and ${\vec P}_{h\perp} \neq 0$. An appropriate transverse Lorentz boost transforms this frame to the previous one where ${\vec P}_{{\scriptscriptstyle T}}= {\vec P}_{h{{\scriptscriptstyle T}}} = 0$ and ${\vec q}_{{\scriptscriptstyle T}}= -{\vec P}_{h\perp}/z$ [@Bianconi:2000cd]. However, the difference between the components of vectors in each frame is suppressed like ${\cal O}(1/Q)$. Since we are here considering expressions for the observables at leading twist only, this difference can be safely neglected. By using Eq. 
(\[eq:rt2\]), the complete cross section at leading twist for the two-hadron inclusive DIS of a unpolarized beam on a transversely polarized target, where two unpolarized hadrons are detected in the same quark current jet, is given by $$\begin{aligned} \lefteqn{ \frac{\d\sigma} {\d\Omega\,\d x\,\d z\,\d\xi\,\d^2{\vec P}_{h\perp}\, \d M_h^2\,\d\phi_{R_\perp}} \, =\frac{\xi (1-\xi)}{2} \, \frac{\d\sigma} {\d\Omega\,\d x\,\d z\,\d\xi\,\d^2{\vec P}_{h\perp}\, \d^2{\vec R}_\perp}} {\nonumber}\\[2mm] \lefteqn{ \phantom{ \frac{\d\sigma} {\d\Omega\,\d x\,\d z\,\d\xi\,\d^2{\vec P}_{h\perp}\, \d M_h^2\,\d\phi_{R_\perp}} \,} = \frac{\d\sigma_{OO}} {\d\Omega\,\d x\,\d z\,\d\xi\,\d^2{\vec P}_{h\perp}\, \d M_h^2\,\d\phi_{R_\perp}} \, + \, |{\vec S}_\perp| \, \frac{\d\sigma_{OT}} {\d\Omega\,\d x\,\d z\,\d\xi\,\d^2{\vec P}_{h\perp}\, \d M_h^2\,\d\phi_{R_\perp}}} {\nonumber}\\[2mm] & = & \frac{\alpha_{em} sx}{(2\pi )^3 2 Q^4} \, \Bigg\{ {}A(y)\;{\cal F}\left[f_1 \, D_1\right] {\nonumber}\\[2mm] && {}+|{\vec R}_\perp|\;B(y)\;\sin(\phi_h+\phi_{R_\perp})\; {\cal F}\left[\,{\hat g}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \frac{h_1^{\perp} \, H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}}{M(M_1+M_2)}\right] {}-|{\vec R}_\perp|\;B(y)\;\cos(\phi_h+\phi_{R_\perp})\; {\cal F}\left[\,{\hat h}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \frac{h_1^{\perp} \, H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}}{M(M_1+M_2)}\right] {\nonumber}\\[2mm] && {}-B(y)\;\cos(2\phi_h)\; {\cal F}\left[\left(2\,{\hat h}\!\cdot\!\vec p_{{\scriptscriptstyle T}}^{}\, \,{\hat h}\!\cdot \! \vec k_{{\scriptscriptstyle T}}^{}\, -\,\vec p_{{\scriptscriptstyle T}}^{}\!\cdot \! \vec k_{{\scriptscriptstyle T}}^{}\,\right) \frac{h_1^{\perp} \, H_1^{\perp}}{M(M_1+M_2)}\right] {\nonumber}\\[2mm] && {}-B(y)\;\sin(2\phi_h)\; {\cal F}\left[\left( \,{\hat h}\!\cdot\!\vec p_{{\scriptscriptstyle T}}^{}\, \,{\hat g}\!\cdot \! 
\vec k_{{\scriptscriptstyle T}}^{}\, +\,{\hat h}\!\cdot\!\vec k_{{\scriptscriptstyle T}}^{}\, \,{\hat g}\!\cdot \! \vec p_{{\scriptscriptstyle T}}^{}\,\right) \frac{h_1^{\perp} \, H_1^{\perp}}{M(M_1+M_2)}\right] \Bigg\} {\nonumber}\\[2mm] & + &\frac{\alpha_{em} sx}{(2\pi )^3 2 Q^4} \, |{\vec S}_\perp| \, \Bigg\{ A(y)\;\sin(\phi_h-\phi_{S_\perp})\; {\cal F}\left[\,{\hat h}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \frac{f_{1T}^{\perp} \, D_1}{M}\right] \, + \, A(y)\;\cos(\phi_h-\phi_{S_\perp})\; {\cal F}\left[\,{\hat g}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \frac{f_{1T}^{\perp} \, D_1}{M}\right]{\nonumber}\\ && \quad {}+ B(y)\;\sin(\phi_h+\phi_{S_\perp}) {\cal F}\left[\,{\hat h}\!\cdot \!\vec k_{{\scriptscriptstyle T}}^{}\, \frac{h_1 \, H_1^{\perp}}{M_1+M_2}\right] \, + \, B(y)\;\cos(\phi_h+\phi_{S_\perp}) {\cal F}\left[\,{\hat g}\!\cdot \!\vec k_{{\scriptscriptstyle T}}^{}\, \frac{h_1 \, H_1^{\perp}}{M_1+M_2}\right]{\nonumber}\\ && \quad {}+ |{\vec R}_\perp|\;B(y)\;\sin(\phi_{R_\perp}+\phi_{S_\perp})\; {\cal F}\left[\frac{h_1 \, H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}}{M_1+M_2}\right]{\nonumber}\\ && \quad {}- |{\vec R}_\perp|\;A(y)\; \cos(\phi_h-\phi_{S_\perp})\;\sin(\phi_h-\phi_{R_\perp})\; {\cal F}\left[\,{\hat h}\!\cdot \!\vec k_{{\scriptscriptstyle T}}^{}\, \,{\hat h}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \frac{g_{1T} \, G_1^{\perp}}{MM_1M_2}\right]{\nonumber}\\ && \quad {}+ |{\vec R}_\perp|\;A(y)\; \sin(\phi_h-\phi_{S_\perp})\;\sin(\phi_h-\phi_{R_\perp})\; {\cal F}\left[\,{\hat h}\!\cdot \!\vec k_{{\scriptscriptstyle T}}^{}\, \,{\hat g}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \frac{g_{1T} \, G_1^{\perp}}{MM_1M_2}\right]{\nonumber}\\ && \quad {}- |{\vec R}_\perp|\;A(y)\; \cos(\phi_h-\phi_{S_\perp})\;\cos (\phi_h-\phi_{R_\perp})\; {\cal F}\left[\,{\hat g}\!\cdot \!\vec k_{{\scriptscriptstyle T}}^{}\, \,{\hat h}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \frac{g_{1T} \, G_1^{\perp}}{MM_1M_2}\right]{\nonumber}\\ && \quad {}+ 
|{\vec R}_\perp|\;A(y)\; \sin(\phi_h-\phi_{S_\perp})\;\cos(\phi_h-\phi_{R_\perp})\; {\cal F}\left[\,{\hat g}\!\cdot \!\vec k_{{\scriptscriptstyle T}}^{}\, \,{\hat g}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \frac{g_{1T} \, G_1^{\perp}}{MM_1M_2}\right]{\nonumber}\\ && \quad {}+ B(y)\;\cos(3\phi_h-\phi_{S_\perp})\; {\cal F}\left[\,{\hat h}\!\cdot \!\vec k_{{\scriptscriptstyle T}}^{}\, \,{\hat h}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \,{\hat g}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \frac{h_{1T}^{\perp} \, H_1^{\perp}}{M^2(M_1+M_2)}\right]{\nonumber}\\ && \quad {}+ B(y)\;\sin(2\phi_h)\,\cos(\phi_h-\phi_{S_\perp})\; {\cal F}\left[\,{\hat h}\!\cdot \!\vec k_{{\scriptscriptstyle T}}^{}\, \left(\,{\hat h}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\,\right)^2 \frac{h_{1T}^{\perp} \, H_1^{\perp}}{M^2(M_1+M_2)}\right]{\nonumber}\\ && \quad {}- B(y)\;\cos(2\phi_h)\,\sin(\phi_h-\phi_{S_\perp})\; {\cal F}\left[\,{\hat h}\!\cdot \!\vec k_{{\scriptscriptstyle T}}^{}\, \left(\,{\hat g}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\,\right)^2 \frac{h_{1T}^{\perp} \, H_1^{\perp}}{M^2(M_1+M_2)}\right]{\nonumber}\\ && \quad {}- B(y)\;\sin(3\phi_h-\phi_{S_\perp})\; {\cal F}\left[\,{\hat g}\!\cdot \!\vec k_{{\scriptscriptstyle T}}^{}\, \,{\hat h}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \,{\hat g}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \frac{h_{1T}^{\perp} \, H_1^{\perp}}{M^2(M_1+M_2)}\right]{\nonumber}\\ && \quad {}+ B(y)\;\cos(2\phi_h)\,\cos(\phi_h-\phi_{S_\perp})\; {\cal F}\left[\,{\hat g}\!\cdot \!\vec k_{{\scriptscriptstyle T}}^{}\, \left(\,{\hat h}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\,\right)^2 \frac{h_{1T}^{\perp} \, H_1^{\perp}}{M^2(M_1+M_2)}\right]{\nonumber}\\ && \quad {}+ B(y)\;\sin(2\phi_h)\,\sin(\phi_h-\phi_{S_\perp})\; {\cal F}\left[\,{\hat g}\!\cdot \!\vec k_{{\scriptscriptstyle T}}^{}\, \left(\,{\hat g}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\,\right)^2 \frac{h_{1T}^{\perp} \, H_1^{\perp}}{M^2(M_1+M_2)}\right]{\nonumber}\\ && 
\quad {}+ |{\vec R}_\perp|\;B(y)\; \sin(2\phi_h+\phi_{R_\perp}-\phi_{S_\perp}) {\cal F}\left[\left(({\hat h}\!\cdot\!\vec p^{}_{{\scriptscriptstyle T}})^2 -({\hat g}\!\cdot\!\vec p^{}_{{\scriptscriptstyle T}})^2 +2\,{\hat h}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\, \,{\hat g}\!\cdot \!\vec p_{{\scriptscriptstyle T}}^{}\,\right) \frac{h_{1T}^{\perp} \, H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}}{2M^2(M_1+M_2)}\right] \Bigg\} \, , \label{eq:cross}\end{aligned}$$ where $\alpha_{em}$ is the fine structure constant, $s=Q^2/xy=-q^2/xy$ is the squared center-of-mass energy, and $$A(y) = \left( 1-y+\frac{1}{2} y^2 \right) \quad , \quad B(y) = (1-y) \quad , \quad C(y) = y (2-y) \label{eq:abc}$$ with the lepton invariant $y=(P \cdot q)/ (P \cdot l) \approx q^-/l^-$. The convolution of distribution and fragmentation functions is defined as $${\cal F}\left[w({\vec p}_{{\scriptscriptstyle T}}^{},{\vec k}_{{\scriptscriptstyle T}}^{})\; f\, D\right] \equiv \; \sum_a e_a^2\; \int\d^{2}{\vec p}_{{\scriptscriptstyle T}}^{}\; \d^{2}{\vec k}_{{\scriptscriptstyle T}}^{}\; \delta^2 ({\vec k}_{{\scriptscriptstyle T}}^{}-{\vec p}_{{\scriptscriptstyle T}}^{}+\frac{{\vec P}_{h\perp}}{z}) \; w({\vec p}_{{\scriptscriptstyle T}}^{},{\vec k}_{{\scriptscriptstyle T}}^{})\; f^a(x,{\vec p}_{{\scriptscriptstyle T}}^{\;2})\,D^a(z_h,\xi,{\vec k}_{{\scriptscriptstyle T}}^{\,2},{\vec R}_{{\scriptscriptstyle T}}^{\,2}, {\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) \;,$$ where $w({\vec p}_{{\scriptscriptstyle T}}^{},{\vec k}_{{\scriptscriptstyle T}}^{})$ is a weight function and the sum runs over all quark (and anti-quark) flavors, with $e_a$ the electric charges of the quarks.
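The kinematic factors of Eq. (\[eq:abc\]) are elementary to evaluate; as an illustration, the short Python sketch below (our own, with illustrative function names) computes them together with the invariant $s = Q^2/(xy)$ and the depolarization ratio $B(y)/A(y)$ that rescales the transverse modulations relative to the unpolarized term:

```python
# Illustrative sketch (ours, not from the paper): the leading-twist
# kinematic factors of Eq. (eq:abc) and the invariant s = Q^2/(x y).

def A(y):
    return 1 - y + 0.5 * y ** 2

def B(y):
    return 1 - y

def C(y):
    return y * (2 - y)

def s_invariant(Q2, x, y):
    # s = Q^2/(x y); with Q^2 = -q^2 this equals -q^2/(x y)
    return Q2 / (x * y)

# At y = 0.5: A = 0.625, B = 0.5, C = 0.75, so B/A = 0.8 sets the
# relative size of the B(y) modulations with respect to the A(y) terms.
print(A(0.5), B(0.5), C(0.5), B(0.5) / A(0.5))
```

Since $B(y)/A(y)$ grows toward small $y$, the transverse modulations are relatively favored in that region.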
The versors appearing in the weight function $w$ are defined as ${\hat h} = {\vec P}_{h\perp} / |{\vec P}_{h\perp}|$ and ${\hat g}^i = \epsilon_{{\scriptscriptstyle T}}^{ij} \, {\hat h}^j$ (with $\epsilon_{{\scriptscriptstyle T}}^{ij} \equiv \epsilon^{-+ij}$), respectively, and they represent the two independent directions in the $\perp$ plane perpendicular to ${\hat z} \parallel {\vec q}/|{\vec q}|$. All azimuthal angles $\phi_{S_\perp}, \phi_{R_\perp}$ and $\phi_h$ (relative to ${\vec P}_{h\perp}$) lie in the $\perp$ plane and are measured with respect to the scattering plane (see Fig. \[fig:reaction\]). Eq. (\[eq:cross\]) corresponds to the sum of Eqs. (B1) and (B4) in Ref. [@Bianconi:2000cd], where, however, the expressions are simpler because they rely on the assumption of a cylindrically symmetric distribution of hadron pairs around the jet axis, in order to have fragmentation functions depending on even powers of ${\vec k}_{{\scriptscriptstyle T}}$ only (this assumption would make all terms containing the ${\hat g}$ versor disappear from Eq. (\[eq:cross\]); see also Ref. [@Barone:2001sp] for a comparison). In an actual experiment the scattering plane changes from event to event (different scales $Q$ imply different directions of the scattered lepton). Therefore, it is more convenient to define the laboratory frame by the plane formed by the beam and the target polarization direction. All azimuthal angles are conveniently reexpressed with respect to the laboratory frame as $$\begin{aligned} \phi_{R_\perp} &= &\phi_{R_\perp}^L-\phi^L {\nonumber}\\ \phi_{S_\perp} &= &-\phi^L {\nonumber}\\ \phi_h &= &\phi_h^L - \phi^L \, , \label{eq:azangles}\end{aligned}$$ where the superscript $^L$ indicates the new reference frame. The oriented angle between the scattering plane and the laboratory frame is $\phi^L$ (see Fig. \[fig:reaction\]). At leading order, the azimuthal angle of Eq. (\[eq:angle\]) becomes $\phi = \phi_{R_\perp}^L- 2\phi^L$ in the new frame.
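The bookkeeping of Eq. (\[eq:azangles\]) can be checked directly. In the sketch below (our own; we read the leading-order statement above as Eq. (\[eq:angle\]) combining the angles into $\phi = \phi_{R_\perp} + \phi_{S_\perp}$, which is an assumption on our part), the angular combinations appearing in the cross section either pick up the $-2\phi^L$ shift or become independent of the scattering plane:

```python
# Our own check of the angle bookkeeping in Eq. (eq:azangles); we read the
# leading-order statement above as Eq. (eq:angle) giving
# phi = phi_R + phi_S (an assumption on our part).
for phiL in (0.0, 0.4, 1.1, 2.9):            # scattering-plane angle phi^L
    for phiRL in (0.2, 1.7, 3.5):            # phi_R^L
        for phihL in (0.5, 2.2):             # phi_h^L
            phi_R = phiRL - phiL             # Eq. (eq:azangles)
            phi_S = -phiL
            phi_h = phihL - phiL
            # combinations that acquire the -2 phi^L shift:
            assert abs((phi_R + phi_S) - (phiRL - 2 * phiL)) < 1e-12
            assert abs((phi_h + phi_R) - (phihL + phiRL - 2 * phiL)) < 1e-12
            # combination that is independent of the scattering plane:
            assert abs((phi_h - phi_S) - phihL) < 1e-12
print("lab-frame angle substitutions verified")
```

In particular, $\phi_{R_\perp} + \phi_{S_\perp} = \phi_{R_\perp}^L - 2\phi^L$, matching the leading-order expression quoted above.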
The new expression for the cross section is obtained by simply replacing Eq. (\[eq:azangles\]) inside the angular dependence of Eq. (\[eq:cross\]). After replacement and apart from phase space coefficients, each term of the cross section will look like $$d\sigma^{tw} \, \propto \, t (\phi_{R_\perp}^L,\phi^L,\phi_h^L) \; {\cal F} \left[ w \; \mbox{DF} \; \mbox{FF} \right] \, = \, t(\phi_{R_\perp}^L,\phi^L,\phi_h^L) \; I(z,\xi,{\vec R}_{{\scriptscriptstyle T}}^{\, 2}) \, , \label{eq:term}$$ where $t$ is a trigonometric function, $w$ is the specific weight function for each combination of distribution and fragmentation functions (DF and FF, respectively), and $I$ is the result of the convolution integral. It is easy to verify that folding the cross section by $$\frac{1}{2\pi} \int_0^{2\pi} \d\phi^L \d\phi_{R_\perp}^L \; \sin(\phi_{R_\perp}^L -2\phi^L) \; \frac{\d\sigma} {\d\Omega\,\d x\,\d z\,\d\xi\,\d^2{\vec P}_{h\perp}\, \d M_h^2\,\d\phi_{R_\perp}} \label{eq:fold}$$ makes only those $d\sigma^{tw}$ terms survive where $H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$ shows up in the convolution, i.e. 
for the following combinations \[eq:tws\] $$\begin{aligned} t = \cos (\phi_h^L + \phi_{R_\perp}^L -2 \phi^L) &\quad , &\quad w = {\hat h} \cdot {\vec p}_{{\scriptscriptstyle T}}\quad ; \label{eq:OOh} \\ t = \sin (\phi_h^L + \phi_{R_\perp}^L -2 \phi^L) &\quad , &\quad w = {\hat g} \cdot {\vec p}_{{\scriptscriptstyle T}}\quad ; \label{eq:OOg} \\ t = \sin (\phi_{R_\perp}^L - 2\phi^L) &\quad , &\quad w = 1 \quad ; \label{eq:OT1} \\ t = \sin (2\phi_h^L + \phi_{R_\perp}^L - 2\phi^L) &\quad , &\quad w = \left(({\hat h}\cdot {\vec p}_{{\scriptscriptstyle T}})^2 - ({\hat g}\cdot {\vec p}_{{\scriptscriptstyle T}})^2 + 2 \, {\hat h}\cdot {\vec p}_{{\scriptscriptstyle T}}\, {\hat g}\cdot {\vec p}_{{\scriptscriptstyle T}}\right) \label{eq:OThg} \, .\end{aligned}$$ Similarly, it is straightforward to prove that integrating these surviving terms over $\d^2{\vec P}_{h\perp}$, and performing the integrals in the convolution ${\cal F}[w\;\mbox{DF}\; \mbox{FF}]$, makes only the combination (\[eq:OT1\]) survive, presenting the transversity in a factorized form.
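The projection property of the folding (\[eq:fold\]) can also be verified numerically. The sketch below (our own illustration; a plain Riemann sum is exact here, since the integrands are finite sums of harmonics on a uniform periodic grid) shows that, at fixed $\phi_h^L$, the four modulations of Eqs. (\[eq:tws\]) give a nonvanishing fold, while sample modulations that do not contain $\phi_{R_\perp}^L$ average to zero:

```python
import math

# Our own numerical illustration of the folding integral (eq:fold):
# fold(t) = (1/2pi) * int d(phi^L) d(phi_R^L) sin(phi_R^L - 2 phi^L) * t,
# evaluated at fixed phi_h^L with a plain Riemann sum.
def fold(t, n=200, phi_h=0.7):
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        phi = i * h                      # phi^L
        for j in range(n):
            phi_r = j * h                # phi_R^L
            total += math.sin(phi_r - 2 * phi) * t(phi, phi_r, phi_h)
    return total * h * h / (2 * math.pi)

# The four modulations of Eqs. (eq:tws) survive the folding ...
surviving = [
    lambda p, r, h: math.cos(h + r - 2 * p),      # Eq. (eq:OOh)
    lambda p, r, h: math.sin(h + r - 2 * p),      # Eq. (eq:OOg)
    lambda p, r, h: math.sin(r - 2 * p),          # Eq. (eq:OT1)
    lambda p, r, h: math.sin(2 * h + r - 2 * p),  # Eq. (eq:OThg)
]
# ... while sample modulations without phi_R^L dependence do not:
dead = [
    lambda p, r, h: math.cos(2 * (h - p)),  # cos(2 phi_h) in lab angles
    lambda p, r, h: math.sin(h),            # sin(phi_h - phi_S) = sin(phi_h^L)
]
assert all(abs(fold(t)) > 0.1 for t in surviving)
assert all(abs(fold(t)) < 1e-9 for t in dead)
print("only modulations carrying phi_R^L - 2 phi^L survive the folding")
```

The surviving terms still carry a $\phi_h^L$ dependence; as stated in the text, those are removed only by the subsequent $\d^2{\vec P}_{h\perp}$ integration, which leaves the combination (\[eq:OT1\]).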
In fact, by also integrating over $\xi$ we finally have $$\begin{aligned} \frac{<\d\sigma_{OT} >}{\d y\,\d x\,\d z\,\d M_h^2} &\equiv & \frac{1}{2\pi} \, \int_0^{2\pi} \d\phi^L\,\d\phi_{R_\perp}^L \int \d^2{\vec P}_{h\perp} \; \int \d\xi \; \sin(\phi_{R_\perp}^L -2\phi^L) \; \frac{\d\sigma}{\d\Omega\,\d x\,\d z\,\d\xi\,\d M_h^2\, \d\phi_{R_\perp}^L\,\d^2{\vec P}_{h\perp}} {\nonumber}\\ &= & \frac{\pi \alpha_{em}^2 s x}{(2 \pi)^3 Q^4} \; \frac{B(y) \, |{\vec S}_\perp |}{2(M_1+M_2)} \; \sum_a e_a^2\; \int \d^2{\vec p}_{{\scriptscriptstyle T}}\; h_1^a(x,{\vec p}_{{\scriptscriptstyle T}}^{\; 2}) {\nonumber}\\ & &\times \int \d\xi \; |{\vec R}_\perp | \int_0^{2 \pi} \d\phi_{R_\perp}^L \int \d^2{\vec k}_{{\scriptscriptstyle T}}\; H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\, a} (z,\xi,M_h^2,{\vec k}_{{\scriptscriptstyle T}}^{\; 2}, {\vec k}_{{\scriptscriptstyle T}}\cdot{\vec R}_{{\scriptscriptstyle T}}) {\nonumber}\\ &= & \frac{\pi \alpha_{em}^2 s}{(2 \pi)^3 Q^4} \; \frac{B(y) \, |{\vec S}_\perp |}{2(M_1+M_2)} \; \sum_a e_a^2\; x \; h_1^a(x) \; H_{1\, (R)}^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\, a} (z,M_h^2) \, , \label{eq:crossfact}\end{aligned}$$ where, for the sake of simplicity, the same notation is kept for DF and FF before and after integration, distinguished only by the explicit arguments; the subscript $_{(R)}$ recalls the additional weighting factor $|\vec R_\perp|$.
Analogously, $$\begin{aligned} \frac{<\d\sigma_{OO} >}{\d y\,\d x\,\d z\,\d M_h^2} &\equiv & \frac{1}{2\pi} \, \int_0^{2\pi} \d\phi^L\, \d\phi_{R_\perp}^L \int \d^2{\vec P}_{h\perp} \; \int \d\xi \; \frac{\d\sigma}{\d\Omega\,\d x\,\d z\,\d\xi\,\d M_h^2\, \d\phi_{R_\perp}^L\,\d^2{\vec P}_{h\perp}}{\nonumber}\\ &= & \frac{\pi \alpha_{em}^2 s x}{(2 \pi)^3 Q^4} \, A(y) \, \sum_a e_a^2\; \int \d^2{\vec p}_{{\scriptscriptstyle T}}\; f_1^a (x,{\vec p}_{{\scriptscriptstyle T}}^{\; 2}) {\nonumber}\\ & &\times \int \d\xi \int_0^{2 \pi} \d\phi_{R_\perp}^L \int \d^2{\vec k}_{{\scriptscriptstyle T}}\; D_1^a (z,\xi,M_h^2,{\vec k}_{{\scriptscriptstyle T}}^{\; 2}, {\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) {\nonumber}\\ &= & \frac{\pi \alpha_{em}^2 s}{(2 \pi)^3 Q^4} \, A(y) \, \sum_a e_a^2\; x \; f_1^a(x) \; D_1^a (z,M_h^2) \, , \label{eq:crossfact2}\end{aligned}$$ from which we can build the single spin asymmetry $$\begin{aligned} A^{\sin \phi} (y,x,z,M_h^2) &\equiv & \frac{<\d\sigma_{OT} >}{\d y\,\d x\,\d z\,\d M_h^2} \, \left[ \frac{<\d\sigma_{OO} >}{\d y\,\d x\,\d z\,\d M_h^2} \right]^{-1} {\nonumber}\\ &= & \frac{B(y)}{A(y)} \; \frac{|{\vec S}_\perp |}{2(M_1+M_2)} \; \frac{\sum_a e_a^2 \; x \, h_1^a (x) \, H_{1 \, (R)}^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\, a} (z,M_h^2)} {\sum_a e_a^2 \; x \, f_1^a (x) \, D_1^a (z,M_h^2)} \, . \label{eq:ssa}\end{aligned}$$ Spectator model for $\pi^+ \pi^-$ fragmentation {#sec:spectator} =============================================== In the field theoretical description of hard processes, the FF represent the soft processes that connect the hard quark to the detected hadrons via fragmentation, i.e. they are hadronic matrix elements of nonlocal operators built from quark (and gluon) fields [@Soper:1977jc]. 
For a quark fragmenting into two hadrons inside the same current jet, the appropriate quark-quark correlator (in the light-cone gauge) reads [@Collins:1994kq; @Collins:1994ax] $$\Delta_{ij}(k,P_1,P_2)= {\kern 0.2 em {\textstyle\sum} \kern -1.1 em \int_X}\; \int \frac{\d^{4\!}\zeta}{(2\pi)^4} \; e^{ik\cdot\zeta}\; \langle 0|\psi_i(\zeta)|P_1,P_2,X\rangle \langle X,P_2,P_1|{\overline}{\psi}_j(0)|0\rangle \, , \label{eq:defDelta}$$ where the sum runs over all the possible intermediate states containing the hadron pair. \ The basic idea of the spectator model is to make a specific ansatz for this spectral decomposition by replacing the sum with an effective spectator state with a definite mass and quantum numbers [@Meyer:1991fr; @Jakob:1997wg; @Bianconi:2000uc]. By specializing the model to the case of $\pi^+ \pi^-$ fragmentation with $P_1=P_{\pi^+}$ and $P_2=P_{\pi^-}$, the spectator has the quantum numbers of an on-shell valence quark with a constituent mass $m_q=340$ MeV. Consequently, the quark-quark correlator (\[eq:defDelta\]) simplifies to $$\begin{aligned} \Delta_{ij}(k,P_{\pi^+},P_{\pi^-}) &\approx& \frac{\theta\!\left((k-P_h)^+\right)}{(2\pi)^3} \; \delta\left((k-P_h)^2-m_q^2\right)\; \langle 0|\psi_i(0)|P_{\pi^+},P_{\pi^-},q\rangle \langle q,P_{\pi^-},P_{\pi^+}| {\overline}{\psi}_j(0)|0\rangle {\nonumber}\\ &\equiv& \widetilde\Delta_{ij}(k,P_{\pi^+},P_{\pi^-})\; \delta(\tau_h-{\sigma}_h+M_h^2-m_q^2) \;, \label{eq:specDelta}\end{aligned}$$ where $\tau_h = k^2$ and $\sigma_h = 2k\cdot P_h$. When inserting Eq. (\[eq:specDelta\]) into Eq. 
(\[eq:proj\]), the projections drastically simplify to $$\Delta^{[\Gamma]}(z_h,\xi,{\vec k}_{{\scriptscriptstyle T}}^{\,2},\vec R_{{\scriptscriptstyle T}}^{\, 2}, {\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}})= \left.\frac{\mbox{Tr}[\Gamma \, \widetilde\Delta]}{8(1-z)P_h^-} \right|_{\tau_h=\tau_h(z,{\vec k}_{{\scriptscriptstyle T}}^{\,2})} \; , \label{eq:projspect}$$ with $$\tau_h(z,{\vec k}_{{\scriptscriptstyle T}}^{\,2})=\frac{z}{1-z}{\vec k}_{{\scriptscriptstyle T}}^{\,2} +\frac{m_q^2}{1-z}+\frac{M_h^2}{z} \;. \label{eq:tauspect}$$ We will consider the $\pi^+ \pi^-$ system with an invariant mass $M_h$ close to the $\rho$ resonance, specifically $m_\rho - \Gamma_\rho/2 \leq M_h \leq m_\rho + \Gamma_\rho/2$, where $\Gamma_\rho$ is the width of the $\rho$ resonance. Hence, the most appropriate and simplest diagrams that can replace the quark decay of Fig. \[fig:handbag\] at leading twist, and leading order in $\alpha_s$, are represented in Fig. \[fig:diagspec\]: the $\pi^+ \pi^-$ can be produced from the $\rho$ decay or directly via a quark exchange in the $t$-channel (the background diagram); the quantum interference of the two processes generates the [*naive*]{} T-odd FF described in Sec. \[subsec:iff\]. A suitable selection of “Feynman” rules for the vertices and propagators of the diagrams in Fig. \[fig:diagspec\] allows for the analytic calculation of the matrix elements defining $\widetilde \Delta$ in Eq. (\[eq:specDelta\]) and, consequently, of the projections $\Delta^{[\Gamma]}$ defining the FF. Propagators {#sec:propagators} ----------- The propagators involved in the diagrams of Fig. \[fig:diagspec\] are: - quark with momentum $\kappa$ $$\begin{aligned} \left(\frac{i}{{\kern 0.2 em \kappa\kern -0.45em /}-m_q}\right)_{ij} \end{aligned}$$ The propagator occurs with $\kappa^2 = \tau_h \equiv k^2$ or $\kappa^2 = (k-P_{\pi^+})^2$. In both cases, the off-shell condition $k^2 \neq m_q^2$ is guaranteed by Eq. (\[eq:tauspect\]). 
- $\rho$ with momentum $P_h$ $$\begin{aligned} & &\frac{i}{P_h^2-m_\rho^2+im_\rho\Gamma_\rho} \left(-g^{\mu\nu}+\frac{P_h^\mu\,P_h^\nu}{P_h^2}\right) \end{aligned}$$ where $\Gamma_{\rho} = \displaystyle{\frac{f^2_{\rho \pi \pi}}{4\pi} \frac{m_{\rho}}{12} \left( 1 - \frac{4 m_{\pi}^2}{m_{\rho}^2} \right)^{\frac{3}{2}}}$ [@Ioffe:1984ep]. Vertices {#sec:vertices} -------- In analogy with previous works on spectator models [@Jakob:1997wg; @Bianconi:2000uc], we choose the vertex form factors to depend on one invariant only, generally denoted $\kappa^2$, that represents the virtuality of the external entering quark line. Therefore, we can have $\kappa^2 = \tau_h \equiv k^2$ or $\kappa^2 = (k-P_{\pi^+})^2$. The power laws are such that the asymptotic behaviour is in agreement with the expectations based on dimensional counting rules. Finally, the normalization coefficients have dimensions such that $\int \d^2{\vec k}_{{\scriptscriptstyle T}}\int \d^2{\vec R}_{{\scriptscriptstyle T}}\, D_1(z,\xi,{\vec k}_{{\scriptscriptstyle T}}^2, {\vec R}_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}})$ is a pure number to be interpreted as the probability for the hadron pair to carry a $z$ fraction of the valence quark momentum and to share it in $\xi$ and $1-\xi$ parts. - $\rho \pi \pi$ vertex $$\begin{aligned} & &\Upsilon^{\rho\pi\pi, \mu} = f_{\rho\pi\pi} R^\mu \end{aligned}$$ where $\displaystyle{\frac{f_{\rho\pi\pi}^2}{4\pi}}=2.84\pm 0.50$ [@Ioffe:1984ep]. - $q \rho q$ vertex $$\begin{aligned} \Upsilon^{q\rho q,\mu}_{ij}&=&\frac{f_{q\rho q} (\kappa^2)}{\sqrt{2}} \; [{\gamma}^\mu]_{ij} {\nonumber}\\ &= &\frac{N_{q\rho}}{\sqrt{2}} \; \frac{1}{|\kappa^2-\Lambda_{\rho}^2|^{\alpha}} \; [{\gamma}^\mu]_{ij} \end{aligned}$$ where $\Lambda_\rho$ excludes large virtualities of the quark. 
The power $\alpha$ is fixed by the quark counting rule that governs the asymptotic behaviour of the FF at large $z$ [@Ioffe:1984ep], i.e. $$(1-z)^{2\alpha -1} = (1-z)^{-3+2r+2|\lambda|} \, , \label{eq:alpha}$$ where $r$ is the number of constituent quarks in the considered hadron, and $\lambda$ is the difference between the quark and the hadron helicities. Thus, here we have $\alpha = 3/2$. The normalization $N_{q\rho}$ is such that the sum rule $$\int_0^1 \d z \; z\,D_1(z) \leq 1 \label{eq:sumrule}$$ is satisfied. In fact, in the infinite momentum frame the integral in Eq. (\[eq:sumrule\]) represents the total fraction $z$ of the quark energy taken by all hadron pairs of the type under consideration. Since in this frame low-energy mass effects can be neglected, we estimate that charged pion pairs with an invariant mass inside the $\rho$ resonance width represent $\sim 50\%$ of the total pions detected in the calorimeter, which in turn can be considered $\sim 80\%$ of all particles detected. Neglecting mass effects, we may assume that the fraction of quark energy taken by charged pions, relative to the energy taken by other hadrons, follows their relative numbers. Therefore, we choose two values, $N_{q\rho} = 0.9$ GeV$^3$ and $1.6$ GeV$^3$, which correspond to rather extreme scenarios where the integral of Eq. (\[eq:sumrule\]) amounts to 0.14 and 0.48, respectively. - $q \pi q$ vertex $$\begin{aligned} \Upsilon^{q\pi q}_{ij}&=&\frac{f_{q\pi q} (\kappa^2)}{\sqrt{2}} \; [{\gamma}_5]_{ij} \nonumber \\ &= &\frac{N_{q\pi}}{\sqrt{2}} \, \frac{1}{|\kappa^2-\Lambda_{\pi}^2|^\alpha} \;[{\gamma}_5]_{ij} \end{aligned}$$ where $\Lambda_\pi$ excludes large virtualities of the quark, as well. From quark counting rules, again $\alpha = 3/2$.
The normalization $N_{q\pi}$ can be deduced from $N_{q\rho}$ by generalizing the Goldberger-Treiman relation to the $\rho$-quark coupling [@Glozman:1998fs]: $$\begin{aligned} \frac{g_{\pi qq}^2}{4\pi} &= &\left( \frac{g_q^A}{g_N^A} \right)^2 \left( \frac{m_q}{m_N} \right)^2 \frac{g_{\pi NN}^2}{4\pi} = \left( \frac{3}{5} \right)^2 \left( \frac{340}{939} \right)^2 14.2 = 0.67 {\nonumber}\\ \frac{(g_{\rho qq}^V+g_{\rho qq}^T)^2}{4\pi} &= & \left( \frac{g_q^A}{g_N^A} \right)^2 \left( \frac{m_q}{m_N} \right)^2 \frac{(g_{\rho NN}^V+g_{\rho NN}^T)^2}{4\pi} = \left( \frac{3}{5} \right)^2 \left( \frac{340}{939} \right)^2 27.755 = 1.31 \, , \label{eq:g-t} \end{aligned}$$ where $g^A_N$ and $m_N$ are the nucleon axial coupling constant and mass, respectively, and $g^A_q$, $m_q$ the corresponding quark quantities. The $\pi NN$ coupling is $g_{\pi NN}^2 /4\pi = 14.2$; the vector $\rho NN$ coupling is $(g_{\rho NN}^V)^2 /4\pi = 0.55$ and its ratio to the tensor coupling is $g_{\rho NN}^T / g_{\rho NN}^V=6.105$ [@Ericson:1988gk]. From the above relations, we deduce $$\frac{g_{\pi qq}}{(g_{\rho qq}^V+g_{\rho qq}^T)} \equiv \frac{N_{q\pi}}{N_{q\rho}} = 0.715 \, . \label{eq:qpi-coup}$$ As a final comment, we have explicitly checked that with the above rules the background diagram leads to a cross section that qualitatively shows the same $s$ dependence as the experimental data for $\pi \pi$ production in the relative $L=0$ channel when $s$ is inside the $\rho$ resonance width, in any case below the first dip corresponding to the resonance $f_0 (980)$ [@Pennington:1999fa]. If we reasonably assume that the resonant diagram exhausts almost all of the $\pi \pi$ production in the relative $L=1$ channel and we also assume that in the given energy interval the $L=0,1$ channels approximate the whole strength for $\pi \pi$ production, we can safely state that the diagrams of Fig.
\[fig:diagspec\] give a satisfactory reproduction of the $\pi \pi$ cross section, with invariant mass in the given interval, without invoking any scalar $\sigma$ resonance (cf. [@Collins:1994ax; @Jaffe:1998hf]). Interference FF {#sec:speciff} --------------- With the above rules applied to the diagrams of Fig. \[fig:diagspec\], we can calculate all the matrix elements of Eq. (\[eq:specDelta\]) and, consequently, all the projections (\[eq:projspect\]) leading to the FF. The [*naive*]{} T-odd $G_1^\perp, H_1^\perp, H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$ receive contributions from the interference diagrams only. In particular, they result proportional to the imaginary part of the $\rho$ propagator ($\sim m_\rho \Gamma_\rho$), while the real part ($\sim M_h^2 - m_\rho^2$) contributes to $D_1$. Therefore, contrary to the findings of Ref. [@Jaffe:1998hf], a complex amplitude with a resonant behaviour is needed here to produce nonvanishing interference FF. For a $u$ quark fragmenting into $\pi^+ \pi^-$, we have at leading twist $$\begin{aligned} \lefteqn{ D_1^{u\rightarrow \pi^+ \pi^-} (z,\xi,M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) \;=\; \frac{N_{q\rho}^2 \, f_{\rho\pi\pi}^2 \, z^2 \, (1-z)^2} {4(2\pi)^3 \, [(M_h^2-m_\rho^2)^2+m_\rho^2\,\Gamma_\rho^2] \, a^2 \, \vert a+b\vert^3} {\nonumber}}\\ & & \qquad\qquad\times \Bigg\{ \frac{c}{4} [ \, c-za(2\xi -1) ] + z^2 (1-z) \left( \frac{M_h^2}{4}-m_\pi^2 \right) [ \, a-(1-z)M_h^2 \, ] \Bigg\} {\nonumber}\\[3mm] & & {}+ \frac{N_{q\pi}^4 \, z^7 \, (1-z)^7} {8 (2\pi )^3 \, a^2 \, d^2 \, \vert d+{\tilde b} \vert^3 \, \vert a+{\tilde b}\vert^3} {\nonumber}\\[2mm] & & \qquad\times \Bigg\{ -az \, [ z\xi (1-z) + (1-\xi) d ] -z(1-z) m_\pi^2 \, \frac{a-c-z(1-z)M_h^2}{2} \, + \, d \, \frac{a-c+z(1-z)M_h^2}{2} \Bigg\}{\nonumber}\\[3mm] & & {} + \frac{\sqrt{2} \, (M_h^2-m_\rho^2) \, z^{\frac{9}{2}} \, (1-z)^{\frac{9}{2}} \, N_{q\pi}^2 \, N_{q\rho} \, f_{\rho \pi 
\pi}} {8 (2\pi)^3 [(M_h^2-m_\rho^2)^2+m_\rho^2\,\Gamma_\rho^2] \, a^2 \, d \, \vert a+b \vert^{\frac{3}{2}} \, \vert a+{\tilde b} \vert^{\frac{3}{2}} \, \vert d+{\tilde b} \vert^{\frac{3}{2}}} {\nonumber}\\[2mm] & & \qquad\times \Bigg\{ az(1-z) \left( 2m_\pi^2 - \frac{M_h^2}{2}\right) + \frac{a(1-2z\xi) +c +z(1-z)M_h^2}{4} \, [ \, d+z(1-z) (M_h^2-5m_\pi^2) ] {\nonumber}\\[2mm] & & \qquad\qquad {}+ \, \frac{a \, [2z(1-\xi)-1]+c-z(1-z)M_h^2}{4} \, [\, 3d -a +z(1-z)m_\pi^2 \, ] \Bigg\} \label{eq:d1spec}\\[5mm] \lefteqn{ H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\, u\rightarrow \pi^+ \pi^-} (z,\xi,M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) =} {\nonumber}\\[2mm] & & {}-\, \frac{m_\rho \, \Gamma_\rho \, m_\pi \, m_q \, z^{\frac{13}{2}} \, (1-z)^{\frac{11}{2}} \, N_{q\pi}^2 \, N_{q\rho} \, f_{\rho \pi \pi}} {2\sqrt{2} (2\pi)^3 [(M_h^2-m_\rho^2)^2+m_\rho^2\Gamma_\rho^2] \, a \, d \, \vert a+b\vert^{\frac{3}{2}} \, \vert a+{\tilde b}\vert^{\frac{3}{2}} \, \vert d + z(1-z)(m_q^2-\Lambda_\pi^2)\vert^{\frac{3}{2}}} \label{eq:h1spec} \\[5mm] \lefteqn{ H_1^{\perp \, u\rightarrow \pi^+ \pi^-} (z,\xi,M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) = 0 } \label{eq:h10} \\[5mm] & & G_1^{\perp \, u\rightarrow \pi^+ \pi^-} (z,\xi,M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) = - \frac{m_\pi}{2m_q} \, H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\, u\rightarrow \pi^+ \pi^-} (z,\xi,M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) \, , \label{eq:g1perp}\end{aligned}$$ where $$\begin{aligned} a & = & z^2(k_{{\scriptscriptstyle T}}^2+m_q^2)+(1-z) M_h^2 \quad , \quad b= z(1-z)(m_q^2-\Lambda_\rho^2) \quad , \quad {\tilde b} = z(1-z)(m_q^2 - \Lambda_\pi^2) {\nonumber}\\ c & = &(2\xi -1) [z^2(k_{{\scriptscriptstyle T}}^2+m_q^2) -(1-z)^2M_h^2] 
-4z(1-z) {\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}{\nonumber}\\ d & = & z^2(1-\xi) (k_{{\scriptscriptstyle T}}^2+m_q^2) +\xi (1-z)^2 M_h^2 +z(1-z)(m_\pi^2 + 2{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) \, . \label{eq:coeffs}\end{aligned}$$ The simplifications induced by the spectator model reduce the number of independent FF, Eq. (\[eq:g1perp\]), and make $H_1^\perp$ vanish, i.e. the analogue of the Collins effect in this context turns out to be a higher-order effect. The structure induced by the model is simply not rich enough to produce a non-vanishing $H_1^\perp$. Moreover, the FF do not depend on the flavor of the fragmenting valence quark, provided that the charges of the final detected pions are selected according to the diagrams of Fig. \[fig:diagspec\]. Hence, the FF are the same for $u\rightarrow \pi^+ \pi^-$ and for $d\rightarrow \pi^- \pi^+$, where the final state differs only by the interchange of the two pions, i.e. 
by leaving everything unaltered but ${\vec R}_{{\scriptscriptstyle T}}\rightarrow -{\vec R}_{{\scriptscriptstyle T}}$ and $\xi \rightarrow (1-\xi)$: $$\begin{aligned} D_1^{\,u\rightarrow \pi^+\pi^-} (z,\xi,M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) & = & D_1^{\,d\rightarrow \pi^- \pi^+} (z,\xi,M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) {\nonumber}\\ & = & D_1^{\,d\rightarrow \pi^+ \pi^-} (z,(1-\xi),M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot(-{\vec R}_{{\scriptscriptstyle T}})) {\nonumber}\\[2mm] H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\,u\rightarrow \pi^+ \pi^-} (z,\xi,M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) & = & H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\,d\rightarrow \pi^- \pi^+} (z,\xi,M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) {\nonumber}\\ & = & H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\, d\rightarrow \pi^+ \pi^-} (z,(1-\xi),M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot (-{\vec R}_{{\scriptscriptstyle T}})) \, . 
\label{eq:flavorsym}\end{aligned}$$ When integrating the FF over $\d^2{\vec k}_{{\scriptscriptstyle T}}$ and $\d \xi$, the dependence on the direction of ${\vec R}_{{\scriptscriptstyle T}}$ is lost $$\begin{aligned} D_1^{\,u\rightarrow \pi^+ \pi^-}(z,M_h^2) & \equiv & \int_0^1\d\xi \int \d^2{\vec k}_{{\scriptscriptstyle T}}\; D_1^{\,u\rightarrow \pi^+ \pi^-} (z,\xi,M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot {\vec R}_{{\scriptscriptstyle T}}) {\nonumber}\\ & = & \int_0^1\d\xi \int \d^2{\vec k}_{{\scriptscriptstyle T}}\; D_1^{\,d\rightarrow \pi^+ \pi^-} (z,(1-\xi),M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot (- {\vec R}_{{\scriptscriptstyle T}})) {\nonumber}\\ & = & \int_0^1\d\xi \int \d^2{\vec k}_{{\scriptscriptstyle T}}\; D_1^{\,d\rightarrow \pi^+ \pi^-} (z,\xi,M_h^2,k_{{\scriptscriptstyle T}}^2,{\vec k}_{{\scriptscriptstyle T}}\cdot (- {\vec R}_{{\scriptscriptstyle T}})) {\nonumber}\\ & \equiv & D_1^{\,d\rightarrow \pi^+ \pi^-}(z,M_h^2) \, ,\end{aligned}$$ and similarly for $H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$. Therefore, we can conclude that the integrated FF do not depend in general on the flavor of the fragmenting quark. Consequently, the SSA of Eq. (\[eq:ssa\]) simplifies to $$A^{\sin \phi} (y,x,z,M_h^2) = \frac{B(y)}{A(y)} \; \frac{|{\vec S}_\perp |}{4m_\pi} \; \frac{\left[ \displaystyle{ \frac{8}{9} x \, h_1^u (x) + \frac{1}{9} x \, h_1^d (x) }\right] \; H_{1 \, (R)}^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\, u} (z,M_h^2)} {\left[ \displaystyle{ \frac{8}{9} x \, f_1^u (x) + \frac{1}{9} x \, f_1^d (x) }\right] \; D_1^u (z,M_h^2)} \, . 
\label{eq:ssaspec}$$ In the following, we will discuss the SSA without the inessential $|{\vec S}_\perp| B(y)/A(y)$ factor and after integrating away the $z$ dependence and, in turn, the $x$ or $M_h$ dependence according to $$\begin{aligned} A^{\sin \phi}_x (x) &\equiv & \frac{1}{4m_\pi} \; \frac{\displaystyle{ \left[ \frac{8}{9} x \, h_1^u (x) + \frac{1}{9} x \, h_1^d (x) \right] \; \int \d z \,\d M_h^2 }\; H_{1 \, (R)}^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\, u} (z,M_h^2)} {\displaystyle{\left[ \frac{8}{9} x \, f_1^u (x) + \frac{1}{9} x \, f_1^d (x) \right] \; \int \d z \,\d M_h^2 }\; D_1^u (z,M_h^2)} \label{eq:ssa-x} \\[2mm] A^{\sin \phi}_{M_h} (M_h) &\equiv & \frac{1}{4m_\pi} \; \frac{\displaystyle{\int \d x \, \left[ \frac{8}{9} x \, h_1^u (x) + \frac{1}{9} x \, h_1^d (x) \right]\; \int \d z }\; H_{1 \, (R)}^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\, u} (z,M_h^2)} {\displaystyle{\int \d x \, \left[ \frac{8}{9} x \, f_1^u (x) + \frac{1}{9} x \, f_1^d (x) \right]\; \int \d z }\; D_1^u (z,M_h^2)} \label{eq:ssa-mh} \, .\end{aligned}$$ Numerical Results {#sec:results} ================= In the remainder of the paper, we present numerical results in the context of the spectator model for both the process-independent FF and the SSA of Eqs. (\[eq:ssa-x\]) and (\[eq:ssa-mh\]) for semi-inclusive lepton-nucleon DIS. Considering different possible scenarios for $h_1$, we discuss the implications for an experimental search for transversity. The input parameters of the calculation can basically be grouped in three classes: - values of masses and coupling constants taken from phenomenology, as $m_\pi = 0.139$ GeV, $m_\rho = 0.785$ GeV, with $f_{\rho \pi \pi}$ and $\Gamma_\rho$ as described in Sec. \[sec:vertices\] and Sec. 
\[sec:propagators\], respectively; - values consistent with other works on the spectator model and the constituent quark model, as $\Lambda_\pi = 0.4$ GeV, $\Lambda_\rho = 0.5$ GeV and $m_q = 0.34$ GeV [@Jakob:1997wg; @Bianconi:2000uc]; - parameters, such as $N_{q\pi}$ and $N_{q\rho}$, without constraints that are firmly established, or at least usually adopted, in the literature. As mentioned in Sec. \[sec:vertices\], the latter are constrained using the integral (\[eq:sumrule\]) and the proportionality (\[eq:qpi-coup\]) derived from the Goldberger-Treiman relation. All results will be plotted according to two extreme scenarios, where the integral (\[eq:sumrule\]) amounts to 0.14 ($N_{q\rho} = 0.9$ GeV$^3$, corresponding to solid lines in the figures) and 0.48 ($N_{q\rho} = 1.6$ GeV$^3$, corresponding to dashed lines in the figures). Because of the high degree of arbitrariness due to the lack of any data, the results should be interpreted as an indication not only of the sensitivity of the considered observables to the input parameters, but also of the degree of uncertainty inherent in the spectator model. In the same spirit, when dealing with the SSA of Eqs. (\[eq:ssa-x\],\[eq:ssa-mh\]), $f_1$ and $h_1$ are calculated consistently within the spectator model [@Jakob:1997wg] or, alternatively, $f_1$ and $g_1$ are taken from consistent parametrizations and $h_1$ is calculated again according to two extreme scenarios: the nonrelativistic prediction $h_1 = g_1$ or the saturation of the Soffer inequality, $h_1 = (f_1+g_1)/2$. The parametrizations for $f_1$ and $g_1$ are extracted at the same lowest possible scale ($Q^2 = 0.8$ GeV$^2$), consistently with the valence quark approximation assumed for the calculation of the FF. In Fig. \[fig:ffplot\] the integrated $D_1^u (z)$ and $H_{1\, (R)}^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\, u} (z)$ are shown.
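Some of the numerical inputs collected above can be cross-checked directly. The sketch below (our own script, not part of the original analysis) recomputes $\Gamma_\rho$ from the expression quoted in Sec. \[sec:propagators\] and the ratio $N_{q\pi}/N_{q\rho}$ of Eq. (\[eq:qpi-coup\]) from the generalized Goldberger-Treiman relation (\[eq:g-t\]):

```python
import math

# Our own cross-check (not from the original analysis) of two inputs:
# Gamma_rho from the f_rho-pi-pi coupling (Sec. on propagators) and the
# ratio N_qpi/N_qrho from the generalized Goldberger-Treiman relation.
f2_over_4pi = 2.84           # f_{rho pi pi}^2 / 4pi
m_rho, m_pi = 0.785, 0.139   # GeV, values quoted in the text

gamma_rho = f2_over_4pi * (m_rho / 12) * (1 - 4 * m_pi**2 / m_rho**2) ** 1.5
assert abs(gamma_rho - 0.15) < 0.01  # ~150 MeV

scale = (3 / 5) ** 2 * (340 / 939) ** 2   # (g_q^A/g_N^A)^2 (m_q/m_N)^2
g_piqq2_over_4pi = scale * 14.2           # pi-quark coupling, Eq. (eq:g-t)
g_rhoqq2_over_4pi = scale * 27.755        # rho-quark coupling, Eq. (eq:g-t)
ratio = math.sqrt(g_piqq2_over_4pi / g_rhoqq2_over_4pi)

# -> 0.152 0.67 1.31 0.715
print(round(gamma_rho, 3), round(g_piqq2_over_4pi, 2),
      round(g_rhoqq2_over_4pi, 2), round(ratio, 3))
```

With these inputs one finds $\Gamma_\rho \simeq 0.15$ GeV and $N_{q\pi}/N_{q\rho} \simeq 0.715$, consistent with the values quoted in the text.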
Again, we recall that the solid line corresponds to a weaker $q\rho q$ coupling than the dashed line. The choice of the form factors at the vertices also guarantees a regular behaviour at the end points $z=0,1$. The strongest asymmetry in the fragmentation (recall that $H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$ is defined as the difference of probabilities for the fragmentation to proceed from quarks with opposite transverse polarizations) is reached at the reasonable value $z\sim 0.4$. Once again, we stress that this result, particularly its persistent negative sign, does not depend on a specific hard process and can influence the corresponding azimuthal asymmetry. In fact, the SSA (\[eq:ssa-x\]) and (\[eq:ssa-mh\]) for two-pion inclusive lepton-nucleon DIS, as shown in Figs. \[fig:ssa-xplot\] and \[fig:ssa-mhplot\], respectively, turn out to be negative due to the sign of $H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}\,u}$. The solid and dashed lines again refer to the weaker and stronger $q\rho q$ couplings in the FF, respectively. For each choice of the $q\rho q$ coupling, three different choices of DF are shown. The label SP refers to the DF calculated in the spectator model [@Jakob:1997wg]. The label NR indicates that $f_1$ and $g_1$ are taken consistently from the leading-order parametrizations of Ref. [@Gluck:1998xa] and Ref. [@Gluck:1996yr], respectively, with $h_1 = g_1$. The label SO indicates the same parametrizations but with the Soffer inequality saturated, i.e. $h_1 = (f_1 + g_1)/2$. In the lower plot of each figure the “uncertainty band” is shown as a guiding line. It is built by taking, for each $z$ or $M_h$, the maximum and the minimum among the six curves displayed in the corresponding upper plot. The first obvious comment is that even the simple mechanism described in Fig. \[fig:diagspec\] produces a measurable asymmetry.
For the HERMES experiment, the size of the asymmetry may be at the lower edge of possible measurements, given the observed rather small average multiplicity, which does not favor the detection of two pions in the final state. On the other hand, the planned transversely polarized target will clearly improve the situation of azimuthal spin-asymmetry measurements compared to the present one. COMPASS or possible future experiments at the ELFE, TESLA-N, or EIC facilities will have fewer problems thanks to higher counting rates. The second important result is that the sensitivity of the SSA to the parameters of the model calculation for the FF and to the different parametrizations for the DF is weak enough that the unambiguous message of a negative asymmetry emerges throughout the whole range of $x$ and of $m_\rho -\Gamma_\rho /2 = 0.69$ GeV $\leq M_h \leq m_\rho +\Gamma_\rho /2 = 0.84$ GeV. In particular, we do not find any change in sign for $A^{\sin \phi}_{M_h}$, contrary to what is predicted in Ref. [@Jaffe:1998hf]. Outlook {#sec:end} ======== In this paper we have discussed a way of addressing the transversity distribution $h_1$ that we consider the most advantageous among the strategies discussed in the literature. At present, the SSA seem in any case preferable to the DSA. Moreover, the fragmentation of a transversely polarized quark into two unpolarized leading hadrons in the same current jet looks less complicated than the Collins effect, both experimentally and theoretically. Collinear factorization implies an exact cancellation of the soft divergences, avoiding any dilution of the asymmetry because of Sudakov form factors, and in principle makes the QCD evolution simpler, though we have not addressed this subject in the present paper.
The new effect, that allows for the extraction of $h_1$ at leading twist through the new interference FF $H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$, relates the transverse polarization of the quark to the transverse component of the relative momentum of the hadron pair via a new azimuthal angle. This is the only key quantity to be determined experimentally, while the Collins effect requires the determination of the complete transverse momentum vector of the detected hadron. We have also shown quantitative results for $H_1^{{{<\kern -0.3 em{\scriptscriptstyle )}}}}$ in the case of $\pi^+ \pi^-$ detection, and the related SSA for the example of lepton-nucleon scattering, because modelling the interference between different channels leading to the same final state is simpler than describing the Collins effect, where a microscopic knowledge of the structure of the residual jet is required. We have adopted a spectator model approximation for $\pi^+ \pi^-$ with an invariant mass inside the $\rho$ resonance width, limiting the process to leading-twist mechanisms. The interference between the decay of the $\rho$ and the direct production of $\pi^+ \pi^-$ is enough to produce sizeable and measurable asymmetries. Despite the theoretical uncertainty due to the arbitrariness in fixing the input parameters of the calculation of FF and in choosing the parametrizations for the DF, the unambiguous result emerges that in the explored ranges in $x$ and invariant mass $M_h$ the SSA are always negative and almost flat. It should nevertheless be stressed again that, even if there are good arguments for considering the mechanisms depicted in Fig. \[fig:diagspec\] a good representation of $\pi^+ \pi^-$ production in the considered energy range, the calculation has still been performed at leading twist and in a valence-quark scenario. Therefore, higher-twist corrections and QCD evolution need to be explored before any realistic comparison with experiments could be attempted.
We acknowledge very fruitful discussions with Alessandro Bacchetta and Daniel Boer, in particular about the symmetry properties of the interference FF. This work has been supported by the TMR network HPRN-CT-2000-00130. [99]{} J. P. Ralston and D. E. Soper, Nucl. Phys. B [**152**]{}, 109 (1979). R. L. Jaffe, [hep-ph/9710465]{}. O. Martin, A. Schafer, M. Stratmann and W. Vogelsang, Phys. Rev. D [**57**]{}, 3084 (1998). R. L. Jaffe and X. Ji, Nucl. Phys. B [**375**]{}, 527 (1992). R. L. Jaffe and X. Ji, Phys. Rev. Lett. [**71**]{}, 2547 (1993); D. Boer, [hep-ph/0007047]{}. J. Collins, Nucl. Phys. B [**396**]{}, 161 (1993). A. Airapetian [*et al.*]{} \[HERMES Collaboration\], Phys. Rev. Lett. [**84**]{} (2000) 4047; A. Airapetian [*et al.*]{} \[HERMES Collaboration\], Phys. Rev. D [**64**]{} (2001) 097101; A. Bravar \[Spin Muon Collaboration\], Nucl. Phys. Proc. Suppl. [**79**]{} (1999) 520. D. Boer, Nucl. Phys. B [**603**]{}, 195 (2001); [*and private communications.*]{} A. Bianconi, S. Boffi, R. Jakob and M. Radici, Phys. Rev. D [**62**]{}, 034008 (2000). A. Bacchetta, R. Kundu, A. Metz and P. J. Mulders, Phys. Lett. B [**506**]{} (2001) 155. J. C. Collins and G. A. Ladinsky, [hep-ph/9411444]{}. J. C. Collins, S. F. Heppelmann and G. A. Ladinsky, Nucl. Phys. B [**420**]{}, 565 (1994). R. L. Jaffe, Xuemin Jin and Jian Tang, Phys. Rev. Lett. [**80**]{}, 1166 (1998). A. Bianconi, S. Boffi, R. Jakob and M. Radici, Phys. Rev. D [**62**]{}, 034009 (2000). V. Barone, A. Drago and P. G. Ratcliffe, [hep-ph/0104283]{}. D. E. Soper, Phys. Rev. D [**15**]{}, 1141 (1977); D. E. Soper, Phys. Rev. Lett. [**43**]{}, 1847 (1979); J. C. Collins and D. E. Soper, Nucl. Phys. B [**194**]{}, 445 (1982); R. L. Jaffe, Nucl. Phys. B [**229**]{}, 205 (1983). H. Meyer and P. J. Mulders, Nucl. Phys. A [**528**]{}, 589 (1991); W. Melnitchouk, A. W. Schreiber and A. W. Thomas, Phys. Rev. D [**49**]{}, 1183 (1994); J. Rodrigues, A. Henneman and P. J. Mulders, [nucl-th/9510036]{}. R. Jakob, P. J.
Mulders and J. Rodrigues, Nucl. Phys. A [**626**]{}, 937 (1997). B. L. Ioffe, V. A. Khoze and L. N. Lipatov, [*Hard Processes*]{}, Vol. 1 (Elsevier, Amsterdam, 1984). L. Y. Glozman, Z. Papp, W. Plessas, K. Varga and R. F. Wagenbrunn, Phys. Rev. C [**57**]{} (1998) 3406; R. F. Wagenbrunn, L. Y. Glozman, W. Plessas and K. Varga, Nucl. Phys. A [**663**]{}&[**664**]{} (2000) 703. T. E. Ericson and W. Weise, [*Pions in Nuclei*]{} (Clarendon Press, Oxford, 1988). M. R. Pennington, [hep-ph/9905241]{}. M. Glück, E. Reya and A. Vogt, Eur. Phys. J. C [**5**]{}, 461 (1998). M. Glück, E. Reya, M. Stratmann and W. Vogelsang, Phys. Rev. D [**53**]{}, 4775 (1996).
--- abstract: 'The transition between the class-B and class-A dynamical behaviors of a semiconductor laser is directly observed by continuously controlling the lifetime of the photons in a cavity of sub-millimetric to centimetric length. It is experimentally and theoretically proved that the transition from a resonant to an overdamped behavior occurs progressively, without any discontinuity. In particular, the intermediate regime is found to exhibit features typical of both the class-A and class-B regimes. The laser intensity noise is shown to be a powerful probe of the laser dynamical behavior.' author: - 'Ghaya Baili,$^1$ Mehdi Alouini,$^{1,2}$ Thierry Malherbe,$^1$ Daniel Dolfi,$^1$ Isabelle Sagnes,$^{3}$ and Fabien Bretenaker$^{4}$' title: 'Direct Observation of the Class-B to Class-A Transition in the Dynamical Behavior of a Semiconductor Laser' --- Since their discovery, lasers have been considered to be among the most exciting dynamical systems owing to the wide variety of behaviors they offer. Laser dynamics is so rich that it became a tool of choice to analyze other dynamical systems even in new areas of physics. For instance, a fruitful analogy can be found between Bose-Einstein condensation and the laser phase transition [@Scully1999]. Although the laser is a system far from thermal equilibrium, tremendous theoretical and experimental studies have been carried out in order to find a thermodynamic reinterpretation of most laser phenomena. For instance, when the electromagnetic field is taken as the order parameter and the population inversion plays the role of temperature, the laser threshold appears as a second-order phase transition [@Graham1970; @DeGiorgio1970]. A similar analogy is found in an active micro-cavity whose dimensions are smaller than the wavelength.
It was shown that the transition from spontaneous emission enhancement/inhibition, due to confinement, into collective stimulated emission can be reinterpreted as a second-order phase transition by analogy with ferromagnetism and superconductivity [@DeMartini1988]. First-order transitions can also be observed in lasers. For instance, homogeneously broadened lasers can sustain the oscillation of two bistable optical fields. In this case, the switching of the laser field behaves as a first-order transition [@Lett1981]. More generally, it has long been well established that the complexity of the dynamical behavior of a system increases with the number of degrees of freedom, leading even to chaotic dynamics [@Berge1987]. In the particular case of lasers, chaos can be obtained provided that more than two degrees of freedom are present in the system. Practical examples include lasers on which an external optical field, gain or loss feedback is applied in order to increase the number of degrees of freedom [@Arecchi1986; @Pieroux1994], or some molecular far-infrared lasers. In the case of single mode lasers, the number of degrees of freedom is determined by the time scales of the system, namely (i) the active medium polarization decay rate $\gamma_{\bot}$, (ii) the population decay rate $\gamma_{\parallel}$, and (iii) the cavity decay rate $\gamma_{\mathrm{cav}}$. Indeed, within the semi-classical approximation in which the atoms are treated quantum mechanically but the optical field is treated classically, the Maxwell-Bloch equations lead to five nonlinear differential equations whose resolution difficulty depends upon the time scales of $\tau_{\bot}=1/\gamma_{\bot}$, $\tau_{\parallel}=1/\gamma_{\parallel}$ and $\tau_{\mathrm{cav}}=1/\gamma_{\mathrm{cav}}$. Following the early classification of Arecchi et al. [@Arecchi1984], class-C lasers are those for which the three decay rates are of the same order of magnitude.
This class includes some molecular far-infrared lasers. Solving the Maxwell-Bloch equations gives rise to a large number of solutions including chaotic behaviors. In class-B lasers the active medium polarization decays so rapidly that it can be adiabatically eliminated from the Maxwell-Bloch equations. The number of degrees of freedom is reduced to two, leading to a pair of so-called rate equations. Class-B includes most of the lasers used today, such as solid-state lasers and semiconductor lasers. Finally, when the active medium polarization and population inversion both decay much faster than the optical field, the population inversion can be eliminated adiabatically as well. The system has one degree of freedom and the laser dynamics is then ruled by a single field equation. Most atomic gas lasers belong to this family. The purpose of the present paper is to explore experimentally how the transition from class-B to class-A occurs. To this aim, we intend to probe the laser dynamics while the system evolves from two degrees of freedom to one degree of freedom. Although this transition can be modeled without too much difficulty, it has not yet been observed experimentally. Thus the agreement between the theoretical predictions and experimental results is still an open question. ![Sketch of the experiment. The control of the cavity length $L$ makes it possible to control the photon decay rate $\gamma_{\mathrm{cav}}$. OI’s are optical isolators, DM is a dichroic mirror, BS is a beam-splitter, and HWP’s are half-wave plates.[]{data-label="Figure 01"}](Figure01){width="8.6"} Direct observation of the class-B to class-A transition is experimentally challenging because it requires technical solutions to a number of constraints. As a starting point, the population inversion and cavity decay rates must be of the same order of magnitude.
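The time-scale classification recalled above can be summarized in a short sketch; the order-of-magnitude threshold `ratio` below is an arbitrary illustrative choice, not a sharp physical boundary:

```python
def laser_class(tau_perp, tau_par, tau_cav, ratio=3.0):
    """Classify a single-mode laser following Arecchi et al.:
    a degree of freedom is adiabatically eliminated when its decay
    time is shorter than the cavity decay time by at least `ratio`
    (an arbitrary illustrative threshold)."""
    fast_perp = tau_cav >= ratio * tau_perp   # polarization eliminated
    fast_par = tau_cav >= ratio * tau_par     # population eliminated
    if fast_perp and fast_par:
        return "A"   # one degree of freedom: the field
    if fast_perp:
        return "B"   # two degrees of freedom: field + population
    return "C"       # all three time scales comparable

# semiconductor polarization decays on sub-ps scales (~0.1 ps assumed here);
# with tau_par ~ 3 ns, a 0.3 ns photon lifetime gives class-B dynamics...
assert laser_class(1e-13, 3e-9, 0.3e-9) == "B"
# ...while a photon lifetime of ~10 ns, longer than tau_par, gives class-A
assert laser_class(1e-13, 3e-9, 9.8e-9) == "A"
```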
This situation can be reached using a semiconductor active medium in conjunction with a few-millimeter-long high-finesse optical cavity. Second, one must be able to tune continuously one of the two decay rates while keeping the other parameters of the laser constant. The obvious approach is to adjust the photon cavity lifetime rather than the population inversion lifetime. Keeping in mind that the laser parameters must remain constant during the class-A to class-B transition, the only way to change the cavity lifetime without modifying the other laser parameters, such as threshold and pumping rate, is to adjust the cavity length. Finally, the laser must remain single frequency within, and in between, the two boundary situations, namely, class-A oscillation (long cavity) and class-B oscillation (short cavity). All these constraints can be fulfilled using a semiconductor active medium inserted into a dedicated tunable external cavity, as sketched in Fig.\[Figure 01\]. Following this approach, the active medium we have chosen is a half-Vertical Cavity Surface Emitting Laser (half-VCSEL). A design based on a surface-emitting semiconductor is preferred to an edge-emitting one because pure single mode operation is easier to obtain, in particular when the laser cavity becomes long. Furthermore, it is worthwhile to notice that common Vertical Cavity Surface Emitting Lasers (VCSELs) belong to the class-B family. Their cavity length being in the micrometer range, the photon lifetime is shorter than the population inversion lifetime. Thus, their dynamics behave as a second-order filter exhibiting damped relaxation oscillations [@Halbritter2004].
On the other hand, our recent experiments on intensity noise reduction in external cavity VCSELs have confirmed that when the optical cavity is long enough, so that the photon lifetime gets longer than the carrier lifetime, the laser dynamics behave as a low-pass first-order filter [@Baili2008], proving that the laser operates in the class-A regime. Given that the two lasers in Refs. [@Halbritter2004] and [@Baili2008; @Baili2007] have active media of the same nature whereas they exhibit two different dynamics, using a half-VCSEL in order to achieve a continuous transition from the class-A to the class-B regime is a good starting point. In our experiment (see Fig.\[Figure 01\]), we have thus used the half-VCSEL whose structure is described in Refs. [@Baili2007; @Baili2008]. When inserted inside an optical cavity and pumped at 808 nm, this structure oscillates at 1000 nm. To be able to reach sub-mm cavity lengths, the quantum wells in the active medium are pumped through the laser output coupler, which has a radius of curvature of 25 mm and a transmission equal to 1% at 1000 nm. In these conditions, the laser beam diameter for a 1-mm cavity length is of the order of 100 $\mu$m. The pump beam, provided by a 3 W fiber-coupled diode, is focused to the same diameter on the structure using a set of four lenses, as shown in Fig.\[Figure 01\], forcing the laser to oscillate in a single TEM$_{00}$ transverse mode. The laser beam at the output of the VECSEL is isolated using a dichroic mirror DM and then analyzed. We check that the VECSEL oscillates in a single longitudinal mode. ![Noise transfer function versus frequency for a cavity length $L=0.85\;\mathrm{mm}$ for three values of the relative pumping rate $r$. The noisy curves are measurements. The smooth ones are fits obtained using Eq.\[equation01\].
Inset: evolution of the square of the relaxation oscillation frequency versus $r-1$.[]{data-label="Figure 02"}](Figure02){width="8.6"} ![image](Figure03){width="17.8"} We first choose a cavity short enough for the laser to exhibit class-B dynamics. To this aim, we adjust the cavity length down to $L=0.85\,\mathrm{mm}$. In these conditions, we expect the cavity photon lifetime $\tau_{\mathrm{cav}}$ to lie between 0.3 and 0.6 ns. Indeed, the cavity round-trip losses must be between 1% (transmission of the output coupler) and 2% (maximum gain of our half-VCSEL). Since the carrier lifetime $\tau_{\parallel}$ is of the order of 3 ns, we expect our VECSEL to behave like a class-B laser with a relaxation oscillation frequency $f_{\mathrm{r}}$ of the order of 100 MHz. To monitor the dynamical behavior of the laser, rather than measuring its modulation transfer function by modulating its gain or losses, we deduce this transfer function by observing how the pump laser intensity noise is transferred to the laser intensity noise [@Yu1987]. It is indeed well known that the intensity noise of the laser is a good probe of the laser dynamics [@McCumber1966], provided the measured noise is well above the shot noise limit. We thus measure the VECSEL relative intensity noise (RIN) spectrum and divide it by the measured RIN spectrum of the pump laser (see Fig.\[Figure 01\]). We call $T_{\mathrm{RIN}}$ the RIN transfer function, i.e., the ratio of these RIN values. Typical measurements of $T_{\mathrm{RIN}}$ versus noise frequency are reproduced in Fig.\[Figure 02\] for three values of the relative excitation $r$ (pump power normalized to threshold) of the laser. These transfer functions exhibit the typical shape expected from a second-order resonant filter with a 40 dB/decade roll-off. Such a behavior is a signature of the class-B regime.
The experimental spectra of Fig.\[Figure 02\] are fitted using the following expression derived for class-B lasers [@Baili2008]: $$T_{\mathrm{RIN}}(f)=\frac{\gamma_{\parallel}^2\gamma_{\mathrm{cav}}^2 r^2}{\left[\gamma_{\parallel}\gamma_{\mathrm{cav}}(r-1)-(2\pi f)^2\right]^2+\left(2\pi f\gamma_{\parallel}r\right)^2}\ .\label{equation01}$$ Using Eq.\[equation01\], the fits of Fig.\[Figure 02\] lead to $\tau_{\parallel}=3.3\;\mathrm{ns}$ and $\tau_{\mathrm{cav}}=0.31\;\mathrm{ns}$ for $r=1.39$, $\tau_{\parallel}=1.9\;\mathrm{ns}$ and $\tau_{\mathrm{cav}}=0.31\;\mathrm{ns}$ for $r=1.74$, and $\tau_{\parallel}=2.4\;\mathrm{ns}$ and $\tau_{\mathrm{cav}}=0.40\;\mathrm{ns}$ for $r=2.4$. The corresponding values of the relaxation oscillation frequency $f_{\mathrm{r}}$ are 89, 133, and 179 MHz, respectively. The evolution of $f_{\mathrm{r}}$ versus $r$ is reproduced in the inset in Fig.\[Figure 02\]. It confirms that $f_{\mathrm{r}}^2$ evolves linearly with $(r-1)$, as expected for a class-B laser. Moreover, the value of the photon lifetime $\tau_{\mathrm{cav}}$ deduced from the fits is consistent with our initial guess and is ten times shorter than the carrier lifetime, proving also that the laser with $L=0.85\;\mathrm{mm}$ is clearly a class-B laser. The variations in the values of $\tau_{\parallel}$ deduced from the fits can be attributed to measurement uncertainties and also to some dependence of this effective lifetime on the pump power. In order to get closer to the class-B to class-A transition, we increase the photon lifetime from about 0.3 ns to about 0.8 ns by increasing the cavity length up to 1.26 mm. The corresponding transfer function is reproduced in Fig.\[Figure 03\](a) for two values of $r$. The fits using Eq.\[equation01\] lead to $\tau_{\parallel}=4.0\;\mathrm{ns}$ and $\tau_{\mathrm{cav}}=0.71\;\mathrm{ns}$ for $r=2.1$, and $\tau_{\parallel}=3.0\;\mathrm{ns}$ and $\tau_{\mathrm{cav}}=0.82\;\mathrm{ns}$ for $r=2.79$.
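The class-B features of Eq.\[equation01\] can be reproduced in a minimal numerical sketch, using the fit values quoted above for the shortest cavity (the factor-of-100 frequency choice is arbitrary, simply far above the resonance):

```python
import numpy as np

def T_RIN(f, tau_par, tau_cav, r):
    """RIN transfer function of Eq. [equation01] (class-B rate equations)."""
    gp, gc = 1.0 / tau_par, 1.0 / tau_cav
    w2 = (2 * np.pi * f) ** 2
    return (gp * gc * r) ** 2 / ((gp * gc * (r - 1) - w2) ** 2
                                 + w2 * (gp * r) ** 2)

# fit values quoted above for L = 0.85 mm and r = 1.39
tau_par, tau_cav, r = 3.3e-9, 0.31e-9, 1.39

# undamped relaxation-oscillation frequency, sqrt(gp*gc*(r-1))/(2*pi)
f_r = np.sqrt((r - 1) / (tau_par * tau_cav)) / (2 * np.pi)
assert 5e7 < f_r < 2e8          # of order 100 MHz, as expected in the text

# well above f_r the response falls by 40 dB/decade, the class-B signature
f = 100 * f_r
roll = 10 * np.log10(T_RIN(10 * f, tau_par, tau_cav, r)
                     / T_RIN(f, tau_par, tau_cav, r))
assert abs(roll + 40) < 0.5
```

(The resonance actually observed in the fits is slightly shifted from the undamped value by the damping term, which is why the quoted frequencies differ by a few percent from this estimate.)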
One can notice that the relaxation oscillations are barely visible on the two spectra of Fig.\[Figure 03\](a), showing that we are closer to the class-A regime than in Fig.\[Figure 02\]. ![Same as Fig.\[Figure 02\] for $L=44\;\mathrm{mm}$ and $r=1.8$. The fit has been obtained using Eq. \[equation02\]. Inset: theoretical (full line) and experimental (dots) evolution of the RIN transfer function at $f=50\ \mathrm{MHz}$ versus photon lifetime.[]{data-label="Figure 04"}](Figure04bis){width="8.6"} We go one step further by increasing the cavity length up to $L=2.0\;\mathrm{mm}$. The corresponding measured transfer function is reproduced in Fig.\[Figure 03\](b). We can see that the resonance has disappeared (compare with Figs.\[Figure 03\](a) and \[Figure 02\]). The transfer function now looks like that of a low-pass filter, a feature usually considered to be typical of a class-A laser. However, the roll-off is still equal to 40 dB/decade, which is typical of class-B lasers [@Verdeyen1995]. We are thus exactly in the intermediate case in which the laser behavior exhibits features from both the class-A and class-B regimes. This is consistent with the fact that the fit using Eq.\[equation01\] gives values of $\tau_{\parallel}$ and $\tau_{\mathrm{cav}}$ which are of the same order of magnitude ($\tau_{\parallel}=2.8\;\mathrm{ns}$ and $\tau_{\mathrm{cav}}=1.4\;\mathrm{ns}$). It is worth noting that the vanishing of the relaxation oscillations in the spectrum of Fig.\[Figure 03\](b) is really a signature of a modification of the laser dynamics at the border between the class-A and class-B regimes. It is different from the overdamping of relaxation oscillations that occurs in diode lasers due to spontaneous emission or to gain compression [@Petermann1991].
When the laser really becomes a class-A laser, then $\gamma_{\mathrm{cav}}\ll\gamma_{\parallel}$, and, for $f\ll\gamma_{\parallel}/2\pi$, Eq.\[equation01\] becomes: $$T_{\mathrm{RIN}}(f)=\frac{\gamma_{\mathrm{cav}}^2}{\left[\gamma_{\mathrm{cav}}(\frac{r-1}{r})\right]^2+(2\pi f)^2}\ .\label{equation02}$$ This is the transfer function of a first-order low-pass filter, with a cut-off frequency given by $(\frac{r-1}{r})\frac{\gamma_{\mathrm{cav}}}{2\pi}$ and a 20 dB/decade roll-off. To reach this regime, we increase the cavity length up to $L=44\;\mathrm{mm}$. The output coupler now has a 50 mm radius of curvature and again a 1% transmission at 1000 nm. In this cavity, the structure is no longer pumped through the output coupler but from the side of the cavity (for details, see [@Baili2007; @Baili2008]). A 150 $\mu$m étalon forces the laser to operate in the single-frequency regime. Now the laser transfer function exhibits a 20 dB/decade roll-off, and can be fitted using Eq.\[equation02\], as shown by the full line in Fig.\[Figure 04\]. It leads to $\tau_{\mathrm{cav}}=9.8\;\mathrm{ns}$ (corresponding to 1.5% losses per round-trip), which is indeed much longer than $\tau_{\parallel}$. This definitely proves that the laser has now reached a pure class-A behavior. The transition from the class-B to class-A regime is also clearly seen in the inset of Fig.\[Figure 04\], which displays the evolution of the noise transfer function at a fixed frequency (50 MHz) with the photon lifetime. In this inset, the theoretical plot has been obtained using Eq.\[equation01\] with $r=1.89$ and $\tau_{\parallel}=2.9\;\mathrm{ns}$ and the experimental dots correspond to $r$ close to 2. The abrupt decrease of $T_{\mathrm{RIN}}$ versus $\tau_{\mathrm{cav}}$ is a clear signature of the transition. In conclusion, the transition from class-B to class-A dynamics has been directly observed by continuously modifying the photon lifetime in a dedicated single mode laser cavity.
We have confirmed that this transition occurs progressively, as expected theoretically. Furthermore, we have been able to isolate an intermediate regime in which the laser exhibits simultaneously features typical of the two regimes. Indeed, the relaxation oscillations disappear as expected for class-A lasers, while the transfer function roll-off is still characteristic of class-B lasers. These observations have been made technically possible by obtaining single mode oscillation in a very-low-loss sub-mm optical resonator including an active medium whose population inversion decay time is of the same order of magnitude as the resonator decay time and whose polarization can be eliminated adiabatically. The laser intensity noise in such a system is shown to be a powerful probe of the laser dynamics. The control of the exact conditions in which a laser switches from class-A to class-B dynamics is important for the control of the intensity noise in MEMS-VCSELs. Indeed, one wants the cavity to be as short as possible to extend the laser tuning range while keeping the laser quiet in order to maximize the signal-to-noise ratio when the laser is used to probe absorption [@Lackner2006]. Controlling the nature of the laser dynamics is also important in fundamental studies aiming at understanding the role played by the enhancement of the spontaneous emission in the laser relaxation oscillations and noise [@Bjork1994]. M. O. Scully, Phys. Rev. Lett. **82**, 3927 (1999). R. Graham and H. Haken, Z. Phys. **237**, 31 (1970). V. DeGiorgio and M. O. Scully, Phys. Rev. A **2**, 1170 (1970). F. De Martini and G. R. Jacobovitz, Phys. Rev. Lett. **60**, 1711 (1988). P. Lett, W. Christian, S. Singh, and L. Mandel, Phys. Rev. Lett. **47**, 1892 (1981). P. Bergé, Y. Pomeau, and C. Vidal, *Order Within Chaos: Towards a Deterministic Approach to Turbulence*, Wiley (1987). F. T. Arecchi, W. Gadomski, and R. Meucci, Phys. Rev. A **34**, 1617 (1986). D. Pieroux, T. Erneux, and K. Otsuka, Phys.
Rev. A **50**, 1822 (1994). F. T. Arecchi, G. L. Lippi, G. P. Puccioni, and J. R. Tredicce, Opt. Commun. **51**, 308 (1984). H. Halbritter, F. Riemenschneider, J. Jacquet, J.-G. Provost, I. Sagnes, and P. Meissner, IEEE Photon. Tech. Lett. **16**, 723 (2004). G. Baili et al., J. Lightwave Tech. **26**, 952 (2008). G. Baili et al., Opt. Lett. **32**, 650 (2007). A. W. Yu, G. P. Agrawal, and R. Roy, Opt. Lett. **12**, 806 (1987). D. E. McCumber, Phys. Rev. **141**, 306 (1966). J. T. Verdeyen, *Laser Electronics*, 3rd edition, Prentice Hall (1995). K. Petermann, *Laser Diode Modulation and Noise*, Kluwer (1991). M. Lackner, M. Schwarzott, F. Winter, B. Kögel, S. Jatta, H. Halbritter, and P. Meissner, Opt. Lett. **31**, 3170 (2006). G. Björk, A. Karlsson, and Y. Yamamoto, Phys. Rev. A **50**, 1675 (1994).
--- abstract: 'A Yukawa-Higgs model with Ginsparg-Wilson (GW) fermions, proposed recently by Bhattacharya, Martin and Poppitz as a possible lattice formulation of chiral gauge theories, is studied. A simple argument shows that the gauge boson always acquires mass by the Stückelberg (or, in a broad sense, Higgs) mechanism, regardless of the strength of the interactions. The gauge symmetry is spontaneously broken. When the gauge coupling constant is small, the physical spectrum of the model consists of massless fermions, massive fermions and *massive* vector bosons.' author: - Hiroshi Suzuki title: 'Perturbative Spectrum of a Yukawa-Higgs Model with Ginsparg-Wilson Fermions' --- Recently, Bhattacharya, Martin and Poppitz [@Bhattacharya:2006dc] proposed a Yukawa-Higgs model with GW fermions as a possible lattice formulation of chiral gauge theories. (For reviews on various approaches to this problem, see Refs. [@Petcher:1993mn; @Shamir:1995zx; @Golterman:2000hr; @Golterman:2004wd].) This approach was subsequently studied by analytical and numerical methods [@Giedt:2007qg; @Poppitz:2007tu]. The idea [@Bhattacharya:2006dc] is that half of the fermion sector (“mirror fermions”) in a vector-like theory decouples by forming heavy composite fermions through strong Yukawa interactions and that, at the same time, the gauge symmetry is not spontaneously broken, the Higgs sector being kept in a symmetric phase (by choosing a coupling $\kappa$ small; see below). They argued that, in this way, the spectrum desired of a chiral gauge theory, that is, massless Weyl fermions interacting via massless gauge bosons, can be realized. If this scenario came true, it would imply a great simplification, because the lattice chiral gauge theory formulated in Refs. [@Luscher:1998du; @Luscher:1999un] on the basis of the GW relation requires an ingenious construction of the fermion integration measure.
An “ideal” measure must be consistent with locality, gauge invariance and smoothness, and its construction is far from trivial. Although an explicit way of construction is known for (anomaly-free) $\operatorname{U}(1)$ gauge theories [@Luscher:1998du; @Kadoh:2003ii; @Kadoh:2004uu; @Kadoh:2005fa] (and for the electroweak $\operatorname{SU}(2)_L\times\operatorname{U}(1)_Y$ theory [@Kikukawa:2000kd; @Kadoh:2007]), for general non-abelian theories the construction has been known only to all orders of perturbation theory [@Luscher:2000zd]. (The existence of an ideal measure in perturbation theory was shown in Refs. [@Suzuki:2000ii; @Igarashi:2000zi].) Construction of the fermion integration measure at the non-perturbative level is a mathematically complex problem requiring, first of all, a non-abelian generalization of a local cohomology argument on the lattice [@Luscher:1998kn; @Fujiwara:1999fi; @Fujiwara:1999fj; @Kikukawa:2001mw; @Igarashi:2002zz; @Kadoh:2003ii; @Kadoh:2004uu; @Kadoh:2005fa] that is so far available only for the gauge group $\operatorname{U}(1)$. On the other hand, as we will review below, the fermion integration measure in the proposal of Ref. [@Bhattacharya:2006dc] is quite simple. Therefore, there is hope that the mirror fermions and the Higgs field “dynamically” provide an ideal integration measure for massless Weyl fermions while evading the above complexity. In this brief report, we show that the model unfortunately fails to meet these expectations. The physical vector boson always acquires mass by the Stückelberg (or Higgs) mechanism, regardless of the strength of the interactions. In this sense, the gauge symmetry is always spontaneously broken. Our argument is very simple and kinematical: it relies only on the symmetry structure of the model. Because of the simplicity of this argument, we believe that some workers in this field have already arrived at a conclusion identical to ours.
In fact, it has been known that a compact Higgs field (see below) can be interpreted as a Stückelberg field; see, for example, Ref. [@Golterman:1991re]. On the other hand, it appears that the point we want to emphasize below is not so well-appreciated. As an example, we take the so-called “345” model studied in Ref. [@Bhattacharya:2006dc]. The target theory is a two-dimensional $\operatorname{U}(1)$ chiral gauge theory that contains two left-handed Weyl fermions (their $\operatorname{U}(1)$ charges are 3 and 4, respectively) and one right-handed Weyl fermion (its $\operatorname{U}(1)$ charge is 5). Since $3^2+4^2=5^2$, this system is free from the gauge anomaly (the issue of the gauge anomaly plays no central role in what follows, however). The partition function of the model, according to Refs. [@Bhattacharya:2006dc; @Giedt:2007qg; @Poppitz:2007tu], is defined by $$\mathcal{Z}=\int\prod_x\left(\prod_\mu{{\rm d}}U(x,\mu)\right)\, {{\rm d}}\phi(x)\, \left(\prod_{q=0,3,4,5}{{\rm d}}\psi_q(x)\,{{\rm d}}\overline\psi_q(x)\right)e^{-S}, \label{one}$$ where $\mu$ runs from 0 to 1. In this expression, $U(x,\mu)$ denotes the $\operatorname{U}(1)$ link variables and $\phi(x)\in\operatorname{U}(1)$ is a *compact* Higgs field. ${{\rm d}}U(x,\mu)$ and ${{\rm d}}\phi(x)$ are the Haar measures. There are four fermion fields, $\psi_0(x)$, $\psi_3(x)$, $\psi_4(x)$ and $\psi_5(x)$. The first one, $\psi_0$, is a spectator having no $\operatorname{U}(1)$ charge; it is introduced to form appropriate Yukawa interactions below. Note that the integration measure of the fermions is trivial in the sense that it is a simple product of Grassmann integrals (like that in lattice QCD). This point is quite different from the construction of the fermion integration measure in the framework of Refs. [@Luscher:1998du; @Luscher:1999un], which requires a careful choice of basis vectors in which the Weyl fermion fields are expanded.
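As a side remark, the anomaly-cancellation arithmetic behind this charge assignment (in two dimensions the $\operatorname{U}(1)$ gauge anomaly cancels when the sums of the squared charges of the left- and right-handed fermions match) is a one-line check; the search over other charge triples is a purely illustrative aside:

```python
# "345" model: left-handed charges 3 and 4, right-handed charge 5
left, right = [3, 4], [5]
assert sum(q**2 for q in left) == sum(q**2 for q in right)  # 9 + 16 = 25

# illustrative aside: anomaly-free assignments of the same
# (two left-movers, one right-mover) type are Pythagorean triples
triples = [(a, b, c) for a in range(1, 21) for b in range(a, 21)
           for c in range(b, 21) if a * a + b * b == c * c]
assert (3, 4, 5) in triples  # the smallest such assignment
```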
The total action is given by $$S=S_{\text{G}}+S_\kappa+S_{\text{light}}+S_{\text{mirror}}. \label{two}$$ We do not need to specify an explicit form of the gauge action $S_{\text{G}}$, although we assume that it belongs to the same universality class as the plaquette action. What is important for us is its invariance under the lattice gauge transformation ($\hat\mu$ denotes the unit vector in the $\mu$-direction and the lattice spacing $a$ is set to 1 throughout most of this paper) $$\begin{aligned} U(x,\mu)\to\Lambda(x)U(x,\mu)\Lambda(x+\hat\mu)^{-1}, \label{three}\end{aligned}$$ where $\Lambda(x)\in\operatorname{U}(1)$. The kinetic term of the Higgs field, $S_\kappa$, is $$S_\kappa=\kappa\sum_x\sum_\mu\operatorname{Re}\left\{1-\phi(x)^{-1}U(x,\mu)\phi(x+\hat\mu)\right\}, \label{four}$$ where we have assumed that the field $\phi(x)$ carries the $\operatorname{U}(1)$ charge $+1$. The gauge transformation of $\phi$ is thus given by $$\begin{aligned} \phi(x)\to\Lambda(x)\phi(x). \label{five}\end{aligned}$$ Of course, $S_\kappa$ is invariant under the gauge transformations (\[three\]) and (\[five\]). 
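The gauge invariance of $S_\kappa$ under (\[three\]) and (\[five\]) can also be checked numerically. The following is a minimal sketch (not part of the original analysis), assuming a small $4\times4$ periodic lattice and representing $\operatorname{U}(1)$ elements by their phases; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4  # 4x4 periodic lattice (illustrative size)

# U(1) link variables U(x,mu) = exp(i*theta) and compact Higgs phi(x) = exp(i*alpha)
theta = rng.uniform(0, 2 * np.pi, size=(L, L, 2))   # link phases
alpha = rng.uniform(0, 2 * np.pi, size=(L, L))      # Higgs phases

def S_kappa(theta, alpha, kappa=1.0):
    """Higgs kinetic term: kappa * sum_{x,mu} Re{1 - phi(x)^-1 U(x,mu) phi(x+mu)}."""
    S = 0.0
    for mu in range(2):
        # np.roll(-1) implements the periodic shift x -> x + mu-hat
        S += np.sum(1.0 - np.cos(-alpha + theta[..., mu] + np.roll(alpha, -1, axis=mu)))
    return kappa * S

# Lattice gauge transformation: U -> Lambda U Lambda(x+mu)^-1, phi -> Lambda phi
lam = rng.uniform(0, 2 * np.pi, size=(L, L))
theta_g = np.empty_like(theta)
for mu in range(2):
    theta_g[..., mu] = theta[..., mu] + lam - np.roll(lam, -1, axis=mu)
alpha_g = alpha + lam

print(np.isclose(S_kappa(theta, alpha), S_kappa(theta_g, alpha_g)))  # True
```

The phases combine as $-\alpha(x)+\theta(x,\mu)+\alpha(x+\hat\mu)$, which is exactly invariant under the shift by $\lambda$, so the two actions agree to machine precision.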
The actions of “light” fermions, which correspond to massless Weyl fermions in the target theory, are given by $$S_{\text{light}}=\sum_x\left\{ \overline\psi_{0,+}D_0\psi_{0,+}+\overline\psi_{3,-}D_3\psi_{3,-} +\overline\psi_{4,-}D_4\psi_{4,-}+\overline\psi_{5,+}D_5\psi_{5,+} \right\} \label{six}$$ and, for the “mirror” ones, $$\begin{aligned} S_{\text{mirror}}&=\sum_x\left\{ \overline\psi_{0,-}D_0\psi_{0,-}+\overline\psi_{3,+}D_3\psi_{3,+} +\overline\psi_{4,+}D_4\psi_{4,+}+\overline\psi_{5,-}D_5\psi_{5,-} \right\} \nonumber\\ &\quad{}+y\sum_x\bigl\{ \overline\psi_{0,-}(\phi^{-1})^3\psi_{3,+} +\overline\psi_{3,+}(\phi)^3\psi_{0,-} +\overline\psi_{0,-}(\phi^{-1})^4\psi_{4,+} +\overline\psi_{4,+}(\phi)^4\psi_{0,-} \nonumber\\ &\qquad\qquad{}+ \overline\psi_{3,+}(\phi^{-1})^2\psi_{5,-} +\overline\psi_{5,-}(\phi)^2\psi_{3,+} +\overline\psi_{4,+}(\phi^{-1})\psi_{5,-} +\overline\psi_{5,-}(\phi)\psi_{4,+} \bigr\} \nonumber\\ &\quad{}+h\sum_x\bigl\{ \psi_{0,-}^TB(\phi^{-1})^3\psi_{3,+} -\overline\psi_{3,+}B(\phi)^3\overline\psi_{0,-}^T +\psi_{0,-}^TB(\phi^{-1})^4\psi_{4,+} -\overline\psi_{4,+}B(\phi)^4\overline\psi_{0,-}^T \nonumber\\ &\qquad\qquad{}+ \psi_{3,+}^TB(\phi^{-1})^8\psi_{5,-} -\overline\psi_{5,-}B(\phi)^8\overline\psi_{3,+}^T +\psi_{4,+}^TB(\phi^{-1})^9\psi_{5,-} -\overline\psi_{5,-}B(\phi)^9\overline\psi_{4,+}^T \bigr\}, \label{seven}\end{aligned}$$ where $B$ denotes the charge conjugation matrix in two dimensions. The expressions (\[six\]) and (\[seven\]) need some explanation. The subscript $q$ of the lattice Dirac operators $D_q$ ($q=0$, 3, 4 or 5) indicates the $\operatorname{U}(1)$ charge of the fermion on which it acts. In the lattice Dirac operator $D_q$, the link variables appear in the representation $(U(x,\mu))^q$. The Dirac operator $D_q$ must be gauge covariant; that is, under the gauge transformation (\[three\]) it transforms as $D_q\to(\Lambda)^qD_q(\Lambda^{-1})^q$. 
It is also assumed that $D_q$ satisfies the GW relation [@Ginsparg:1981bj] $$\gamma_5D_q+D_q\gamma_5=D_q\gamma_5D_q. \label{eight}$$ Neuberger’s operator [@Neuberger:1997fp; @Neuberger:1998wv] is the simplest of such lattice Dirac operators. Defining the combination $\hat\gamma_{q,5}=\gamma_5(1-D_q)$, one has from the GW relation $$(\hat\gamma_{q,5})^2=1,\qquad D_q\hat\gamma_{q,5}=-\gamma_5D_q \label{nine}$$ and hence $\hat\gamma_{q,5}$ is a lattice analogue of $\gamma_5$ [@Luscher:1998pq; @Narayanan:1998uu; @Niedermayer:1998bi]. We also introduce the projection operators $$\hat P_{q,\pm}={1\over2}(1\pm\hat\gamma_{q,5}),\qquad P_\pm={1\over2}(1\pm\gamma_5) \label{ten}$$ and define the chiral components of the lattice fermions by $$\psi_{q,\pm}(x)\equiv\hat P_{q,\pm}\psi_q(x),\qquad \overline\psi_{q,\pm}(x)\equiv\overline\psi_q(x)P_\mp \label{eleven}$$ for each $q$. Note that, because of the property (\[nine\]), the action of a lattice Dirac fermion decomposes completely into right- and left-handed parts, $$\overline\psi_q(x)D_q\psi_q(x)= \overline\psi_{q,+}(x)D_q\psi_{q,+}(x)+\overline\psi_{q,-}(x)D_q\psi_{q,-}(x). \label{twelve}$$ As emphasized in Refs. [@Bhattacharya:2006dc; @Giedt:2007qg; @Poppitz:2007tu], this complete chiral separation of a lattice action is peculiar to formulations based on a lattice Dirac operator satisfying the GW relation. Since the Dirac operator is gauge covariant, so are the projection operators, $\hat P_{q,\pm}\to(\Lambda)^q\hat P_{q,\pm}(\Lambda^{-1})^q$ (and of course $P_\pm\to(\Lambda)^qP_\pm(\Lambda^{-1})^q$). The actions (\[six\]) and (\[seven\]) are then clearly invariant under the simultaneous gauge transformations (\[three\]), (\[five\]) and $$\psi_q(x)\to(\Lambda(x))^q\psi_q(x),\qquad \overline\psi_q(x)\to\overline\psi_q(x)(\Lambda(x)^{-1})^q. \label{thirteen}$$ The action for the light fermions, $S_{\text{light}}$, is identical to the action of the Weyl fermions that would be adopted in the formulation of Ref. [@Luscher:1998du]. 
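The algebraic consequences (\[nine\])–(\[twelve\]) of the GW relation can be verified with plain linear algebra. The sketch below is an illustration, not a real lattice Dirac operator: it builds a miniature overlap-type matrix $D=1-\gamma_5\,\operatorname{sign}(H)$ from a random Hermitian $H$, which satisfies the GW relation exactly, and then checks the identities numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8  # total matrix size; toy gamma_5 = diag(+1,...,+1,-1,...,-1)
g5 = np.diag([1.0] * (n // 2) + [-1.0] * (n // 2))

# Miniature overlap construction: D = 1 - g5 * sign(H), H Hermitian,
# automatically satisfies the GW relation g5 D + D g5 = D g5 D.
H = rng.normal(size=(n, n)); H = (H + H.T) / 2
w, V = np.linalg.eigh(H)
signH = V @ np.diag(np.sign(w)) @ V.T
D = np.eye(n) - g5 @ signH

# GW relation (eight)
assert np.allclose(g5 @ D + D @ g5, D @ g5 @ D)

# hat{gamma}_5 = gamma_5 (1 - D): squares to one and satisfies (nine)
g5hat = g5 @ (np.eye(n) - D)
assert np.allclose(g5hat @ g5hat, np.eye(n))
assert np.allclose(D @ g5hat, -g5 @ D)

# Complete chiral decomposition (twelve): D = P_- D Phat_+ + P_+ D Phat_-
Pp_hat, Pm_hat = (np.eye(n) + g5hat) / 2, (np.eye(n) - g5hat) / 2
Pp, Pm = (np.eye(n) + g5) / 2, (np.eye(n) - g5) / 2
assert np.allclose(D, Pm @ D @ Pp_hat + Pp @ D @ Pm_hat)
```

The last assertion is exactly the statement that the cross terms $\overline\psi_\pm D\psi_\mp$ vanish, since (\[nine\]) implies $D\hat P_\pm = P_\mp D$.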
See also Ref. [@Niedermayer:1998bi]. The Yukawa interactions in Eq. (\[seven\]) are chosen [@Bhattacharya:2006dc] so that they break all global (vector as well as chiral) $\operatorname{U}(1)$ transformations of the mirror fermions, $\psi_{0,-}$, $\psi_{3,+}$, $\psi_{4,+}$ and $\psi_{5,-}$, except the global $\operatorname{U}(1)$ part of the gauge transformations (\[thirteen\]) and (\[five\]). Now, our argument is based on a simple change of integration variables in Eq. (\[one\]). Instead of the gauge-variant original variables $U(x,\mu)$, $\psi_q(x)$ and $\overline\psi_q(x)$, one may use gauge *invariant* ones, $$\begin{aligned} &U'(x,\mu)=\phi(x)^{-1}U(x,\mu)\phi(x+\hat\mu), \nonumber\\ &\psi_q'(x)=(\phi(x)^{-1})^q\psi_q(x),\qquad \overline\psi_q'(x)=\overline\psi_q(x)(\phi(x))^q. \label{fourteen}\end{aligned}$$ For any fixed configuration of $\phi(x)$, the Jacobian from $\{U(x,\mu),\psi_q(x),\overline\psi_q(x)\}$ to $\{U'(x,\mu),\psi_q'(x),\overline\psi_q'(x)\}$ is unity, because $\phi(x)\in U(1)$ and the numbers of integration variables $\psi_q(x)$ and $\overline\psi_q(x)$ are the same. It is obvious that the action $S$, when expressed in terms of these primed variables, no longer contains the $\phi$-field. This is simply a reflection of the gauge invariance of the action and of the fact that the compact field $\phi(x)\in\operatorname{U}(1)$ can be regarded as a parameter of the lattice gauge transformation. Then, since $\phi$ is compact, we can integrate it out of the partition function. After this change of variables, the kinetic term of the $\phi$-field becomes the mass term of the (gauge invariant) vector boson [^1] $$S_\kappa=\kappa\sum_x\sum_\mu\operatorname{Re}\left\{1-U'(x,\mu)\right\}.$$ Thus we see that the vector boson acquires mass by the Stückelberg (or, in a broad sense, Higgs) mechanism [^2]. (For a review of the Stückelberg mechanism, see Ref. [@Ruegg:2003ps].) 
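The mechanism can be illustrated numerically: written in terms of $U'$, the Higgs kinetic term is exactly the vector-boson mass term above. A minimal sketch under the same toy conventions as before (phases on a small periodic lattice; names illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
L = 4  # small periodic lattice (illustrative)
theta = rng.uniform(0, 2 * np.pi, size=(L, L, 2))   # U(x,mu) = exp(i*theta)
alpha = rng.uniform(0, 2 * np.pi, size=(L, L))      # phi(x)  = exp(i*alpha)

# Original Higgs kinetic term: kappa * sum Re{1 - phi(x)^-1 U(x,mu) phi(x+mu)}
S_orig = sum(
    np.sum(1.0 - np.cos(-alpha + theta[..., mu] + np.roll(alpha, -1, axis=mu)))
    for mu in range(2)
)

# Gauge-invariant variable (fourteen): U'(x,mu) = phi(x)^-1 U(x,mu) phi(x+mu)
theta_p = np.stack(
    [-alpha + theta[..., mu] + np.roll(alpha, -1, axis=mu) for mu in range(2)],
    axis=-1,
)

# In the primed variables S_kappa is a pure mass term with no phi dependence
S_mass = np.sum(1.0 - np.cos(theta_p))
assert np.isclose(S_orig, S_mass)
```

The equality holds configuration by configuration, which is why integrating out the compact $\phi$ is trivial.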
Our choice of the primed variables (\[fourteen\]) corresponds to the so-called unitary gauge, and one can say that the gauge symmetry is spontaneously broken. In terms of the primed variables, the Yukawa interactions in $S_{\text{mirror}}$ become mass terms of the mirror fermions. Note that the above argument holds regardless of the strength of the interactions. In the present two-dimensional theory, the dimensionless gauge coupling constant $ag$ goes to zero in the continuum limit $a\to0$. For $ag\ll1$, the situation relevant in the continuum limit, the spectrum of the model consists of massless fermions, massive fermions and *massive* vector bosons, interacting through chiral couplings. The mass of the massive fermions is $O(y/a)$ or $O(h/a)$. The mass of the vector boson is, on the other hand, $O(\kappa g)$. Since the variables (\[fourteen\]) are gauge invariant, this is a physical spectrum. This perturbative physical spectrum differs from the one that might be expected in chiral gauge theories in the perturbative regime. In the above example, the Higgs field has $\operatorname{U}(1)$ charge $+1$, which is, in the terminology of Ref. [@Fradkin:1978dv], the “fundamental representation”. In fact, our argument above is nothing but the argument used in Ref. [@Fradkin:1978dv] to show that lattice gauge models with a compact Higgs field in the fundamental representation are in the Higgs phase. The presence of fermions is not relevant to this argument. Here, one cannot repeat the argument of Ref. [@Fradkin:1978dv] that shows the existence of the Coulomb phase (in which the gauge symmetry is not spontaneously broken) for $\kappa\ll1$, because that argument is based on the presence of a phase transition in pure gauge models. In two-dimensional gauge models, such a phase transition does not occur. 
A similar argument can be repeated for the two-dimensional “1-0” model [@Giedt:2007qg; @Poppitz:2007tu], which contains two fermions with $\operatorname{U}(1)$ charges $+1$ and $0$, respectively. The target chiral gauge theory of this model is anomalous because $1^2\neq0$, but our argument nevertheless proceeds without any essential change. We again have massless fermions, massive fermions and massive vector bosons. This is very natural, because the two-dimensional anomalous $\operatorname{U}(1)$ chiral gauge theory can be consistent if the vector boson is allowed to be massive [@Jackiw:1984zi; @Halliday:1985tg]. In Refs. [@Bhattacharya:2006dc; @Giedt:2007qg; @Poppitz:2007tu], the authors consider the limit $ag=0$, where $ag$ is the dimensionless gauge coupling constant, as a first approximation. They then completely neglect the gauge fields, *including* the gauge degrees of freedom. What we want to emphasize in this note is that this kind of approximation, which neglects the underlying gauge symmetry, can sometimes be misleading. In other words, the nature of the spontaneous breaking of a continuous symmetry crucially depends on whether the symmetry is global or local (i.e., gauged). For example, global symmetries cannot be spontaneously broken in two dimensions [@Coleman:1973ci], while the Higgs mechanism in two dimensions is itself not prohibited. The above construction of $\operatorname{U}(1)$ models can be generalized to four dimensions. Our conclusion on the massive vector boson is similar, except that the models must now be used at finite lattice spacing, because they are not renormalizable (in the first place, due to the Yukawa couplings with a compact Higgs field). Finally, we comment on the generalization to a non-abelian compact gauge group $G$. 
A natural generalization of the Higgs action is $$S_\kappa=\kappa\sum_x\sum_\mu\operatorname{Re}\operatorname{tr}\left\{1-\phi(x)^{-1}U(x,\mu)\phi(x+\hat\mu)\right\},$$ where the compact Higgs field $\phi(x)$ is $G$-valued and transforms as $\phi(x)\to\Lambda(x)\phi(x)$ under the lattice gauge transformation. The fermion actions would be replaced by $$\begin{aligned} &S_{\text{light}}=\sum_x\left\{ \overline\chi_+D_0\chi_++\overline\psi_-D\psi_-\right\}, \nonumber\\ &S_{\text{mirror}}=\sum_x\left\{ \overline\chi_-D_0\chi_-+\overline\psi_+D\psi_+\right\} \nonumber\\ &\qquad\qquad{} +y\sum_x\left\{ \overline\chi_-R(\phi^{-1})\psi_+ +\overline\psi_+R(\phi)\chi_-\right\} +h\sum_x\left\{ \chi_-^TBR(\phi^{-1})\psi_+ -\overline\psi_+BR(\phi)\overline\chi_-^T\right\},\end{aligned}$$ where $B$ denotes the charge conjugation matrix. We assume that the fermion $\psi$ belongs to a unitary (generally reducible) representation $R$ of $G$; $R(\phi)$, for example, denotes the Higgs field in that representation. $\chi$ is a spectator (gauge singlet), and we have to introduce $\dim R$ spectators. The lattice Dirac operators and the chirality projections are defined according to the gauge representations of the fermions. We do not write down an explicit form of the gauge transformations, etc., because the generalization from the abelian case is obvious. Now, we may make a change of variables (corresponding to the unitary gauge), $$\begin{aligned} &U'(x,\mu)=\phi(x)^{-1}U(x,\mu)\phi(x+\hat\mu), \nonumber\\ &\psi'(x)=R(\phi(x)^{-1})\psi(x),\qquad \overline\psi'(x)=\overline\psi(x)R(\phi(x)).\end{aligned}$$ Then the total action becomes independent of the Higgs field $\phi$ and we have a physical spectrum similar to that of the above $\operatorname{U}(1)$ case. Note that, in this model, all vector bosons become massive and the $G$ gauge symmetry is completely broken. 
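The same cancellation of $\phi$ can be checked in the non-abelian case. The sketch below takes $G=\operatorname{SU}(2)$ on a tiny periodic lattice (an illustrative toy, not a simulation): after the unitary-gauge change of variables, $S_\kappa$ evaluated with $\phi\equiv1$ reproduces the original action exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 3  # tiny periodic lattice (illustrative)

def random_su2(shape):
    """Haar-distributed SU(2) matrices from normalized Gaussian quaternions
    a0 + i(a1 sx + a2 sy + a3 sz)."""
    a = rng.normal(size=shape + (4,))
    a /= np.linalg.norm(a, axis=-1, keepdims=True)
    U = np.empty(shape + (2, 2), dtype=complex)
    U[..., 0, 0] = a[..., 0] + 1j * a[..., 3]
    U[..., 0, 1] = a[..., 2] + 1j * a[..., 1]
    U[..., 1, 0] = -a[..., 2] + 1j * a[..., 1]
    U[..., 1, 1] = a[..., 0] - 1j * a[..., 3]
    return U

U = random_su2((L, L, 2))   # links U(x,mu)
phi = random_su2((L, L))    # G-valued compact Higgs field

def S_kappa(U, phi, kappa=1.0):
    """kappa * sum_{x,mu} Re tr{1 - phi(x)^-1 U(x,mu) phi(x+mu)}."""
    S = 0.0
    phid = np.conj(np.swapaxes(phi, -1, -2))  # phi^-1 = phi^dagger for SU(2)
    for mu in range(2):
        W = phid @ U[:, :, mu] @ np.roll(phi, -1, axis=mu)
        S += np.sum(2.0 - np.real(np.trace(W, axis1=-2, axis2=-1)))
    return kappa * S

# Unitary-gauge change of variables: U'(x,mu) = phi(x)^-1 U(x,mu) phi(x+mu)
phid = np.conj(np.swapaxes(phi, -1, -2))
Up = np.empty_like(U)
for mu in range(2):
    Up[:, :, mu] = phid @ U[:, :, mu] @ np.roll(phi, -1, axis=mu)

one = np.broadcast_to(np.eye(2), (L, L, 2, 2))  # phi = 1 everywhere
assert np.isclose(S_kappa(U, phi), S_kappa(Up, one))
```

As in the abelian case, the equality holds configuration by configuration, so $\phi$ drops out of the action and the $\kappa$-term becomes a mass term for all the vector bosons.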
The unitary gauge amounts to setting $\phi(x)\equiv1$, and this configuration is not invariant under any non-trivial gauge transformation. Thus, with the above construction, it is impossible to leave some subgroup $H$, such as the $\operatorname{U}(1)_{\text{EM}}$ within the standard model $\operatorname{SU}(3)\times\operatorname{SU}(2)_L\times\operatorname{U}(1)_Y$, unbroken. The $G$-valued compact Higgs field precisely corresponds to the “fundamental representation” case considered in Ref. [@Fradkin:1978dv], and our conclusion is consistent with that of Ref. [@Fradkin:1978dv]: the model is in the Higgs phase. In two dimensions, because of the absence of a phase transition in the pure gauge sector, the argument of Ref. [@Fradkin:1978dv] for the existence of the Coulomb phase does not apply. In four dimensions, non-abelian models with massive vector bosons, in which the mass is provided by the Stückelberg (not the Higgs in the narrow sense) mechanism, are not renormalizable, and such models should be used at finite lattice spacing. On a related issue, see Ref. [@Preskill:1990fr]. In conclusion, the Yukawa-Higgs model with GW fermions proposed in Ref. [@Bhattacharya:2006dc] regrettably cannot be a starting point for the lattice formulation of chiral gauge theories, because the gauge symmetry is always spontaneously broken. [99]{} T. Bhattacharya, M. R. Martin and E. Poppitz, Phys. Rev.  D [**74**]{}, 085028 (2006) \[arXiv:hep-lat/0605003\]. D. N. Petcher, arXiv:hep-lat/9301015. Y. Shamir, Nucl. Phys. Proc. Suppl.  [**47**]{}, 212 (1996) \[arXiv:hep-lat/9509023\]. M. Golterman, Nucl. Phys. Proc. Suppl.  [**94**]{}, 189 (2001) \[arXiv:hep-lat/0011027\]. M. Golterman and Y. Shamir, Nucl. Phys. Proc. Suppl.  [**140**]{}, 671 (2005) \[arXiv:hep-lat/0409052\]. J. Giedt and E. Poppitz, arXiv:hep-lat/0701004. E. Poppitz and Y. Shang, arXiv:0706.1043 \[hep-th\]. M. Lüscher, Nucl. Phys.  B [**549**]{}, 295 (1999) \[arXiv:hep-lat/9811032\]. M. Lüscher, Nucl. Phys.  
B [**568**]{}, 162 (2000) \[arXiv:hep-lat/9904009\]. D. Kadoh, Y. Kikukawa and Y. Nakayama, JHEP [**0412**]{}, 006 (2004) \[arXiv:hep-lat/0309022\]. D. Kadoh and Y. Kikukawa, JHEP [**0501**]{}, 024 (2005) \[arXiv:hep-lat/0401025\]. D. Kadoh and Y. Kikukawa, arXiv:hep-lat/0504021. Y. Kikukawa and Y. Nakayama, Nucl. Phys.  B [**597**]{}, 519 (2001) \[arXiv:hep-lat/0005015\]. D. Kadoh and Y. Kikukawa, private communication. M. Lüscher, JHEP [**0006**]{}, 028 (2000) \[arXiv:hep-lat/0006014\]. H. Suzuki, Nucl. Phys.  B [**585**]{}, 471 (2000) \[arXiv:hep-lat/0002009\]. H. Igarashi, K. Okuyama and H. Suzuki, arXiv:hep-lat/0012018. M. Lüscher, Nucl. Phys.  B [**538**]{}, 515 (1999) \[arXiv:hep-lat/9808021\]. T. Fujiwara, H. Suzuki and K. Wu, Nucl. Phys.  B [**569**]{}, 643 (2000) \[arXiv:hep-lat/9906015\]. T. Fujiwara, H. Suzuki and K. Wu, Phys. Lett.  B [**463**]{}, 63 (1999) \[arXiv:hep-lat/9906016\]. Y. Kikukawa, Phys. Rev.  D [**65**]{}, 074504 (2002) \[arXiv:hep-lat/0105032\]. H. Igarashi, K. Okuyama and H. Suzuki, Nucl. Phys.  B [**644**]{}, 383 (2002) \[arXiv:hep-lat/0206003\]. M. F. L. Golterman, D. N. Petcher and J. Smit, Nucl. Phys.  B [**370**]{}, 51 (1992). P. H. Ginsparg and K. G. Wilson, Phys. Rev.  D [**25**]{}, 2649 (1982). H. Neuberger, Phys. Lett.  B [**417**]{}, 141 (1998) \[arXiv:hep-lat/9707022\]. H. Neuberger, Phys. Lett.  B [**427**]{}, 353 (1998) \[arXiv:hep-lat/9801031\]. M. Lüscher, Phys. Lett.  B [**428**]{}, 342 (1998) \[arXiv:hep-lat/9802011\]. R. Narayanan, Phys. Rev.  D [**58**]{}, 097501 (1998) \[arXiv:hep-lat/9802018\]. F. Niedermayer, Nucl. Phys. Proc. Suppl.  [**73**]{}, 105 (1999) \[arXiv:hep-lat/9810026\]. H. Ruegg and M. Ruiz-Altaba, Int. J. Mod. Phys.  A [**19**]{}, 3265 (2004) \[arXiv:hep-th/0304245\]. E. H. Fradkin and S. H. Shenker, Phys. Rev.  D [**19**]{}, 3682 (1979). R. Jackiw and R. Rajaraman, Phys. Rev. Lett.  [**54**]{}, 1219 (1985) \[Erratum-ibid.  [**54**]{}, 2060 (1985)\]. I. G. Halliday, E. Rabinovici, A. 
Schwimmer and M. S. Chanowitz, Nucl. Phys.  B [**268**]{}, 413 (1986). S. R. Coleman, Commun. Math. Phys.  [**31**]{}, 259 (1973). J. Preskill, Annals Phys.  [**210**]{}, 323 (1991). [^1]: The importance of this phenomenon in a somewhat different context was stressed to me by Yoshio Kikukawa. [^2]: Here, by the Stückelberg mechanism, we mean the situation in which the gauge boson acquires mass by completely absorbing all scalar fields and all the scalar fields correspond to gauge degrees of freedom.
--- abstract: 'Machine learning has been used to detect new malware in recent years, while malware authors have strong motivation to attack such algorithms. Malware authors usually have no access to the detailed structures and parameters of the machine learning models used by malware detection systems, and therefore they can only perform black-box attacks. This paper proposes a generative adversarial network (GAN) based algorithm named MalGAN to generate adversarial malware examples, which are able to bypass black-box machine learning based detection models. MalGAN uses a substitute detector to fit the black-box malware detection system. A generative network is trained to minimize the generated adversarial examples’ malicious probabilities predicted by the substitute detector. The superiority of MalGAN over traditional gradient based adversarial example generation algorithms is that MalGAN is able to decrease the detection rate to nearly zero and to render retraining based defensive methods against adversarial examples largely ineffective.' author: - Weiwei Hu - | Ying Tan[^1]\ Key Laboratory of Machine Perception (MOE), and Department of Machine Intelligence\ School of Electronics Engineering and Computer Science, Peking University, Beijing, 100871 China\ {weiwei.hu, ytan}@pku.edu.cn bibliography: - 'ijcai17.bib' title: 'Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN' --- Introduction ============ In recent years, many machine learning based algorithms have been proposed to detect malware; they extract features from programs and use a classifier to separate malware from benign programs. For example, Schultz et al. proposed to use DLLs, APIs and strings as features for classification [@schultz2001data], while Kolter et al. used byte level N-Grams as features [@kolter2004learning; @kolter2006learning]. Most researchers focused their efforts on improving the detection performance (e.g. 
true positive rate, accuracy and AUC) of such algorithms, but ignored their robustness. Generally speaking, the propagation of malware benefits malware authors. Therefore, malware authors have sufficient motivation to attack malware detection algorithms. Many machine learning algorithms are very vulnerable to intentional attacks. Machine learning based malware detection algorithms cannot be used in real-world applications if they can easily be bypassed by adversarial techniques. Recently, adversarial examples of deep learning models have attracted the attention of many researchers. Szegedy et al. added imperceptible perturbations to images to maximize a trained neural network’s classification errors, making the network unable to classify the images correctly [@szegedy2013intriguing]. The examples obtained after adding perturbations are called adversarial examples. Goodfellow et al. proposed a gradient based algorithm to generate adversarial examples [@goodfellow2014explaining]. Papernot et al. used the Jacobian matrix to determine which features to modify when generating adversarial examples [@papernot2016limitations]. The Jacobian matrix based approach is also a kind of gradient based algorithm. Grosse et al. proposed to use the gradient based approach to generate adversarial Android malware examples [@grosse2016adversarial]. The adversarial examples are used to fool a neural network based malware detection model. They assumed that attackers have full access to the parameters of the malware detection model. For different sizes of neural networks, the misclassification rates after adversarial crafting range from 40% to 84%. In some cases, attackers have no access to the architecture and weights of the neural network to be attacked; the target model is a black box to attackers. Papernot et al. 
used a substitute neural network to fit the black-box neural network and then generated adversarial examples according to the substitute neural network [@papernot2016practical]. They also used a substitute neural network to attack other machine learning algorithms such as logistic regression, support vector machines, decision trees and nearest neighbors [@papernot2016transferability]. Liu et al. performed black-box attacks without a substitute model [@liu2016delving], based on the principle that adversarial examples can transfer among different models [@szegedy2013intriguing]. Machine learning based malware detection algorithms are usually integrated into antivirus software or hosted on the cloud side, and therefore they are black-box systems to malware authors. It is hard for malware authors to know which classifier a malware detection system uses or what its parameters are. However, it is possible to figure out what features a malware detection algorithm uses by feeding some carefully designed test cases to the black-box algorithm. For example, if a malware detection algorithm uses static DLL or API features from the import directory table or the import lookup tables of PE programs [@microsoft2016pe], malware authors can manually modify some DLL or API names in the import directory table or the import lookup tables. They can modify a benign program’s DLL or API names to malware’s DLL or API names, and vice versa. If the detection results change after most of the modifications, they can conclude that the malware detection algorithm uses DLL or API features. Therefore, in this paper we assume that malware authors are able to know what features a malware detection algorithm uses, but know nothing about the machine learning model. Existing algorithms mainly use gradient information and hand-crafted rules to transform original samples into adversarial examples. 
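The probing idea described above can be sketched in a few lines. Everything here is a toy stand-in, not a real detector or PE editor: programs are modeled as sets of imported API names, and the hypothetical black box flags any program importing a "suspicious" API.

```python
# Toy stand-ins (assumptions, not real tools): a program is a set of API
# names; the black-box detector flags any program importing a suspicious API.
SUSPICIOUS = {"VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"}

def black_box_detect(apis):
    return bool(apis & SUSPICIOUS)  # True means "malware"

def api_feature_sensitivity(detect, benign_apis, probe_apis):
    """Feed carefully designed test cases: graft malware API names onto a
    benign program and measure how often the verdict changes."""
    base = detect(benign_apis)
    changed = sum(detect(benign_apis | {a}) != base for a in probe_apis)
    return changed / len(probe_apis)

benign = {"CreateFileA", "ReadFile"}
rate = api_feature_sensitivity(black_box_detect, benign, SUSPICIOUS)
print(rate)  # 1.0: every grafted API name flips the verdict
```

A sensitivity close to 1 would suggest the black box uses API-name features; a real attacker would of course probe an actual detector with actually modified binaries.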
This paper proposes a generative neural network based approach which takes original samples as inputs and outputs adversarial examples. The intrinsic non-linear structure of neural networks enables them to generate more complex and flexible adversarial examples to fool the target model. The learning algorithm of our proposed model is inspired by generative adversarial networks (GAN) [@goodfellow2014generative]. In GAN, a discriminative model is used to distinguish between generated samples and real samples, and a generative model is trained to make the discriminative model misclassify generated samples as real samples. GAN has shown good performance in generating realistic images [@mirza2014conditional; @denton2015deep]. The proposed model in this paper is named MalGAN; it generates adversarial examples to attack black-box malware detection algorithms. A substitute detector is trained to fit the black-box malware detection algorithm, and a generative network is used to transform malware samples into adversarial examples. Experimental results show that almost all of the adversarial examples generated by MalGAN successfully bypass the detection algorithms and that MalGAN is very flexible in fooling further defensive methods of detection algorithms. Architecture of MalGAN ====================== Overview -------- The architecture of the proposed MalGAN is shown in Figure \[fig:malgan\]. The black-box detector is an external system which adopts machine learning based malware detection algorithms. We assume that the only thing malware authors know about the black-box detector is what kind of features it uses. Malware authors do not know what machine learning algorithm it uses and do not have access to the parameters of the trained model. Malware authors are able to get the detection results of their programs from the black-box detector. The whole model contains a generator and a substitute detector, which are both feed-forward neural networks. 
The generator and the substitute detector work together to attack a machine learning based black-box malware detector. In this paper we only generate adversarial examples for binary features, because binary features are widely used by malware detection researchers and can yield high detection accuracy. Here we take API features as an example to show how to represent a program. If $M$ APIs are used as features, an $M$-dimensional feature vector is constructed for a program. If the program calls the $d$-th API, the $d$-th feature value is set to 1; otherwise it is set to 0. The main difference between this model and existing algorithms is that the adversarial examples are dynamically generated according to the feedback of the black-box detector, while most existing algorithms use static gradient based approaches to generate adversarial examples. The probability distribution of adversarial examples from MalGAN is determined by the weights of the generator. For a machine learning algorithm to be effective, the samples in the training set and the test set should follow the same or similar probability distributions. The generator, however, can change the probability distribution of the adversarial examples so that it lies far from the probability distribution of the black-box detector’s training set. In this case the generator has ample opportunity to lead the black-box detector to misclassify malware as benign. Generator --------- The generator is used to transform a malware feature vector into its adversarial version. It takes the concatenation of a malware feature vector $\boldsymbol{m}$ and a noise vector $\boldsymbol{z}$ as input. $\boldsymbol{m}$ is an $M$-dimensional binary vector. Each element of $\boldsymbol{m}$ corresponds to the presence or absence of a feature. $\boldsymbol{z}$ is a $Z$-dimensional vector, where $Z$ is a hyper-parameter. 
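The binary feature representation described above is straightforward to construct. A minimal sketch (the API names and $M=4$ are toy illustrations, not the paper's 160 actual features):

```python
# Hypothetical API-to-index table; the paper uses M = 160 system-level APIs,
# here M = 4 toy entries for illustration.
API_INDEX = {"CreateFileA": 0, "WriteFile": 1, "RegSetValueA": 2, "connect": 3}
M = len(API_INDEX)

def feature_vector(called_apis):
    """Binary M-dimensional vector: v[d] = 1 iff the program calls API d."""
    v = [0] * M
    for api in called_apis:
        if api in API_INDEX:
            v[API_INDEX[api]] = 1
    return v

print(feature_vector(["WriteFile", "connect"]))  # [0, 1, 0, 1]
```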
Each element of $\boldsymbol{z}$ is a random number sampled from a uniform distribution over the range $[0, 1)$. The purpose of $\boldsymbol{z}$ is to allow the generator to generate diverse adversarial examples from a single malware feature vector. The input vector is fed into a multi-layer feed-forward neural network with weights $\theta_g$. The output layer of this network has $M$ neurons and the activation function used by the last layer is the sigmoid, which restricts the output to the range $(0, 1)$. The output of this network is denoted as $\boldsymbol{o}$. Since malware feature values are binary, a binarization transformation is applied to $\boldsymbol{o}$ according to whether an element is greater than 0.5 or not, and this process produces a binary vector $\boldsymbol{o'}$. When generating adversarial examples for binary malware features we only consider adding irrelevant features to malware. Removing a feature from the original malware may break it. For example, if the “WriteFile" API is removed from a program, the program is unable to perform its normal writing function and the malware may break. The non-zero elements of the binary vector $\boldsymbol{o'}$ act as the irrelevant features to be added to the original malware. The final generated adversarial example can be expressed as $\boldsymbol{m'}=\boldsymbol{m}|\boldsymbol{o'}$ where “$|$" is the element-wise binary OR operation. $\boldsymbol{m'}$ is a binary vector, and therefore the gradients are unable to back-propagate from the substitute detector to the generator. A smooth function $G$ is defined to receive gradient information from the substitute detector, as shown in Formula \[equ:g\]. $$\label{equ:g} G_{\theta_g}(\boldsymbol{m},\boldsymbol{z}) = \max \left(\boldsymbol{m}, \boldsymbol{o} \right) .$$ $\max \left( \cdotp , \cdotp \right)$ represents the element-wise max operation. 
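The generator's forward pass can be sketched as follows. This is a minimal numpy illustration with toy sizes and random, untrained weights standing in for $\theta_g$; the real model is trained with the losses defined later.

```python
import numpy as np

rng = np.random.default_rng(4)
M, Z, H = 6, 3, 8  # feature dim, noise dim, hidden width (toy sizes)

# Toy generator weights theta_g (random stand-ins for trained parameters)
W1 = rng.normal(scale=0.5, size=(M + Z, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, M));     b2 = np.zeros(M)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def generator(m, z):
    # o: real-valued network output in (0, 1)
    o = sigmoid(np.maximum(np.concatenate([m, z]) @ W1 + b1, 0.0) @ W2 + b2)
    o_bin = (o > 0.5).astype(int)
    m_adv = np.maximum(m, o_bin)   # m' = m OR o': features are only added
    G = np.maximum(m, o)           # smooth surrogate used for backprop
    return m_adv, G

m = np.array([1, 0, 1, 0, 0, 0])   # original malware feature vector
z = rng.uniform(0, 1, size=Z)      # noise vector
m_adv, G = generator(m, z)
assert np.all(m_adv >= m)          # no original feature is ever removed
assert np.all(G[m == 1] == 1.0)    # gradients are blocked where m = 1
```

The two assertions capture the points in the text: the OR with $\boldsymbol{m}$ guarantees that no feature of the original malware is removed, and $G=\max(\boldsymbol{m},\boldsymbol{o})$ is constant (hence gradient-free) exactly in the dimensions where $\boldsymbol{m}$ is 1.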
If an element of $\boldsymbol{m}$ has the value 1, the corresponding result of $G$ is also 1, which is unable to back-propagate the gradients. If an element of $\boldsymbol{m}$ has the value 0, the result of $G$ is the neural network’s real-valued output in the corresponding dimension, and gradient information is able to go through. It can be seen that $\boldsymbol{m'}$ is actually the binarized version of $G_{\theta_g}(\boldsymbol{m},\boldsymbol{z})$. Substitute Detector ------------------- Since malware authors know nothing about the detailed structure of the black-box detector, the substitute detector is used to fit the black-box detector and provide gradient information to train the generator. The substitute detector is a multi-layer feed-forward neural network with weights $\theta_d$ which takes a program feature vector $\boldsymbol{x}$ as input. It classifies a program as either benign or malware. We denote the predicted probability that $\boldsymbol{x}$ is malware as $D_{\theta_d}(\boldsymbol{x})$. The training data of the substitute detector consist of adversarial malware examples from the generator and benign programs from an additional benign dataset collected by the malware authors. The ground-truth labels of the training data are not used to train the substitute detector; its goal is to fit the black-box detector. The black-box detector first classifies these training data and outputs whether each program is benign or malware, and these predicted labels are the ones used by the substitute detector. Training MalGAN =============== To train MalGAN, malware authors should first collect a malware dataset and a benign dataset. The loss function of the substitute detector is defined in Formula \[equ:d-loss\]. 
$$\label{equ:d-loss} \begin{split} {L_D} = & - {\mathbb{E}_{\boldsymbol{x}\in{BB_{Benign}}}}\log\left( {1 - D_{\theta_d}(\boldsymbol{x})} \right)\\ &- {\mathbb{E}_{\boldsymbol{x}\in{BB_{Malware}}}}\log {D_{\theta_d}\left( \boldsymbol{x} \right)}. \end{split}$$ $BB_{Benign}$ is the set of programs that are recognized as benign by the black-box detector, and $BB_{Malware}$ is the set of programs that are detected as malware by the black-box detector. To train the substitute detector, $L_D$ should be minimized with respect to the weights of the substitute detector. The loss function of the generator is defined in Formula \[equ:g-loss\]. $$\label{equ:g-loss} {L_G} = {\mathbb{E}_{\boldsymbol{m}\in{S_{Malware}},\boldsymbol{z}\sim{\boldsymbol{p}_{{\rm{uniform}}[0,1)}}}}\log {D_{\theta_d}\left( {G_{\theta_g}\left( {\boldsymbol{m},\boldsymbol{z}} \right)} \right)}.$$ $S_{Malware}$ is the actual malware dataset, not the malware set labelled by the black-box detector. $L_G$ is minimized with respect to the weights of the generator. Minimizing $L_G$ will reduce the predicted malicious probability of malware and push the substitute detector to recognize malware as benign. Since the substitute detector tries to fit the black-box detector, the training of the generator will further fool the black-box detector. The whole process of training MalGAN is shown in Algorithm \[alg:training\]. 
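The two losses can be written down directly. A minimal numpy sketch (illustrative, not the authors' implementation), assuming the detector outputs $D_{\theta_d}(\boldsymbol{x})\in(0,1)$:

```python
import numpy as np

def L_D(d_benign, d_malware):
    """Substitute-detector loss: d_benign / d_malware are D's outputs on
    programs the black box labelled benign / malware (BB_Benign, BB_Malware)."""
    return -np.mean(np.log(1 - d_benign)) - np.mean(np.log(d_malware))

def L_G(d_adversarial):
    """Generator loss: D's outputs on generated adversarial examples.
    Minimizing it pushes the predicted malicious probability toward zero."""
    return np.mean(np.log(d_adversarial))

# Sanity check: an undecided detector (D = 0.5 everywhere) gives L_D = 2 ln 2
d = np.full(10, 0.5)
print(round(L_D(d, d), 4))  # 1.3863
```

Gradient descent on $L_D$ sharpens the substitute detector toward the black box's labels, while gradient descent on $L_G$ (through the smooth function $G$) drives $D$ on adversarial examples toward 0, i.e., toward "benign".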
\[step:m\]Sample a minibatch of malware $\boldsymbol{M}$ Generate adversarial examples $\boldsymbol{M'}$ from the generator for $\boldsymbol{M}$ \[step:b\]Sample a minibatch of benign programs $\boldsymbol{B}$ Label $\boldsymbol{M'}$ and $\boldsymbol{B}$ using the black-box detector Update the substitute detector’s weights $\theta_d$ by descending along the gradient ${\nabla _{{\theta _d}}}{L_D}$ Update the generator’s weights $\theta_g$ by descending along the gradient ${\nabla _{{\theta _g}}}{L_G}$ In line \[step:m\] and line \[step:b\], different minibatch sizes are used for malware and benign programs. The ratio of $\boldsymbol{M}$’s size to $\boldsymbol{B}$’s size is the same as the ratio of the malware dataset’s size to the benign dataset’s size. Experiments =========== Experimental Setup ------------------ The dataset used in this paper was crawled from a program sharing website[^2]. We downloaded 180 thousand programs from this website, about 30% of which are malware. API features are used in this paper. A 160-dimensional binary feature vector is constructed for each program, based on 160 system-level APIs. In order to validate the transferability of adversarial examples generated by MalGAN, we tried several different machine learning algorithms for the black-box detector. The classifiers used include random forest (RF), logistic regression (LR), decision trees (DT), support vector machines (SVM), multi-layer perceptron (MLP), and a voting based ensemble of these classifiers (VOTE). We adopted two ways of splitting the dataset. The first way treats 80% of the dataset as the training set and the remaining 20% as the test set. MalGAN and the black-box detector share the same training set. MalGAN further picks out 25% of the training data as the validation set and uses the remaining training data to train the neural networks. Some black-box classifiers such as MLP also need a validation set for early stopping. 
The validation set of MalGAN cannot be used for the black-box detector, since malware authors and antivirus vendors do not communicate on how to split the dataset. The validation split for the black-box detector should be independent of MalGAN’s; MalGAN and the black-box detector should use different random seeds to pick out the validation data. The second splitting way picks out 40% of the dataset as the training set for MalGAN, picks out another 40% of the dataset as the training set for the black-box detector, and uses the remaining 20% of the dataset as the test set. In real-world scenarios the training data collected by the malware authors and the antivirus vendors cannot be the same. However, their training data will overlap with each other if they collect data from public sources. In this case the actual performance of MalGAN will lie between the performances obtained under the two splitting ways. Adam [@kingma2014adam] was chosen as the optimizer. We tuned the hyper-parameters on the validation set. 10 was chosen as the dimension of the noise vector $\boldsymbol{z}$. The generator’s layer size was set to 170-256-160, the substitute detector’s layer size was set to 160-256-1, and the learning rate 0.001 was used for both the generator and the substitute detector. The maximum number of epochs to train MalGAN was set to 100. The epoch with the lowest detection rate on the validation set is finally chosen to test the performance of MalGAN.

Experimental Results
--------------------

We first analyze the case where MalGAN and the black-box detector use the same training set. For malware detection, the true positive rate (TPR) means the detection rate of malware. After adversarial attacks, the reduction in TPR reflects how many malware samples successfully bypass the detection algorithm. TPR on the training set and the test set of original samples and adversarial examples is shown in Table \[tab:samedata\].

-------- ---------- -------- ---------- --------
             Training set         Test set
         Original   Adver.   Original   Adver.
RF       97.62      0.20     95.38      0.19
LR       92.20      0.00     92.27      0.00
DT       97.89      0.16     93.98      0.16
SVM      93.11      0.00     93.13      0.00
MLP      95.11      0.00     94.89      0.00
VOTE     97.23      0.00     95.64      0.00
-------- ---------- -------- ---------- --------

: True positive rate (in percentage) on original samples and adversarial examples when MalGAN and the black-box detector are trained on the same training set. “Adver." represents adversarial examples. \[tab:samedata\]

For random forest and decision trees, the TPRs on adversarial examples range from 0.16% to 0.20% for both the training set and the test set, while the TPRs on the original samples are all greater than 93%. When using other classifiers as the black-box detector, MalGAN is able to decrease the TPR on generated adversarial examples to zero for both the training set and the test set. That is to say, for all of these backend classifiers, the black-box detector can hardly detect any malware generated by the generator. The proposed model has successfully learned to bypass these machine learning based malware detection algorithms. The structures of logistic regression and support vector machines are very similar to neural networks, and MLP is itself a neural network. Therefore, the substitute detector is able to fit them with very high accuracy. This is why MalGAN can achieve zero TPR for these classifiers. Random forest and decision trees, by contrast, have quite different structures from neural networks, so MalGAN yields non-zero TPRs for them. The TPRs of random forest and decision trees on adversarial examples are still quite small, which means the neural network has enough capacity to represent models with quite different structures. The voting ensemble of these algorithms also achieves zero TPR. We can conclude that the classifiers with structures similar to neural networks are in the majority during voting.
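The layer sizes reported in the experimental setup (generator 170-256-160, i.e. 160 malware features plus a 10-dimensional noise vector; substitute detector 160-256-1) can be sketched with numpy. This is an illustrative forward pass with random weights, not the trained model; combining the binarized generator output with the original features by an element-wise OR (so that no original API feature is removed) is our assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

# Generator: 160 malware features + 10 noise dims -> 256 hidden -> 160 outputs.
g_w1, g_b1 = layer(170, 256)
g_w2, g_b2 = layer(256, 160)
# Substitute detector: 160 features -> 256 hidden -> 1 malicious probability.
d_w1, d_b1 = layer(160, 256)
d_w2, d_b2 = layer(256, 1)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def generate(m, z):
    """Adversarial feature vector: binarized generator output OR-ed with the
    original malware features (assumption: original features are preserved)."""
    h = np.maximum(np.concatenate([m, z]) @ g_w1 + g_b1, 0.0)  # ReLU hidden layer
    o = (sigmoid(h @ g_w2 + g_b2) > 0.5).astype(float)         # binarize outputs
    return np.maximum(m, o)                                    # element-wise OR

def substitute_detect(x):
    h = np.maximum(x @ d_w1 + d_b1, 0.0)
    return sigmoid(h @ d_w2 + d_b2)[0]

m = rng.integers(0, 2, 160).astype(float)
adv = generate(m, rng.random(10))
```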
The convergence curve of TPR on the training set and the validation set during the training process of MalGAN is shown in Figure \[fig:rftpr\]. The black-box detector used here is random forest, since random forest performs very well in Table \[tab:samedata\]. TPR converges to about zero near the 40th epoch, but the convergence curve oscillates rather than decreasing smoothly. This curve reflects the fact that the training of GANs is usually unstable. How to stabilize the training of GANs has attracted the attention of many researchers [@radford2015unsupervised; @salimans2016improved; @arjovsky2017towards]. Now we analyze the results when MalGAN and the black-box detector are trained on different training sets. Fitting a black-box detector trained on a different dataset is more difficult for the substitute detector. The experimental results are shown in Table \[tab:differentdatatpr\].

-------- ---------- -------- ---------- --------
             Training set         Test set
         Original   Adver.   Original   Adver.
RF       95.10      0.71     94.95      0.80
LR       91.58      0.00     91.81      0.01
DT       91.92      2.18     91.97      2.11
SVM      92.50      0.00     92.78      0.00
MLP      94.32      0.00     94.40      0.00
VOTE     94.30      0.00     94.45      0.00
-------- ---------- -------- ---------- --------

: True positive rate (in percentage) on original samples and adversarial examples when MalGAN and the black-box detector are trained on different training sets. “Adver." represents adversarial examples. \[tab:differentdatatpr\]

For SVM, MLP and VOTE, TPR reaches zero, and the TPR of LR is nearly zero. These results are very similar to Table \[tab:samedata\]. The TPRs of random forest and decision trees on adversarial examples become higher compared with the case where MalGAN and the black-box detector use the same training data. For decision trees the TPRs rise to 2.18% and 2.11% on the training set and the test set respectively. However, 2% is still a very small number, and the black-box detector will still fail to detect most of the adversarial malware examples.
It can be concluded that MalGAN is still able to fool the black-box detector even when it is trained on a different training set.

Comparison with the Gradient based Algorithm to Generate Adversarial Examples
-----------------------------------------------------------------------------

Existing algorithms for generating adversarial examples mainly target images. The difference between images and malware is that image features are continuous while malware features are binary. Grosse et al. modified the traditional gradient based algorithm to generate binary adversarial malware examples [@grosse2016adversarial]. They did not regard the malware detection algorithm as a black-box system; they assumed that malware authors have full access to the architecture and the weights of the neural network based malware detection model. The misclassification rates of their adversarial examples range from 40% to 84% under different hyper-parameters. This gradient based approach under the white-box assumption is unable to generate adversarial examples with zero TPR, while MalGAN produces nearly zero TPR under the harder black-box assumption. Their algorithm uses an iterative approach to generate adversarial malware examples. At each iteration the algorithm finds the feature with the maximum likelihood of changing the malware’s label from malware to benign. The algorithm modifies one feature at each iteration, until the malware is successfully classified as a benign program or there are no features left to modify. We tried to migrate this algorithm to attack a random forest based black-box detection algorithm. A substitute neural network is trained to fit the black-box random forest. Adversarial malware examples are generated based on the gradient information of the substitute neural network. TPR on the adversarial examples over the iterative process is shown in Figure \[fig:grad\]. Please note that at each iteration not all of the malware samples are modified.
If a malware sample has already been classified as a benign program at a previous iteration, or there are no modifiable features, the algorithm leaves the malware sample unchanged at this iteration. On the training set and the test set, TPR converges to 93.52% and 90.96% respectively. In this case the black-box random forest is able to detect most of the adversarial examples. The substitute neural network is trained on the original training set, but after several iterations the probability distribution of the adversarial examples becomes quite different from that of the original training set. Therefore, the substitute neural network cannot approximate the black-box random forest well on the adversarial examples. In this case the adversarial examples generated from the substitute neural network are unable to fool the black-box random forest. In order to fit the black-box random forest more accurately on the adversarial examples, we tried retraining the substitute neural network on the adversarial examples. At each iteration, the currently generated adversarial examples from the whole training set are used to retrain the substitute neural network. As shown in Figure \[fig:grad\], the retraining approach makes TPR converge to 46.18% on the training set, which means the black-box random forest can still detect about half of the adversarial examples. However, the retrained model is unable to generalize to the test set, since the TPR on the test set converges to 90.12%. The odd probability distribution of these adversarial examples limits the generalization ability of the substitute neural network. MalGAN uses a generative network to transform original samples into adversarial samples. The neural network has enough representation ability to perform complex transformations, making MalGAN able to achieve nearly zero TPR on both the training set and the test set.
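The iterative gradient based procedure just described can be sketched as follows; `score` and `grad` stand in for the substitute network's predicted malicious probability and its gradient with respect to the input, and the toy linear substitute at the bottom is purely illustrative:

```python
import numpy as np

def gradient_attack(x, score, grad, max_iter=160, threshold=0.5):
    """At each iteration, set to 1 the zero-valued feature whose gradient most
    decreases the malicious score, until the sample is scored benign or no
    modifiable feature remains (sketch of the approach of Grosse et al.)."""
    x = x.copy()
    for _ in range(max_iter):
        if score(x) < threshold:
            break                          # already classified as benign
        g = grad(x)
        candidates = np.where(x == 0)[0]   # only features that can be added
        if candidates.size == 0:
            break
        best = candidates[np.argmin(g[candidates])]
        if g[best] >= 0:
            break                          # no remaining feature lowers the score
        x[best] = 1.0
    return x

# Toy linear "substitute": malicious score sigmoid(w . x); its input gradient
# is w * s * (1 - s), which shares the sign of w.
w = np.array([2.0, -3.0, 1.0, -1.5])
score = lambda x: 1.0 / (1.0 + np.exp(-w @ x))
grad = lambda x: w * score(x) * (1.0 - score(x))
adv = gradient_attack(np.array([1.0, 0.0, 1.0, 0.0]), score, grad)
```

Because features are only ever set (never cleared), the original malware features are preserved, matching the binary-feature constraint discussed above.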
The representation ability of the gradient based approach, by contrast, is too limited to generate high-quality adversarial examples.

Retraining the Black-Box Detector
---------------------------------

Several defensive algorithms have been proposed to deal with adversarial examples. Gu et al. proposed to use auto-encoders to map adversarial samples to clean input data [@gu2014towards]. An algorithm named defensive distillation was proposed by Papernot et al. to weaken the effectiveness of adversarial perturbations [@papernot2016distillation]. Li et al. found that adversarial retraining can boost the robustness of machine learning algorithms [@li2016general]. Chen et al. compared these defensive algorithms and concluded that retraining is a very effective way to defend against adversarial examples, and is robust even against repeated attacks [@chen2016evaluation]. In this section we analyze the performance of MalGAN under the retraining based defensive approach. If antivirus vendors collect enough adversarial malware examples, they can retrain the black-box detector on these adversarial examples in order to learn their patterns and detect them. Here we only use random forest as the black-box detector due to its good performance. After retraining, the black-box detector is able to detect all adversarial examples, as shown in the middle column of Table \[tab:retraintpr\].

-------------- -------------------------- ------------------------
                Before Retraining MalGAN   After Retraining MalGAN
Training set    100                        0
Test set        100                        0
-------------- -------------------------- ------------------------

: True positive rate (in percentage) on the adversarial examples after the black-box detector is retrained. \[tab:retraintpr\]

However, once antivirus vendors release the updated black-box detector publicly, malware authors will be able to get a copy of it and retrain MalGAN to attack the new black-box detector. After this process the black-box detector can hardly detect any malware again, as shown in the last column of Table \[tab:retraintpr\].
We found that reducing TPR from 100% to 0% can be done within one epoch when retraining MalGAN. We alternated between retraining the black-box detector and retraining MalGAN ten times. The results are the same as in Table \[tab:retraintpr\] for all ten rounds. To retrain the black-box detector, antivirus vendors have to collect enough adversarial examples. Collecting a large number of malware samples and labelling them is a long process. Adversarial malware examples therefore have enough time to propagate before the black-box detector is retrained and updated. Once the black-box detector is updated, malware authors will attack it immediately by retraining MalGAN, and our experiments showed that retraining takes much less time than the first-time training. After retraining MalGAN, new adversarial examples remain undetected. This dynamic adversarial process lands antivirus vendors in a passive position. Machine learning based malware detection algorithms can hardly work in this case.

Conclusions
===========

This paper proposed a novel algorithm named MalGAN to generate adversarial examples against a machine learning based black-box malware detector. A neural network based substitute detector is used to fit the black-box detector. A generator is trained to generate adversarial examples which are able to fool the substitute detector. Experimental results showed that the generated adversarial examples are able to effectively bypass the black-box detector. The probability distribution of adversarial examples is controlled by the weights of the generator. Malware authors are able to change this probability distribution frequently by retraining MalGAN, so that the black-box detector cannot keep up with it and is unable to learn stable patterns from it. Once the black-box detector is updated, malware authors can immediately crack it again. This process makes machine learning based malware detection algorithms unable to work.
Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by the Natural Science Foundation of China (NSFC) under grant no. 61375119 and the Beijing Natural Science Foundation under grant no. 4162029, and partially supported by National Key Basic Research Development Plan (973 Plan) Project of China under grant no. 2015CB352302. [^1]: Prof. Ying Tan is the corresponding author. [^2]: https://malwr.com/
---
author:
- 'Marc Boullé, Romain Guigourès and Fabrice Rossi'
bibliography:
- 'author.bib'
title: Nonparametric Hierarchical Clustering of Functional Data
---

Introduction
============

In functional data analysis (FDA [@RamsayEtAl05]), observations are functions (or curves). Each function is sampled at possibly different evaluation points, leading to variable-length sets of pairs (evaluation point, function value). Functional data arise in many domains, such as daily records of precipitation at a weather station, or hardware monitoring where each curve is a time series related to a physical quantity recorded at a specified sampling rate. Exploratory analysis methods for large functional data sets are needed in practical applications such as electric consumption monitoring [@HebrailEtAl10]. They reduce data complexity by combining clustering techniques with function approximation methods, representing a functional data set by a small set of piecewise constant prototypes. In this type of approach, both the number of prototypes and the number of segments (constant parts of the prototypes) are under user control. On the positive side, this limits the risk of cognitive overload, as the user can ask for a low complexity representation. Unfortunately, this can also induce under- or over-fitting of the model to the data; additionally, the number of prototypes and the number of segments both need to be tuned, and although they can be adjusted independently in [@HebrailEtAl10], this increases the risk of over- or under-fitting. Other parametric approaches for function clustering and/or function approximation can be found in e.g. [@CadezEtAl00; @chamroukhi_et_al_neurocomputing2010; @GaffneySmythNips04; @RamsayEtAl05]. All those methods make (sometimes implicit) assumptions on the distribution of the functions and/or on the measurement noise. Nonparametric functional approaches (e.g.
[@FerratyEtAl06]) have been proposed, in particular in [@GasserEtAl98; @DelaigleEtAl10], where the problem of density estimation of a random function is considered. However, those models do not directly tackle the summarizing problem outlined in [@HebrailEtAl10] and recalled above. Nonparametric Bayesian approaches based on the Dirichlet process have also been applied to the problem of curve clustering. They aim at inferring a clustering distribution on an infinite mixture model [@nguyen; @Teh2010a]. The clustering model is obtained by sampling the posterior distribution using Bayesian inference methods. The present paper proposes a new nonparametric exploratory method for functional data, based on data grid models [@BoulleHOPR10]. The method makes assumptions neither on the functional data distribution nor on the measurement noise. Given a set of sampled functions defined on a common interval $[a,b]$, with values in $[u,v]$, the method outputs a clustering of the functions associated with partitions of $[a,b]$ and $[u,v]$ into sub-intervals, which can be used to summarize the values taken by the functions in each cluster, leading to results comparable to those of [@HebrailEtAl10]. Both approaches are compared in this article. The method has no parameters and obtains in a fully automated way an optimal summary of the functional data set, using a Bayesian approach with data dependent priors. In some cases, especially for large scale data sets, the optimal number of clusters and of sub-intervals may be too large for a user to interpret all the discovered fine grained patterns in a reasonable time. Therefore, the method is complemented with a post-processing step which offers the user a way to decrease the number of clusters in a greedy optimal way. The number of sub-intervals, that is, the level of detail kept in the functions, is automatically adjusted in an optimal way when the number of clusters is reduced.
The post-processing technique consists in successively merging the clusters in the least costly way, from the finest clustering model down to a single cluster containing all the curves. It turns out that the cost of merging two clusters is a weighted sum of Kullback-Leibler divergences from the merged clusters to the created cluster, which can be interpreted as a dissimilarity measure between the two clusters that have been merged. Thus, the post-processing technique can be considered as an agglomerative hierarchical clustering [@HastieEtAl01]. Decision-making tools can be plotted using a dendrogram and a Pareto chart of the criterion value as a function of the number of clusters. The rest of the paper is organized as follows. Section \[sec:FunctionApproximationBasedMethods\] introduces the problem of curve clustering and relates our method to alternative approaches. Next, in Section \[sec:MODL\], the clustering method based on joint density estimation is introduced. Then, the post-processing technique is detailed in Section \[sec:AHC\]. In Section \[sec:results\] the results of experiments on an artificial data set and on a power consumption data set are shown. Finally, Section \[sec:Conclusion\] gives a summary.

Functional data exploratory analysis {#sec:FunctionApproximationBasedMethods}
====================================

In this section, we describe in formal terms the data under analysis and the goals of the analysis. Let $\mathcal{C}$ be a collection of $n$ functions or curves, $c_i, 1 \leq i \leq n$, defined from $[a,b]$ to $[u,v]$, two real intervals. Each curve is sampled at $m_i$ values in $[a,b]$, leading to a series of observations denoted $c_i = (x_{ij}, y_{ij})_{j=1}^{m_i}$, with $y_{ij}=c_i(x_{ij})$. As in all data exploratory settings, our main goal is to reduce the complexity of the data set and to discover patterns in the data.
We are therefore interested in finding clusters of similar functions as well as in finding functional patterns, that is systematic and simple regular shapes in individual functions. In [@chamroukhi_et_al_neurocomputing2010; @HebrailEtAl10] functional patterns are simple functions such as interval indicator functions or polynomial functions of low degree: a function is approximated by a linear combination of such simple functions in [@HebrailEtAl10] or generated by a logistic switching process based on low degree polynomial functions in [@chamroukhi_et_al_neurocomputing2010]. B-splines could also be used as in [@AbrahamEtAl2003] but with no simplification virtues. Let us denote $k_C$ the number of curve clusters. Given $k_C$ classes $\mathcal{F}_k$ of “simple functions” used to discover functional patterns (e.g., piecewise constant functions with $P$ segments), the method proposed in [@HebrailEtAl10] finds a partition $(\mathcal{C}_k)_{k=1}^{k_C}$ of $\mathcal{C}$ and $k_C$ simple functions $(f_k\in \mathcal{F}_k)_{k=1}^{k_C}$ which aim at minimizing $$\sum_{k=1}^{k_C}\sum_{c_i\in \mathcal{C}_k}\sum_{j=1}^{m_i}\left(y_{ij}-f_k(x_{ij})\right)^2, \label{eq:fda:model:based}$$ which corresponds to a form of K-means constrained by the choice of the segments, in the functional space $L^2$. The approach of [@chamroukhi_et_al_neurocomputing2010] optimizes a similar criterion obtained from a maximum likelihood estimation of the parameters of the functional generative model. Given a specific choice of the simple function classes, the functional prototypes $(f_k)_{k=1}^{k_C}$ obtained by [@chamroukhi_et_al_neurocomputing2010; @HebrailEtAl10] induce $k_C$ partitions of $[a,b]$ into sub-intervals on which functions are roughly constant. Those partitions are the main tool used by the analyst to understand the functional pattern inside each cluster. 
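The criterion above can be evaluated for any given partition and set of piecewise constant prototypes; a minimal sketch, where representing each prototype by its breakpoints and levels is our own choice:

```python
from bisect import bisect_right

def piecewise_constant(breaks, levels):
    """Prototype f_k: levels[i] applies from breaks[i] on; the last level
    extends to the right end of the interval."""
    def f(x):
        i = bisect_right(breaks, x) - 1
        return levels[max(0, min(i, len(levels) - 1))]
    return f

def clustering_cost(clusters, prototypes):
    """Criterion of Eq. (fda:model:based): sum over clusters k, curves c_i in
    C_k and samples j of (y_ij - f_k(x_ij))^2."""
    total = 0.0
    for curves, f in zip(clusters, prototypes):
        for xs, ys in curves:          # each curve: its sampled (x, y) pairs
            total += sum((y - f(x)) ** 2 for x, y in zip(xs, ys))
    return total

# One cluster with one curve; the prototype is 0 on [0, 1) and 1 on [1, 2].
f = piecewise_constant([0.0, 1.0], [0.0, 1.0])
cost = clustering_cost([[([0.5, 1.5], [0.1, 0.9])]], [f])  # 0.1^2 + (-0.1)^2
```

A K-means style algorithm such as that of [@HebrailEtAl10] alternates between reassigning curves to the nearest prototype and refitting the prototypes, each step decreasing this cost.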
The general abstract goal of functional data exploration is therefore to build clusters of similar functions associated with sub-intervals of the input space of the functions which summarize the behavior of the functions. Bayesian approaches, as described in [@nguyen], assume that the collection of curve realizations can be represented by a set of canonical curves drawn from a Gaussian process and organized into clusters. The clusters are described using a label function that is a realization of a multinomial distribution with a Dirichlet prior. Whereas parametric models using a fixed and finite number of parameters may suffer from over- or under-fitting, Bayesian nonparametric approaches were proposed to overcome these issues. By using a model with an unbounded complexity, under-fitting is mitigated, while the Bayesian approach of computing or approximating the full posterior over parameters lessens over-fitting [@Teh2010a]. Finally, the parameter distribution is obtained by sampling the posterior distribution using Bayesian inference methods such as Markov Chain Monte Carlo [@Neal] or Variational Inference [@Jordan]. A post-treatment is then required to choose the clustering parameters from their distribution. The Dirichlet process prior requires two parameters: a concentration parameter and a base distribution. For a concentration parameter $\alpha$ and a data set containing $n$ curves, the expected number of clusters $\bar{k}$ is $\bar{k}=\alpha \log(n)$ [@Wallach]. Hence, the concentration parameter has a significant impact on the obtained number of clusters. For that matter, according to [@Vogt], one should not expect to be able to reliably estimate this parameter. Our method, named MODL and detailed in Section \[sec:MODL\], is comparable to approaches based on Dirichlet processes (DP) insofar as all estimate a posterior probability based on the likelihood and a prior distribution of the parameters.
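The dependence of the expected cluster count on the concentration parameter, $\bar{k}=\alpha \log(n)$, is easy to illustrate:

```python
import math

def expected_clusters(alpha, n):
    """Expected number of clusters under a Dirichlet process prior with
    concentration parameter alpha, for n observations: k_bar = alpha * log(n)."""
    return alpha * math.log(n)

# Doubling alpha doubles the expected number of clusters, showing how strongly
# this hyperparameter drives the clustering granularity.
expected_clusters(1.0, 1000)   # ~ 6.9
expected_clusters(2.0, 1000)   # ~ 13.8
```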
The methods are also nonparametric with an unbounded complexity, since the number of parameters is not fixed and grows with the amount of available data. Nevertheless, MODL is intrinsically different from the DP based methods. First, approaches based on DP are Bayesian and yield a distribution of clusterings, the final clustering being selected using a post-treatment such as choosing the mode of the posterior distribution or studying the cluster co-occurrence matrix. By contrast, MODL is a MAP approach: the most probable model is directly obtained using optimization algorithms. Secondly, MODL is not applied on the values but on the order statistics of the sample. A first benefit is robustness to outliers and scaling problems. By using order statistics, the retrieved models are invariant under any monotonic transformation of the input data, which makes sense since the method aims at modeling the correlations between the variables, not the values directly. Furthermore, DP based methods consider distributions of the parameters that lie in $\mathbb{R}$ or any continuous space, whose measure is consequently infinite. As for MODL, the correlations between the variables are modeled on a sample. In the case of curve clustering, these variables are the location $X$, the corresponding curve realization $Y$, and the curve label $C$. This makes it possible to work on a finite discrete space and thus to simplify the model computation, which mainly comes down to counting problems. Finally, the MODL approach is clearly data dependent. In a first phase, the data sample is used cautiously to build the model space and the prior: only the size of the sample and the values (or empirical ranks) of each variable taken independently are exploited. The correlation model is inferred in a second phase, using a standard MAP approach. Hence, proving the consistency of this data dependent modeling technique is still an open issue.
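The invariance under monotonic transformations follows from working on ranks rather than values; a small self-contained demonstration:

```python
import math

def ranks(values):
    """Rank of each value within the sample (0 = smallest)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

# A strictly increasing map (here exp) changes the values but not their ranks,
# so a rank-based model sees exactly the same data.
xs = [3.2, -1.0, 0.5, 10.0]
assert ranks(xs) == ranks([math.exp(x) for x in xs])
```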
Experimental results, with both reliable and fine grained retrieved patterns, show the relevance of the approach.

MODL Approach for Functional Data Analysis {#sec:MODL}
==========================================

In this section, we summarize the principles of data grid models, detailed in [@BoulleHOPR10], and apply this approach to functional data.

Data Grid Models
----------------

Data grid models [@BoulleHOPR10] have been introduced for the data preparation phase of the data mining process [@ChapmanEtAl00], which is a key phase, both time consuming and critical for the quality of the results. They make it possible to automatically, rapidly and reliably evaluate the class conditional probability of any subset of variables in supervised learning and the joint probability in unsupervised learning. Data grid models are based on a partitioning of each variable into intervals in the numerical case and into groups of values in the categorical case. The cross-product of the univariate partitions forms a multivariate partition of the representation space into a set of cells. This multivariate partition, called a data grid, is a piecewise constant nonparametric estimator of the conditional or joint probability. The best data grid is searched for using a Bayesian model selection approach and efficient combinatorial algorithms.

Application to Functional Data
------------------------------

We propose to represent the collection $\mathcal{C}$ of $n$ curves as a unique data set with $m=\sum_{i=1}^n m_i$ observations and three variables: $C$ to store the curve identifier, and $X$ and $Y$ for the point coordinates. We can apply data grid models in the unsupervised setting to estimate the joint density $p(C, X, Y)$ between the three variables. The curve variable $C$ is grouped into clusters of curves, whereas each point dimension $X$ and $Y$ is discretized into intervals.
The cross-product of these univariate partitions forms a data grid of cells, with a piecewise constant joint density estimate per triplet of curve cluster, $X$ interval and $Y$ interval. As $p(X, Y | C) = \frac {p(C, X, Y)} {p(C)}$, this can also be interpreted as an estimator of the joint density between the point dimensions, which is constant per cluster of curves. This means that curves that are similar with respect to the joint density of their point dimensions will tend to be grouped into the same clusters. It is noteworthy that the $(X,Y)$ discretization is optimized globally for the set of all curves and not locally per cluster as in [@HebrailEtAl10]. We introduce in Definition \[FunctionalDataClusteringModel\] a family of functional data clustering models, based on clusters of curves, intervals for each point dimension, and a multinomial distribution of all the points on the cells of the resulting data grid.

\[FunctionalDataClusteringModel\] A functional data clustering model is defined by:

- a number of clusters of curves,
- a number of intervals for each point dimension,
- the repartition of the curves into the clusters of curves,
- the distribution of the points of the functional data set on the cells of the data grid,
- the distribution of the points belonging to each cluster on the curves of the cluster.

We use the following notations:

- $\mathcal{C}$: collection of curves, size $n=|\mathcal{C}|$.
- $\mathcal{P}$: point data set containing all points of $\mathcal{C}$ using 3 variables, size $m=|\mathcal{P}|$.
- $C$: curve variable
- $X, Y$: variables for the point dimensions
- $k_C$: number of clusters of curves
- $k_X, k_Y$: number of intervals for variables $X$ and $Y$
- $k = k_C k_X k_Y$: number of cells of the data grid
- $n_{i_C}$: number of curves in cluster $i_C$
- $m_i$: number of points for curve $i$
- $m_{i_C}$: cumulated number of points for curves of cluster $i_C$
- $m_{j_X}$, $m_{j_Y}$: cumulated number of points for intervals $j_X$ of $X$ and $j_Y$ of $Y$
- $m_{i_C j_X j_Y}$: cumulated number of points for cell $(i_C, j_X, j_Y)$ of the data grid

We assume that the numbers of curves $n$ and points $m$ are known in advance and we aim at modeling the joint distribution of the $m$ points on the curve and point dimensions. In order to select the best model, we apply a Bayesian approach, using the prior distribution on the model parameters described in Definition \[FunctionalDataClusteringPrior\].

\[FunctionalDataClusteringPrior\] The prior for the parameters of a functional data clustering model is chosen hierarchically and uniformly at each level:

- the numbers of clusters $k_C$ and of intervals $k_X, k_Y$ are independent of each other, and uniformly distributed between $1$ and $n$ for the curves, and between $1$ and $m$ for the point dimensions,
- for a given number $k_C$ of clusters, all partitions of the $n$ curves into $k_C$ clusters are equiprobable,
- for a model of size $(k_C, k_X, k_Y)$, all distributions of the $m$ points on the $k=k_C k_X k_Y$ cells of the data grid are equiprobable,
- for a given cluster of curves, all distributions of the points in the cluster on the curves of the cluster are equiprobable,
- for a given interval of $X$ (resp. $Y$), all distributions of the ranks of the $X$ (resp. $Y$) values of points are equiprobable.
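The cell counts $m_{i_C j_X j_Y}$ underlying such a model can be illustrated with a small sketch. Note that MODL optimizes the discretization, whereas the equal-frequency choice below (and the helper names) are our own simplification for illustration:

```python
import numpy as np

def equal_freq_bounds(values, k):
    """Inner interval bounds for an equal-frequency discretization into k
    intervals (the two outer intervals are unbounded)."""
    qs = np.quantile(values, np.linspace(0, 1, k + 1))
    return qs[1:-1]

def grid_density(cluster_ids, x, y, kx, ky):
    """Normalized cell counts m_{i_C j_X j_Y} / m: a piecewise constant
    estimate of the joint density p(C, X, Y) on the data grid."""
    jx = np.digitize(x, equal_freq_bounds(x, kx))   # X interval index of each point
    jy = np.digitize(y, equal_freq_bounds(y, ky))   # Y interval index of each point
    kc = cluster_ids.max() + 1
    counts = np.zeros((kc, kx, ky))
    np.add.at(counts, (cluster_ids, jx, jy), 1)     # accumulate cell counts
    return counts / len(x)

rng = np.random.default_rng(1)
p = grid_density(rng.integers(0, 2, 500), rng.random(500), rng.random(500), 4, 4)
```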
Taking the negative log of the posterior probability of a model given the data, this provides the evaluation criterion given in Theorem \[FunctionalDataClusteringTheorem\], which specializes the general unsupervised data grid model criterion [@BoulleHOPR10] to functional data clustering.

\[FunctionalDataClusteringTheorem\] A functional data clustering model $M$ distributed according to a uniform hierarchical prior is Bayes optimal if the value of the following criterion is minimal
$$\label{eq:crit} \begin{split} c(M) =& -\log P(M) - \log P(\mathcal{P}|M) \\ =& \log n + 2 \log m + \log B(n, k_C)\\ &+ \log \binom {m + k - 1} {k - 1} + \sum_{i_C=1}^{k_C} {\log \binom {m_{i_C} + n_{i_C} - 1} {n_{i_C} - 1}}\\ &+ \log m! - \sum_{i_C = 1}^{k_C} {\sum_{j_X = 1}^{k_X} {\sum_{j_Y = 1}^{k_Y} {\log m_{i_C j_X j_Y}!}}}\\ &+ \sum_{i_C = 1}^{k_C} {\log m_{i_C}!} - \sum_{i = 1}^{n} {\log m_i!}\\ &+ \sum_{j_X = 1}^{k_X} {\log m_{j_X}!} + \sum_{j_Y = 1}^{k_Y} {\log m_{j_Y}!} \end{split}$$
$B(n,k)$ is the number of divisions of $n$ elements into $k$ subsets (possibly with empty subsets). When $n=k$, $B(n,k)$ is the Bell number. In the general case, $B(n,k)$ can be written as $B(n,k) = \sum_{i=1}^k {S(n,i)}$, where $S(n,i)$ is the Stirling number of the second kind [@AbramowitzEtAl70], which stands for the number of ways of partitioning a set of $n$ elements into $i$ nonempty subsets. As negative logs of probabilities are coding lengths, the model selection technique is similar to a minimum description length approach [@Rissanen78]. The first line in Formula \[eq:crit\] relates to the prior distribution of the numbers of clusters $k_C$ and of intervals $k_X$ and $k_Y$, and to the specification of the partition of the curves into clusters.
The second line represents the specification of the parameters of the multinomial distribution of the $m$ points over the $k$ cells of the data grid, followed by the specification of the multinomial distribution of the points of each cluster over the curves of the cluster. The third line stands for the likelihood of the distribution of the points over the cells, by means of a multinomial term. The last line corresponds to the likelihood of the distribution of the points of each cluster over the curves of the cluster, followed by the likelihood of the distribution of the ranks of the $X$ values (resp. $Y$ values) within each interval.

Optimization Algorithm
----------------------

The optimization heuristics have practical scaling properties, with O$(m)$ space complexity and O$(m \sqrt m \log m)$ time complexity. The main heuristic is a greedy bottom-up heuristic, which starts with a fine-grained model, with a few points per interval on $X$ and $Y$ and a few curves per cluster, considers all the merges between clusters and between adjacent intervals, and performs the best merge if the criterion decreases after the merge, as detailed in Algorithm \[alg:gbum\]. The algorithm takes an initial solution $M$ as input and returns an improved solution $M^*$ satisfying $c(M^*) \leq c(M)$: starting from $M^* \gets M$, it repeatedly evaluates candidate merges $M^+ \gets M^* + u$ and keeps $M^* \gets M^+$ whenever the criterion decreases. This heuristic is enhanced with post-optimization steps (moves of interval bounds and of curves across clusters), and embedded into the variable neighborhood search (VNS) meta-heuristic [@HansenEtAl01], which mainly benefits from multiple runs of the algorithm with different initial random solutions. The optimization algorithms summarized above have been extensively evaluated in [@BoulleHOPR10], using a large variety of artificial data sets where the true data distribution is known. Overall, the method is both resilient to noise and able to detect complex fine-grained patterns.
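The greedy bottom-up loop can be sketched as follows. This is a naive illustration of the merge step only, without the paper's sublinear bookkeeping, post-optimizations or VNS wrapper, and with an arbitrary `cost` callable standing in for $c(M)$:

```python
def greedy_bottom_up(clusters, cost):
    """Evaluate the global criterion `cost` on a partition (a list of
    clusters, each a list of items) and perform the best pairwise merge
    as long as the criterion decreases."""
    best_cost = cost(clusters)
    improved = True
    while improved and len(clusters) > 1:
        improved = False
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Partition obtained by merging clusters a and b.
                merged = (clusters[:a] + clusters[a + 1:b]
                          + clusters[b + 1:] + [clusters[a] + clusters[b]])
                c = cost(merged)
                if c < best_cost:
                    best_cost, best, improved = c, merged, True
        if improved:
            clusters = best
    return clusters, best_cost
```

With a toy cost that penalizes clusters mixing two ground-truth labels, the loop merges same-label items and then stops, mirroring the "perform the best merge if the criterion decreases" rule.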
It is able to approximate any data distribution, provided that there are enough instances in the training data sample.

Agglomerative Hierarchical Clustering {#sec:AHC}
=====================================

The model obtained by the method detailed in Section \[sec:MODL\] is optimal according to the criterion introduced in Theorem \[FunctionalDataClusteringTheorem\]. This parameter-free solution makes it possible to track fine and relevant patterns without over-fitting, and provides a suitable initial solution for an exploratory analysis. Still, this initial solution may be too fine for an easy interpretation. We propose here a post-processing technique which aims at simplifying the clustering while minimizing the loss of information. This makes it possible to explore the retrieved patterns at any granularity, up to the finest model, without any user parameter. We first study the impact of a merge on the criterion, then focus on the properties of the proposed dissimilarity measure, and finally describe the agglomerative hierarchical clustering heuristic. It is noteworthy that the same modeling criterion is optimized both for building the initial clustering and for aggregating the clusters in the agglomerative heuristic.

The Cost of Merging two Clusters
--------------------------------

Let $M_{1_C,2_C}$ and $M_{\gamma_C}$ be two clustering models, the first one before the merge of the clusters $1_C$ and $2_C$, and the second one after the merge, which yields a new cluster $\gamma_C=1_C\cup2_C$. We denote by $\Delta c(1_C,2_C)$ the cost of the merge of $1_C$ and $2_C$, defined as: $$\begin{aligned} \Delta c(1_C,2_C) = c(M_{\gamma_C}) - c(M_{1_C,2_C})\end{aligned}$$ It results from Theorem \[FunctionalDataClusteringTheorem\] that the clustering model $M_{\gamma_C}$ is a less probable MODL explanation of the data set $\mathcal{P}$ than $M_{1_C,2_C}$, by a factor based on $\Delta c(1_C,2_C)$.
$$\begin{aligned} p(M_{\gamma_C}|\mathcal{P})=e^{-\Delta c(1_C,2_C)}p(M_{1_C,2_C}|\mathcal{P}) \label{eq:varcrit1}\end{aligned}$$ We focus on the asymptotic behavior of $\Delta c(1_C,2_C)$ when the number of data points $m$ tends to infinity. The criterion variation is asymptotically equal to a weighted sum of the Kullback-Leibler divergences from the clusters $1_C$ and $2_C$ to $\gamma_C$, estimated on the $k_X \times k_Y$ bivariate discretization. $$\label{eq:varcrit2} \begin{split} \Delta c(1_C,2_C) =& m_{1_C}D_{KL}(1_C||\gamma_C)+m_{2_C}D_{KL}(2_C||\gamma_C)+O(\log(m_{\gamma_C})) \end{split}$$ The full proof is left out for brevity. In outline, the computation of $\Delta c(1_C,2_C)$ makes some prior terms (the first two lines of Formula \[eq:crit\]) vanish and bounds the others by $O(\log(m_{\gamma_C}))$ terms. Then, using the Stirling approximation $\log(m!)=m(\log(m)-1)+O(\log(m))$, the variation of the likelihood (the last two lines of Formula \[eq:crit\]) can be rewritten as a weighted sum of Kullback-Leibler divergences.

The Cost of a Merge as a Dissimilarity Measure
----------------------------------------------

As the criterion defined in Theorem 1 is used to find the best model, we naturally choose it to evaluate the quality of the clustering. When two clusters are merged, the criterion increases, and the resulting variation can be viewed as a dissimilarity between the two clusters. When the number of points tends to infinity, this dissimilarity measure asymptotically converges to a weighted sum of Kullback-Leibler divergences (see Theorem 2). This divergence is a non-symmetric measure of the difference between two distributions [@CoverEtAl91]. The variation of the criterion $\Delta c$ has some interesting properties. First, it is symmetrical: $\Delta c(1_C,2_C)=\Delta c(2_C,1_C)$. Second, $\Delta c(1_C,2_C)$ is asymptotically non-negative, since the Kullback-Leibler divergence is non-negative as well [@CoverEtAl91].
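The leading term of the merge cost is easy to evaluate numerically when the cluster distributions over the $k_X \times k_Y$ cells are given as probability vectors. A minimal sketch (illustrative, not the paper's implementation):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D_KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def merge_cost_asymptotic(m1, p1, m2, p2):
    """Leading term of Delta c(1_C, 2_C):
    m1 * D_KL(p1 || p) + m2 * D_KL(p2 || p),
    where p is the cell distribution of the merged cluster gamma_C."""
    m = m1 + m2
    p = [(m1 * a + m2 * b) / m for a, b in zip(p1, p2)]
    return m1 * kl(p1, p) + m2 * kl(p2, p)
```

The cost vanishes for identical distributions and is positive otherwise, consistent with the non-negativity property stated above.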
The weights have an important impact on the merge in the case of unbalanced clusters. A trade-off is achieved between merging two balanced clusters with similar distributions and merging two different clusters, one of them having a tiny weight. The best merge is the one with the least loss of information, as $c(M)$ can be interpreted as the total coding length of the clustering model plus that of the data points given the model.

The Agglomerative Hierarchical Classification
---------------------------------------------

The principle of agglomerative clustering is to merge the clusters successively in order to build a tree called a dendrogram. The usual dissimilarity measures for the dendrogram are based on Euclidean distances (Single-Linkage, Complete-Linkage, Ward, ...). Here we build a dendrogram using the criterion variation $\Delta c$. Owing to the properties of this dissimilarity measure, the resulting dendrogram is well-balanced. Indeed, given the trade-off between merging similarly distributed clusters and merging tiny clusters with large ones, we obtain clusters of comparable sizes at each level of the hierarchy. Let us notice that during the agglomerative process, the best merge can relate either to the cluster variable $C$ or to the point dimensions $X$ or $Y$. Therefore, the granularity of the representation of the curves coarsens as the number of clusters decreases. As a consequence, the dissimilarity measure between two clusters of a partition “coarsens” together with the coarsening of the other partitions. This makes sense, since a partition with fewer clusters needs a less discriminative dissimilarity measure to distinguish them. It is noteworthy that during the agglomerative process, partitions are coarsened but not re-optimized by locally moving the bounds of the intervals. Although this may be sub-optimal, it eases the exploratory analysis by using the same family of nested intervals at every model granularity.
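The dendrogram construction then reduces to repeatedly applying the least-cost merge. A minimal sketch, re-evaluating the dissimilarity from scratch at each step (an optimized implementation would cache pair costs):

```python
def build_dendrogram(clusters, delta_c):
    """Record successive least-dissimilar merges until one cluster remains.

    `clusters` is a list of clusters (lists of items); `delta_c(a, b)` is
    the dissimilarity between two clusters. Returns the merge history as
    (left, right, dissimilarity) triples."""
    merges = []
    while len(clusters) > 1:
        pairs = [(a, b) for a in range(len(clusters))
                 for b in range(a + 1, len(clusters))]
        i, j = min(pairs,
                   key=lambda ab: delta_c(clusters[ab[0]], clusters[ab[1]]))
        merges.append((clusters[i], clusters[j],
                       delta_c(clusters[i], clusters[j])))
        clusters = ([c for k, c in enumerate(clusters) if k not in (i, j)]
                    + [clusters[i] + clusters[j]])
    return merges
```

Any dissimilarity can be plugged in; in the paper's setting `delta_c` would be the criterion variation $\Delta c$.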
Experiments {#sec:results}
===========

In this section, we first highlight properties of our approach using an artificial data set, then apply it to a real-life data set, next successively merge the clusters, and finally show what kind of exploratory analysis can be performed.

Experiments on an artificial data set
-------------------------------------

A variable $z$ is sampled from a uniform distribution: $Z \sim \mathcal{U}(-1,1)$. $\varepsilon_i$ denotes white Gaussian noise: $E \sim \mathcal{N}(0,0.25)$. Let us consider the four following distributions:

- $f_1 : x = z + \varepsilon_x \mbox{ , } y = z + \varepsilon_y $
- $f_2 : x = z + \varepsilon_x \mbox{ , } y = -z + \varepsilon_y $
- $f_3 : x = z + \varepsilon_x \mbox{ , } y = \alpha z + \varepsilon_y$\ $\phantom{f_3 : }\mbox{ with } \alpha \in \{-1,1\}$\ $\phantom{f_3 : } \mbox{ and } p(\alpha=-1)=p(\alpha=1)$
- $f_4 : x = (0.75 + \varepsilon_x) \cos(\pi(1+z)) \mbox{ ,}$\ $\phantom{f_4 : :} y = (0.75 + \varepsilon_y) \sin(\pi(1+z))$

We generate a collection of $40$ curves using the distributions defined above ($10$ curves per distribution), and a data set $\mathcal{P}$ of $10^5$ points. Each point is a triple of values: a randomly chosen curve (among the 40), and $x$ and $y$ values generated according to the distribution related to the curve. We apply our functional data clustering method introduced in Section \[sec:MODL\] to subsets of $\mathcal{P}$ of increasing sizes. The experiment is run 10 times per subset size, with the points resampled each time. The graph in Figure \[Fig:clustersNumber\] displays the average number of clusters and the numbers of $X$ and $Y$ intervals for a given number of points $m$. For very small subsets (below 400 data points), there are not enough data to discover significant patterns, and our method produces a single cluster of curves, with a single interval for each of the $X$ and $Y$ variables.
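The artificial data set can be reproduced along the following lines. This is an illustrative sketch; in particular, we read $\mathcal{N}(0, 0.25)$ as a standard deviation of $0.25$, which is an assumption (it could also denote the variance):

```python
import math
import random

def sample_point(curve_id, dist, rng):
    """Draw one (curve, x, y) point from distribution f1..f4."""
    z = rng.uniform(-1, 1)
    # Assumption: 0.25 is the noise standard deviation.
    ex, ey = rng.gauss(0, 0.25), rng.gauss(0, 0.25)
    if dist == 1:
        x, y = z + ex, z + ey
    elif dist == 2:
        x, y = z + ex, -z + ey
    elif dist == 3:
        alpha = rng.choice([-1, 1])      # two equiprobable branches
        x, y = z + ex, alpha * z + ey
    else:                                # f4: noisy circle
        x = (0.75 + ex) * math.cos(math.pi * (1 + z))
        y = (0.75 + ey) * math.sin(math.pi * (1 + z))
    return (curve_id, x, y)

def generate(n_points, rng=None):
    """Data set P: each point picks a curve uniformly among 40 (10 per f_i)."""
    rng = rng or random.Random(0)
    return [sample_point(c, c // 10 + 1, rng)
            for c in (rng.randrange(40) for _ in range(n_points))]
```

Note that curves share no common sampling grid: each point draws its own $z$, which is precisely the irregular-sampling property discussed below.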
From 400 data points on, the numbers of clusters and intervals start to grow. With only 25 points per curve on average, that is 1000 points in the whole data set, our method recovers the underlying pattern and produces four clusters of curves related to the $f_1$, $f_2$, $f_3$ and $f_4$ distributions. Although the method retrieves the actual number of clusters, below 2000 data points the clusters may not be totally pure, and some curves may be misplaced. In our experiments, for $1000$ data points, $2\%$ of the curves are misplaced on average, while for $2000$ points, all the curves are systematically placed in their actual cluster. It is noteworthy that as the size of the subset grows beyond 2000 data points, the number of retrieved patterns stays constant and equal to four. By contrast, the number of intervals grows with the number of data points. This shows the good asymptotic behaviour of the method: it retrieves the true number of patterns and exploits the growing number of data to better approximate the pattern shapes.

![Number of clusters (solid line), number of $X$ intervals (tight dotted line) and number of $Y$ intervals (spaced dotted line) for a given number of data points $m$. []{data-label="Fig:clustersNumber"}](artificial/graph){width="80.00000%"}

Regarding the results of the experiments on this data set, it is noteworthy that MODL does not require the same point locations for each curve. This may be a useful property for clustering functional data whose measurements have not been recorded at regular intervals. Moreover, beyond the clustering of functional data, our method is able to deal with distributions. Thus, it is possible to detect clusters of multimodal distributions like the ones generated using $f_3$ and $f_4$.
Analysis of a power consumption data set
----------------------------------------

We use the data set of [@HebrailEtAl10], which consists of the electric power consumption recorded in a personal home during almost one year ($349$ days). Each curve consists of 144 measurements which give the power consumption of a day at a $10$-minute sampling rate. There are $50{,}256$ data points and three features: the time of the measurement $X$, the power measurement $Y$ and the day identifier $C$. The study of this data set aims at grouping the days according to the characteristics of the power consumption of each day. First, the optimal model is computed using the MODL approach. Then the approach is compared to that of [@HebrailEtAl10].\
*The MODL-Optimal Discretization.* The optimal clustering consists of a data grid defined by $57$ clusters, $7$ intervals on $X$ and $10$ on $Y$. This means that the $349$ recorded days have been grouped into 57 clusters, each day has been discretized into $7$ time segments, and the power measurements into $10$ power segments. This result highlights some characteristic days, such as workdays, days off or days when nobody is at home. The summarized prototypes, represented by piecewise constant lines, show the average power consumption per time segment. The conditional probabilities of the power segments given the time segments are represented by grey cells, where the grey level shows the related conditional probability. The first representation has been chosen to simplify the reading of the curve, and the second to highlight some interesting phenomena such as multimodal distributions of the data points within the time segments.\
*Multimodal distributions.* In Figure \[Fig:57clusters\].(b), we notice that the prototype is located between two dark cells for the third time segment.
This means that the majority of the data points have been recorded in the higher and lower power segments, but rarely in the interval containing the prototype for this time segment. Thus, a multimodal distribution of the data points on this time segment is highlighted, which is confirmed by Figure \[Fig:stackcurves\].(b). Let us notice that Figure \[Fig:57clusters\].(a) is another illustration of a multimodal distribution, for which the points are more frequent in the lower mode than in the upper one. Overall, the method extends the clustering of curves to the clustering of distributions.

*Merging the Clusters.* Whereas the finest data grid yields a rich clustering and useful information for some characteristic clusters, a more synthetic and easily interpretable view of the power consumption over the year may be desirable in some applications. That is why agglomerative merges have been performed and are represented in Figure \[Fig:dendro\] by a dendrogram and a Pareto chart presenting the percentage of kept information as a function of the number of clusters. This measure is defined as follows. Let $M_{\emptyset}$ be the null model, with one cluster of curves and one interval per point dimension, whose data grid consists of one cell containing all the points. Its properties are detailed in [@BoulleHOPR10]. We denote by $M_{opt}$ the optimal model according to the criterion defined in Theorem \[FunctionalDataClusteringTheorem\], and by $M_k$ the model resulting from successive merges until $k$ clusters are obtained. The percentage of kept information for $k$ clusters, $\tau_k$, is defined as: $$\begin{aligned} \tau_k=\dfrac{c(M_k)-c(M_{\emptyset})}{c(M_{opt})-c(M_{\emptyset})}\end{aligned}$$ The dendrogram is well-balanced and the Pareto chart is concave, which makes it possible to divide the number of clusters by three while keeping almost $90\%$ of the initial information.
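The kept-information percentage is straightforward to compute from the criterion values; a small helper (argument names are ours):

```python
def kept_information(c_k, c_opt, c_null):
    """tau_k = (c(M_k) - c(M_null)) / (c(M_opt) - c(M_null)).

    Equals 1 for the optimal model and 0 for the null model; intermediate
    merged models fall in between since merging increases the criterion."""
    return (c_k - c_null) / (c_opt - c_null)
```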
*Comparative analysis of the modeling results.* In order to highlight the differences between the results retrieved using MODL and the approach of [@HebrailEtAl10], we propose to study a simplified data grid obtained by coarsening the optimal model until four clusters remain, using the post-processing technique detailed in Section 4. By doing so, $50\%$ of the information is kept, and the power consumption and time discretizations are reduced to four intervals each. Unlike MODL, the approach of [@HebrailEtAl10] requires the user to specify the number of clusters and time segments. We therefore applied their clustering technique with four clusters and a total of sixteen time intervals that are optimally distributed over the four clusters. The clusters retrieved by both approaches are displayed in Figures \[fig:MODL4\] and \[fig:Hebr4\].

![The four clusters of curves retrieved using MODL with the average (black line) and the prototype (red solid line) curves. The number in parentheses above each curve refers to the number of curves in the cluster.[]{data-label="fig:MODL4"}](figures/unnamed-chunk-13.pdf){width="75.00000%"}

![The four clusters of days retrieved using the approach of [@HebrailEtAl10] with the average (black line) and the prototype (red solid line) curves. The number in parentheses above each curve refers to the number of curves in the cluster.[]{data-label="fig:Hebr4"}](figures/unnamed-chunk-14.pdf){width="75.00000%"}

MODL computes a global discretization for both the time and the power consumption. Conversely, the approach of [@HebrailEtAl10] discretizes the temporal variable only, differently for each cluster of curves. In certain cases, like cluster $3$ of Figure \[fig:Hebr4\], it may be suitable to avoid over-discretization, and a small number of time segments is better for a local interpretation. However, having common time segments for all the clusters enables an easier comparison between the clusters.
In the context of the daily power consumption, MODL enables the identification of four periods: the *night* (midnight - 6.35 AM), the *morning* (6.35 AM - 8.45 AM), the *day* (8.45 AM - 6.35 PM) and the *evening* (6.35 PM - midnight). We are then able to compare the differences in terms of power consumption between the clusters of curves for each period of the day. The approach of [@HebrailEtAl10] is based on k-means and thus minimizes the variance between the curves locally within each time segment. This is why the prototypes are close to the average curves in the clusters obtained by this approach. In MODL, this property is not sought; as a consequence, the prototype and the average curves appear less correlated. MODL is based on a joint density estimation that yields more complex patterns. To highlight the differences in terms of patterns, we propose to focus on a specific time segment. The first interval (i.e., the *night*) found by MODL also exists in the four clusters obtained using the approach of [@HebrailEtAl10]. Let us focus on this time segment to investigate the distributions of the power consumption measurements for each cluster of curves. To do so, we compute the probability density function of the power consumption variable restricted to the first time segment, using a kernel density estimator [@sheather1991]. The results are displayed in Figures \[fig:MODLkde\] and \[fig:Hebrkde\].
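The per-segment densities can be computed with a plain Gaussian kernel estimator. The paper relies on the bandwidth selector of [@sheather1991]; the sketch below takes a fixed, user-supplied bandwidth for simplicity:

```python
import math

def gaussian_kde(sample, bandwidth):
    """One-dimensional Gaussian kernel density estimator with a fixed
    bandwidth; returns a callable density function."""
    n = len(sample)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                          for xi in sample)
    return density
```

Applied to the power measurements of one cluster restricted to the *night* segment, the resulting density directly reveals the unimodal or multimodal shapes discussed next.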
![Kernel density estimation of the power consumption measurements between midnight and 6.35 AM for each cluster of curves retrieved using MODL.[]{data-label="fig:MODLkde"}](figures/unnamed-chunk-15.pdf){width="75.00000%"}

![Kernel density estimation of the power consumption measurements between midnight and 6.35 AM for each cluster of curves retrieved using the approach of [@HebrailEtAl10].[]{data-label="fig:Hebrkde"}](figures/unnamed-chunk-16.pdf){width="75.00000%"}

The density functions of the power consumption are similar for all four clusters retrieved by the approach of [@HebrailEtAl10] during the *night*: for all four clusters, we observe that the power measurements are very dense around one unique low consumption value, which corresponds to the yearly average power consumption of the studied time segment. As for MODL, the density functions are very similar for clusters $1$ and $3$, and also very similar to the ones displayed in Figure \[fig:Hebrkde\]. However, cluster $4$ is different in that the density peak is shifted to a higher power interval. Finally, cluster $2$ highlights multimodalities, with three power values around which the measurements are dense. This complex pattern has been retrieved by MODL because it is based on joint density estimation; the competing approach cannot track such patterns. The curves of Figures \[fig:MODL4\] and \[fig:Hebr4\] do not clearly highlight the differences between the results. Displaying the calendar with different colors for the 4 clusters gives a more telling view of the differences between the results obtained using the two methods. This is shown in Figures \[Fig:calendarMODL\] and \[Fig:calendarHebr\]. The calendar of the clusters retrieved using MODL (see Figure \[Fig:calendarMODL\]) emphasizes a certain seasonality. Indeed, the way the curves are grouped highlights a link with the weather and the temperatures in France that year.
Summer, from June to September, is a season when temperatures are usually high. On the calendar, there are two clusters corresponding to this period. During the rest of the year, temperatures are lower and lead to an increase of the power consumption, which is reflected in the two other clusters. It appears that in late April and early May, the temperature was exceptionally high that year: these days have been classified into the summer clusters. Interestingly, the cluster shown in Figure \[Fig:57clusters\].(a), where nobody was at home and the power consumption is low, has been included in a summer cluster (periods from the $23^{rd}$ of February to the $2^{nd}$ of March and from the $29^{th}$ of October to the $3^{rd}$ of November).

![Calendar of the year 2007 retrieved using MODL. Each line represents a day of the week. There are four colors (one per cluster); the redder the color, the higher the average power consumption of the cluster. The white days correspond to days with missing data.[]{data-label="Fig:calendarMODL"}](figures/calendarMODL){width="\textwidth"}

![Calendar of the year 2007 retrieved using the approach of [@HebrailEtAl10]. Each line represents a day of the week. There are four colors (one per cluster); the bluer the color, the higher the average power consumption of the cluster. The white days correspond to days with missing data.[]{data-label="Fig:calendarHebr"}](figures/calendarHebr){width="\textwidth"}

For its part, the calendar obtained using the approach of [@HebrailEtAl10] does not show a seasonality the way the one retrieved using MODL does. The clusters are more spread out over the year. The dark blue cluster (i.e., the one with the highest average power consumption) nevertheless groups only cold winter days, and can be compared to the reddest cluster of Figure \[Fig:calendarMODL\].
The palest cluster (i.e., the one with the lowest average power consumption) also characterizes the warmest days and the days when nobody is at home (see Figure \[Fig:57clusters\].(a)). As for the other clusters, with intermediate average power consumption, they do not show any correlation with the period of the year and thus do not allow an immediate interpretation. All in all, both approaches track different patterns and consequently retrieve different clustering schemes. On the one hand, MODL requires no user-defined parameters and is suitable when there is no prior knowledge of the data. Moreover, the approach is supplemented by powerful exploratory analysis tools allowing a global interpretation of the results at different granularity levels. On the other hand, the approach of [@HebrailEtAl10] enables a thorough understanding of the clusters by making a time decomposition local to each cluster. In this practical case study, it appears that both methods are complementary.

Conclusion {#sec:Conclusion}
==========

In this paper, we have focused on functional data exploratory analysis, and more particularly on curve clustering. The method proposed in this paper does not consider the data set as a collection of curves, but rather as a set of data points with three features: two continuous ones, the point coordinates, and one categorical, the curve identifier. By clustering the curves and discretizing each point variable while selecting the best model according to a Bayesian approach, the method behaves as a nonparametric estimator of the joint density of the curve and point variables. In the case of large data sets, the best model tends to be too fine-grained for an easy interpretation. To overcome this issue, a post-processing technique is proposed, which aims at merging the clusters successively until a simplified clustering is obtained, while losing the least accuracy.
This process is equivalent to performing an agglomerative hierarchical classification whose dissimilarity measure is a weighted sum of Kullback-Leibler divergences from the two merged clusters to the new cluster. Experiments have been conducted on an artificial data set, in order to highlight interesting properties of the method, and on a real-world data set, the power consumption of a home over a year. On the one hand, the finest model highlights interesting phenomena such as multimodal distributions for some time segments within the same cluster. As for the post-processing technique, a well-balanced dendrogram and a concave Pareto chart emphasize the ability of the finest model to be simplified with little information loss, leading to a more interpretable clustering. An interpretation of these results has been made, focusing on the differences with an alternative approach. Beyond the clustering of curves, the proposed method is able to cluster a collection of distributions. In future work, we plan to extend the method to multidimensional distributions by considering more than two point dimensions.
--- abstract: 'Recently, we introduced an approach for more easily interpreting searches for resonances at the LHC – and to aid in distinguishing between realistic and unrealistic alternatives for potential signals. This “simplified limits" approach was derived using the narrow width approximation (NWA) – and therefore was not obviously relevant in the case of wider resonances. Here, we broaden the scope of the analysis. First, we explicitly generalize the formalism to encompass resonances of finite width. We then examine how the width of the resonance modifies bounds on new resonances that are extracted from LHC searches. Second, we demonstrate, using a wide variety of cases, with different incoming partons, resonance properties, and decay signatures, that the limits derived in the NWA yield pertinent and somewhat conservative (less stringent) bounds on the model parameters. We conclude that the original simplified limits approach is useful in the early stages of evaluating and interpreting new collider data and that the generalized approach is a valuable further aid when evidence points toward a broader resonance.' author: - 'R. Sekhar Chivukula$^1$' - Pawin Ittisamai$^2$ - Kirtimaan Mohan$^1$ - 'Elizabeth H. Simmons$^1$' bibliography: - 'broad\_simlim\_refs.bib' title: | Broadening the Reach of\ Simplified Limits on Resonances at the LHC --- =1 Introduction ============ New physics searches at the LHC commonly explore two-body scattering processes for signs that a resonance arising from physics Beyond the Standard Model (BSM) is being produced in the $s$-channel and immediately decaying to visible final state particles.
Observed limits on the production cross-section ($\sigma$) times branching fraction ($BR$) for the process as a function of the resonance mass are compared with the predictions of a few benchmark models, each corresponding to one choice of spin, electric charge, weak charge, and color charge for the new resonance, evaluated for specific parameter values. However, for a given choice of spin and charges there will actually be multiple detailed theoretical realizations corresponding to different strengths and chiralities of the resonance’s couplings to initial-state partons and to decay products. The benchmarks shown in the analyses often correspond to convenient examples that have large production rates (like a leptophobic $Z'$ boson) or are already encoded in available analysis tools. In recent work [@Chivukula:2016hvp], we argued that when first evaluating new results, especially if signs of a small excess exist, it would be valuable to compare the data with entire classes of models, to see whether any resonances with particular production modes and/or decay patterns (e.g., a spin-zero state produced through gluon fusion and decaying to diphotons) could conceivably be responsible for a given deviation in cross-section data relative to standard model predictions. Using a simplified model of the resonance allowed us to convert an estimated signal cross section into bounds on the product of the branching ratios corresponding to production and decay. This quickly reveals whether a given class of models could possibly produce a signal of the required size at the LHC and circumvents the present need to make laborious comparisons of many individual theories with the data one by one. Moreover, the “simplified limits variable" $\zeta$, which factors in the width-to-mass ratio of the resonance, produces even more compact and easily interpretable results.
We began by establishing a general framework for obtaining simplified limits and outlining how it applies for narrow resonances with different numbers of production and decay modes. We then analyzed applications of current experimental interest, including resonances decaying to dibosons, diphotons, dileptons, or dijets. We further illustrated how easy it was to compare the calculated value of the simplified limits variable $\zeta$ for a specific instance of a new state with the experimental upper bound on $\zeta$ in order to determine whether that particular instance was a viable candidate to explain the excess. Here, we report on how to broaden the “simplified limits" approach of [@Chivukula:2016hvp]. After all, new physics may appear as a scattering excess that is not obviously due to a narrow s-channel resonance. We are therefore generalizing our simplified limits framework to handle resonances of moderate width treated in the Breit-Wigner approximation. Our generalized method addresses the implications of any signs of a small excess, as well as indicating how to interpret experimental exclusion curves. It builds upon our previous results for identifying the color [@Atre:2013mja; @Chivukula:2014npa] and spin [@Chivukula:2014pma] properties of new resonances decaying to dijet final states, extending them to a wider variety of final states and to situations in which only a small deviation possibly indicative of a resonance has been observed. This contrasts with studies in the literature that have focused on the discovery reaches for multiple $Z'$ models at a single collider [@Carena:2004xs], compared discovery reaches across multiple colliders [@Dobrescu:2013coa], or assessed the potential reach of proposed new colliders [@Eichten:1984eu]. 
Recent work [@Franceschini:2015kwy] more similar in spirit to ours has focused specifically on a potential 750 GeV diphoton signal at the LHC [@ATLAS-Diphoton; @Moriond-ATLAS; @ATLAS-CONF-2016-018; @CMS:2015cwa; @CMS:2015dxe; @Moriond-CMS; @CMS-PAS-EXO-16-018]. In the next section, we will briefly review the key results from our work on narrow resonances. Section III discusses how we extend these to broader resonances, treated in the Breit-Wigner approximation. Section IV discusses applications to broader resonances decaying to dileptons, dibosons, and dijets. The final section presents our conclusions.

Recap: Simplified Limits on Narrow Resonances
=============================================

In [@Chivukula:2016hvp; @Chivukula:2017qsi] we proposed a general method for quickly determining whether a small excess observed in collider data could potentially be attributable to the production and decay of a single, relatively narrow, s-channel resonance belonging to a generic category, such as a leptophobic $Z^\prime$ boson or a fermiophobic $W^\prime$ boson. Using a simplified model of the resonance allows us to convert an estimated signal cross section into bounds on the product of the branching ratios corresponding to production and decay. Moreover, the “simplified limits variable" $\zeta$, which factors in the width-to-mass ratio of the resonance, produces even more compact and easily interpretable analyses. Here we mention a few key results that set the context for our present work.
The tree-level partonic production cross-section for an arbitrary $s$-channel resonance $R$ produced by collisions of particular initial state partons $i,j$ and decaying to a single final state $x,y$ at the LHC can be written [@Harris:2011bh; @Agashe:2014kda] $$\hat{\sigma}_{ij\to R\to xy}(\hat{s}) = 16 \pi (1 + \delta_{ij}) \cdot {\cal N} \cdot \frac{\Gamma(R\to i+j) \cdot \Gamma(R\to x+y)} {(\hat{s}-m^2_R)^2 + m^2_R \Gamma^2_R} ~ \qquad {\cal N} = \frac{N_{S_R}}{N_{S_i} N_{S_j}} \cdot \frac{C_R}{C_i C_j}, \label{eq:nr}$$ where $N_S$ and $C$ count[^1] the number of spin- and color-states for initial state partons $i$ and $j$ and for the resonance $R$. In the narrow-width approximation, one focuses on the region $\hat{s} \approx m^2_R$ and approximates $$\frac{1} {(\hat{s}-m^2_R)^2 + m^2_R \Gamma^2_R} \approx \frac{\pi}{m_R \Gamma_R} \delta(\hat{s} - m^2_R)~.$$ Integrating over parton densities, and summing over incoming partons and over the outgoing partons which produce experimentally indistinguishable final states, we find the tree-level hadronic cross section to be $$\begin{aligned} \sigma^{XY}_R &\ \equiv \sigma_R \times BR(R \to X + Y) = 16\pi^2 \cdot {\cal N} \cdot \frac{ \Gamma_R}{m_R} \times \nonumber \\ & \left( \sum_{ij} (1 + \delta_{ij}) BR(R\to i+j) \left[\frac{1}{s} \frac{d L^{ij}}{d\tau}\right]_{\tau = \frac{m^2_R}{s}}\right) \cdot \left(\sum_{xy\, \in\, XY} BR(R\to x+y)\right)~. \label{eq:cross-section}\end{aligned}$$ Here ${d L^{ij}}/{d\tau}$ corresponds[^2] to the luminosity function for the $ij$ combination of partons and $X\, Y$ label the set of experimentally indistinguishable final states. 
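As a standalone numerical sanity check (not part of the original analysis), one can verify the delta-function replacement above: for $\Gamma_R \ll m_R$, the Breit-Wigner factor integrates over $\hat{s}$ to $\pi/(m_R \Gamma_R)$. The mass and width values below are illustrative only:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative values only: a 2 TeV resonance with Gamma_R / m_R = 1%.
m_R, Gamma_R = 2000.0, 20.0  # GeV

def bw_factor(shat):
    """Breit-Wigner factor 1 / ((shat - m_R^2)^2 + m_R^2 Gamma_R^2)."""
    return 1.0 / ((shat - m_R**2) ** 2 + m_R**2 * Gamma_R**2)

# Integrate over a window many Lorentzian half-widths wide,
# hinting the location of the sharp peak to the integrator.
lo, hi = (m_R - 50 * Gamma_R) ** 2, (m_R + 50 * Gamma_R) ** 2
integral, _ = quad(bw_factor, lo, hi, points=[m_R**2], limit=200)

# Compare with the delta-function weight pi / (m_R * Gamma_R).
ratio = integral / (np.pi / (m_R * Gamma_R))
print(ratio)  # slightly below 1; the small deficit is the clipped tails
```

The residual deviation from unity comes entirely from the Lorentzian tails outside the finite integration window, and shrinks as the window widens or $\Gamma_R/m_R$ decreases.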
Defining a weighting function $\omega_{ij}$ allows us to reframe the sum over $ij$ as follows: $$\begin{aligned} \sum_{ij} & (1 + \delta_{ij}) BR(R\to i+j) \left[\frac{1}{s} \frac{d L^{ij}}{d\tau}\right]_{\tau = \frac{m^2_R}{s}} = \left[\sum_{ij} \omega_{ij} \left[\frac{1}{s} \frac{d L^{ij}}{d\tau}\right]_{\tau = \frac{m^2_R}{s}}\right] \cdot \left[\sum_{i'j'} (1 + \delta_{i'j'}) BR(R\to i'+j') \right] \nonumber \label{eq:rewritten}\end{aligned}$$ where $$\omega_{ij} \equiv \dfrac {(1 + \delta_{ij}) BR(R\to i+j)} {\sum_{i'j'} (1 + \delta_{i'j'}) BR(R\to i'+j')}~.$$ The fraction $\omega_{ij}$ lies in the range $ 0 \le \omega_{ij} \le 1$ and by construction $\sum_{ij} \omega_{ij} = 1$. Substituting this into the cross-section in Eqn. \[eq:cross-section\], we may obtain an expression for the product of the sums of incoming and outgoing branching ratios: $$\begin{aligned} \left[\sum_{i'j'} (1 + \delta_{i'j'}) BR(R\to i'+j') \right] &\cdot \left(\sum_{xy\, \in\, XY} BR(R\to x+y)\right) = \label{eq:gen-bran-init}\\ &\frac{\sigma^{XY}_R} { 16\pi^2 \cdot {\cal N} \cdot \frac{\Gamma_R}{m_R} \times \left[\sum_{ij} \omega_{ij} \left[\frac{1}{s} \frac{d L^{ij}}{d\tau}\right]_{\tau = \frac{m^2_R}{s}}\right]} ~.\nonumber\end{aligned}$$ This product is bounded from above by a value depending on the identities of the incoming ($i'j'$) and outgoing ($x,y$) partons, $$BR(R\to i+j) (1 + \delta_{ij}) \cdot \sum_{xy\,\in\, XY} BR(R\to x+y) \le \begin{cases} 1/4 & i\neq j,\, ij \neq xy\in XY \\ 1 & i\neq j,\, ij = xy\in XY \\ 1/2 & i=j,\, x=y,\, ij\neq xy\in XY \\ 2 & i=j, x=y, ij = xy\in XY \end{cases} \label{eq:combined}$$ Framing the information in this way is what enables one to swiftly discern whether a given class of models is potentially consistent with a given data set. Comparisons between data and theory are simplified by re-arranging Eqn. \[eq:gen-bran-init\] so that the left-hand side includes the ratio of resonance width to mass. 
This defines the “simplified limits variable”, $\zeta$: $$\begin{aligned} \zeta \equiv \left[\sum_{i'j'} (1 + \delta_{i'j'}) BR(R\to i'+j') \right] &\cdot \left(\sum_{xy\, \in\, XY} BR(R\to x+y)\right) \cdot \frac{\Gamma_R}{m_R} = \label{eq:gen-bran-rat} \\ &\frac{\sigma^{XY}_R} { 16\pi^2 \cdot {\cal N} \times \left[\sum_{ij} \omega_{ij} \left[\frac{1}{s} \frac{d L^{ij}}{d\tau}\right]_{\tau = \frac{m^2_R}{s}}\right]} ~. \nonumber\end{aligned}$$ When working in the narrow width approximation, and assuming that $\Gamma/M \leq 10\%$, the upper bounds on the products of branching ratios mentioned above correspond to upper limits on $\zeta$ (Eqn. \[eq:combined\]) that are a factor of ten smaller. Extending the Method to Broader Resonances ========================================== We now generalize the results obtained in [@Chivukula:2016hvp] for resonances of larger widths by employing a Breit-Wigner representation of the resonance. We focus on resonances with fully-reconstructable final states (dileptons, dibosons, and dijets) and on situations in which the resonance is far more massive than its decay products. The total cross-section for the production and decay of an $s$-channel resonance in the channel $i + j \to R \to x + y$ can be obtained by convolving the parton luminosity with the partonic cross-section $\hat{\sigma}(\hat{s})$ as follows $$\sigma_R^{ij,xy} = \int_{s_{min}}^{s_{max}}d\hat{s}\, \hat{\sigma}(\hat{s}) \cdot \left[ \frac{d L^{ij}}{d\hat{s}}\right]~, {\rm where} \label{eq:BW-cs}$$ $$\hat{\sigma}(\hat{s})^{ij,xy} \equiv \frac{\Gamma^2_R}{m^2_R} \cdot\frac{\hat{s}}{m^4_R}\cdot \frac{16 \pi{\cal N} (1 + \delta_{ij}) BR(R\to i+j) \cdot BR(R\to x+y)} {\left(\frac{\hat{s}}{m^2_R}-1\right)^2+\frac{\Gamma^2_R}{m^2_R}}~. \label{eq:sigshat}$$ One can parametrize the cross-section in eqn. \[eq:sigshat\] in terms of the resonance mass ($m_R$), its width-to-mass ratio ($\Gamma_R/m_R$), and the product of the relevant branching ratios ($BR_{ij}\cdot BR_{xy}$). 
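For concreteness, the bookkeeping entering $\zeta$ in Eqn. \[eq:gen-bran-rat\] and the parametrization of eqn. \[eq:sigshat\] can be sketched in a few lines; the branching ratios and the normalization ${\cal N}$ below are illustrative placeholders, not values taken from any benchmark model:

```python
import math

def zeta(br_in, br_out, gamma_over_m):
    """Simplified-limits variable of Eqn. (gen-bran-rat):
    [sum_ij (1+delta_ij) BR_ij] * [sum_xy BR_xy] * Gamma_R/m_R."""
    s_in = sum((2.0 if i == j else 1.0) * b for (i, j), b in br_in.items())
    s_out = sum(br_out.values())
    return s_in * s_out * gamma_over_m

def sigma_hat(shat, m_R, gamma_over_m, br_prod, N=1.0, same_partons=False):
    """Partonic Breit-Wigner shape of eqn. (sigshat), up to the color/spin
    normalization N (placeholder); br_prod = BR(R->ij) * BR(R->xy)."""
    delta_ij = 1.0 if same_partons else 0.0
    r = gamma_over_m
    return (r**2 * shat / m_R**6 * 16.0 * math.pi * N
            * (1.0 + delta_ij) * br_prod
            / ((shat / m_R**2 - 1.0) ** 2 + r**2))

# Toy example: distinct incoming partons, a single dilepton decay mode.
z = zeta({("u", "ubar"): 0.25, ("d", "dbar"): 0.25},
         {("e+", "e-"): 0.05}, gamma_over_m=0.03)
```

Note that at $\hat{s} = m_R^2$ this parametrization gives a peak value $16\pi{\cal N}(1+\delta_{ij})\,BR_{ij}BR_{xy}/m_R^4$, independent of the width; the width instead controls how much of the parton luminosity the resonance samples in Eqn. \[eq:BW-cs\].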
In arriving at the form of eqn. \[eq:sigshat\], we have made several approximations. Because we are studying systems where a heavy resonance is decaying to states that are far lighter than $m_R$, we have approximated each of the running partial-widths ($\Gamma(\hat{s})$) of the resonance in the numerator by a phase space factor times the on-shell partial-width, $\sqrt{\hat{s}} \Gamma / m_R$; this gives rise to the factor of $\hat{s}/m^2_R$ that impacts the overall magnitude of $\hat{\sigma}(\hat{s})$. Corrections to this approximation are suppressed by powers of $m^2/\hat{s}$, where $m$ is the mass of the standard-model particles in the decay, and are therefore negligible for TeV-scale resonance searches. At the same time, we noted that the presence of the running total-width in the denominator serves mainly to shift the location of the resonance peak; we have neglected this smaller effect, replacing $\sqrt{\hat{s}}\Gamma_R(\hat{s})$ by $m_R \Gamma_R$. We have checked numerically that this second approximation has a negligible effect unless $\Gamma_R \simeq m_R$. In general, experiments searching for resonances present their constraints either as limits on $\sigma \times BR$ or in terms of the parameters of a given model. More specifically, experiments count the number of events in each bin of the invariant mass distribution. Constraints are set by defining likelihoods (usually Poissonian) for the background-only and signal + background hypotheses and performing statistical tests on these hypotheses. The theoretical prediction for the signal cross-section is determined in terms of a model. Our proposal of simplified limits simply replaces the theoretical model prediction with the expressions for the cross-sections given above. Now, instead of couplings and masses, constraints are placed on the parameters of simplified limits: $\zeta$ and the mass of the resonance. In order to map simplified limits to a specific model, one would need to specify $\omega_{ij}$ as well. 
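The running-width approximation described above (replacing $\sqrt{\hat{s}}\,\Gamma_R(\hat{s})$ by $m_R \Gamma_R$ in the denominator) can be checked with a short numerical sketch. With the running width, the denominator minimum sits at $\hat{s} = m_R^2/(1 + \Gamma_R^2/m_R^2)$ (this closed form is our own algebra, easily verified by differentiation), i.e. a relative peak shift of order $(\Gamma_R/m_R)^2$, which is indeed small unless $\Gamma_R \simeq m_R$:

```python
import numpy as np

m_R, r = 2000.0, 0.1  # GeV; r = Gamma_R / m_R (illustrative values)

s = np.linspace(0.8 * m_R**2, 1.2 * m_R**2, 2_000_001)

# Fixed-width denominator (used in eqn. sigshat) vs running-width denominator,
# where sqrt(shat)*Gamma(shat) = shat * Gamma_R / m_R.
fixed   = 1.0 / ((s - m_R**2) ** 2 + (m_R**2 * r) ** 2)
running = 1.0 / ((s - m_R**2) ** 2 + (s * r) ** 2)

peak_fixed   = s[np.argmax(fixed)]    # at m_R^2
peak_running = s[np.argmax(running)]  # at m_R^2 / (1 + r^2)

# The running width mainly shifts the peak, by a relative amount r^2/(1+r^2).
shift = (peak_fixed - peak_running) / m_R**2
print(shift)  # ~ 0.0099 for r = 0.1
```

For $r = 0.1$ the peak moves by about one percent of $m_R^2$, consistent with the statement that the replacement is harmless except for very broad resonances.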
In this work, we do not follow the procedure described above to extract limits, simply because the full likelihoods, nuisance parameters, and errors are not available. We instead use “Brazil band” plots provided by experimental papers to extract the $2\sigma$ exclusion on $\sigma\times BR$ as a function of mass. We then equate the extracted cross-section to the expressions for cross-section given earlier in order to extract $2\sigma$ constraints on the parameter $\zeta$. This provides only an approximate estimate of the exclusion on $\zeta$, which suffices for our purpose of demonstrating the salient features of simplified limits. We take care to integrate over $\hat{s}$ in eqn. \[eq:sigshat\] only over the range specified by the kinematic cuts implemented in each experiment. Extensions and Limitations of this Approach {#subsec:limitations} ------------------------------------------- As mentioned above, the simplified limits approach provides a compact and easily interpretable method of presenting limits on resonance searches. In this section we list possible extensions of this approach as well as some of its limitations. - As is done in traditional limits on cross-section times branching ratio, it is possible to include higher order corrections to limits on $\zeta$ by simply using K-factors. However, one has to be careful when higher order effects change the acceptance due to kinematic cuts. - The acceptance also depends on the spin of the resonance. As is sometimes done in the case of traditional Brazil band plots where limits are displayed in terms of $\sigma\times\text{Branching Ratio}\times\text{Acceptance}$, one could also present limits in terms of $\zeta\times\text{Acceptance}$. - The simplified limits approach is most directly applicable to searches where the kinematics of the final state can be reconstructed entirely, i.e., when searching for bumps in an invariant mass spectrum. 
For resonance searches in which the invariant mass spectrum cannot be reconstructed, such as $W^{\prime}\to l \nu$, other kinematic variables (the transverse mass, in the case of $W^{\prime}\to l \nu$) are analyzed instead. One could also apply the simplified limits approach in this case; however, again one needs to be careful about issues of acceptance and kinematic cuts. - The simplified limits approach works when interference of the BSM production process with SM backgrounds is negligible. This is the case for most $s$-channel resonance processes. - Simplified limits for resonances produced in pairs or produced in association with other particles may be interesting to consider in the future. Keeping in mind the limitations of our method, we restrict our attention to $s$-channel resonance searches in which the invariant mass of the resonance can be completely reconstructed, and consider leading-order analyses. Applications to Broader Resonances ================================== We will now apply the extended simplified limits technique to various situations of general theoretical and experimental interest. Here we discuss electrically neutral spin-1 resonances decaying to dileptons; a $W'$ state decaying to dibosons; and resonances of various spins and colors decaying to dijets. Note that in each of these examples the final state may be fully reconstructed. In the first example, we will illustrate the power of the extended simplified limits analysis by showing results both in terms of an upper bound on the resonance’s combined production and decay branching ratios and separately in terms of a bound on the simplified limits variable $\zeta$. Thereafter, we will show results only in terms of $\zeta$. In discussing each application, we will show how the limits compare when the resonance is treated in the narrow width approximation (NWA) or assumed to be broader and treated as a Breit-Wigner shape (BW). 
Throughout, we will show the observed limits on $\zeta$ corresponding to LHC data from the ATLAS or CMS experiments, and in some cases we also show the expected limits. As discussed in [@Chivukula:2016hvp], if the observed limit is ever seen to be much weaker than the expected limit, meaning that some evidence of a new state has been found, then a given class of resonance (with a particular set of dominant production and decay modes) will be a candidate explanation for the excess only if the product of branching ratios (or corresponding value of $\zeta$) required to produce the observed signal falls in the physical region (e.g., the product of branching ratios can never be required to exceed 1). In each application, we separately illustrate the specific value the $\zeta$ variable takes in benchmark theoretical models from the literature. Again, as discussed in [@Chivukula:2016hvp], if an excess were found, only a model whose predicted value of $\zeta$ fell in the window between the expected and observed limits on $\zeta$ would be a good candidate for explaining the excess. $u\bar{u} + d\bar{d}\to R \to l\bar{l}$ {#app:dileptons} ---------------------------------------- In this application, we study colorless spin-1 resonances that decay to dileptons. We employ the ATLAS analysis [@ATLAS-CONF-2016-045] of dilepton final states at $\sqrt{s}=13 ~\text{TeV}$ as the source of our information on the observed limits on branching ratios or $\zeta$. The cuts used to identify events for this analysis are summarized in Appendix A for the reader’s convenience. In order to extract the $\zeta$ variable, we assume that the acceptance times efficiency for the resonances under consideration would be identical to that of the $Z^{\prime}$ considered by the ATLAS experiment. Since the only kinematic cuts employed are those on rapidity and transverse momentum, the geometrical acceptance depends only on the spin of the resonance – in this case a spin-1 resonance. 
In the dijet applications discussed later on, we will study resonances of different spins and our analysis will specifically incorporate the impact of resonance spin upon acceptance. Fig. \[fig:simlim-zprime\] shows the observed upper limits[^3] (at 95% credibility level) on hadronically-produced vector resonances decaying to dielectron final states, expressed through the simplified limits analysis. The upper pane of Fig. \[fig:simlim-zprime\] shows upper limits on the value of the product of branching ratios $BR(j\bar{j})BR(e^+e^-)$, where $j=\{u,d\}$. Here we have assumed universal couplings to quarks (as with a resonance coupling to baryon number) and neglected the small contribution of $(s,c,b)$ quarks to the resonance production cross-section. Similarly the lower plot of Fig. \[fig:simlim-zprime\] shows upper limits on $\zeta$. The thicker lines correspond to using the Breit-Wigner (BW) distribution to evaluate upper limits whereas the thinner lines are evaluated using the narrow width approximation (NWA). The grey-shaded rectangle in the upper pane is the area in which the product of branching ratios is physical: it cannot exceed 1/4 since the initial and final states are different, and neither the two initial nor the two final state particles are identical. The grey-shaded rectangle in the lower pane is the corresponding physical region of $\zeta$, given that we are assuming $\Gamma_R / M_R < 0.3$ in our BW analysis. From examining either pane, we can see that using the narrow width approximation gives a conservative upper limit on the vertical-axis variable, in the sense of not overstating the strength of the bound. In the upper pane, the upper bounds on resonances of $\Gamma/M = 0.3$ and $\Gamma/M = 0.03$ are vertically displaced from one another by an order of magnitude. 
When the same upper limits are re-expressed in terms of the simplified-limits variable $\zeta$ in the lower pane, the thin NWA curves now overlap since the value of $\Gamma/M$ is incorporated within $\zeta$. The bold curves for Breit-Wigner resonances of different widths are distinct, and we can observe that a broader BW resonance pulls away from the NWA curve at a relatively lower value of $M$. We use as our comparison a benchmark $Z'$ model which couples universally[^4] to all quarks (in particular, which has the same value of $g^2_L+g^2_R$ for all up- and down-quarks); a $Z'$ coupling to $B-L$ would be a familiar example of such a $Z'$. The horizontal green dotted lines in the upper (lower) pane correspond to the product of branching ratios (value of $\zeta$) for a resonance of this sort and the indicated values of the resonance width-to-mass ratio. From either pane, it is clear that the ATLAS upper limits on the vertical-axis variable exclude this particular benchmark model for $Z'$ masses below at least 4 TeV. Limits that are set using the BW shape tend to be stronger than those set using the NWA, especially at larger masses. In other words, the cross-section as evaluated using the NWA is smaller than the cross-section as evaluated using the BW shape. This occurs because large mass resonances require a large parton momentum fraction ($x$) to be produced. At large values of $x$, the parton distribution functions fall rapidly. The BW resonance integrates over some of the luminosity at $\sqrt{\hat{s}} < M$, thus giving rise to a larger cross-section. So long as there are no additional kinematic cuts (especially those affecting the invariant mass distribution, see Sec. \[app:dibosons\]), this pattern is typical of the limits set on resonances. While Fig. \[fig:simlim-zprime\] was produced under the simplifying assumption that the resonance had flavor-universal couplings to quarks, that assumption does not hold for most models. 
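The parton-luminosity effect just described can be illustrated with a toy numerical example: convolving the Breit-Wigner shape with a steeply falling power-law “luminosity” (purely illustrative; it stands in for the actual PDF luminosity, which falls even faster at large $x$) yields a larger result than the NWA value, which freezes the luminosity at the pole:

```python
import numpy as np
from scipy.integrate import quad

m, gam = 4000.0, 400.0  # GeV; a heavy resonance with Gamma/m = 0.1 (illustrative)

def lum(shat):
    """Toy parton luminosity: a steeply falling power law (illustrative only)."""
    return shat ** -4

def bw(shat):
    return 1.0 / ((shat - m**2) ** 2 + (m * gam) ** 2)

# Breit-Wigner: integrate the luminosity-weighted Lorentzian above a
# kinematic cut at sqrt(shat) = m/2, as a stand-in for analysis cuts.
bw_val, _ = quad(lambda s: lum(s) * bw(s), (0.5 * m) ** 2, (3.0 * m) ** 2,
                 points=[m**2], limit=400)

# NWA: luminosity frozen at the pole, times the delta-function weight.
nwa_val = lum(m**2) * np.pi / (m * gam)

print(bw_val / nwa_val)  # > 1: the BW tail samples larger luminosity below m
```

The ratio exceeds one because the low-$\hat{s}$ side of the Lorentzian samples the region where the falling luminosity is largest, exactly the mechanism described in the text; a flat luminosity would give a ratio near one.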
Since the relative strength of a resonance’s couplings to $u\bar{u}$ and $d\bar{d}$ will affect its production cross-section, we have also performed a more general analysis. Fig. \[fig:simlim-zprime-uudd\] illustrates the degree to which the relative strength of the couplings to up-type and down-type quarks impacts the simplified limits on spin-1 resonances decaying to dileptons. Again, the grey-shaded rectangle shows the physical region of $\zeta$, given that we take $\Gamma_R / M_R < 0.3$ in our BW analysis. The upper pair of diagonal curves represent the 95% confidence level upper bounds on $\zeta$ for a vector boson that couples only to down-type quarks; the upper, thin bound curve was derived in the NWA while the lower thick one was derived assuming the resonance has a Breit-Wigner form with $\Gamma/M = 0.3$. The lower pair is similar, but derived under the assumption that the vector boson couples only to up-type quarks. The limit on any intermediate case where the resonance couples to both up-type and down-type quarks will lie between these extremes; the difference in the strength of the bound on $\zeta$ varies from a factor of a few at low $M_R$ to nearly a factor of ten at high $M_R$. As a benchmark, the horizontal dotted line shows the value of $\zeta$ for a Sequential Standard Model $Z^{\prime}_{SSM}$ boson, which, like the Standard Model $Z$ boson, has unequal couplings to up-type and down-type quarks. The ATLAS upper limits on $\zeta$ exclude this benchmark model for resonance masses below at least 4 TeV. Example: $ud \to R \to W^{\pm} Z$ {#app:dibosons} ----------------------------------- In this second application, we study colorless, electrically-charged spin-1 resonances that decay to $WZ$. We use the ATLAS analysis [@ATLAS-CONF-2016-055] with $15.5~\text{fb}^{-1}$ at $\sqrt{s} = 13$ TeV as the basis for our work. 
In this analysis, ATLAS searched for resonances with mass $M_{R}> 1$ TeV decaying to dibosons ($WW$, $WZ$, $ZZ$), in the fully hadronic channel $qqqq$. Selection criteria are summarized in Appendix B for the reader’s convenience. Fig. \[fig:simlim-Wprime\] shows the expected and observed 95% confidence level limits on $\zeta$ for vector resonances decaying to diboson final states. As above, the grey-shaded region is the area in which $\zeta$ has a physically reasonable value, given that we are assuming $\Gamma_R / M_R < 0.3$ in our BW analysis; in the upper pane where the initial and final states are identical, the product of branching ratios is bounded from above by 1, while in the lower pane it is bounded from above by 1/4. In both panes, the diagonal solid curves correspond to observed limits while the diagonal dashed ones correspond to expected limits. The thinner red curves have been derived using the NWA and the thicker blue ones have been derived assuming a Breit-Wigner form for the resonance, with $\Gamma/M = 0.3$. Also shown in each pane are two horizontal short-dashed curves corresponding to the value of $\zeta$, as a function of resonance mass, for our comparison benchmark models: the Heavy Vector Triplet (HVT) Models [@Pappadopulo:2014qza]. The HVT phenomenological model was introduced to study charged vector bosons potentially coupling both to fermions and to electroweak bosons, and following [@Pappadopulo:2014qza] we illustrate with two choices for the defining parameter: set A ($g_V=1$; bold) and set B ($g_V=3$; thin). The upper pane explores the situation where the vector resonance is both produced through $WZ$ fusion and decays back to a hadronically-decaying $WZ$ pair. We see that the upper limit on $\zeta$ lies orders of magnitude outside the (shaded) region of physically reasonable values of $\zeta$. One implication is that the upper bound is too weak to give a meaningful constraint on a fermiophobic vector resonance. 
Another is that a fermiophobic vector resonance would not be a viable candidate to explain any signs of an excess of events; e.g., if one were to interpret the fact that the observed limit lies above the expected limit near $M_R = 2$ TeV as possible evidence of a resonance, one would have to look to another class of resonance for an explanation. All of this is consistent with the results from [@Chivukula:2016hvp]. In contrast, the lower pane explores the case where the vector resonance is produced through $u\bar{d} + d\bar{u}$ initial states and decays to $WZ$; here, the observed upper bound lies well within the physical region. So if the area around $M_R = 2$ TeV (where the observed limit is weaker than expected) were taken as a possible locus of a new resonance, this production mode would be a viable candidate. In addition, we see that the $\zeta$ values predicted by Models A and B both fall within or near the “window” between observed and expected limits, which would make them worthy of further examination. Again, this is consistent with Ref. [@Chivukula:2016hvp]. What is new here is that we can see the impact of going beyond the narrow width approximation. As shown in Fig. \[fig:simlim-Wprime\], the limits obtained from assuming a BW distribution are similar to those obtained in the NWA, but not identical. In this case, ATLAS selected events such that the invariant mass of the two-fat-jet system lies in the range $1.0~\text{TeV}< m_{JJ}< 3.5~\text{TeV}$. The presence of the hard upper bound on the invariant mass tends to “clip” the high-mass end of the broader BW signal distribution in a way that does not happen for the NWA case (where all signal events are in a single invariant mass bin). As a result, the upper bound on $\zeta$ turns out to be slightly weaker than the limit derived using the NWA, contrary to what we observed in the dilepton example. 
However, just as in the dilepton example, the NWA limit still gives a solid first estimate of the simplified limits constraint even for a resonance of moderate width. Example: Dijets {#app:dijets} ---------------- We now apply the extended framework to new resonances that decay to dijets. We use the CMS results on dijets with $20{\text{fb}^{-1}}$ of $8$ TeV data [@Khachatryan:2015sja] as the source of our limits on $\zeta$; the various selection criteria are summarized in Appendix C for the reader’s convenience. We study a variety of scenarios, including scalar, vector, and spin-2 resonances, for interpreting the experimental data. These cases not only show how the resonance’s spin impacts the bounds but also illustrate how the limits on resonances having the same spin but being produced through different initial-state partons can differ, due to the impact of the parton distribution functions. Appendix C describes how we account for the impact of resonance spin on detector acceptance. Alongside the experimental limits on $\zeta$ for each scenario, we show the predicted $\zeta(M_R)$ for one or two benchmark models from the literature. Our benchmarks for scalar resonances decaying to dijets are the scalar octet resonance of [@Hill:1991at; @Frampton:1987dn; @Martynov:2009en; @Bai:2010dj; @Harris:2011bh] and the scalar diquark [@Angelopoulos:1986uq; @Hewett:1988xc; @King:2005jy; @Kang:2007ib]; the vector resonances we use as benchmarks are the Sequential Standard Model $Z^{\prime}$ and flavor-universal Colorons [@Frampton:1987dn; @Frampton:1987ut; @Chivukula:1996yr; @Simmons:1996fz]. We use the excited quarks from [@Baur:1989kv; @Baur:1987ga; @Redi:2013eaa] and the RS Graviton [@Randall:1999ee; @Bijnens:2001gh] as samples of fermionic and spin-2 resonances, respectively. In Fig. 
\[fig:scalar-dijet\] we consider the 95% confidence level upper limits on color-octet scalar resonances produced via gluon fusion (upper panel) and on scalar diquarks produced by quark fusion (lower panel). In the upper panel, the shaded rectangle encompasses the area corresponding to physically reasonable values of $\zeta$, as understood via Eq. (\[eq:combined\]) with $\Gamma_R / M_R \leq 0.3$. The gold shaded region illustrates the difference between using the narrow width and Breit-Wigner approximations. In the lower panel, the dark-shaded rectangle applies to all diquarks and the light-shaded extension applies only to cases with identical incoming partons ($uu$ or $dd$). This panel highlights the dramatic difference in the ranges of model parameters excluded depending on the flavor properties of the diquark - and hence the flavor composition of the incoming partons; accordingly, we obtain lower mass limits ranging from less than 2 TeV to more than 5 TeV for the benchmark model illustrated. For scalars decaying to dijets, we find that using the NWA somewhat understates the LHC reach; again, that approximation therefore provides a conservative upper limit on the value of $\zeta$. For the color-octet scalar, the NWA and BW curves are quite close together except at the higher resonance mass values where the experimental constraints also become too weak to impact the physical region of $\zeta$. For diquarks, the experimental limits generally fall within the physical region for $\zeta$, so that the divergence between the BW and NWA curves, including the larger separation at high mass values, is potentially of greater importance. When we compare the results in Fig. \[fig:scalar-dijet\] with those for a vector resonance decaying to dileptons in Fig. 
\[fig:simlim-zprime-uudd\], we see that the BW curve begins to visibly diverge from the NWA curve at different resonance masses: 1.5 TeV for a colorless vector resonance produced through $u\bar{u}$ or $d\bar{d}$ annihilation, 2.5 TeV for color octet scalars produced via $gg$ fusion, and 3.5 TeV for diquarks produced from $uu$ or $dd$ . However, due to the properties of the parton luminosity functions of the incoming states, the BW curve falls below the NWA curve more rapidly for the dijet scenarios; in all three cases, the NWA and BW curves are an order of magnitude apart in $\zeta$ for a resonance mass of 5 TeV. Fig. \[fig:vector-dijet\] shows the corresponding limits for flavor-universal[^5] vector resonances that decay to dijets; to leading order, these are always produced via $q\bar{q}$ annihilation rather than $gg$ fusion [@Zerwekh:2001uq]. As noted in the figure, we have used $N_{S_R} C_R \cdot \zeta$ as the vertical axis variable, rather than $\zeta$; as can be seen from Eqs. \[eq:gen-bran-rat\] and \[eq:nr\], this allows us to display the limits for both color-singlet and color-octet vector-bosons via the same curves. The dark-shaded rectangle indicates the physical region of $N_{S_R} C_R \cdot \zeta$ for a flavor-universal $Z'_U$ boson (with $\Gamma_R / M_R \leq 0.3$, $N_{S_R} = 3$, and $C_R = 1$) while the light-shaded rectangle shows how the physical region is extended in the case of a coloron (with $N_{S_R} = 3$, and $C_R = 8$). The red-shaded region between the diagonal curves illustrates the difference between the narrow width and Breit-Wigner approximations. At low resonance masses, the NWA yields an upper limit on $N_{S_R} C_R \cdot \zeta$ that is virtually identical to the BW curve; at higher masses, the NWA computation gives a conservative, but reasonable, approximation to the BW result. 
In fact, for the $Z'$ resonance, the mass range for which the BW curve diverges most strongly from the NWA curve lies outside the physical region of $N_{S_R} C_R \cdot \zeta$. Finally, Fig. \[fig:graviton-dijet\] illustrates the upper limits for spin-2 resonances produced either through gluon fusion (upper curves) or quark annihilation (lower curve), and the gold-shaded region illustrates the differences between the NWA and BW calculations. The dark-shaded rectangle shows the physical region of $\zeta$ for a spin-2 resonance produced via $gg$ fusion and decaying to $q\bar{q}$, while the light-shaded rectangle shows how the physical region is extended when the initial and final states are both $q\bar{q}$. The limits on spin-2 states produced via both these channels would lie between these extremes. Once again, the NWA yields limits that are more conservative than those assuming a Breit-Wigner form for the resonance. At lower resonance masses, where the experimental constraints fall within the physically reasonable range of $\zeta$, the NWA and BW results are virtually identical. At higher resonance masses, where the BW constraints start to become significantly stronger than those from the NWA, both sets of constraints eventually become too weak to limit the physical region of $\zeta$. This plot also illustrates the same pattern noted earlier, whereby the BW curve drops visibly below the NWA curve at a lower resonance mass for states produced via $gg$ fusion compared with those produced via $q\bar{q}$ annihilation; again, this is due to the behavior of the parton luminosity functions. It is informative to compare the results for resonances with differing spins that are produced through the same initial state partons, and therefore incorporate the same parton luminosity functions. For instance, the exclusion curves in Fig. \[fig:vector-dijet\] and the lower pair of exclusion curves in Fig. 
\[fig:graviton-dijet\] both show results for the $q\bar{q} \to R \to q\bar{q}$ channel; a spin-1 (spin-2) resonance is studied in Fig. \[fig:vector-dijet\] (\[fig:graviton-dijet\]). As noted in Appendix C, the acceptances for the two different spin states are quite similar in this channel. So we expect that the exclusion curves should have the same shape (due to the same ${\cal L}_{ij}$) and be vertically displaced from one another. More precisely, since the vertical axis variable for Fig. \[fig:vector-dijet\] is $N_{S_R} C_R\cdot \zeta$ while that for Fig. \[fig:graviton-dijet\] is $\zeta$, and since $N_{S_R} C_R = 5$ for the spin-2 resonance, we would expect the $q\bar{q}$ curve in Fig. \[fig:graviton-dijet\] to lie $\log_{10} 5 \approx 0.7$ (a factor of 5 on the logarithmic axis) below its analog in Fig. \[fig:vector-dijet\]. Indeed, this is what we observe. Conclusions =========== A “simplified limits” analysis of hadron collider data [@Chivukula:2016hvp] casts resonance search results in terms of the variable $\zeta$, defined in Eq. \[eq:gen-bran-rat\], by exploiting the fact that the new physics cross-sections actually depend (to a good approximation) only on the production and decay (signal) modes considered. Using this framework, one can easily understand whether any resonance with a particular dominant production and decay channel could possibly produce a signal at the LHC matching any observed excess. Once a viable class of models has been identified, the degree to which any given theory within that class matches the observed excess can then be easily found, as it depends only on the width and branching ratios of the resonance. The original simplified limits framework employed the narrow width approximation. In this paper, we examined how allowing for a finite width of the resonance modifies the simplified limit bounds extracted from LHC searches. 
We did so by comparing the simplified limit bounds obtained in the narrow width approximation and at finite width with the resonance described using the Breit-Wigner approximation. In particular, we illustrated applications to data from recent LHC searches covering a variety of different incoming partons, resonance properties, and decay signatures: - dilepton resonances [@ATLAS-CONF-2016-045], which yield the limits illustrated in Figs. \[fig:simlim-zprime\] and \[fig:simlim-zprime-uudd\], - diboson ($WZ$) resonances [@ATLAS-CONF-2016-055], with the bosons decaying to dijets, deriving the results shown in Fig. \[fig:simlim-Wprime\], - and dijet resonances [@Khachatryan:2015sja], whose implications for particles of various spins and colors are shown in Figs. \[fig:scalar-dijet\], \[fig:vector-dijet\], and \[fig:graviton-dijet\]. We have demonstrated that it is straightforward to extend the simplified limits methodology to resonances with finite width. Moreover, we found that the simplified limits derived in the narrow width approximation yield reasonable, and usually somewhat conservative (less stringent) bounds on the model parameters, compared to limits obtained by incorporating the resonance’s finite width. We have enumerated limitations and possible extensions of our approach. We have shown that the simplified limits framework remains extremely valuable in the early stages of evaluating and interpreting new collider data – and is not restricted to the case of narrow resonances. Acknowledgments {#acknowledgments .unnumbered} =============== The work of R.S.C., K.M., and E.H.S. was supported by the National Science Foundation under Grant PHY-1519045. R.S.C. and E.H.S. also acknowledge the hospitality of the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611, during work on this paper. P.I. is supported by Research Grant for New Scholar Ratchadaphiseksomphot Endowment Fund, Chulalongkorn University.
Appendix {#appendix .unnumbered} ======== Dilepton Selection Criteria --------------------------- Here, we summarize the experimental event selection criteria used in the ATLAS analysis [@ATLAS-CONF-2016-045] of dilepton final states at $\sqrt{s}=13~\text{TeV}$. This applies to our study of spin-1 resonances decaying to dileptons in section \[app:dileptons\]. - For electrons, the pseudo-rapidity satisfies $|\eta| < 2.47$, with the transition region between central and forward regions excluded ($1.37\le |\eta| \le 1.52$). For muons, the pseudo-rapidity satisfies $|\eta|<2.5$, and the region $1.01\le |\eta|\le 1.10$ is excluded. - An electron discriminant variable ($95$–$96\%$ efficiency) as well as electron isolation requirements ($99\%$ efficiency) are used. - Muon isolation requirements are imposed. - Electron $E_T > 17~\text{GeV}$. Muon $p_T$ thresholds of $26~\text{GeV}$ and $50~\text{GeV}$ are used. - The efficiency of triggers for a sample of $Z^{\prime}_{\chi}$ ($M_{Z^{\prime}_{\chi}} = 3~\text{TeV}$) is $87\%$ for the dielectron and $94\%$ for the dimuon channel. - A further requirement of $E_T\,(p_T) > 30~\text{GeV}$ is imposed on the electron (muon) pair. Data-derived corrections and smearing are applied. - Representative values of the total acceptance times efficiency (for $M_{Z^{\prime}_{\chi}} = 3~\text{TeV}$) are $73\%$ in the dielectron channel and $44\%$ in the dimuon channel. WZ Selection Criteria --------------------- Here, we summarize the experimental event selection criteria used in the ATLAS analysis [@ATLAS-CONF-2016-055] with $ 15.5{\text{fb}^{-1}}$ at $\sqrt{s} = 13$ TeV. This applies to our study of $W'$ resonances decaying to dibosons in section \[app:dibosons\]. - Large $R=1.0$ jets are identified and, after a trimming and subjet identification procedure, the trimmed jets are required to have $p_{T,J}> 200$ GeV, $m_{J}>30$ GeV and $|\eta|<2.0$.
- Boson ($W$ or $Z$) jets are identified using a boson tagging procedure that uses two selection criteria, namely $m_J$ and a variable $D_2^{(\beta =1)}$ that can be used to measure the compatibility of a two-prong decay topology. The first criterion requires $|m_{W/Z} - m_J| < 15$ GeV; the second requires a $p_T$-dependent selection on $D_2^{(\beta =1)}$. The boson-tagging algorithm is configured so that the average identification efficiency for longitudinally polarised, hadronically decaying $W$ or $Z$ bosons is $50\%$. This tagging selection reduces the multi-jet background by a factor of approximately 60 per jet. - Further discrimination between boson and background jets is achieved by requiring that $N_{trk} < 30$, where $N_{trk}$ is defined as the number of charged-particle tracks pointing to the primary vertex with $p_T > 0.5$ GeV. - Leptonic decay modes of $W$ and $Z$ are rejected. - Events are required to have two trimmed jets with $p_{T,J}> 450$ GeV for the leading jet (to ensure full trigger efficiency) and $m_{JJ} > 1$ TeV to avoid distortions to the mass spectrum from the $p_{T,J}$ cut. - Small rapidity separation for jets, $\Delta y _{12}< 1.2$ for the leading jets (to reduce $t$-channel backgrounds). - $p_T$ asymmetry $\mathcal{A} = \frac{p_{T,J1} - p_{T,J2}}{p_{T,J1} + p_{T,J2}} < 0.15$. The signal efficiency for this requirement is very high, e.g. the efficiency for a HVT $W^{\prime}$ signal with a mass of $2.1$ TeV is approximately $97\%$. Dijet Selection Criteria ------------------------ Here, we summarize the experimental event selection criteria used in the CMS results on dijets with $20{\text{fb}^{-1}}$ of $8$ TeV data [@Khachatryan:2015sja]. This applies to our study of various resonances decaying to dijets in section \[app:dijets\]. - $p_{T_{j}}> 30$ GeV. - $2\eta^* = |\eta_1 - \eta_2| < 1.3$. - $|\eta_{j}| < 2.5$. - $m_{jj} > 890$ GeV.
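As a cross-check of how such cuts translate into signal acceptance, the effect of the $|\eta_1 - \eta_2| < 1.3$ requirement can be estimated analytically: for massless partons, $\cos\theta^* = \tanh\eta^*$ with $\eta^* = (\eta_1 - \eta_2)/2$, so the cut keeps $|\cos\theta^*| < \tanh(0.65)$. A minimal sketch (Python; this illustration is ours and neglects the $p_T$, $|\eta_j|$, and $m_{jj}$ cuts):

```python
import math

# Angular-cut acceptance for a heavy, narrow dijet resonance.  The
# |eta_1 - eta_2| < 1.3 cut corresponds to |cos(theta*)| < tanh(0.65),
# since cos(theta*) = tanh(eta*) for massless partons.
c = math.tanh(1.3 / 2.0)  # maximum |cos(theta*)| allowed, about 0.57

# Flat decay distribution (e.g. a spin-0 resonance): acceptance is just c.
acc_spin0 = c

# 1 + cos^2(theta*) distribution (q qbar -> vector -> q qbar):
# integrate (1 + x^2) over [0, c] and normalize by the integral over [0, 1].
acc_spin1 = (c + c**3 / 3.0) / (1.0 + 1.0 / 3.0)
```

This reproduces the flat-distribution value $A^0 \simeq 0.57$ quoted below and comes within about $0.01$ of $A^1 \simeq 0.47$; the small residual difference reflects the cuts neglected in this sketch.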
In the narrow width approximation and for the range of interest of resonance masses ($1250 < M_R < 5500$ GeV), it is straightforward to determine the acceptance of these cuts by simply integrating the appropriate normalized Wigner-d functions. We obtain the following values of the acceptance $(A^{spin})$ for resonances with various spins: $$\begin{aligned} &A^0\simeq 0.57,\quad A^1\simeq 0.47,\quad A^{1/2} \simeq 0.57 &\nonumber \\ &A^2(q\bar{q} \to q\bar{q}) \simeq 0.54,\quad A^2(gg \to q\bar{q}) \simeq 0.69,\quad A^2(gg \to gg) \simeq 0.3.&\end{aligned}$$ For broader resonances, signal events are sometimes subject to the additional requirement $m_{jj} < 1250$ GeV. This can cause small deviations in the values of the acceptances; since they are small, we have neglected them in our analysis. [^1]: While ${\cal N}$ depends on the color and spin properties of the incoming partons $i,j$, in most cases [@Chivukula:2016hvp] this factor is the same for all relevant production modes in a given situation. [^2]: In particular, $$\left[ \frac{d{L}^{ij}}{d\tau}\right] \equiv \frac{1}{1 + \delta_{ij}} \int_{\tau}^{1} \frac{dx}{x} \left[ f_i\left(x, \mu_F^2\right) f_j\left( \frac{\tau}{x}, \mu_F^2 \right) + f_{j}\left(x, \mu_F^2\right) f_i\left( \frac{\tau}{x}, \mu_F^2 \right) \right] \,, \label{eq:lumi-fun}$$ where in this paper, for the purposes of illustration, we calculate these parton luminosities using the [CT14LO]{} [@Pumplin:2002vw] parton density functions, setting the factorization scale $\mu_F^2= m^2_R$. [^3]: Differences between the limits as displayed here and as originally reported in ref. [@ATLAS-CONF-2016-045] arise from the choice of PDF (and scale) and the use of a mass-dependent K-factor ($\sim 1.1$–$1.3$). [^4]: In general, the $U(1)$ gauge theory of a $Z'$ coupling universally to all quarks would have gauge and/or gravitational anomalies and a full model could require additional spectators. We use this object here purely as an illustration.
[^5]: Coupling equally to $u$- and $d$-quarks, as for a resonance coupling to baryon number.
--- author: - Jeffrey Kuan title: Stochastic duality of ASEP with two particle types via symmetry of quantum groups of rank two --- Introduction ============ The asymmetric simple exclusion process (ASEP) is a widely studied model in mathematics and physics. Particles occupy a one–dimensional lattice, with at most one particle at each site. The particles jump to neighboring sites asymmetrically, meaning that particles will drift to either the right or the left. If a particle jumps to an occupied site, the jump is blocked. In [@BS0],[@CS] and [@TW], there is additionally a second–class particle. This particle jumps according to the same rule as ASEP. However, if a first–class particle attempts to jump to a site occupied by a second–class particle, the particles switch positions. If a second–class particle attempts to jump to a site occupied by a first–class particle, the jump is blocked. Observe that the second–class particles do not affect the first–class particles, or in other words, the projection to the first–class particles is Markov. This paper will introduce so-called “semi–second–class” particles. These are particles which cannot jump over the first–class particles; however, their presence can influence the jump rates of adjacent first–class particles. Thus, the projection to the first–class particles is *not* Markov. These particles will also be called “type 1” and “type 2” particles. Two particular sets of values for the jump rates will be studied in detail. In these two cases, we will show that the processes are self–dual and explicitly write the duality function. The duality is proved using symmetry of the rank two quantum groups $\mathcal{U}_q(\mathfrak{gl}_3)$ and $\mathcal{U}_q(\mathfrak{sp}_4)$. The use of algebra symmetries to prove duality has a well–established history (e.g. [@SS],[@S],[@BS1],[@IS]). The proofs in this work follow the method laid out in [@CGRS].
Recent work [@BCS],[@CP] has also developed proofs for duality using more “direct” (that is, without algebra) methods. It was also previously known that ASEP with second–class particles is integrable (e.g. [@AB]) and satisfies $\mathcal{U}_q(\mathfrak{gl}_3)$ symmetry [@AR], but the explicit representations had not been constructed. Similar models have also been shown to be integrable (e.g. [@C],[@DE]). The remainder of this paper is organized as follows: section \[Overview\] gives an explicit description of the processes, states the duality results as well as an application, and states the quantum group symmetry. Section \[Central Element\] reviews the background on quantum groups and constructs a central element necessary for the proof. Section \[C2\] finishes the proofs for the $\mathcal{U}_q(\mathfrak{sp}_4)$ case, and section \[A2\] finishes the proofs for the $\mathcal{U}_q(\mathfrak{gl}_3)$ case. During the writing of this paper, another paper [@BS] was posted to arXiv with similar results. That paper studies the process arising from $\mathcal{U}_q(\mathfrak{gl}_3)$ symmetry and finds a duality function similar to the one presented here. The approach is different in that it uses the Perk–Schultz quantum spin chain [@PS] to construct the representations, rather than explicitly constructing a central element. That paper also explicitly constructs all invariant measures and proves an interesting sum rule for the duality functions, neither of which is addressed here. **Acknowledgments**. The author would like to thank Alexei Borodin and Ivan Corwin for helpful conversations. The author was partially supported by a National Science Foundation Graduate Research Fellowship. Overview {#Overview} ======== Description ----------- Consider a one–dimensional lattice. Each lattice site has three possible states: either empty, occupied by a first–class particle, or occupied by a semi–second–class particle.
Describe a particle configuration by $ \xi= \{\xi_i\}$ where $\xi_i\in \{0,1,2\}$ for each lattice site $i$, corresponding to an empty site, occupation by a first-class particle, and occupation by a semi–second–class particle, respectively. Each particle has two independent exponential clocks, one for left jumps and one for right jumps. The rate of the left clock depends on the state of the site to the left of the particle, and the rate of the right clock depends on the state of the site to the right of the particle. Let $L(i,j)$ denote the rate of the left clock of the $i$–th class particle when the site to the left has state $j$, and similarly denote $R(i,j)$. The particles interact according to the following rules: If a first–class particle attempts to jump to a site occupied by a semi–second–class particle, then the particles switch positions. If a semi–second–class particle attempts to jump to a site occupied by a first–class particle, the jump is blocked. This implies that $$L(2,1)=R(2,1)=0.$$ If a first–class particle attempts to jump to a site occupied by another first–class particle, then the jump is blocked. The same holds for the semi–second–class particle. This means that $$L(1,1)=L(2,2)=R(2,2)=R(1,1)=0.$$ This leaves six remaining jump rates, $L(1,0),R(1,0),L(2,0),R(2,0),L(1,2),R(1,2)$. Observe that if $$L(1,0)=L(1,2) , \quad R(1,0)=R(1,2),$$ then the first class particles evolve independently of the semi–second–class particles. In other words, the behavior of the first–class particles is Markov. In this case, the semi–second–class particles have been described in the literature as second–class particles (see e.g. [@S; @TW]). In general, however, the semi–second–class particles still affect the jump rates of the first–class particles, even if they can not jump over them. 
Also observe that if the six jump rates are all multiplied by the same positive constant, then this corresponds to rescaling the time, and hence has no effect on the interaction of the particles. In this paper, we will consider two particular sets of values for the jump rates. The first is when $$L(1,0)=L(2,0)=L(1,2)=1, \quad R(1,0)=R(2,0)=R(1,2)=q^{-2}.$$ This is called $\textit{spin } 1/2 \textit{ type } A_2 \textit{ ASEP}$, or ASEP with second–class particles. Here, the asymmetry parameter is $q$ for all particles. The second set of values for the jump rates is when $$L(1,0)=L(2,0)=1, \quad L(1,2)=a, \quad R(1,0)=R(2,0)=q^{-2}, \quad R(1,2)=aq^{-4},$$ where $a$ solves $(q^{-4}+q^6)a=q^2(q^2+q^{-2})^2$. This is called $\textit{spin } 1/2 \textit{ type } C_2 \textit{ ASEP }$. In other words, the asymmetry parameter is $q$ for particles of type $1$ and $2$, and $q^2$ when particles of type $1$ and type $2$ interact. The reasons for the names will become clear later in the paper. Duality results --------------- Let us review the definition of duality. Suppose that $X(t)$ and $Y(t)$ are Markov processes on state spaces $\mathcal{X}$ and $\mathcal{Y}$ respectively. Given a function $D$ on $\mathcal{X}\times \mathcal{Y}$, let $\mathcal{S}_D\subseteq \mathcal{X}\times \mathcal{Y}$ be the set of all $(x,y)$ such that $$\mathbb{E}_x[D(X(t),y)] = \mathbb{E}_y[D(x,Y(t))],$$ where on the left hand side, the process $X(t)$ starts at $X(0)=x$, and on the right hand side, the process $Y(t)$ starts at $Y(0)=y$. If $\mathcal{S}_D = \mathcal{X}\times \mathcal{Y}$, then we say that $X(t)$ and $Y(t)$ are dual with respect to $D(x,y)$. If furthermore, $X(t)$ and $Y(t)$ are the same process, then we say that $X(t)$ is self–dual.
In order to write the explicit formula for the duality functions, define $$\begin{aligned} N^L_i(\eta) &= \sum_{j=1}^{i-1} 1_{\{\eta_j\neq 0\}}, \quad \quad \tilde{N}^L_i(\eta) = \sum_{j=1}^{i-1} 1_{\{\eta_j = 1\}}, \\ N^R_i(\eta) &= \sum_{j=i+1}^{L} 1_{\{\eta_j\neq 0\}}, \quad \quad \tilde{N}^R_i(\eta) = \sum_{j=i+1}^{L} 1_{\{\eta_j = 1\}}. \\\end{aligned}$$ \[SDF\] In the $A_2$ case, if $\xi$ is the particle configuration with particles of type $1$ at $n_1,\ldots,n_r$ and particles of type $2$ at $m_1,\ldots,m_{r'}$, then the function $D(\cdot,\cdot)$ defined by $$D(\eta,\xi) = \prod_{s=1}^r 1_{\{\eta_{n_s}=1\}}q^{2\tilde{N}^R_{n_s}(\eta)+2n_s} \prod_{s'=1}^{r'} 1_{\{\eta_{m_{s'}}\neq 0\}}q^{2N^R_{m_{s'}}(\eta)+2m_{s'}}$$ is a self–duality function. This function is similar to Proposition 2 of [@IS] or (3.12) of [@S]. Indeed, if $\xi$ only contains type $2$ particles, one recovers the self–duality function for the projection of type $A_2$ ASEP to the number of particles, which is still ASEP. If $\xi$ only contains type $1$ particles, one recovers the self–duality function for the projection of type $A_2$ ASEP to the type $1$ particles, which is again still ASEP. In the $C_2$ case, we give two duality functions: \[C2Duality\] (1) In the $C_2$ case, the function $$D(\eta,\xi) = \prod_{i=1}^L \left( 1_{\{\xi_i = \eta_i=1\}}q^{2(i-1)} + 1_{\{\xi_i = \eta_i=2\}}q^{2(i-1 + N^L_i(\eta) + N^L_i(\xi))} + 1_{\{\xi_i=1,\eta_i=2\}}q^{2(N_i^L(\eta) + i-1+ 2N_i^L(\xi)-\tilde{N}_i^L(\xi) )}\right)$$ is a self–duality function. \(2) In the $C_2$ case, there is a function $D$ such that $$\mathcal{S}_D = \{0,1,2\}^L \times \{0,1\}^L.$$ Explicitly, if for $n_1<\ldots<n_r,$ the particle configuration $\xi^{(n_1,\ldots,n_r)}$ is defined by $$\xi^{(n_1,\ldots,n_r)}_i = \begin{cases} 1, i \in \{n_1,\ldots,n_r\} \\ 0, i \notin \{n_1,\ldots,n_r\} \end{cases}$$ then $$D(\eta,\xi^{(n_1,\ldots,n_r)}) = \prod_{s=1}^r 1_{\{\eta_{n_s}\neq 0\}} q^{ 2N_{n_s}^R(\eta)+2n_s}$$ **Remark**. 
Theorem \[C2Duality\](2) can also be stated as “spin $1/2$ type $C_2$ ASEP is dual to usual ASEP with respect to $D$.” Also observe that the function $D(\cdot,\cdot)$ only detects the number, not the type, of particles. The projection of type $A_2$ and $C_2$ ASEP to particle occupation is simply the usual ASEP, and the duality function matches that from [@S]. The interest lies in that $D(\cdot,\cdot)$ can be constructed from the representation theory of $\mathcal{U}_q(\mathfrak{sp}_4)$, which will be seen below. Construction {#Construction} ------------ In [@CGRS], there is a general description of how to construct particle systems from quantum groups, as well as how to find self–duality functions for these particle systems. Let us review the idea in several steps. The first step is to consider the quantum group $\mathcal{U}_q(\mathfrak{g})$ for some finite–dimensional simple Lie algebra $\mathfrak{g}$. Find an explicit central element $C\in \mathcal{U}_q(\mathfrak{n}_-)\mathcal{U}_q(\mathfrak{h}) \mathcal{U}_q(\mathfrak{n}_+)$. Next, consider a finite–dimensional irreducible representation $V$ of $\mathcal{U}_q(\mathfrak{g})$ with a basis $v_1,\ldots,v_d$ consisting of weight space vectors. If $v_1$ denotes the highest weight vector and $v_d$ denotes the lowest weight vector, then compute the value of $a$ for which $\Delta(C)(v_1\otimes v_1)=a v_1\otimes v_1$. Now compute the $d^2\times d^2$ matrix of $A:=\Delta(C-a)$ acting on $V\otimes V$ with respect to the basis $\{v_i\otimes v_j, 1\leq i,j\leq d\}$; observe that $C-a$ is still central. Assume that this matrix has non–positive diagonal entries and non–negative off–diagonal entries (this will not always be true). Now consider the operator $A^{(L)}$ on $V^{\otimes L}$ defined by $$A^{(L)} := \sum_{i=1}^{L-1} 1^{\otimes i-1} \otimes A \otimes 1^{\otimes L-i-1}.$$ Suppose that we have a vector $g \in V^{\otimes L}$ such that $A^{(L)}g=0$.
It is possible to find such a vector by applying elements of $\mathcal{U}_q(\mathfrak{n}_+)$ to the lowest weight vector $v_d\otimes \cdots \otimes v_d$ (in physics language, this is applying creation operators to the vacuum state). Write $g$ in terms of the canonical basis as $$\sum_{1\leq i_1,\ldots,i_L \leq d} g(i_1,\ldots,i_L) v_{i_1} \otimes \cdots \otimes v_{i_L}.$$ Assume that $g(i_1,\ldots,i_L)$ is always positive and define $G$ to be the diagonal operator on $V^{\otimes L}$ defined by $$G(v_{i_1} \otimes \cdots \otimes v_{i_L}) = g(i_1,\ldots,i_L) v_{i_1} \otimes \cdots \otimes v_{i_L}.$$ By the assumptions on $A$, the matrix $\mathcal{L} = G^{-1}A^{(L)}G$ is the generator of a continuous–time Markov chain on the state space $\{1,\ldots,d\}^L$. Finally, if $A^{(L)}$ is self–adjoint on $V^{\otimes L}$ and $S$ is an operator that commutes with $A^{(L)}$, then $D=G^{-1}SG^{-1}$ is a self–duality function for the particle system generated by $\mathcal{L}$. This paper will consider the situation in which $\mathfrak{g} = \mathfrak{sp}_4$ or $\mathfrak{gl}_3$ and $V$ is the fundamental representation. The precise statements are as follows: There exists a central element $C\in \mathcal{U}_q(\mathfrak{gl}_3)$ and an operator $G$ on $V^{\otimes L}$ such that for $$A^{(L)} := \sum_{i=1}^{L-1} 1^{\otimes i-1} \otimes \Delta(C) \otimes 1^{\otimes L-i-1},$$ $G^{-1}A^{(L)}G$ is the generator of spin $1/2$ type $A_2$ ASEP on the lattice $\{1,\ldots,L\}$ with domain wall boundary conditions. The self–duality function $D(\cdot,\cdot)$ in Theorem \[SDF\] is of the form $G^{-1}SG^{-1}$ for a symmetry $S$ of $A^{(L)}$. In the following theorem, the notation for $A^{(L)}$ and $\tilde{A}^{(L)}$ is the same.
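The transformation $\mathcal{L} = G^{-1}A^{(L)}G$ and the duality $D = G^{-1}SG^{-1}$ described above can be illustrated with a toy example independent of any quantum group: any symmetric matrix $A$ with the stated sign structure and a strictly positive null vector $g$ yields a Markov generator, and $D$ then intertwines the generator with its transpose. A minimal sketch (Python with NumPy; the $2\times 2$ matrices are ours, chosen only to satisfy the assumptions, with $S$ taken to be the identity):

```python
import numpy as np

# A symmetric matrix with non-positive diagonal, non-negative off-diagonal
# entries, and a strictly positive null vector g (so A g = 0).
A = np.array([[-1.0, 2.0],
              [2.0, -4.0]])
g = np.array([2.0, 1.0])
assert np.allclose(A @ g, 0.0)

# Ground-state transformation: L = G^{-1} A G has zero row sums and the
# same sign structure, hence generates a continuous-time Markov chain.
G = np.diag(g)
Ginv = np.linalg.inv(G)
Lgen = Ginv @ A @ G
assert np.allclose(Lgen.sum(axis=1), 0.0)

# Self-duality: A is symmetric and S = I commutes with A, so
# D = G^{-1} S G^{-1} satisfies the generator-level relation L D = D L^T.
D = Ginv @ np.eye(2) @ Ginv
assert np.allclose(Lgen @ D, D @ Lgen.T)
```

The same bookkeeping, with $A^{(L)}$ built from a central element and $g$ from the vacuum state, is what the theorems below carry out for the $A_2$ and $C_2$ processes.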
\[C2Construction\] There exists a central element $C\in \mathcal{U}_q(\mathfrak{sp}_4)$ and operators $G_{\epsilon}$ on $V^{\otimes L}$ such that the limit $\lim_{\epsilon\rightarrow 0} G_{\epsilon}^{-1}A^{(L)}G_{\epsilon}$ is the generator of spin $1/2$ type $C_2$ ASEP on the lattice $\{1,\ldots,L\}$ with domain wall boundary conditions. The functions in Theorem \[C2Duality\] are of the form $G^{-1}SG^{-1}$ for a symmetry $S$ of $A^{(L)}$. Central Element {#Central Element} =============== The first step is to find a suitable central element in $\mathcal{U}_q(\mathfrak{g})$. This will be done with the quantum Harish–Chandra isomorphism. In principle, one could directly check that the resulting element is central using only the commutation relations, but the whole proof is presented here in order to make the construction less mysterious and more applicable for other Lie algebras. Given a simple Lie algebra $\mathfrak{g}$ of rank $n$, the quantum group ${\mathcal{U}_q(\mathfrak{g})}$ is the Hopf algebra generated by $\{e_i,f_i,k_i\},1\leq i\leq n$ with relations $$[e_i,f_j] = \delta_{ij}\frac{k_i-k_i^{-1}}{q_i-q_i^{-1}} , \quad [k_i,k_j]=0$$ $$[e_i,e_j]=[f_i,f_j]=0,\ \ |i-j|>1,$$ $$k_{i}e_{j}= q^{(\alpha_i,\alpha_j)}e_{j}k_{i} \quad k_{i}f_{j}= q^{-(\alpha_i,\alpha_j)}f_{j}k_{i}$$ together with the quantum Serre relations $$\sum_{r=0}^{1-a_{ij}} (-1)^r \binom{1-a_{ij}}{r}_{q_i} e_i^{r}e_{j}e_i^{1-a_{ij}-r}=0, \quad i\neq j$$ (and similarly with $e$ replaced by $f$), where $$q_i = q^{(\alpha_i,\alpha_i)/2}$$ $$\binom{n}{m}_q = \frac{(n)_q!}{(m)_q!(n-m)_q!}, \quad (n)_q! = \prod_{k=1}^n (k)_q, \quad (n)_q = \frac{q^n-q^{-n}}{q-q^{-1}}$$ and $$a_{ij} = \frac{2(\alpha_i,\alpha_j)}{(\alpha_i,\alpha_i)}$$ is the Cartan matrix. (Recall that $(\alpha_i,\alpha_i)=2$ for short roots and $4$ for long roots.)
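The $q$-deformed integers, factorials, and binomial coefficients in this symmetric convention are straightforward to implement and sanity-check; a minimal sketch (Python, for generic real $q$; the function names are ours):

```python
def q_int(n, q):
    """(n)_q = (q^n - q^{-n}) / (q - q^{-1}); reduces to n as q -> 1."""
    return (q**n - q**-n) / (q - 1.0 / q)

def q_factorial(n, q):
    """(n)_q! = (1)_q (2)_q ... (n)_q, with the empty product (0)_q! = 1."""
    out = 1.0
    for k in range(1, n + 1):
        out *= q_int(k, q)
    return out

def q_binomial(n, m, q):
    """The symmetric q-binomial coefficient appearing in the Serre relations."""
    return q_factorial(n, q) / (q_factorial(m, q) * q_factorial(n - m, q))
```

For instance, $(2)_q = q + q^{-1}$ and $\binom{4}{2}_q = q^4 + q^2 + 2 + q^{-2} + q^{-4}$, which reduces to the ordinary binomial coefficient $6$ as $q \to 1$.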
The co–product is $$\Delta(e_i) = e_i \otimes 1 + k_i \otimes e_i \quad \Delta(f_i) = 1\otimes f_i + f_i\otimes k_i^{-1}$$ and the antipode is $$S(e_i) = -k_i^{-1}e_i \quad S(f_i) = -f_i k_i, \quad S(k_i) = k_i^{-1}.$$ We will also use Greek letter subscripts $k_{\alpha}$ to denote the $k_i$, when it is notationally more convenient to do so, and $k_{\alpha+\beta}$ denotes $k_{\alpha}k_{\beta}$. Letting $\mathfrak{b}_{\pm}$ denote the Borel subalgebras, there is a pairing (see Proposition 6.12 of [@J]) on $ \mathcal{U}_q(\mathfrak{b}_-) \times \mathcal{U}_q(\mathfrak{b}_+)$ defined by $$\langle k_{\alpha}, k_{\beta}\rangle = q^{-(\alpha,\beta)_{\mathfrak{g}}}, \quad \langle f_i,e_j\rangle = \frac{-\delta_{ij}}{q_i-q_i^{-1}}, \quad \langle k_i,e_j\rangle = \langle f_i,k_j\rangle = \langle 1,e_i \rangle=\langle f_j,1\rangle=0, \quad \langle 1, 1\rangle=1$$ $$\langle y, x\cdot x' \rangle = \langle \Delta(y), x'\otimes x\rangle, \quad \langle y \cdot y', x\rangle = \langle y \otimes y', \Delta(x)\rangle$$ where $(\cdot,\cdot)_{\mathfrak{g}}$ is a non–degenerate invariant symmetric bilinear form on $\mathfrak{h}^*$. Furthermore, according to Lemma 6.16 of [@J], $$\label{anti} (\omega(x),\omega(y))=(y,x)=(\tau(y),\tau(x))$$ where $\omega$ is the automorphism and $\tau$ is the antiautomorphism defined by $$\begin{aligned} \omega(e_i)=f_i, \quad \omega(f_i)=e_i, \quad \omega(k_i)=k_i^{-1}\\ \tau(e_i)=e_i, \quad \tau(f_i)=f_i, \quad \tau(k_i)=k_i^{-1}.\end{aligned}$$ Let $V$ be the fundamental representation of $\mathfrak{g}$ and let $\{v_{\mu}\}$ be a basis of $V$ such that $v_{\mu} \in V[\mu]$, the $\mu$–weight space of $V$.
For any $\mu \geq \lambda$ such that $V[\mu]$ and $V[\lambda]$ are nonzero, let $e_{\mu\lambda}$ and $f_{\lambda\mu}$ be elements of $U^+$ and $U^-$ respectively such that $e_{\mu\lambda}v_{\lambda}=v_{\mu}$ and $f_{\lambda\mu}v_{\mu}=v_{\lambda}$. Let $\rho$ be half the sum of the positive roots of $\mathfrak{g}$, and recall that $(2\rho,\alpha)=(\alpha,\alpha)$ for the simple roots $\alpha$. \[CentralLemma\] If $q$ is not a root of unity and $2\mu$ is in the root lattice of $\mathfrak{g}$ for all weights $\mu$ of $V$, then the element $$\label{central} \sum_{\mu \geq\lambda } q^{(\mu-\lambda,\mu)} q^{-(2\rho,\mu)} e_{\mu\lambda}^* k_{-\lambda-\mu} f_{\lambda\mu}^*$$ is central in ${\mathcal{U}_q(\mathfrak{g})}$, where the star $^*$ denotes the dual element under $\langle\cdot,\cdot\rangle$. By following the construction of the Harish–Chandra isomorphism in [@J], one sees that $$\sum_{\mu \geq\lambda } q^{(\nu,\mu)} q^{-(2\rho,\lambda)} q^{-(2\rho,\nu)} e_{\mu\lambda}^* k_{\nu}k_{-2\lambda-2\nu} f_{\lambda\mu}^*, \quad \nu:=\mu-\lambda$$ is central, which simplifies to (\[central\]). $\mathfrak{sp}_4$ ----------------- Recall that $\mathfrak{sp}_{2n}$ is the rank $n$ Lie algebra consisting of $2n\times 2n$ matrices of the form $$\left\{ \begin{pmatrix} A & B \\ C & D \end{pmatrix} : A=-D^T,\ B=B^T,\ C=C^T \right\}.$$ Letting $E_{ij}$ denote the matrix with a $1$ at the $(i,j)$-entry and zeroes elsewhere, define $$\begin{aligned} e_i &= E_{i,i+1} - E_{n+i+1,n+i}, \quad f_i = E_{i+1,i} - E_{n+i,n+i+1} \\ h_i &= E_{ii} - E_{i+1,i+1} - E_{n+i,n+i} + E_{n+i+1,n+i+1} \\ e_n &= E_{n,2n}, \quad f_n = E_{2n,n} \quad h_n = E_{n,n} - E_{2n,2n} \end{aligned}$$ The simple roots and fundamental weights are $$\begin{aligned} \alpha_i &= \epsilon_i - \epsilon_{i+1} , \quad 1\leq i\leq n-1\\ \alpha_n &= 2\epsilon_n, \\ \omega_i &= \epsilon_1 + \epsilon_2 + \cdots + \epsilon_i, \quad 1\leq i\leq n\end{aligned}$$ where $\epsilon_i(M)=M_{ii}$.
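For $n = 2$ these generators are honest $4\times 4$ matrices, and the Chevalley relations $[e_i, f_i] = h_i$ together with the defining block conditions can be checked directly. A minimal numerical sketch (Python with NumPy):

```python
import numpy as np

def E(i, j, dim=4):
    """Matrix unit E_{ij} (1-indexed)."""
    m = np.zeros((dim, dim))
    m[i - 1, j - 1] = 1.0
    return m

n = 2
e1 = E(1, 2) - E(n + 2, n + 1)                       # E_{12} - E_{43}
f1 = E(2, 1) - E(n + 1, n + 2)                       # E_{21} - E_{34}
h1 = E(1, 1) - E(2, 2) - E(n + 1, n + 1) + E(n + 2, n + 2)
e2, f2 = E(n, 2 * n), E(2 * n, n)                    # E_{24}, E_{42}
h2 = E(n, n) - E(2 * n, 2 * n)

# Chevalley relations [e_i, f_i] = h_i.
assert np.allclose(e1 @ f1 - f1 @ e1, h1)
assert np.allclose(e2 @ f2 - f2 @ e2, h2)

# Each generator lies in sp_4: the blocks satisfy A = -D^T, B = B^T, C = C^T.
for X in (e1, f1, h1, e2, f2, h2):
    A, B, C, D = X[:2, :2], X[:2, 2:], X[2:, :2], X[2:, 2:]
    assert np.allclose(A, -D.T) and np.allclose(B, B.T) and np.allclose(C, C.T)
```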
We have that $(\alpha_i, \alpha_i) =2$ for $1\leq i\leq n-1$ and $(\alpha_n,\alpha_n)=4$. We have that $(\alpha_{n-1},\alpha_n) = -2$. When $n=2$, the Cartan matrix of $\mathfrak{sp}_4$ is simply $$\begin{pmatrix} 2 & -2 \\ -1 & 2 \end{pmatrix}.$$ The Dynkin diagram of $\mathfrak{sp}_4$ is of type $C_2$, hence the notation. To make notation clearer, $k_{(1,-1)}$ denotes $k_1$ and $k_{(0,2)}$ denotes $k_2$. Let $V$ be the fundamental representation of $\mathfrak{sp}_4$. It has a basis $v_1,v_2,v_4,v_3$ which are in the weight spaces $\epsilon_1,\epsilon_2, -\epsilon_2, -\epsilon_1$. It is immediate that the condition of Lemma \[CentralLemma\] holds. Index these and order them by $\mathbf{1} \geq \mathbf{2} \geq \mathbf{\bar{2}} \geq \mathbf{\bar{1}} $. We have that $$\mathbf{1} \stackrel{f_1}{\longrightarrow} \mathbf{2} \stackrel{f_2}{\longrightarrow} \mathbf{\bar{2}} \stackrel{f_1}{\longrightarrow} \mathbf{\bar{1}} .$$ Here, the sum of the positive roots is $$2\rho = 4\epsilon_1 + 2\epsilon_2,$$ so that $$\label{rho} (-2\rho,\mu) = \begin{cases} -4, &\mu = \mathbf{1} \\ -2, &\mu = \mathbf{2} \\ 2, &\mu = \mathbf{\bar{2}} \\ 4, &\mu = \mathbf{\bar{1}} \\ \end{cases}$$ In order to write the central element, the dual elements need to be calculated. The dual elements are: $$\begin{aligned} (e_1)^* &= -(q-q^{-1})f_1, \quad &(f_1)^* = -(q-q^{-1})e_1 \\ (e_2)^* &= -(q^2-q^{-2})f_2, \quad & (f_2)^* = -(q^2-q^{-2})e_2 \\ (e_1e_2)^* &= (q-q^{-1}) (q^{2}f_1f_2 - f_2f_1), \quad & (f_1f_2)^* = (q-q^{-1}) (q^{2}e_1e_2 - e_2e_1) \\ (e_2e_1)^* &= (q-q^{-1}) (q^{2}f_2f_1 - f_1f_2), \quad & (f_2f_1)^* = (q-q^{-1}) (q^{2}e_2e_1 - e_1e_2) \\ (e_1e_2e_1)^* &= (q-q^{-1})(q f_1f_1f_2 - (q^{-1} + q^3)f_1f_2f_1 + qf_2f_1f_1) \\ (e_2e_1e_1)^* &= \frac{q-q^{-1}}{q+q^{-1}} \left( f_1(q^2 f_2f_1 - f_1f_2) - (q^2 f_2f_1 - f_1f_2)f_1\right)\\ (e_1e_1e_2)^* &= \frac{q-q^{-1}}{q+q^{-1}} \left( (q^2 f_1f_2 - f_2f_1)f_1 - f_1(q^2 f_1f_2 - f_2f_1) \right) \\ (f_1f_2f_1)^* &= (q-q^{-1})(q e_1e_1e_2 - (q^{-1} +
q^3)e_1e_2e_1 + qe_2e_1e_1) \\ (f_2f_1f_1)^* &= \frac{q-q^{-1}}{q+q^{-1}} \left( e_1(q^2 e_2e_1 - e_1e_2) - (q^2 e_2e_1 - e_1e_2)e_1\right)\\ (f_1f_1f_2)^* &= \frac{q-q^{-1}}{q+q^{-1}} \left( (q^2 e_1e_2 - e_2e_1)e_1 - e_1(q^2 e_1e_2 - e_2e_1) \right)\end{aligned}$$ The first two lines follow immediately from the definition of the pairing. For the next two lines, we have that $$\begin{aligned} \langle f_1f_2, e_1e_2\rangle &= \langle f_2 \otimes f_1k_2^{-1}, e_2\otimes e_1 \rangle\\ &=\langle f_2,e_2\rangle \langle f_1 \otimes k_2^{-1}, e_1 \otimes 1\rangle \\ &= (q^2-q^{-2})^{-1}(q - q^{-1})^{-1}\end{aligned}$$ By (\[anti\]), $$\langle f_2f_1,e_2e_1\rangle = (q^2-q^{-2})^{-1}(q - q^{-1})^{-1}.$$ Furthermore, $$\begin{aligned} \langle f_1f_2,e_2e_1 \rangle &= \langle f_1\otimes f_2, k_2 e_1\otimes e_2 \rangle \\ &=\langle f_1 \otimes k_1^{-1}, e_1\otimes k_2 \rangle \langle f_2,e_2 \rangle \\ &=q^{-2} (q^2-q^{-2})^{-1}(q - q^{-1})^{-1}\end{aligned}$$ and $$\begin{aligned} \langle f_2f_1,e_1e_2 \rangle &= \langle f_2\otimes f_1, k_1 e_2 \otimes e_1 \rangle \\ &=\langle f_2 \otimes k_2^{-1}, e_2\otimes k_1 \rangle \langle f_1,e_1 \rangle \\ &=q^{-2} (q^2-q^{-2})^{-1}(q - q^{-1})^{-1}\end{aligned}$$ This proves lines three and four. We now prove the remainder of the lemma.
We have that $$\begin{aligned} \langle (e_1e_2)^*f_1 , e_1e_2e_1 \rangle &= \langle (e_1e_2)^* \otimes f_1, e_1e_2 k_1 \otimes e_1\rangle\\ &= \langle \Delta((e_1e_2)^*) , k_1 \otimes e_1e_2 \rangle \langle e_1, f_1\rangle \\ &= \langle k_1k_2 \otimes (e_1e_2)^* , k_1 \otimes e_1e_2 \rangle \langle e_1, f_1\rangle \\ &= - (q-q^{-1})^{-1}\end{aligned}$$ and also $$\begin{aligned} \langle f_1 (e_1e_2)^* , e_1e_2e_1 \rangle &= \langle f_1 \otimes (e_1e_2)^*, k_1k_2 e_1 \otimes e_1e_2\rangle\\ &= \langle f_1,k_1k_2e_1\rangle \\ &= \langle f_1,e_1\rangle \langle k_1^{-1}, k_1k_2 \rangle\\ &= - (q-q^{-1})^{-1}\end{aligned}$$ and additionally $$\begin{aligned} \langle (e_1e_2)^*f_1 , e_2 e_1 e_1\rangle &= 0\end{aligned}$$ because $e_1e_2$ is not possible as a left tensor factor. Continuing, $$\begin{aligned} \langle (e_1e_2)^*f_1, e_1e_1e_2 \rangle &= \langle (e_1e_2)^* \otimes f_1 , e_1 k_1 e_2 \otimes e_1 + k_1 e_1 e_2 \otimes e_1 \rangle \\ &= (1+q^{-2}) \langle (e_1e_2)^* \otimes f_1 , k_1 e_1 e_2 \otimes e_1 \rangle \\ &= - (1+q^{-2})(q-q^{-1})^{-1} \end{aligned}$$ and $$\begin{aligned} \langle f_{1}(e_1e_2)^* , e_1e_1e_2 \rangle &= \langle f_1 \otimes (e_1e_2)^*, e_1 k_1 k_2 \otimes e_1e_2 + k_1 e_1 k_2 \otimes e_1e_2 \rangle\\ &= (1+q^{2})\langle f_1 \otimes (e_1e_2)^*, e_1 k_1 k_2 \otimes e_1e_2\rangle\\ &= - (1+q^{2})(q-q^{-1})^{-1}\end{aligned}$$ So we see that $$\begin{aligned} \langle (e_1e_2)^*f_1 - f_1 (e_1e_2)^*, e_1e_2e_1 \rangle &= 0 \\ \langle (e_1e_2)^*f_1 - f_1 (e_1e_2)^* , e_1e_1e_2 \rangle &= (q^2-q^{-2})(q-q^{-1})^{-1} = q+q^{-1}\\ \langle (e_1e_2)^*f_1 - f_1 (e_1e_2)^* , e_2e_1e_1 \rangle &= 0\end{aligned}$$ and that $$\begin{aligned} \langle (e_1e_2)^*f_1 - q^{-2} f_1 (e_1e_2)^* , e_1e_2e_1 \rangle &= (-1+q^{-2})/(q-q^{-1})=-q^{-1} \\ \langle (e_1e_2)^*f_1 - q^{-2} f_1 (e_1e_2)^* , e_1e_1e_2\rangle &= 0 \\ \langle (e_1e_2)^*f_1 - q^{-2}f_1 (e_1e_2)^* , e_2e_1e_1 \rangle &= 0\end{aligned}$$ Furthermore, $$\begin{aligned} \langle (e_2e_1)^*f_1, 
e_1e_2e_1 \rangle &= \langle (e_2e_1)^* \otimes f_1, k_1e_2e_1\otimes e_1\rangle \\ &= -(q-q^{-1})^{-1} \end{aligned}$$ and that $$\begin{aligned} \langle f_{1}(e_2e_1)^* , e_2e_1e_1 \rangle &= \langle f_1 \otimes (e_2e_1)^*, k_2k_1e_1\otimes e_2e_1 + k_2e_1k_1 \otimes e_2 e_1 \rangle\\ &= (1+q^{-2})\langle f_1 \otimes (e_2e_1)^* , k_2k_1e_1\otimes e_2e_1 \rangle\\ &= - (1+q^{-2})(q-q^{-1})^{-1}\end{aligned}$$ and $$\begin{aligned} \langle (e_2e_1)^* f_1 , e_2e_1e_1 \rangle &= \langle (e_2e_1)^* \otimes f_1, e_2 e_1 \otimes k_2k_1 e_1 + e_2 e_1\otimes k_2 e_1 k_1 \rangle\\ &= (1+q^{2})\langle (e_2e_1)^* \otimes f_1 , e_2 e_1 \otimes k_2 k_1e_1 \rangle\\ &= -(1+q^{2})(q-q^{-1})^{-1}\end{aligned}$$ so that $$\begin{aligned} \langle (e_2e_1)^*f_1 - f_1 (e_2e_1)^* , e_1e_2e_1 \rangle &= 0 \\ \langle (e_2e_1)^*f_1 - f_1 (e_2e_1)^* , e_1e_1e_2 \rangle &= 0 \\ \langle (e_2e_1)^*f_1 - f_1 (e_2e_1)^* , e_2e_1e_1 \rangle &= (q^{-2} - q^{2})(q-q^{-1})^{-1} = - (q + q^{-1})\end{aligned}$$ Therefore $$\begin{aligned} (e_1e_2e_1)^* &= q^{-1}f_1 (e_1e_2)^* - q (e_1e_2)^*f_1 \\ &= (q-q^{-1}) \left(q^{-1} f_1(q^2f_1f_2 - f_2f_1)-q(q^2f_1f_2 - f_2f_1)f_1\right)\\ &= (q-q^{-1})(q f_1f_1f_2 - (q^{-1} + q^3)f_1f_2f_1 + qf_2f_1f_1) \\ (e_2e_1e_1)^* &= \frac{q-q^{-1}}{q+q^{-1}} \left( f_1(q^2 f_2f_1 - f_1f_2) - (q^2 f_2f_1 - f_1f_2)f_1\right)\\ (e_1e_1e_2)^* &= \frac{q-q^{-1}}{q+q^{-1}} \left( (q^2 f_1f_2 - f_2f_1)f_1 - f_1(q^2 f_1f_2 - f_2f_1) \right)\\\end{aligned}$$ which is lines five through seven. Lines eight through ten follow from (\[anti\]).
\[sp4central\] If $q$ is not a root of unity, the element $$\begin{gathered} q^{-4} k_{(-2,0)} + q^{-2} k_{(0,-2)} + q^2 k_{(0,2)} + q^4 k_{(2,0)} \\ + q^{-3} (q-q^{-1})^2 f_1 k_{(-1,-1)}e_1 + q^{-3} (e_1e_2)^* k_{(-1,1)} (f_2f_1)^* + (q^2+q^{-2})f_2e_2 \\ + q^{-2} (e_1e_2e_1)^* (f_1f_2f_1)^* + q^{-1} (e_2e_1)^* k_{(1,-1)} (f_1f_2)^* + q^3 (q-q^{-1})^2 f_1 k_{(1,1)} e_1 \end{gathered}$$ $$\begin{aligned} &=q^{-4} k_{(-2,0)} + q^{-2} k_{(0,-2)} + q^4 k_{(2,0)} + q^2 k_{(0,2)} \\ &+ (q-q^{-1})^2 q^{-3} f_1 k_{(-1,-1)}e_1 + (q-q^{-1})^2 q^3 f_1 k_{(1,1)}e_1 + (q^2-q^{-2})^2 f_2e_2\\ & + (q-q^{-1})^2( q^{-1} (qf_{1}f_{2}-q^{-1}f_2f_1) k_{(-1,1)}( q e_2e_1 - q^{-1} e_1e_2) + q (qf_2f_1-q^{-1}f_1f_2) k_{(1,-1)} (q e_1e_2 - q^{-1}e_2e_1))\\ &+ (q-q^{-1})^2 \left( f_1f_1f_2 - (q^{-2} + q^2)f_1f_2f_1 + f_2f_1f_1\right) \left( e_1e_1e_2 - (q^{-2} + q^2)e_1e_2e_1 + e_2e_1e_1\right) \end{aligned}$$ is central in $\mathcal{U}_q(\mathfrak{sp}_4)$. Use and . The terms with $\mu=\lambda$ yield $$q^{-4} k_{(-2,0)} + q^{-2} k_{(0,-2)} + q^2 k_{(0,2)} + q^4 k_{(2,0)}$$ Furthermore, $\mu=\mathbf{1} > \mathbf{2} = \lambda$ yields $$q^{-3} (q-q^{-1})^2 f_1 k_{(-1,-1)}e_1$$ and $\mu=\mathbf{1},\mathbf{2} > \mathbf{\bar{2}}=\lambda$ yields $$q^{-3} (e_1e_2)^* k_{(-1,1)} (f_2f_1)^* + (q^2+q^{-2})f_2e_2$$ and $\mu=\mathbf{1},\mathbf{2}, \mathbf{\bar{2}} > \mathbf{\bar{1}}=\lambda$ yields $$q^{-2} (e_1e_2e_1)^* (f_1f_2f_1)^* + q^{-1} (e_2e_1)^* k_{(1,-1)} (f_1f_2)^* + q^3 (q-q^{-1})^2 f_1 k_{(1,1)} e_1$$ One can check (with some calculation) that this element acts as $q^{-6} + q^{-2} + q^2 + q^6$ times the identity on $V$, which is consistent with the Harish–Chandra isomorphism. $\mathfrak{gl}_3$ ----------------- Recall that $\mathfrak{sl}_n$ is the rank $n-1$ Lie algebra consisting of all traceless $n\times n$ matrices. 
Set $$e_i= E_{i,i+1}, \quad f_i =E_{i+1,i}, \quad h_i= E_{ii}-E_{i+1,i+1}$$ The simple roots and fundamental weights are $$\begin{aligned} \alpha_i &= \epsilon_i - \epsilon_{i+1}, \quad 1\leq i \leq n-1, \\ \omega_i &= \epsilon_1 + \ldots + \epsilon_i, \quad 1\leq i\leq n-1,\end{aligned}$$ where $\epsilon_i(M)=M_{ii}$. When $n=3$ the Cartan matrix is $$\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$$ and the Dynkin diagram is of type $A_2$. Denote $k_1,k_2 \in \mathcal{U}_q(\mathfrak{sl}_3)$ by $k_{(1,-1,0)}$ and $k_{(0,1,-1)}$ respectively. The Lie algebra $\mathfrak{gl}_3$ is the central extension of $\mathfrak{sl}_3$ by the $3\times 3$ identity matrix. In terms of quantum groups, this corresponds to a central extension by the element $k_{(1,1,1)}$. It was shown in [@GZB] that the following element is central: $$\begin{gathered} \label{GZBC} C:=(q-q^{-1})^{-2}q^{-2} \Big( -(q^{-2} + 1 + q^6) + q^{-2}k_{(2,0,0)} + k_{(0,2,0)} + q^2 k_{(0,0,2)} \\ + (q-q^{-1})^2 (q^{-1} k_{(1,1,0)} e_1f_1 + q k_{(0,1,1)}e_2f_2 + q k_{(1,0,1)} (e_1e_2 - q^{-1}e_2e_1)(f_2f_1 - q^{-1}f_1f_2)) \Big)\end{gathered}$$ Type $C_2$ ASEP {#C2} =============== Notation -------- Because different authors use slightly different notation, it is necessary to first establish notation for this paper. The highest weight vector of the fundamental representation $V$ is denoted $v_1$, and the lowest weight vector is denoted $v_3$ (it is essentially a coincidence that $v_3$ is the lowest weight vector in both the $A_2,C_2$ cases). The lowest weight vector corresponds to an empty site, and the highest weight vector corresponds to a completely full site. In the $A_2$ case, this means that $v_3$ is an empty site, $v_2$ is a particle of type $1$ and $v_1$ is a particle of type $2$. In the $C_2$ case, this means that $v_3$ is an empty site, $v_4$ is a particle of type $1$, $v_2$ is a particle of type $2$ and $v_1$ is a site occupied by both a particle of type $1$ and a particle of type $2$.
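The $\mathfrak{sl}_3$ conventions fixed above can be sanity-checked numerically: the Chevalley generators satisfy $[e_i,f_i]=h_i$, and the Cartan matrix can be read off from $[h_i,e_j]=a_{ij}e_j$. A minimal sketch (the helper names are ours, not the paper's):

```python
import numpy as np

def E(i, j, n=3):
    """Elementary matrix E_{ij} (1-indexed) inside gl_n."""
    M = np.zeros((n, n))
    M[i - 1, j - 1] = 1.0
    return M

# Chevalley generators e_i = E_{i,i+1}, f_i = E_{i+1,i}, h_i = E_{ii} - E_{i+1,i+1}
e = [E(1, 2), E(2, 3)]
f = [E(2, 1), E(3, 2)]
h = [E(1, 1) - E(2, 2), E(2, 2) - E(3, 3)]

def comm(A, B):
    return A @ B - B @ A

# [e_i, f_i] = h_i, and [e_i, f_j] = 0 for i != j
assert all(np.allclose(comm(e[i], f[i]), h[i]) for i in range(2))
assert np.allclose(comm(e[0], f[1]), 0) and np.allclose(comm(e[1], f[0]), 0)

# Cartan matrix a_{ij} = alpha_j(h_i), extracted from [h_i, e_j] = a_{ij} e_j
# (tr(e_j f_j) = 1, so pairing with f_j picks out the coefficient)
cartan = np.array([[np.trace(comm(h[i], e[j]) @ f[j]) for j in range(2)]
                   for i in range(2)])
assert np.allclose(cartan, [[2, -1], [-1, 2]])
```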
The vacuum vector $\Omega = v_3^{\otimes L}$ corresponds to $L$ lattice sites all completely empty. There are two creation operators $e_1,e_2$. In the $A_2$ case, the operator $e_2$ creates a particle of type $2$ and the operator $e_1$ replaces a particle of type $2$ with a particle of type $1$. The annihilation operator $f_1$ replaces a particle of type $1$ with a particle of type $2$, and $f_2$ annihilates a particle of type $2$. In the $C_2$ case, the operator $e_1$ creates a particle of type $1$ and $e_2$ replaces a particle of type $1$ with a particle of type $2$, and similarly for $f_1,f_2$. In a sense, $e_1,f_1$ are more accurately called “replacement” operators instead of creation and annihilation operators. In the $C_2$ case, $v_1$ corresponds to a site with both a type $1$ and a type $2$ particle. Under this identification, the generator $\mathcal{L}$ of a Markov process $X_t$ on the state space $\{0,1,2\}^L$ can be identified as a linear operator on $V^{\otimes L}$. An initial condition can be expressed as a vector $A_0 \in V^{\otimes L}$ by $$A_0 := \sum_{v} \mathbb{P}(X_0 = v)\, v$$ Here, and below, the summation $\sum_v$ is over pure tensors of the form $v_{i_1} \otimes \cdots \otimes v_{i_L}$. A random variable $\mathcal{O}$ on $\{0,1,2\}^L$ can be identified with a diagonal operator on $V^{\otimes L}$ via $v\mapsto \mathcal{O}(v)v$. The same letter $\mathcal{O}$ will refer to both the random variable and the operator. The inner product $\langle \cdot,\cdot \rangle$ on $V^{\otimes L}$ is defined by $$\langle v_{i_1}\otimes \cdots \otimes v_{i_L}, v_{j_1} \otimes \cdots \otimes v_{j_L}\rangle = \delta_{i_1=j_1, \ldots, i_L=j_L}$$ This is essentially the usual bra–ket notation.
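The identifications above can be sketched concretely in the $A_2$ case ($v_3$ = empty, $v_2$ = type $1$, $v_1$ = type $2$): configurations become pure tensors, and random variables become diagonal operators. A hypothetical helper, not code from the paper:

```python
import numpy as np
from functools import reduce

# A_2 identification: occupation value -> basis vector of V = C^3
# (0 = empty site = v_3, 1 = type-1 particle = v_2, 2 = type-2 particle = v_1)
basis = {0: np.array([0.0, 0.0, 1.0]),   # v_3
         1: np.array([0.0, 1.0, 0.0]),   # v_2
         2: np.array([1.0, 0.0, 0.0])}   # v_1

def embed(config):
    """Map a configuration in {0,1,2}^L to a pure tensor in V^{otimes L}."""
    return reduce(np.kron, (basis[c] for c in config))

def diag_operator(obs, L):
    """Identify a random variable on {0,1,2}^L with a diagonal operator."""
    D = np.zeros((3**L, 3**L))
    for config in np.ndindex(*([3] * L)):
        v = embed(config)
        D += obs(config) * np.outer(v, v)
    return D

L = 3
num_particles = lambda config: sum(c != 0 for c in config)
O = diag_operator(num_particles, L)

# <x, O x> recovers the observable, in the usual bra-ket sense
x = embed((1, 0, 2))
assert np.isclose(x @ O @ x, 2.0)     # two occupied sites
assert np.isclose(embed((0, 0, 0)) @ O @ embed((0, 0, 0)), 0.0)
```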
The expectation of a random variable $\mathcal{O}$ at time $t$ of a Markov process with generator $\mathcal{L}$ and initial condition $A_0$ can be computed as $$\sum_{w} \langle w, \mathcal{O}e^{t\mathcal{L}}A_0\rangle.$$ Construction {#construction} ------------ Let $C$ be the central element in Proposition \[sp4central\] and let $A$ be the operator on $V\otimes V$ defined by $$q^{-2}(q^2+q^{-2})^{-2}(q-q^{-1})^{-2}\Delta\left(C - (q^{-8} + q^{-2} + q^2 + q^8)\right).$$ Note that $V\otimes V$ has the decomposition into nine different weight spaces (where $W(a,b)$ refers to the $a \epsilon_1+b\epsilon_2$ weight space of the representation $W$) $$\begin{aligned} V\otimes V &= (V\otimes V) [2,0] \oplus (V\otimes V) [1,1] \oplus (V\otimes V) [0,2] \oplus (V\otimes V)[1,-1] \oplus (V\otimes V)[0,0]\\ &\quad \oplus (V\otimes V)[-1,1] \oplus (V\otimes V)[0,-2] \oplus (V\otimes V)[-1,-1] \oplus (V\otimes V)[-2,0]\end{aligned}$$ which have dimensions $1,2,1,2, 4,2,1,2,1$ respectively. Order the basis elements of $V\otimes V$ as $$\begin{gathered} \label{Order} v_1\otimes v_1, v_2\otimes v_1,v_1\otimes v_2, v_2\otimes v_2, v_4\otimes v_1, v_1\otimes v_4, v_2\otimes v_4,v_4\otimes v_2, v_3\otimes v_1, v_1\otimes v_3, \\ v_3\otimes v_2,v_2\otimes v_3, v_4\otimes v_4,v_3\otimes v_4,v_4\otimes v_3, v_3\otimes v_3.\end{gathered}$$ This ordering preserves the ordering of the weight spaces. As explained in Section \[construction\], the operator $A$ needs to be conjugated with a diagonal operator corresponding to an eigenvector of $A$ with eigenvalue $0$.
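The expectation formula above can be illustrated on a toy two-state chain (one site, occupied or empty, with placeholder rates $a,b$ that are ours, not the paper's); here the generator acts on distributions, so the transpose of the rate matrix plays the role of $\mathcal{L}$:

```python
import numpy as np

# Two-state chain: state 0 = empty, state 1 = occupied.
# Q[x, y] = jump rate x -> y; rows sum to zero.
a, b = 1.0, 2.0             # placeholder rates, not from the paper
Q = np.array([[-a,  a],
              [ b, -b]])

A0 = np.array([1.0, 0.0])   # initial condition: P(X_0 = empty) = 1
O = np.diag([0.0, 1.0])     # observable: indicator that the site is occupied

def expm(M):
    """Matrix exponential via eigendecomposition (fine for small M)."""
    w, V = np.linalg.eig(M)
    return ((V * np.exp(w)) @ np.linalg.inv(V)).real

def expectation(t):
    """E[O(X_t)] = sum_w <w, O e^{tL} A_0>, generator acting on distributions."""
    p_t = expm(t * Q.T) @ A0          # time-t distribution of X_t
    return float(np.sum(O @ p_t))

# the chain equilibrates to (b, a)/(a+b), so E[O] -> a/(a+b)
assert np.isclose(expectation(50.0), a / (a + b))
assert np.isclose(expectation(0.0), 0.0)
```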
\[eigenvectors\] The following are linearly independent eigenvectors of $A$ with eigenvalue $0$: $$\begin{aligned} v_3 \otimes v_3 & \in (V\otimes V)[2,0]\\ e_1(v_3 \otimes v_3) &\in (V\otimes V)[1,1] \quad\\ e_1^2(v_3 \otimes v_3) &\in (V\otimes V)[0,2] \\ e_2e_1(v_3 \otimes v_3)&\in (V\otimes V)[1,-1]\\ e_2e_1^2(v_3 \otimes v_3)&\in (V\otimes V)[0,0]\\ e_1e_2e_1(v_3 \otimes v_3)&\in (V\otimes V)[0,0]\\ e_1^2e_2e_1(v_3 \otimes v_3)&\in (V\otimes V)[-1,1]\\ (e_2e_1)^2(v_3 \otimes v_3)&\in (V\otimes V)[0,-2] \\ e_1(e_2e_1)^2(v_3 \otimes v_3)&\in (V\otimes V)[-1,-1]\\ e_1^2(e_2e_1)^2(v_3 \otimes v_3)&\in (V\otimes V)[-2,0]\\\end{aligned}$$ So the $0$–eigenspace of $A$ is at least $10$–dimensional. This follows because $C(v_3 \otimes v_3) = (q^{-8} + q^{-2} + q^2 + q^8) v_3 \otimes v_3$ and $A$ commutes with $\mathcal{U}_q(\mathfrak{sp}_4)$. Note that $e_2e_1^2$ and $e_1e_2e_1$ produce linearly independent eigenvectors because the latter has $v_1\otimes v_3,v_3\otimes v_1$ terms and the former does not. Because $C\in \mathcal{U}_q(\mathfrak{sp}_4)[0],$ it follows that $A$ must preserve each summand in the weight space decomposition, so $A$ decomposes into a block matrix with $9$ blocks. By Lemma \[eigenvectors\], for the $1$–dimensional weight spaces with weights $(2,0),(0,2),(-2,0),(0,-2)$, the corresponding block matrices are $1\times 1$ zero matrices. Therefore $A$ has five non–zero blocks corresponding to $(1,1),(1,-1),(0,0),(-1,1),(-1,-1)$, with sizes $2,2,4,2,2$ respectively. 
Write this decomposition as $$A =q^{-2}(q^2+q^{-2})^{-2}\left( A_{(1,1)}+ A_{(1,-1)}+ A_{(0,0)}+ A_{(-1,1)} + A_{(-1,-1)}\right)$$ \[symmetry\] As matrices with respect to the ordered basis in \[Order\], $$\begin{aligned} A_{(0,0)} &= \left( \begin{tabular}{cccc} $-q^2(q^2+q^{-2})^2$ & $ (q^2+q^{-2})^2$ & $-q^{-3}+q^{-1}+2q^3$ & $-2q^{-1} - q^3 +q^5$ \\ $(q^2+q^{-2})^2$ & $-q^{-2}(q^2+q^{-2})^2$ & $q^{-5}-q^{-3}-2q$ & $2q^{-3}+q-q^3$ \\ $-q^{-3}+q^{-1}+2q^3$ & $q^{-5}-q^{-3}-2q$ & $-q^{-4}+q^{-2}-1-2q^4-q^6$ & $(q^2+q^{-2})^2$ \\ $-2q^{-1}-q^3+q^5$ & $2q^{-3}+q-q^3$ & $(q^2+q^{-2})^2$ & $-q^{-6}-2q^{-4}-1+q^2-q^4$ \end{tabular} \right) \\ A_{(1,1)} &= A_{(1,-1)}=A_{(-1,1)}=A_{(-1,-1)}= \left( \begin{tabular}{cc} $-(q^{-4} + q^6 ) $ & $(q^{-5}+q^5)$ \\ $(q^{-5}+q^5)$ & $ -(q^{-6} + q^4) $ \end{tabular} \right)\end{aligned}$$ By the definition of the co–product, the matrices for the generators can be written explicitly. For $1\leq i,j\leq 16,$ let $E_{ij}$ denote the matrix with a $1$ in the $(i,j)$–entry and $0$ elsewhere. Then $$\begin{aligned} e_1&=E_{12}+qE_{13} + q^{-1}E_{24} + E_{34} + qE_{67} + E_{68} + E_{59} + qE_{5,10} + q^{-1}E_{9,11}\\ & \quad + E_{10,11}+E_{7,12}+q^{-1}E_{8,12}+qE_{13,14}+E_{13,15}+E_{14,16}+q^{-1}E_{15,16}\\ f_1 &= q^{-1}E_{21} + E_{31} + E_{42} + qE_{43} +q^{-1}E_{95}+E_{10,5}+E_{76}+q^{-1}E_{86}+qE_{12,7}\\ & \quad +E_{12,8}+E_{11,9}+qE_{11,10}+E_{14,13}+q^{-1}E_{15,13}+qE_{16,14}+E_{16,15}\\ e_2&=E_{2,5}+E_{3,6}+q^2E_{4,8} + E_{4,10} + E_{8,13} + q^{-2} E_{10,13} + E_{12,14}+E_{11,15}\\ f_2 &= E_{5,2} + E_{6,3} + E_{8,4} + q^{-2} E_{10,4} + q^2 E_{13,8} + E_{13,10} +E_{15,11}+E_{14,12}\\ k_{(a,b)} &= \mathrm{diag}\left( q^{2a},q^{a+b},q^{a+b},q^{2b}, q^{a-b}, q^{a-b}, 1, 1, 1, 1, q^{b-a},q^{b-a},q^{-2b},q^{-a-b}, q^{-a-b}, q^{-2a}\right)\end{aligned}$$ Using Proposition \[sp4central\] and explicit multiplication of $16\times 16$ matrices yields the result.
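The explicit matrices above lend themselves to a quick numerical sanity check: conjugation by $k_{(a,b)}$ must rescale $e_1$, $f_1$, $e_2$ by the appropriate powers of $q$. A sketch with a generic numeric $q$ (this checks consistency of the entries under the stated conventions, not the full Lemma):

```python
import numpy as np

q = 0.7   # generic numeric value of the deformation parameter

def E(i, j, n=16):
    M = np.zeros((n, n))
    M[i - 1, j - 1] = 1.0
    return M

e1 = (E(1,2) + q*E(1,3) + E(2,4)/q + E(3,4) + q*E(6,7) + E(6,8) + E(5,9)
      + q*E(5,10) + E(9,11)/q + E(10,11) + E(7,12) + E(8,12)/q
      + q*E(13,14) + E(13,15) + E(14,16) + E(15,16)/q)
f1 = (E(2,1)/q + E(3,1) + E(4,2) + q*E(4,3) + E(9,5)/q + E(10,5) + E(7,6)
      + E(8,6)/q + q*E(12,7) + E(12,8) + E(11,9) + q*E(11,10) + E(14,13)
      + E(15,13)/q + q*E(16,14) + E(16,15))
e2 = (E(2,5) + E(3,6) + q**2*E(4,8) + E(4,10) + E(8,13) + E(10,13)/q**2
      + E(12,14) + E(11,15))

def k(a, b):
    exps = [2*a, a+b, a+b, 2*b, a-b, a-b, 0, 0, 0, 0, b-a, b-a,
            -2*b, -a-b, -a-b, -2*a]
    return np.diag([float(q)**x for x in exps])

def conj(a, b, X):
    K = k(a, b)
    return K @ X @ np.diag(1.0 / np.diag(K))

# k_{(1,-1)} corresponds to the short root alpha_1 = eps_1 - eps_2,
# k_{(0,2)} to the long root alpha_2 = 2 eps_2 of C_2
assert np.allclose(conj(1, -1, e1), q**2 * e1)
assert np.allclose(conj(1, -1, f1), f1 / q**2)
assert np.allclose(conj(0, 2, e2), q**4 * e2)
assert np.allclose(conj(1, -1, e2), e2 / q**2)
```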
Define the operator $A^{(L)}$ on $V^{\otimes L}$ by $$\begin{aligned} A^{(L)} &= \sum_{i=1}^{L-1} \mathbf{1}^{\otimes i-1} \otimes A \otimes \mathbf{1}^{\otimes L-1-i} \\\end{aligned}$$ \[commutes\] For any $u\in \mathcal{U}_q(\mathfrak{sp}_4)$, $$[A^{(L)}, \Delta^{(L)}(u)]=0.$$ It suffices to prove this for $u=e_i,f_i,k_i$. Since $\Delta^{(L)}(k_i) = k_i^{\otimes L}$ and $$[\mathbf{1}^{\otimes i-1}, k_i^{\otimes i-1}] = [A, k_i \otimes k_i] = [\mathbf{1}^{\otimes L-1-i}, k_i^{\otimes L-i-1}]=0,$$ this shows it for $u=k_i$. Now we have that $${\Delta^{(L)}}(e) = \sum_{j=1}^{L-1} k^{\otimes j-1} \otimes \Delta(e) \otimes \mathbf{1}^{\otimes L-1-j}$$ and that $$\begin{aligned} &\left[k^{\otimes j-1} \otimes \Delta(e) \otimes \mathbf{1}^{\otimes L-1-j}, \sum_{i=1}^{L-1} \mathbf{1}^{\otimes i-1} \otimes A \otimes \mathbf{1}^{\otimes L-1-i} \right] \\ &=\left[k^{\otimes j-1} \otimes \Delta(e) \otimes \mathbf{1}^{\otimes L-1-j}, \quad \mathbf{1}^{\otimes j-2} \otimes A \otimes \mathbf{1}^{\otimes L-1-(j-1)} + \mathbf{1}^{\otimes j} \otimes A \otimes \mathbf{1}^{\otimes L-1-(j+1)}\right]\end{aligned}$$ because for all other $j$ terms we can apply $$[1\otimes 1,k\otimes k]=[\Delta(e), 1\otimes 1]=[\Delta(e),A]=[k\otimes k,A]=0.$$ This then equals $$\begin{aligned} &\left[k^{\otimes j} \otimes e \otimes \mathbf{1}^{\otimes L-1-j} + k^{\otimes j-1} \otimes e \otimes \mathbf{1}^{\otimes L-j}, \quad \mathbf{1}^{\otimes j-2} \otimes A \otimes \mathbf{1}^{\otimes L-1-(j-1)} + \mathbf{1}^{\otimes j} \otimes A \otimes \mathbf{1}^{\otimes L-1-(j+1)}\right]\\ &=k^{\otimes j} \otimes [e\otimes 1,A] \otimes \mathbf{1}^{L-2-j} + k^{\otimes j-2} \otimes [k\otimes e,A] \otimes \mathbf{1}^{\otimes L-j} \end{aligned}$$ Summing over $j$ yields $$\sum_{j=0}^{L-2} k^{\otimes j} \otimes [e\otimes 1,A] \otimes \mathbf{1}^{L-2-j} + \sum_{j=2}^{L} k^{\otimes j-2} \otimes [k\otimes e,A] \otimes \mathbf{1}^{\otimes L-j} = \sum_{j=0}^{L-2} k^{\otimes j} \otimes [\Delta(e),A] \otimes 
\mathbf{1}^{L-2-j} =0.$$ The argument for $f$ is similar. In Lemma \[symmetry\], there is no value of $q$ for which the off-diagonal entries of $A_{(0,0)}$ are all non–negative, since the second row is $-q^{-2}$ times the first row. This would indicate a “negative probability” of transitioning to a state with both a type 1 and a type 2 particle occupying a site. To get around this issue, we conjugate with a $G_{\epsilon}$ such that as $\epsilon\rightarrow 0$, these “negative probabilities” converge to $0$. Give $V^{\otimes L}$ the standard basis $\mathcal{B}:=\{v_{i_1}\otimes \cdots \otimes v_{i_L}: i_1,\ldots,i_L \in \{1,2,3,4\}\}$. Partition $\mathcal{B}$ into $\mathcal{B}_1 \cup \mathcal{B}_2$, where $$\mathcal{B}_1 := \{ v_{i_1} \otimes \cdots \otimes v_{i_L}: i_1,\ldots, i_L \in \{2,3,4\}\}$$ Note that $\left| \mathcal{B}_1 \right| = 3^L$ and $\left| \mathcal{B}_2 \right| = 4^L - 3^L$. Define the sets $$\begin{aligned} \mathcal{E}_1 &= \{ e_2^j e_1^k: 1\leq j\leq k\leq L\} \\ \mathcal{E}_2 &= \{ e_1^i e_2^j e_1^k: 1\leq i\leq j\leq k\leq L\}\end{aligned}$$ Let $\Omega$ be the vacuum vector $v_3^{\otimes L}$. We then have that $$e(\Omega) \in \text{span}(\mathcal{B}_1) \text{ for all } e \in \mathcal{E}_1, \quad e(\Omega) \notin \text{span}(\mathcal{B}_1) \text{ for all } e\in \mathcal{E}_2$$ Let $g_{\epsilon} \in V^{\otimes L}$ be a vector in the kernel of $A^{(L)}$, and for $x \in \mathcal{B},$ define $g_{\epsilon}(x)$ to be the coefficient of $x$ in $g_{\epsilon}.$ Suppose it satisfies $$\label{niceg} g_{\epsilon}(x)>0 \text{ for all } x\in \mathcal{B}, \quad \lim_{\epsilon \rightarrow 0} g_{\epsilon}(y) = 0 \text{ for } y\in \mathcal{B}_2, \quad \lim_{\epsilon \rightarrow 0} g_{\epsilon}(x) > 0 \text{ for } x\in \mathcal{B}_1,$$ Let $G_{\epsilon}$ be the diagonal matrix on $V^{\otimes L}$ with entries $G_{\epsilon}(x,x)=g_{\epsilon}(x)$.
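The $u=k_i$ step of Lemma \[commutes\] is a generic fact: if a local operator commutes with $k\otimes k$, then $\sum_i \mathbf{1}^{\otimes i-1}\otimes A\otimes \mathbf{1}^{\otimes L-1-i}$ commutes with $k^{\otimes L}$. A toy illustration, with a random operator projected onto the commutant of $k\otimes k$ (the matrices here are placeholders, not the $A$ of the construction):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
k = np.diag([2.0, 1.0, 0.5])      # a generic invertible diagonal "k"
kk = np.kron(k, k)

# Project a random local operator onto the commutant of k (x) k by keeping
# only the entries that connect equal diagonal values of k (x) k.
w = np.diag(kk)
mask = np.isclose(w[:, None], w[None, :])
A = rng.normal(size=(d * d, d * d)) * mask
assert np.allclose(A @ kk, kk @ A)

# L = 3 sites: A^{(L)} = A (x) 1 + 1 (x) A commutes with k^{(x) L}
I = np.eye(d)
AL = np.kron(A, I) + np.kron(I, A)
kL = np.kron(np.kron(k, k), k)
assert np.allclose(AL @ kL, kL @ AL)
```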
Let $\mathcal{L}_{\epsilon}$ be $$\label{L} \mathcal{L}_{\epsilon} = G_{\epsilon}^{-1} A^{(L)} G_{\epsilon}.$$ For a matrix $S$ that commutes with $A^{(L)}$, let $D_{\epsilon}=G_{\epsilon}^{-1}SG_{\epsilon}^{-1}$. In the $\epsilon\rightarrow 0$ limit, the subscript will be dropped. The idea for this construction of $D$ comes from Proposition 2.1 of [@CGRS]. \[duality\] If $x \in \mathrm{span}({\mathcal{B}}_1)$, then $$\lim_{\epsilon\rightarrow 0} \langle y, \mathcal{L}_{\epsilon}D_{\epsilon}(x)\rangle = \lim_{\epsilon\rightarrow 0} \langle y, D_{\epsilon}\mathcal{L}^*_{\epsilon}(x)\rangle \text{ for all } y \in \mathrm{span}({\mathcal{B}}_1)$$ (and this limit is finite). Since $A^{(L)}$ is symmetric, $$\mathcal{L}_{\epsilon}D_{\epsilon} = G_{\epsilon}^{-1}A^{(L)}SG_{\epsilon}^{-1} = G_{\epsilon}^{-1}S G_{\epsilon}^{-1} G_{\epsilon} A^{(L)}G_{\epsilon}^{-1} = D_{\epsilon} \mathcal{L}_{\epsilon}^*$$ so it remains to check that the limit is finite. But by \[niceg\], the limit can only be infinite if $x$ or $y$ is not in the span of $\mathcal{B}_1$. In order to find an explicit $g_{\epsilon}$ satisfying \[niceg\], introduce some notation first. The $q$–analog of the exponential function is $$\mathrm{exp}_q(x) := \sum_{n=0}^{\infty} \frac{x^n}{\{n\}_q!}$$ where $$\{n\}_q :=\frac{1-q^n}{1-q}.$$ The following is Proposition 5.1 from [@CGRS]. \[pseudofac\] Let $\{g_i,k_i:1\leq i\leq L\}$ be operators such that $k_ig_i=rg_ik_i$.
Define $$k^{(i)}:=k_1\cdots k_i, \quad g^{(L)}:= \sum_{i=1}^L k^{(i-1)}g_i, \quad h^{(i)}:=k_i^{-1}\cdots k_{L}^{-1}, \quad \hat{g}^{(L)}:= \sum_{i=1}^L g_i h^{(i+1)}.$$ Then $$\begin{aligned} \exp_r(g^{(L)}) &= \exp_r(g_1)\cdot \exp_r(k^{(1)}g_2) \cdot \cdots \cdot \exp_r(k^{(L-1)}g_L) \\ \exp_r(\hat{g}^{(L)}) &= \exp_r(g_1 h^{(2)} )\cdot \cdots \cdot \exp_r(g_{L-1}h^{(L)}) \exp_r(g_L) \end{aligned}$$ In this paper, the proposition will be applied with $$\begin{aligned} g_i &= 1^{\otimes i-1} \otimes e \otimes 1^{\otimes L-i}\\ k_i &= 1^{\otimes i-1}\otimes k\otimes 1^{\otimes L-i}\end{aligned}$$ where $e,k$ can be either $e_1,k_1$ or $e_2,k_2$. Note that the $L$–fold co–product $\Delta^{(L)}e$ is of the form $g^{(L)}$ in the proposition. Now let $$\label{goodg} g_{\epsilon}:= \left(\exp_{q^4}\left({{\Delta^{(L)}}} e_2 \right) \cdot \exp_{q^2}\left({{\Delta^{(L)}}} e_1\right) + \epsilon \sum_{e \in \mathcal{E}_2} \Delta^{(L)}e \right) (v_3 \otimes \cdots \otimes v_3)$$ It is immediate from the definitions that \[niceg\] holds. The fact that $g_{\epsilon}$ is in the kernel of $A^{(L)}$ follows from Lemma \[commutes\]. The first statement in Theorem \[C2Construction\] can now be proved. \[C2gen\] The restriction of $\mathcal{L}$ to ${\mathcal{B}_1}$ is the generator of spin $1/2$ type $C_2$ ASEP on $\{1,\ldots,L\}$ with domain wall boundary conditions.
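Proposition \[pseudofac\] can be checked numerically in the smallest nontrivial case: $\mathcal{U}_q(\mathfrak{sl}_2)$ acting on $\mathbb{C}^2$, where $ke=q^2ek$ forces $r=q^2$, and with $L=2$ the element $\Delta(e)=e\otimes 1+k\otimes e$ plays the role of $g^{(L)}$. A hedged sketch (the truncation level is ours; on these nilpotent inputs the series terminates, so the check is exact):

```python
import numpy as np

q = 0.6

def qint(n, r):
    """{n}_r = (1 - r^n) / (1 - r)."""
    return (1 - r**n) / (1 - r)

def qexp(X, r, terms=20):
    """Truncated q-exponential exp_r(X) = sum_{n >= 0} X^n / {n}_r!."""
    out = np.eye(X.shape[0])
    P = np.eye(X.shape[0])
    fact = 1.0
    for n in range(1, terms):
        P = P @ X
        fact *= qint(n, r)
        out = out + P / fact
    return out

# U_q(sl_2) on C^2: e = E_{12}, k = diag(q, 1/q), so k e = q^2 e k
e = np.array([[0.0, 1.0], [0.0, 0.0]])
k = np.diag([q, 1.0 / q])
assert np.allclose(k @ e, q**2 * (e @ k))

# Pseudofactorization with L = 2 and r = q^2:
#   exp_r(Delta(e)) = exp_r(e (x) 1) * exp_r(k (x) e)
I = np.eye(2)
De = np.kron(e, I) + np.kron(k, e)
lhs = qexp(De, q**2)
rhs = qexp(np.kron(e, I), q**2) @ qexp(np.kron(k, e), q**2)
assert np.allclose(lhs, rhs)
```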
We will use the lemma: \[generator\] The generator of the generalized two particle type ASEP on $\{1,\ldots,L\}$ with domain wall boundary conditions is of the form $$\sum_{i=1}^{L-1} 1^{\otimes i-1} \otimes H \otimes 1^{\otimes L-i-1}$$ where the matrix of $H$ with respect to the basis $(0,0),(0,1),(1,0),(1,1),(2,1),(1,2),(2,2),(0,2),(2,0)$ is $$\left( \begin{array}{ccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -L(1,0) & L(1,0) & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & R(1,0) & -R(1,0) & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -L(1,2) & L(1,2) & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & R(1,2) & -R(1,2) & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -L(2,0) & L(2,0) \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & R(2,0) & -R(2,0) \\ \end{array} \right)$$ Since particles in the two particle type ASEP jump by at most one site, and the jump rates are the same across all bonds, the generator can be written in this form. The matrix entries can be found from the definition of a generator of a Markov process. To finish the proof of the theorem, it remains to show that $G^{-1}A^{(L)}G$ matches the expression in the lemma. From Proposition \[pseudofac\], $G$ can be written in the form $$G(v)= g_1(v) \cdots g_L(v) v, \text{ for } v=v_{i_1}\otimes \cdots \otimes v_{i_L}$$ where $g_j(v_{i_1}\otimes \cdots \otimes v_{i_L})$ only depends on the values of $i_1,\ldots,i_j$ irrespective of order. In other words, $g_j$ only depends on the cardinalities of the sets $\{k: 1\leq k\leq j, i_k=0\}, \{k: 1\leq k\leq j, i_k=1\}$. Thus, $$G^{-1}A^{(L)}G(v) = \sum_{i=1}^{L-1} 1^{\otimes i-1} \otimes H\otimes 1^{\otimes L-i-1}(v)$$ where $H=B^{-1}AB$ for some diagonal matrix $B$. Since $g$ is in the kernel of $A^{(L)}$, each row of $H$ must sum to $0$. Conjugating by a diagonal matrix does not change the diagonal entries, so Lemma \[symmetry\] shows that $H$ has the necessary form. Duality ------- We first prove an equivalent definition of duality.
\[DualityProof\] Suppose that $\mathcal{L}$ is the generator of the Markov process $X(t)$ on state space $X$. Let $D$ be a function on $X\times X$ viewed as an operator in the sense of the formal sum $$D(y) = \sum_{x\in X} D(x,y) \mathbf{x}.$$ If $Z,Y$ are subsets of $X$ such that for all $(z,y)\in Z\times Y$ $$\langle z, \mathcal{L}D(y)\rangle = \langle z, D\mathcal{L}^*(y)\rangle$$ then $Z \times Y \subseteq \mathcal{S}_D$. By definition $$\begin{aligned} e^{t\mathcal{L}}D(y) &= \sum_{x} e^{t\mathcal{L}}\left( D(x,y)\mathbf{x} \right)\\ &= \sum_{x,z} D(x,y) e^{t\mathcal{L}}(z,x)\mathbf{z}\\ &= \sum_{x,z} \mathbb{P}_t(z \rightarrow x)D(x,y) \mathbf{z}\end{aligned}$$ and $$\begin{aligned} De^{t\mathcal{L}^*}(y) &= \sum_{x} D \left( e^{t\mathcal{L}}(y,x)\mathbf{x} \right)\\ &= \sum_{z,x} D(z,x) e^{t\mathcal{L}}(y,x)\mathbf{z}\\ &= \sum_{z,x} \mathbb{P}_t(y\rightarrow x)D(z,x)\mathbf{z}\end{aligned}$$ By the assumptions of the lemma this implies that for all $z\in Z$ and $y\in Y$, $$\label{trouble} \sum_{x} \mathbb{P}_t(z \rightarrow x)D(x,y) = \sum_{x} \mathbb{P}_t(y\rightarrow x)D(z,x)$$ which is equivalent to saying that for all $z\in Z,y\in Y $, $$\mathbb{E}_z[D(X(t),y)] = \mathbb{E}_y[D(z,X(t))],$$ which means exactly that $(z,y)\in \mathcal{S}_D$. By Proposition \[duality\], $D$ can be used to obtain a suitable duality function. The difficulty lies in a simple fact: in equation \[trouble\], ignoring the summation over states $x$ with sites containing both a particle of type $1$ and a particle of type $2$ will not always leave the sum unchanged. However, certain duality functions will still work: \[Proper\] Suppose $y,z$ are such that $$D(x,y)=D(z,x)=0 \text{ for all } x\notin \mathrm{span}(\mathcal{B}_1)$$ Then $(z,y)\in \mathcal{S}_D$. With the assumptions of the lemma, the summation over $x\notin \mathrm{span}(\mathcal{B}_1)$ in \[trouble\] is $0$, as needed. Now it remains to find proper duality functions $D$ satisfying Lemma \[Proper\]. There are two natural choices.
The first is to consider $$S:=\exp_{q^4}\left({{\Delta^{(L)}}} e_2 \right) \cdot \exp_{q^2}\left({{\Delta^{(L)}}} e_1\right)$$ and set $D_{\epsilon}=G_{\epsilon}^{-1}SG_{\epsilon}^{-1}$, with $D=\lim_{\epsilon\rightarrow 0}D_{\epsilon}$. The idea behind this choice is as follows. In order for Lemma \[Proper\] to hold, the symmetry $S$ should not create a site with both a type $1$ and a type $2$ particle. Since $e_1$ creates a particle of type $1$ and $e_2$ replaces a particle of type $1$ with a particle of type $2$, this holds as long as $\xi$ does not contain any particles of type $2$. Below, recall that $$v_3 \in V(-1,0), \quad v_4 \in V(0,-1), \quad v_2 \in V(0,1)$$ \[computes\] If $\xi_i=0,1$ for all $i$, then $$S(\eta,\xi) = \prod_{i=1}^L 1_{\{\xi_i\leq \eta_i\}} q^{1_{\{\xi_i=0,\eta_i\neq 0\}} \sum_{j=1}^{i-1} {\color{black}(1_{\{\xi_j=1\}} - 1_{\{\xi_j=0\}} ) }} (q^{-2})^{1_{\{\eta_i=2\}} \sum_{j=1}^{i-1} (1_{\{\xi_j=0,\eta_j\neq 0\}}+ 1_{\{\xi_j=1\}} ) }$$ Use Proposition \[pseudofac\]. 
Since $e_1^2$ and $e_2^2$ act as $0$ on $V$, it is equivalent to consider $$\begin{gathered} (1+e_2\otimes 1^{L-1})(1+k_2\otimes e_2\otimes 1^{L-2})\ldots (1+(k_2)^{\otimes (L-1)}\otimes e_2)\\ (1+e_1\otimes 1^{L-1})(1+k_1\otimes e_1\otimes 1^{L-2})\ldots (1+(k_1)^{\otimes (L-1)}\otimes e_1).\end{gathered}$$ First, move the $e_2$ terms from left to right to get $$\begin{gathered} (1+e_2\otimes 1^{L-1})(1+e_1\otimes 1^{L-1}) (1+k_2\otimes e_2\otimes 1^{L-2})(1+k_1\otimes e_1\otimes 1^{L-2}) \\ \ldots (1+(k_2)^{\otimes (L-1)}\otimes e_2) (1+(k_1)^{\otimes (L-1)}\otimes e_1).\end{gathered}$$ Due to the commutation relation $k_2e_1 = q^{-2}e_1k_2$, this produces the term $$\prod_{i=1}^L (q^{-2})^{1_{\{\eta_i=2\}} \sum_{j=1}^{i-1} 1_{\{\xi_j=0,\eta_j\neq 0\}}}$$ Next, applications of the $e_1$ terms to $\xi$ yield $$\prod_{i=1}^L q^{1_{\{\xi_i=0,\eta_i\neq 0\}} \sum_{j=1}^{i-1} (1_{\{\xi_j=1\}} - 1_{\{\xi_j=0\}} )} .$$ And then the applications of the $e_2$ terms yield $$\prod_{i=1}^L (q^{-2})^{1_{\{\eta_i=2\}} \sum_{j=1}^{i-1} 1_{\{\xi_j=1\}}}$$ and combining all three lines gives the result. Recall $$\begin{aligned} N_k^R(\eta) &= \left| \{j > k: \eta_j \neq 0\} \right|\\ N_k^L(\eta) &= \left| \{j < k: \eta_j \neq 0\} \right|.\end{aligned}$$ For $n_1<\ldots<n_r$, let $\xi^{(n_1,\ldots,n_r)}$ be the state where $\xi_{n_s}=1$ and all other $\xi_i=0$. As before, $\Omega$ is the vacuum vector.
Proposition \[computes\] immediately implies: We have $$G(\eta):=S(\eta,\Omega) = \prod_{i=1}^L q^{1_{\{\eta_i\neq 0\}} (1-i)} (q^{-2})^{1_{\{\eta_i=2\}} N_i^L(\eta) }$$ $$G(\xi^{(n_1,\ldots,n_r)}) = \prod_{s=1}^r q^{1-n_s}$$ And $$\begin{aligned} S(\eta,\xi^{(n_1,\ldots,n_r)}) &= \prod_{s=0}^r 1_{\{\eta_{n_s}\neq 0\}}(q^{-2})^{1_{\{\eta_{n_s}=2\}}N_{n_s}^L(\eta)}\prod_{i=n_s+1}^{n_{s+1}-1} q^{1_{\{\eta_i\neq 0\}}(2s-i+1)} (q^{-2})^{1_{\{\eta_i=2\}}N_i^L(\eta)}\\ &=1_{\{\eta_{n_1},\ldots,\eta_{n_r}\neq 0\}} \prod_{i=1}^L (q^{-2})^{1_{\{\eta_i=2\}}N_i^L(\eta)} \times \prod_{s=0}^r \prod_{i=n_s+1}^{n_{s+1}-1} q^{1_{\{\eta_i\neq 0\}}(2s-i+1)} \end{aligned}$$ Theorem \[C2Duality\](2) can now be proved. Suppose that $\eta_i=2$ exactly when $i\in \{m_1,\ldots,m_l\}$ (and possibly $1$ elsewhere). Then $$S(\eta,\xi^{(n_1,\ldots,n_r)}) = 1_{\{\eta_{n_1},\ldots,\eta_{n_r}\neq 0\}} \prod_{k=1}^l q^{-2N_{m_k}^L(\eta)} \times \prod_{s=0}^r \prod_{i=n_s+1}^{n_{s+1}-1} q^{1_{\{\eta_i\neq 0\}}(2s-i+1)}$$ so that $$\begin{aligned} D(\eta,\xi^{(n_1,\ldots,n_r)}) &= \frac{1}{G(\eta)}1_{\{\eta_{n_1},\ldots,\eta_{n_r}\neq 0\}} \prod_{k=1}^l q^{-2N_{m_k}^L(\eta)} \times \prod_{s=0}^r q^{n_s-1}\prod_{i=n_s+1}^{n_{s+1}-1} q^{1_{\{\eta_i\neq 0\}}(2s-i+1)} \\ &=1_{\{\eta_{n_1},\ldots,\eta_{n_r}\neq 0\}} \prod_{i=1}^L q^{1_{\{\eta_i\neq 0\}}(i-1)} \times \prod_{s=0}^r q^{n_s-1}\prod_{i=n_s+1}^{n_{s+1}-1} q^{1_{\{\eta_i\neq 0\}}(2s-i+1)} \\ &=1_{\{\eta_{n_1},\ldots,\eta_{n_r}\neq 0\}} \prod_{s=0}^r q^{2(n_s-1)}\prod_{i=n_s+1}^{n_{s+1}-1} q^{1_{\{\eta_i\neq 0\}}(2s)} \\ &=1_{\{\eta_{n_1},\ldots,\eta_{n_r}\neq 0\}} \prod_{s=1}^r q^{2(n_s-1)} q^{2(N_{n_s}^R(\eta)-(r-s))}\\ &= q^{-2r - (r-1)r}\prod_{s=1}^r 1_{\{\eta_{n_s}\neq 0\}} q^{2n_s + 2N_{n_s}^R(\eta)}\end{aligned}$$ which is Theorem \[C2Duality\](2). Now consider the case when $$S=\exp_{q^4}\left({{\Delta^{(L)}}} e_2 \right).$$ In this case, any $\xi$ will work.
$$S(\eta,\xi) = \prod_{i=1}^L \left( 1_{\{\xi_i = \eta_i\}} + 1_{\{\xi_i=1,\eta_i=2\}}(q^2)^{\sum_{j=1}^{i-1} (1_{\{\xi_j=2\}} - 1_{\{\xi_j=1\}}) }\right)$$ The applications of the $e_2$ only occur when $\xi_i=1,\eta_i=2$, and the lemma follows because $k_{(0,2)}$ maps $v_3$ to $v_3$, $v_4$ to $q^{-2}v_4$ and $v_2$ to $q^2v_2$. Since $$G(\eta) = \prod_{i=1}^L q^{1_{\{\eta_i\neq 0\}} (1-i)} (q^{-2})^{1_{\{\eta_i=2\}} N_i^L(\eta) }$$ we have $$D(\eta,\xi) = \prod_{i=1}^L \left( 1_{\{\xi_i = \eta_i=1\}}q^{2(i-1)} + 1_{\{\xi_i = \eta_i=2\}}q^{2(i-1 + N^L_i(\eta) + N^L_i(\xi))} + 1_{\{\xi_i=1,\eta_i=2\}}(q^2)^{N_i^L(\eta) + i-1 + \sum_{j=1}^{i-1} (1_{\{\xi_j=2\}} - 1_{\{\xi_j=1\}} )}\right)$$ which simplifies to the expression in Theorem \[C2Duality\](1). Type $A_2$ ASEP {#A2} =============== Let $C$ be the central element of $\mathcal{U}_q(\mathfrak{gl}_3)$ from \[GZBC\]. \[A2Gen\] With respect to the basis $v_1\otimes v_1, v_2\otimes v_2,v_3\otimes v_3, v_2\otimes v_1, v_1\otimes v_2, v_3\otimes v_1,v_1\otimes v_3, v_3\otimes v_2,v_2\otimes v_3,$ the matrix of $\Delta (C)$ on $V\otimes V$ is $$\left( \begin{array}{ccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -q^2 & q & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & q & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -q^2 & q & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & q & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -q^2 & q \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & q & -1 \\ \end{array} \right)$$ By computation: $$\begin{aligned} e_1 &= q^{-1}E_{42} + E_{52} + E_{14} + qE_{15} + E_{68}+E_{79}\\ f_1 &= q^{-1}E_{41} + E_{51} + E_{24}+qE_{25} + E_{86} + E_{97} \\ k_{(a,b,c)} &= \mathrm{diag}\left( q^{2a},q^{2b},q^{2c}, q^{a+b},q^{a+b},q^{a+c},q^{a+c},q^{b+c},q^{b+c}\right)\\ e_2 &=q^{-1}E_{83} + E_{93} +E_{46} + E_{57} + E_{28} + q E_{29}\\ f_2 &= q^{-1}E_{82} + E_{92} + E_{64} + E_{75} + E_{38} + qE_{39} \end{aligned}$$ The symmetry in this case is
$$S:=\exp_{q^2}\left({{\Delta^{(L)}}} e_2 \right) \cdot \exp_{q^2}\left({{\Delta^{(L)}}} e_1\right)$$ \[ExplicitS\] $$S(\eta,\xi)=\prod_{i=1}^L q^{1_{\{\xi_i = 0, \eta_i\neq 0\}} \sum_{j=1}^{i-1} (1_{\{\xi_j=1\}} - 1_{\{\xi_j=0\}})} (q^{-1})^{1_{\{\eta_i=2,\xi_i\neq 2\}}\sum_{j=1}^{i-1} ( 1_{\{\xi_j=0, \eta_j\neq 0\}} + 1_{\{\xi_j=1\}} - 1_{\{\xi_j=2\}})}$$ implying that $$G(\eta):=S(\eta,\Omega)= \prod_{i=1}^L q^{1_{\{\eta_i\neq 0\}}(1-i)}(q^{-1})^{ 1_{\{\eta_i=2\}} \sum_{j=1}^{i-1} 1_{\{\eta_j\neq 0\}} }$$ The argument is identical to that of Proposition \[computes\]. Therefore we see that the operator $$\mathcal{L}:=G^{-1}A^{(L)}G$$ is the generator of spin $1/2$ type $A_2$ ASEP on $\{1,\ldots,L\}$ with domain wall boundary conditions, and the function $$D:=G^{-1}SG^{-1}$$ is a self–duality function explicitly given by the expression in Theorem \[SDF\]. The first statement follows from an argument similar to that of Theorem \[C2gen\]. The second statement is a direct computation using Proposition \[ExplicitS\].
By Proposition \[ExplicitS\], $$\begin{aligned} D(\eta,\xi) &= \prod_{i=1}^L q^{1_{\{\xi_i = 0, \eta_i\neq 0\}} \sum_{j=1}^{i-1} (1_{\{\xi_j=1\}} - 1_{\{\xi_j=0\}})} (q^{-1})^{1_{\{\eta_i=2,\xi_i\neq 2\}}\sum_{j=1}^{i-1} ( 1_{\{\xi_j=0, \eta_j\neq 0\}} + 1_{\{\xi_j=1\}} - 1_{\{\xi_j=2\}})} \\ &\quad \quad \times q^{1_{\{\eta_i\neq 0\}}(i-1)}q^{ 1_{\{\eta_i=2\}} \sum_{j=1}^{i-1} 1_{\{\eta_j\neq 0\}} } q^{1_{\{\xi_i\neq 0\}}(i-1)}q^{ 1_{\{\xi_i=2\}} \sum_{j=1}^{i-1} 1_{\{\xi_j\neq 0\}} }\end{aligned}$$ which equals $\prod_{i=1}^L f(\eta_i,\xi_i)$ where $f(\cdot,\cdot)$ equals $$\begin{aligned} 1, &\text{ if } \xi_i=0, \eta_i=0\\ q^{\sum_{j=1}^{i-1} \left(1_{\{\xi_j=1\}} - 1_{\{\xi_j=0\}}\right)} \cdot q^{i-1}, &\text{ if } \xi_i=0,\eta_i=2 \\ q^{\sum_{j=1}^{i-1} \left(1_{\{\xi_j=1\}} - 1_{\{\xi_j=0\}}\right)} \cdot q^{-\sum_{j=1}^{i-1} ( 1_{\{\xi_j=0,\eta_j\neq 0\}} + 1_{\{\xi_j=1\}} - 1_{\{\xi_j=2\}}) } \cdot q^{i-1 + N_i^L(\eta) }, &\text{ if } \xi_i=0,\eta_i=1 \\ q^{2(i-1)}, &\text{ if } \xi_i=2,\eta_i=2\\ q^{-\sum_{j=1}^{i-1} (1_{\{\xi_j=0,\eta_j\neq 0\}} + 1_{\{\xi_j=1\}} - 1_{\{\xi_j=2\}}) + 2(i-1) + N_i^L(\eta) } , &\text{ if } \xi_i=2,\eta_i=1\\ q^{2(i-1) + N_i^L(\xi) + N_i^L(\eta)}, &\text{ if } \xi_i=1,\eta_i=1\end{aligned}$$ If there are $s_2$ type $2$ particles and $s$ type $1$ particles in $\xi$ to the left of $i$, then this becomes $$\begin{aligned} 1, &\text{ if } \xi_i=0, \eta_i=0\\ q^{ s_2 - (i - 1 - s_2-s ) } \cdot q^{i-1} = q^{2s_2+s}, &\text{ if } \xi_i=0,\eta_i=2 \\ q^{ s_2 - (i - 1 - s_2-s ) } \cdot q^{2 {r}} \cdot q^{i-1 } = q^{2s_2+3s}, &\text{ if } \xi_i=0,\eta_i=1 \\ q^{2(i-1)}, &\text{ if } \xi_i=2,\eta_i=2\\ q^{2s + 2(i-1) } , &\text{ if } \xi_i=2,\eta_i=1\\ q^{2(i-1) + s + s_2 + N_i^L(\eta)}, &\text{ if } \xi_i=1,\eta_i=1\end{aligned}$$ The $q^{2s}$ term in the fifth line and $q^{s+s_2}$ in the sixth line result in a contribution from the configuration of $\xi$. 
If $\xi$ has a total of $r$ type $1$ particles all to the left of $r'$ type $2$ particles, then the contribution is $ q^{(r-1)r/2 +r'r}$, which is constant with respect to the dynamics of $\xi$. Each time a type $1$ particle jumps to the right of a type $2$ particle, the contribution is unchanged, and hence it remains constant. Let $\xi$ denote the particle configuration with particles of type $1$ at $n_1,\ldots,n_r$ and particles of type $2$ at $m_1,\ldots,m_{r'}$. The sixth line yields $$\prod_{s=1}^{r} q^{N_{n_s}^L(\eta)} = \prod_{i=1}^L q^{1_{\{\eta_i\neq 0\}} \cdot \tilde{N}^R_i(\xi)} = \prod_{s=0}^r q^{r-s}\prod_{i = n_s + 1}^{n_{s+1}-1} q^{1_{\{\eta_i\neq 0\}} \cdot (r-s)} = \mathrm{const} \prod_{s=0}^r \prod_{i = n_s + 1}^{n_{s+1}-1} q^{1_{\{\eta_i\neq 0\}} \cdot (r-s)}$$ This combines with the $q^s$ and $q^{3s}$ in the second and third lines to contribute $$\begin{aligned} \prod_{s=0}^r \prod_{i = n_s + 1}^{n_{s+1}-1} q^{1_{\{\eta_i\neq 0\}} \cdot (r-s)} q^{s\cdot 1_{\{\eta_i\neq 0\}}} q^{2s\cdot 1_{\{\eta_i=1\}}} = \mathrm{const} \prod_{s=1}^r q^{2s\left( \tilde{N}^L_{n_{s+1}}(\eta) - \tilde{N}_{n_s}^L(\eta) -1 \right) }\end{aligned}$$ Similarly, the $2s_2$ contributes $$\prod_{s'=1}^{r'} \prod_{i = m_{s'} + 1}^{m_{s'+1}-1} q^{2s' \cdot 1_{\{\eta_i\neq 0\}}} = \mathrm{const} \prod_{s'=1}^{r'} q^{2s'\left( {N}^L_{m_{s'+1}}(\eta) - {N}_{m_{s'}}^L(\eta) -1 \right) }$$ Combining the terms yields $$D(\eta,\xi) = \mathrm{const} \prod_{s=1}^r 1_{\{\eta_{n_s}=1\}}q^{2\tilde{N}^R_{n_s}(\eta)+2n_s} \prod_{s'=1}^{r'} 1_{\{\eta_{m_{s'}}\neq 0\}}q^{2N^R_{m_{s'}}(\eta)+2m_{s'}} ,$$ which proves Theorem \[SDF\]. We remark that Theorem \[SDF\] provides a formula for the $r+r'$ moments of the exponentiated current of type $A_2$ ASEP at distinct points. By following the argument in [@IS], it should be possible to write the moments at any points in terms of $k$–particle evolution for $k\leq r+r'$, but this is not pursued here. F.C. Alcaraz, R.Z.
Bariev: Exact solution of asymmetric diffusions with second-class particles of arbitrary size. Braz. J. Phys. 30 (2000), 13–26. F.C. Alcaraz, V. Rittenberg. Reaction–diffusion processes as physical realizations of Hecke algebras. Physics Letters B **314** 377–380 (1993). M. Balázs and T. Seppäläinen, order of current variance and diffusivity in the asymmetric simple exclusion process, Annals of Mathematics, **171** (2010), 1237–1265. V. Belitsky, G.M. Schütz.: Diffusion and coalescence of shocks in the partially asymmetric exclusion process. Electron. J. Prob. **7**, Paper No. 11, 1–21 (2002) V. Belitsky, G.M. Sch[" u]{}tz, Self-Duality for the Two-Component Asymmetric Simple Exclusion Process, preprint: [arXiv:1504.05096](http://arxiv.org/abs/1504.05096) V. Belitsky, G.M. Sch[" u]{}tz, Quantum algebra symmetry and reversible measures for the ASEP with second-class particles, preprint: [arXiv:1504.06958v1](http://arxiv.org/abs/1504.06958v1) A. Borodin, I. Corwin, T. Sasamoto, From duality to determinants for q-TASEP and ASEP, Annals of Probability 2014, Vol. 42, No. 6, 2314–2382. L. Cantini, Algebraic Bethe Ansatz for the two species ASEP with different hopping rates, J. Phys. A: Math. Theor. 41 095001 (2008) G. Carinci, C. Giardinà, F. Redig, T. Sasamoto, A generalized Asymmetric Exclusion Process with $U_q(\mathfrak{gl}_2)$ stochastic duality, preprint: [arXiv:1407.3367](http://arxiv.org/abs/1407.3367) S. Chatterjee, G.M. Schütz, Determinant representation for some transition probabilities in the TASEP with second class particles, Journal of Statistical Physics, Volume 140, Number 5, 900–916 (2010). I. Corwin, L. Petrov, Stochastic higher spin vertex models on the line, preprint: [arXiv:1502.07374 ](http://arxiv.org/abs/1502.07374) B. Derrida, M.R. Evans, J. Phys. A: Math. Gen. 32 (1999) 4833–4850. M.D. Gould, R.B. Zhang, and A.J. Bracken, Generalized Gel’fand invariants and characteristic identities for quantum groups, J. Math Phys **32** 2298 (1991). T. 
Imamura, T. Sasamoto, *Current moments of 1D ASEP by duality*. Phys. **142**(5), 919–930 (2011) J.C. Jantzen, Lectures on Quantum Groups, American Mathematical Society (1995). J.H.H. Perk, C.L. Schultz, New families of commuting transfer matrices in $q$–state vertex models. Phys. Lett **84A**, 407–410 (1981). G. Schütz, S. Sandow: Non-abelian symmetries of stochastic processes: derivation of correlation functions for random vertex models and disordered interact- ing many-particle systems. Phys. Rev. E **49**, 2726–2744 (1994) G. Sch[" u]{}tz, Duality relations for asymmetric exclusion processes. J. Stat. Phys. **86**(5/6), 1265–1287 (1997) C. Tracy, H. Widom, On the Distribution of a Second Class Particle in the Asymmetric Simple Exclusion Process, J. Phys. A: Math. Theor. 42 (2009) 425002 (6pp)
--- abstract: 'The motion of molecular motors is essential to the biophysical functioning of living cells. In principle, this motion can be regarded as a process with multiple chemical states: the molecular motor jumps between different chemical states, and in each chemical state it moves forward or backward in a corresponding potential. Mathematically, the motion of a molecular motor can therefore be described by several coupled one-dimensional hopping models or by several coupled Fokker-Planck equations. To understand the basic properties of molecular motors, this paper gives a detailed analysis of the simplest case, in which there are only two chemical states. In fact, many of the existing models, such as the flashing ratchet model, can be regarded as two-state models. From the explicit expression of the mean velocity, we find that the mean velocity of a molecular motor might be nonzero even if the potential in each state is periodic, i.e., even if there is no energy input to the molecular motor in either of the two states. At the same time, the mean velocity might be zero even if there is energy input to the molecular motor. In general, the velocity of a molecular motor depends not only on the potentials (or the corresponding forward and backward transition rates) in the two states, but also on the transition rates between the two chemical states.' author: - Yunxin Zhang title: ' **The mean velocity of two-state models of molecular motor** ' --- Introduction ============ Molecular motors are biogenic force generators acting in the nanometer range that convert chemical energy into mechanical work [@Bray2001; @Howard2001] and play essential roles in eukaryotic cells [@Badoual2002; @Lipowsky2005; @Riedel2007; @Zhang2009; @Howard2009].
In the superfamily of molecular motors [@Vale2003], the most extensively studied ones are conventional kinesin [@Fisher2001; @Carter2005; @Block2007; @Zhang2008; @Toprak2009; @Guydosh2009; @Hyeon2009; @Hariharan2009], cytoplasmic dynein [@Samara2006; @Toba2006; @Gennerich2009; @Houdusse2009; @Roberts2009; @Kardona2009; @Serohijos2009], myosin V [@Rosenfeld2004; @Purcell2005; @Veigel2005; @Sakamoto2008; @Jackson2009; @Fedorov2009], and ${\rm F_0F_1}-$ATPase [@Wang1998; @Kazuhiko2000; @Nishizaka2004; @Adachi2007; @Muneyuki2007; @Junge2009; @Miller2009]. Conventional kinesin walks hand-over-hand along a microtubule for about 1 $\mu$m toward the plus end of the microtubule before dissociating from the track [@Block1990; @Yildiz2004; @Asbury2003], with a step size of 8.2 nm [@Schnitzer1997; @Coy1999; @Fehr2007] and a stall force of 6$-$8 pN [@Guydosh2009; @Gennerich2009; @Yildiz2008; @Block2007; @Hackney2005; @Taniguchi2005; @Carter2005; @Nishiyama2002; @Schnitzer2000], which is independent of ATP concentration [@Carter2005]. In saturating ATP solution, its zero-load velocity is about 700$-$1000 nm/s [@Nishiyama2002; @Carter2005; @Block2003]. Cytoplasmic dynein can also walk hand-over-hand along the microtubule, with an average step size of 8.2 nm [@Kardona2009; @Gennerich2007; @Watanabe2007; @Mallik2004; @Hirakawa2000], but toward the minus end [@Toba2006]. Recent experimental data indicate that its stall force is also about 6$-$8 pN [@Gennerich2007; @Hirakawa2000; @Cho2008] and independent of ATP concentration [@Gennerich2007]. For dynein purified from mammals, the maximal velocity is also about 700$-$1000 nm/s [@Toba2006; @Ross2008; @King2000]. Myosin V is also a processive motor, but it walks along actin filaments, with an average step size of 36 nm and an ATP-independent stall force of 2$-$3 pN [@Cappello2007; @Tsygankov2007; @Christof2006; @Clemen2005; @Kolomeisky2003; @Uemura2004].
${\rm F_0F_1}-$ATPase consists of two portions, ${\rm F_0}$ and ${\rm F_1}$, connected by a $\gamma$ shaft. It can use the proton-motive force across the mitochondrial membrane to make ATP from ADP and Pi, and it can also use ATP to drive the rotation of the $\gamma$ shaft [@Oster2000]. Recent experiments found that many other molecular motors can also move processively, such as kinesin CENP-E [@Yardimci2008], myosin VI [@Sweeney2007; @Oguchi2008; @Bryant2007; @Iwaki2009], myosin VIIa [@Udovichenko2002], myosin IXb [@Inoue2002], myosin XI [@Tominaga2003], and T7 DNA helicase [@Kim2002]. There are many mathematical models describing the motion of molecular motors, such as the Fokker-Planck equation [@Risken1989; @Zhang20091; @Wang2002; @Howard2001], the Langevin equation [@Gehlen2008], and the master equation [@Fisher2001; @Nieuwenhuizen2004; @Kolomeisky2007; @Liepelt2007; @Zhang20093]. However, so far, almost all explicit formulations of the biophysical properties of molecular motors, such as the mean velocity [@Fisher1999; @Howard2001], the effective diffusion constant [@Reimann2001; @Zhang20092], and the mean first passage time [@Pury2003; @Kolomeisky2005], have been obtained by employing one-state models, in which the molecular motor moves along its track in one tilted periodic potential [^1]. One of the basic properties of such models is that the mean velocity of the molecular motor does not vanish as long as the input energy is positive. These models and their corresponding results are valuable for describing the [*tightly*]{} mechanochemically coupled cases of motor motion. However, recent experimental data indicate that the motion of molecular motors, including conventional kinesin [@Bieling2008; @Endres2006; @Seidel2008; @Shaevitz2005; @Yildiz2008], cytoplasmic dynein [@Gao2006], myosin II [@Nishikawa2008; @Masuda2009], and $\textrm{F}_1$-ATPase [@Gerritsma2009], is usually [*loosely*]{} coupled to ATP hydrolysis, i.e., the input energy might be nonzero even if the mean velocity vanishes.
To study these loosely coupled cases, it is necessary to use multi-state models. In fact, multi-state models have been used by several authors [@Lipowsky2000; @Lipowsky2003; @Zhang20091]. However, it is hard to get meaningful explicit results for general $N$-state models, and usually numerical calculations are employed [@Chen1999; @Wang2003; @Wang2004]. In this paper, we give a detailed theoretical analysis of the two-state models. The two-state models possess most of the essential properties of the general multi-state models, and they have been used in many studies [@Astumian1997; @Parmeggiani1999; @Parrondo2002; @Reimann20021; @Chen1999; @Bier1993; @Frank1995; @Prost1994]. There are two different forms of two-state models: (1) two coupled one-dimensional hopping models, and (2) two coupled one-dimensional Fokker-Planck equations, which are equivalent to two coupled Langevin equations (in fact, it can also be verified that any one-dimensional hopping model can be well approximated by a one-dimensional Fokker-Planck equation [@Zhang2010]). In the following, we give the explicit formulation of the mean velocity of a molecular motor using two coupled one-dimensional hopping models and two coupled one-dimensional Fokker-Planck equations, respectively. From this formulation, the [*stall force*]{}, i.e., the external load under which the mean velocity vanishes, can be obtained. We find that the mean velocity, and consequently the stall force, depend not only on the potentials in the two states (or the corresponding forward and backward transition rates), but also on the transition rates between the two chemical states. In general, part of the input energy dissipates into the environment, and so the [*energy efficiency*]{}, i.e., the ratio of the mechanical work done by the molecular motor to the input energy, might be far less than 1. For example, the mean velocity might be zero even if the input energy in each state is nonzero.
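As a concrete illustration of form (1), the following is a minimal Gillespie-type simulation sketch of a motor hopping on the integer lattice while stochastically switching between the two chemical states. All rate values here are illustrative choices (with $\prod_i F_i/B_i=1$, i.e., an untilted state-1 potential and a flat state 2, a flashing-ratchet-like setup), not parameters taken from the paper.

```python
import numpy as np

# Minimal Gillespie simulation of two coupled hopping models (illustrative
# rates only; period N = 4, untilted state-1 potential, flat state 2).
rng = np.random.default_rng(0)
N = 4
F = np.array([2.0, 0.5, 1.5, 1.0])   # forward rates in state 1
B = np.array([0.5, 2.0, 1.0, 1.5])   # backward rates in state 1
f = b = np.ones(N)                   # state 2: all hopping rates equal
wa, wd = 1.0, 1.0                    # switching rates 1 -> 2 and 2 -> 1

def simulate(t_end=200.0):
    """Simulate one trajectory up to time t_end; return (position, time)."""
    t, pos, state = 0.0, 0, 0
    while t < t_end:
        n = pos % N
        rates = np.array([F[n], B[n], wa]) if state == 0 else np.array([f[n], b[n], wd])
        total = rates.sum()
        t += rng.exponential(1.0 / total)   # exponential waiting time
        k = rng.choice(3, p=rates / total)  # which event fires
        if k == 0:
            pos += 1
        elif k == 1:
            pos -= 1
        else:
            state = 1 - state
    return pos, t

pos, t = simulate()
velocity = pos / t   # crude estimate of the mean velocity (sites per unit time)
```

Averaging over many trajectories (or much longer runs) estimates the mean velocity; the exact steady-state formulas derived below avoid this sampling error.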
The organization of this paper is as follows. In the next section, the two coupled one-dimensional hopping models are discussed, and then in Section [**III**]{}, the two coupled Fokker-Planck equations are analyzed. In each model, three special cases are further analyzed: (1) The motor can jump between the two chemical states at only one position. The properties of this special case are very similar to those of the usual one-state model; at steady state, there is no energy input to the molecular motor during its transitions between the two chemical states. (2) The motor can jump between the two states at two positions. This special case exhibits the typical properties of the general case. (3) One of the two potentials is constant, or equivalently, all the corresponding transition rates (forward and backward) in one of the two states are equal to each other. This special case corresponds to the flashing ratchet model of molecular motors. Finally, the results are briefly summarized in Section [**IV**]{}. Two coupled one-dimensional hopping models ========================================== The two coupled one-dimensional hopping models are schematically depicted in Fig. \[Fig1\], in which the forward and backward transition rates in state 1 are denoted by $F_n$ ($n\to n+1$) and $B_n$ ($n\to n-1$), the forward and backward transition rates in state 2 are denoted by $f_n$ ($n\to n+1$) and $b_n$ ($n\to n-1$), and the transition rates between the two states at position $n$ are denoted by $\omega^n_a$ (state 1 $\to$ state 2) and $\omega^n_d$ (state 2 $\to$ state 1). Under the assumption of periodicity, we have $$\label{eq1} \begin{array}{lll} F_{lN+n}=F_n,\quad &B_{lN+n}=B_n,\quad &\omega_a^{lN+n}=\omega^n_a,\cr f_{lN+n}=f_n, &b_{lN+n}=b_n, &\omega_d^{lN+n}=\omega^n_d, \end{array}$$ where $l$ is an arbitrary integer and $N$ is the period of the hopping models.
Let $\tilde{P}_n(t)$ be the probability of finding the molecular motor at position $n$ of state 1 (denoted by $\textsf{1}_n$) at time $t$, and $\tilde{\rho}_n(t)$ be the probability of finding the molecular motor at position $n$ of state 2 (denoted by $\textsf{2}_n$) at time $t$. The evolution of the probabilities $\tilde{P}_n(t)$ and $\tilde{\rho}_n(t)$ is then governed by the following master equations: $$\label{eq2} \left\{\begin{aligned} &\frac{d}{dt}\tilde{P}_n(t)=F_{n-1}\tilde{P}_{n-1}(t)-(F_n+B_n)\tilde{P}_n(t)+B_{n+1}\tilde{P}_{n+1}(t)\cr &\qquad\qquad -\omega^n_a\tilde{P}_n(t)+\omega^n_d\tilde{\rho}_n(t),\cr &\frac{d}{dt}\tilde{\rho}_n(t)=f_{n-1}\tilde{\rho}_{n-1}(t)-(f_n+b_n)\tilde{\rho}_n(t)+b_{n+1}\tilde{\rho}_{n+1}(t)\cr &\qquad\qquad+\omega^n_a\tilde{P}_n(t)-\omega^n_d\tilde{\rho}_n(t),\qquad\qquad n=0, \pm 1,\pm 2,\cdots. \end{aligned}\right.$$ Let $$\label{eq3} P_n(t)=\sum_{l=-\infty}^{\infty}\tilde{P}_{lN+n}(t),\qquad \rho_n(t)=\sum_{l=-\infty}^{\infty}\tilde{\rho}_{lN+n}(t);$$ then, at steady state, $P_n$ and $\rho_n$ satisfy [@Derrida1983] $$\label{eq4} \begin{aligned} &\left\{\begin{aligned} &F_{n-1}P_{n-1}-(F_n+B_n)P_n+B_{n+1}P_{n+1}-\omega^n_aP_n+\omega^n_d\rho_n=0,\cr &f_{n-1}\rho_{n-1}-(f_n+b_n)\rho_n+b_{n+1}\rho_{n+1}+\omega^n_aP_n-\omega^n_d\rho_n=0, \end{aligned}\right. \end{aligned}$$ with $n=1,2,\cdots, N$, and the total flux of probability, $$\label{eq5} J_{n+\frac12}=(F_nP_n-B_{n+1}P_{n+1})+(f_n\rho_n-b_{n+1}\rho_{n+1}),$$ is constant, i.e. $J_{n+\frac12}\equiv J$ for $n=1,2,\cdots, N$.
From the first equation of (\[eq4\]), one sees that $$\label{eq6} \rho_n=[(F_n+B_n+\omega^n_a)P_n-F_{n-1}P_{n-1}-B_{n+1}P_{n+1}]/\omega^n_d.$$ Substituting (\[eq6\]) into (\[eq5\]), one can easily verify $$\label{eq7} J=A_{n-1}P_{n-1}+C_nP_n+D_{n+1}P_{n+1}+E_{n+2}P_{n+2},$$ where $$\label{eq8} \left\{\begin{aligned} A_n&=-f_{n+1}F_n/\omega^{n+1}_d,\cr C_n&=[f_n(F_n+B_n+\omega^n_a)]/\omega^n_d+F_n+b_{n+1}F_n/\omega^{n+1}_d,\cr D_n&=-[B_n+f_{n-1}B_n/\omega^{n-1}_d+b_n(F_n+B_n+\omega^n_a)/\omega^n_d],\cr E_n&=b_{n-1}B_n/\omega^{n-1}_d. \end{aligned}\right.$$ By (\[eq7\]) and routine analysis, we obtain $$\label{eq9} P_i=X_iJ+Y_iP_1+Z_iP_2+W_iP_3,$$ where $$\label{eq10} X_N=\frac{1}{A_N},\quad Y_N=-\frac{C_1}{A_N},\quad Z_N=-\frac{D_2}{A_N},\quad W_N=-\frac{E_3}{A_N},$$ $$\label{eq11} \begin{array}{ll} X_{N-1}=\frac{1}{A_{N-1}}\left(1-\frac{C_N}{A_N}\right),&\quad Y_{N-1}=\frac{C_NC_1}{A_{N-1}A_N}-\frac{D_1}{A_{N-1}},\cr Z_{N-1}=\frac{C_ND_2}{A_{N-1}A_N}-\frac{E_2}{A_{N-1}},&\quad W_{N-1}=\frac{C_NE_3}{A_{N-1}A_N}, \end{array}$$ $$\label{eq12} \left\{\begin{aligned} &X_{N-2}=\frac{1}{A_{N-2}}-\frac{C_{N-1}}{A_{N-2}A_{N-1}}+\frac{C_{N-1}C_N}{A_{N-2}A_{N-1}A_N}-\frac{D_{N}}{A_{N-2}A_{N}},\cr &Y_{N-2}=-\frac{E_1}{A_{N-2}}+\frac{C_{N-1}D_1}{A_{N-2}A_{N-1}}+\frac{D_NC_1}{A_{N-2}A_N}-\frac{C_{N-1}C_{N}C_1}{A_{N-2}A_{N-1}A_{N}},\cr &Z_{N-2}=\frac{C_{N-1}E_2}{A_{N-2}A_{N-1}}+\frac{D_{N}D_2}{A_{N-2}A_{N}}-\frac{C_{N-1}C_ND_2}{A_{N-2}A_{N-1}A_N},\cr &W_{N-2}=\frac{D_{N}E_3}{A_{N-2}A_{N}}-\frac{C_{N-1}C_NE_3}{A_{N-2}A_{N-1}A_N},\cr \end{aligned}\right.$$ and for $1\le k\le N-3$, $$\label{eq13} \left\{\begin{aligned} &X_{k}=\frac{1-(C_{k+1}X_{k+1}+D_{k+2}X_{k+2}+E_{k+3}X_{k+3})}{A_k},\cr &Y_{k}=-\frac{C_{k+1}Y_{k+1}+D_{k+2}Y_{k+2}+E_{k+3}Y_{k+3}}{A_k},\cr &Z_{k}=-\frac{C_{k+1}Z_{k+1}+D_{k+2}Z_{k+2}+E_{k+3}Z_{k+3}}{A_k},\cr &W_{k}=-\frac{C_{k+1}W_{k+1}+D_{k+2}W_{k+2}+E_{k+3}W_{k+3}}{A_k}.
\end{aligned}\right.$$ Then, for $i=1,2,3$ in equation (\[eq9\]), we obtain the following system of equations $$\label{eq14} AP=JX,$$ where $$\label{eq15} A=\left(\begin{array}{ccc} 1-Y_1 & -Z_1 & -W_1\cr-Y_2 & 1-Z_2 & -W_2\cr -Y_3 & -Z_3 & 1-W_3 \end{array}\right),\quad P=\left(\begin{array}{c} P_1\cr P_2\cr P_3\end{array}\right),\quad X=\left(\begin{array}{c} X_1\cr X_2\cr X_3\end{array}\right).$$ So $P_i=\hat{P}_iJ$ for $i=1,2,3$, with $\hat{P}=(\hat{P}_1, \hat{P}_2, \hat{P}_3)^T$ satisfying $A\hat{P}=X$. Consequently, $P_i$, for $3<i\le N$, can be obtained by Eq. (\[eq9\]), $$\label{eq16} P_i=(X_i+Y_i\hat{P}_1+Z_i\hat{P}_2+W_i\hat{P}_3)J=:\hat{P}_iJ,$$ and therefore $\rho_i$ can be obtained by Eq. (\[eq6\]), $$\label{eq17} \begin{aligned} \rho_i=&[(F_i+B_i+\omega^i_a)P_i-F_{i-1}P_{i-1}-B_{i+1}P_{i+1}]/\omega^i_d,\cr =&[(F_i+B_i+\omega^i_a)\hat{P}_i-F_{i-1}\hat{P}_{i-1}-B_{i+1}\hat{P}_{i+1}]J/\omega^i_d,\cr =&:\hat{\rho}_iJ. \end{aligned}$$ The probability flux $J$ in Eqs. (\[eq16\]) and (\[eq17\]) is determined by the normalization condition $\sum\limits_{i=1}^N(P_i+\rho_i)=1$: $$\label{eq18} \begin{aligned} J=&1/\sum\limits_{i=1}^N(\hat{P}_i+\hat{\rho}_i),\cr =&1/\sum\limits_{i=1}^N\left[\frac{\omega^i_a+\omega^i_d}{\omega^i_d}+\left(\frac{1}{\omega^i_d}-\frac{1}{\omega^{i+1}_d}\right)F_i+\left(\frac{1}{\omega^i_d}-\frac{1}{\omega^{i-1}_d}\right)B_i\right]\hat{P}_i. \end{aligned}$$ In particular, if $\omega^i_a\equiv \omega_a$ and $\omega^i_d\equiv \omega_d$ are constants, then $$\label{eq19} \begin{aligned} J=\frac{\omega_d}{(\omega_a+\omega_d)\sum\limits_{i=1}^N\hat{P}_i}=\frac{\omega_d}{(\omega_a+\omega_d)\sum\limits_{i=1}^N(X_i+Y_i\hat{P}_1+Z_i\hat{P}_2+W_i\hat{P}_3)}. \end{aligned}$$ Special case I: $\omega^i_a=\omega^i_d=0\ \textrm{for }1\le i\le N-1$ --------------------------------------------------------------------- For convenience, we denote $\omega^N_a, \omega^N_d$ by $\omega_a, \omega_d$ respectively (see Fig. \[Fig2\]).
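The general relations above can be checked numerically. The sketch below solves the reduced steady-state equations (\[eq4\]) directly for arbitrary positive test rates (all values illustrative), and verifies both the constancy of the total flux (\[eq5\]) and the four-term identity (\[eq7\]) with the coefficients (\[eq8\]).

```python
import numpy as np

# Steady state of Eq. (4) for arbitrary positive test rates; indices 0..N-1
# correspond to sites 1..N, with periodic (mod N) wrap-around.
rng = np.random.default_rng(3)
N = 7
F, B, f, b, wa, wd = (rng.uniform(0.5, 2.0, N) for _ in range(6))

# Generator of the period-reduced master equation; unknowns (P_1..P_N, rho_1..rho_N).
M = np.zeros((2 * N, 2 * N))
for n in range(N):
    M[n, (n - 1) % N] += F[(n - 1) % N]
    M[n, n] -= F[n] + B[n] + wa[n]
    M[n, (n + 1) % N] += B[(n + 1) % N]
    M[n, N + n] += wd[n]
    M[N + n, N + (n - 1) % N] += f[(n - 1) % N]
    M[N + n, N + n] -= f[n] + b[n] + wd[n]
    M[N + n, N + (n + 1) % N] += b[(n + 1) % N]
    M[N + n, n] += wa[n]

# Null vector of M with total probability one (the stacked system is consistent).
sys = np.vstack([M, np.ones(2 * N)])
rhs = np.r_[np.zeros(2 * N), 1.0]
x = np.linalg.lstsq(sys, rhs, rcond=None)[0]
P, rho = x[:N], x[N:]

# Total probability flux of Eq. (5) on every bond; it must be n-independent.
Jflux = np.array([F[n] * P[n] - B[(n + 1) % N] * P[(n + 1) % N]
                  + f[n] * rho[n] - b[(n + 1) % N] * rho[(n + 1) % N]
                  for n in range(N)])

# Coefficients of Eq. (8) and the four-term identity (7).
A = np.array([-f[(n + 1) % N] * F[n] / wd[(n + 1) % N] for n in range(N)])
C = np.array([f[n] * (F[n] + B[n] + wa[n]) / wd[n] + F[n]
              + b[(n + 1) % N] * F[n] / wd[(n + 1) % N] for n in range(N)])
D = np.array([-(B[n] + f[(n - 1) % N] * B[n] / wd[(n - 1) % N]
                + b[n] * (F[n] + B[n] + wa[n]) / wd[n]) for n in range(N)])
E = np.array([b[(n - 1) % N] * B[n] / wd[(n - 1) % N] for n in range(N)])
lhs = np.array([A[(n - 1) % N] * P[(n - 1) % N] + C[n] * P[n]
                + D[(n + 1) % N] * P[(n + 1) % N] + E[(n + 2) % N] * P[(n + 2) % N]
                for n in range(N)])
```

Both `Jflux` and `lhs` come out equal to the same constant $J$ on every bond, which also cross-checks the elimination of $\rho_n$ via (\[eq6\]).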
For this special case, the steady state probabilities $P_n, \rho_n$ satisfy $$\label{eq20} \left\{\begin{aligned} &F_NP_N-B_1P_1=F_1P_1-B_2P_2=\cdots=F_{N-1}P_{N-1}-B_NP_N=:J,\cr & f_N\rho_N-b_1\rho_1=f_1\rho_1-b_2\rho_2=\cdots=f_{N-1}\rho_{N-1}-b_N\rho_N=:j. \end{aligned}\right.$$ It can be readily verified that $$\label{eq21} P_k=X_kP_N-Y_kJ,$$ with $$\label{eq21of1} \begin{aligned} X_k=\prod_{i=1}^k\frac{F_{i-1}}{B_i},\quad Y_k=\frac{1}{F_k}\sum_{i=1}^{k}\prod_{j=i}^{k}\frac{F_j}{B_j}. \end{aligned}$$ Specially, $$\label{eq22} P_N=\left(\prod_{i=1}^N\frac{F_{i-1}}{B_i}\right)P_N-\left(\frac{1}{F_N}\sum_{i=1}^{N}\prod_{j=i}^{N}\frac{F_j}{B_j}\right)J,$$ which implies $$\label{eq23} P_N=\frac{\frac{1}{F_N}\sum\limits_{i=1}^{N}\prod\limits_{j=i}^{N}\frac{F_j}{B_j}}{\prod\limits_{i=1}^N\frac{F_{i-1}}{B_i}-1}J.$$ Combining (\[eq21\]) (\[eq21of1\]) and (\[eq23\]), one finds $$\label{eq24} \begin{aligned} P_k=&\left(\frac{\frac{1}{F_N}\left(\sum\limits_{i=1}^{N}\prod\limits_{j=i}^{N}\frac{F_j}{B_j}\right)\left(\prod\limits_{i=1}^k\frac{F_{i-1}}{B_i}\right)}{\prod\limits_{i=1}^N\frac{F_{i-1}}{B_i}-1} -\frac{1}{F_k}\sum_{i=1}^{k}\prod_{j=i}^{k}\frac{F_j}{B_j}\right)J,\cr =&\frac{\left(\sum\limits_{i=1}^{N}\prod\limits_{j=i}^{N}\frac{F_j}{B_j}\right)\left(\prod\limits_{i=1}^{k}\frac{F_{i}}{B_i}\right)-\left(\sum\limits_{i=1}^{k}\prod\limits_{j=i}^{k}\frac{F_j}{B_j}\right)\left(\prod\limits_{i=1}^N\frac{F_{i-1}}{B_i}-1\right)}{\prod\limits_{i=1}^N\frac{F_{i-1}}{B_i}-1}\frac{J}{F_k}. \end{aligned}$$ Using the periodic conditions (\[eq1\]), one can verify that $$\label{eq25} \begin{aligned} P_k =\frac{\frac{1}{F_k}\sum\limits_{i=k+1}^{N+k}\prod\limits_{j=i}^{N+k}\frac{F_j}{B_j}}{\prod\limits_{i=1}^N\frac{F_{i}}{B_i}-1}J. \end{aligned}$$ Using the same method, the probability $\rho_k$ can be obtained $$\label{eq26} \begin{aligned} \rho_k =\frac{\frac{1}{f_k}\sum\limits_{i=k+1}^{N+k}\prod\limits_{j=i}^{N+k}\frac{f_j}{b_j}}{\prod\limits_{i=1}^N\frac{f_{i}}{b_i}-1}j. 
\end{aligned}$$ At steady state, $\omega_aP_N=\omega_d\rho_N$, which implies $$\label{eq27} j=\frac{\frac{1}{F_N}\sum\limits_{i=1}^{N}\prod\limits_{j=i}^{N}\frac{F_j}{B_j}\left/\left(\prod\limits_{i=1}^N\frac{F_{i}}{B_i}-1\right)\right.} {\frac{1}{f_N}\sum\limits_{i=1}^{N}\prod\limits_{j=i}^{N}\frac{f_j}{b_j}\left/\left(\prod\limits_{i=1}^N\frac{f_{i}}{b_i}-1\right)\right.}\frac{\omega_a}{\omega_d}J=:\Xi J.$$ Therefore, $$\label{eq28} \rho_k =\frac{\frac{1}{f_k}\sum\limits_{i=k+1}^{N+k}\prod\limits_{j=i}^{N+k}\frac{f_j}{b_j}}{\prod\limits_{i=1}^N\frac{F_{i}}{B_i}-1}\Theta J,\qquad\textrm{where}\ \ \Theta=\frac{\frac{\omega_a}{F_N}\sum\limits_{i=1}^{N}\prod\limits_{j=i}^{N}\frac{F_j}{B_j}}{\frac{\omega_d}{f_N}\sum\limits_{i=1}^{N}\prod\limits_{j=i}^{N}\frac{f_j}{b_j}}.$$ Since $P_k, \rho_k$ satisfy $\sum\limits_{k=1}^N(P_k+\rho_k)=1$, from (\[eq25\]) and (\[eq28\]), the probability flux $J$ can be obtained as follows $$\label{eq29} \begin{aligned} J=\frac{\phi\left(\prod\limits_{i=1}^N\frac{F_{i}}{B_i}-1\right)}{\phi\Psi+\Phi\psi}. \end{aligned}$$ where $$\label{eq29of1} \begin{array}{ll} \phi=\frac{\omega_d}{f_N}\sum\limits_{i=1}^{N}\prod\limits_{j=i}^{N}\frac{f_j}{b_j},\qquad &\psi=\sum\limits_{k=1}^N\left(\frac{1}{f_k}\sum\limits_{i=k+1}^{N+k}\prod\limits_{j=i}^{N+k}\frac{f_j}{b_j}\right),\cr \Phi=\frac{\omega_a}{F_N}\sum\limits_{i=1}^{N}\prod\limits_{j=i}^{N}\frac{F_j}{B_j}, &\Psi=\sum\limits_{k=1}^N\left(\frac{1}{F_k}\sum\limits_{i=k+1}^{N+k}\prod\limits_{j=i}^{N+k}\frac{F_j}{B_j}\right). \end{array}$$ So the total flux of this system is $$\label{eq30} \begin{aligned} J+j=(1+\Xi )J=\left(1+\frac{\prod\limits_{i=1}^N\frac{f_{i}}{b_i}-1}{\prod\limits_{i=1}^N\frac{F_{i}}{B_i}-1}\Theta\right)J =\frac{\phi\left(\prod\limits_{i=1}^N\frac{F_{i}}{B_i}-1\right)+\Phi\left(\prod\limits_{i=1}^N\frac{f_{i}}{b_i}-1\right)}{\phi\Psi+\Phi\psi}.
\end{aligned}$$ Combining (\[eq25\]), (\[eq28\]) and (\[eq29\]), the probabilities $P_k$ and $\rho_k$ can be obtained as follows $$\label{eq31} \begin{aligned} P_k=\frac{\phi}{\phi\Psi+\Phi\psi}\frac{1}{F_k}\sum\limits_{i=k+1}^{N+k}\prod\limits_{j=i}^{N+k}\frac{F_j}{B_j},\quad \rho_k=\frac{\Phi}{\phi\Psi+\Phi\psi}\frac{1}{f_k}\sum\limits_{i=k+1}^{N+k}\prod\limits_{j=i}^{N+k}\frac{f_j}{b_j}. \end{aligned}$$ By (\[eq30\]), one easily sees that if $\prod\limits_{i=1}^N\frac{f_{i}}{b_i}=\prod\limits_{i=1}^N\frac{F_{i}}{B_i}=1$, then $J=j=0$, and consequently the total probability flux $J+j=0$. In other words, for this special case, if there is no energy input to the molecular motor in either state, then the mean velocity is zero. The converse, however, does not hold. Note that the potential changes over one period in state 1 and state 2 are $\Delta G_1=k_BT\ln\left(\prod_{i=1}^N\frac{F_i}{B_i}\right)$ and $\Delta G_2=k_BT\ln\left(\prod_{i=1}^N\frac{f_i}{b_i}\right)$ respectively [@Qian1997; @Fisher2001]. Special case II: $\omega^i_a=\omega^i_d=0\ \textrm{for}\ i\ne M,N$ ------------------------------------------------------------------ For convenience, we denote $\omega^N_a, \omega^N_d$ by $\Omega_a, \Omega_d$, and $\omega^M_a, \omega^M_d$ by $\omega_a, \omega_d$ (see Fig. \[Fig3\]). At steady state, $P_k, \rho_k$ satisfy $$\label{eq32} \left\{\begin{aligned} &F_NP_N-B_1P_1=F_1P_1-B_2P_2=\cdots =F_{M-1}P_{M-1}-B_MP_M=:J_1,\cr &F_MP_M-B_{M+1}P_{M+1} =\cdots=F_{N-1}P_{N-1}-B_NP_N=:J_2,\cr &f_N\rho_N-b_1\rho_1=f_1\rho_1-b_2\rho_2=\cdots =f_{M-1}\rho_{M-1}-b_M\rho_M=:j_1,\cr &f_M\rho_M-b_{M+1}\rho_{M+1} =\cdots=f_{N-1}\rho_{N-1}-b_N\rho_N=:j_2,\cr &\omega_aP_M+\Omega_aP_N=\omega_d\rho_M+\Omega_d\rho_N,\cr &J_2=J_1+\Omega_aP_N-\Omega_d\rho_N,\cr &j_2=j_1-\Omega_aP_N+\Omega_d\rho_N,\cr &\sum\limits_{k=1}^N(P_k+\rho_k)=1.
\end{aligned}\right.$$ From the first equation in (\[eq32\]), one can easily get $$\label{eq33} \begin{aligned} P_k=&\left(\prod_{i=1}^k\frac{F_{i-1}}{B_i}\right)P_N-\frac{1}{F_k}\left(\sum_{i=1}^{k}\prod_{j=i}^{k}\frac{F_j}{B_j}\right)J_1, \quad 1\le k\le M. \end{aligned}$$ At the same time, from the second equation in (\[eq32\]), $$\label{eq34} \begin{aligned} P_k=&\left(\prod_{i=M+1}^k\frac{F_{i-1}}{B_i}\right)P_M-\frac{1}{F_k}\left(\sum_{i=M+1}^{k}\prod_{j=i}^{k}\frac{F_j}{B_j}\right)J_2\cr =&\left(\prod_{i=M+1}^k\frac{F_{i-1}}{B_i}\right)\left[\left(\prod_{i=1}^M\frac{F_{i-1}}{B_i}\right)P_N-\frac{1}{F_M}\left(\sum_{i=1}^{M}\prod_{j=i}^{M}\frac{F_j}{B_j}\right)J_1\right]\cr &-\frac{1}{F_k}\left(\sum_{i=M+1}^{k}\prod_{j=i}^{k}\frac{F_j}{B_j}\right)(J_1+\Omega_aP_N-\Omega_d\rho_N)\cr =&\left[\prod_{i=1}^k\frac{F_{i-1}}{B_i}-\frac{\Omega_a}{F_k}\left(\sum_{i=M+1}^{k}\prod_{j=i}^{k}\frac{F_j}{B_j}\right)\right]P_N\cr &+\frac{\Omega_d}{F_k}\left(\sum_{i=M+1}^{k}\prod_{j=i}^{k}\frac{F_j}{B_j}\right)\rho_N -\frac{1}{F_k}\left(\sum_{i=1}^{k}\prod_{j=i}^{k}\frac{F_j}{B_j}\right)J_1\cr =&:(R_k-\Omega_aS_k)P_N+\Omega_dS_k\rho_N-T_kJ_1, \quad \textrm{for } M+1\le k\le N. \end{aligned}$$ In particular, $P_N=(R_N-\Omega_aS_N)P_N+\Omega_dS_N\rho_N-T_NJ_1$, which gives $$\label{eq35} J_1=\frac{(R_N-\Omega_aS_N-1)P_N+\Omega_dS_N\rho_N}{T_N}.$$ Substituting (\[eq35\]) into (\[eq33\]) (\[eq34\]), we obtain $$\label{eq36} P_k=G_kP_N+H_k\rho_N,$$ where $$\label{eq37} \begin{aligned} G_k=&\left\{\begin{array}{ll} R_k-\frac{T_k}{T_N}(R_N-\Omega_aS_N-1),\qquad &1\le k\le M,\cr R_k-\Omega_aS_k-\frac{T_k}{T_N}(R_N-\Omega_aS_N-1), &M+1\le k\le N, \end{array}\right.\cr H_k=&\left\{\begin{array}{ll} -\frac{T_k}{T_N}\Omega_dS_N,\qquad\qquad\qquad &1\le k\le M,\cr -\frac{T_k}{T_N}\Omega_dS_N+\Omega_dS_k, & M+1\le k\le N. \end{array}\right. 
\end{aligned}$$ Similarly, $$\label{eq38} \rho_k=g_k\rho_N+h_kP_N,$$ where $g_k, h_k$, and the corresponding $r_k, s_k, t_k$ in expressions of $g_k, h_k$, can be obtained by replacing $F_j, B_j, \Omega_a, \Omega_d$ in the expressions of $R_k, S_k, T_k, G_k, H_k$ with $f_j, b_j, \Omega_d, \Omega_a$ respectively. Combining (\[eq36\]) (\[eq38\]) and the fifth equality in (\[eq32\]), we have $$\label{eq39} \omega_a(G_MP_N+H_M\rho_N)+\Omega_aP_N=\omega_d(g_M\rho_N+h_MP_N)+\Omega_d\rho_N,$$ i.e. $$\label{eq39of1} (\Omega_a+\omega_aG_M-\omega_dh_M)P_N=(\Omega_d+\omega_dg_M-\omega_aH_M)\rho_N.$$ So $$\label{eq40} \rho_N=\frac{\Omega_a+\omega_aG_M-\omega_dh_M}{\Omega_d+\omega_dg_M-\omega_aH_M}P_N=:\frac{U}{V}P_N.$$ From (\[eq36\]) (\[eq38\]) and (\[eq40\]), one finds $$\label{eq41} P_k=\left(G_k+\frac{U}{V}H_k\right)P_N,\qquad \rho_k=\left(h_k+\frac{U}{V}g_k\right)P_N.$$ In view of the last equation in (\[eq32\]), one gets $$\label{eq41of1} \left[\sum_{k=1}^N\left((G_k+h_k)+\frac{U}{V}(g_k+H_k)\right)\right]P_N=1,$$ which implies $$\label{eq42} P_N=\frac{1}{\sum\limits_{k=1}^N\left[(G_k+h_k)+\frac{U}{V}(g_k+H_k)\right]}.$$ By (\[eq35\]) (\[eq40\]) (\[eq42\]), we have $$\label{eq43} J_1=\frac{(R_N-\Omega_aS_N-1)+\Omega_dS_N\frac{U}{V}}{T_N\sum\limits_{k=1}^N\left[(G_k+h_k)+\frac{U}{V}(g_k+H_k)\right]}.$$ Similarly, one can verify that $$\label{eq44} j_1=\frac{(r_N-\Omega_ds_N-1)\frac{U}{V}+\Omega_as_N}{t_N\sum\limits_{k=1}^N\left[(G_k+h_k)+\frac{U}{V}(g_k+H_k)\right]}.$$ Therefore, the total flux of this special case is $$\label{eq45} J_1+j_1=\frac{\left((R_N-\Omega_aS_N-1)+\Omega_dS_N\frac{U}{V}\right)t_N+\left((r_N-\Omega_ds_N-1)\frac{U}{V}+\Omega_as_N\right)T_N}{T_Nt_N\sum\limits_{k=1}^N\left[(G_k+h_k)+\frac{U}{V}(g_k+H_k)\right]}.$$ More specially, if $R_N=r_N=1$, then the total probability flux is $$\label{eq46} \begin{aligned} 
J_1+j_1=&\frac{1}{\sum\limits_{k=1}^N\left[(G_k+h_k)+\frac{U}{V}(g_k+H_k)\right]}\left(\frac{s_N}{t_N}-\frac{S_N}{T_N}\right)\left(\Omega_a-\frac{U}{V}\Omega_d\right),\cr =&\frac{\Omega_a\omega_dr_M-\omega_a\Omega_dR_M}{\sum\limits_{k=1}^N\left[(G_k+h_k)V+(g_k+H_k)U\right]}\left(\frac{s_N}{t_N}-\frac{S_N}{T_N}\right), \end{aligned}$$ where $$\label{eq46of1} \begin{aligned} U=&\Omega_a+\omega_aR_M+\omega_a\Omega_aS_NT_M/T_N+\omega_d\Omega_as_Nt_M/t_N>0,\cr V=&\Omega_d+\omega_dr_M+\omega_d\Omega_ds_Nt_M/t_N+\omega_a\Omega_dS_NT_M/T_N>0. \end{aligned}$$ So the direction of the probability flux is determined by the signs of $\left(\frac{s_N}{t_N}-\frac{S_N}{T_N}\right)$ and $\left(\Omega_a\omega_dr_M-\omega_a\Omega_dR_M\right)$. One can see that $R_N=r_N=1$, i.e., $\Delta G_1=\Delta G_2=0$, does not imply that the mean velocity vanishes. To better understand how the total probability flux depends on the inter-state transition rates, we assume that $$(\Omega_a, \Omega_d, \omega_a, \omega_d)=\lambda(\tilde\Omega_a, \tilde\Omega_d, \tilde\omega_a, \tilde\omega_d).$$ It can be verified that the total probability flux $J:=J_1+j_1$ in (\[eq46\]) increases monotonically with the parameter $\lambda$. If $\lambda=0$, then $J=0$.
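The contrast between special cases I and II can also be seen numerically. In the sketch below, both states are untilted ($R_N=r_N=1$): the state-1 rates derive from an asymmetric periodic potential and state 2 is flat. Switching at the single site $N$ (special case I) yields exactly zero total flux, while switching at the two sites $M$ and $N$ with asymmetric rates (special case II) generically yields a nonzero flux. All numerical values are illustrative choices, not taken from the paper.

```python
import numpy as np

# Period N = 4, second transition site M = 2 (1-based); asymmetric periodic
# potential V for state 1, flat potential for state 2 -> R_N = r_N = 1.
N, M_site = 4, 2
V = np.array([0.0, 2.0, 1.0, 0.5])
dV = np.roll(V, -1) - V                     # V_{n+1} - V_n, periodic
F = np.exp(-0.5 * dV)                       # forward rates, state 1
B = np.roll(np.exp(+0.5 * dV), 1)           # backward rates, state 1
f = b = np.ones(N)                          # state 2: no tilt either

def total_flux(wa, wd):
    """Steady-state total flux J + j for site-dependent switching rates."""
    Mg = np.zeros((2 * N, 2 * N))
    for n in range(N):
        Mg[n, (n - 1) % N] += F[(n - 1) % N]
        Mg[n, n] -= F[n] + B[n] + wa[n]
        Mg[n, (n + 1) % N] += B[(n + 1) % N]
        Mg[n, N + n] += wd[n]
        Mg[N + n, N + (n - 1) % N] += f[(n - 1) % N]
        Mg[N + n, N + n] -= f[n] + b[n] + wd[n]
        Mg[N + n, N + (n + 1) % N] += b[(n + 1) % N]
        Mg[N + n, n] += wa[n]
    x = np.linalg.lstsq(np.vstack([Mg, np.ones(2 * N)]),
                        np.r_[np.zeros(2 * N), 1.0], rcond=None)[0]
    P, rho = x[:N], x[N:]
    return F[0] * P[0] - B[1] * P[1] + f[0] * rho[0] - b[1] * rho[1]

# Special case I: switching only at site N -> detailed balance, zero flux.
wa1 = np.zeros(N); wa1[-1] = 1.0
wd1 = np.zeros(N); wd1[-1] = 2.0
J_I = total_flux(wa1, wd1)

# Special case II: switching at sites M and N with asymmetric rates
# -> generically nonzero flux even though Delta G_1 = Delta G_2 = 0.
wa2 = np.zeros(N); wa2[M_site - 1] = 1.0; wa2[-1] = 2.0
wd2 = np.zeros(N); wd2[M_site - 1] = 2.0; wd2[-1] = 1.0
J_II = total_flux(wa2, wd2)
```

With these rates the single-site case reproduces the Gibbs-like equilibrium (zero flux), while the two-site case acts as a ratchet.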
If $\lambda\to \infty$, then $J$ tends to $$\label{eq48} \begin{aligned} \frac{\tilde\Omega_a\tilde\omega_dr_M-\tilde\omega_a\tilde\Omega_dR_M}{*}\left(\frac{s_N}{t_N}-\frac{S_N}{T_N}\right), \end{aligned}$$ where $$\begin{aligned} *=&\sum\limits_{k=1}^N\left[(\frac{\tilde\omega_d\tilde\Omega_ds_Nt_M}{t_N}+\frac{\tilde\omega_a\tilde\Omega_dS_NT_M}{T_N})R_k+(\frac{\tilde\omega_a\tilde\Omega_aS_NT_M}{T_N}+\frac{\tilde\omega_d\tilde\Omega_as_Nt_M}{t_N})r_k\right]\cr &+(\tilde\Omega_a\tilde\omega_dr_M-\tilde\omega_a\tilde\Omega_dR_M)\left[\sum\limits_{k=1}^N\left(\frac{T_kS_N}{T_N}-\frac{t_ks_N}{t_N}\right) +\sum\limits_{k=M+1}^N(s_k-S_k)\right]\cr =&\left(\frac{\tilde\omega_ds_Nt_M}{t_N}+\frac{\tilde\omega_aS_NT_M}{T_N}\right)\sum\limits_{k=1}^N(\tilde\Omega_dR_k+\tilde\Omega_ar_k)\cr &+(\tilde\Omega_a\tilde\omega_dr_M-\tilde\omega_a\tilde\Omega_dR_M)\left[\sum\limits_{k=1}^N\left(\frac{T_kS_N}{T_N}-\frac{t_ks_N}{t_N}\right) +\sum\limits_{k=M+1}^N(s_k-S_k)\right]. \end{aligned}$$ Special case III: $\omega^i_a=\omega^i_d=0\ \textrm{for}\ i\ne M,N$, and $f_i=b_i\equiv f$ for $1\le i\le N$ ---------------------------------------------------------------------------------------------------------- As pointed out in the Introduction, the flashing ratchet model can be regarded as one example of this special case. For this case, we have $$\label{eq49} \begin{aligned} r_k\equiv 1,\qquad t_k=\frac{k}{f},\quad\textrm{for }\ 1\le k\le N, \end{aligned}$$ and $s_k=(k-M)/f$ for $M+1\le k\le N$. It can be easily verified that $$\label{eq50} \begin{aligned} g_k=&\left\{\begin{array}{ll} 1+\frac{(N-M)k}{Nf}\Omega_d,\qquad &1\le k\le M,\cr 1+\frac{M(N-k)}{Nf}\Omega_d, &M+1\le k\le N, \end{array}\right.\cr h_k=&\left\{\begin{array}{ll} -\frac{(N-M)k}{Nf}\Omega_a,\qquad\quad &1\le k\le M,\cr -\frac{M(N-k)}{Nf}\Omega_a, & M+1\le k\le N. \end{array}\right.
\end{aligned}$$ So $$\label{eq51} \begin{aligned} U=&\Omega_a+\omega_aG_M-\omega_dh_M\cr =&\Omega_a+\omega_a\left(R_M+\frac{T_M}{T_N}S_N\Omega_a-\frac{T_M}{T_N}(R_N-1)\right)+\frac{M(N-M)}{Nf}\omega_d\Omega_a,\cr V=&\Omega_d+\omega_dg_M-\omega_aH_M\cr =&\Omega_d+\omega_d\left(1+\frac{M(N-M)}{Nf}\Omega_d\right)+\frac{T_M}{T_N}S_N\omega_a\Omega_d. \end{aligned}$$ Moreover, if $R_N=1$ then the total probability flux is $$\label{eq52} \begin{aligned} J_1+j_1=&\frac{\Omega_a\omega_d-\omega_a\Omega_dR_M}{\Delta}\left(\frac{N-M}{N}-\frac{S_N}{T_N}\right), \end{aligned}$$ where $$\begin{aligned} \Delta=&\sum\limits_{k=1}^M\left(VR_k+Ur_k\right)+(\Omega_a\omega_dr_M-\omega_a\Omega_dR_M)\left[\sum\limits_{k=1}^M\left(\frac{T_kS_N}{T_N}-\frac{t_ks_N}{t_N}\right) +\sum\limits_{k=M+1}^N(s_k-S_k)\right],\cr =&(\Omega_a\omega_d-\omega_a\Omega_dR_M) \left[\frac{S_N}{T_N}\sum\limits_{k=1}^MT_k-\sum\limits_{k=M+1}^NS_k+\frac{(N-M)[(N-M)(N+M+1)-MN]}{2Nf}\right]\cr &+MU+V\sum\limits_{k=1}^MR_k. \end{aligned}$$ Two coupled Fokker-Planck equations =================================== The general two coupled Fokker-Planck equations are as follows $$\label{eq53} \left\{\begin{aligned} \partial_t\tilde P=&D\partial_x(\beta V_1'\tilde P+\partial_x\tilde P)+\omega_d(x)\tilde \rho-\omega_a(x)\tilde P,\cr \partial_t\tilde \rho=&D\partial_x(\beta V_2'\tilde \rho+\partial_x\tilde \rho)-\omega_d(x)\tilde \rho+\omega_a(x)\tilde P, \end{aligned}\right.\quad -\infty\le x\le +\infty,$$ where $D$ is the free diffusion constant, $\beta=1/k_BT$ with $k_B$ the Boltzmann constant and $T$ the absolute temperature, $\tilde P(x,t)$ and $\tilde \rho(x,t)$ are the probability densities of finding the molecular motor at position $x$ at time $t$ in states 1 and 2 respectively, and $V_1, V_2$ are (tilted) periodic potentials with period $L$. $\omega_a(x), \omega_d(x)$ are the transition rates between states 1 and 2 at position $x$ [^2].
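A simple way to explore the coupled Fokker-Planck system above numerically is to discretize one period $[0,L)$ into cells whose inter-cell hopping rates obey local detailed balance with respect to $e^{-\beta V_i}$, and then solve for the null vector of the resulting generator. The sketch below uses illustrative potentials and switching rates (a flashing-ratchet-like choice: both potentials periodic, position-dependent switching), not values from the paper.

```python
import numpy as np

# Finite-volume steady state of the period-reduced coupled system on [0, L)
# with periodic boundaries; all parameter values are illustrative.
L_per, n_cells, D, beta = 1.0, 100, 1.0, 1.0
h = L_per / n_cells
x = np.arange(n_cells) * h
V1 = 0.8 * np.sin(2 * np.pi * x) + 0.3 * np.sin(4 * np.pi * x)  # periodic, untilted
V2 = np.zeros(n_cells)                                          # flat potential
wa = 5.0 * (1.0 + np.cos(2 * np.pi * x))                        # state 1 -> 2
wd = 5.0 * np.ones(n_cells)                                     # state 2 -> 1

def drift_diffusion(Vpot):
    """Discretization of D d/dx (beta V' rho + d/dx rho), periodic in x.
    Inter-cell rates obey local detailed balance w.r.t. exp(-beta V)."""
    G = np.zeros((n_cells, n_cells))
    for i in range(n_cells):
        j = (i + 1) % n_cells
        dV = Vpot[j] - Vpot[i]
        r_right = D / h**2 * np.exp(-beta * dV / 2.0)   # rate cell i -> j
        r_left = D / h**2 * np.exp(+beta * dV / 2.0)    # rate cell j -> i
        G[j, i] += r_right; G[i, i] -= r_right
        G[i, j] += r_left;  G[j, j] -= r_left
    return G

A = np.zeros((2 * n_cells, 2 * n_cells))
A[:n_cells, :n_cells] = drift_diffusion(V1) - np.diag(wa)
A[:n_cells, n_cells:] = np.diag(wd)
A[n_cells:, :n_cells] = np.diag(wa)
A[n_cells:, n_cells:] = drift_diffusion(V2) - np.diag(wd)

p = np.linalg.lstsq(np.vstack([A, np.ones(2 * n_cells)]),
                    np.r_[np.zeros(2 * n_cells), 1.0], rcond=None)[0]
P, rho = p[:n_cells], p[n_cells:]   # cell probabilities (density times h)

def bond_flux(i):
    """Total probability flux between cells i and i+1, summed over both states."""
    j = (i + 1) % n_cells
    out = 0.0
    for Vpot, w in ((V1, P), (V2, rho)):
        dV = Vpot[j] - Vpot[i]
        out += D / h**2 * (np.exp(-beta * dV / 2) * w[i]
                           - np.exp(+beta * dV / 2) * w[j])
    return out

flux = np.array([bond_flux(i) for i in range(n_cells)])
velocity = flux[0] * L_per   # mean velocity V = J L
```

At steady state the bond flux is the same between every pair of neighboring cells, and for this flashing-ratchet-like choice it is generally nonzero even though both potentials are untilted.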
As in [@Zhang20092], let $$\label{eq55} P(x, t)=\sum_{k=-\infty}^{+\infty}\tilde P(x+kL, t),\qquad \rho(x, t)=\sum_{k=-\infty}^{+\infty}\tilde\rho(x+kL, t),$$ then $P(x, t), \rho(x, t)$ satisfy $$\label{eq56} \left\{\begin{aligned} \partial_t P=&D\partial_x(\beta V_1' P+\partial_x P)+\omega_d(x) \rho-\omega_a(x) P,\cr \partial_t \rho=&D\partial_x(\beta V_2' \rho+\partial_x \rho)-\omega_d(x) \rho+\omega_a(x) P, \end{aligned}\right.\quad 0\le x\le L.$$ The steady state solution of (\[eq56\]) can be obtained under the following constraints: $$\label{eq57} \begin{aligned} P(0)=P(L),\quad \rho(0)=\rho(L),\quad \int_0^L(P+\rho)dx=1,\quad \int_0^L\omega_d\rho dx=\int_0^L\omega_aP dx. \end{aligned}$$ The corresponding probability flux is $$\label{eq58} \begin{aligned} J=-D\left(\beta V_1' P+\partial_x P+\beta V_2' \rho+\partial_x \rho\right), \end{aligned}$$ and the mean velocity of the molecular motor is $V=\int_{0}^{L}Jdx=-\beta D\int_{0}^{L}(V_1' P+V_2' \rho)dx=JL$. If $\omega_a(x), \omega_d(x)$ are constants, Eq. (\[eq56\]) has been discussed by Y.-D. Chen [@Chen1999], and it can be solved numerically using a method similar to the WPE method [@Wang2003; @Wang2004]. Special case I: $\omega_a(x)=\omega_d(x)\equiv 0$ for $0<x<L$ ------------------------------------------------------------- For this special case, the steady state probability densities $P(x), \rho(x)$ of finding the molecular motor at position $x$ are governed by the following equations $$\label{eq59} \left\{\begin{aligned} &D\partial_x(\beta V_1' P+\partial_x P)=0,\cr &D\partial_x(\beta V_2' \rho+\partial_x \rho)=0, \end{aligned}\right.\quad 0< x< L.$$ Meanwhile, $P(x), \rho(x)$ satisfy the following boundary conditions and normalization constraint: $$\label{eq60} P(0)=P(L),\quad\rho(0)=\rho(L),\quad\omega_aP(0)=\omega_d\rho(0),\quad\int_0^L(P+\rho)dx=1,$$ where $\omega_a=\omega_a(L), \omega_d=\omega_d(L)$.
The probability fluxes in the two states are $$\label{eq61} J=-D(\beta V_1' P+\partial_x P),\qquad j=-D(\beta V_2' \rho+\partial_x \rho).$$ So Eqs. (\[eq59\]) can be reformulated as $$\label{eq62} \left\{\begin{aligned} &\beta V_1' P+\partial_x P=-J/D,\cr &\beta V_2' \rho+\partial_x \rho=-j/D, \end{aligned}\right.\quad 0< x< L.$$ The general solutions of (\[eq62\]) are $$P(x)=\left(-\frac{J}{D}\int_0^xe^{\beta V_1(y)}dy+C_1\right)e^{-\beta V_1(x)},\quad \rho(x)=\left(-\frac{j}{D}\int_0^xe^{\beta V_2(y)}dy+C_2\right)e^{-\beta V_2(x)},$$ where the constants $C_1, C_2$ can be determined by the periodic boundary conditions $P(0)=P(L), \rho(0)=\rho(L)$: $$C_1=\frac{\frac{J}{D}\int_0^Le^{\beta V_1(y)}dy}{1-e^{-\beta\Delta V_1}},\qquad C_2=\frac{\frac{j}{D}\int_0^Le^{\beta V_2(y)}dy}{1-e^{-\beta\Delta V_2}},$$ with $\Delta V_i=V_i(0)-V_i(L)$. Therefore $$\label{eq63} P(x)=\frac{\frac{J}{D}\int_x^{x+L}e^{\beta [V_1(y)-V_1(x)]}dy}{1-e^{-\beta\Delta V_1}},\qquad \rho(x)=\frac{\frac{j}{D}\int_x^{x+L}e^{\beta [V_2(y)-V_2(x)]}dy}{1-e^{-\beta\Delta V_2}}.$$ From $\omega_aP(0)=\omega_d\rho(0)$, one sees $$\omega_a\frac{\frac{J}{D}\int_0^{L}e^{\beta [V_1(y)-V_1(0)]}dy}{1-e^{-\beta\Delta V_1}}= \omega_d\frac{\frac{j}{D}\int_0^{L}e^{\beta [V_2(y)-V_2(0)]}dy}{1-e^{-\beta\Delta V_2}},$$ so $$\label{eq64} j=\frac{\omega_a\left(e^{\beta V_2(0)}-e^{\beta V_2(L)}\right)\int_0^{L}e^{\beta V_1(y)}dy}{\omega_d\left(e^{\beta V_1(0)}-e^{\beta V_1(L)}\right)\int_0^{L}e^{\beta V_2(y)}dy}J.$$ From (\[eq63\]), (\[eq64\]) and the normalization condition $\int_0^L(P+\rho)dx=1$, one can easily get $$\label{eq65} \begin{aligned} J=&\frac{\omega_dD\left(e^{\beta V_1(0)}-e^{\beta V_1(L)}\right)\int_0^{L}e^{\beta V_2(y)}dy}{\star},\cr j=&\frac{\omega_aD\left(e^{\beta V_2(0)}-e^{\beta V_2(L)}\right)\int_0^{L}e^{\beta V_1(y)}dy}{\star}, \end{aligned}$$ where $$\begin{aligned} \star=&\omega_de^{\beta V_1(0)}\left(\int_0^{L}e^{\beta V_2(y)}dy\right)\left[\int_0^{L}e^{-\beta V_1(x)}\left(\int_x^{x+L}e^{\beta
V_1(y)}dy\right)dx\right]\cr &+\omega_ae^{\beta V_2(0)}\left(\int_0^{L}e^{\beta V_1(y)}dy\right)\left[\int_0^{L}e^{-\beta V_2(x)}\left(\int_x^{x+L}e^{\beta V_2(y)}dy\right)dx\right]. \end{aligned}$$ It can be easily found that, for this special case, the total probability flux $J+j=0$ if the potentials $V_1, V_2$ are periodic, i.e., $\Delta V_1=\Delta V_2=0$. From (\[eq30\]) and (\[eq65\]), one sees that the properties of this special case are similar to those of special case [**I**]{} of the two coupled one-dimensional hopping models [@Zhang2010]. Special case II: $\omega_a(x)=\omega_d(x)\equiv 0$ for $x\ne a, L$ ------------------------------------------------------------------ For this special case, the governing equations of the steady state probability densities $P(x), \rho(x)$ are $$\label{eq66} \begin{aligned} &\left\{\begin{aligned} &D\partial_x(\beta V_1' P_1+\partial_x P_1)=0,\cr &D\partial_x(\beta V_2' \rho_1+\partial_x \rho_1)=0, \end{aligned}\right.\quad 0< x< a,\cr &\left\{\begin{aligned} &D\partial_x(\beta V_1' P_2+\partial_x P_2)=0,\cr &D\partial_x(\beta V_2' \rho_2+\partial_x \rho_2)=0, \end{aligned}\right.\quad a< x< L, \end{aligned}$$ with the following constraints $$\label{eq67} \begin{aligned} &P_1(0)=P_2(L),\quad P_1(a)=P_2(a),\cr &\rho_1(0)=\rho_2(L),\quad \rho_1(a)=\rho_2(a),\cr &J_1=J_2+\omega_aP(a)-\omega_d\rho(a),\cr &j_1=j_2-\omega_aP(a)+\omega_d\rho(a),\cr &\omega_aP(a)+\Omega_aP(L)=\omega_d\rho(a)+\Omega_d\rho(L),\cr &\int_0^a(P_1+\rho_1)dx+\int_a^L(P_2+\rho_2)dx=1, \end{aligned}$$ where $\omega_a=\omega_a(a), \omega_d=\omega_d(a)$, $\Omega_a=\omega_a(L), \Omega_d=\omega_d(L)$, and $$J_i=-D(\beta V_1' P_i+\partial_x P_i),\qquad j_i=-D(\beta V_2' \rho_i+\partial_x \rho_i),\qquad\textrm{for } i=1, 2,$$ are the probability fluxes in the two states.
The general solutions of (\[eq66\]) can be written as follows $$\label{eq68} \begin{aligned} P_i(x)=-F_i(x)J_i+G_i(x)C_i,\qquad \rho_i(x)=-f_i(x)j_i+g_i(x)c_i,\qquad i=1, 2, \end{aligned}$$ where $$\label{eq69} \begin{array}{lll} F_1(x)=\frac{1}{D}e^{-\beta V_1(x)}\int_0^xe^{\beta V_1(y)}dy,\quad &G_1(x)=e^{-\beta V_1(x)},\quad &0\le x\le a,\cr F_2(x)=\frac{1}{D}e^{-\beta V_1(x)}\int_a^xe^{\beta V_1(y)}dy,\quad &G_2(x)=e^{-\beta V_1(x)},&a\le x\le L,\cr f_1(x)=\frac{1}{D}e^{-\beta V_2(x)}\int_0^xe^{\beta V_2(y)}dy,\quad &g_1(x)=e^{-\beta V_2(x)},&0\le x\le a,\cr f_2(x)=\frac{1}{D}e^{-\beta V_2(x)}\int_a^xe^{\beta V_2(y)}dy,\quad &g_2(x)=e^{-\beta V_2(x)},&a\le x\le L. \end{array}$$ From (\[eq67\]), (\[eq68\]), one can verify that $J_i, j_i$ and $C_i, c_i$ satisfy the following equations $$\label{eq70} \begin{aligned} &G_1(0)C_1=-F_2(L)J_2+G_2(L)C_2,\cr &-F_1(a)J_1+G_1(a)C_1=G_2(a)C_2,\cr &g_1(0)c_1=-f_2(L)j_2+g_2(L)c_2,\cr &-f_1(a)j_1+g_1(a)c_1=g_2(a)c_2,\cr &J_1=J_2+\omega_aG_2(a)C_2-\omega_dg_2(a)c_2,\cr &J_1+j_1=J_2+j_2,\cr &\omega_aG_2(a)C_2+\Omega_aG_1(0)C_1=\omega_dg_2(a)c_2+\Omega_dg_1(0)c_1,\cr &-\left(\int_0^aF_1dx\right)J_1+\left(\int_0^aG_1dx\right)C_1-\left(\int_a^LF_2dx\right)J_2+\left(\int_a^LG_2dx\right)C_2\cr &\quad-\left(\int_0^af_1dx\right)j_1+\left(\int_0^ag_1dx\right)c_1-\left(\int_a^Lf_2dx\right)j_2+\left(\int_a^Lg_2dx\right)c_2=1. \end{aligned}$$ For the sake of convenience, we rewrite the equations in (\[eq70\]) as $AX=B$, with $X=(J_1, C_1, J_2, C_2, j_1, c_1, j_2, c_2)^T$, $B=(0, 0, 0, 0, 0, 0, 0, 1)^T$ and $$A=\left[\begin{array}{cccccccc} 0& G_1(0)&F_2(L)&-G_2(L)&0&0&0&0\cr -F_1(a)&G_1(a)&0&-G_2(a)&0&0&0&0\cr 0&0&0&0&0&g_1(0)&f_2(L)&-g_2(L)\cr 0&0&0&0&-f_1(a)&g_1(a)&0&-g_2(a)\cr 1&0&-1&-\omega_aG_2(a)&0&0&0&\omega_dg_2(a)\cr 1&0&-1&0&1&0&-1&0\cr 0&\Omega_aG_1(0)&0&\omega_aG_2(a)&0&-\Omega_dg_1(0)&0&-\omega_dg_2(a)\cr -IF_1&IG_1&-IF_2&IG_2&-If_1&Ig_1&-If_2&Ig_2 \end{array}\right],$$ in which $IH_1=\int_0^aH_1dx, IH_2=\int_a^LH_2dx$ for $H=F, G, f, g$.
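The system $AX=B$ can be assembled and solved numerically once $F_i, G_i, f_i, g_i$ are computed by quadrature. Below is a minimal sketch (our own function names; trapezoidal rule; all eight equations of (\[eq70\]), including the inter-state balance condition, are used as rows, so $A$ is $8\times 8$):

```python
import numpy as np

def _trapz(y, x):
    # trapezoidal rule, written out to avoid NumPy version differences
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def _cumtrapz(y, x):
    # cumulative trapezoidal integral, same length as x, starting at 0
    out = np.zeros_like(y)
    out[1:] = np.cumsum((y[1:] + y[:-1]) * np.diff(x) / 2.0)
    return out

def solve_two_state(V1, V2, a, L, wa, wd, Wa, Wd, D=1.0, beta=1.0, n=2001):
    """Return X = (J1, C1, J2, C2, j1, c1, j2, c2) solving A X = B."""
    x1, x2 = np.linspace(0.0, a, n), np.linspace(a, L, n)

    def FG(V, x):   # F(x) = e^{-bV(x)} \int e^{bV} dy / D,  G(x) = e^{-bV(x)}
        e = np.exp(beta * V(x))
        return _cumtrapz(e, x) * np.exp(-beta * V(x)) / D, np.exp(-beta * V(x))

    F1, G1 = FG(V1, x1); F2, G2 = FG(V1, x2)
    f1, g1 = FG(V2, x1); f2, g2 = FG(V2, x2)
    A = np.array([
        [0, G1[0], F2[-1], -G2[-1], 0, 0, 0, 0],
        [-F1[-1], G1[-1], 0, -G2[0], 0, 0, 0, 0],
        [0, 0, 0, 0, 0, g1[0], f2[-1], -g2[-1]],
        [0, 0, 0, 0, -f1[-1], g1[-1], 0, -g2[0]],
        [1, 0, -1, -wa * G2[0], 0, 0, 0, wd * g2[0]],
        [1, 0, -1, 0, 1, 0, -1, 0],
        [0, Wa * G1[0], 0, wa * G2[0], 0, -Wd * g1[0], 0, -wd * g2[0]],
        [-_trapz(F1, x1), _trapz(G1, x1), -_trapz(F2, x2), _trapz(G2, x2),
         -_trapz(f1, x1), _trapz(g1, x1), -_trapz(f2, x2), _trapz(g2, x2)],
    ])
    B = np.zeros(8); B[-1] = 1.0
    return np.linalg.solve(A, B)
```

With $V_1=V_2$ and equal transition rates the computed net flux $J_1+j_1$ vanishes, consistent with (\[eq71\]).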
Although it can be obtained explicitly, the solution of $AX=B$ is very complex. So, for simplicity, we only discuss the special cases in which the potentials $V_1, V_2$ satisfy $\Delta V_1=\Delta V_2=0$. By routine analysis, one can obtain $$\begin{array}{ll} J_1=&F_2(L)G_1(a)[-\omega_a\Omega_dG_1(a)g_1(0)f_2(L)g_1(a)-\omega_a\Omega_dG_1(a)f_1(a)g_1(0)^2\cr &+\Omega_a\omega_dG_1(0)g_1(a)^2f_2(L)+\Omega_a\omega_dG_1(0)f_1(a)g_1(0)g_1(a)]/\det(A), \end{array}$$ $$\begin{array}{ll} J_2=&-F_1(a)G_1(0)[-\omega_a\Omega_dG_1(a)g_1(0)f_2(L)g_1(a)-\omega_a\Omega_dG_1(a)f_1(a)g_1(0)^2\cr &+\Omega_a\omega_dG_1(0)g_1(a)^2f_2(L)+\Omega_a\omega_dG_1(0)f_1(a)g_1(0)g_1(a)]/\det(A), \end{array}$$ $$\begin{array}{ll} j_1=&-f_2(L)g_1(a)[-\omega_a\Omega_dF_1(a)G_1(0)G_1(a)g_1(0)-\omega_a\Omega_dF_2(L)G_1(a)^2g_1(0)\cr &+\Omega_a\omega_dF_1(a)G_1(0)^2g_1(a)+\Omega_a\omega_dG_1(0)F_2(L)G_1(a)g_1(a)]/\det(A), \end{array}$$ $$\begin{array}{ll} j_2=&f_1(a)g_1(0)[-\omega_a\Omega_dF_1(a)G_1(0)G_1(a)g_1(0)-\omega_a\Omega_dF_2(L)G_1(a)^2g_1(0)\cr &+\Omega_a\omega_dF_1(a)G_1(0)^2g_1(a)+\Omega_a\omega_dG_1(0)F_2(L)G_1(a)g_1(a)]/\det(A), \end{array}$$ where $\det(A)$ is the determinant of the matrix $A$, and it can be proved that $\det(A)<0$. So the total probability flux is $$\label{eq71} \begin{aligned} J_1+j_1=&J_2+j_2\cr =&[\Omega_a\omega_dG_1(0)g_1(a)-\omega_a\Omega_dG_1(a)g_1(0)]\cr &\times[f_1(a)g_1(0)F_2(L)G_1(a)-F_1(a)G_1(0)f_2(L)g_1(a)]/\det(A)\cr =&\frac{[g_1(0)G_1(0)]^2f_1(a)F_1(a)}{\det(A)}\left[\Omega_a\omega_d\frac{g_1(a)}{g_1(0)}-\omega_a\Omega_d\frac{G_1(a)}{G_1(0)}\right]\cr &\times\left[\frac{F_2(L)}{F_1(a)}\frac{G_1(a)}{G_1(0)}-\frac{f_2(L)}{f_1(a)}\frac{g_1(a)}{g_1(0)}\right]\cr =&\frac{[g_1(0)G_1(0)]^2f_1(a)F_1(a)}{\det(A)}\left[\Omega_a\omega_de^{\beta(V_2(0)-V_2(a))}-\omega_a\Omega_de^{\beta(V_1(0)-V_1(a))}\right]\cr &\times\left[\frac{\int_a^Le^{\beta V_1(y)}dy}{\int_0^ae^{\beta V_1(y)}dy}-\frac{\int_a^Le^{\beta V_2(y)}dy}{\int_0^ae^{\beta V_2(y)}dy}\right].
\end{aligned}$$ Obviously, $J_i+j_i>0$ if and only if $\left[\Omega_a\omega_de^{\beta(V_2(0)-V_2(a))}-\omega_a\Omega_de^{\beta(V_1(0)-V_1(a))}\right]\times$ $\left[\frac{\int_a^Le^{\beta V_1(y)}dy}{\int_0^ae^{\beta V_1(y)}dy}-\frac{\int_a^Le^{\beta V_2(y)}dy}{\int_0^ae^{\beta V_2(y)}dy}\right]<0$. In view of the expression in (\[eq46\]), one can find that the properties of this special case are similar to those of special case [**II**]{} of the coupled one-dimensional hopping models. The mean velocity of the molecular motor might not be zero even if there is no energy input in each state. For this special case, the energy for motor motion comes from the processes that drive the motor from one state to another [@Parmeggiani1999; @Astumian1997; @Parrondo2002; @Reimann20021]. Special case III: $\omega_a(x)=\omega_d(x)\equiv 0$ for $x\ne a, L$, and $V_2(x)$ is constant --------------------------------------------------------------------------------------------- For this special case, the governing equations of the steady state probability densities $P(x), \rho(x)$ are as follows $$\label{eq72} \begin{aligned} &\left\{\begin{aligned} &D\partial_x(\beta V_1' P_1+\partial_x P_1)=0,\cr &D\partial^2_x\rho_1=0, \end{aligned}\right.\quad 0< x< a,\cr &\left\{\begin{aligned} &D\partial_x(\beta V_1' P_2+\partial_x P_2)=0,\cr &D\partial^2_x\rho_2=0, \end{aligned}\right.\quad a< x< L. \end{aligned}$$ Its general solutions are (\[eq68\]) but with $f_1(x)=x/D$, $f_2(x)=(x-a)/D$, and $g_i(x)\equiv 1$.
The solution which satisfies the constraints (\[eq67\]) is as follows $$\begin{aligned} &\begin{array}{ll} J_1=&-2[\omega_dG_1(0)G_2(a)DL-\omega_dG_1(a)G_2(L)DL+\Omega_dG_1(0)G_2(a)DL\cr &-\Omega_dG_1(a)G_2(L)DL+\Omega_d\omega_dG_1(0)G_2(a)aL-\Omega_d\omega_dG_1(0)G_2(a)a^2\cr &-\Omega_d\omega_dG_1(a)G_2(L)aL+\Omega_d\omega_dG_1(a)G_2(L)a^2\cr &-\omega_a\Omega_dG_1(a)F_2(L)G_2(a)DL+\Omega_a\omega_dG_1(0)F_2(L)G_2(a)LD]/\det(A), \end{array}\cr &\begin{array}{ll} J_2=&-2[\omega_a\Omega_dF_1(a)G_1(0)LDG_2(a)-\Omega_a\omega_dF_1(a)G_1(0)LDG_2(L)\cr &+\omega_dG_1(0)G_2(a)DL-\omega_dG_1(a)G_2(L)DL+\Omega_dG_1(0)G_2(a)DL\cr &-\Omega_dG_1(a)G_2(L)DL+\Omega_d\omega_dG_1(0)G_2(a)aL\cr &-\Omega_d\omega_dG_1(a)G_2(L)aL-\Omega_d\omega_dG_1(0)G_2(a)a^2+\Omega_d\omega_dG_1(a)G_2(L)a^2]/\det(A), \end{array}\cr &\begin{array}{ll} j_1=&2(L-a)[-\omega_a\Omega_dF_1(a)G_1(0)G_2(a)-\omega_a\Omega_dG_1(a)F_2(L)G_2(a)\cr &+\Omega_a\omega_dF_1(a)G_1(0)G_2(L)+\Omega_a\omega_dG_1(0)F_2(L)G_2(a)]D/\det(A), \end{array}\cr &\begin{array}{ll} j_2=&-2a[-\omega_a\Omega_dF_1(a)G_1(0)G_2(a)-\omega_a\Omega_dG_1(a)F_2(L)G_2(a)\cr &+\Omega_a\omega_dF_1(a)G_1(0)G_2(L)+\Omega_a\omega_dG_1(0)F_2(L)G_2(a)]D/\det(A). \end{array} \end{aligned}$$ So the total probability flux is $$\label{eq73} \begin{aligned} J_1+j_1=&J_2+j_2\cr =&2\{[G_1(a)G_2(L)-G_1(0)G_2(a)][\omega_dDL+\Omega_dDL+\omega_d\Omega_d(L-a)a]\cr &+2aDF_2(L)G_2(a)[\omega_a\Omega_dG_1(a)-\Omega_a\omega_dG_1(0)]\cr &+2D(L-a)F_1(a)G_1(0)[\Omega_a\omega_dG_2(L)-\omega_a\Omega_dG_2(a)]\}/\det(A). \end{aligned}$$ More specifically, if the potential $V_1(x)$ is periodic and continuous at $a$, then $G_1(0)=G_2(L), G_1(a)=G_2(a)$.
So $$\label{eq74} \begin{aligned} J_1+j_1=&\frac{2D}{\det(A)}[\omega_a\Omega_dG_1(a)-\Omega_a\omega_dG_1(0)][aF_2(L)G_2(a)-(L-a)F_1(a)G_1(0)]\cr =&\frac{2D(G_1(0))^2G_2(a)}{\det(A)}[\omega_a\Omega_de^{\beta (V_1(0)-V_1(a))}-\Omega_a\omega_d]\cr &\times\left(a\int_a^Le^{\beta V_1(x)}dx-(L-a)\int_0^ae^{\beta V_1(x)}dx\right)\cr =&\frac{2D(G_1(0))^2G_2(a)}{\det(A)}[\omega_a\Omega_de^{\beta (V_1(0)-V_1(a))}-\Omega_a\omega_d]\cr &\times\left(a\int_0^Le^{\beta V_1(x)}dx-L\int_0^ae^{\beta V_1(x)}dx\right)\cr =&\frac{2LD(G_1(0))^2G_2(a)\int_0^Le^{\beta V_1(x)}dx}{\det(A)}[\Omega_a\omega_d-\omega_a\Omega_de^{\beta (V_1(0)-V_1(a))}]\cr &\times\left(\frac{L-a}{L}-\frac{\int_a^Le^{\beta V_1(x)}dx}{\int_0^Le^{\beta V_1(x)}dx}\right). \end{aligned}$$ Therefore, the total probability flux $J_1+j_1>0$ if and only if $[\omega_a\Omega_de^{\beta (V_1(0)-V_1(a))}-\Omega_a\omega_d]\left(a\int_0^Le^{\beta V_1(x)}dx-L\int_0^ae^{\beta V_1(x)}dx\right)<0$. As before, comparing (\[eq52\]) and (\[eq74\]) shows the similarity between them [@Zhang2010]. To better understand the properties of our model, we discuss the direction of the probability fluxes here. For the special case in which there are only two locations at which the inter-state transition rates are nonzero, i.e., special case [**II**]{}, there are altogether $18$ different types of probability flux. Since states 1 and 2 are temporally symmetric, we restrict our discussion to the cases in which $\omega_aP(a)-\omega_d\rho(a)\ge 0$ for the continuous model, or $\omega_aP_M-\omega_d\rho_M\ge 0$ for the hopping model. Then, there are altogether $9$ different types of probability flux (see Fig. \[Fig5\]). Furthermore, if $\Delta V_1=\Delta V_2=0$, then there is only one type (see the figure (2, 2) in Fig. \[Fig5\]). On the other hand, if the potential $V_2$ is constant (or $f_i=b_i\equiv f$ for the hopping model), then there are altogether 3 different types of probability flux (see the second column in Fig. \[Fig5\]).
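The sign criterion for special case III can be evaluated directly by quadrature. The sketch below (our own function names; trapezoidal rule) returns the sign of $J_1+j_1$ predicted by the condition following (\[eq74\]), with $\omega_a,\omega_d$ the rates at $x=a$ and $\Omega_a,\Omega_d$ the rates at $x=L$:

```python
import numpy as np

def flux_sign_case_III(V1, a, wa, wd, Wa, Wd, beta=1.0, L=1.0, n=2001):
    """Sign of J1 + j1 from the criterion following Eq. (74):
    J1 + j1 > 0 iff
      [wa*Wd*exp(beta*(V1(0)-V1(a))) - Wa*wd] * (a*I_L - L*I_a) < 0,
    where I_L = int_0^L e^{beta V1} dx and I_a = int_0^a e^{beta V1} dx.
    Returns +1, 0, or -1."""
    trap = lambda y, t: float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)
    xL, xa = np.linspace(0.0, L, n), np.linspace(0.0, a, n)
    I_L = trap(np.exp(beta * V1(xL)), xL)
    I_a = trap(np.exp(beta * V1(xa)), xa)
    term = ((wa * Wd * np.exp(beta * (V1(0.0) - V1(a))) - Wa * wd)
            * (a * I_L - L * I_a))
    return -int(np.sign(term))   # flux is positive exactly when term < 0
```

For a constant potential with equal rates both factors vanish and the flux is zero; for an asymmetric periodic potential the flux is generically nonzero.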
Conclusions =========== In conclusion, two-state chemical models of molecular motors are discussed in this paper. For some special cases, explicit expressions for the mean velocity are obtained. We find that the mean velocity of a molecular motor might not be zero even if both of the potentials are periodic, which means there is no energy input to the molecular motor in each of the chemical states. The energy for molecular motor motion comes from the processes that drive the motor from one state to another. For motor proteins, this process is ATP hydrolysis. At the same time, from the expression for the mean velocity, we find that the velocity of a molecular motor might be zero even if there exists nonzero input energy. This implies that the motion of motor proteins is usually loosely coupled to ATP hydrolysis [@Bieling2008; @Endres2006; @Seidel2008; @Shaevitz2005; @Yildiz2008; @Gao2006; @Nishikawa2008; @Masuda2009; @Gerritsma2009].
![Schematic depiction of two coupled one-dimensional hopping models. Here the forward and backward transition rates of the molecular motor in state 1 are denoted by $F_n$ and $B_n$, and by $f_n$ and $b_n$ in state 2, for $1\le n\le N$, where $N$ is the period of the hopping models. The inter-state transition rates at position $n$ are denoted by $\omega_a^n$ (states 1$\to$2) and $\omega_d^n$ (states 2$\to$1). For motor proteins, $\omega_a^n$, $\omega_d^n$ depend on the chemical potentials and concentrations of ATP and ADP. []{data-label="Fig1"}](lat2 "fig:"){width="400pt"}\ ![Special case [**I**]{} of two coupled one-dimensional hopping models. In which $\omega_a^n=\omega_d^n=0$ for $1\le n\le N-1$, and $\omega_a^N=\omega_a$, $\omega_d^N=\omega_d$. For this special case, the mean velocity of molecular motor would be zero if there is no energy input in each of the two states, i.e., $\Delta G_1=\Delta G_2=0$. In fact, at steady state, there is also no energy input during the process that drives the motor from one state to another, since $\omega_aP_N=\omega_d\rho_N$.
[]{data-label="Fig2"}](lat3 "fig:"){width="400pt"}\ ![Special case [**II**]{} of two coupled one-dimensional hopping models. In which $\omega_a^n=\omega_d^n=0$ for $n\ne M, N$, and $\omega_a^M=\omega_a$, $\omega_d^M=\omega_d$, $\omega_a^N=\Omega_a$, $\omega_d^N=\Omega_d$. For this special case, the mean velocity of molecular motors might not be zero even if there is no energy input in each of the two states, i.e., $\Delta G_1=\Delta G_2=0$. Since there usually exists energy input to molecular motor during its jump from one state to another unless $\omega_aP_M=\omega_d\rho_M$ and $\Omega_aP_N=\Omega_d\rho_N$. []{data-label="Fig3"}](lat4 "fig:"){width="400pt"}\ ![Different types of probability flux for special case [**II**]{}. From these figures, one can see that the motion of molecular motors is usually loosely coupled to the energy input process, i.e., there might exist energy input but without directed macroscopic mechanical motion. Part of the input energy will be consumed during substep oscillation.[]{data-label="Fig5"}](lata1 "fig:"){width="130pt"}![Different types of probability flux for special case [**II**]{}. From these figures, one can see that the motion of molecular motors is usually loosely coupled to the energy input process, i.e., there might exist energy input but without directed macroscopic mechanical motion. Part of the input energy will be consumed during substep oscillation.[]{data-label="Fig5"}](lata2 "fig:"){width="130pt"}![Different types of probability flux for special case [**II**]{}. From these figures, one can see that the motion of molecular motors is usually loosely coupled to the energy input process, i.e., there might exist energy input but without directed macroscopic mechanical motion. Part of the input energy will be consumed during substep oscillation.[]{data-label="Fig5"}](lata3 "fig:"){width="130pt"}\ ![Different types of probability flux for special case [**II**]{}. 
From these figures, one can see that the motion of molecular motors is usually loosely coupled to the energy input process, i.e., there might exist energy input but without directed macroscopic mechanical motion. Part of the input energy will be consumed during substep oscillation.[]{data-label="Fig5"}](lata4 "fig:"){width="130pt"}![Different types of probability flux for special case [**II**]{}. From these figures, one can see that the motion of molecular motors is usually loosely coupled to the energy input process, i.e., there might exist energy input but without directed macroscopic mechanical motion. Part of the input energy will be consumed during substep oscillation.[]{data-label="Fig5"}](lata5 "fig:"){width="130pt"}![Different types of probability flux for special case [**II**]{}. From these figures, one can see that the motion of molecular motors is usually loosely coupled to the energy input process, i.e., there might exist energy input but without directed macroscopic mechanical motion. Part of the input energy will be consumed during substep oscillation.[]{data-label="Fig5"}](lata6 "fig:"){width="130pt"}\ ![Different types of probability flux for special case [**II**]{}. From these figures, one can see that the motion of molecular motors is usually loosely coupled to the energy input process, i.e., there might exist energy input but without directed macroscopic mechanical motion. Part of the input energy will be consumed during substep oscillation.[]{data-label="Fig5"}](lata7 "fig:"){width="130pt"}![Different types of probability flux for special case [**II**]{}. From these figures, one can see that the motion of molecular motors is usually loosely coupled to the energy input process, i.e., there might exist energy input but without directed macroscopic mechanical motion. 
Part of the input energy will be consumed during substep oscillation.[]{data-label="Fig5"}](lata8 "fig:"){width="130pt"}![Different types of probability flux for special case [**II**]{}. From these figures, one can see that the motion of molecular motors is usually loosely coupled to the energy input process, i.e., there might exist energy input but without directed macroscopic mechanical motion. Part of the input energy will be consumed during substep oscillation.[]{data-label="Fig5"}](lata9 "fig:"){width="130pt"}\ [^1]: The one-state model can be regarded as a simplification of the multi-state model but with an effective potential, Theoretically, this effective potential can be obtained by weighted average of potentials in each state of the multi-state model [@Wang2004]. [^2]: For motor proteins, $\omega_a(x), \omega_d(x)$ depend on the standard chemical potentials and concentrations of ATP, ADP and ionic phosphate Pi [@Howard2001; @Parrondo2002].
--- author: - | Ari S. Morcos[^1]\ Facebook AI Research\ `arimorcos@fb.com`\ Haonan Yu\ Facebook AI Research\ `haonanu@gmail.com`\ Michela Paganini\ Facebook AI Research\ `michela@fb.com`\ Yuandong Tian\ Facebook AI Research\ `yuandong@fb.com`\ bibliography: - 'references.bib' title: 'One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers' --- [^1]: To whom correspondence should be addressed
--- abstract: 'Lack of interactions and engagement among different welfare organizations, donors, and the general public severely limits opportunities that welfare organizations create for street children in third-world countries. Developing a digital hub can eliminate barriers by integrating people across all groups. However, in doing so, critical human factor challenges need to be overcome. Accordingly, in this study conducted over a couple of years, we design and develop a digital hub deployed to serve children living on the streets in Dhaka, the capital of Bangladesh (an excellent candidate to represent life in highly populated cities in a third-world country). Here, we a) conduct face-to-face interviews with members of representative welfare organizations to identify fundamental human-centric challenges; b) design a digital hub that provides an effective interface for pairing welfare organizations, donors, and the general public; c) enable different organizations to use our system 24$\times$7, and report results; and d) present critical human-computing issues, further challenges, theoretical advances, and potential scope for future work in this realm with broad applicability.' author: - | \ \ \ \ \ \ bibliography: - 'sample.bib' --- Introduction ============ *Sustainable development* [@sustainable] (such as economic, social, educational, etc.) is a major challenge all over the world.
Sustainable development for society cannot be achieved when resource allocation consistently favors privileged communities only, while underprivileged communities are made to fend for themselves [@ict4d]. Experts across the world are now trying to eradicate such barriers against sustainable development via computing technologies [@effectiveComputing]. In doing so, they face unique human and socio-economic challenges [@humanfactors] that limit the potential of computing technologies from development to deployment phases. Such challenges are more prominent when target populations have special needs, for example, underprivileged children in poorer countries. Let us look at Bangladesh, which has $164$ million people and a density of $1,250$ people per square kilometer [@population]. People from all over the country converge to Dhaka (the capital city) for livelihood, and most of them face severe economic hardships and exploitation. The most vulnerable among them are *street children* (known as *‘tokai’* in Bengali), who suffer from starvation, physical abuse, sexual harassment, bonded labor, lack of health care and more. They often end up committing crimes that warrant arrests, which furthers their spiral towards destruction. Naturally, in a country like Bangladesh, or in any similar country with children still living on streets, sustainable development for them will only be a dream with the status quo. Furthermore, the poorer a country is, the more are the challenges and ensuing inaction from governments. While larger organizations such as **UNICEF, JAAGO, Save the Children**, etc., are doing their part in Bangladesh, these alone are insufficient, and as a result, several Non-Governmental Organizations (called NGOs) mainly operated by young and educated people are now emerging and trying to tackle this problem, either independently or by being a bridge between needy children and more established organizations [@brown1991bridging].
Our motivation in this paper is to analyze the operational mechanisms of these smaller NGOs that work with street children in Bangladesh and to use our findings to enhance their effectiveness. Despite common goals and objectives among multiple such NGOs, we find subtle yet tangible differences among them that create barriers to their effectiveness. Most critically, we found that there is always some form of a “communication/engagement gap" between NGOs, donors, and the general public. Unless there is sufficient engagement, credibility, accountability, and clearly defined agendas, donors are unwilling to financially support NGOs, without which NGOs cannot create impact for street children. In this paper, we design, deploy, and evaluate a digital and integrated organizational hub for a) addressing current deficiencies of small–scale NGOs working for street children in Bangladesh; and b) assisting donors and the general public stay engaged with the NGOs. Based on the study, our contributions are as follows: - After several formal and informal discussions with correspondents from NGOs, social workers, and street children, and based on membership sizes and operating structures, we first classified small-scale NGOs as micro and semi-micro organizations. Micro organizations are smaller, with limited financial resources and about $20$ volunteers. To establish an initial foothold in the field, organizations prefer this model, and it is also easier for external entities (e.g., the authors of this paper) to engage with these NGOs. Semi-micro NGOs are larger, with about $80$ volunteers and more financial clout. In such NGOs, we identified complex rules, rigid traditions/regulations, and significantly diverse visions among members that often create unification challenges. Furthermore, convincing all members to adopt any new technology (e.g., our system) was much harder in this case.
Designing technologies to accommodate both models is challenging, as elaborated later in the paper. - We conducted two rounds of rigorous and semi-structured face-to-face interviews with three NGOs (two micro and one semi-micro) working for street children. The first interview helped us understand operating structures, challenges, needs, policies, working procedures, goals, traditions, values, and visions of NGO personnel. Analyzing these outcomes led us to design a unique and novel participatory social interactive platform specially designed for NGOs working towards the upliftment of street children. Our system enables NGOs to engage with local communities such as donors, the general public, interested volunteers, etc., by showcasing organizational activities and events. On the other hand, the general public can be a part of organizations through voluntary deeds and funding. After testing our system for $30$ days, each of the three NGOs was contacted for a second round of interviews to solicit feedback, which, as we report later in the paper, is not only very encouraging in terms of donations and donor behavior but also helped us identify unique ergonomic challenges. - We also incorporated two new organizations (that we did not interview before): one working for street children, similar to the three organizations above, and another working for orphans (i.e., an actual orphanage home). Both organizations actively used our system. More encouragingly, the orphanage received donations from new donors after it was introduced via our system. The donations exhibit a degree of temporal alignment with the Arabic month of Ramadan (a holy month for Muslims). - Finally, we provide a comprehensive discussion on all lessons learned so that ideas using computing technologies such as the ones we develop could be scaled and hopefully find wider acceptance for the upliftment of vulnerable communities like children living on streets in low-income countries.
Based on our fieldwork, we have added some insights in light of existing models of ICT adoption in non–profit organizations that might be valuable for community stakeholders, designers, social entrepreneurs, researchers, and practitioners. Related Work {#sec:relat} ============ The HCI community has a significant interest in developing computing solutions for the enhancement of marginalized communities such as refugees, women, religious minorities, etc. However, the needs of each community are distinct, and economic and cultural sensitivities also play a role in any solution designed to offer help. In [@DIKC], ICT-enabled institutions named Digital Knowledge Information Centers (DKICs) are used in the context of Latin America (Argentina) to improve literacy and e-literacy and to provide useful skills for street children. The DKIC focuses on the facilities required to implement the strategies adopted for street children. These information centers are operated by government agencies, whereas our solution focuses on struggling NGOs, with the goal of effectively engaging stakeholders to alleviate financial and organizational challenges through a 24/7 interactive platform. To improve the educational experience of the marginalized children of rural India, an interactive teaching-learning tool with multimedia applications is introduced in [@interactivetool]. BingBee [@slay2006bingbee] is an information kiosk deployed among the street children of South Africa to improve their educational experiences. In a similar context, an ethnographic study of the homeless people of Los Angeles reveals how technology is owned and used by homeless people to enable social ties [@homelessness]. A qualitative study [@perception] on homeless people in Los Angeles details how technologies can improve their lives.
The study specifically investigated how homeless people use computing devices and mobile phones (which are not uncommon for them to have) for earning money, searching for jobs, socializing, and even sleeping at night. However, it is important to point out that such digital devices are simply unaffordable for homeless people in poorer countries, even more so for street children, and even for volunteers who want to help them, due to low-income settings. In Western countries, the structure of non-profit organizations and fundraising processes differ from those in third-world countries. The studies [@goecks2008charitable; @merkel2007managing] discuss how participatory design can be used to empower organizational activities. Upon surveying related work, we see that there are very limited studies on how computing technologies can assist welfare organizations. This may be because in Western and Far Eastern societies, welfare organizations are relatively richer and can afford to pay for customized technologies themselves. With more taxpayer money, governments can also chip in with resources. In [@Organization], two non-profit voluntary organizations dedicated to homeless people in a US metropolitan city are studied. The findings relate to challenges emanating from the adoption of technologies, how the nature of volunteerism impacts these technologies, and how governmental assistance interacts with services offered by welfare organizations. From the perspective of improving the lives of children, there are some very interesting ongoing research studies. For example, the government of Cambodia [@combodia] has taken an initiative to count homeless adolescents aged between $13$ and $17$ across seven cities of the country using manual counting and forecasting. 
In Germany, there is a project that connects children and adult computer club participants in a participatory online platform called “come\_NET” [@comenet] to promote sharing of ideas across borders, support collaboration, and enhance life skills in local neighborhoods. A five-week-long qualitative study on children in a refugee camp in Palestine used 3D modeling and printing technologies for education [@palestine]. These technologies focus only on the specific needs of their target groups with customized solutions. In contrast, we propose a unique theory-driven model for connecting all stakeholders of society for sustainability, which can be applied to similar contexts (e.g., orphans). While the technologies surveyed above (for refugees, homeless people, and children) are all exciting ideas, the problem addressed in our paper is unique. Here, focusing on Bangladesh, we consider a real case of NGOs having very few volunteers, scant funds, and low visibility, with almost no governmental assistance. Providing food, clothing, shelter, and medicines for street children are major challenges for these NGOs. Furthermore, the plight of such children and NGOs in poorer countries has not been formally studied yet. As such, the interviews conducted in this paper, the challenges of NGOs in this space, the trends reported after interviews, our system design, and the results of the NGOs using our system are all unique, and can help peer researchers attempting to address similar problems in poorer countries. Welfare Organizations We Engaged with and Our Context {#sec:Related Context} ===================================================== It is sadly difficult to determine the actual number of street children in Bangladesh, though a report [@Street] indicates that it was about $1.5$ million in $2015$. 
However, with more than a few organizations (a few big, and some small) working for the welfare of street children, we wondered why so many are still on the streets of Dhaka begging for food, impoverished and vulnerable to exploitation. \[fig:A schooling session arranged by an organization\] \[fig:A schooling session arranged by an organization project khata kolom\] \[fig:Free schooling arranged by an organization named “Shishu Bikash”\] To answer this question, we chalked out a process to primarily interact with NGOs (Non-Governmental Organizations) themselves, along with street children, since focusing only on the latter group may help identify problems, but not solutions. However, instead of blindly hunting down NGOs, we decided to narrow our search to ones run by younger people, such as recent university graduates, current university students, or freelance volunteers. These NGOs are critically important for improving the lives of street children, even though they experience more financial, time, and other resource constraints compared to more established organizations (e.g., UNICEF); therefore, we anticipated that these NGOs might be willing to give us time for our study. Furthermore, it is natural to infer that these smaller-scale NGOs would also be more amenable to experimenting with our technologies, and to adapting their existing ideas and execution mechanisms accordingly, for the greater good. The first thing we found out was that there was no established database anywhere on NGOs working for street children in Bangladesh. We also found that common citizens across all age groups and wealth statuses simply had no idea about these NGOs, or how to reach them, beyond a basic notion that “[*such organizations exist somewhere in Dhaka and are doing something*]{}". This was sad for us to discover. We then had to query our friends, and also some street children who knew NGO members. 
We also visited many universities with social science programs to talk to faculty and students there. After five weeks of effort, we found three NGOs whose members were willing to engage with us. The three organizations in our study are *Putul* (meaning Doll), *Project Child Care* (formerly called [*Khata Kolom*]{}, meaning *Book and Pen*), and *Shishu Bikash* (meaning Blooming Child). Figs. \[fig:A schooling session arranged by an organization\], \[fig:A schooling session arranged by an organization project khata kolom\], and \[fig:Free schooling arranged by an organization named “Shishu Bikash”\] depict different sessions organized by these three NGOs. Note that all images were captured with the approval of NGO members, and they permitted us to use these images for research purposes. We point out that *Putul* and *Project Child Care* each have around $20$ volunteering members, which we classify as micro-organizations. These are relatively new NGOs that have been in operation for less than three years. *Shishu Bikash* is a more established NGO that has had a sound operating structure for several years and has more than $80$ volunteers. We classified it as a semi-micro organization. We found that each organization caters to a specific geographic zone in Dhaka, and each serves anywhere from $50$ to $100$ children living on the streets. Activities include informal schooling sessions at specific times of the week, supplying medicines, providing clothes, feeding the children nutritious meals, conducting special events on national holidays, and providing other basic services. The organizations primarily depend on personal funds from volunteers and funds from donors to support these efforts. Naturally, the latter portion of funds is most vital for sustenance. Anyone can become a volunteer once he/she agrees to the basic principles of the respective NGO. Membership can generally be divided into three parts, depending on the size of the organization: advisory panel, volunteers, and administrative panel. 
Despite having similar goals, there are mild differences in organizational structure, meeting times, meeting types, degree of communication or cooperation among members, etc., which will be discussed later in the paper. Steps Executed in Our Research {#sec:Steps} ============================== In Fig. \[fig:Workflow of This Study\], we present the workflow of our study. At the outset, we conducted two rounds of semi-structured face-to-face interviews with personnel from the three organizations. For the first round of face-to-face interviews, we created a questionnaire with $15$ questions to get an overview of each organization’s nature, working procedures, operating structure, goals, vision, policies, etc. We also gathered basic metadata about the organizations, such as the total number of members and the total number of children served. Then, based on these interviews and our analysis of the responses, we designed and deployed a digital hub to benefit these NGOs, and enabled them to access our system for $30$ days. Subsequently, we conducted a second round of interviews with the members of the NGOs to understand their perspectives, which led us to more rigorous insights on how computing technologies can aid their operations. Related discussions and findings are elaborated later in the paper. First Round of Face-to-Face Interviews and Its Findings ------------------------------------------------------- We visited each NGO in person multiple times and attended meetings after initial phone calls. We designed our questionnaire around basic metadata about the NGOs, their members, and related problems. The interviewees were the administrator or founder of each NGO and four other members of that NGO, five to six members in total. Each interview lasted about $1.5$ to $2$ hours, and we took numerous notes and comments, and in some cases recorded the audio conversation with permission. The age range of the interviewees was $20$ to $25$ years. The interviews were conducted in Bengali. 
Since the number of volunteers and the amount of external funds are most vital for the sustenance of welfare organizations, our interviews eventually centered on these topics more than others. Firstly, we wanted to find out how a common citizen can learn about such organizations if he/she wants to be part of one by becoming a volunteer or by donating funds. The answer is not straightforward. As stated earlier, there is no specific list or database of such organizations, and it is not easy for a common citizen to learn of their existence. Current volunteers informed us that the effort to join or contribute to a welfare organization in Bangladesh can be so strenuous that many are dissuaded from even attempting to do so. This was a serious challenge identified in our discovery process. Secondly, we found that organizations are currently unable to showcase their voluntary activities effectively to the general public. What is sadder is that all interviewees knew the importance of doing so. Here, financial and time constraints are the most notable barriers. Small organizations are forced to use every Taka (the Bangladeshi currency) they receive to satisfy the basic needs of street children, and nothing is left for promotional activities. With whatever is left, they end up putting fliers and posters on roads, bus stations, and train stations, and the general public hardly notices them amid numerous posters for other purposes (political, job, religious, etc.). The problem is more complicated since more established agencies with their existing clout do attract funds from donors; however, the donors that fund such big agencies are simply unaware of local organizations in their neighborhoods, only because there is no mechanism for them to know. With these barriers in place, it became clear that little more could be done to improve the situation through existing means, and the volunteers we spoke to agreed with this point. 
Thirdly, volunteers also mentioned that in low-income countries, donors are fewer in number, and those who are willing to contribute funds expect high standards of accountability and credibility; demonstrating this to donors was also identified as a major challenge. We now present critical features of our interview process, and how we arrived at the above conclusions. First off, it took us a month to finish all interviews and digest the responses. We followed thematic analysis [@thematic] to transcribe our collected information. A total of $15$ volunteers, spread evenly across the three NGOs, were the interviewees. We repeatedly read all notes collected during interviews, listened to the audio recordings many times, and discussed findings among ourselves many times over to arrive at our conclusions. All conclusions identified above were similar for all three NGOs (two micro and one semi-micro). To reiterate, the key findings are: a) the impact that small-scale NGOs create for street children by providing for their most basic needs, like food, medicine, clothes, and shelter, is vital in their local neighborhoods; b) the NGOs suffer from low membership numbers and, more importantly, scant funds, which is unsustainable; c) merely putting up fliers in public spaces does not provide any benefit to these organizations, since potential donors simply do not see these fliers, or are not impressed enough; and d) a digital hub that can create effective engagement between NGOs and the general public is critically important for showcasing activities, demonstrating impact, and improving the credibility and accountability of NGOs, which in turn may attract more members and funds that can be used by NGOs to improve the lives of street children. Design and Development ---------------------- After our first round of interviews, we had a clear overview of the NGOs, the demographic information of their volunteers, and the problems they face while running these organizations. 
Through iterative processing and reviewing of the data (notes and audio records), we found a common keyword, **Effective Engagement**, across all three NGOs, in addition to identifying unique details of the NGOs’ organizational structures. We were also excited, as we were now confident that we could design a technology-based solution to address their most critical need: how to effectively showcase their activities via designing and deploying a digital hub to connect volunteers, donors, and the general public. Fig. \[fig:Block Diagram\] presents a simple block diagram of our design in this regard. Based on the design, we proceeded to develop an integrated participatory organization hub encompassing diverse welfare organizations and called our system **BONDHON** (in Bengali, BONDHON means strong ties with people). In **BONDHON**, our goal is to bring organizations under a unified platform, while still preserving their respective individualities. Most importantly, members of the general public who are willing to volunteer or donate to smaller NGOs now have **ONE** unified web platform to engage with members. Volunteers and organizations can use our free portal to enter their credentials, and can immediately be noticed based on featured services provided or locations (as queried by the general public) without a) the organizations having to spend time and money on publicity, and b) the general public struggling to locate organizations in and around where they want to help[^1]. Our system was developed in Java using the Spring MVC Framework. The front end was developed using HTML, CSS, JavaScript, and Bootstrap, and the database was developed using MySQL. We have hosted it on Amazon Web Services (AWS). 
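BONDHON itself is implemented in Java with Spring MVC and MySQL; purely as an illustration, the core entities the portal manages (organizations, volunteers, cataloged children with foster status, and public ratings on a 1 to 5 Likert scale) could be modeled roughly as in the minimal Python sketch below. All class and field names here are our own hypothetical choices, not the actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Child:
    name: str
    age: int
    foster_parent: Optional[str] = None  # set when a foster parent takes responsibility

    @property
    def is_fostered(self) -> bool:
        return self.foster_parent is not None

@dataclass
class Organization:
    name: str
    zip_code: str
    activities: List[str]                            # e.g., ["education", "healthcare"]
    volunteers: List[str] = field(default_factory=list)
    children: List[Child] = field(default_factory=list)
    ratings: List[int] = field(default_factory=list)  # 1-5 Likert scores from the public

    def average_rating(self) -> float:
        # Public feedback shown on the organization's portal page
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

    def fostered_children(self) -> List[Child]:
        # "Foster Parent Management": fostered children are tracked separately,
        # since they reside in another home
        return [c for c in self.children if c.is_fostered]
```

The separate foster-parent field mirrors the requirement, voiced in the interviews, that children placed with foster parents need a different tracking mechanism from those still on the streets.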
In our web-based system, a) volunteers in any NGO can better manage their data on existing children and services; create simple dashboards; maintain records of activities; upload videos, pictures, and promotional materials in an engaging form; see outstanding requests for membership, comments, or intents to fund from the general public; and more; b) prospective volunteers from the general public can search for any organization by interest, zip code, or colloquial name, and can join as a member; and c) donors can do the above, browse activities, see promotional materials, and have a mechanism to express intent to donate funds. In a nutshell, our initial prototype offers the following services based on stakeholders: - **Virtual Organization:** A publicly accessible platform that enables members of any existing NGO to create a specific portal for that particular organization, as shown in Fig. \[fig Creating new organization page\]. Databases pertaining to the organization can be custom created and maintained. Outstanding requests for membership, questions, or comments can be viewed and responded to. The portal enables easy uploads of photos, videos, and other forms of promotional materials with keywords, which are searchable and viewable. Fig. \[fig:Organization activity showcasing\] shows some representative images that could be uploaded. We were motivated by the following remark regarding this feature:\ *We cannot publicize our daily activities like schooling and other events like free health campaigns at a large scale. We do need a way to showcase our activities at a wider scale and obviously at a low cost. Posters and flyers are cost centers for us right now.* - **Smart Information Management:** Any person willing to provide voluntary services at any NGO, or who wants to become a member, can apply for membership at that specific organization after filling out information on demographics, interests, and experiences. 
The information is automatically sent to members, who can approve it, and the response is conveyed via an email back to the prospective volunteer. Administrators can control the volunteer and children management system with the necessary information as they see fit, as shown in Fig. \[fig:Information management system of volunteer and children of BONDHON\]. - **Foster Parent Management:** Children being served by an organization are now cataloged. In Bangladesh, children can be adopted by foster parents; however, such children need a different kind of tracking mechanism since they now reside in another home. In our portal, managing foster children separately from others is enabled, as seen in Fig. \[fig:Information management system of volunteer and children of BONDHON\]. One owner of an organization stated during the interview:\ *Since these children are vulnerable and can go anywhere at any time, we try to engage foster parents to take responsibility for at least one child. It is obvious that we cannot manage the whole process efficiently in many cases.* - Any citizen can view all the activities of any NGO without any restrictions or obligations. Moreover, any individual can express his/her opinion through comments on any particular post of an organization, as well as rate any organization on a scale of $1$ to $5$ (Likert scale) based on its activities. Thus, the members of an organization can get feedback from the general public, which is critical for their continued sustenance. - The general public can visit this portal to search for NGOs using zip codes, common colloquial names, or kinds of activities performed (e.g., education vs. healthcare vs. adoption, etc.), and then the organizations matching the criteria are easily retrieved. Our portal also enables donors to contribute funds to organizations. We set a limit of $10,000$ BDT (around $125$ USD) per transaction, as per legal restrictions. 
A very popular, quick, and easy-to-use mobile transaction service called ‘bKash’[^2] is used for the money transactions. It is an established system in Bangladesh for small money transfers. It requires just a cell phone for the recipient, making it easy to use. Depending on the success of our current system, a payment gateway for larger transactions will be set up soon, if requested by the stakeholders. To summarize, we have voluntarily created a web portal with the theme of **Effective Engagement** so that the general public, donors, and NGOs can stay engaged with each other. Our design attempted to integrate the critical requirements of the three NGOs we surveyed, which are to a) enable increased publicity for NGOs; b) enable volunteers to join NGOs as members; and c) create opportunities for donors to locate, engage, and perform possible fund-related activities. Next, we present the results of a phase of our deployment. \[fig:A schooling session arranged by an organization prothom akkhor founadtion\] \[fig:A schooling session arranged by an orphanage\] Deployment, Usage, and Responses -------------------------------- It took us eight weeks to develop the platform and test it to our satisfaction. Then, we contacted members of the three NGOs whom we had met earlier, as well as two new organizations named [*PROTHOM AKKHOR FOUNDATION*]{} and [*Siddiqia Bissho Islami Mission*]{} (an orphanage for Muslim children), to request that they evaluate our system. Recall that the members of [*PROTHOM AKKHOR FOUNDATION*]{}, similar to our earlier organizations, are young people with university degrees (or in the process of earning one), and were thus well versed in digital and web technologies, and in how to evaluate them. The engagement of [*Siddiqia Bissho Islami Mission*]{} was done through one of the authors on behalf of the organization, as the personnel of this organization are not that experienced from a technological perspective, as shown in Fig. 
\[fig:Schooling and campaign arranged by different orphanage and NGOs\]. The motive behind bringing [*Siddiqia Bissho Islami Mission*]{} into our system was to explore how people react to an orphanage alongside organizations for street children. Orphanages in Bangladesh generally receive some donations from the general public; therefore, we intended to explore whether having an orphanage in our system could enable such donations, and the inclusion of this orphanage can be evaluated through such donations, if any. We will present the extent of the donations and their nature later in the paper. This orphanage has around 30 children and relies fully on donations from the general public. This type of organization is also operated by a founding body. The second new organization has 20 volunteers; its primary function is to run a school among the street children of the southern part of Dhaka city. The organization has around 50 children in its school, and donations are collected from the volunteers and internal donors. All members of [*Child Care*]{}, [*Putul*]{}, and [*PROTHOM AKKHOR FOUNDATION*]{} were also eager to evaluate our system, and it was very easy to convince them once they saw a one-hour demonstration. On the other hand, it was not easy to convince [*Shishu Bikash*]{} to use our system, for reasons we will explain later. As such, members of the organizations [*Child Care*]{} and [*Putul*]{} tested our system for a period of $30$ days. We set up Google Analytics in our system to assess their engagement. For example, in October $2018$, the total number of users was $73$ ($63$ new users and $10$ returning users). The total count of *pages viewed* was $1605$; the average number of pages viewed per session was $6.30$; the average session duration was $7$ min and $38$ sec; and the bounce rate was $11.36\%$. 
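As a back-of-the-envelope reading of these figures, the session count (which we did not report directly) is implied by the ratio of total pageviews to pages per session. The short calculation below assumes the standard Google Analytics definitions of these metrics:

```python
# Reported Google Analytics figures for October 2018
total_users = 63 + 10          # new + returning users = 73
pageviews = 1605               # total count of pages viewed
pages_per_session = 6.30       # average pages viewed per session
bounce_rate = 0.1136           # fraction of single-page sessions

# Pages-per-session is defined as pageviews / sessions, so:
sessions = pageviews / pages_per_session   # roughly 255 sessions
bounced = sessions * bounce_rate           # roughly 29 single-page sessions
visits_per_user = sessions / total_users   # roughly 3.5 sessions per user

print(round(sessions), round(bounced), round(visits_per_user, 1))  # 255 29 3.5
```

In other words, the 73 users visited the portal about three and a half times each on average during the month, and only about 29 of those visits ended after a single page.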
Findings and Insights from Second Face-to-Face Interviews --------------------------------------------------------- After the $30$-day evaluation of our system, we scheduled a second round of semi-structured face-to-face interviews with the admins, founders, and volunteers of the previous three organizations, along with the two newer NGOs that participated in the evaluation. Six participants from one organization and eight from each of the other four participated in one-on-one interviews. They included the founder or admin of each organization. This time we set $20$ questions to gauge their feedback on our system for its evaluation, and we also checked whether we had correctly understood their problems from the earlier interviews. Each interview lasted around two hours. Specifically, we were interested in their suggestions on the user interface, the ease of overall use, whether their core requirements were satisfied, and whether or not they agreed that our portal eases their pains towards external engagement. The entire duration to complete all interviews was a month. Numerous notes, records, and audio clips were collected and analyzed with a thematic process by our team after the interviews. At the outset, using thematic analysis for transcribing the notes, all interviewees agreed that our proposed system can be very important in augmenting their efforts. There were only minor comments about the user interface, our dashboard design, etc. All interviewees agreed that the open nature of our system means that members of NGOs can now freely showcase their activities to the outside world. They all agreed that this approach to enhancing engagement was vital for them both to increase membership and to attract external funds more confidently and with more credibility. One concern they raised was preserving the core individualities of their organizations. 
Our discussions with them revealed that volunteers have personal views on our system that may conflict with the views of other members of the same NGO. This diversity is more prominent in the semi-micro organization, despite the common theme of working for street children. Some differences arose regarding how to handle organizational events, how to present policies to the public, how to manage donors, how to give feedback to donors, and how to enhance their campaigns with our system in place. We finally concluded that our system, while stable and useful, needs a ground-up refinement following Value Sensitive Design (VSD) [@VSD] for wide-scale usability as well as acceptability. We are now doing this by passively observing NGO meetings, health and educational camps, NGO campaign activities, donor meetings, and much more, to document related activities and re-design our system with VSD principles. Next, we present findings for future guidance in this respect. Values in Design ---------------- The stakeholders of our system, i.e., NGOs and donors, hold a common value: upgrading the lifestyle of street children. To address these values through the principles of VSD, we explored the different aspects presented in the earlier section. The challenging part here is how a system can realize all these values effectively. In the case of Western culture, the *Theory of Planned Behavior* (TPB) [@TPB] model has been employed to find the intention behind charity donations [@knowles2012]. However, in the case of developing countries, several variables are used for understanding charity donations [@rogers2018social]. Firstly, subjective norms and attitude are recognized as strong predictors of donation behaviour [@fishbein1977]. Accordingly, by putting donor recognition (with the permission of donors) in front of the public, donors should be encouraged to donate again. 
From the perspective of VSD, we have made the donation information publicly available for the respective organizations. Secondly, for injunctive and descriptive norms [@warburton2000volunteer; @mcmillan2003using], we disclose the identity of each commentator and donor, so that if someone sees people in his group making donations, he may also come forward to donate. Based on this, we introduced an “I vote" button on each post to keep donors interested. Thirdly, moral norms have also been used to predict altruistic behaviour in studies pertaining to money donation [@warburton2000volunteer; @armitage2001efficacy]. To encourage such moral norms, we have added some religious quotes to the platform of each organization, as shown in Fig. \[fig:Religious quote enclosed in black rectangle, rating options \]. Changing the Scenario --------------------- During the second round of interviews, we interviewed two new organizations (“PROTHOM AKKHOR FOUNDATION" and “Siddiqua Bissho Islami Mission"). Among these two, “PROTHOM AKKHOR FOUNDATION" represents an organization for street children, while “Siddiqua Bissho Islami Mission" represents an orphanage. Note that an orphanage has some baseline similarities with an organization for street children, as both types of organizations are charitable in nature, serving wretchedly poverty-stricken children. A difference between the two types lies in the fact that the orphanage hosts its service recipients, i.e., the orphans, whereas the organization for street children does not. These two new organizations act as validators of our design, which was an outcome of the feedback from the earlier organizations. These new organizations also agreed with the design, and we have found notable donor-organization interactions that were hardly seen previously (before the VSD iteration). 
After adopting the system, in the case of “Siddiqua Bissho Islami Mission", we received around 107k BDT (1278 USD) in donations within a month of introducing it in our system. Among the six donors, the percentages of male and female donors were 83.33% and 16.67%, respectively. Most importantly, after the VSD iteration, we were able to attract new donors for this organization. Attracting new donors indicates the success of incorporating VSD principles into the design, which eventually resulted in enhanced donations. It is noteworthy that we introduced “Siddiqua Bissho Islami Mission" before the Arabic month of Ramadan, which is treated as a holy month by Muslims (the majority in Bangladesh). Most (four out of six) of the donations we received for the organization came during the month of Ramadan. Notably, all the donations were made by Muslims. This suggests that there is some triggering effect associated with the month having religious value. We elaborate on this later in this paper in the context of HCI. Confusions and Questions about Resources, Competition, and Donors {#sec: Confusions} ================================================================= We now present additional insights into impending complexities as we move forward with our system design. For example, one organization requested an option for advisory panel management, which the others did not ask for. We incorporated it for them. However, in the case of semi-micro organizations, the situation is a bit more complex, where organizational decisions are taken after discussions among volunteers, founders, and advisory panels. Informal communications between our team and other semi-micro organizations revealed that some semi-micro NGOs are willing to accept our system as a showcasing platform after a lot of discussion and argument. Some other NGOs are still undecided as to whether or not they should use our system. 
Careful discussions revealed that as NGOs grow in member size, donor funding becomes paramount for success. This makes sense, since large organizations cannot succeed without external funds. The semi-micro organizations are worried that an integrated platform shared by multiple NGOs may decrease their chances of securing funding. One founding member of the semi-micro organization [*Shishu Bikash*]{} mentioned this point as follows: *“We appreciate the idea of an organization hub; however, our organization is yet to take a decision, since our acting body is divided about whether we should join or not. We are trying to build our own website instead. We are not sure about how people will find us, how the engagement can enhance our activities, and most importantly how we can reach more donors. If there are many organizations in your hub, then there is a sharp chance of losing our donors."* It is not too hard to understand this sentiment. In poorer countries, there is intense competition for donor support, and an avenue that can potentially decrease donor support will be viewed very negatively by NGO members. This is especially true for larger-scale organizations, where decisions take time. Hence, maintaining the status quo may sometimes be seen as a good option by members. We present some perspectives on this now. Our discussions revealed that funding for smaller-scale and independently run NGOs is geography-specific. Hence, we provide location-based services in our system wherein NGOs can identify the areas in which they operate, and donors can choose to view NGOs based on location. We expect this to alleviate some concerns regarding competition for resources. 
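The location-based matching described above can be sketched as a simple filter over organization records. The snippet below is only an illustrative Python sketch, not BONDHON’s actual Java/MySQL implementation; the zip codes and dictionary-based records are hypothetical:

```python
def search_ngos(orgs, zip_code=None, activity=None, keyword=None):
    """Filter organizations the way donors query the portal:
    by zip code, by kind of activity, or by a colloquial-name keyword."""
    results = orgs
    if zip_code:
        results = [o for o in results if o["zip_code"] == zip_code]
    if activity:
        results = [o for o in results if activity in o["activities"]]
    if keyword:
        results = [o for o in results
                   if keyword.lower() in o["name"].lower()]
    return results

# Hypothetical records for the three NGOs in our study
orgs = [
    {"name": "Putul", "zip_code": "1205", "activities": ["education", "food"]},
    {"name": "Project Child Care", "zip_code": "1207", "activities": ["education"]},
    {"name": "Shishu Bikash", "zip_code": "1205", "activities": ["healthcare"]},
]

# A donor in zip 1205 sees only the NGOs operating there
print([o["name"] for o in search_ngos(orgs, zip_code="1205")])
```

Because each filter narrows the result set independently, a donor can combine location with an activity of interest (e.g., education in zip 1205), which is exactly the scoping that we expect to reduce the perceived head-to-head competition for donors.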
Furthermore, most NGO members do agree that the ultimate goal is to improve the lives of street children, and therefore any hub that enables organizations to see and share what others are doing will be beneficial overall for learning best practices, which our system enables seamlessly. The question regarding the loss of potential donors needs some explanation, though. When making donations, citizens are often concerned about credibility and accountability. In this context, any platform that enables effective advertisement is vital for seeking funding. Donors also want to see how their money is being used before donating further, and digital platforms are well suited to satisfy this expectation. Moreover, we believe that competition breeds creativity, and NGOs that do better will be rewarded better. Furthermore, different organizations can have different focus points (e.g., education vs. healthcare vs. medicines vs. enabling adoption); therefore, when multiple organizations are showcased at the same time, different donors may see things differently and spread their donations among NGOs as they see fit. This may be a welcome motivation for NGOs to engage with our platform. One donor’s comment summarizes our perspective best: *“There are many people who want to donate, but the fact is we don’t know where to donate and you know what, the most important thing is trust. I want to know how my money is being utilized and I want accountability and credibility. Again, people want trustworthy organizations and the more volunteers work, the more they can be trusted. A public open platform can gear up organizations to do better work than another and certainly, competition for good work is not bad at all."* Our system currently provides a forum for donors to engage with NGO members: donors can ask questions, see responses, provide comments, and like or dislike activities.
In this manner, we believe that many concerns regarding donor support to NGOs are mitigated, resulting in long-lasting, meaningful, and credible interactions. While we are excited by these developments, some challenges remain, which we present next.

Impacts of Human/ Organizational Factors on Our System {#sec:Human factor}
======================================================

After deploying the system among the organizations, we encountered human and organizational factors that limit the scope of deployment in semi-micro organizations. These are stated below.

Mistrust Among NGOs in a Shared Platform
----------------------------------------

In some sense, founders of small-scale NGOs working for street children can be called “social entrepreneurs", who work to collect and maximize funds to develop, fund, and implement solutions to social, cultural, or environmental issues. As such, marketing, publicity, credibility, trust, accountability, legal aspects, a unified vision, and several other factors are vital for the sustenance of these NGOs. We found that despite having common (and noble) goals, there is an inherent sense of fear and mistrust between organizations, due to which NGOs (especially larger ones) are reluctant to participate in shared efforts. This was disappointing for us, but nevertheless understandable, since in poorer countries even a small donation does not come without significant effort. Therefore, in parallel to working for the cause of street children, NGO members need to devote time, energy, and sometimes even money to protect and preserve what they are creating and operating. Small-scale organizations are more willing to participate in our platform; however, while the ability to help street children remains their primary agenda, we sensed that members are equally excited about the growth and publicity of their organization, which will help them attract more funds.
Moreover, we wonder whether these micro organizations will remain engaged with our system should they reach a larger membership, with the increased complexity in decision making that entails. This is an open issue for us now.

Discussion {#sec: disc}
==========

After this mixed experience of success and failure in two different contexts, we seek to determine the internal and external variables behind each by conceptualizing donor and NGO behavior. During the month of Ramadan, the orphanage received a significant amount of donations. The study in [@ranganathan2008determinants] defines a path model to identify the intentions behind donor behavior. The variables of the model are [*religiosity*]{}, [*attitude towards helping others (AHO)*]{}, [*attitude towards charitable organizations (ACO)*]{}, [*attitude towards the advertisement (Attad)*]{}, and [*behavioral intentions (BI)*]{}, where [*religiosity*]{} has a direct positive impact on all other factors. The generality of the system needs to be explored by considering the social, cultural, and religious contexts of other countries. About 99% of people in Bangladesh are religious-minded; we have therefore tried to conceptualize the norms accordingly. Since Bangladesh is a Muslim-majority country, the donation rate is high during the holy month of Ramadan. In this context, our system aligns with the key features (accessibility, accountability, and interaction) of trustworthy web platforms for charity donors described in [@sargeant; @donortrust]. Therefore, micro organizations took the platform as a medium of publicity towards a wider donor community. Before Ramadan, the enlisted micro organizations attracted little attention from donors; during Ramadan, donations increased significantly.
Most importantly, the latest organization, which was not involved in our first interview, achieved this success even though we did not take any explicit requirements from them; they simply adopted the system, which indicates the validity of the design principles followed in our system. On the other hand, in the case of complex organizations, organizational ergonomics and trust issues still limit the opportunity for semi-micro organizations, even though they were one of the groups that participated in all interviews, design sessions, and feedback rounds. The initial success of the micro organizations might give them room to revive that trust. Therefore, we find that *time* is an important factor for describing donor behavior towards micro organizations in a developing-country context. In addition, complex *organizational ergonomics* and *trust* issues should be taken into account. However, in a country where religious practices hardly affect social, cultural, and behavioral norms, donor behavior might be different. Moreover, the organizational and human factors here were studied in a developing-country context; if the platform were deployed in a Western country as a solution for homeless people, the VSD design principles, as well as assumptions about donor behavior, would need to be revisited.

Conclusion and Future Work {#sec:Discussion}
==========================

Street children are among the most vulnerable communities in the world, especially in poorer countries. While local organizations (NGOs) and global organizations are attempting to address this problem in countries like Bangladesh, smaller organizations are disproportionately disadvantaged due to a lack of brand name and publicity. Creating technology-based solutions that cater to smaller-scale organizations working for the upliftment of street children is hence a critical need of the hour in poorer countries.
To this end, NGOs can be classified into two types, micro and semi-micro, according to organizational behavior. Considering this classification, we conducted two rounds of rigorous face-to-face interviews to identify the actual scenarios, and based on the responses, we designed an integrated hub for the NGOs. During the deployment and evaluation of our system, we also incorporated two new organizations (one of them an orphanage) to expose the system to more general behavior. After successful donations through the system to micro organizations, we found a positive correlation between the observed donor behavior and related studies, and discussed how organizational and human factors limit this tendency. In the future, we plan to expand the scale of our deployment by incorporating more organizations in Bangladesh. We also intend to expand beyond Bangladesh by bringing organizations from other similar countries on board. Finally, leveraging our digital hub, we plan to explore the possibility of integrating interactions of the children (served by the organizations) in response to various activities performed by the organizations. Such integration could bring the voice of the service recipients to the surface, eventually enabling 360-degree interaction among all stakeholders of the ecosystem.

[^1]: We do not specify the URL of our web portal in this paper to prevent author identification while the paper is under review.

[^2]: <https://www.bkash.com/>
--- abstract: 'Most current approaches to characterize and detect hate speech focus on *content* posted in Online Social Networks. They face difficulties in collecting and annotating hateful speech due to the incompleteness and noisiness of OSN text and the subjectivity of hate speech. These limitations are often tackled with constraints that oversimplify the problem, such as considering only tweets containing hate-related words. In this work we partially address these issues by shifting the focus towards *users*. We develop and employ a robust methodology to collect and annotate hateful users which does not depend directly on lexicon and where the users are annotated given their entire profile. This results in a sample of Twitter’s retweet graph containing $100,386$ users, out of which $4,972$ were annotated. We also collect the users who were banned in the three months that followed the data collection. We show that hateful users differ from normal ones in terms of their activity patterns, word usage, as well as network structure. We obtain similar results comparing the neighbors of hateful vs. neighbors of normal users and also suspended users vs. active users, increasing the robustness of our analysis. We observe that hateful users are densely connected, and thus formulate the hate speech detection problem as a task of semi-supervised learning over a graph, exploiting the network of connections on Twitter. We find that a node embedding algorithm, which exploits the graph structure, outperforms content-based approaches for the detection of both hateful ($95\%$ AUC vs $88\%$ AUC) and suspended users ($93\%$ AUC vs $88\%$ AUC). Altogether, we present a user-centric view of hate speech, paving the way for better detection and understanding of this relevant and challenging issue.' author: - | Manoel Horta Ribeiro, Pedro H. Calais, Yuri A. Santos, Virgílio A. F.
Almeida, Wagner Meira Jr.\ `{manoelribeiro,pcalais,yuriasantos,virgilio,meira}@dcc.ufmg.br`\ Universidade Federal de Minas Gerais\ Belo Horizonte, Minas Gerais, Brazil bibliography: - 'bibfile.bib' title: 'Characterizing and Detecting Hateful Users on Twitter[^1]' ---

Introduction {#sec:introduction}
============

The importance of understanding hate speech in Online Social Networks (OSNs) is manifold. Countries such as Germany have strict legislation against the practice [@stein1986history], the presence of such content may pose problems for advertisers [@youtubeboycott] and users [@sabatini2017online], and manually inspecting all possibly hateful content in OSNs is unfeasible [@schmidt2017survey]. Furthermore, the trade-off between banning such behavior from platforms and not censoring dissenting opinions is a major societal issue [@rainie2017future]. This scenario has motivated work that aims to understand and detect hateful content [@greevy2004classifying; @warner2012detecting; @burnap2016us] by creating representations for tweets or comments in an OSN, *e.g.* word2vec [@mikolov2013efficient], and then classifying them as hateful or not, often drawing insights on the nature of hateful speech. However, in OSNs the meaning of such content is often not self-contained, referring, for instance, to some event which just happened, and the texts are packed with informal language, spelling errors, special characters and sarcasm [@dhingra2016tweet2vec; @riloff2013sarcasm]. Besides that, hate speech itself is highly subjective, reliant on temporal, social and historical context, and occurs sparsely [@schmidt2017survey]. These problems, although observed, remain unaddressed [@davidson2017automated; @magu2017detecting]. Consider the tweet:

> *Timesup, yall getting w should have happened long ago*

This was in reply to another tweet that mentioned the Holocaust.
Although the tweet, whose author’s profile contained white-supremacy imagery, incited violence, it is hard to conceive how it could be detected as hateful with only textual features. Furthermore, the lack of hate-related words makes it difficult for this kind of tweet to be sampled. Fortunately, as we just hinted, the data in posts, tweets or messages is not the only signal we may use to study hate speech in OSNs. Most often, these signals are linked to a profile representing a person or organization. Characterizing and detecting hateful *users* shares many of the benefits of detecting hateful content and presents plenty of opportunities to explore a richer feature space. Furthermore, in a practical hate speech guideline enforcement process with humans in the loop, it is natural that user profiles will be checked rather than isolated tweets. The case can be made that this wider context is sometimes *needed* to define hate speech, as in the example above, where the abuse was made clear by the neo-nazi signs in the user’s profile. Analyzing hate on a *user-level* rather than *content-level* enables our characterization to explore not only content, but also dimensions such as the user’s activity and connections. Moreover, it allows us to use the very structure of Twitter’s network in the task of detecting hateful users [@hamilton2017representation].

![Network of $100,386$ users sampled from Twitter after our diffusion process. Red nodes indicate the proximity of users to those who employed words in our lexicon.[]{data-label="fig:hintwitter"}](./users.png){width="0.66\linewidth"}

### Present Work

In this paper we characterize and detect hateful *users* on Twitter, which we define according to Twitter’s hateful conduct guidelines. We collect a dataset of $100,386$ users along with up to $200$ tweets for each with a random-walk-based crawler on Twitter’s retweet graph.
We identify users that employed words from a hate-speech-related lexicon, and generate a subsample by selecting users at different distances from such users. These are manually annotated as hateful or not through crowdsourcing. The aforementioned distances are real-valued numbers obtained through a diffusion process in which the users who used the words in the lexicon are seeds. We create a dataset containing $4,972$ manually annotated users, of which $544$ were labeled as hateful. We also find the users that were suspended after the data collection, both before and after Twitter’s guideline changes, which happened on 18/Dec/17. Studying these users, we find significant differences between the activity patterns of hateful and normal users: hateful users tweet more frequently, follow more people each day, and their accounts are more short-lived and recent. While the media stereotype hateful individuals as “lone wolves” [@lonewolf], we find that hateful users are not in the periphery of the retweet network we sampled. Although they have fewer followers, the median for several network centrality measures in the retweet network is higher for those users. We also find that these users do not seem to behave like spammers. A lexical analysis using *Empath* [@fast2016empath] shows that their choice of vocabulary is different: words related to hate, anger and politics occur *less* often when compared to their normal counterparts, and words related to masculinity, love and curses occur more often. This is noteworthy, as much of the previous work directly employs hate-related words as a data-collection mechanism. We compare the neighborhood of hateful users with the neighborhood of normal users in the retweet graph, as well as accounts that have been suspended with those that were not.
We argue that these suspended accounts and accounts that retweeted hateful users are also proxies for hateful speech online, and the similar results found in many of the analyses performed increase the robustness of our findings. We also compare users who were banned before and after Twitter’s recent guideline change, finding an increase in the number of users banned per day, but little difference in terms of their vocabulary, activity and network structure. Finally, we find that hateful users and suspended users are very densely connected in the retweet network we sampled. Hateful users are $71$ times more likely to retweet other hateful users, and suspended users are $11$ times more likely to retweet other suspended users. This motivates us to pose the problem of detecting hate speech as a task of supervised learning over graphs. We employ a node embedding algorithm that creates a low-dimensional representation of each node in the network, and then classify those representations. We demonstrate robust performance in detecting both hateful and suspended users in this fashion ($95\%$ AUC and $93\%$ AUC) and show that this approach outperforms traditional state-of-the-art classifiers ($88\%$ AUC and $88\%$ AUC, respectively). Altogether, this work presents a user-centric view of the problem of hate speech. Our code and data are available [^2].

Background {#sec:background}
==========

### Hateful Users

We define “hateful user” and “hate speech” according to Twitter’s guidelines. For the purposes of this paper, “hate speech” is any type of content that “promotes violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease” [@twitterguidelines]. On the other hand, a “hateful user” is a user who, according to annotators, endorses this type of content.
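The detection idea summarized above (embed the nodes of a graph in a low-dimensional space, then classify the embeddings) can be illustrated with a minimal sketch. The toy graph, the spectral embedding, and the nearest-centroid classifier below are all illustrative assumptions, not the paper's actual algorithm or dataset:

```python
import numpy as np

# Toy graph with two dense communities, standing in for the densely
# connected hateful/normal users observed in the paper (all sizes and
# probabilities here are illustrative assumptions).
rng = np.random.default_rng(0)
n = 40
A = (rng.random((n, n)) < 0.05).astype(float)   # sparse background edges
A[:20, :20] = (rng.random((20, 20)) < 0.4)      # dense community 0
A[20:, 20:] = (rng.random((20, 20)) < 0.4)      # dense community 1
A = np.maximum(A, A.T)                          # symmetrize
np.fill_diagonal(A, 0)

# Node embedding: top eigenvectors of the normalized adjacency matrix
d = A.sum(axis=1)
D = np.diag(1.0 / np.sqrt(np.maximum(d, 1)))
_, vecs = np.linalg.eigh(D @ A @ D)
emb = vecs[:, -2:]                              # 2-d embedding per node

# Classify embeddings with a nearest-centroid rule trained on a few
# labeled nodes (a deliberately simple stand-in for a real classifier)
labels = np.array([0] * 20 + [1] * 20)
train = np.r_[0:5, 20:25]
c0 = emb[train[:5]].mean(axis=0)
c1 = emb[train[5:]].mean(axis=0)
pred = (np.linalg.norm(emb - c1, axis=1)
        < np.linalg.norm(emb - c0, axis=1)).astype(int)
print((pred == labels).mean())
```

Because the graph structure, not the content, drives the separation here, the sketch mirrors why a graph-based detector can outperform purely content-based features when hateful users are densely interconnected.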
### Retweet Graph

The retweet graph $G$ is a directed graph $G=(V,E)$ where each node $u \in V$ represents a user on Twitter, and each edge $(u_1,u_2)\in E$ implies that the user $u_1$ has retweeted $u_2$. Previous work suggests that retweets are better than followers for judging users’ influence [@cha2010measuring]. As influence flows in the opposite direction of retweets, we invert the graph’s edges.

### Offensive Language

We employ @waseem2017understanding’s definition of explicit abusive language, which defines it as *language that is unambiguous in its potential to be abusive, for example language that contains racial or homophobic slurs*. The use of this kind of language doesn’t imply hate speech, although there is a clear correlation [@davidson2017automated].

### Suspended Accounts

Most Twitter accounts are suspended due to spam; however, these are harder to reach in the retweet graph as they rarely get retweeted. We use Twitter’s API to find the accounts that have been suspended among the $100,386$ collected users, and use these as another source of potentially hateful behavior. We collect accounts that were suspended two months after the data collection, on 12/Dec/2017, and after Twitter’s hateful conduct guideline changes, on 14/Jan/2018. The new guidelines are allegedly stricter, considering, for instance, off-the-platform behavior.

Data Collection {#sec:data_col}
===============

Most previous work on detecting hate speech on Twitter employs a lexicon-based data collection, which involves sampling tweets that contain specific words [@davidson2017automated; @waseem2016hateful], such as `wetb*cks` or `fagg*t`. However, this methodology is biased towards very direct, textual and offensive hate speech.
It presents difficulties with statements that subtly disseminate hate with no offensive words, as in `Who convinced Muslim girls they were pretty?` [@davidson2017automated], and also with the usage of code words, as in the word `skypes`, employed to reference Jews [@magu2017detecting; @operationgoogle]. In this scenario, we propose collecting users rather than tweets, relying on lexicon only *indirectly*, and collecting the structure of these users in the social network, which we will later use to characterize and detect hate. We represent the connections among users in Twitter using the retweet network [@cha2010measuring]. Sampling the retweet network is hard, as we can only observe out-coming edges (due to API limitations), and as it is known that any unbiased in-degree estimation is impossible without sampling most of these “hidden” edges in the graph [@ribeiro2012sampling]. Acknowledging this limitation, we employ Ribeiro et al.’s Direct Unbiased Random Walk algorithm, which estimates out-degree distributions efficiently by performing random jumps in an undirected graph it constructs online [@ribeiro2010estimating]. Fortunately, in the retweet graph the out-coming edges of each user represent the other users she, usually [@guerra2017antagonism], endorses. With this strategy, we collect a sample of Twitter’s retweet graph with $100,386$ users and $2,286,592$ retweet edges along with the $200$ most recent tweets for each user, as shown in Figure \[fig:hintwitter\]. This graph is unbiased w.r.t. the out-degree distribution of nodes.

![Toy example of the diffusion process.
*(i)* We begin with the sampled retweet graph $G$; *(ii)* We revert the direction of the edges (the way influence flows), add self loops to every node, and mark the users who employed words in our lexicon; *(iii)* We iteratively update the belief of other nodes.[]{data-label="fig:all_users_nets"}](./diff.pdf){width="0.85\linewidth"}

As the sampled graph is too large to be annotated entirely, we need to select a subsample to be annotated. If we choose tweets uniformly at random, we risk having an insignificant percentage of hate speech in the subsample. On the other hand, if we choose only tweets that use obvious hate speech features, such as offensive racial slurs, we will stumble into the same problems pointed out in previous work. We propose a method between these two extremes. We:

1. Create a lexicon of words that are mostly used in the context of hate speech. This is unlike other work [@davidson2017automated] as we do not consider words that are employed in a hateful context but often used in other contexts in a harmless way (*e.g.* `n*gger`). We use $23$ words such as `holohoax`, `racial treason` and `white genocide`, handpicked from Hatebase.org [@hatebase] and ADL’s hate symbol database [@adlhate].

2. Run a diffusion process on the graph based on DeGroot’s Learning Model [@golub2010naive], assigning an initial belief $p_{i}^{0} = 1$ to each user $u_i$ who employed the words in the lexicon. This prevents our sample from being excessively small or biased towards some vocabulary.

3. Divide the users into $4$ strata according to their associated beliefs after the diffusion process, and perform a stratified sampling, obtaining up to $1500$ users per stratum.

![image](./activity.pdf){width=".90\textwidth"}

![KDEs of the creation dates of user accounts.
The white dot indicates the median and the thicker bar indicates the first and third quartiles.[]{data-label="fig:created_at"}](./created_at.pdf){width="\linewidth"}

We briefly present our diffusion model, as illustrated in Figure \[fig:all\_users\_nets\]. Let $A$ be the adjacency matrix of our retweet graph $G=(V,E)$, where each node $u \in V$ represents a user and each edge $(u,v)\in E$ represents a retweet, so that $A[u,v] = 1$ if $u$ retweeted $v$. We create a transition matrix $T$ by inverting the edges in $A$ (as influence flows from the retweeted user to the user who retweeted him or her), adding a self loop to each of the nodes, and then normalizing each row so it sums to $1$. This means each user is equally influenced by every user he or she retweets. We then associate a belief $p_{i}^{0} = 1$ with every user who employed one of the words in our lexicon, and $p_{i}^{0} = 0$ with all who didn’t. Lastly, we create new beliefs $\mathbf{p}^{t}$ using the update rule: $\mathbf{p}^{t} = T\mathbf{p}^{t-1}$. Notice that all the beliefs $p_{i}^{t}$ converge to the same value as $t \rightarrow \infty$, thus we run the diffusion process with $t=2$. With this real value ($p_{i}^{2} \in [0,1]$) associated with each user, we get 4 strata by randomly selecting up to $1500$ users with $p_{i}$ in the intervals $[0,.25)$, $[.25,.50)$, $[.50,.75)$ and $[.75,1]$. This ensures that we annotate users who didn’t employ any of the words in our lexicon, yet have a high potential to be hateful due to homophily. We annotate $4,972$ users as hateful or not using *CrowdFlower*, a crowdsourcing service.
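The diffusion step described above can be sketched in a few lines. This is a minimal illustration on a 4-user toy graph (the adjacency matrix and the seeded lexicon user are made up; the paper's graph has $100,386$ users):

```python
import numpy as np

# Toy retweet adjacency: A[u, v] = 1 means user u retweeted user v.
A = np.array([
    [0, 1, 0, 0],   # user 0 retweeted user 1
    [0, 0, 1, 0],   # user 1 retweeted user 2
    [0, 0, 0, 0],   # user 2 retweeted no one
    [0, 0, 1, 0],   # user 3 retweeted user 2
], dtype=float)

# Each user is equally influenced by itself and every user it retweets:
# add self loops, then row-normalize to obtain the transition matrix T.
T = A + np.eye(len(A))
T = T / T.sum(axis=1, keepdims=True)

# Initial beliefs: 1 for users who employed lexicon words, 0 otherwise.
p = np.array([0.0, 0.0, 1.0, 0.0])   # only user 2 used the lexicon

for _ in range(2):                   # beliefs converge as t grows,
    p = T @ p                        # so the diffusion stops at t = 2

# Stratify users by final belief for the annotation subsample
strata = np.digitize(p, [0.25, 0.50, 0.75])
print(p)        # [0.25 0.75 1.   0.75]
print(strata)   # [1 3 3 3]
```

Note that users 0, 1 and 3 never used a lexicon word, yet receive nonzero beliefs because they (directly or transitively) retweeted user 2; this is exactly what lets the stratified sample include users who are potentially hateful without matching the lexicon.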
The annotators were given the definition of hateful conduct according to Twitter’s guidelines and asked, for each user:

> *Does this account endorse content that is humiliating, derogatory or insulting towards some group of individuals (gender, religion, race) or support narratives associated with hate groups (white genocide, holocaust denial, jewish conspiracy, racial superiority)?*

Annotators were asked to consider the entire profile (limiting the tweets to the ones collected) rather than individual publications or isolated words, and were given examples of terms and codewords in ADL’s hate symbol database. Each user profile was independently annotated by $3$ annotators, and, if there was disagreement, by up to $5$ annotators. In the end, $544$ hateful users and $4,427$ normal ones were identified. The sample of the retweet network was collected between the 1st and 7th of Oct/17, and annotation began immediately after. We also obtained all users suspended up to 12/Dec/17 ($387$) and up to 14/Jan/18 ($668$).

Characterizing Hateful Users {#sec:charac}
============================

We analyze how hateful and normal users differ w.r.t. their activity, vocabulary and network centrality. We also compare the neighbors of hateful and of normal users, and suspended/active users, to reinforce our findings, as homophily suggests that the neighbors will share a lot of characteristics with annotated users, and as suspended users may have been banned because of hateful conduct [^3]. We compare those in pairs, as the sampling mechanism for each of the populations is different. We argue that each one of these pairs contains a proxy for hateful speech on Twitter, and thus inspecting the three increases the robustness of our analysis. P-values given are from unequal-variances t-tests comparing averages across distinct populations. When we refer to “hateful users”, we refer to the ones annotated as hateful.
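The unequal-variances (Welch) t-tests used for these comparisons can be reproduced with SciPy. The group sizes below match the annotation counts, but the activity values are synthetic stand-ins, not the paper's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-ins for an activity statistic (e.g. tweets per day);
# only the group sizes (544 hateful, 4427 normal) come from the paper.
hateful = rng.normal(loc=25, scale=10, size=544)
normal = rng.normal(loc=15, scale=6, size=4427)

# equal_var=False selects Welch's t-test, which does not assume the
# two populations have equal variances (or equal sample sizes).
t_stat, p_value = stats.ttest_ind(hateful, normal, equal_var=False)
print(p_value < 0.001)   # True: the synthetic difference is significant
```

Welch's variant is the appropriate choice here because the compared groups (e.g. 544 hateful vs. 4,427 normal users) differ greatly in size and need not share a variance.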
The number of users in each of these groups is given in the table below:

  Hateful   Normal   Hateful neigh.   Normal neigh.   Banned   Active
  --------- -------- ---------------- --------------- -------- ---------
  $544$     $4427$   $3471$           $33564$         $668$    $99718$

  : Number of users in each group.[]{data-label="tab:numbers"}

Hateful users have newer accounts
---------------------------------

The account creation date of users is depicted in Figure \[fig:created\_at\]. Hateful users were created later than normal ones (p-value $< 0.001$). A hypothesis for this difference is that hateful users are banned more often due to infringement of Twitter’s guidelines. This resonates with existing methods for detecting fake accounts, in which the account’s creation date has been used successfully [@viswanath2015strength]. We obtain similar results w.r.t. the 1-neighborhood of such users, where the hateful neighbors were also created more recently (p-value $< 0.001$), and also when comparing suspended and active accounts (p-value $< 0.001$).

![Boxplots for the distribution of metrics that indicate spammers. Hateful users have slightly *fewer* followers per followee, *fewer* URLs per tweet, and *fewer* hashtags per tweet.[]{data-label="fig:spam"}](./spam.pdf){width="\linewidth"}

![image](./lexical.pdf){width="\textwidth"}

![Box plots for the distribution of sentiment and subjectivity and bad-words usage.
Suspended users, hateful users and their neighborhood are more negative, and use more bad words than their counterparts.[]{data-label="fig:sent"}](./sentiment.pdf){width="\linewidth"}

![Network centrality metrics for hateful and normal users, their neighborhood, and suspended/non-suspended users calculated on the sampled retweet graph.[]{data-label="fig:betweenness"}](./betweenness.pdf){width="\linewidth"}

Hateful users are power users
-----------------------------

Other interesting metrics for analysis are the number of tweets, followers, followees and favorite tweets a user has, and the interval in seconds between their tweets. We show these statistics in Figure \[fig:attributes\]. We normalize the number of tweets, followers and followees by the number of days since the users’ account creation date. Our results suggest that hateful users are “power users” in the sense that they tweet more, in shorter intervals, favorite more tweets by other people, and follow more users (p-values $<0.01$). The analysis yields similar results when we compare the 1-neighborhood of hateful and normal users, and when comparing suspended and active accounts (p-values $<0.01$, except for the number of favorites when comparing suspended/active users, and for the average interval when comparing the neighborhoods).

Hateful users don’t behave like spammers
----------------------------------------

We investigate whether users that propagate hate speech are spammers. We analyze metrics that have been used in previous work to detect spammers, such as the number of URLs per tweet, hashtags per tweet, and followers per followee [@benevenuto2010detecting]. The boxplots of these distributions are shown in Figure \[fig:spam\]. We find that hateful users use, on average, *fewer* hashtags (p-value $< 0.001$) and *fewer* URLs (p-value $< 0.001$) per tweet than normal users.
The same analysis holds if we compare the 1-neighborhood of hateful and non-hateful users, or suspended and active users (with p-values $< 0.05$, except for the number of followers per followee, where the t-test shows no statistical significance). Additionally, we find that, on average, normal users have more followers per followee than hateful ones (p-value $< 0.05$), which also happens for their neighborhood (p-value $< 0.05$). This suggests that hateful and suspended users do not use systematic and programmatic methods to deliver their content. Notice that it is not possible to extrapolate this finding to Twitter in general, as there may be hateful users with other behaviors which our data collection methodology does not capture, since we do not specifically look for trending topics or popular hashtags.

The median hateful user is more central
---------------------------------------

We analyze different measures of centrality for users, as depicted in Figure \[fig:betweenness\]. The median hateful user is more central in all measures when compared to their normal counterparts. This is a counter-intuitive finding, as hate crimes have long been associated with “lone wolves” and anti-social people [@lonewolf]. We observe similar results when comparing the median eigenvector centrality of the neighbors of hateful and normal users, as well as of suspended and active users. In the latter pair, suspended users also have a higher median out-degree. When analyzing averages rather than medians, we observe that the average eigenvector centrality is higher on the opposite sides of the previous comparisons. This happens because some very influential users distort the value: for example, the $970$ most central users according to this metric are normal. Notice that despite this, hateful and suspended users have a higher average out-degree than normal and active users, respectively (p-value $< 0.05$).
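The eigenvector centrality compared above can be computed by simple power iteration. The adjacency matrix below is a toy illustration, not the sampled retweet graph:

```python
import numpy as np

# Toy symmetric adjacency; node 2 has the highest degree.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Power iteration: a node is central when central nodes link to it,
# so repeatedly propagating and renormalizing scores converges to the
# dominant eigenvector of the adjacency matrix.
x = np.ones(len(A))
for _ in range(100):
    x = A.T @ x
    x = x / np.linalg.norm(x)   # renormalize each step

print(np.argmax(x))   # 2: the best-connected node is the most central
```

On the real (directed) retweet graph, the transpose in the update matters: influence flows against the retweet edges, so a user is central when central users retweet her.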
Hateful users use non-trivial vocabulary
----------------------------------------

We characterize users w.r.t. their content with *Empath* [@fast2016empath], as depicted in Figure \[fig:lex\]. Hateful users use *fewer* words related to hate, anger, shame, terrorism, violence, and sadness when compared to normal users (with p-values $< 0.001$). This raises the question of how sampling tweets based exclusively on a hate-related lexicon biases the sample of content to be annotated towards a very specific type of "hate-spreading" user, and reinforces the claims that sarcasm, code-words and very specific slang play a significant role in defining such users [@davidson2017automated; @magu2017detecting]. Categories of words used more by hateful users include positive emotions, negative emotions, suffering, work, love and swearing (with p-values $< 0.001$), suggesting the use of emotional vocabulary. An interesting direction would be to analyze the sensationalism of their statements, as has been done in the context of *clickbait* [@chen2015misleading]. When we compare the neighborhood of hateful and normal users, and suspended vs. active users, we obtain very similar results (with p-values $< 0.001$, except for anger, terrorism, sadness, swearing and love when comparing suspended vs. active users). Overall, the non-triviality of the vocabulary of these groups of users reinforces the difficulties found in NLP approaches to sample, annotate and detect hate speech [@davidson2017automated; @magu2017detecting].

![Cohort-like depiction of the banning of users.[]{data-label="fig:cohor_hate"}](./cohor_hate__4_.pdf){width="1\linewidth"}

  Susp. Accounts   Hateful          Normal         Others
  ---------------- ---------------- -------------- ----------------
  2017-12-12       $9.09\%/55$      $0.32\%/14$    $0.33\%/318$
  2018-01-14       $17.64\%/96$     $0.90\%/40$    $0.55\%/532$

  : Percentage/number of accounts that got suspended before and after the guidelines changed.[]{data-label="tab:sus"}

We also explore the sentiment in the tweets users write using a corpus-based approach, as depicted in Figure \[fig:sent\]. We find that sentences written by hateful and suspended users are more negative and less subjective (p-value $<0.001$). The neighbors of hateful users in the retweet graph are also more negative (p-value $<0.001$), although not less subjective. We also analyze the distribution of profanity per tweet in hateful and non-hateful users, obtained by matching all words against Shutterstock's "List of Dirty, Naughty, Obscene, and Otherwise Bad Words" [^4]. We find that suspended users, hateful users and their neighbors employ more profane words per tweet, confirming the results of the analysis with *Empath* (p-value $< 0.01$).

  Node Type                           ($\%$)    Node Type                           ($\%$)
  ----------------------------------- --------- ----------------------------------- ---------
  hateful $\rightarrow$ hateful       $41.50$   hateful $\rightarrow$ normal        $13.10$
  normal $\rightarrow$ normal         $15.90$   normal $\rightarrow$ hateful        $2.86$
  suspended $\rightarrow$ suspended   $7.50$    suspended $\rightarrow$ active      $92.50$
  active $\rightarrow$ active         $99.35$   active $\rightarrow$ suspended      $0.65$

  : Occurrence of the edges between hateful and normal users, and between suspended and active users. Results are normalized w.r.t. the type of the source node, as in $P($source type$\rightarrow$dest type$|$source type$)$. Notice that the probabilities do not add to $1$ for hateful and normal users, as we do not present the statistics for non-annotated users.[]{data-label="tab:links"}

More users are banned after the guideline changes, but they are similar to the ones banned before
-------------------------------------------------------------------------------------------------

Twitter changed its enforcement of its hateful conduct guidelines on 18/Dec/2017. We analyze the differences among accounts that had been suspended two months after the end of the annotation, on 12/Dec/2017 and on 14/Jan/2018. The intersection between these groups and the users we annotated as hateful or not is shown in Table \[tab:sus\]. In the first period, from the end of the data annotation to 12/Dec, approximately $6.45$ users were banned per day, whereas in the second period there were $9.05$ per day. This trend, illustrated in Figure \[fig:cohor\_hate\], suggests increased banning activity. Performing the lexical analysis we previously applied to compare hateful and normal users, we find no statistically significant difference between the averages for users banned before and after the guideline change (except for government-related words, where p-value $< 0.05$). We also analyze the number of tweets, followers/followees, and the previously mentioned centrality measures, and observe no statistically significant difference between the averages or the distributions (compared using the KS-test). This suggests that Twitter has not changed the type of users it bans.

Hateful users are densely connected
-----------------------------------

Finally, we analyze the frequency at which hateful and normal users, as well as suspended and active users, interact within their own group and with each other. Table \[tab:links\] depicts the probability of a node of a given type retweeting each type of node.
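The normalization used in Table \[tab:links\] amounts to counting retweet edges by the label pair of their endpoints and dividing by the number of edges leaving each source type; a minimal sketch (the edge list and labels below are made up for illustration):

```python
from collections import Counter

# Illustrative edge list (source, dest) and per-user labels; not the paper's data.
labels = {1: "hateful", 2: "hateful", 3: "normal", 4: "normal", 5: "normal"}
edges = [(1, 2), (1, 3), (2, 1), (3, 4), (3, 5), (4, 5), (5, 3)]

# Count edges per (source label, dest label) pair and per source label.
pair_counts = Counter((labels[s], labels[d]) for s, d in edges)
source_counts = Counter(labels[s] for s, _ in edges)

# P(dest type | source type), normalized by the source node's type.
probs = {(s, d): c / source_counts[s] for (s, d), c in pair_counts.items()}
print(probs)
```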
We find that $41\%$ of the retweets of hateful users are to other hateful users, which means that they are $71$ times more likely to retweet another hateful user, considering the occurrence of hateful users in the graph. We observe a similar phenomenon with suspended users, who have $7\%$ of their retweets directed towards other suspended users. As suspended users correspond to only $0.68\%$ of the users sampled, this means they are approximately $11$ times more likely to retweet other suspended users. The high density of connections among hateful and suspended users suggests strong modularity. We exploit this, along with activity and network-centrality attributes, to robustly detect these users.

  ------------- -------------- ------------------------- ------------------------- ------------------------- ------------------------- ------------------------- -------------------------
  Model         Features       Accuracy                  F1-Score                  AUC                       Accuracy                  F1-Score                  AUC
  `GradBoost`   `user+glove`   $84.6 \pm 1.0$            $52.0 \pm 2.2$            $88.4 \pm 1.3$            $81.5 \pm 0.6$            $48.4 \pm 1.1$            $88.6 \pm 0.1$
                `glove`        $84.4 \pm 0.5$            $52.0 \pm 1.3$            $88.4 \pm 1.3$            $78.9 \pm 0.7$            $44.8 \pm 0.7$            $87.0 \pm 0.5$
  `AdaBoost`    `user+glove`   $69.1 \pm 2.4$            $37.6 \pm 2.4$            $85.5 \pm 1.4$            $70.1 \pm 0.1$            $38.3 \pm 0.9$            $84.3 \pm 0.5$
                `glove`        $69.1 \pm 2.5$            $37.6 \pm 2.4$            $85.5 \pm 1.4$            $69.7 \pm 1.0$            $37.5 \pm 0.8$            $82.7 \pm 0.1$
  `GraphSage`   `user+glove`   $\mathbf{90.9 \pm 1.1}$   $\mathbf{67.0 \pm 4.1}$   $\mathbf{95.4 \pm 0.2}$   $\mathbf{84.8 \pm 0.3}$   $\mathbf{55.8 \pm 4.0}$   $\mathbf{93.3 \pm 1.4}$
                `glove`        $90.3 \pm 1.9$            $65.9 \pm 6.2$            $94.9 \pm 2.6$            $84.5 \pm 1.0$            $54.8 \pm 1.6$            $93.3 \pm 1.5$
  ------------- -------------- ------------------------- ------------------------- ------------------------- ------------------------- ------------------------- -------------------------

Detecting Hateful Users
=======================

As we consider users and their connections in the network, we can use information that is not available to models which
operate at the granularity of individual tweets or comments to detect hate speech.

- **Activity/Network:** Features such as the number of statuses, followers, followees and favorites, and centrality measurements such as betweenness, eigenvector centrality and the in/out-degree of each node. We refer to these as `user`.

- **GloVe:** We also use spaCy's off-the-shelf 300-dimensional GloVe vectors [@pennington2014glove] as features. We average the representation across all words in a given tweet, and subsequently across all tweets a user has. We refer to these as `glove`.

Using these features, we experimentally compare two traditional machine learning models known to perform very well when the number of instances is not very large: Gradient Boosted Trees (`GradBoost`) and Adaptive Boosting (`AdaBoost`); and a model aimed specifically at learning on graphs, GraphSage [@hamilton2017inductive] (`GraphSage`). Interestingly, the latter approach is semi-supervised, and allows us to use the neighborhood of the users we are classifying even when those neighbors are not labeled, exploiting the modularity between hateful and suspended users we observed. The algorithm creates low-dimensional embeddings for nodes given associated features (unlike other node embeddings, such as `node2vec` [@grover2016node2vec]). Moreover, it is inductive, which means we do not need the entire graph to run it. For additional information on node embedding methods, refer to [@hamilton2017representation]. The GraphSage algorithm creates embeddings for each node given that the nodes have associated features (in our case the *GloVe* embeddings and the activity/network-centrality attributes associated with each user). Instead of generating embeddings for all nodes, it learns a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. This strategy exploits the structure of the graph beyond merely using the features of the neighborhood of a given node.
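To make the aggregation idea concrete, here is a minimal one-layer sketch of GraphSage-style mean aggregation in numpy. The features, neighbor lists and weight matrices are random stand-ins for illustration; in the real algorithm the weights are learned and the features would be the averaged GloVe vectors plus activity/centrality attributes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative per-node features and adjacency lists (not our data).
features = {u: rng.normal(size=4) for u in range(6)}
neighbors = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4, 5], 4: [3], 5: [3]}

W_self = rng.normal(size=(4, 3))    # learned in the real algorithm
W_neigh = rng.normal(size=(4, 3))   # learned in the real algorithm

def sage_layer(u, sample_size=2):
    """One GraphSage-style step: sample a fixed number of neighbors,
    mean-aggregate their features, combine with the node's own features,
    apply a nonlinearity, and normalize."""
    nbrs = neighbors[u][:sample_size]
    agg = np.mean([features[v] for v in nbrs], axis=0)
    h = np.concatenate([features[u] @ W_self, agg @ W_neigh])
    h = np.maximum(h, 0)  # ReLU
    norm = np.linalg.norm(h)
    return h / norm if norm > 0 else h

emb = {u: sage_layer(u) for u in neighbors}
print(emb[0].shape)
```

Because the layer is a function of a node's own and sampled neighbors' features, it can embed previously unseen nodes, which is what makes the method inductive.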
Experimental Settings
---------------------

We run the algorithms on two tasks: detecting hateful vs. normal users, as annotated via the crowdsourcing service, and detecting which users got banned. We perform 5-fold cross validation and report the F1-score, the accuracy and the area under the ROC curve (AUC) for all instances. In all approaches we accounted for the class imbalance (of approximately $1$ to $10$) in the loss function. We keep the same ratio of positive/negative classes in both tasks, which, in practice, means we used the $4981$ annotated users in the first setting (where approximately $11\%$ were hateful) and, in the second setting, selected $6680$ users from the graph, including the $668$ suspended users and other $5405$ users randomly sampled from the graph. Notice that, as we are dealing with a binary classification problem, we may control the trade-off between specificity and sensitivity by varying the positive-class threshold. In this work we simply pick the largest value, and report the resulting AUC score, which can be interpreted as the probability of a classifier correctly ranking a random positive case higher than a random negative case.

Results
-------

The results of our experiments are shown in Table \[tab:res\]. We find that the node embedding approach using both the user features and the *GloVe* embeddings yields the best results for all metrics in the two considered scenarios. The Adaptive Boosting approach yields good AUC scores, but incorrectly classifies many normal users as hateful, which results in low accuracy and F1-scores. Using the features related to users makes little difference in many settings, yielding, for example, exactly the same AUC and very similar accuracy/F1-scores in the Gradient Boosting models trained with the two feature sets.
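The evaluation protocol above (5-fold cross validation with class-imbalance correction in the loss) can be sketched with scikit-learn. The synthetic data below is a stand-in for the real user feature matrix, and the per-sample weights implement the imbalance correction:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

# Synthetic stand-in with roughly 1:10 class imbalance, as in the annotated data.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

aucs = []
for train, test in StratifiedKFold(n_splits=5, shuffle=True,
                                   random_state=0).split(X, y):
    clf = GradientBoostingClassifier(random_state=0)
    # Upweight the minority class so the loss accounts for the imbalance.
    ratio = (y[train] == 0).sum() / (y[train] == 1).sum()
    w = np.where(y[train] == 1, ratio, 1.0)
    clf.fit(X[train], y[train], sample_weight=w)
    aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))

print(f"AUC: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```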
However, the usage of the retweet network yields promising results, especially because we observe improvements in both the detection of hateful users and of suspended users, which shows that the performance improvement occurs independently of our annotation process.

Related Work {#sec:related}
============

We review previous efforts to characterize and detect hate speech in OSNs. Tangential problems such as cyber-bullying and offensive language are not extensively covered; see @schmidt2017survey.

Characterizing Hate
-------------------

Hate speech has been characterized on websites and in different Online Social Networks. @gerstenfeld2003hate analyze hateful websites, characterizing their *modus operandi* w.r.t. monetization, recruitment, and international appeal. @chau2007mining identified and analyzed how hate groups organize around blogs. @silva2016analyzing match regex-like expressions on large datasets from Twitter and Whisper to characterize the targets of hate in Online Social Networks. @chatzakou2017hate characterized users and their tweets in the specific context surrounding the \#GamerGate controversy. More generally, abuse online has also been characterized on Community-Based Question Answering sites [@kayes2015social] and on Ask.fm [@van2015automatic].

Detecting Hate
--------------

We briefly go through the different steps carried out by previous work on the task of detecting hate speech, analyzing the similarities and differences to this work.

### Data Collection {#data-collection}

Many previous studies collect data by sampling OSNs with the aid of a lexicon of terms associated with hate speech [@davidson2017automated; @waseem2016hateful; @burnap2016us; @magu2017detecting]. This may be followed by expanding the lexicon with co-occurring terms [@waseem2016hateful]. Other techniques employed include matching regular expressions [@warner2012detecting] and selecting features in tweets from users known to have reproduced hate speech [@kwok2013locate].
We employ a random-walk-based methodology to obtain a generic sample of Twitter's retweet graph, use a lexicon of hateful words to obtain a subsample of potentially hateful users, and then select users to annotate at different "distances" from these users, which we obtain through a diffusion process. This allows us not to rely directly on a lexicon to obtain the sample to be annotated.

### Annotation

Human annotators are used in most previous work on hate speech detection. The labeling may be done by the researchers themselves [@waseem2016hateful; @kwok2013locate; @djuric2015hate; @magu2017detecting], selected annotators [@warner2012detecting; @gitari2015lexicon], or crowdsourcing services [@burnap2016us]. Hate speech has been pointed out as a difficult subject to annotate [@waseem2016you; @ross2017measuring]. @chatzakou2017mean annotate tweets in sessions, clustering several tweets to help annotators get a grasp of the context. We also employ *CrowdFlower* to annotate our data. Unlike previous work, we give annotators the entire profile of a user, instead of individual tweets. We argue this provides better context for the annotators [@waseem2017understanding]. The extent to which additional context improves annotation quality is a promising research direction.

### Features

Features used in previous work are almost exclusively content-related. The content of tweets, posts and websites has been represented as n-grams, BoWs [@waseem2016hateful; @kwok2013locate; @magu2017detecting; @greevy2004classifying; @gitari2015lexicon], and word embeddings such as *paragraph2vec* [@djuric2015hate], *GloVe* [@pennington2014glove] and *FastText* [@badjatiya2017deep]. Other techniques used to extract features from content include POS tagging, sentiment analysis and ease-of-reading measures [@warner2012detecting; @davidson2017automated; @burnap2016us; @gitari2015lexicon].
@waseem2016hateful employ features not related to the content itself, using the gender and the location of the creator of the content. We use attributes related to users' activity, their network centrality and the content they produced in our characterization and detection. In the context of detecting aggression and cyber-bullying on Twitter, @chatzakou2017mean employ a similar set of features to ours.

### Models

Models used to classify these features in the existing literature include supervised classification methods such as Naïve Bayes [@kwok2013locate], Logistic Regression [@waseem2016hateful; @davidson2017automated; @djuric2015hate], Support Vector Machines [@warner2012detecting; @burnap2016us; @magu2017detecting], Rule-Based Classifiers [@gitari2015lexicon], Random Forests [@burnap2016us], Gradient-Boosted Decision Trees [@badjatiya2017deep] and Deep Neural Networks [@badjatiya2017deep]. We use Gradient-Boosted Decision Trees, Adaptive Boosting and a semi-supervised node embedding approach (GraphSage). Our experiments show that the latter performs significantly better. A possible explanation is that hateful users retweet other hateful users very often, which makes exploiting the network structure beneficial.

Conclusion and Discussion
=========================

We present an approach to characterize and detect hate speech on Twitter at a user-level granularity. Our methodology differs from previous efforts, which focused on isolated pieces of content, such as tweets and comments [@greevy2004classifying; @warner2012detecting; @burnap2016us]. We developed a methodology to sample Twitter which consists of obtaining a generic subgraph, finding users who employed words in a lexicon of hate-related words, and running a diffusion process based on DeGroot's learning model to sample users in the neighborhood of these users.
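The DeGroot-style diffusion used in our sampling step amounts to repeated neighbor averaging of "beliefs"; a toy sketch (the adjacency matrix is made up for illustration):

```python
import numpy as np

# Toy DeGroot-style belief diffusion: users who matched the hate lexicon
# start with belief 1, everyone else 0, and beliefs are repeatedly
# averaged over each user's neighbors.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
T = A / A.sum(axis=1, keepdims=True)      # row-stochastic update matrix
belief = np.array([1.0, 0.0, 0.0, 0.0])   # user 0 matched the lexicon

# A few iterations keep beliefs graded by proximity to the seed users;
# running to convergence would wash out these differences.
for _ in range(4):
    belief = T @ belief

print(np.round(belief, 3))
```

The resulting belief values give the "distances" at which users were selected for annotation.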
We then used *CrowdFlower*, a crowdsourcing service, to manually annotate $4,988$ users, of which $544$ ($11\%$) were considered to be hateful. We argue that this methodology addresses two shortcomings of existing work: it allows the researcher to balance between having a generic sample and a sample biased towards a set of words in a lexicon, and it provides annotators with realistic context, which is sometimes necessary to identify hateful speech. Our findings shed light on how hateful users differ from normal ones with respect to their activity patterns, network centrality measurements, and the content they produce. We discover that hateful users have created their accounts more recently and write more negative sentences. They use words associated with categories such as hate, terrorism, violence and anger *less* than normal users, and words in categories such as love, work and masculinity *more* frequently. We also find that the median hateful user is more central and that hateful users are densely connected in the retweet network. The latter finding motivates the use of an inductive graph embedding approach to detect hateful users, which outperforms widely used algorithms such as Gradient Boosted Trees. As moderation of Online Social Networks in many cases analyzes users, characterizing and detecting hate at a user-level granularity is an essential step towards creating workflows where humans and machines can interact to ensure OSNs obey legislation, and to provide a better experience for the average user. Nevertheless, our approach still has limitations that may lead to interesting future research directions. Firstly, our characterization only considered the behavior of users on Twitter, and the same scenario in other Online Social Networks such as Instagram or Facebook may present different challenges.
Secondly, although classifying hateful users provides contextual clues that are not available when looking only at a piece of content, it is still a non-trivial task, as hate speech is subjective and people can disagree about what is hateful or not. In that sense, an interesting direction would be to create mechanisms of consensus, where online communities could help moderate their content in a more decentralized fashion (as in Wikipedia [@shi2017wisdom]). Lastly, a research question in the context of detecting hate speech at a user-level granularity that this work does not address is *how much hateful content comes from how many users*. This is particularly important because, if we have a Pareto-like distribution where most of the hate is generated by very few users, then analyzing hateful users rather than content becomes even more attractive. An interesting debate which may arise when shifting the focus of hate speech detection from content to users is how this can potentially blur the line between individuals and their speech. Twitter, for instance, implied it will consider conduct occurring "off the platform" in making suspension decisions [@slate]. In this scenario, approaching the hate speech detection problem as we propose could allow users to be suspended due to "contextual" factors, and not for a specific piece of content they wrote. However, as mentioned previously, such models can be used as a first step to detect these users, who would then be assessed by humans or other more specific methods. The broader question this brings is to what extent a "black-box" model may be used to aid in tasks such as content moderation, where the model may contain accidental or intentional bias. Such models could be used to moderate Online Social Networks without the supervision of a human, in which case their bias could be very damaging towards certain groups, even leading to possible suppression of individuals' human rights, notably the right to free speech.
Another option would be to make a clear distinction between using the model to detect possibly hateful or inadequate content and delegating the task of moderation exclusively to a human. Although there are many shades of gray between these two approaches, an important research direction is how to make the automated parts of the moderation process fair, accountable and transparent, which is hard to achieve even for content-based approaches. Acknowledgments. {#acknowledgments. .unnumbered} ================ We would like to thank Nikki Bourassa, Ryan Budish, Amar Ashar and Robert Faris from the BKC for their insightful suggestions. This work was partially supported by CNPq, CAPES and Fapemig, as well as projects InWeb, INCT-Cyber, MASWEB, BigSea and Atmosphere. [^1]: This is an extended version of the homonymous short paper to be presented at ICWSM-18. [^2]: https://github.com/manoelhortaribeiro/HatefulUsersTwitter [^3]: We use suspended and banned interchangeably. [^4]: [https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words]{}
--- author: - 'Mia Hubert[^1], Michiel Debruyne[^2], Peter J. Rousseeuw' title: Minimum Covariance Determinant and Extensions ---

### Abstract {#abstract .unnumbered}

The Minimum Covariance Determinant (MCD) method is a highly robust estimator of multivariate location and scatter, for which a fast algorithm is available. Since estimating the covariance matrix is the cornerstone of many multivariate statistical methods, the MCD is an important building block when developing robust multivariate techniques. It also serves as a convenient and efficient tool for outlier detection. The MCD estimator is reviewed, along with its main properties such as affine equivariance, breakdown value, and influence function. We discuss its computation, and list applications and extensions of the MCD in applied and methodological multivariate statistics. Two recent extensions of the MCD are described. The first one is a fast deterministic algorithm which inherits the robustness of the MCD while being almost affine equivariant. The second is tailored to high-dimensional data, possibly with more dimensions than cases, and incorporates regularization to prevent singular matrices.

INTRODUCTION {#introduction .unnumbered}
============

The Minimum Covariance Determinant (MCD) estimator is one of the first affine equivariant and highly robust estimators of multivariate location and scatter [@Rousseeuw:LMS; @Rousseeuw:MVE]. Being resistant to outlying observations makes the MCD very useful for outlier detection. Although already introduced in 1984, its main use only began after the construction of the computationally efficient FastMCD algorithm of [@Rousseeuw:FastMCD] in 1999. Since then, the MCD has been applied in numerous fields such as medicine, finance, image analysis and chemistry. Moreover, the MCD has been used to develop many robust multivariate techniques, among them robust principal component analysis, factor analysis and multiple regression.
Recent modifications of the MCD include a deterministic algorithm and a regularized version for high-dimensional data. DESCRIPTION OF THE MCD ESTIMATOR {#description-of-the-mcd-estimator .unnumbered} ================================ Motivation {#motivation .unnumbered} ---------- In the multivariate location and scatter setting the data are stored in an $n \times p$ data matrix $\bX=(\bx_1,\ldots,\bx_n)'$ with $\bx_i=(x_{i1},\ldots,x_{ip})'$ the $i$-th observation, so $n$ stands for the number of objects and $p$ for the number of variables. We assume that the observations are sampled from an elliptically symmetric unimodal distribution with unknown parameters $\bmu$ and $\bSigma$, where $\bmu$ is a vector with $p$ components and $\bSigma$ is a positive definite $p \times p$ matrix. To be precise, a multivariate distribution is called elliptically symmetric and unimodal if there exists a strictly decreasing real function $g$ such that the density can be written in the form $$\begin{aligned} \label{eq:ell} f(\bx)=\frac{1}{\sqrt{|\bSigma|}}\; g(d^2(\bx,\bmu,\bSigma))\end{aligned}$$ in which the *statistical distance* $d(\bx,\bmu,\bSigma)$ is given by $$\begin{aligned} \label{eq:d} d(\bx,\bmu,\bSigma)= \sqrt{(\bx-\bmu)'\bSigma^{-1}(\bx-\bmu)}\ \ .\end{aligned}$$ To illustrate the MCD, we first consider the wine data set available in [@Hettich:UCIKDDArchive] and also analyzed in [@Maronna:RobStat]. This data set contains the quantities of 13 constituents found in three types of Italian wines. We consider the first group containing 59 wines, and focus on the constituents ‘Malic acid’ and ‘Proline’. This yields a bivariate data set, i.e. $p=2$. A scatter plot of the data is shown in Figure \[fig:tolellipswine\], in which we see that the points on the lower right hand side of the plot are outlying relative to the majority of the data. 
![Bivariate wine data: tolerance ellipse of the classical mean and covariance matrix (red), and that of the robust location and scatter matrix (blue).[]{data-label="fig:tolellipswine"}](wine_tolellipse_cropped.pdf) In the figure we see two ellipses. The classical tolerance ellipse is defined as the set of $p$-dimensional points $\bx$ whose [*Mahalanobis distance*]{} $$\text{MD}(\bx) = d(\bx,\bar{\bx},\mbox{Cov}(X)) = \sqrt{(\bx -\bar{\bx})' \mbox{Cov}(X)^{-1} (\bx -\bar{\bx})} \label{eq:md}$$ equals $\sqrt{\chi^2_{p,0.975}}$. Here $\bar{\bx}$ is the sample mean and $\mbox{Cov}(X)$ the sample covariance matrix. The Mahalanobis distance $\text{MD}(\boldsymbol{x}_i)$ should tell us how far away $\boldsymbol{x}_i$ is from the center of the data cloud, relative to its size and shape. In Figure \[fig:tolellipswine\] we see that the red tolerance ellipse tries to encompass all observations. Therefore none of the Mahalanobis distances is exceptionally large, as we can see in Figure \[fig:wined\](a). Based on Figure \[fig:wined\](a) alone we would say there are only three mild outliers in the data (we ignore borderline cases). On the other hand, the robust tolerance ellipse is based on the robust distances $$\text{RD}(\bx) = d(\bx, \hbmu_{MCD}, \hat{\bSigma}_{MCD}) \label{eq:rd}$$ where $\hbmu_{MCD}$ is the MCD estimate of location and $\hat{\bSigma}_{MCD}$ is the MCD covariance estimate, which we will explain soon. In Figure \[fig:tolellipswine\] we see that the robust ellipse (in blue) is much smaller and only encloses the regular data points. The robust distances shown in Figure \[fig:wined\](b) now clearly expose 8 outliers. 
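The comparison of classical and robust distances can be reproduced with scikit-learn's `MinCovDet`, an implementation of FastMCD that includes a reweighting step. The bivariate data below is synthetic, mimicking the structure of the wine example (a main cloud plus a cluster of outliers):

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet, EmpiricalCovariance

rng = np.random.default_rng(1)

# Synthetic bivariate data: 50 regular points and 9 clustered outliers.
X = np.vstack([rng.normal(0, 1, size=(50, 2)),
               rng.normal([6, -6], 0.5, size=(9, 2))])

classical = EmpiricalCovariance().fit(X)
robust = MinCovDet(support_fraction=0.75, random_state=0).fit(X)

# Note: .mahalanobis() returns *squared* distances in scikit-learn.
md = np.sqrt(classical.mahalanobis(X))
rd = np.sqrt(robust.mahalanobis(X))

cutoff = np.sqrt(chi2.ppf(0.975, df=X.shape[1]))
print("flagged by classical distances:", int((md > cutoff).sum()))
print("flagged by robust distances:   ", int((rd > cutoff).sum()))
```

Because the classical mean and covariance are pulled toward the outlying cluster, the robust distances flag at least as many of the planted outliers as the Mahalanobis distances do, illustrating the masking effect discussed next.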
![(a) Mahalanobis distances and (b) robust distances for the bivariate wine data.[]{data-label="fig:wined"}](wine_mah_cropped.pdf "fig:") ![(a) Mahalanobis distances and (b) robust distances for the bivariate wine data.[]{data-label="fig:wined"}](wine_robdist_cropped.pdf "fig:")

This illustrates the *masking effect*: the classical estimates can be so strongly affected by contamination that diagnostic tools such as the Mahalanobis distances become unable to detect the outliers. To avoid masking we instead need reliable estimators that can resist outliers when they occur. The MCD is such a robust estimator.

Definition {#definition .unnumbered}
----------

The raw Minimum Covariance Determinant (MCD) estimator with tuning constant $n/2\leqslant h\leqslant n$ is $(\hat{\bmu}_0,\hat{\bSigma}_0)$ where

1. the location estimate $\hat{\bmu}_0$ is the mean of the $h$ observations for which the determinant of the sample covariance matrix is as small as possible;

2. the scatter matrix estimate $\hat{\bSigma}_0$ is the corresponding covariance matrix multiplied by a consistency factor $c_0$.

Note that the MCD estimator can only be computed when $h > p$, since otherwise the covariance matrix of any $h$-subset has determinant zero, so we need at least $n > 2p$. To avoid excessive noise it is however recommended that $n > 5p$, so that we have at least 5 observations per dimension.
(When this condition is not satisfied one can instead use the MRCD method described near the end of this article.) To obtain consistency at the normal distribution, the consistency factor $c_0$ equals $\alpha/F_{\chi^2_{p+2}}(q_\alpha)$ with $\alpha=\lim_{n \to \infty} h(n)/n$, and $q_\alpha$ the $\alpha$-quantile of the $\chi^2_p$ distribution [@Croux:IFMCD]. Also a finite-sample correction factor can be incorporated [@Pison:Corfac]. Consistency of the raw MCD estimator of location and scatter at elliptical models, as well as asymptotic normality of the MCD location estimator has been proved in [@Butler:AsymptMCD]. Consistency and asymptotic normality of the MCD covariance matrix at a broader class of distributions is derived in [@Cator:AsympMCD; @Cator:Infl]. The MCD estimator is the most robust when taking $h=[(n+p+1)/2]$ where $[a]$ is the largest integer $\leqslant a$. At the population level this corresponds to $\alpha=0.5$. But unfortunately the MCD then suffers from low efficiency at the normal model. For example, if $\alpha=0.5$ the asymptotic relative efficiency of the diagonal elements of the MCD scatter matrix relative to the sample covariance matrix is only 6% when $p=2$, and 20.5% when $p=10$. This efficiency can be increased by considering a higher $\alpha$ such as $\alpha=0.75$. This yields relative efficiencies of 26.2% for $p=2$ and 45.9% for $p=10$ (see [@Croux:IFMCD]). On the other hand this choice of $\alpha$ diminishes the robustness to possible outliers. In order to increase the efficiency while retaining high robustness one can apply a weighting step [@Lopuhaa:BDP; @Lopuhaa:RewEst]. 
For the MCD this yields the estimates $$\begin{aligned} \label{eq:rewMCD} \begin{split} \hat{\bmu}_{MCD} & = \frac{\sum_{i=1}^nW(d_i^2) \bx_i}{\sum_{i=1}^nW(d^2_i)}\\ \hat{\bSigma}_{MCD} & = c_1 \frac{1}{n} \sum_{i=1}^nW(d_i^2)(\bx_i-\hat{\bmu}_{MCD}) (\bx_i-\hat{\bmu}_{MCD})' \end{split}\end{aligned}$$ with $d_i=d(\bx_i,\hbmu_0,\hat{\bSigma}_0)$ and $W$ an appropriate weight function. The constant $c_1$ is again a consistency factor. A simple yet effective choice for $W$ is to set it to 1 when the robust distance is below the cutoff $\sqrt{\chi^2_{p,0.975}}$ and to zero otherwise, that is, $W(d^2)=I(d^2 \leqslant \chi^2_{p,0.975})$. This is the default choice in the current implementations in R, SAS, Matlab and S-PLUS. If we take $\alpha=0.5$ this weighting step increases the efficiency to 45.5% for $p=2$ and to 82% for $p=10$. In the example of the wine data (Figure \[fig:tolellipswine\]) we applied the weighted MCD estimator with $\alpha=0.75$, but the results were similar for smaller values of $\alpha$.

Note that one can construct a robust correlation matrix from the MCD scatter matrix. The robust correlation between variables $X_i$ and $X_j$ is given by $$r_{ij} = \frac{s_{ij}}{\sqrt{s_{ii} s_{jj}}}$$ with $s_{ij}$ the $(i,j)$-th element of the MCD scatter matrix. In Figure \[fig:tolellipswine\] the MCD-based robust correlation is $0.10 \approx 0$ because the majority of the data do not show a trend, whereas the classical correlation of $-0.37$ was caused by the outliers in the lower right part of the plot.

Outlier detection {#outlier-detection .unnumbered}
-----------------

As already illustrated in Figure \[fig:wined\], the robust MCD estimator is very useful for detecting outliers in multivariate data. As the robust distances are not sensitive to the masking effect, they can be used to flag the outliers [@Rousseeuw:Diagnostic; @Cerioli:outliers]. This is crucial for data sets in more than three dimensions, which are difficult to visualize.
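The asymptotic consistency factor $c_0 = \alpha/F_{\chi^2_{p+2}}(q_\alpha)$ given earlier is easy to evaluate numerically; a small scipy sketch (the function name is ours):

```python
from scipy.stats import chi2

def mcd_consistency_factor(alpha, p):
    """Asymptotic consistency factor c0 = alpha / F_{chi2_{p+2}}(q_alpha),
    where q_alpha is the alpha-quantile of the chi2_p distribution."""
    q = chi2.ppf(alpha, df=p)
    return alpha / chi2.cdf(q, df=p + 2)

# Inflation applied to the raw h-subset covariance, e.g. for p = 2:
print(mcd_consistency_factor(0.5, 2))   # maximal-breakdown choice, ~3.26
print(mcd_consistency_factor(0.75, 2))  # higher-efficiency choice
```

As $\alpha \to 1$ the factor tends to 1, since the raw estimator then uses (almost) all observations and no inflation is needed.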
We illustrate the outlier detection potential of the MCD on the full wine data set, with all $p=13$ variables. The *distance-distance plot* of [@Rousseeuw:FastMCD] in Figure \[fig:winedd\] shows the robust distances based on the MCD versus the classical Mahalanobis distances. From the robust analysis we see that seven observations clearly stand out (plus some mild outliers), whereas the classical analysis does not flag any of them. ![Distance-distance plot of the full 13-dimensional wine data set.[]{data-label="fig:winedd"}](wine_dd_cropped.pdf) Note that the cutoff value $\sqrt{\chi^2_{p,0.975}}$ is based on the asymptotic distribution of the robust distances, and often flags too many observations as outlying. For relatively small $n$ the true distribution of the robust distances can be better approximated by an $F$-distribution, see [@Hardin:RD]. PROPERTIES {#properties .unnumbered} ========== Affine equivariance {#affine-equivariance .unnumbered} ------------------- The MCD estimator of location and scatter is *affine equivariant*. This means that for any nonsingular $p \times p$ matrix $\bA$ and any $p$-dimensional column vector $\bb$ it holds that $$\begin{aligned} \label{eq:affine} \hat{\bmu}_{MCD}(\bX \bA'+\mathbf{1}_n \bb')& =\hat{\bmu}_{MCD}(\bX)\bA'+\bb \\ \hat{\bSigma}_{MCD}(\bX \bA'+\mathbf{1}_n \bb')& =\bA\hat{\bSigma}_{MCD}(\bX)\bA'\end{aligned}$$ where the vector $\mathbf{1}_n$ is $(1,1,\dots,1)'$ with $n$ elements. This property follows from the fact that for each subset $H$ of $\{1,2,\ldots,n\}$ of size $h\,$ and corresponding data set $\bX_H\,$, the determinant of the covariance matrix of the transformed data equals $$|\bS(\bX_H \bA')|= |\bA \bS(\bX_H) \bA'| = |\bA|^2 |\bS(\bX_H)|.$$ Therefore, transforming an $h$-subset with lowest determinant yields an $h$-subset $\bX_H \bA'$ with lowest determinant among all $h$-subsets of the transformed data set $\bX \bA'\,$, and its covariance matrix is transformed appropriately.
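The determinant identity in this argument is easy to verify numerically. Below is a small sketch assuming numpy, with an arbitrary invented nonsingular matrix `A` playing the role of the affine transformation.

```python
import numpy as np

rng = np.random.default_rng(2)
XH = rng.standard_normal((40, 3))          # an h-subset of a data set, rows = observations
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])            # any nonsingular p x p matrix (det = 5 here)

S = np.cov(XH, rowvar=False)               # S(X_H)
S_transformed = np.cov(XH @ A.T, rowvar=False)   # S(X_H A')

lhs = np.linalg.det(S_transformed)
rhs = np.linalg.det(A) ** 2 * np.linalg.det(S)   # |S(X_H A')| = |A|^2 |S(X_H)|
```

Since $|\bA|^2$ is the same positive factor for every $h$-subset, the ordering of the determinants, and hence the optimal subset, is unchanged by the transformation.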
The affine equivariance of the raw MCD location estimator follows from the equivariance of the sample mean. Finally we note that the robust distances $d_i=d(\bx_i,\hbmu_0,\hat{\bSigma}_{0})$ are affine *invariant*, meaning they stay the same after transforming the data, which implies that the weighted estimator is affine equivariant too. Affine equivariance implies that the estimator transforms well under any non-singular reparametrization of the space in which the $\bx_i$ live. Consequently, the data may be rotated, translated or rescaled (for example through a change of measurement units) without affecting the outlier detection diagnostics. The MCD is one of the first high-breakdown affine equivariant estimators of location and scatter; it was preceded only by the Stahel-Donoho estimator [@Stahel:SDest; @Donoho:Depth]. The Minimum Volume Ellipsoid estimator was introduced together with the MCD [@Rousseeuw:LMS; @Rousseeuw:MVE]; it is equally robust but not asymptotically normal, and is harder to compute than the MCD. Breakdown value {#breakdown-value .unnumbered} --------------- The breakdown value of an estimator is the smallest fraction of observations that need to be replaced (by arbitrary values) to make the estimate useless. For a multivariate [*location*]{} estimator $T_n$ the breakdown value is defined as $$\begin{aligned} \label{eq:fsbp} \varepsilon^*_n(T_n;\bX_n)=\frac{1}{n} \min\left\{m: \sup ||T_n(\bX_{n,m}) - T_n(\bX_n)|| = +\infty\right\}\end{aligned}$$ where $1 \leqslant m \leqslant n$ and the supremum is over all data sets $\bX_{n,m}$ obtained by replacing any $m$ data points $\bx_{i_1},\hdots,\bx_{i_m}$ of $\bX_n$ by arbitrary points.
For a multivariate [*scatter*]{} estimator $C_n$ we set $$\begin{aligned} \varepsilon^*_n(C_n;\bX_n)=\frac{1}{n} \min\{ m:\sup\; \max_i|\log(\lambda_i(C_n(\bX_{n,m})))- \log(\lambda_i(C_n(\bX_n)))| = +\infty \}\end{aligned}$$ with $\lambda_1(C_n) \geqslant \hdots \geqslant \lambda_p(C_n) > 0$ the eigenvalues of $C_n\,$. This means that we consider a scatter estimator to be broken when $\lambda_1$ can become arbitrarily large (‘explosion’) and/or $\lambda_p$ can become arbitrarily close to $0$ (‘implosion’). Implosion is a problem because it makes the scatter matrix singular, whereas in many situations its inverse is required, e.g. in the statistical distances. Let $k(\bX_n)$ denote the highest number of observations in the data set that lie on an affine hyperplane in $p$-dimensional space, and assume $k(\bX_n) < h$. Then the raw MCD estimator of location and scatter satisfies [@Roelant:MWCD] $$\varepsilon^*_n(\hat{\bmu}_0;\bX_n)= \varepsilon^*_n(\hat{\bSigma}_0;\bX_n)= \frac{\min(n-h+1,h-k(\bX_n))}{n}\;\;.$$ If the data are sampled from a continuous distribution, then almost surely $k(\bX_n) = p$, which is called *general position*. Then $\varepsilon^*_n(\hat{\bmu}_0;\bX_n)= \varepsilon^*_n(\hat{\bSigma}_0;\bX_n)= \min(n - h + 1, h - p)/n$, and consequently any $[(n + p)/2] \leqslant h \leqslant [(n + p + 1)/2]$ gives the breakdown value $[(n - p + 1)/2]/n$. This is the highest possible breakdown value for affine equivariant scatter estimators [@Davies:AsymptSest] at data sets in general position. Also for affine equivariant location estimators the upper bound on the breakdown value is $[(n - p + 1)/2]/n$ under natural regularity conditions [@Rousseeuw:Discussion]. Note that in the limit $\lim_{n \to \infty} \varepsilon^*_n = \min(1-\alpha,\alpha)$, which is maximal for $\alpha=0.5$.
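At a data set in general position (so $k(\bX_n)=p$) the finite-sample breakdown value above is a one-line function. A quick numerical check, with `n` and `p` chosen arbitrarily for illustration:

```python
def mcd_breakdown(n, h, p):
    """Finite-sample breakdown value min(n-h+1, h-p)/n of the raw MCD
    at a data set in general position (k(X_n) = p)."""
    return min(n - h + 1, h - p) / n

n, p = 100, 5
h_star = (n + p + 1) // 2                   # h = [(n+p+1)/2] maximizes robustness
best = max(mcd_breakdown(n, h, p) for h in range(p + 1, n + 1))
# best == mcd_breakdown(n, h_star, p) == [(n-p+1)/2]/n == 0.48 for this n and p
```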
Finally we note that the breakdown value of the weighted MCD estimators $\hat{\bmu}_{MCD}$ and $\hat{\bSigma}_{MCD}$ is not lower than the breakdown value of the raw MCD estimator, as long as the weight function $W$ used in \[eq:rewMCD\] is bounded and becomes zero for large $d_i\,$, see [@Lopuhaa:BDP]. Influence function {#influence-function .unnumbered} ------------------ The influence function [@Hampel:IFapproach] of an estimator measures the effect of a small (infinitesimal) fraction of outliers placed at a given point. It is defined at the population level, hence it requires the functional form of the estimator $T$, which maps a distribution $F$ to a value $T(F)$ in the parameter space. For multivariate location this parameter space is $\rz^p$, whereas for multivariate scatter the parameter space is the set of all positive semidefinite $p \times p$ matrices. The influence function of the estimator $T$ at the distribution $F$ in a point $\bx$ is then defined as $$IF(\bx,T,F) = \lim_{\varepsilon \to 0} \frac{T(F_\varepsilon)-T(F)}{\varepsilon} \label{eq:IF}$$ with $F_\varepsilon=(1-\varepsilon)F+ \varepsilon \Delta_x$ a contaminated distribution with point mass at $\bx$. The influence function of the raw and the weighted MCD has been computed in [@Croux:IFMCD; @Cator:Infl] and turns out to be bounded. This is a desirable property for robust estimators, as it limits the effect of a small fraction of outliers on the estimate. At the standard multivariate normal distribution, the influence function of the MCD location estimator becomes zero for all $\bx$ with $\|\bx\|^2 > \chi^2_{p,\alpha}$, hence far outliers do not influence the estimates at all. The same happens with the off-diagonal elements of the MCD scatter estimator. On the other hand, the influence function of the diagonal elements remains constant (different from zero) when $\|\bx\|^2$ is sufficiently large, so the outliers still have a bounded influence on the estimator.
All these influence functions are smooth, except at those $\bx$ with $\|\bx\|^2 = \chi^2_{p,\alpha}\,$. The weighted MCD estimator has an additional jump at $\|\bx\|^2 = \chi^2_{p,0.975}$ due to the discontinuity of the weight function, but one could use a smooth weight function instead. Univariate MCD {#univariate-mcd .unnumbered} -------------- For univariate data $x_1,\ldots,x_n$ the MCD estimates reduce to the mean and the standard deviation of the $h$-subset with smallest variance. They can be computed in O($n \log n$) time by sorting the observations and only considering contiguous $h$-subsets, so that their means and variances can be calculated recursively [@Rousseeuw:RobReg]. Their consistency and asymptotic normality is proved in [@Butler:uniMCD; @Rousseeuw:MVE]. For $h=[n/2]+1$ the MCD location estimator has breakdown value $[(n+1)/2]/n$ and the MCD scale estimator has $[n/2]/n$. These are the highest breakdown values that can be attained by univariate affine equivariant estimators [@Croux:HBscale]. The univariate MCD estimators also have bounded influence functions, see [@Croux:IFMCD] for details. Their maximal asymptotic bias is studied in [@Croux:BiasScale; @Croux:BiasLoc] as a function of the contamination fraction. Note that in the univariate case the MCD estimator corresponds to the Least Trimmed Squares (LTS) regression estimator [@Rousseeuw:LMS], which is defined by $$\hat{\beta}_{LTS} = \operatorname*{argmin}_\beta \sum_{i=1}^h (r^2_\beta)_{i:n} \label{eq:LTS}$$ where $(r^2_\beta)_{1:n} \leqslant (r^2_\beta)_{2:n} \leqslant \ldots \leqslant (r^2_\beta)_{n:n}$ are the ordered squared residuals. For univariate data these residuals are simply $(r_{\beta})_i = x_i-\beta\,$. COMPUTATION {#computation .unnumbered} =========== The exact MCD estimator is very hard to compute, as it requires the evaluation of all $\binom{n}{h}$ subsets of size $h$.
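For very small data sets this brute-force search can actually be carried out. The sketch below (assuming numpy, on an invented contaminated sample with $n=15$ and $h=8$, i.e. 6435 subsets) returns the raw exact MCD estimates.

```python
import numpy as np
from itertools import combinations

def exact_mcd(X, h):
    """Raw exact MCD: scan all C(n, h) h-subsets and keep the one whose
    covariance matrix has the smallest determinant (feasible for tiny n only)."""
    best_det, best_H = np.inf, None
    for H in combinations(range(len(X)), h):
        det = np.linalg.det(np.cov(X[list(H)], rowvar=False))
        if det < best_det:
            best_det, best_H = det, list(H)
    return X[best_H].mean(axis=0), np.cov(X[best_H], rowvar=False)

rng = np.random.default_rng(7)
X = np.vstack([rng.standard_normal((12, 2)),
               rng.standard_normal((3, 2)) + 10.0])   # 3 far outliers
mu_mcd, Sigma_mcd = exact_mcd(X, h=8)                 # ignores the outliers
```

Already at $n=50$ and $h=26$ there are more than $10^{14}$ subsets, which is why the approximate algorithm below is used in practice.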
Therefore one switches to an approximate algorithm such as the FastMCD algorithm of [@Rousseeuw:FastMCD] which is quite efficient. The key component of the algorithm is the C-step: 0.2in [**Theorem.**]{} [ *Take $\bX=\{\boldsymbol{x}_1,\dots,\boldsymbol{x}_n\}$ and let $H_1 \subset \{1,\dots,n\}$ be a subset of size $h$. Put $\boldsymbol{\hat{\mu}}_1$ and $\boldsymbol{\hat{\Sigma}}_1$ the empirical mean and covariance matrix of the data in $H_1$. If $|\boldsymbol{\hat{\Sigma}}_1| \neq 0$ define the relative distances $d_1(i) := d(\boldsymbol{x}_i,\boldsymbol{\hat{\mu}}_1, \boldsymbol{\hat{\Sigma}}_1)$ for $i=1,\dots,n$. Now take $H_2$ such that $\{d_1(i) ; i \in H_2\} := \{(d_1)_{1:n},\dots,(d_1)_{h:n}\}$ where $(d_1)_{1:n} \leqslant (d_1)_{2:n} \leqslant \dots \leqslant (d_1)_{n:n}$ are the ordered distances, and compute $\boldsymbol{\hat{\mu}}_2$ and $\boldsymbol{\hat{\Sigma}}_2$ based on $H_2$. Then $$|\boldsymbol{\hat{\Sigma}}_2| \leqslant |\boldsymbol{\hat{\Sigma}}_1|$$ with equality if and only if $\boldsymbol{\hat{\mu}}_2 = \boldsymbol{\hat{\mu}}_1$ and $\boldsymbol{\hat{\Sigma}}_2 = \boldsymbol{\hat{\Sigma}}_1$.* ]{} 0.2in If $|\boldsymbol{\hat{\Sigma}}_1| > 0$, the C-step thus easily yields a new $h$-subset with lower covariance determinant. Note that the C stands for ‘concentration’ since $\boldsymbol{\hat{\Sigma}}_2$ is more concentrated (has a lower determinant) than $\boldsymbol{\hat{\Sigma}}_1$. The condition $|\boldsymbol{\hat{\Sigma}}_1| \neq 0$ in the theorem is no real restriction because if $|\boldsymbol{\hat{\Sigma}}_1| = 0$ the minimal objective value is already attained (and in fact the $h$-subset $H_1$ lies on an affine hyperplane). C-steps can be iterated until $|\boldsymbol{\hat{\Sigma}}_{\text{new}}| = |\boldsymbol{\hat{\Sigma}}_{\text{old}}|$. The sequence of determinants obtained in this way must converge in a finite number of steps because there are only finitely many $h$-subsets, and in practice converges quickly. 
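The C-step is straightforward to implement. Below is a minimal numpy sketch on invented data, confirming that the covariance determinant does not increase.

```python
import numpy as np

def c_step(X, H):
    """One concentration step: from the h-subset H compute (mu, Sigma),
    then return the h indices with smallest statistical distances."""
    h = len(H)
    mu = X[H].mean(axis=0)
    Sigma = np.cov(X[H], rowvar=False, bias=True)   # covariance of the subset
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
    return np.argsort(d2)[:h]

rng = np.random.default_rng(3)
X = np.vstack([rng.standard_normal((90, 2)),
               rng.standard_normal((10, 2)) + 6.0])
H1 = rng.choice(100, size=51, replace=False)        # a random h-subset
H2 = c_step(X, H1)
det1 = np.linalg.det(np.cov(X[H1], rowvar=False, bias=True))
det2 = np.linalg.det(np.cov(X[H2], rowvar=False, bias=True))
# by the theorem, det2 <= det1
```

Iterating `c_step` until the index set no longer changes reproduces the convergence behavior described above.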
However, there is no guarantee that the final value $|\boldsymbol{\hat{\Sigma}}_{\text{new}}|$ of the iteration process is the global minimum of the MCD objective function. Therefore an approximate MCD solution can be obtained by taking many initial choices of $H_1$ and applying C-steps to each, keeping the solution with lowest determinant. To construct an initial subset $H_1$ one draws a random $(p+1)$-subset $J$ and computes its empirical mean $\boldsymbol{\hat{\mu}}_0$ and covariance matrix $\boldsymbol{\hat{\Sigma}}_0$. (If $|\boldsymbol{\hat{\Sigma}}_0| =0$ then $J$ can be extended by adding observations until $|\boldsymbol{\hat{\Sigma}}_0| >0$.) Then the distances $d_0^2(i):= d^2(\boldsymbol{x}_i, \boldsymbol{\hat{\mu}}_0, \boldsymbol{\hat{\Sigma}}_0)$ are computed for $i=1,\ldots,n$ and sorted. The initial subset $H_1$ then consists of the $h$ observations with smallest distance $d_0\,$. This method yields better initial subsets than drawing random $h$-subsets directly, because the probability of drawing an outlier-free $(p+1)$-subset is much higher than that of drawing an outlier-free $h$-subset. The FastMCD algorithm contains several computational improvements. Since each C-step involves the calculation of a covariance matrix, its determinant and the corresponding distances, using fewer C-steps considerably improves the speed of the algorithm. It turns out that after two C-steps, many runs that will lead to the global minimum already have a rather small determinant. Therefore, the number of C-steps is reduced by applying only two C-steps to each initial subset and selecting the 10 subsets with lowest determinants. Only for these 10 subsets further C-steps are taken until convergence. This procedure is very fast for small sample sizes $n$, but when $n$ grows the computation time increases due to the $n$ distances that need to be calculated in each C-step. 
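The construction of an initial $h$-subset from a random $(p+1)$-subset can be sketched as follows (numpy; the extension of a degenerate subset is included for completeness).

```python
import numpy as np

def initial_h_subset(X, h, rng):
    """Draw a random (p+1)-subset, extend it while its covariance is singular,
    and return the h observations with smallest distances relative to it."""
    n, p = X.shape
    J = list(rng.choice(n, size=p + 1, replace=False))
    Sigma = np.cov(X[J], rowvar=False)
    while np.linalg.det(Sigma) <= 0 and len(J) < n:   # extend a singular subset
        J.append(int(rng.choice(np.setdiff1d(np.arange(n), J))))
        Sigma = np.cov(X[J], rowvar=False)
    mu = X[J].mean(axis=0)
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
    return np.argsort(d2)[:h]

rng = np.random.default_rng(4)
X = rng.standard_normal((60, 3))
H1 = initial_h_subset(X, h=32, rng=rng)
```

In FastMCD many such starts are drawn, two C-steps are applied to each, and only the most promising candidates are iterated to convergence, as described above.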
For large $n$ FastMCD partitions the data set, which avoids doing all calculations on the entire data set. Note that the FastMCD algorithm is itself affine equivariant. Implementations of the FastMCD algorithm are available in R (as part of the packages *rrcov*, *robust* and *robustbase*), in SAS/IML Version 7 and SAS Version 9 (in *PROC ROBUSTREG*), and in S-PLUS (as the built-in function *cov.mcd*). There is also a Matlab version in LIBRA, a LIBrary for Robust Analysis [@Verboven:Toolbox; @Verboven:WIRE-LIBRA], which can be downloaded from <http://wis.kuleuven.be/stat/robust>. Moreover, it is available in the PLS toolbox of Eigenvector Research (<http://www.eigenvector.com>). Note that some MCD functions use $\alpha=0.5$ by default, yielding a breakdown value of 50%, whereas other implementations use $\alpha=0.75$. Of course $\alpha$ can always be set by the user. APPLICATIONS {#applications .unnumbered} ============ There are many applications of the MCD, for instance in finance and econometrics [@Zaman:Econometrics; @Welsh:asset; @Gambacciani:mixtures], medicine [@Prastawa:brain], quality control [@Jensen:controlcharts], geophysics [@Neykov:sites], geochemistry [@Filzmoser:expl], image analysis [@Vogler:outlier; @Lu:brain] and chemistry [@vanHelvoort:geo], but this list is far from complete. MCD-BASED MULTIVARIATE METHODS {#mcd-based-multivariate-methods .unnumbered} ============================== Many multivariate statistical methods rely on covariance estimation, hence the MCD estimator is well-suited for constructing robust multivariate techniques. Moreover, the trimming idea of the MCD and the C-step have been generalized to many new estimators. Here we list some applications and extensions. The MCD analog in regression is the Least Trimmed Squares regression estimator [@Rousseeuw:LMS], which minimizes the sum of the $h$ smallest squared residuals \[eq:LTS\].
Equivalently, the LTS estimate corresponds to the least squares fit of the $h$-subset with smallest sum of squared residuals. The FastLTS algorithm [@Rousseeuw:FASTLTS] uses techniques similar to FastMCD. The outlier map introduced in [@Rousseeuw:Diagnostic] plots the robust regression residuals versus the robust distances of the predictors, and is very useful for classifying outliers, see also [@Rousseeuw:WIRE-Anomaly]. Moreover, MCD-based robust distances are also useful for robust linear regression [@Simpson:Onestep; @Coakley:Schweppe], regression with continuous and categorical regressors [@Hubert:Binary], and for logistic regression [@Rousseeuw:Sep; @Croux:Roblogreg]. In the multivariate regression setting (that is, with several response variables) the MCD can be used directly to obtain MCD-regression [@Rousseeuw:MCDreg], whereas MCD applied to the residuals leads to multivariate LTS estimation [@Agullo:MLTS]. Robust errors-in-variables regression is proposed in [@Fekri:orthog]. Covariance estimation is also important in principal component analysis and related methods. For low-dimensional data (with $n > 5p$) the principal components can be obtained as the eigenvectors of the MCD scatter matrix [@Croux:PCAMCD], and robust factor analysis based on the MCD has been studied in [@Pison:RobFA]. The MCD was also used for invariant coordinate selection [@Tyler:ICS]. Robust canonical correlation is proposed in [@Croux:Cancor]. For high-dimensional data, projection pursuit ideas combined with the MCD result in the ROBPCA method [@Hubert:ROBPCA; @Debruyne:IFROBPCA] for robust PCA. In turn ROBPCA has led to the construction of robust Principal Component Regression [@Hubert:RPCR] and robust Partial Least Squares Regression [@Hubert:RSIMPLS; @VandenBranden:PLS], together with appropriate outlier maps, see also [@Hubert:ReviewHighBreakdown].
Also methods for robust PARAFAC [@Engelen:robParafac] and robust multilevel simultaneous component analysis [@Ceulemans:RobMSCA] are based on ROBPCA. The LTS subspace estimator [@Maronna:OrReg] generalizes LTS regression to subspace estimation and orthogonal regression. An MCD-based alternative to the Hotelling test is provided in [@Willems:Hotelling]. A robust bootstrap for the MCD is proposed in [@Willems:bootstrapMCD] and a fast cross-validation algorithm in [@Hubert:FastCV]. Computation of the MCD for data with missing values is explored in [@Cheng:Missing; @Copt:missingsMCD; @Serneels:MissingsPCA]. A robust Cronbach alpha is studied in [@Christmann:Cronbach]. Classification (i.e. discriminant analysis) based on MCD is constructed in [@Hawkins:Discrim; @Hubert:Discrim], whereas an alternative for high-dimensional data is developed in [@VandenBranden:RSIMCA]. Robust clustering is handled in [@Rocke:Cluster; @Hardin:ClusteringMCD; @Gallegos:cluster]. The trimming procedure of the MCD has inspired the construction of maximum trimmed likelihood estimators [@Vandev:MTL; @Hadi:MTL; @Muller:Genreg; @Cizek:Binary], trimmed $k$-means [@Cuesta:kmeans; @Cuesta:mixture; @Garcia:linclust], least weighted squares regression [@Visek:LWS], and minimum weighted covariance determinant estimation [@Roelant:MWCD]. The idea of the C-step in the FastMCD algorithm has also been extended to S-estimators [@Salibian:FastS; @Hubert:DetS]. RECENT EXTENSIONS {#recent-extensions .unnumbered} ================= Deterministic MCD {#deterministic-mcd .unnumbered} ----------------- As the FastMCD algorithm starts by drawing random subsets, it does not necessarily give the same result in multiple runs of the algorithm. (To address this, most implementations fix the seed of the random selection.) Moreover, FastMCD needs to draw many initial subsets in order to obtain at least one that is outlier-free.
To circumvent both problems, a deterministic algorithm for robust location and scatter has been developed, denoted as DetMCD [@Hubert:DetMCD]. It uses the same iteration steps as FastMCD but does not start from random subsets. Unlike FastMCD it is permutation invariant, i.e. the result does not depend on the order of the observations in the data set. Furthermore DetMCD runs even faster than FastMCD, and is less sensitive to point contamination. DetMCD computes a small number of deterministic initial estimates, followed by concentration steps. Let $X_j$ denote the columns of the data matrix $\bX$. First each variable $X_j$ is standardized by subtracting its median and dividing by the $Q_n$ scale estimator of [@Rousseeuw:Scale]. This standardization makes the algorithm location and scale equivariant, i.e. equations \[eq:affine\] hold for any non-singular diagonal matrix $\bA$. The standardized data set is denoted as the $n \times p$ matrix $\bZ$ with rows $\bz'_i$ ($i=1,\ldots,n$) and columns $Z_j$ ($j=1,\ldots,p$). Next, six preliminary estimates $\bS_k$ are constructed ($k=1,\ldots,6$) for the scatter or correlation of $\bZ$: 1. $\bS_1=\textrm{corr}(\bY)$ with $Y_j=\textrm{tanh}(Z_j)$ for $j=1,\ldots,p$. 2. Let $R_j$ be the ranks of the column $Z_j$ and put $\bS_2= \textrm{corr}(\bR)$. This is the Spearman correlation matrix of $\bZ$. 3. $\bS_3=\textrm{corr}(\bT)$ with the normal scores $T_j=\Phi^{-1}((R_j - 1/3)/(n+1/3))$. 4. The fourth scatter estimate is the spatial sign covariance matrix [@Visuri:rank]: define $\bk_i=\bz_i/\| \bz_i \|$ for all $i$ and let $\bS_4 =(1/n)\sum_{i=1}^n \bk_i \bk_i'\,$. 5. $\bS_5$ is the covariance matrix of the $\left\lceil n/2 \right\rceil$ standardized observations $\bz_i$ with smallest norm, which corresponds to the first step of the BACON algorithm [@Billor:Bacon]. 6. The sixth scatter estimate is the raw orthogonalized Gnanadesikan-Kettenring (OGK) estimator [@Maronna:OGK].
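The first three of these initial estimates are simple to write down. Below is a sketch assuming numpy and scipy; the robust standardization step is omitted, so `Z` is taken as already standardized, and the demo data are invented.

```python
import numpy as np
from scipy.stats import norm, rankdata

def detmcd_initial_correlations(Z):
    """S1: correlation of tanh-transformed columns; S2: Spearman correlation;
    S3: correlation of normal scores (Z assumed robustly standardized)."""
    S1 = np.corrcoef(np.tanh(Z), rowvar=False)
    R = np.apply_along_axis(rankdata, 0, Z)            # columnwise ranks
    S2 = np.corrcoef(R, rowvar=False)                  # Spearman correlation matrix
    n = Z.shape[0]
    S3 = np.corrcoef(norm.ppf((R - 1/3) / (n + 1/3)), rowvar=False)
    return S1, S2, S3

rng = np.random.default_rng(5)
z1 = rng.standard_normal(300)
Z = np.column_stack([z1, 0.8 * z1 + 0.6 * rng.standard_normal(300)])
S1, S2, S3 = detmcd_initial_correlations(Z)
```

All three transformations dampen the effect of outlying cells before the correlation is taken, which is what makes these starting values robust.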
As these $\bS_k$ may have very inaccurate eigenvalues, the following steps are applied to each of them: 1. Compute the matrix $\bE$ of eigenvectors of $\bS_k$ and put $\bV = \bZ \bE\,$. 2. Estimate the scatter of $\bZ$ by $\bS_{k}(\bZ) = \bE \bLambda \bE'$ where $\bLambda = \text{diag}(Q^2_n(\bV_1), \ldots,Q^2_n(\bV_p))\,$. 3. Estimate the center of $\bZ$ by $\hat{\bmu}_k(\bZ) = \bS^{1/2}_k (\mbox{comed}(\bZ \bS^{-1/2}_k ))$ where $\mbox{comed}$ is the coordinatewise median. For the six estimates $(\hat{\bmu}_k(\bZ),\bS_k(\bZ))$ the statistical distances $d_{ik} = d(\bz_i,\hat{\bmu}_k(\bZ), \bS_k(\bZ))$ of all points are computed as before. For each initial estimate $k=1,\ldots,6$ we compute the mean and covariance matrix of the $h_0 = \lfloor n/2\rfloor$ observations with smallest $d_{ik}\,$, and relative to those we compute statistical distances (denoted as $d^*_{ik}$) of all $n$ points. For each $k=1,\ldots,6$ the $h$ observations $\bx_i$ with smallest $d^*_{ik}$ are selected, and C-steps are applied to them until convergence. The solution with smallest determinant is called the raw DetMCD. Then a weighting step is applied as in \[eq:rewMCD\], yielding the final DetMCD. DetMCD has the advantage that estimates can be quickly computed for a whole range of $h$ values (and hence a whole range of breakdown values), as only the C-steps in the second part of the algorithm depend on $h$. Monitoring some diagnostics (such as the condition number of the scatter estimate) can give additional insight into the underlying data structure, as in the example in [@Hubert:DetS]. Note that even though DetMCD is not affine equivariant, it turns out that its deviation from affine equivariance is very small. Minimum regularized covariance determinant {#minimum-regularized-covariance-determinant .unnumbered} ------------------------------------------ In high dimensions we need a modification of the MCD, since the existing MCD algorithms take a long time and are less robust in that case.
For large $p$ we can still make a rough estimate of the scatter as follows. First compute the first $q < p$ robust principal components of the data. For this we can use the MCD-based ROBPCA method [@Hubert:ROBPCA], which requires that the number of components $q$ be set rather low. The robust PCA yields a center $\hbmu$ and $q$ loading vectors. Then form the $p \times q$ matrix $\bL$ with the loading vectors as columns. The principal component scores $\bt_i$ are then given by $\bt_i = \bL'(\bx_i - \hbmu)\,$. Now compute $\lambda_j$ for $j=1,\ldots,q$ as a robust variance estimate of the $j$-th principal component, and gather all the $\lambda_j$ in a diagonal matrix $\bLambda$. Then we can robustly estimate the scatter matrix of the original data set $\bX$ by $\hbSigma(\bX) = \bL \bLambda \bL'$. Unfortunately, whenever $q < p$ the resulting matrix $\hbSigma(\bX)$ will have $p-q$ eigenvalues equal to zero, hence $\hbSigma(\bX)$ is singular. If we require a nonsingular scatter matrix we need a different approach using regularization. The [*minimum regularized covariance determinant*]{} (MRCD) method [@Boudt:MRCD] was constructed for this purpose, and works when $n < p$ too. The MRCD minimizes $$\text{det}\{\rho \bT + (1-\rho)\mbox{Cov}(\bX_H)\} \label{eq:MRCD}$$ where $\bT$ is a positive definite ‘target’ matrix and $\mbox{Cov}(\bX_H)$ is the usual covariance matrix of an $h$-subset $\bX_H$ of $\bX$. Even when $\mbox{Cov}(\bX_H)$ is singular by itself, the combined matrix is always positive definite hence invertible. The target matrix $\bT$ depends on the application, and can for instance be the $p \times p$ identity matrix or an equicorrelation matrix in which the single bivariate correlation is estimated robustly from all the data. Perhaps surprisingly, it turns out that the C-step theorem can be extended to the MRCD. The MRCD algorithm is similar to the DetMCD described above, with deterministic starts followed by iterating these modified C-steps.
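The effect of the regularization in this objective is easy to demonstrate. A numpy sketch with the identity target, using $h=10$ invented observations in $p=20$ dimensions so that the plain covariance matrix is singular; `rho` is fixed arbitrarily here, whereas the actual MRCD chooses it from the data:

```python
import numpy as np

def mrcd_objective(XH, T, rho):
    """det(rho*T + (1-rho)*Cov(X_H)): the MRCD objective for one h-subset."""
    S = rho * T + (1 - rho) * np.cov(XH, rowvar=False)
    return np.linalg.det(S), S

rng = np.random.default_rng(6)
p = 20
XH = rng.standard_normal((10, p))                  # h = 10 < p: Cov(X_H) is singular
det_plain = np.linalg.det(np.cov(XH, rowvar=False))    # (numerically) zero
det_reg, S_reg = mrcd_objective(XH, np.eye(p), rho=0.25)
eigmin = np.linalg.eigvalsh(S_reg).min()           # bounded below by rho here
```

With the identity target every eigenvalue of the blend is at least $\rho$, so the regularized matrix is positive definite and its determinant is a meaningful objective even when $h < p$.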
The method performs well in simulations even in 1000 dimensions. Software for DetMCD and MRCD is available from <http://wis.kuleuven.be/stat/robust>. CONCLUSIONS {#conclusions .unnumbered} =========== In this paper we have reviewed the Minimum Covariance Determinant (MCD) estimator of multivariate location and scatter. We have illustrated its resistance to outliers on a real data example. Its main properties concerning robustness, efficiency and equivariance were described, as well as computational aspects. We have provided a detailed reference list with applications and generalizations of the MCD in applied and methodological research. Finally, two recent modifications of the MCD make it possible to save computing time and to deal with high-dimensional data. P.J. Rousseeuw. Least median of squares regression. *Journal of the American Statistical Association*, 79: 871–880, 1984. P.J. Rousseeuw. Multivariate estimation with high breakdown point. In W. Grossmann, G. Pflug, I. Vincze, and W. Wertz, editors, *Mathematical Statistics and Applications, Vol. B*, pages 283–297, Dordrecht, 1985. Reidel Publishing Company. P.J. Rousseeuw and K. Van Driessen. A fast algorithm for the Minimum Covariance Determinant estimator. *Technometrics*, 41: 212–223, 1999. S. Hettich and S.D. Bay. *The UCI KDD Archive*, 1999. URL <http://kdd.ics.uci.edu>. Irvine, CA: University of California, Department of Information and Computer Science. R.A. Maronna, D.R. Martin, and V.J. Yohai. *Robust Statistics: Theory and Methods*. Wiley, New York, 2006. C. Croux and G. Haesbroeck. Influence function and efficiency of the Minimum Covariance Determinant scatter matrix estimator. *Journal of Multivariate Analysis*, 71: 161–190, 1999. G. Pison, S. Van Aelst, and G. Willems. Small sample corrections for LTS and MCD. *Metrika*, 55: 111–123, 2002. R.W. Butler, P.L. Davies, and M. Jhun.
Asymptotics for the Minimum Covariance Determinant estimator. *The Annals of Statistics*, 21(3): 1385–1400, 1993. E.A. Cator and H.P. Lopuhaä. Asymptotic expansion of the minimum covariance determinant estimators. *Journal of Multivariate Analysis*, 101: 2372–2388, 2010. E.A. Cator and H.P. Lopuhaä. Central limit theorem and influence function for the MCD estimators at general multivariate distributions. *Bernoulli*, 18: 520–551, 2012. H.P. Lopuhaä and P.J. Rousseeuw. Breakdown points of affine equivariant estimators of multivariate location and covariance matrices. *The Annals of Statistics*, 19: 229–248, 1991. H.P. Lopuhaä. Asymptotics of reweighted estimators of multivariate location and scatter. *The Annals of Statistics*, 27: 1638–1665, 1999. P.J. Rousseeuw and B.C. van Zomeren. Unmasking multivariate outliers and leverage points. *Journal of the American Statistical Association*, 85: 633–651, 1990. A. Cerioli. Multivariate outlier detection with high-breakdown estimators. *Journal of the American Statistical Association*, 104: 147–156, 2010. J. Hardin and D.M. Rocke. The distribution of robust distances. *Journal of Computational and Graphical Statistics*, 14(4): 928–946, 2005. W.A. Stahel. *Robuste Schätzungen: infinitesimale Optimalität und Schätzungen von Kovarianzmatrizen*. PhD thesis, ETH Zürich, 1981. D.L. Donoho and M. Gasko. Breakdown properties of location estimates based on halfspace depth and projected outlyingness. *The Annals of Statistics*, 20(4): 1803–1827, 1992. E. Roelant, S. Van Aelst, and G. Willems. The minimum weighted covariance determinant estimator. *Metrika*, 70: 177–204, 2009. L. Davies. Asymptotic behavior of S-estimators of multivariate location parameters and dispersion matrices. *The Annals of Statistics*, 15: 1269–1292, 1987. P.J. Rousseeuw. Discussion on ‘Breakdown and groups’. *Annals of Statistics*, 33: 1004–1009, 2005. F.R.
Hampel, E.M. Ronchetti, P.J. Rousseeuw, and W.A. Stahel. *Robust Statistics: The Approach Based on Influence Functions*. Wiley, New York, 1986. P.J. Rousseeuw and A.M. Leroy. *Robust Regression and Outlier Detection*. Wiley-Interscience, New York, 1987. R.W. Butler. Nonparametric interval and point prediction using data trimmed by a Grubbs-type outlier rule. *The Annals of Statistics*, 10(1): 197–204, 1982. C. Croux and P.J. Rousseeuw. A class of high-breakdown scale estimators based on subranges. *Communications in Statistics-Theory and Methods*, 21: 1935–1951, 1992. C. Croux and G. Haesbroeck. Maxbias curves of robust scale estimators based on subranges. *Metrika*, 53: 101–122, 2001. C. Croux and G. Haesbroeck. Maxbias curves of location estimators based on subranges. *Journal of Nonparametric Statistics*, 14: 295–306, 2002. S. Verboven and M. Hubert. LIBRA: a Matlab library for robust analysis. *Chemometrics and Intelligent Laboratory Systems*, 75: 127–136, 2005. S. Verboven and M. Hubert. Matlab library LIBRA. *Wiley Interdisciplinary Reviews: Computational Statistics*, 2: 509–515, 2010. A. Zaman, P.J. Rousseeuw, and M. Orhan. Econometric applications of high-breakdown robust regression techniques. *Economics Letters*, 71: 1–8, 2001. R. Welsh and X. Zhou. Application of robust statistics to asset allocation models. *Revstat*, 5: 97–114, 2007. M. Gambacciani and M.S. Paolella. Robust normal mixtures for financial portfolio allocation. *Econometrics and Statistics*, 3: 91–111, 2017. M. Prastawa, E. Bullitt, S. Ho, and G. Gerig. A brain tumor segmentation framework based on outlier detection. *Medical Image Analysis*, 8: 275–283, 2004. W.A. Jensen, J.B. Birch, and W.H. Woodal. High breakdown estimation methods for phase I multivariate control charts. *Quality and Reliability Engineering International*, 23(5): 615–629, 2007. P. Filzmoser, R.G. Garrett, and C. Reimann. Multivariate outlier detection in exploration geochemistry.
*Computers and Geosciences*, 31: 579–587, 2005. N.M. Neykov, P.N. Neytchev, P.H.A.J.M. Van Gelder, and V.K. Todorov. Robust detection of discordant sites in regional frequency analysis. *Water Resources Research*, 43(6), 2007. C. Vogler, S. Goldenstein, J. Stolfi, V. Pavlovic, and D. Metaxas. Outlier rejection in high-dimensional deformable models. *Image and Vision Computing*, 25(3): 274–284, 2007. Y. Lu, J. Wang, J. Kong, B. Zhang, and J. Zhang. An integrated algorithm for MRI brain images segmentation. *Computer Vision Approaches to Medical Image Analysis*, 4241: 132–1342, 2006. P.J. van Helvoort, P. Filzmoser, and P.F.M. van Gaans. Sequential factor analysis as a new approach to multivariate analysis of heterogeneous geochemical datasets: An application to a bulk chemical characterization of fluvial deposits (Rhine-Meuse delta, The Netherlands). *Applied Geochemistry*, 20(12): 2233–2251, 2005. P.J. Rousseeuw and K. Van Driessen. Computing LTS regression for large data sets. *Data Mining and Knowledge Discovery*, 12: 29–45, 2006. P.J. Rousseeuw and M. Hubert. Anomaly detection by robust statistics. *arXiv:1707.09752*, 2017. D.G. Simpson, D. Ruppert, and R.J. Carroll. On one-step GM-estimates and stability of inferences in linear regression. *Journal of the American Statistical Association*, 87: 439–450, 1992. C.W. Coakley and T.P. Hettmansperger. A bounded influence, high breakdown, efficient regression estimator. *Journal of the American Statistical Association*, 88: 872–880, 1993. M. Hubert and P.J. Rousseeuw. Robust regression with both continuous and binary regressors. *Journal of Statistical Planning and Inference*, 57: 153–163, 1996. P.J. Rousseeuw and A. Christmann. Robustness against separation and outliers in logistic regression. *Computational Statistics & Data Analysis*, 43: 315–332, 2003. C. Croux and G. Haesbroeck. Implementing the Bianco and Yohai estimator for logistic regression.
*Computational Statistics & Data Analysis*, 44: 273–295, 2003. P.J. Rousseeuw, S. Van Aelst, K. Van Driessen, and J. Agulló. Robust multivariate regression. *Technometrics*, 46: 293–305, 2004. J. Agulló, C. Croux, and S. Van Aelst. The multivariate least trimmed squares estimator. *Journal of Multivariate Analysis*, 99: 311–318, 2008. M. Fekri and A. Ruiz-Gazen. Robust weighted orthogonal regression in the errors-in-variables model. *Journal of Multivariate Analysis*, 88(1): 89–108, 2004. C. Croux and G. Haesbroeck. Principal components analysis based on robust estimators of the covariance or correlation matrix: influence functions and efficiencies. *Biometrika*, 87: 603–618, 2000. G. Pison, P.J. Rousseeuw, P. Filzmoser, and C. Croux. Robust factor analysis. *Journal of Multivariate Analysis*, 84: 145–172, 2003. D.E. Tyler, F. Critchley, L. Dümbgen, and H. Oja. Invariant co-ordinate selection. *Journal of the Royal Statistical Society Series B*, 71: 549–592, 2009. C. Croux and C. Dehon. Analyse canonique basée sur des estimateurs robustes de la matrice de covariance. *La Revue de Statistique Appliquée*, 2: 5–26, 2002. M. Hubert, P.J. Rousseeuw, and K. Vanden Branden. ROBPCA: a new approach to robust principal components analysis. *Technometrics*, 47: 64–79, 2005. M. Debruyne and M. Hubert. The influence function of the Stahel-Donoho covariance estimator of smallest outlyingness. *Statistics & Probability Letters*, 79: 275–282, 2009. M. Hubert and S. Verboven. A robust PCR method for high-dimensional regressors. *Journal of Chemometrics*, 17: 438–452, 2003. M. Hubert and K. Vanden Branden. Robust methods for Partial Least Squares Regression. *Journal of Chemometrics*, 17: 537–549, 2003. K. Vanden Branden and M. Hubert. Robustness properties of a robust PLS regression method. *Analytica Chimica Acta*, 515: 229–241, 2004. M. Hubert, P.J. Rousseeuw, and S. Van Aelst. High breakdown robust multivariate methods.
*Statistical Science*, 23: 92–119, 2008. S. Engelen and M. Hubert. Detecting outlying samples in a parallel factor analysis model. *Analytica Chimica Acta*, 705: 155–165, 2011. E. Ceulemans, M. Hubert, and P.J. Rousseeuw. Robust multilevel simultaneous component analysis. *Chemometrics and Intelligent Laboratory Systems*, 129: 33–39, 2013. R.A. Maronna. Principal components and orthogonal regression based on robust scales. *Technometrics*, 47: 264–273, 2005. G. Willems, G. Pison, P.J. Rousseeuw, and S. Van Aelst. A robust Hotelling test. *Metrika*, 55: 125–138, 2002. G. Willems and S. Van Aelst. A fast bootstrap method for the MCD estimator. In J. Antoch, editor, *Proceedings in Computational Statistics*, pages 1979–1986, Heidelberg, 2004. Springer-Verlag. M. Hubert and S. Engelen. Fast cross-validation for high-breakdown resampling algorithms for PCA. *Computational Statistics & Data Analysis*, 51: 5013–5024, 2007. T.-C. Cheng and M. Victoria-Feser. High breakdown estimation of multivariate location and scale with missing observations. *British Journal of Mathematical and Statistical Psychology*, 55: 317–335, 2002. S. Copt and M.-P. Victoria-Feser. Fast algorithms for computing high breakdown covariance matrices with missing data. In M. Hubert, G. Pison, A. Struyf, and S. Van Aelst, editors, *Theory and Applications of Recent Robust Methods*, pages 71–82, Basel, 2004. Statistics for Industry and Technology, Birkhäuser. S. Serneels and T. Verdonck. Principal component analysis for data containing outliers and missing elements. *Computational Statistics & Data Analysis*, 52: 1712–1727, 2008. A. Christmann and S. Van Aelst. Robust estimation of Cronbach’s alpha. *Journal of Multivariate Analysis*, 97(7): 1660–1674, 2006. D.M. Hawkins and G.J. McLachlan. High-breakdown linear discriminant analysis. *Journal of the American Statistical Association*, 92: 136–143, 1997. M. Hubert and K. Van Driessen.
Fast and robust discriminant analysis. *Computational Statistics & Data Analysis*, 45:0 301–320, 2004. K. Vanden Branden and M. Hubert. Robust classification in high dimensions based on the [SIMCA]{} method. *Chemometrics and Intelligent Laboratory Systems*, 79:0 10–21, 2005. D.M. Rocke and D.L. Woodruff. A synthesis of outlier detection and cluster identification, 1999. technical report. J. Hardin and D.M. Rocke. Outlier detection in the multiple cluster setting using the minimum covariance determinant estimator. *Computational Statistics & Data Analysis*, 44:0 625–638, 2004. M.T. Gallegos and G. Ritter. A robust method for cluster analysis. *The Annals of Statistics*, 33:0 347–380, 2005. D.L. Vandev and N.M. Neykov. About regression estimators with high breakdown point. *Statistics*, 32:0 111–129, 1998. A.S. Hadi and A. Luce [n]{}o. Maximum trimmed likelihood estimators: a unified approach, examples and algorithms. *Computational Statistics & Data Analysis*, 25:0 251–272, 1997. C.H. M[ü]{}ller and N. Neykov. Breakdown points of trimmed likelihood estimators and related estimators in generalized linear models. *Journal of Statistical Planning and Inference*, 116:0 503–519, 2003. P. Čižek. Robust and efficient adaptive estimation of binary-choice regression models. *Journal of the American Statistical Association*, 1030 (482):0 687–696, 2008. J.A. Cuesta-Albertos, A. Gordaliza, and C. Matrán. Trimmed $k$-means: An attempt to robustify quantizers. *The Annals of Statistics*, 25:0 553–576, 1997. J.A. Cuesta-Albertos, C. Matrán, and A. Mayo-Iscar. Robust estimation in the normal mixture model based on robust clustering. *Journal of the Royal Statistical Society: Series B*, 70:0 779–802, 2008. L.A. García-Escudero, A. Gordaliza, R. San Martín, S. Van Aelst, and R.H. Zamar. Robust linear clustering. *Journal of the Royal Statistical Society B*, 71:0 1–18, 2009. J.Á Víšek. The least weighted squares [I]{}. the asymptotic linearity of normal equations. 
*Bulletin of the Czech Econometric Society*, 9:0 31–58, 2002. M. Salibian-Barrera and V.J. Yohai. A fast algorithm for [S]{}-regression estimates. *Journal of Computational and Graphical Statistics*, 15:0 414–427, 2006. M. Hubert, P.J. Rousseeuw, D. Vanpaemel, and T.Verdonck. The [D]{}et[S]{} and [D]{}et[MM]{} estimators for multivariate location and scatter. *Computational Statistics & Data Analysis*, 81:0 64–75, 2015. M. Hubert, P.J. Rousseeuw, and T. Verdonck. A deterministic algorithm for robust location and scatter. *Journal of Computational and Graphical Statistics*, 21:0 618–637, 2012. P.J. Rousseeuw and C. Croux. Alternatives to the median absolute deviation. *Journal of the American Statistical Association*, 88:0 1273–1283, 1993. S. Visuri, V. Koivunen, and H. Oja. Sign and rank covariance matrices. *Journal of Statistical Planning and Inference*, 91:0 557–575, 2000. N. Billor, A.S. Hadi, and P.F. Velleman. : blocked adaptive computationally efficient outlier nominators. *Computational Statistics & Data Analysis*, 34:0 279–298, 2000. R.A. Maronna and R.H. Zamar. Robust estimates of location and dispersion for high-dimensional data sets. *Technometrics*, 44:0 307–317, 2002. K. Boudt, P.J. Rousseeuw, S. Vanduffel, T. Verdonck. The Minimum Regularized Covariance Determinant estimator. *arXiv: 1701.07086*, 2017. [^1]: Department of Mathematics, KU Leuven, Celestijnenlaan 200B, BE-3001 Leuven, Belgium [^2]: Dexia Bank, Belgium
--- abstract: 'In this work we show the operation and results of an X-ray fluorescence imaging system using a cascade of three gas electron multipliers (GEM) and a pinhole assembly. The detector operates in Ar/CO$_2$ (90/10) at atmospheric pressure, with resistive chains applied to the strip readout, which allow the use of only five electronic channels: two for each dimension and a fifth for energy and trigger. The corrections applied to the energy spectra to compensate for small changes in the signal amplitude and for gain differences throughout the sensitive area are described, and the clear improvement of the energy resolution is shown. To take advantage of the simultaneous sensitivity to the energy and to the position of interaction, a color scale matching the energy spectrum to the RGB range was applied, resulting in images where the color has a direct correspondence to the energy in each pixel and the intensity is reflected by the brightness of the image. The results obtained with four different color pigments are shown.' address: | Instituto de Física, Universidade de São Paulo\ Rua do Matão 1371, 05508-090 Cidade Universitária, São Paulo, Brasil author: - 'Geovane G. A. de Souza' - Hugo Natal da Luz bibliography: - 'mybibfile.bib' title: XRF element localization with a triple GEM detector using resistive charge division --- Gas Electron Multiplier,X-ray fluorescence imaging,Position sensitive detectors,resistive charge division Introduction ============ The Gas Electron Multiplier (GEM) [@Sau97] is a Micropattern Gaseous Detector (MPGD) that has undergone a steady development and maturity process over the last two decades.
Thanks to its well-studied properties, such as high counting rate capability, fair energy resolution, good ion backflow suppression and good stability against electrical discharges, allied to the possibility of building detection areas of the order of the square meter, it has been selected to operate in major Particle Physics experiments, such as LHCb [@Car12] and COMPASS [@Alt02], and its use is also foreseen for upgrades of ALICE [@ALICEUP] and CMS [@CMS15]. It consists of a thin kapton foil coated on both sides with copper layers. A triangular matrix of biconical holes is etched through the copper and the foil, with a larger hole diameter in the copper layers than in the kapton substrate, the holes being evenly spaced from their nearest neighbors. When the GEM foil is immersed in a gas mixture and suitable voltage differences are applied between the electrodes to define the electric fields above, below and inside the holes of the GEM, the very high electric field inside the holes focuses free electrons generated by the interaction of ionizing radiation with the atoms of the gas. When these electrons penetrate the holes, Townsend avalanches are generated, multiplying the primary charge. The possibility of cascading several GEMs, where each GEM multiplies the charge from the preceding one, significantly increases the charge gain of the detector, as well as its stability. Applications in low-energy physics, namely in X-ray imaging, have been developed towards very high resolutions, often at the cost of the detection area. In fact, the newest room-temperature solid-state detectors achieve remarkable energy and position resolutions, but only over small detection areas. In the study of historical artifacts or art pieces by X-ray fluorescence, much larger areas usually must be covered, with a resolution around 1mm.
This is usually done with small solid-state detectors without position sensitivity, which are scanned across the whole area under study. GEM-based detectors can present an advantage, reconstructing elemental distributions over large areas in a much shorter time, sparing the need for long and tedious scans. This approach has been gaining some space, with different groups using different MPGDs and presenting promising results with GEM [@Zie13], Micro-Hole & Strip Plate (MHSP) [@Sil11] and Thick-Cobra [@Sil13] detectors. Reference [@Vel18] reviews some of the work done by different research groups using MPGDs with this technique. The energy resolution limitations of this type of detector with respect to the solid-state ones are obvious. Nevertheless, applying corrections for the variations of the signal amplitude during acquisition and for the local gain variations throughout the sensitive area improves its performance considerably, making it a very valuable tool in X-ray fluorescence imaging. Experimental setup ================== The detector consists of a cascade of three GEMs immersed in a mixture of Ar/CO$_2$ (90/10) at atmospheric pressure. The detector window is a thin kapton foil. The gas passes through tubes and the flow is set to 6 l/h. The triple-GEM geometry can be seen in figure \[ImaGEM2\], where the dimensions and typical electric fields and voltages are also depicted. ![The triple-GEM setup.[]{data-label="ImaGEM2"}](ImaGEM2.png){width="7.5cm"} The readout system is segmented into 256 strips in each dimension (fig. \[readout\]), which are interconnected through resistive chains. By collecting the charge at both ends of each resistive chain, it is possible to calculate the projection of the primary X-ray ionization on the X–Y plane for each coordinate through eq. \[interaction\].
$$x= l \frac{X_L-X_R}{A},\qquad y = l \frac{Y_L-Y_R}{A} \label{interaction}$$ where $X_L$, $X_R$, $Y_L$ and $Y_R$ are the signal amplitudes at the left and right ends of the $X$ and $Y$ resistive chains according to figure \[readout\], $l$ is the length/width of the detector and $A$ is given either by the sum of the amplitudes of all four channels or by the amplitude of the signal collected from the bottom electrode of the last GEM. This signal also serves as the global trigger of the electronic system. The charge collected by each channel is integrated by a standard charge-sensitive preamplifier with a charge sensitivity around 1V/pC and a rise time around 50ns, and shaped by differentiating the signals, resulting in a Gaussian peak with a width of around $\SI{3}{\micro\s}$, which is suitable for the counting rates used throughout this work. After application of simple logic, it is sampled by a standard 12-bit peaking ADC. ![Scheme of the strip readout system using resistive chains.[]{data-label="readout"}](imagesystem.png){width="7cm"} As X-ray source we used the Amptek Mini-X [@minix] with a silver target, operating at a high voltage of typically 15kV and a filament current around 15$\upmu$A. We also used a $^{55}$Fe radioactive source, which decays into manganese by electron capture, emitting 5.9keV (K$_{\alpha}$) and 6.4keV (K$_{\beta}$) characteristic X-rays, for energy calibration and to determine the energy resolution. A framework for data processing, image reconstruction and analysis was developed using the ROOT framework from CERN [@root] and other C++ libraries. To characterize the detector in terms of intrinsic position resolution, different masks were imaged in transmission mode, with the object placed between the detector and the X-ray source, directly on the detector window, ensuring that there was no magnification in the transmission images obtained.
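The charge-division reconstruction of eq. \[interaction\] can be sketched in a few lines. This is an illustrative Python snippet, not the ROOT-based analysis code actually used; the detector length and the choice of $A$ as the sum of the four amplitudes are assumptions made for the example.

```python
def reconstruct_position(x_left, x_right, y_left, y_right, length_mm=100.0):
    """Charge-division centroid, as in eq. (interaction).

    The normalization A is taken as the sum of the four amplitudes;
    the detector length `length_mm` is a placeholder, not the paper's value.
    """
    a = x_left + x_right + y_left + y_right
    x = length_mm * (x_left - x_right) / a
    y = length_mm * (y_left - y_right) / a
    return x, y

# Equal charge at both ends of each chain places the event at the center;
# an imbalance on the X chain displaces the event along x only:
print(reconstruct_position(3.0, 1.0, 2.0, 2.0))  # → (25.0, 0.0)
```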
For the X-ray fluorescence imaging, a stainless steel pinhole, 1mm thick, was placed between the sample and the detector window, at 10cm from both, leading to a magnification of 1. The sample object was irradiated with the high-intensity X-ray source. The fluorescence X-rays crossed the pinhole before entering the detector, as shown in figure \[fig:pinhole\]. The pinhole assures that the photons arriving at each point in the sensitive area of the detector correspond univocally to one position in the sample, thus allowing an accurate reconstruction of the elemental distribution in the sample. To test this capability, a set of four different pigments was irradiated and imaged. ![The pinhole setup used in this work for X-ray fluorescence imaging.[]{data-label="fig:pinhole"}](pinhole.png){width="40.00000%"} Results and discussion {#sec:res} ====================== Energy resolution — signal amplitude corrections {#sec:corr} ------------------------------------------------ The energy resolution was measured using a $^{55}$Fe radioactive source. The full window area was irradiated with the source, which was placed about 10cm above the detector window. During the irradiation time, which in the final application is expected to be 3 to 4 hours long, small drifts in the detector’s gain, and consequently in the amplitude of the signals, are expected, reflecting the changes in the environmental conditions of the detector surroundings, such as the temperature or the atmospheric pressure. The study of the mechanisms of this influence is beyond the scope of this work; however, gain changes were forced by changing the temperature of the room in order to test the algorithms that correct for temporal variations of the signal amplitude. Figure \[gaintemp\] shows how a change in the gain of the detector can be induced by changing the environment temperature.
![Detector’s gain and room temperature measured over three hours, while varying the temperature of the lab.[]{data-label="gaintemp"}](datagaintemp.png){width="50.00000%"} The correction applied to the signal amplitude was done offline and consisted of dividing the acquired data into time slices, each one containing enough events to allow a correct determination of the center of the main peak. A number of $10^5$ events in each time slice was considered more than sufficient for this correction and can be reduced in case of large and fast changes. After this, the position of the main peak is normalized between all the time slices, eliminating the temporal differences of the signal amplitudes, as shown in figure \[1stcorrection\]. This correction can be applied to every set of data, regardless of the reason that made the signal amplitudes drift. ![Left: Distribution of the signal amplitude obtained with the $^{55}$Fe radioactive source over time. The color scale indicates the number of counts. The bright part of the spectrum is the main peak of the energy spectrum. In this example, the amplitude of the signals increased slowly during 140 minutes. Right: The same distribution, with the signal amplitude corrected. The variations in the amplitude over time were corrected for the whole energy spectrum, including adjacent peaks, such as the escape peak.[]{data-label="1stcorrection"}](1stcorrection.png "fig:"){width="48.00000%"} ![Left: Distribution of the signal amplitude obtained with the $^{55}$Fe radioactive source over time. The color scale indicates the number of counts. The bright part of the spectrum is the main peak of the energy spectrum. In this example, the amplitude of the signals increased slowly during 140 minutes. Right: The same distribution, with the signal amplitude corrected.
The variations in the amplitude over time were corrected for the whole energy spectrum, including adjacent peaks, such as the escape peak.[]{data-label="1stcorrection"}](2ndcorrection.png "fig:"){width="48.00000%"} Besides the corrections due to drifts of the amplitude in time, a second offline correction was done to compensate for the gain non-uniformity across the detector area. It is known that many factors, such as differences in the dielectric thickness, different hole diameters or even a slight staggering of the GEM foils, may locally affect the detector gain and, consequently, the energy resolution. To overcome this issue, the effective area of detection was divided into 1024 different sectors and, again, the center of the main peak was determined. The correction factor is calculated by dividing the corresponding position of the peak in ADC channels by the average over all 1024 sectors. Figure \[mapa\] shows the main peak position relative to the average of the 1024 sectors. The gain variations have a standard deviation of 7% and could be as high as 20%. ![Left: the amplitude of the signals across the detector normalized to the average. Right: the amplitudes show a standard deviation of 7% throughout the sensitive area of the detector.[]{data-label="mapa"}](mapa.png "fig:"){width="48.00000%"} ![Left: the amplitude of the signals across the detector normalized to the average. Right: the amplitudes show a standard deviation of 7% throughout the sensitive area of the detector.[]{data-label="mapa"}](HistCorrection.png "fig:"){width="48.00000%"} The spectra before each step of correction and the final one can be seen in figure \[X\]. As we can see, the energy resolution improves significantly with each correction, reaching in the end 6.8% ($\sigma$), after fitting the two Gaussian curves corresponding to the K$_\alpha$ and K$_\beta$ lines of manganese. The argon escape peak can also be seen at around 2.9keV in the energy distribution.
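The two offline corrections described above — the time-slice normalization and the per-sector gain flattening — can be outlined as follows. This is an illustrative Python sketch under simplifying assumptions: a real analysis fits the $^{55}$Fe main peak in each slice or sector, while here the median of the amplitudes stands in for the fitted peak position.

```python
from statistics import median

def correct_temporal_drift(amplitudes, slice_size=100000):
    """Cut the event stream into time slices, estimate the main-peak
    position in each slice (median as a crude stand-in for a peak fit)
    and rescale every slice to a common global reference."""
    reference = median(amplitudes)            # global peak-position estimate
    corrected = []
    for start in range(0, len(amplitudes), slice_size):
        chunk = amplitudes[start:start + slice_size]
        peak = median(chunk)                  # per-slice peak-position estimate
        corrected.extend(a * reference / peak for a in chunk)
    return corrected

def sector_correction_factors(peak_by_sector):
    """Per-sector gain factors: each sector's main-peak position divided
    by the average over all sectors (1024 in the paper); amplitudes in a
    sector are then divided by its factor."""
    mean_peak = sum(peak_by_sector) / len(peak_by_sector)
    return [p / mean_peak for p in peak_by_sector]
```

After both steps every slice and every sector shares the same peak position, which is what restores the energy resolution of the summed spectrum.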
This peak is related to argon fluorescence X-rays from the K shell that escape the detector, a well-known effect (see [@escape] for example). The presence of this escape peak is inevitable in detectors using argon mixtures and will be discussed further ahead. ![Left: Green — The raw energy spectrum; Blue — after temporal correction; Red — both temporal and spatial corrections applied. Right: The energy resolution obtained after the corrections is 6.8% ($\sigma$).[]{data-label="X"}](EnergySpec.png "fig:"){width="48.00000%"} ![Left: Green — The raw energy spectrum; Blue — after temporal correction; Red — both temporal and spatial corrections applied. Right: The energy resolution obtained after the corrections is 6.8% ($\sigma$).[]{data-label="X"}](energyResolution.png "fig:"){width="48.00000%"} Position resolution ------------------- The imaging capability of the detector was characterized using X-ray transmission through different masks, which allowed its performance to be quantified in terms of resolution and contrast as a function of the spatial frequency. One of the most reliable methods for estimating the performance of an imaging system is based on the image of a sharp edge. The edge intensity profile is a step function, whose slope is a measure of the position resolution. The width of its derivative quantifies this slope and, by applying a Fourier transform, the contrast of the imaging system as a function of the spatial frequencies, i.e., the Modulation Transfer Function (MTF), can be estimated. Figure \[fig:MTF\] shows the process of obtaining the MTF for this detector, when the full energy spectrum from a silver-target X-ray source was used. The distances in the image were calibrated using the known distance between the two edges. ![The Modulation Transfer Function obtained by imaging a sharp edge. Left: a sharp edge profile in the image is the Edge Spread Function (ESF).
The ESF is differentiated, resulting in the Line Spread Function (LSF). Right: The Fast Fourier Transform of the LSF results in the Modulation Transfer Function (MTF). The red curve is not a fit; it is drawn to guide the eye. The position resolution is usually taken from the MTF at 10% (marked in the plot), which is around 1.8mm ($\frac{1}{0.56}$lp/mm), consistent with the width of the LSF.[]{data-label="fig:MTF"}](ESF.png "fig:"){width="32.00000%"} ![The Modulation Transfer Function obtained by imaging a sharp edge. Left: a sharp edge profile in the image is the Edge Spread Function (ESF). The ESF is differentiated, resulting in the Line Spread Function (LSF). Right: The Fast Fourier Transform of the LSF results in the Modulation Transfer Function (MTF). The red curve is not a fit; it is drawn to guide the eye. The position resolution is usually taken from the MTF at 10% (marked in the plot), which is around 1.8mm ($\frac{1}{0.56}$lp/mm), consistent with the width of the LSF.[]{data-label="fig:MTF"}](LSF.png "fig:"){width="32.00000%"} ![The Modulation Transfer Function obtained by imaging a sharp edge. Left: a sharp edge profile in the image is the Edge Spread Function (ESF). The ESF is differentiated, resulting in the Line Spread Function (LSF). Right: The Fast Fourier Transform of the LSF results in the Modulation Transfer Function (MTF). The red curve is not a fit; it is drawn to guide the eye. The position resolution is usually taken from the MTF at 10% (marked in the plot), which is around 1.8mm ($\frac{1}{0.56}$lp/mm), consistent with the width of the LSF.[]{data-label="fig:MTF"}](MTF.png "fig:"){width="32.00000%"} One of the features of this type of detector is the dependence of the spatial resolution on the X-ray energy. Figure \[fig:range\] shows that dependence for this specific detector. It is directly related to the range of the photoelectrons, which in argon, above the K-absorption edge at 3keV, increases monotonically with energy.
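The edge method described in figure \[fig:MTF\] (ESF → LSF → MTF) can be sketched in a few lines. This is an illustrative pure-Python version using a naive discrete Fourier transform on a toy profile, not the FFT-based analysis of the paper.

```python
import math

def mtf_from_edge(esf):
    """Differentiate the edge-spread function (ESF) to get the line-spread
    function (LSF), then take the magnitude of its one-sided Fourier
    transform, normalized to unity at zero frequency, as the MTF."""
    lsf = [b - a for a, b in zip(esf, esf[1:])]   # numerical derivative
    n = len(lsf)
    mtf = []
    for k in range(n // 2 + 1):                   # one-sided spectrum
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        mtf.append(math.hypot(re, im))
    return [m / mtf[0] for m in mtf]              # normalize so MTF(0) = 1

# A perfectly sharp edge gives a delta-like LSF and a flat MTF (all ≈ 1.0):
print(mtf_from_edge([0, 0, 0, 1, 1, 1]))
```

In the real measurement, the frequency axis is calibrated from the pixel pitch, and the resolution is read off where the MTF drops to 10%.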
Since the data was collected event-by-event, it was possible to reconstruct images using only events within a given energy range and, with this, to study the dependence of the position resolution on the energy. For lower energies the position resolution also worsens, due to the smaller signal-to-noise ratio. This has been studied for this detector and reported in [@EXRS]. The best result in terms of position resolution is 1.2mm, achieved for the 8–9keV range. ![The position resolution of the detector as a function of the X-ray energy (red line). The position resolution worsens for higher energies due to the longer range of the photoelectrons. For the lower energies, its deterioration is related to the poorer signal-to-noise ratio. The blue line indicates the resolution limit for pure argon as simulated in [@Aze15].[]{data-label="fig:range"}](resolution.png){width="48.00000%"} To address the image distortions caused by the resistive chains, due either to resistor inaccuracies or to imperfections in the mounting of the components (not only the resistors, but also the 128-pin connector), a study of the differential non-linearity was carried out. The differential non-linearity is defined here, in analogy with analog-to-digital converters, as the deviation of the distances measured by the detector from their correct values. For this, a 1mm thick steel plate perforated with a square matrix of 1mm holes at a pitch of 10mm was placed on the detector window and imaged with the X-ray source at a reasonable distance to keep the magnification negligible. Figure \[fig:DNL\] shows the imaged plate on the left, with the real position of the holes marked with black dots. A histogram of the distance between the pairs of holes in the image, normalized by the real one (10mm), is shown on the right side of fig. \[fig:DNL\] for the $x$ and $y$ directions.
The DNL is given by the standard deviations of the distributions and is 3.4% and 6.2% for $x$ and $y$, respectively. ![Left: the image of a square matrix of 1mm holes with a pitch of 10mm. Right: Histogram of the distance between each pair of holes as measured by the detector, normalized by the real distance of 10mm. The standard deviation of these distributions is the differential non-linearity, which is 3.4% and 6.2% for $x$ and $y$, respectively.[]{data-label="fig:DNL"}](points.png "fig:"){height="5cm"} ![Left: the image of a square matrix of 1mm holes with a pitch of 10mm. Right: Histogram of the distance between each pair of holes as measured by the detector, normalized by the real distance of 10mm. The standard deviation of these distributions is the differential non-linearity, which is 3.4% and 6.2% for $x$ and $y$, respectively.[]{data-label="fig:DNL"}](DNL.png "fig:"){height="5cm"} X-Ray fluorescence imaging -------------------------- The X-ray fluorescence imaging capability of the system was tested by irradiating a set of four different ink pigments: chrome yellow, cadmium yellow, cerulean blue and cobalt blue. The pigments may also contain zinc white ink in their composition [@pigments]. Figure \[fig:pigs\] shows the pigments used. The sample was irradiated for 4 hours. The reconstructed total energy distribution after the corrections described in subsection \[sec:res\].\[sec:corr\] and the elemental map of the sample are shown in figure \[fig:XRF\]. The color scale in the image does not reflect the X-ray intensity, but the average energy of the spectrum in each pixel, as shown by the RGB rainbow (built from the red, green and blue palettes) in the energy spectrum. The X-ray intensity is given by the brightness. To match the energies with the color scale, each ADC value is weighted by the corresponding amount of each of the three primary colors, as figure \[fig:EscalaCores\] shows.
Based on this weight, three different images — one for each primary color — are reconstructed. After a trivial leveling of the colors for white balancing, the images are merged, resulting in an image that carries information on both the intensity and the energy of the X-rays. This provides an automatic color separation of the different elements in the sample. ![Top: The energy spectrum for the total area. Bottom: XRF image generated for the four pigments. The colors indicate the mean energy deposited in each pixel.[]{data-label="fig:XRF"}](foto.png "fig:"){width="45.00000%"}\ ![Top: The energy spectrum for the total area. Bottom: XRF image generated for the four pigments. The colors indicate the mean energy deposited in each pixel.[]{data-label="fig:XRF"}](totalfinal "fig:"){width="50.00000%"}![Top: The energy spectrum for the total area. Bottom: XRF image generated for the four pigments. The colors indicate the mean energy deposited in each pixel.[]{data-label="fig:XRF"}](energy "fig:"){width="45.00000%"} ![Left: the colors at the limits of the RGB spectrum are adjusted to the maximum and minimum energies of the X-ray energy spectrum. Right: the intensity at each X-ray energy is given by the brightness: darker for lower intensities and brighter for higher intensities.[]{data-label="fig:EscalaCores"}](EscalaCores.png){width="50.00000%"} From the energy spectra of the four regions corresponding to the four pigments, it is possible to extract information on the elements present in each one. The energy distributions are shown in figure \[fig:spectra\], where the main peaks are identified. As mentioned before, one feature of this type of detector, using argon-based mixtures, is the presence of an argon escape peak associated with every major peak, appearing at an energy 3keV lower. The interpretation of the energy spectra must take this into account.
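The energy-to-color weighting behind figure \[fig:EscalaCores\] can be sketched as follows — an illustrative Python version with simple piecewise-linear RGB weights, which are an assumption of this example and not the exact palette used in the paper.

```python
def energy_to_rgb(adc, adc_min, adc_max):
    """Map an ADC (energy) value linearly onto a blue-to-red rainbow.

    Each value receives a weight for each primary color; the per-pixel
    brightness (not modeled here) then encodes the X-ray intensity.
    """
    t = (adc - adc_min) / (adc_max - adc_min)
    t = max(0.0, min(1.0, t))                # clamp to the calibrated range
    r = max(0.0, 2.0 * t - 1.0)              # red grows in the upper half
    g = 1.0 - abs(2.0 * t - 1.0)             # green peaks mid-scale
    b = max(0.0, 1.0 - 2.0 * t)              # blue dominates the lower half
    return r, g, b

print(energy_to_rgb(0, 0, 1024))     # lowest energy  → pure blue (0.0, 0.0, 1.0)
print(energy_to_rgb(1024, 0, 1024))  # highest energy → pure red  (1.0, 0.0, 0.0)
```

Summing each event into three per-color histograms with these weights, then merging them after white balancing, yields the combined intensity-plus-energy image described in the text.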
Some of the argon escape peaks are identified in the spectra of figure \[fig:spectra\] with a red scale, showing how they can overlap other peaks or the background, making the identification of elements somewhat more complex. The differences between the two spectra of the yellow pigments are very clear. While the cadmium yellow shows zinc and cadmium in its composition, the chrome yellow pigment contains copper and lead, besides chromium. The leftmost peak at around 3keV and the traces of zinc, evident from the small shoulder on the copper peak, suggest a small contamination of this pigment with cadmium yellow. In the blue pigments, it is interesting to note that both were composed of cobalt and zinc, but in clearly different concentrations. It is important to notice that this analysis is qualitative. In the case of the blue pigments, it is possible to conclude that the cerulean blue has a higher concentration of zinc when compared with the cobalt blue. However, the determination of the absolute concentration of each one of the elements must take into account the X-ray emission yields and the efficiency of the detector for each X-ray energy range. This efficiency drops for higher energies due to the small thickness of the absorption region, which is 8mm. For the lower energies, the kapton window and cathode, together with the distance the X-rays must travel in air before entering the detector, also limit the efficiency. The efficiency, calculated as the fraction of X-ray photons hitting the detector that are actually detected, is shown in figure \[fig:efficiency\]. This calculation is done using the absorption coefficients of the argon-based mixture and the transmission of the kapton and of the air layer, according to the geometry of the system, with the X-ray interaction tables from [@Hen93]. The detection efficiency peaks around 7keV and drops very sharply for lower energies.
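The efficiency estimate just described reduces to the Beer–Lambert law; a minimal Python sketch follows, where the linear attenuation coefficients (mass attenuation already multiplied by density, in 1/cm) would be taken from tabulated data such as [@Hen93] — the values used in the usage note are placeholders, not the paper's.

```python
import math

def detection_efficiency(mu_window, t_window_cm, mu_air, t_air_cm,
                         mu_gas, t_gas_cm):
    """Transmission through the kapton window and the air path, times the
    photon absorption probability in the drift gap (Beer-Lambert law)."""
    transmission = (math.exp(-mu_window * t_window_cm)
                    * math.exp(-mu_air * t_air_cm))
    absorption = 1.0 - math.exp(-mu_gas * t_gas_cm)
    return transmission * absorption
```

With negligible window and air attenuation and a strongly absorbing gas, `detection_efficiency(0.0, 0.0, 0.0, 0.0, 100.0, 0.8)` approaches 1; the interplay of the two factors produces the peaked curve of figure \[fig:efficiency\].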
The efficiency for lower energies can be increased by changing the materials of the window and the cathode. For higher energies, the drop in efficiency is less steep; at this end, the efficiency can be increased by increasing the depth of the drift region. ![Energy spectrum for each pigment. The dotted black lines show the expected elements for each sample. The dotted red lines mark an energy 3 keV below each peak, showing the expected position of the argon escape peaks. []{data-label="fig:spectra"}](energies.png){width="45.00000%"} ![The detection efficiency taking into account the transmission of X-rays through the two layers of kapton in the detector and 20 cm of air, and the absorption in the 8 mm of Ar/CO$_2$ in the drift region. The axis on the left refers to the transmission and absorption of the X-rays (red and blue curves) and the axis on the right refers to the detection efficiency (black curve).[]{data-label="fig:efficiency"}](efficiency.png){width="45.00000%"} Conclusion and outlook ====================== The position sensitive gaseous detector prototype for X-ray fluorescence imaging using a pinhole described in this work is capable of mapping elemental distributions in space with a position resolution close to 1 mm. Its large sensitive area () and simultaneous sensitivity to both energy and position make it suitable to study large distributions, such as those seen in paintings or other cultural and archaeological artifacts. The elemental distribution in space is reconstructed with a single data acquisition, without the need of scanning the detector or the X-ray source across the object. The gain in time and system simplicity is an advantage that can make this concept competitive, even with the disadvantages of limited energy and position resolutions.
The corrections applied to the energy distributions, to compensate for small drifts in the gain during the acquisition and for the spatial non-uniformities of the detector, kept the energy resolution at the same value as when only a small area of the detector is used. This gives very good prospects for scaling up the sensitive area, to obtain even larger images, eventually with geometries that take advantage of magnification, to image smaller objects with higher position resolution. The aim of this paper is to describe the performance of the detector; therefore, the practical example shown does not include a quantitative elemental analysis of the pigments. However, by applying corrections for the X-ray yield of each element, the self-absorption, and the detector and geometry efficiency as a function of the energy, it would be possible to estimate absolute concentrations in the different regions of the image. The electronic system is very simple, with only five electronic channels. This has the advantage of cost effectiveness when the energies are above 6 keV. Below this value, the signal-to-noise ratio becomes a problem and the loss of position resolution is inevitable. To solve this problem, the replacement of the acquisition system is foreseen. The new system will consist of discrete electronics, with 512 electronic channels read out via an ASIC. The elimination of the resistive chains will dramatically reduce the noise at a small cost, significantly improving the resolution of the images for low X-ray energies. Other major improvements are planned for the detector itself, related to the improvement of the efficiency for lower and higher energies: replacing the window by a thinner kapton foil and the cathode foil by a thin aluminized polypropylene foil, and increasing the thickness of the drift region to around 20 mm.
Finally, a new pinhole in a thin () gold foil will also be tested, to make sure that the improvements of this system in terms of position resolution are not limited by geometrical features or by contamination of the spectra with fluorescences that do not occur in the sample. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by grants 2016/05282-2 and 2017/00426-9 from Fundação de Amparo à Pesquisa do Estado de São Paulo, Brasil.
--- abstract: 'Campisi, Zhan, Talkner and Hänggi have recently proposed [@Campisi] the use of the logarithmic oscillator as an ideal Hamiltonian thermostat, both in simulations and actual experiments. However, the system exhibits several theoretical drawbacks which must be addressed if this thermostat is to be implemented effectively.' --- **On the logarithmic oscillator as a thermostat** Marc Meléndez\ *Dpto. Física Fundamental, Universidad Nacional de Educación a Distancia,*\ *Madrid, Spain*. The logarithmic oscillator ========================== A logarithmic oscillator is a point mass $m$ in a central logarithmic potential. The Hamiltonian for such a particle is$$H_{osc.}\left(\boldsymbol{q},\,\boldsymbol{p}\right)=\frac{\boldsymbol{p}^{2}}{2m}+k_{B}T\,\ln\left(\frac{\left\Vert \boldsymbol{q}\right\Vert }{b}\right)=E,\label{eq:Hamiltonian-osc}$$ where $k_{B}T$ and $b$ can be considered arbitrary parameters for the time being. The Hamiltonian equations of motion are therefore$$\left\{ \begin{array}{ccc} \dot{q}_{i}=\frac{\partial H}{\partial p_{i}} & = & \frac{p_{i}}{m},\\ \dot{p}_{i}=-\frac{\partial H}{\partial q_{i}} & = & -k_{B}T\frac{q_{i}}{\boldsymbol{q}^{2}}.\end{array}\right.\label{eq:Eq_motion-osc}$$ This mechanical system has several interesting properties. In the one-dimensional version of the oscillator, it is particularly easy to solve the equations of motion by direct integration (we will disregard the singularity in the potential for the moment).
From the Hamiltonian, we get the value of the momentum,$$p=\sqrt{2m\left(E-k_{B}T\,\ln\left(\frac{q}{b}\right)\right)},$$ and using the first of Hamilton’s equations of motion,$$\dot{q}=\sqrt{\frac{2}{m}\left(E-k_{B}T\,\ln\left(\frac{q}{b}\right)\right)},$$ we get a differential equation which can be solved by separation of variables,$$t=\sqrt{\frac{m}{2}}\int\frac{dq}{\sqrt{E-k_{B}T\,\ln\left(\frac{q}{b}\right)}}.\label{eq:t(q)}$$ Now, the amplitude of the oscillation is determined by the turning points $q_{\alpha}$ that satisfy the following equation:$$k_{B}T\,\ln\left(\frac{q_{\alpha}}{b}\right)=E,$$ that is,$$\begin{aligned} q_{A} & = & -be^{\beta E},\\ q_{B} & = & be^{\beta E},\end{aligned}$$ where $\beta$ represents $\left(k_{B}T\right)^{-1}$. The period of oscillation is just twice the time taken by the particle to go from $q_{A}$ to $q_{B}$,$$2t_{AB}=\sqrt{2m}\int_{q_{A}}^{q_{B}}\frac{dq}{\sqrt{E-k_{B}T\,\ln\left(\frac{\left|q\right|}{b}\right)}}.\label{eq:2tAB}$$ The integrand is an even function of $q$, so$$\begin{aligned} 2t_{AB} & = & \sqrt{8m}\int_{0}^{q_{B}}\frac{dq}{\sqrt{E-k_{B}T\,\ln\left(\frac{q}{b}\right)}}\\ & = & \sqrt{\frac{8\pi m}{k_{B}T}}be^{\beta E}.\end{aligned}$$ In the more general case, the motion of the particle lies in a plane. If it moves in circular orbits around the singularity with a radius $r$, then its velocity can be deduced from the fact that the central and centrifugal forces must balance,$$F=\frac{k_{B}T}{r}=m\frac{v^{2}}{r}.$$ Therefore, the speed$$v=\sqrt{\frac{k_{B}T}{m}}\label{eq:v-circular-orbits}$$ does not depend on the radius of the orbit.
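The closed form for $2t_{AB}$ is easy to check numerically. The following is a minimal sketch, not part of the original analysis, with $m=k_{B}T=b=1$ assumed; the midpoint rule is used because the integrand has an integrable singularity at $q=q_{B}$.

```python
import math

def period(E, m=1.0, kT=1.0, b=1.0, n=400_000):
    """Evaluate 2*t_AB = sqrt(8m) * int_0^{qB} dq / sqrt(E - kT*ln(q/b))
    with the midpoint rule (integrable singularity at q = qB)."""
    qB = b * math.exp(E / kT)
    h = qB / n
    total = 0.0
    for i in range(n):
        q = (i + 0.5) * h          # midpoint of panel i, never hits 0 or qB
        total += h / math.sqrt(E - kT * math.log(q / b))
    return math.sqrt(8.0 * m) * total

def period_closed(E, m=1.0, kT=1.0, b=1.0):
    """Closed form derived in the text: sqrt(8*pi*m/kT) * b * exp(E/kT)."""
    return math.sqrt(8.0 * math.pi * m / kT) * b * math.exp(E / kT)
```

For moderate energies the two agree to well below one percent, confirming the exponential growth of the period with $E$.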
The radius of the orbit is a function of the total energy $E$: inserting the orbital speed into the Hamiltonian, setting $q$ equal to $r$ and then solving for $r$ gets us$$r=\frac{be^{\beta E}}{\sqrt{e}}.$$ Therefore, the time it takes the particle to complete an orbit is$$t_{orb.}=\frac{2\pi r}{v}=2\pi\sqrt{\frac{m}{ek_{B}T}}be^{\beta E}.$$ For arbitrary initial conditions, the trajectory followed by the oscillator will not usually be a closed path, but the particle will never move further out than$$r_{max.}=be^{\beta E},$$ for a given energy $E$, and the time between two consecutive maximum distances will be somewhere between $2t_{AB}$ and $t_{orb.}$ (note that both times are of the same order of magnitude),$$\frac{2t_{AB}}{t_{orb.}}=\sqrt{\frac{2e}{\pi}}.\label{eq:magnitude-tper}$$ Statistical properties ====================== The fact that the speed on a circular orbit does not depend on the radius is quite surprising. It implies that, if an external perturbation were to relocate the oscillator on a new circular orbit, the kinetic energy would remain the same and all the energy absorbed would be completely converted into potential energy. In a sense, this result can be generalised to the oscillator’s other trajectories. Let us define the virial $G$ as$$G=pr,\label{eq:virial-definition}$$ and calculate its time derivative using the equations of motion,$$\frac{dG}{dt}=p\dot{r}+\dot{p}r=2\left(\frac{p^{2}}{2m}\right)-k_{B}T.$$ The time average of the previous formula is$$\left\langle \frac{dG}{dt}\right\rangle _{t}=2\left\langle \frac{p^{2}}{2m}\right\rangle _{t}-k_{B}T,$$ and if $\left\langle dG/dt\right\rangle _{t}=0$, then the average kinetic energy must be$$\left\langle \frac{p^{2}}{2m}\right\rangle _{t}=\frac{1}{2}k_{B}T,\label{eq:average-kinetic-energy}$$ *whatever the value of* $E$! This means that the logarithmic oscillator can absorb an arbitrary amount of energy without changing its temperature at all, behaving (in a way) like an ideal thermostat.
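The virial argument can be checked by direct integration of the planar equations of motion. The sketch below is not from the paper: it assumes $m=k_{B}T=b=1$, a non-circular initial condition, and a simple leapfrog scheme; the time-averaged kinetic energy should settle near $k_{B}T/2$ regardless of the total energy.

```python
import math

def mean_kinetic_energy(steps=300_000, dt=2e-3, m=1.0, kT=1.0):
    """Leapfrog integration of the planar logarithmic oscillator,
    F = -kT * q / |q|^2; returns the time-averaged kinetic energy."""
    qx, qy = 1.0, 0.0                        # start at r = 1 (with b = 1)
    vx, vy = 0.0, 1.3 * math.sqrt(kT / m)    # tangential but non-circular
    def force(x, y):
        r2 = x * x + y * y
        return -kT * x / r2, -kT * y / r2
    fx, fy = force(qx, qy)
    ke_sum = 0.0
    for _ in range(steps):
        vx += 0.5 * dt * fx / m; vy += 0.5 * dt * fy / m   # half kick
        qx += dt * vx; qy += dt * vy                       # drift
        fx, fy = force(qx, qy)
        vx += 0.5 * dt * fx / m; vy += 0.5 * dt * fy / m   # half kick
        ke_sum += 0.5 * m * (vx * vx + vy * vy)
    return ke_sum / steps
```

With these initial conditions the orbit stays safely away from the singularity, and the running average converges to $k_{B}T/2$ at the slow $G(t)/t$ rate discussed next.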
Is it true, then, that $\left\langle dG/dt\right\rangle _{t}=0$? It certainly is, as$$\begin{aligned} \left\langle \frac{dG}{dt}\right\rangle _{t} & = & \lim_{t\rightarrow\infty}\frac{1}{t}\int_{0}^{t}\frac{dG}{d\tau}d\tau\label{eq:dGdt-is-zero}\\ & = & \lim_{t\rightarrow\infty}\frac{G\left(t\right)-G\left(0\right)}{t}=0,\nonumber \end{aligned}$$ because $G$ has upper and lower bounds, as one can see by noting that $G$ is a continuous function, except at the origin. Given that$$\begin{aligned} \lim_{r\rightarrow0}G\left(r\right) & = & 0,\\ G\left(r_{max.}\right) & = & 0,\end{aligned}$$ we can infer that $G\left(r\right)$ has upper and lower bounds in the interval $\left(0,\, r_{max}\right]$, and the limit above is indeed zero. However, we must not forget that there is a limiting process involved, and hence it might take a very long time for the average kinetic energy to converge to $k_{B}T/2$. In fact, we will argue that this is generally the case, and that the logarithmic oscillator is therefore a somewhat less-than-ideal thermostat. A recent article in the ar$\chi$iv [@Campisi] argued that weak coupling between a system of interest and a logarithmic oscillator will result in canonical sampling of the former’s phase space. The dynamics of the compound system would then be determined by a total Hamiltonian$$\begin{aligned} H\left(\boldsymbol{q},\,\boldsymbol{p},\, r,\, p_{r}\right) & = & H_{S}\left(\boldsymbol{q},\,\boldsymbol{p}\right)+H_{osc.}\left(r,\, p_{r}\right)\\ & & +H_{int.}\left(\boldsymbol{q},\,\boldsymbol{p},\, r,\, p_{r}\right)=E,\end{aligned}$$ where $H_{S}\left(\boldsymbol{q},\,\boldsymbol{p}\right)$ is the Hamiltonian for the system of interest, $H_{osc.}\left(r,\, p_{r}\right)$ is the one-dimensional version of the logarithmic oscillator, and $H_{int.}$ is the potential energy of the weak interaction between the system and the oscillator, which we will assume is negligible compared to $H_{S}$ and $H_{osc.}$.
The density of states for the logarithmic oscillator is$$\begin{aligned} \Omega_{osc.}\left(E_{osc.}\right) & = & \int\delta\left(H_{osc.}\left(r,\, p_{r}\right)-E_{osc.}\right)\, dp_{r}\, dr,\end{aligned}$$ with $\delta$ representing the Dirac delta function. The integral turns out to be exactly the same as the one that gives the period of oscillation, so$$\Omega_{osc.}\left(E_{osc.}\right)=\sqrt{\frac{8\pi m}{k_{B}T}}be^{\beta E_{osc.}}.\label{eq:Omega(Eosc)}$$ Furthermore, the probability density $\rho$ for a point in the phase space of the system corresponding to $H_{S}$ is$$\rho\left(\boldsymbol{q},\,\boldsymbol{p}\right)=\frac{\Omega_{osc.}\left(E-H_{S}\left(\boldsymbol{q},\,\boldsymbol{p}\right)\right)}{\Omega\left(E\right)}.\label{eq:rho(q,p)}$$ The function $\Omega\left(E\right)$ represents the density of states of the compound system,$$\Omega\left(E\right)=\int\delta\left(E-H\left(\boldsymbol{q},\,\boldsymbol{p},\, r,\, p_{r}\right)\right)\, dr\, dp_{r}\, d\boldsymbol{q}\, d\boldsymbol{p}.\label{eq:Omega(E)}$$ The last two expressions can be combined, the exponential prefactors cancelling between numerator and denominator, to convert the probability density into$$\rho\left(\boldsymbol{q},\,\boldsymbol{p}\right)=\frac{e^{-\beta H_{S}\left(\boldsymbol{q},\,\boldsymbol{p}\right)}}{\int e^{-\beta H_{S}\left(\boldsymbol{q},\,\boldsymbol{p}\right)}d\boldsymbol{q}\, d\boldsymbol{p}},$$ which is precisely the canonical distribution for $H_{S}$. According to the authors of [@Campisi], the logarithmic oscillator thermostat has two obvious advantages. Firstly, contrary to the popular Nosé-Hoover thermostat, the dynamical equations of motion are Hamiltonian. Secondly, it is possible to design experimental setups in which the thermostat is an actual *physical* system. Hoover wrote a reply [@Hoover] to the first claim, arguing that Nosé-Hoover mechanics *are* in fact Hamiltonian, and included an example of an alternative Hamiltonian thermostat of the Nosé-Hoover type. Campisi *et alii* answered, explaining their claim further, in [@Campisi2].
Here we will consider the second claim instead; that is, we will concentrate on the implementation of the logarithmic oscillator as a thermostat, both in experiments and simulations. Experiments =========== An experimental thermostat that relies on the dynamics of only a few degrees of freedom is no doubt a very interesting system. However, the nature of the logarithmic oscillator imposes some serious limitations which must be taken into account before one attempts to design such an experiment. The first problem is a consequence of the length scales involved. Assume that we wish to bring a system with $N$ degrees of freedom to the equilibrium temperature $T$. If the kinetic energy per degree of freedom is initially off by a fraction $\alpha$,$$\left\langle \frac{p_{i}^{2}}{2m}\right\rangle _{t}=\left(1+\alpha\right)\frac{1}{2}k_{B}T,$$ then the logarithmic oscillator will have to absorb at least an amount of energy equal to $\Delta E=N\alpha k_{B}T/2$. We have seen that the oscillator typically covers distances of the order of $b\,\exp\left\{ \beta E_{osc.}\right\} $. The change in energy implies that the distances covered will change by$$\Delta r_{max.}=r_{max.}\left(e^{\beta\Delta E}-1\right).\label{eq:delta-rmax}$$ This can be problematic if $r_{max.}$ is initially comparable to the size of the experimental apparatus and the oscillator is cooling the system. The enormous changes in lengths imply similar changes in time scales. Having assumed a weak interaction between the system of interest and the oscillator, the effect of the interaction on the latter during one period of oscillation should not be significant. The period is$$t_{per.}=\lambda\sqrt{\frac{m}{k_{B}T}}be^{\beta E_{osc.}},$$ where $\lambda$ is a factor that depends on the trajectory, but which is of the order of magnitude of $\sqrt{8\pi}$, in agreement with the ratio $2t_{AB}/t_{orb.}$ computed above.
The change in distances carries with it a corresponding change in periods of oscillation,$$\Delta t_{per.}=t_{per.}\left(e^{\beta\Delta E}-1\right).\label{eq:delta-tper}$$ Therefore, when the oscillator is cooling down the system of interest, it will usually move very far out and oscillate very slowly. On the other hand, when it is “hotter” than the system, it will squeeze into a small neighborhood of the singularity and vibrate very quickly. Let us illustrate the problem with some numbers. The authors of [@Campisi] propose an experiment in which a small system composed of neutral atoms is contained in a box of length $L$. The logarithmic oscillator is an ion in a two-dimensional Coulomb field generated by a charged wire. Assume, for example, that we have a dilute gas of $10$ atoms of argon at an initial temperature $T_{0}=3\,\unit{K}$ and that we wish to bring them to $T=1\,\unit{K}$. This means that the logarithmic oscillator must absorb about$$\Delta E=\frac{3}{2}Nk_{B}T_{0}-\frac{3}{2}Nk_{B}T=30\, k_{B}T\label{eq:Delta-E}$$ units of energy. Let us assume further that the cross section of the charged wire has a radius equal to $10^{-3}\, L$. Then the logarithmic oscillator must move in orbits with$$r_{max.}>10^{-3}\, L.$$ However, when we insert this energy change into the expression for $\Delta r_{max.}$ we find that$$\Delta r_{max.}=r_{max.}\left(e^{30}-1\right)>10^{10}\, L.$$ If we also take the corresponding change in the period into account, it is easy to see that we should expect to find the oscillator outside the box most of the time. Simulations =========== The wide range of time and length scales affects the precision and computation time of numerical simulations as well, but the presence of a singularity in the logarithmic potential introduces another complication in the numerical implementation of the oscillator, as stepping over the singularity will usually lead to the wrong energy $E_{osc.}$. When the particle is in the vicinity of the singularity, the slope $\partial H/\partial r$ changes very quickly.
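By contrast, the softened potential discussed below behaves well numerically. The following minimal sketch (illustrative parameters, $m=k_{B}T=1$ and $b=0.1$, not taken from [@Campisi]) checks energy conservation of a plain leapfrog scheme for the regularised force $F=-k_{B}T\,q/(q^{2}+b^{2})$:

```python
import math

def simulate_soft_log(q0=1.0, v0=0.0, steps=100_000, dt=1e-3,
                      m=1.0, kT=1.0, b=0.1):
    """Leapfrog for V(q) = (kT/2) * ln((q^2 + b^2)/b^2);
    returns (initial energy, final energy)."""
    def potential(q):
        return 0.5 * kT * math.log((q * q + b * b) / (b * b))
    def force(q):
        return -kT * q / (q * q + b * b)   # -dV/dq, finite at q = 0
    q, v = q0, v0
    e0 = 0.5 * m * v * v + potential(q)
    f = force(q)
    for _ in range(steps):
        v += 0.5 * dt * f / m
        q += dt * v
        f = force(q)
        v += 0.5 * dt * f / m
    return e0, 0.5 * m * v * v + potential(q)
```

With the bare logarithmic force $-k_{B}T/q$, the same integrator produces large energy errors whenever a step lands close to the origin; with the softened force the energy drift stays negligible over many oscillations.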
If the oscillator ends up too close to the singularity, it will feel a very large force which will push it away from the singularity during the next time step, making it skip the region in which the potential would slow it down again, unless a very small time step is chosen. For the one-dimensional version of the logarithmic oscillator, the problem can be solved by calculating the new position of the logarithmic oscillator first. If the oscillator has stepped over the singularity, then the quadrature for $t(q)$ obtained above can be used to calculate the time it would have taken to get to the new position, and one can reset its kinetic energy to the correct value and calculate the evolution of the system of interest during that time. This solution is far from satisfactory, though, because it involves finding numerical values of the error function every time the particle passes the singularity. A different approach [@Campisi] replaces the logarithmic potential with the approximate potential$$V\left(r\right)=\frac{1}{2}k_{B}T\,\ln\left(\frac{r^{2}+b^{2}}{b^{2}}\right),$$ thereby eliminating the singularity and introducing only a slight correction in the density of states for low values of $E_{osc.}$. Unfortunately, this imposes a limit on the amount of energy available for exchange between the oscillator and the system. If the system and oscillator are enclosed in a box of length $L$, one only has about $k_{B}T\,\ln\left(L/b\right)$ units of energy to play with. In order to allow for larger energy ranges, one must choose smaller values of $b$ (of the order of $\exp\left\{ -2\alpha3N\right\} $ if we wish to allow the energy to fluctuate by a fraction $\alpha$ either way), and this will tend to generate a small neighbourhood of $r=0$ in which the forces on the oscillator are huge. Conclusions =========== The logarithmic oscillator proposed by Campisi, Zhan, Talkner and Hänggi displays very interesting properties from the point of view of theoretical statistical mechanics.
However, before it can be used as a thermostat in actual experiments and numerical simulations, three problems must be addressed. Firstly, the distances covered by the oscillator depend exponentially on its energy. Given that it must not interact strongly with container walls or other objects, one would expect that it would be very difficult to control such a system in practice. Secondly, the vast increase in the period of oscillation when a system is being cooled down suggests that the desired thermostated dynamics will be achieved very slowly. Lastly, the presence of a singularity introduces some technical complications in the numerical implementation of the dynamical behaviour of the oscillator. It seems, therefore, that Nosé-Hoover dynamics will remain a popular option in molecular dynamics at least until the problems mentioned here are resolved satisfactorily. Acknowledgments {#acknowledgments .unnumbered} ============== The author would like to express his gratitude to Pep Español for his helpful comments. [3]{} [<span style="font-variant:small-caps;">M. Campisi</span>]{}, [<span style="font-variant:small-caps;">F. Zhan</span>]{}, [<span style="font-variant:small-caps;">P. Talkner</span>]{}, and [<span style="font-variant:small-caps;">P. Hänggi</span>]{}, *Logarithmic Oscillators: Ideal Hamiltonian Thermostats*, ar$\chi$iv 1203.5968v3 (2012) [\[]{}cond-mat.stat-mech[\]]{}. [<span style="font-variant:small-caps;">Wm. G. Hoover</span>]{}, *Another Hamiltonian Thermostat – Comment on ar$\chi$iv 1203.5968 and 1204.0312*, ar$\chi$iv 1204.0312v3 (2012) [\[]{}cond-mat.stat-mech[\]]{}. [<span style="font-variant:small-caps;">M. Campisi</span>]{}, [<span style="font-variant:small-caps;">F. Zhan</span>]{}, [<span style="font-variant:small-caps;">P. Talkner</span>]{}, and [<span style="font-variant:small-caps;">P. Hänggi</span>]{}, *Reply to Hoover [\[]{}arXiv:1204.0312v2[\]]{}*, ar$\chi$iv:1204.4412v1 (2012) [\[]{}cond-mat.stat-mech[\]]{}.
--- abstract: 'Recent results on jet physics at the Fermilab Tevatron $p\bar p$ collider from the CDF Collaboration are presented. The main focus is put on results for the inclusive jet and dijet, $b\bar b$ dijet, $W/Z+$jets and $W/Z+b$-jets production.' address: 'The Rockefeller University, 1230 York Avenue, New York, NY, U.S.A.' author: - | Kenichi Hatakeyama[^1]\ for the CDF Collaboration title: Jet Physics at CDF --- Inclusive and dijet production ============================== The differential inclusive jet cross section and dijet cross section at the Tevatron test QCD at the shortest distances currently attainable in accelerator experiments. The measurements provide a fundamental test of QCD and a constraint on the parton distribution functions (PDFs) of the proton. The dijet mass spectrum is also sensitive to the presence of new particles that decay into two jets. ![[*(left)*]{} Measured inclusive jet differential cross sections in five rapidity regions compared to NLO pQCD predictions; [*(right)*]{} ratios of the measured cross sections over the NLO pQCD predictions.\[fig:incjet\]](plots/sigma_relcor.eps){width="0.495\hsize"} CDF has made inclusive jet cross section measurements using the $k_T$ algorithm and the Midpoint cone clustering algorithm. The recent measurements using these algorithms are based on 1.0 and 1.13 fb${}^{-1}$ of data, respectively, and cover the rapidity region up to $|y_{jet}|=2.1$, much wider than the $0.1<|y_{jet}|<0.7$ of the previous measurements. The $k_T$ measurement was published in [@CDFIncJetKt] and the results from the Midpoint measurement are shown in Fig. \[fig:incjet\]. The measured cross sections are in agreement with next-to-leading order (NLO) perturbative QCD (pQCD) predictions based on the CTEQ6.1M PDFs. The measurements in the forward region show that the experimental uncertainties are somewhat smaller than the PDF uncertainties, and this measurement is expected to further constrain the PDFs.
The dijet mass differential cross section was measured using 1.13 $\mbox{fb}^{-1}$ of data with the Midpoint algorithm. The measured dijet mass spectrum was found to be in good agreement with NLO pQCD predictions within the uncertainties. Limits on the cross sections for new particles decaying into two jets have been derived from this measurement. $b\bar b$ dijet production ========================== The $b\bar b$ dijet production cross section has been measured using 260 pb$^{-1}$ of data collected by triggering on two displaced tracks and two jets. $b\bar b$ dijet events are selected by requiring two jets with $|\eta_{1,2}|<1.2$, $E_{T,1}>35$ GeV and $E_{T,2}>32$ GeV, which are tagged as $b$-jets by a secondary vertex algorithm. The measured $b\bar b$ differential cross section is compared, as a function of $\Delta\phi$, to Pythia Tune A [@tuneA], Herwig with Jimmy [@Jimmy], and MC@NLO [@MCNLO] with and without Jimmy. Tune A refers to a special set of Pythia parameters tuned to give a reasonable description of the underlying event (UE), and Jimmy is a program which can be used with Herwig and MC@NLO to add multiple parton interactions to events to improve the description of the UE. MC@NLO+Jimmy provides the best description of the data, indicating the importance of both the next-to-leading order contribution and the UE effect. $W/Z$+jets and $W/Z+b$-jets production ====================================== The $W/Z$+jets and $W/Z+b$-jets production has been studied intensively at CDF. The studies of these processes provide important tests of pQCD predictions at high momentum transfers. Final states containing $W/Z$ and ($b$-)jets are signal channels for many interesting processes such as $t\bar t$ or single top production, as well as searches for the Higgs boson in the $W/Z+H\to W/Z+b\bar b$ channel and physics beyond the Standard Model (SM) such as Supersymmetry.
The production of $W/Z$+jets via QCD constitutes a large background to these processes, and thus it is essential to understand these processes accurately. The $W$+jets cross sections were measured using $W\to e\nu$ events from 320 pb${}^{-1}$ of data for four inclusive jet multiplicities ($N_{jets}\ge1,2,3,4$), and compared to NLO pQCD predictions from MCFM [@MCFM] and LO matrix element (ME) + parton shower (PS) Monte Carlo predictions based on the CKKW [@CKKW] and MLM (as in Alpgen [@Alpgen]) matching schemes. The LO ME+PS predictions are systematically lower than the measured cross sections; however, all the predictions are in agreement with data in the cross section ratios $\sigma_{n}/\sigma_{n-1}$, where $\sigma_n=\sigma(W\to e\nu+\ge n\mbox{-jet}; E_{T,\,n{\mbox{\scriptsize -th jet}}}\ge25\mbox{ GeV})$. ![[*(left)*]{} Measured inclusive cross section for $Z$+jets production as a function of $p_T^{jet}$ compared to NLO pQCD predictions. [*(right)*]{} Measured cross section as a function of inclusive jet multiplicity compared to NLO pQCD predictions. \[fig:zjets\]](plots/zjets_Pt_jet12.eps){width="0.495\hsize"} The $Z$+jets cross sections were measured using $Z\to e^+e^-$ events from 1.7 fb${}^{-1}$ of data for jets in the kinematic region $p_T^{jet}>30$ GeV/[*c*]{} and $|y_{jet}|<2.1$. Fig. \[fig:zjets\] shows the measured differential inclusive jet cross sections as a function of $p_T^{jet}$ in $Z$+jets production for $N_{jet}\ge1$ and $N_{jet}\ge2$, and the total cross sections, $\sigma_{N_{jet}}$, for $Z$+jets events up to the $N_{jet}\ge3$ bin. Good agreement was observed between data and NLO pQCD predictions from MCFM [@MCFM] up to the $N_{jet}\ge2$ bin, where NLO predictions are available. The ratio of the data to the LO pQCD calculations indicates that the LO pQCD predictions underestimate the data by a factor of about 1.4, a factor that is constant over inclusive jet multiplicities up to the $N_{jet}\ge3$ bin.
CDF has recently updated the measurement of $Z+b$-jet production using 1.5 fb$^{-1}$ of data. The measurement was made using jets with $E_T>20$ GeV and $|\eta|<1.5$, tagged as $b$-jets by the secondary vertex algorithm, in $Z\to e^+e^-$ and $Z\to\mu^{+}\mu^{-}$ events, and the results are summarized in Table \[tab:zb\]. The measured $Z+b$-jet cross section and its fractions in the $Z$ events and $Z$+jets events are found to be somewhat higher than the NLO pQCD predictions, and the $Z+b$-jet fractions are in better agreement with predictions from Pythia Tune A. The differences between the NLO predictions and Pythia are being investigated.

  ----------------------------------------------- --------------------------- -------- --------- ----------------------
                                                  CDF Data                    Pythia   NLO       NLO+UE+Hadronization
  $\sigma(Z+b\mbox{-jet})$                        $0.94\pm0.15\pm0.15$ pb     n.a.     0.51 pb   0.56 pb
  $\sigma(Z+b\mbox{-jet})/\sigma(Z)$              $0.369\pm0.057\pm0.055$ %   0.35 %   0.21 %    0.23 %
  $\sigma(Z+b\mbox{-jet})/\sigma(Z+\mbox{jet})$   $2.35\pm0.36\pm0.45$ %      2.18 %   1.88 %    1.77 %
  ----------------------------------------------- --------------------------- -------- --------- ----------------------

  : Results on the $Z+b$-jet production. \[tab:zb\]

Summary ======= CDF has a broad program of jet physics, including the measurements of inclusive jet, dijet, $b\bar b$ dijet and boson+($b$-)jets production, and is making a significant impact on the understanding of jet production and QCD. These measurements provide tests of pQCD calculations and Monte Carlo event generators, and constraints on the proton PDFs. QCD processes are often important backgrounds to electroweak and possible new physics processes, and thus a better understanding of QCD processes will enhance the potential for new physics discoveries at the Tevatron and also at the upcoming LHC. [99]{} A. Abulencia [*et al.*]{}, Phys.
Rev. D [**75**]{}, 092006 (2007). R. Field, presented at the Fermilab ME/MC Tuning Workshop, October 4, 2002. J. M. Butterworth [*et al.*]{}, Z. Phys. C [**72**]{}, 637 (1996). S. Frixione and B. R. Webber, JHEP [**0206**]{}, 029 (2002). J. M. Campbell [*et al.*]{}, Phys. Rev. D [**69**]{}, 074021 (2004). S. Catani [*et al.*]{}, JHEP [**0111**]{}, 063 (2001); F. Krauss, JHEP [**0208**]{}, 015 (2002). M. L. Mangano [*et al.*]{}, JHEP [**0307**]{}, 001 (2003). [^1]: [hatakek@mail.rockefeller.edu]{}